
The Next Generation in Graphics, Part 2: Three Dimensions in Hardware

Most of the academic papers about 3D graphics that John Carmack so assiduously studied during the 1990s stemmed from, of all times and places, the Salt Lake City, Utah, of the 1970s. This state of affairs was a credit to one man by the name of Dave Evans.

Born in Salt Lake City in 1924, Evans was a physicist by training and an electrical engineer by inclination, who found his way to the highest rungs of computing research by way of the aviation industry. By the early 1960s, he was at the University of California, Berkeley, where he did important work in the field of time-sharing, taking the first step toward the democratization of computing by making it possible for multiple people to use one of the ultra-expensive big computers of the day at the same time, each of them accessing it through a separate dumb terminal. During this same period, Evans befriended one Ivan Sutherland, who deserves perhaps more than any other person the title of Father of Computer Graphics as we know them today.

For, in the course of earning his PhD at MIT, Sutherland developed a landmark software application known as Sketchpad, the first interactive computer-based drawing program of any stripe. Sketchpad did not do 3D graphics. It did, however, record its user’s drawings as points and lines on a two-dimensional plane. The potential for adding a third dimension to its Flatland-esque world — a Z coordinate to go along with X and Y — was lost on no one, least of all Sutherland himself. His 1963 thesis on Sketchpad rocketed him into the academic stratosphere.

Sketchpad in action.

In 1964, at the ripe old age of 26, Sutherland succeeded J.C.R. Licklider as head of the computer division of the Defense Department’s Advanced Research Projects Agency (ARPA), the most remarkable technology incubator in computing history. Alas, he proved ill-suited to the role of administrator: he was too young, too introverted — just too nerdy, as a later generation would have put it. But during the unhappy year he spent there before getting back to the pure research that was his real passion, he put the University of Utah on the computing map, largely as a favor to his friend Dave Evans.

Evans may have left Salt Lake City more than a decade earlier, but he remained a devout Mormon, who found the counterculture values of the Berkeley of the 1960s rather uncongenial. So, he had decided to take his old alma mater up on an offer to come home and build a computer-science department there. Sutherland now awarded said department a small ARPA contract, one fairly insignificant in itself. What was significant was that it brought the University of Utah into the ARPA club of elite research institutions that were otherwise clustered on the coasts. An early place on the ARPANET, the predecessor to the modern Internet, was not the least of the perks which would come its way as a result.

Evans looked for a niche for his university amidst the august company it was suddenly joining. The territory of time-sharing was pretty much staked out; extensive research in that field was already going full steam ahead at places like MIT and Berkeley. Ditto networking and artificial intelligence and the nuts and bolts of hardware design. Computer graphics, though… that was something else. There were smart minds here and there working on them — count Ivan Sutherland as Exhibit Number One — but no real research hubs dedicated to them. So, it was settled: computer graphics would become the University of Utah’s specialty. In what can only be described as a fantastic coup, in 1968 Evans convinced Sutherland himself to abandon the East Coast prestige of Harvard, where he had gone after leaving his post as the head of ARPA, in favor of the Mormon badlands of Utah.

Things just snowballed from there. Evans and Sutherland assembled around them an incredible constellation of bright young sparks, who over the course of the next decade defined the terms and mapped the geography of the field of 3D graphics as we still know it today, writing papers that remain as relevant today as they were half a century ago — or perchance more so, given the rise of 3D games. For example, the two most commonly used algorithms for calculating the vagaries of light and shade in 3D games stem directly from the University of Utah: Gouraud shading was invented by a Utah student named Henri Gouraud in 1971, while Phong shading was invented by another named Bui Tuong Phong in 1973.

But of course, lots of other students passed through the university without leaving so indelible a mark. One of these was Jim Clark, who would still be semi-anonymous today if he hadn’t gone on to become an entrepreneur who co-founded two of the most important tech companies of the late twentieth century.



When you’ve written as many capsule biographies as I have, you come to realize that the idea of the truly self-made person is for the most part a myth. Certainly almost all of the famous names in computing history were, long before any of their other qualities entered into the equation, lucky: lucky in their time and place of birth, in their familial circumstances, perhaps in (sad as it is to say) their race and gender, definitely in the opportunities that were offered to them. This isn’t to disparage their accomplishments; they did, after all, still need to have the vision to grasp the brass ring of opportunity and the talent to make the most of it. Suffice to say, then, that luck is a prerequisite but the farthest thing from a guarantee.

Every once in a while, however, I come across someone who really did almost literally make something out of nothing. One of these folks is Jim Clark. If today as a soon-to-be octogenarian he indulges as enthusiastically as any of his Old White Guy peers in the clichéd trappings of obscene wealth, from the mansions, yachts, cars, and wine to the Victoria’s Secret model he has taken for a fourth wife, he can at least credibly claim to have pulled himself up to his current station in life entirely by his own bootstraps.

Clark was born in 1944, in a place that made Salt Lake City seem like a cosmopolitan metropolis by comparison: the small Texas Panhandle town of Plainview. He grew up dirt poor, the son of a single mother living well below the poverty line. Nobody expected much of anything from him, and he obliged their lack of expectations. “I thought the whole world was shit and I was living in the middle of it,” he recalls.

An indifferent student at best, he was expelled from high school his junior year for telling a teacher to go to hell. At loose ends, he opted for the classic gambit of running away to sea: he joined the Navy at age seventeen. It was only when the Navy gave him a standardized math test, and he scored the highest in his group of recruits on it, that it began to dawn on him that he might actually be good at something. Encouraged by a few instructors to pursue his aptitude, he enrolled in correspondence courses to fill his free time when out plying the world’s oceans as a crewman on a destroyer.

Ten years later, in 1971, the high-school dropout, now six years out of the Navy and married with children, found himself working on a physics PhD at Louisiana State University. Clark:

I noticed in Physics Today an article that observed that physicists getting PhDs from places like Harvard, MIT, Yale, and so on didn’t like the jobs they were getting. And I thought, well, what am I doing — I’m getting a PhD in physics from Louisiana State University! And I kept thinking, well, I’m married, and I’ve got these obligations. By this time, I had a second child, so I was real eager to get a good job, and I just got discouraged about physics. And a friend of mine pointed to the University of Utah as having a computer-graphics specialty. I didn’t know much about it, but I was good with geometry and physics, which involves a lot of geometry.

So, Clark applied for a spot at the University of Utah and was accepted.

But, as I already implied, he didn’t become a star there. His 1974 thesis was entitled “3D Design of Free-Form B-Spline Surfaces”; it was a solid piece of work addressing a practical problem, but not anything to really get the juices flowing. Afterward, he spent half a decade bouncing around from campus to campus as an adjunct professor: the Universities of California at Santa Cruz and Berkeley, the New York Institute of Technology, Stanford. He was fairly miserable throughout. As an academic of no special note, he was hired primarily as an instructor rather than a researcher, and he wasn’t at all cut out for the job, being too impatient, too irascible. Proving the old adage that the child is the father of the man, he was fired from at least one post for insubordination, just like that angry teenager who had once told off his high-school teacher. Meanwhile he went through not one but two wives. “I was in this kind of downbeat funk,” he says. “Dark, dark, dark.”

It was now early 1979. At Stanford, Clark was working right next door to Xerox’s famed Palo Alto Research Center (PARC), which was inventing much of the modern paradigm of computing, from mice and menus to laser printers and local-area networking. Some of the colleagues Clark had known at the University of Utah were happily ensconced over there. But he was still on the outside looking in. It was infuriating — and yet he was about to find a way to make his mark at last.

Hardware engineering at the time was in the throes of a revolution and its backlash, over a technology that went by the mild-mannered name of “Very Large Scale Integration” (VLSI). The integrated circuit, which packed multiple transistors onto a single microchip, had been invented at Texas Instruments at the end of the 1950s, and had become a staple of computer design already during the following decade. Yet those early implementations often put only a relative handful of transistors on a chip, meaning that they still required lots of chips to accomplish anything useful. A turning point came in 1971 with the Intel 4004, the world’s first microprocessor — i.e., the first time that anyone put the entire brain of a computer on a single chip. Barely remarked at the time, that leap would result in the first kit computers being made available for home users in 1975, followed by the Trinity of 1977, the first three plug-em-in-and-go personal computers suitable for the home. Even then, though, there were many in the academic establishment who scoffed at the idea of VLSI, which required a new, in some ways uglier approach to designing circuitry. In a vivid illustration that being a visionary in some areas doesn’t preclude one from being a reactionary in others, many of the folks at PARC were among the scoffers. Look how far we’ve come doing things one way, they said. Why change?

A PARC researcher named Lynn Conway was enraged by such hidebound thinking. A rare female hardware engineer, she had made scant progress to date getting her point of view through to the old boys’ club that surrounded her at PARC. So, broadening her line of attack, she wrote a paper about the basic techniques of modern chip design, and sent it out to a dozen or so universities along with a tempting offer: if any students or faculty wished to draw up schematics for a chip of their own and send them to her, she would arrange to have the chip fabricated in real silicon and sent back to its proud parent. The point of it all was just to get people to see the potential of VLSI, not to push forward the state of the art. And indeed, just as she had expected, almost all of the designs she received were trivially simple by the standards of even the microchip industry of 1979: digital timekeepers, adding machines, and the like. But one was unexpectedly, even crazily complex. Alone among the submissions, it bore a precautionary notice of copyright, from one James Clark. He called his creation the Geometry Engine.

The Geometry Engine was the first and, it seems likely, only microchip that Jim Clark ever personally attempted to design in his life. It was created in response to a fundamental problem that had been vexing 3D modelers since the very beginning: that 3D graphics required shocking quantities of mathematical calculations to bring to life, scaling almost exponentially with the complexity of the scene to be depicted. And worse, the type of math they required was not the type that the researchers’ computers were especially good at.

Wait a moment, some of you might be saying. Isn’t math the very thing that computers do? It’s right there in the name: they compute things. Well, yes, but not all types of math are created equal. Modern computers are also digital devices, meaning they are naturally equipped to deal only with discrete things. Like the game of DOOM, theirs is a universe of stair steps rather than smooth slopes. They like integer numbers, not decimals. Even in the 1960s and 1970s, they could approximate the latter through a storage format known as floating point, but they dealt with these floating-point numbers at least an order of magnitude slower than they did whole numbers, as well as requiring a lot more memory to store them. For this reason, programmers avoided them whenever possible.

And it actually was possible to do so a surprisingly large amount of the time. Most of what computers were commonly used for could be accomplished using only whole numbers — for example, by using Euclidean division that yields a quotient and a remainder in place of decimal division. Even financial software could be built using only integers, counting totals in whole cents rather than representing dollars and cents with floating-point values. 3D-graphics software, however, was one place where you just couldn’t get around them. Creating a reasonably accurate mathematical representation of an analog 3D space forced you to use floating-point numbers. And this in turn made 3D graphics slow.
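
To make the contrast concrete, here is a minimal sketch in C (my own illustration, not code from any real program of the period) of bookkeeping done entirely in integer cents, with quotient-and-remainder division used only when it is time to display dollars and cents:

#include <stdio.h>

/* Track money as a whole number of cents; no floating point required. */
typedef long cents_t;

static void print_amount(cents_t total_cents)
{
    /* Euclidean division: the quotient is dollars, the remainder is cents.
       (Assumes a non-negative balance, to keep the formatting simple.) */
    long dollars = total_cents / 100;
    long cents   = total_cents % 100;
    printf("$%ld.%02ld\n", dollars, cents);
}

int main(void)
{
    cents_t balance = 0;
    balance += 1999;        /* deposit  $19.99 */
    balance += 550;         /* deposit   $5.50 */
    balance -= 325;         /* withdraw  $3.25 */
    print_amount(balance);  /* prints $22.24   */
    return 0;
}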

Jim Clark certainly wasn’t the first person to think about designing a specialized piece of hardware to lift some of the burden from general-purpose computer designs, an add-on optimized for doing the sorts of mathematical operations that 3D graphics required and nothing else. Various gadgets along these lines had been built already, starting a decade or more before his Geometry Engine. Clark was the first, however, to think of packing it all onto a single chip — or at worst a small collection of them — that could live on a microcomputer’s motherboard or on a card mounted in a slot, that could be mass-produced and sold in the thousands or millions. His description of his “slave processor” sounded disarmingly modest (not, it must be said, a quality for which Clark is typically noted): “It is a four-component vector, floating-point processor for accomplishing three basic operations in computer graphics: matrix transformations, clipping, and mapping to output-device coordinates [i.e., going from an analog world space to pixels in a digital raster].” Yet it was a truly revolutionary idea, the genesis of the graphics processing units (GPUs) of today, which are in some ways more technically complex than the CPUs they serve. The Geometry Engine still needed to use floating-point numbers — it was, after all, still a digital device — but the old engineering doctrine that specialization yields efficiency came into play: it was optimized to do only floating-point calculations, and only a tiny subset of all the ones possible at that, just as quickly as it could.
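
To give a sense of the arithmetic Clark was describing, here is a minimal sketch in C of the first of those three operations, a four-by-four matrix applied to a four-component vector. It illustrates the kind of calculation the Geometry Engine hard-wired into silicon, not the design of the chip itself:

#include <stdio.h>

/* A four-component homogeneous vector and a 4-by-4 transformation matrix:
   the basic currency of the "matrix transformation" step. */
typedef struct { float x, y, z, w; } vec4;
typedef struct { float m[4][4]; } mat4;

/* Multiply a vector by a matrix: sixteen multiplications and twelve
   additions, all in floating point, for every single point in a scene. */
static vec4 transform(const mat4 *m, vec4 v)
{
    vec4 r;
    r.x = m->m[0][0]*v.x + m->m[0][1]*v.y + m->m[0][2]*v.z + m->m[0][3]*v.w;
    r.y = m->m[1][0]*v.x + m->m[1][1]*v.y + m->m[1][2]*v.z + m->m[1][3]*v.w;
    r.z = m->m[2][0]*v.x + m->m[2][1]*v.y + m->m[2][2]*v.z + m->m[2][3]*v.w;
    r.w = m->m[3][0]*v.x + m->m[3][1]*v.y + m->m[3][2]*v.z + m->m[3][3]*v.w;
    return r;
}

int main(void)
{
    /* An identity matrix with a translation of two units along X in its
       last column; the fourth vector component is what makes this work. */
    mat4 move = {{ {1, 0, 0, 2},
                   {0, 1, 0, 0},
                   {0, 0, 1, 0},
                   {0, 0, 0, 1} }};
    vec4 p = { 1.0f, 2.0f, 3.0f, 1.0f };
    vec4 q = transform(&move, p);
    printf("(%.1f, %.1f, %.1f, %.1f)\n", q.x, q.y, q.z, q.w);  /* (3.0, 2.0, 3.0, 1.0) */
    return 0;
}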

The Geometry Engine changed Clark’s life. At last, he had something exciting and uniquely his. “All of these people started coming up and wanting to be part of my project,” he remembers. Always an awkward fit in academia, he turned his thinking in a different direction, adopting the mindset of an entrepreneur. “He reinvented his relationship to the world in a way that is considered normal only in California,” writes journalist Michael Lewis in a book about Clark. “No one who had been in his life to that point would be in it ten years later. His wife, his friends, his colleagues, even his casual acquaintances — they’d all be new.” Clark himself wouldn’t hesitate to blast his former profession in later years with all the fury of a professor scorned.

I love the metric of business. It’s money. It’s real simple. You either make money or you don’t. The metric of the university is politics. Does that person like you? Do all these people like you enough to say, “Yeah, he’s worthy?”

But by whatever metric, success didn’t come easy. The Geometry Engine and all it entailed proved a harder sell with the movers and shakers in commercial computing than it had with his colleagues at Stanford. It wasn’t until 1982 that he was able to scrape together the funding to found a company called Silicon Graphics, Incorporated (SGI), and even then he was forced to give 85 percent of his company’s shares to others in order to make it a reality. Then it took another two years after that to actually ship the first hardware.

The market segment SGI was targeting is one that no longer really exists. The machines it made were technically microcomputers, being built around microprocessors, but they were not intended for the homes of ordinary consumers, nor even for the cubicles of ordinary office workers. These were much higher-end, more expensive machines than those, even if they could fit under a desk like one of them. They were called workstation computers. The typical customer spent tens or hundreds of thousands of dollars on them in the service of some highly demanding task or another.

In the case of the SGI machines, of course, that task was almost always related to graphics, usually 3D graphics. Their expense wasn’t bound up with their CPUs; in the beginning, these were fairly plebeian chips from the Motorola 68000 series, the same line used in such consumer-grade personal computers as the Apple Macintosh and the Commodore Amiga. No, the justification of their high price tags rather lay with their custom GPUs, which even in 1984 already went far beyond the likes of Clark’s old Geometry Engine. An SGI GPU was a sort of black box for 3D graphics: feed it all of the data that constituted a scene on one side, and watch a glorious visual representation emerge at the other, thanks to an array of specialized circuitry designed for that purpose and no other.

Now that it had finally gotten off the ground, SGI became very successful very quickly. Its machines were widely used in staple 3D applications like computer-aided industrial design (CAD) and flight simulation, whilst also opening up new vistas in video and film production. They drove the shift in Hollywood from special effects made using miniature models and stop-motion techniques dating back to the era of King Kong to the extensive use of computer-generated imagery (CGI) that we see even in the purportedly live-action films of today. (Steven Spielberg and George Lucas were among SGI’s first and best customers.) “When a moviegoer rubbed his eyes and said, ‘What’ll they think of next?’,” writes Michael Lewis, “it was usually because SGI had upgraded its machines.”

The company peaked in the early 1990s, when its graphics workstations were the key to CGI-driven blockbusters like Terminator 2 and Jurassic Park. Never mind the names that flashed by in the opening credits; everyone could agree that the computer-generated dinosaurs were the real stars of Jurassic Park. SGI was bringing in over $3 billion in annual revenue and had close to 15,000 employees by 1993, the year that movie was released. That same year, President Bill Clinton and Vice President Al Gore came out personally to SGI’s offices in Silicon Valley to celebrate this American success story.

SGI’s hardware subsystem for graphics, the beating heart of its business model, was known in 1993 as the RealityEngine2. This latest GPU was, wrote Byte magazine in a contemporary article, “richly parallel,” meaning that it could do many calculations simultaneously, in contrast to a traditional CPU, which could only execute one instruction at a time. (Such parallelism is the reason that modern GPUs are so often used for some math-intensive non-graphical applications, such as crypto-currency mining and machine learning.) To support this black box and deliver to its well-heeled customers a complete turnkey solution for all their graphics needs, SGI had also spearheaded an open programming interface for 3D applications, known as the Open Graphics Library, or OpenGL. Even the CPUs in its latest machines were SGI’s own; it had purchased a maker of same called MIPS Technologies in 1992.

But all of this success did not imply a harmonious corporation. Jim Clark was convinced that he had been hard done by back in 1982, when he was forced to give up 85 percent of his brainchild in order to secure the funding he needed, then screwed over again when he was compelled by his board to give up the CEO post to a former Hewlett Packard executive named Ed McCracken in 1984. The two men had been at vicious loggerheads for years; Clark, who could be downright mean when the mood struck him, reduced McCracken to public tears on at least one occasion. At one memorable corporate retreat intended to repair the toxic atmosphere in the board room, recalls Clark, “the psychologist determined that everyone else on the executive committee was passive aggressive. I was just aggressive.”

Clark claims that the most substantive bone of contention was McCracken’s blasé indifference to the so-called low-end market, meaning all of those non-workstation-class personal computers that were proliferating in the millions during the 1980s and early 1990s. If SGI’s machines were advancing by leaps and bounds, these consumer-grade computers were hopscotching on a rocket. “You could see a time when the PC would be able to do the sort of graphics that [our] machines did,” says Clark. But McCracken, for one, couldn’t see it, was content to live fat and happy off of the high prices and high profit margins of SGI’s current machines.

He did authorize some experiments at the lower end, but his heart was never in it. In 1990, SGI deigned to put a limited subset of its workstation-graphics smorgasbord onto an add-on card for Intel-based personal computers. Calling it IrisVision, the company hopefully talked up its price of “under $5000,” which really was absurdly low by its usual standards. What with its complete lack of software support and its way-too-high price for this marketplace, IrisVision went nowhere, whereupon McCracken took the failure as a vindication of his position. “This is a low-margin business, and we’re a high-margin company, so we’re going to stop doing that,” he said.

Despite McCracken’s indifference, Clark eventually managed to broker a deal with Nintendo to make a MIPS microprocessor and an SGI GPU the heart of the latter’s Nintendo 64 videogame console. But he quit after yet another shouting match with McCracken in 1994, two years before it hit the street.

He had been right all along about the inevitable course of the industry, however undiplomatically he may have stated his case over the years. Personal computers did indeed start to swallow the workstation market almost at the exact point in time that Clark bailed. The profits from the Nintendo deal were rich, but they were largely erased by another of McCracken’s pet projects, an ill-advised acquisition of the struggling supercomputer maker Cray. Meanwhile, with McCracken so obviously more interested in selling a handful of supercomputers for millions of dollars each than millions upon millions of consoles for a few hundred dollars each, a group of frustrated SGI employees left the company to help Nintendo make the GameCube, the followup to the Nintendo 64, on their own. It was all downhill for SGI after that, bottoming out in a 2009 bankruptcy and liquidation.

As for Clark, he would go on to a second entrepreneurial act as remarkable as his first, abandoning 3D graphics to make a World Wide Web browser with Marc Andreessen. We will say farewell to him here, but you can read the story of his second company Netscape’s meteoric rise and fall elsewhere on this site.



Now, though, I’d like to return to the scene of SGI’s glory days, introducing in the process three new starring players. Gary Tarolli and Scott Sellers were talented young engineers who were recruited to SGI in the 1980s; Ross Smith was a marketing and business-development type who initially worked for MIPS Technologies, then ended up at SGI when it acquired that company in 1992. The three became fast friends. Being of a younger generation, they didn’t share the contempt for everyday personal computers that dominated among their company’s upper management. Whereas the latter laughed at the primitiveness of games like Wolfenstein 3D and Ultima Underworld, if they bothered to notice them at all, our trio saw a brewing revolution in gaming, and thought about how much it could be helped along by hardware-accelerated 3D graphics.

Convinced that there was a huge opportunity here, they begged their managers to get into the gaming space. But, still smarting from the recent failure of IrisVision, McCracken and his cronies rejected their pleas out of hand. (One of the small mysteries in this story is why their efforts never came to the attention of Jim Clark, why an alliance was never formed. The likely answer is that Clark had, by his own admission, largely removed himself from the day-to-day running of SGI by this time, being more commonly seen on his boat than in his office.) At last, Tarolli, Sellers, Smith, and some like-minded colleagues ran another offer up the flagpole. You aren’t doing anything with IrisVision, they said. Let us form a spinoff company of our own to try to sell it. And much to their own astonishment, this time management agreed.

They decided to call their new company Pellucid — not the best name in the world, sounding as it did rather like a medicine of some sort, but then they were still green at all this. The technology they had to peddle was a couple of years old, but it still blew just about anything else in the MS-DOS/Windows space out of the water, being able to display 16 million colors at a resolution of 1024 X 768, with 3D acceleration built-in. (Contrast this with the SVGA card found in the typical home computer of the time, which could do 256 colors at 640 X 480, with no 3D affordances). Pellucid rebranded the old IrisVision the ProGraphics 1024. Thanks to the relentless march of chip-fabrication technology, they found that they could now manufacture it cheaply enough to be able to sell it for as little as $1000 — still pricey, to be sure, but a price that some hardcore gamers, as well as others with a strong interest in having the best graphics possible, might just be willing to pay.

The problem, the folks at Pellucid soon came to realize, was a well-nigh intractable deadlock between the chicken and the egg. Without software written to take advantage of its more advanced capabilities, the ProGraphics 1024 was just another SVGA graphics card, selling for a ridiculously high price. So, consumers waited for said software to arrive. Meanwhile software developers, seeing the as-yet non-existent installed base, saw no reason to begin supporting the card. Breaking this logjam would require a concentrated public-relations and developer-outreach effort, the likes of which the shoestring spinoff couldn’t possibly afford.

They thought they had done an end-run around the problem in May of 1993, when they agreed, with the blessing of SGI, to sell Pellucid kit and caboodle to a major up-and-comer in consumer computing known as Media Vision, which at the time sold “multimedia upgrade kits” consisting of CD-ROM drives and sound cards. But Media Vision’s ambitions knew no bounds: they intended to branch out into many other kinds of hardware and software. With proven people like Stan Cornyn, a legendary hit-maker from the music industry, on their management rolls and with millions and millions of dollars on hand to fund their efforts, Media Vision looked poised to dominate.

It seemed the perfect landing place for Pellucid; Media Vision had all the enthusiasm for the consumer market that SGI had lacked. The new parent company’s management said, correctly, that the ProGraphics 1024 was too old by now and too expensive to ever become a volume product, but that 3D acceleration’s time would come as soon as the current wave of excitement over CD-ROM and multimedia began to ebb and people started looking for the next big thing. When that happened, Media Vision would be there with a newer, more reasonably priced 3D card, thanks to the people who had once called themselves Pellucid. It sounded pretty good, even if in the here and now it did seem to entail more waiting around than anything else.

The ProGraphics 1024 board in Media Vision livery.

There was just one stumbling block: “Media Vision was run by crooks,” as Scott Sellers puts it. In April of 1994, a scandal erupted in the business pages of the nation’s newspapers. It turned out that Media Vision had been an experiment in “fake it until you make it” on a gigantic scale. Its founders had engaged in just about every form of malfeasance imaginable, creating a financial house of cards whose honest revenues were a minuscule fraction of what everyone had assumed them to be. By mid-summer, the company had blown away like so much dust in the wind, still providing income only for the lawyers who were left to pick over the corpse. (At least two people would eventually be sent to prison for their roles in the conspiracy.) The former Pellucid folks were left as high and dry as everyone else who had gotten into bed with Media Vision. All of their efforts to date had led to the sale of no more than 2000 graphics cards.

That same summer of 1994, a prominent Silicon Valley figure named Gordon Campbell was looking for interesting projects in which to invest. Campbell had earned his reputation as one of the Valley’s wise men through a company called Chips and Technologies (C&T), which he had co-founded in 1984. One of those hidden movers in the computer industry, C&T had largely invented the concept of the chipset: chips or small collections of them that could be integrated directly into a computer’s motherboard to perform functions that used to be placed on add-on cards. C&T had first made a name for itself by reducing IBM’s bulky nineteen-chip EGA graphics card to just four chips that were cheaper to make and consumed less power. Campbell’s firm thrived alongside the cost-conscious PC clone industry, which by the beginning of the 1990s was rendering IBM itself, the very company whose products it had once so unabashedly copied, all but irrelevant. Onboard video, onboard sound, disk controllers, basic firmware… you name it, C&T had a cheap, good-enough-for-the-average-consumer chipset to handle it.

But now Campbell had left C&T “in pursuit of new opportunities,” as they say in Valley speak. Looking for a marketing person for one of the startups in which he had taken a stake, he interviewed a young man named Ross Smith who had SGI on his résumé — always a plus. But the interview didn’t go well. Campbell:

It was the worst interview I think I’ve ever had. And so finally, I just turned to him and I said, “Okay, your heart’s not in this interview. What do you really want to do?”

And he kind of looks surprised and says, well, there are these two other guys, and we want to start a 3D-graphics company. And the next thing I know, we had set up a meeting. And we had, over a lot of beers, a discussion which led these guys to all come and work at my office. And that set up the start of 3Dfx.

It seemed to all of them that, after all of the delays and blind alleys, it truly was now or never to make a mark. For hardware-accelerated 3D graphics were already beginning to trickle down into the consumer space. In standup arcades, games like Daytona USA and Virtua Fighter were using rudimentary GPUs. Ditto the Sega Saturn and the Sony PlayStation, the latest in home-videogame consoles, both of which were on the verge of release in Japan, with American debuts expected in 1995. Meanwhile the software-only, 2.5D graphics of DOOM were taking the world of hardcore computer gamers by storm. The men behind 3Dfx felt that the next move must surely seem obvious to many other people besides themselves. The only reason the masses of computer-game players and developers weren’t clamoring for 3D graphics cards already was that they didn’t yet realize what such gadgets could do for them.

Still, they were all wary of getting back into the add-on board market, where they had been burned so badly before. Selling products directly to consumers required retail access and marketing muscle that they still lacked. Instead, following in the footsteps of C&T, they decided to sell a 3D chipset only to other companies, who could then build it into add-on boards for personal computers, standup-arcade machines, whatever they wished.

At the same time, though, they wanted their technology to be known, in exactly the way that the anonymous chipsets made by C&T were not. In the pursuit of this aspiration, Gordon Campbell found inspiration from another company that had become a household name despite selling very little directly to consumers. Intel had launched the “Intel Inside” campaign in 1990, just as the era of the PC clone was giving way to a more amorphous commodity architecture. The company introduced a requirement that the makers of computers which used its CPUs include the Intel Inside logo on their packaging and on the cases of the computers themselves, even as it made the same logo the centerpiece of a standalone advertising campaign in print and on television. The effort paid off; Intel became almost as identified with the Second Home Computer Revolution in the minds of consumers as was Microsoft, whose own logo showed up on their screens every time they booted into Windows. People took to calling the emerging duopoly the “Wintel” juggernaut, a name which has stuck around to this day.

So, it was decided: a requirement to display a similarly snazzy 3Dfx logo would be written into that company’s contracts as well. The 3Dfx name itself was a vast improvement over Pellucid. As time went on, 3Dfx would continue to display a near-genius for catchy branding: “Voodoo” for the chipset itself, “GLide” for the software library that controlled it. All of this reflected a business savvy the likes of which had never been seen at Pellucid, a credit both to Campbell’s steady hand and to the accumulating experience of the other three partners.

But none of it would have mattered without the right product. Campbell told his trio of protégés in no uncertain terms that they were never going to make a dent in computer gaming with a $1000 video card; they needed to get the price down to a third of that at the most, which meant the chipset itself could cost the manufacturers who used it in their products not much more than $100 a pop. That was a tall order, especially considering that gamers’ expectations of graphical fidelity weren’t diminishing. On the contrary: the old Pellucid card hadn’t even been able to do 3D texture mapping, a failing that gamers would never accept post-DOOM.

It was left to Gary Tarolli and Scott Sellers to figure out what absolutely had to be in there, such as the aforementioned texture mapping, and what they could get away with tossing overboard. Driven by the remorseless logic of chip-fabrication costs, they wound up going much farther with the tossing than they ever could have imagined when they started out. There could be no talk of 24-bit color or unusually high resolutions: 16-bit color (offering a little over 65,000 onscreen shades) at a resolution of 640 X 480 would be the limit.[1] Likewise, they threw out the capability of handling any polygons except for the simplest of them all, the humble triangle. For, they realized, you could make almost any solid you liked by combining triangular surfaces together. With enough triangles in your world — and their chipset would let you have up to 1 million of them — you needn’t lament the absence of the other polygons all that much.
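
A rough sketch in C of those two economies, using invented names rather than anything from the real Voodoo interface: one common 16-bit pixel layout yields the roughly 65,000 shades mentioned above, and a flat quadrilateral can be handed to a triangles-only rasterizer as two triangles sharing an edge:

#include <stdio.h>

/* One common 16-bit "high color" layout: 5 bits of red, 6 of green, and
   5 of blue, for 65,536 possible shades in all. */
typedef unsigned short pixel16;

static pixel16 pack_rgb565(unsigned r, unsigned g, unsigned b)
{
    return (pixel16)(((r & 0x1F) << 11) | ((g & 0x3F) << 5) | (b & 0x1F));
}

/* Triangles only: a flat, convex quadrilateral ABCD becomes the triangles
   ABC and ACD, and larger polygons can be fanned out the same way. */
typedef struct { float x, y, z; } vertex;
typedef struct { vertex a, b, c; } triangle;

static void quad_to_triangles(vertex a, vertex b, vertex c, vertex d,
                              triangle out[2])
{
    out[0].a = a; out[0].b = b; out[0].c = c;
    out[1].a = a; out[1].b = c; out[1].c = d;
}

int main(void)
{
    vertex a = {0, 0, 0}, b = {1, 0, 0}, c = {1, 1, 0}, d = {0, 1, 0};
    triangle tris[2];
    quad_to_triangles(a, b, c, d, tris);
    printf("white in 16-bit color: 0x%04X\n", pack_rgb565(31, 63, 31));
    printf("second triangle ends at (%.0f, %.0f, %.0f)\n",
           tris[1].c.x, tris[1].c.y, tris[1].c.z);
    return 0;
}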

Sellers had another epiphany soon after. Intel’s latest CPU, to which gamers were quickly migrating, was the Pentium. It had a built-in floating-point co-processor which was… not too shabby, actually. It should therefore be possible to take the first phase of the 3D-graphics pipeline — the modeling phase — out of the GPU entirely and just let the CPU handle it. And so another crucial decision was made: they would concern themselves only with the rendering or rasterization phase, which was a much greater challenge to tackle in software alone, even with a Pentium. Another huge piece of the puzzle was thus neatly excised — or rather outsourced back to the place where it was already being done in current games. This would have been heresy at SGI, whose ethic had always been to do it all in the GPU. But then, they were no longer at SGI, were they?
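
In outline, the division of labor looked something like the following sketch, a conceptual illustration only, with names of my own invention rather than anything from the real GLide interface. The Pentium’s floating-point unit transforms and projects the geometry, and only finished screen-space triangles get handed on to the card:

#include <stdio.h>

typedef struct { float x, y, z; } vec3;       /* a point in camera space   */
typedef struct { float sx, sy, z; } screenv;  /* a projected screen vertex */

/* Done on the Pentium: perspective-project a camera-space point onto a
   640 X 480 screen. (Assumes the point lies in front of the camera, z > 0.) */
static screenv cpu_project(vec3 v)
{
    screenv s;
    float inv_z = 1.0f / v.z;
    s.sx = 320.0f + 320.0f * v.x * inv_z;  /* scale and center horizontally */
    s.sy = 240.0f - 240.0f * v.y * inv_z;  /* flip and center vertically    */
    s.z  = v.z;                            /* kept for the card's Z-buffer  */
    return s;
}

/* Done by the card in the real design; here just a stand-in that reports
   the screen-space triangle it would have filled in. */
static void card_draw_triangle(screenv a, screenv b, screenv c)
{
    printf("fill triangle (%.0f,%.0f) (%.0f,%.0f) (%.0f,%.0f)\n",
           a.sx, a.sy, b.sx, b.sy, c.sx, c.sy);
}

int main(void)
{
    vec3 tri[3] = { {-1.0f, -1.0f, 4.0f}, {1.0f, -1.0f, 4.0f}, {0.0f, 1.0f, 2.0f} };
    card_draw_triangle(cpu_project(tri[0]), cpu_project(tri[1]), cpu_project(tri[2]));
    return 0;
}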

Undoubtedly their bravest decision of all was to throw out any and all 2D-graphics capabilities — i.e., the neat rasters of pixels used to display Windows desktops and word processors and all of those earlier, less exciting games. Makers of Voodoo boards would have to include a cable to connect the existing, everyday graphics cards inside their customers’ machines to their new 3D ones. When you ran non-3D applications, the Voodoo card would simply pass the video signal on to the monitor unchanged. But when you fired up a 3D game, it would take over from the other board. A relay inside made a distinctly audible click when this happened. Far from considering the noise a bug, gamers would soon come to regard it as a feature. “Because you knew it was time to have fun,” as Ross Smith puts it.

It was a radical plan, to be sure. These new cards would be useful only for games, would have no other purpose whatsoever; there would be no justifying this hardware purchase to the parents or the spouse with talk of productivity or educational applications. Nevertheless, the cost savings seemed worth it. After all, almost everyone who initially went out to buy the new cards would already have a perfectly good 2D video card in their computer. Why make them pay extra to duplicate those functions?

The final design used just two custom chips. One of them, internally known as the T-Rex (Jurassic Park was still in the air), was dedicated exclusively to the texture mapping that had been so conspicuously missing from the Pellucid board. Another, called the FBI (“Frame Buffer Interface”), did everything else required in the rendering phase. Add to this pair a few less exciting off-the-shelf chips and four megabytes worth of RAM chips, put it on a board with the appropriate connectors, and you had yourself a 3Dfx Voodoo GPU.

Needless to say, getting this far took some time. Tarolli, Sellers, and Smith spent the last half of 1994 camped out in Campbell’s office, deciding what they wanted to do and how they wanted to do it and securing the funding they needed to make it happen. Then they spent all of 1995 in offices of their own, hiring about a dozen people to help them, praying all the time that no other killer product would emerge to make all of their efforts moot. While they worked, the Sega Saturn and Sony PlayStation did indeed arrive on American shores, becoming the first gaming devices equipped with 3D GPUs to reach American homes in quantity. The 3Dfx crew were not overly impressed by either console — and yet they found the public’s warm reception of the PlayStation in particular oddly encouraging. “That showed, at a very rudimentary level, what could be done with 3D graphics with very crude texture mapping,” says Scott Sellers. “And it was pretty abysmal quality. But the consumers were just eating it up.”

They got their first finished chipsets back from their Taiwanese fabricator at the end of January 1996, then spent Super Bowl weekend soldering them into place and testing them. There were a few teething problems, but in the end everything came together as expected. They had their 3D chipset, at the beginning of a year destined to be dominated by the likes of Duke Nukem 3D and Quake. It seemed the perfect product for a time when gamers couldn’t get enough 3D mayhem. “If it had been a couple of years earlier,” says Gary Tarolli, “it would have been too early. If it had been a couple of years later, it would have been too late.” As it was, they were ready to go at the Goldilocks moment. Now they just had to sell their chipset to gamers — which meant they first had to sell it to game developers and board makers.






(Sources: the books The Dream Machine by M. Mitchell Waldrop, Dealers of Lightning: Xerox PARC and the Dawn of the Computer Age by Michael A. Hiltzik, and The New New Thing: A Silicon Valley Story by Michael Lewis; Byte of May 1992 and November 1993; InfoWorld of April 22 1991 and May 31 1993; Next Generation of October 1997; ACM’s Computer Graphics journal of July 1982; Wired of January 1994 and October 1994. Online sources include the Computer History Museum’s “oral histories” with Jim Clark, Forest Baskett, and the founders of 3Dfx; Wayne Carlson’s “Critical History of Computer Graphics and Animation”; “Fall of Voodoo” by Ernie Smith at Tedium; Fabian Sanglard’s reconstruction of the workings of the Voodoo 1 chips; “Famous Graphics Chips: 3Dfx’s Voodoo” by Dr. Jon Peddie at the IEEE Computer Society’s site; an internal technical description of the Voodoo technology archived at bitsavers.org.)

Footnotes

1 A resolution of 800 X 600 was technically possible using the Voodoo chipset, but using this resolution meant that the programmer could not use a vital affordance known as Z-buffering. For this reason, it was almost never seen in the wild.


The Next Generation in Graphics, Part 1: Three Dimensions in Software (or, Quake and Its Discontents)

“Mathematics,” wrote the historian of science Carl Benjamin Boyer many years ago, “is as much an aspect of culture as it is a collection of algorithms.” The same might be said about the mathematical algorithms we choose to prioritize — especially in these modern times, when the right set of formulas can be worth many millions of dollars, can be trade secrets as jealously guarded as the recipes for Coca-Cola or McDonald’s Special Sauce.

We can learn much about the tech zeitgeist from those algorithms the conventional wisdom thinks are most valuable. At the very beginning of the 1990s, when “multimedia” was the buzzword of the age and the future of games was believed to lie with “interactive movies” made out of video clips of real actors, the race was on to develop video codecs: libraries of code able to digitize footage from the analog world and compress it to a fraction of its natural size, thereby making it possible to fit a reasonable quantity of it on CDs and hard drives. This was a period when Apple’s QuickTime was regarded as a killer app in itself, when Philips’s ill-fated CD-i console could be delayed for years by the lack of a way to get video to its screen quickly and attractively.

It is a rule in almost all kinds of engineering that, the more specialized a device is, the more efficiently it can perform the tasks that lie within its limited sphere. This rule holds true as much in computing as anywhere else. So, when software proved able to stretch only so far in the face of the limited general-purpose computing power of the day, some started to build their video codecs into specialized hardware add-ons.

Just a few years later, after the zeitgeist in games had shifted, the whole process repeated itself in a different context.

By the middle years of the decade, with the limitations of working with canned video clips becoming all too plain, interactive movies were beginning to look like a severe case of the emperor’s new clothes. The games industry therefore shifted its hopeful gaze to another approach, one that would prove a much more lasting transformation in the way games were made. This 3D Revolution did have one point of similarity with the mooted and then abandoned meeting of Silicon Valley and Hollywood: it too was driven by algorithms, implemented first in software and then in hardware.

It was different, however, in that the entire industry looked to one man to lead it into its algorithmic 3D future. That man’s name was John Carmack.



Whether they happen to be pixel art hand-drawn by human artists or video footage captured by cameras, 2D graphics already exist on disk before they appear on the monitor screen. And therein lies the source of their limitations. Clever programmers can manipulate them to some extent — pixel art generally more so than digitized video — but the possibilities are bounded by the fundamentally static nature of the source material. 3D graphics, however, are literally drawn by the computer. They can go anywhere and do just about anything. For, while 2D graphics are stored as a concrete grid of pixels, 3D graphics are described using only the abstract language of mathematics — a language able to describe not just a scene but an entire world, assuming you have a powerful enough computer running a good enough algorithm.

Like so many things that get really complicated really quickly, the basic concepts of 3D graphics are disarmingly simple. The process behind them can be divided into two phases: the modeling phase and the rendering, or rasterization, phase.

It all begins with simple two-dimensional shapes of the sort we all remember from middle-school geometry, each defined as a collection of points on a plane and straight lines connecting them together. By combining and arranging these two-dimensional shapes, or surfaces, together in three-dimensional space, we can make solids — or, in the language of computerized 3D graphics, objects.

Here we see how 3D objects can be made ever more complex by building them out of ever more surfaces. The trade-off is that more complex objects require more computing power to render in a timely fashion.
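
A minimal sketch in C (my own illustration, using triangles for the surfaces, as most engines do) of how such an object might be stored: a list of shared corner points plus a list of surfaces that refer to those corners by index. The more surfaces in the list, the smoother the object, and the more work each frame:

#include <stdio.h>

/* A point in the object's own coordinate space. */
typedef struct { float x, y, z; } point3;

/* A triangular surface, stored as three indices into the vertex list so
   that neighboring surfaces can share corner points. */
typedef struct { int v0, v1, v2; } face;

/* A 3D object: its corner points plus the surfaces that connect them. */
typedef struct {
    const point3 *vertices;
    const face   *faces;
    int           vertex_count;
    int           face_count;
} object3d;

/* A square pyramid built from five corners and six triangular surfaces
   (the square base counts as two triangles). */
static const point3 pyramid_vertices[5] = {
    { -1, 0, -1 }, { 1, 0, -1 }, { 1, 0, 1 }, { -1, 0, 1 }, { 0, 2, 0 }
};
static const face pyramid_faces[6] = {
    { 0, 1, 4 }, { 1, 2, 4 }, { 2, 3, 4 }, { 3, 0, 4 },  /* the four sides */
    { 0, 1, 2 }, { 0, 2, 3 }                             /* the base       */
};
static const object3d pyramid = { pyramid_vertices, pyramid_faces, 5, 6 };

int main(void)
{
    printf("pyramid: %d corner points, %d triangular surfaces\n",
           pyramid.vertex_count, pyramid.face_count);
    return 0;
}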

Once we have a collection of objects, we can put them into a world space, wherever we like and at whatever angle of orientation we like. This world space is laid out as a three-dimensional grid, with its point of origin — i.e., the point where X, Y, and Z are all zero — wherever we wish it to be. In addition to our objects, we also place within it a camera — or, if you like, an observer in our world — at whatever position and angle of orientation we wish. At their simplest, 3D graphics require nothing more at the modeling phase.

We sometimes call the second phase the “rasterization” phase in reference to the orderly two-dimensional grid of pixels which make up the image seen on a monitor screen, which in computer-science parlance is known as a raster. The whole point of this rasterization phase, then, is to make our computer’s monitor a window into our imaginary world from the point of view of our imaginary camera. This entails converting said world’s three dimensions back into our two-dimensional raster of pixels, using the rules of perspective that have been understood by human artists since the Renaissance.

We can think of rasterizing as observing a scene through a window screen. Each square in the mesh is one pixel, which can be exactly one color. The whole process of 3D rendering ultimately comes down to figuring out what each of those colors should be.
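
Here is a toy version of that idea in C, my own illustration rather than anything from a real engine: it walks over every pixel of a tiny text-mode raster and decides whether the center of each one falls inside a single, already-projected triangle, “coloring” it with a # character if so:

#include <stdio.h>

#define W 40  /* raster width in pixels (tiny, for demonstration) */
#define H 20  /* raster height in pixels                          */

/* Twice the signed area of the triangle (a, b, p): its sign says which side
   of the edge a->b the point p lies on. Checking all three edges tells us
   whether a pixel's center falls inside the triangle. */
static float edge(float ax, float ay, float bx, float by, float px, float py)
{
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax);
}

int main(void)
{
    /* One triangle, already projected into raster coordinates. */
    float x0 = 4, y0 = 17, x1 = 35, y1 = 14, x2 = 20, y2 = 2;

    for (int y = 0; y < H; y++) {
        for (int x = 0; x < W; x++) {
            float px = x + 0.5f, py = y + 0.5f;  /* center of this pixel */
            float w0 = edge(x1, y1, x2, y2, px, py);
            float w1 = edge(x2, y2, x0, y0, px, py);
            float w2 = edge(x0, y0, x1, y1, px, py);
            int inside = (w0 >= 0 && w1 >= 0 && w2 >= 0) ||
                         (w0 <= 0 && w1 <= 0 && w2 <= 0);
            putchar(inside ? '#' : '.');  /* "color" this pixel */
        }
        putchar('\n');
    }
    return 0;
}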

The most basic of all 3D graphics are of the “wire-frame” stripe, which attempt to draw only the lines that form the edges of their surfaces. They were seen fairly frequently on microcomputers as far back as the early 1980s, the most iconic example undoubtedly being the classic 1984 space-trading game Elite.

Even in something as simple as Elite, we can begin to see how 3D graphics blur the lines between a purely presentation-level technology and a full-blown world simulation. When we have one enemy spaceship in our sights in Elite, there might be several others above, behind, or below us, which the 3D engine “knows” about but which we may not. Combined with a physics engine and some player and computer agency in the model world (taking here the form of lasers and thrusters), it provides the raw materials for a game. Small wonder that so many game developers came to see 3D graphics as such a natural fit.

But, for all that those wire frames in Elite might have had their novel charm in their day, programmers realized that the aesthetics of 3D graphics had to get better for them to become a viable proposition over the long haul. This realization touched off an algorithmic arms race that is still ongoing to this day. The obvious first step was to paint in the surfaces of each solid in single blocks of color, as the later versions of Elite that were written for 16-bit rather than 8-bit machines often did. It was an improvement in a way, but it still looked jarringly artificial, even against a spartan star field in outer space.

The next way station on the road to a semi-realistic-looking computer-generated world was light sources of varying strengths, positioned in the world with X, Y, and Z coordinates of their own, casting their illumination and shadows realistically on the objects to be found there.

A 3D scene with light sources.
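
The simplest form of this idea is the diffuse rule at the heart of most shading algorithms: a surface is brightest when it squarely faces a light and fades to black as it turns away from it. A minimal sketch in C, purely as illustration:

#include <math.h>
#include <stdio.h>

typedef struct { float x, y, z; } vec3;

static float dot(vec3 a, vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

static vec3 normalize(vec3 v)
{
    float len = sqrtf(dot(v, v));
    vec3 r = { v.x / len, v.y / len, v.z / len };
    return r;
}

/* Classic diffuse shading: the brightness of a surface is the cosine of the
   angle between its normal (the direction it faces) and the direction to the
   light. The result scales the surface's base color. */
static float diffuse_brightness(vec3 surface_normal, vec3 to_light)
{
    float d = dot(normalize(surface_normal), normalize(to_light));
    return d > 0.0f ? d : 0.0f;  /* surfaces facing away get no light */
}

int main(void)
{
    vec3 normal   = { 0.0f, 1.0f, 0.0f };  /* a surface facing straight up   */
    vec3 to_light = { 1.0f, 1.0f, 0.0f };  /* a light off to one side, above */
    printf("brightness = %.2f\n", diffuse_brightness(normal, to_light));  /* 0.71 */
    return 0;
}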

The final step was to add textures, small pictures that were painted onto surfaces in place of uniform blocks of color; think of the pitted paint job of a tired X-Wing fighter or the camouflage of a Sherman tank. Textures introduced an enormous degree of complication at the rasterization stage; it wasn’t easy for 3D engines to make them look believable from a multitude of different lines of sight. That said, believable lighting was almost as complicated. Textures or lighting, or both, were already fodder for many an academic thesis before microcomputers even existed.

A 3D scene with light sources and textures.
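
And as a sketch of what painting on a texture means at the level of a single pixel (my own toy example, not any real engine’s code), here a tiny checkerboard is sampled with the customary (u, v) coordinates that run from 0 to 1 across a surface. The lookup itself is simple; the real difficulty lies in working out the correct (u, v) for every screen pixel once perspective is involved:

#include <stdio.h>

/* A tiny 4 X 4 checkerboard texture; each entry is a color index. */
#define TEX_SIZE 4
static const unsigned char checker[TEX_SIZE][TEX_SIZE] = {
    { 0, 1, 0, 1 },
    { 1, 0, 1, 0 },
    { 0, 1, 0, 1 },
    { 1, 0, 1, 0 },
};

/* Point-sample the texture at (u, v), where both run from 0.0 to 1.0 across
   the surface, however that surface happens to be angled on the screen. */
static unsigned char sample_texture(float u, float v)
{
    int tx = (int)(u * TEX_SIZE) % TEX_SIZE;
    int ty = (int)(v * TEX_SIZE) % TEX_SIZE;
    return checker[ty][tx];
}

int main(void)
{
    /* Sample along one row of the surface. */
    for (float u = 0.0f; u < 1.0f; u += 0.25f)
        printf("%d ", sample_texture(u, 0.1f));
    printf("\n");  /* prints: 0 1 0 1 */
    return 0;
}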

In the more results-focused milieu of commercial game development, where what was possible was determined largely by which types of microprocessors Intel and Motorola were selling the most of in any given year, programmers were forced to choose between compromised visions of the academic ideal. These broke down into two categories, neatly exemplified by the two most profitable computer games of the 1990s. Those games that followed in one or the other’s footsteps came to be known as the “Myst clones” and the “DOOM clones.” They could hardly have been more dissimilar in personality, yet they were both symbols of a burgeoning 3D revolution.

The Myst clones got their name from a game developed by Cyan Studios and published by Brøderbund in September of 1993, which went on to sell at least 6 million copies as a boxed retail product and quite likely millions more as a pack-in of one description or another. Myst and the many games that copied its approach tended to be, as even their most strident detractors had to admit, rather beautiful to look at. This was because they didn’t attempt to render their 3D imagery in real time; their rendering was instead done beforehand, often on beefy workstation-class machines, then captured as finished rasters of pixels on disk. Given that they worked with graphics that needed to be rendered only once and could be allowed to take hours to do so if necessary, the creators of games like this could pull out all the stops in terms of textures, lighting, and the sheer number and complexity of the 3D solids that made up their worlds.

These games’ disadvantage — a pretty darn massive one in the opinion of many players — was that their scope of interactive potential was as sharply limited in its way as that of all those interactive movies built around canned video clips that the industry was slowly giving up on. They could present their worlds to their players only as a collection of pre-rendered nodes to be jumped between, could do nothing on the fly. These limitations led most of their designers to build their gameplay around set-piece puzzles found in otherwise static, non-interactive environments, which most players soon started to find a bit boring. Although the genre had its contemplative pleasures and its dedicated aficionados who appreciated them, its appeal as anything other than a tech demo — the basis on which the original Myst was primarily sold — turned out to be the very definition of niche, as the publishers of Myst clones belatedly learned to their dismay. The harsh reality became undeniable once Riven, the much-anticipated, sumptuously beautiful sequel to Myst, under-performed expectations by “only” selling 1 million copies when it finally appeared four years after its hallowed predecessor. With the exception only of Titanic: Adventure out of Time, which owed its fluke success to a certain James Cameron movie with which it happened to share a name and a setting, no other game of this style ever cracked half a million in unit sales. The genre has been off the mainstream radar for decades now.

The DOOM clones, on the other hand, have proved a far more enduring fixture of mainstream gaming. They took their name, of course, from the landmark game of first-person carnage which the energetic young men of id Software released just a couple of months after Myst reached store shelves. John Carmack, the mastermind of the DOOM engine, managed to present a dynamic, seamless, apparently 3D world in place of the static nodes of Myst, and managed to do it in real time, even on a fairly plebeian consumer-grade computer. He did so first of all by being a genius programmer, able to squeeze every last drop out of the limited hardware at his disposal. And then, when even that wasn’t enough to get the job done, he threw out feature after feature that the academics whose papers he had pored over insisted was essential for any “real” 3D engine. His motto was, if you can’t get it done honestly, cheat, by hard-coding assumptions about the world into your algorithms and simply not letting the player — or the level designer — violate them. The end result was no Myst-like archetype of beauty in still screenshots. It pasted 2D sprites into its world whenever there wasn’t horsepower enough to do real modeling, had an understanding of light and its properties that is most kindly described as rudimentary, and couldn’t even handle sloping floors or ceilings, or walls that weren’t perfectly vertical. Heck, it didn’t even let you look up or down.

And absolutely none of that mattered. DOOM may have looked a bit crude in freeze-frame, but millions of gamers found it awe-inspiring to behold in motion. Indeed, many of them thought that Carmack’s engine, combined with John Romero and Sandy Petersen’s devious level designs, gave them the most fun they’d ever had sitting behind a computer. This was immersion of a level they’d barely imagined possible, the perfect demonstration of the real potential of 3D graphics — even if it actually was, as John Carmack would be the first to admit, only 2.5D at best. No matter; DOOM felt like real 3D, and that was enough.

A hit game will always attract imitators, and a massive hit will attract legions of them. Accordingly, the market was soon flooded with, if anything, even more DOOM clones than Myst clones, all running in similar 2.5D engines, the product of both intense reverse engineering of DOOM itself and Carmack’s habit of talking freely about how he made the magic happen to pretty much anyone who asked him, no matter how much his colleagues at id begged him not to. “Programming is not a zero-sum game,” he said. “Teaching something to a fellow programmer doesn’t take it away from you. I’m happy to share what I can because I’m in it for the love of programming.” Carmack was elevated to veritable godhood, the prophet on the 3D mountaintop passing down whatever scraps of wisdom he deigned to share with the lesser mortals below.

Seen in retrospect, the DOOM clones are, like the Myst clones, a fairly anonymous lot for the most part, doubling down on transgressive ultra-violence instead of majestic isolation, but equally failing to capture a certain ineffable something that lay beyond the nuts and bolts of their inspiration’s technology. The most important difference between the Myst and DOOM clones came down to the filthy lucre of dollar and unit sales: whereas Myst‘s coattails proved largely illusory, producing few other hits, DOOM‘s were anything but. Most people who had bought Myst, it seemed, were satisfied with that single purchase; people who bought DOOM were left wanting more first-person mayhem, even if it wasn’t quite up to the same standard.

The one DOOM clone that came closest to replacing DOOM itself in the hearts of gamers was known as Duke Nukem 3D. Perhaps that isn’t surprising, given its pedigree: it was a product of 3D Realms, the rebranded incarnation of Scott Miller’s Apogee Software. Whilst trading under the earlier name, Miller had pioneered the episodic shareware model of game distribution, a way of escaping the heavy-handed group-think of the major boxed-game publishers and their tediously high-concept interactive movies in favor of games that were exponentially cheaper to develop, but also rawer, more visceral, more in line with what the teenage and twenty-something males who still constituted the large majority of dedicated gamers were actually jonesing to play. Miller had discovered the young men of id when they were still working for a disk magazine in Shreveport, Louisiana. He had then convinced them to move to his own glossier, better-connected hometown of Dallas, Texas, and distributed their proto-DOOM shooter Wolfenstein 3D to great success. His protégés had elected to strike out on their own when the time came to release DOOM, but it’s fair to say that that game would probably never have come to exist at all if not for their shareware Svengali. And even if it had, it probably wouldn’t have made them so much money; Jay Wilbur, id’s own tireless guerilla marketer, learned most of his tricks from watching Scott Miller.

Still a man with a keen sense of what his customers really wanted, Miller re-branded Apogee as 3D Realms as a way of signifying its continuing relevance amidst the 3D revolution that took the games industry by storm after DOOM. Then he, his junior partner George Broussard, and 3D Realms’s technical mastermind Ken Silverman set about making a DOOM-like engine of their own, known as Build, which they could sell to other developers who wanted to get up and running quickly. And they used the same engine to make a game of their own, which would turn out to be the most memorable of all those built with Build.

Duke Nukem 3D‘s secret weapon was one of the few boxes in the rubric of mainstream gaming success that DOOM had failed to tick off: a memorable character to serve as both star and mascot. First conceived several years earlier for a pair of Apogee 2D platformers, Duke Nukem was Joseph Lieberman’s worst nightmare, an unrepentant gangster with equally insatiable appetites for bombs and boobies, a fellow who “thinks the Bureau of Alcohol, Tobacco, and Firearms is a convenience store,” as his advertising trumpeted. His latest game combined some of the best, tightest level design yet seen outside of DOOM with a festival of adolescent transgression, from toilet water that served as health potions to strippers who would flash their pixelated breasts at you for the price of a dollar bill. The whole thing was topped off with the truly over-the-top quips of Duke himself: “I’m gonna rip off your head and shit down your neck!”; “Your face? Your ass? What’s the difference?” It was an unbeatable combination, proof positive that Miller’s ability to read his market was undimmed. Released in January of 1996, relatively late in the day for this generation of 3D — or rather 2.5D — technology, Duke Nukem 3D became by some reports the best-selling single computer game of that entire year. It is still remembered with warm nostalgia today by countless middle-aged men who would never want their own children to play a game like this. And so the cycle of life continues…

In a porno shop, shooting it out with policemen who are literally pigs…

Duke Nukem 3D was a triumph of design and attitude rather than technology; in keeping with most of the DOOM clones, the Build engine’s technical innovations over its inspiration were fairly modest. John Carmack scoffed that his old friends’ creation looked like it was “held together with bubble gum.”

The game that did push the technology envelope farthest, albeit without quite managing to escape the ghetto of the DOOM clones, was also a sign in another way of how quickly DOOM was changing the industry: rather than stemming from scruffy veterans of the shareware scene like id and 3D Realms, it came from the heart of the industry’s old-money establishment — from no less respectable and well-financed an entity than George Lucas’s very own games studio.

LucasArts’s Dark Forces was a shooter set in the Star Wars universe, which disappointed everyone right out of the gate with the news that it was not going to let you fight with a light saber. The developers had taken a hard look at it, they said, but concluded in the end that it just wasn’t possible to pull off satisfactorily within the hardware specifications they had to meet. This failing was especially ironic in light of the fact that they had chosen to name their new 2.5D engine “Jedi.” But they partially atoned for it by making the Jedi engine capable of hosting unprecedentedly enormous levels — not just horizontally so, but vertically as well. Dark Forces was full of yawning drop-offs and cavernous open spaces, the likes of which you never saw in DOOM — or Duke Nukem 3D, for that matter, despite its release date of almost a year after Dark Forces. Even more importantly, Dark Forces felt like Star Wars, right from the moment that John Williams’s stirring theme song played over stage-setting text which scrolled away into the frame rather than across it. Although they weren’t allowed to make any of the movies’ characters their game’s star, LucasArts created a serviceable if slightly generic stand-in named Kyle Katarn, then sent him off on vertigo-inducing chases through huge levels stuffed to the gills with storm troopers in urgent need of remedial gunnery training, just like in the movies. Although Dark Forces toned down the violence that so many other DOOM clones were making such a selling point out of — there was no blood whatsoever on display here, just as there had not been in the movies — it compensated by giving gamers the chance to live out some of their most treasured childhood media memories, at a time when there were no new non-interactive Star Wars experiences to be had.

Unfortunately, LucasArts’s design instincts weren’t quite on a par with their presentation and technology. Dark Forces‘s levels were horribly confusing, providing little guidance about what to do or where to go in spaces whose sheer three-dimensional size and scope made the two-dimensional auto-map all but useless. Almost everyone who goes back to play the game today tends to agree that it just isn’t as much fun as it ought to be. At the time, though, the Star Wars connection and its technical innovations were enough to make Dark Forces a hit almost the equal of DOOM and Duke Nukem 3D. Even John Carmack made a point of praising LucasArts for what they had managed to pull off on hardware not much better than that demanded by DOOM.

Yet everyone seemed to be waiting on Carmack himself, the industry’s anointed Master of 3D Algorithms, to initiate the real technological paradigm shift. It was obvious what that must entail: an actual, totally non-fake rendered-on-the-fly first-person 3D engine, without all of the compromises that had marked DOOM and its imitators. Such engines weren’t entirely unheard of; the Boston studio Looking Glass Technologies had been working with them for five years, employing them in such innovative, immersive games as Ultima Underworld and System Shock. But those games were qualitatively different from DOOM and its clones: slower, more complex, more cerebral. The mainstream wanted a game that played just as quickly and violently and viscerally as DOOM, but that did it in uncompromising real 3D. With computers getting faster every year and with a genius like John Carmack to hand, it ought to be possible.

And so Carmack duly went to work on just such an engine, for a game that was to be called Quake. His ever-excitable level designer John Romero, who had the looks and personality to be the rock star gaming had been craving for years, was all in with bells on. “The next game is going to blow DOOM all to hell,” he told his legions of adoring fans. “DOOM totally sucks in comparison to our next game! Quake is going to be a bigger step over DOOM than DOOM was over Wolf 3D.” Drunk on success and adulation, he said that Quake would be more than just a game: “It will be a movement.” (Whatever that meant!) The drumbeat of excitement building outside of id almost seemed to justify his hyperbole; from all the way across the Atlantic, the British magazine PC Zone declared that the upcoming Quake would be “the most important PC game ever made.” The soundtrack alone was to be a significant milestone in the incorporation of gaming into mainstream pop culture, being the work of Trent Reznor and his enormously popular industrial-rock band Nine Inch Nails. Such a collaboration would have been unthinkable just a few years earlier.

While Romero was enjoying life as gaming’s own preeminent rock star and waiting for Carmack to get far enough along on the Quake engine to give him something to do, Carmack was living like a monk, working from 4 PM to 4 AM every day. In another sign of just how quickly id had moved up in the world, he had found himself an unexpectedly well-credentialed programming partner. Michael Abrash was one of the establishment’s star programmers, who had written a ton of magazine articles and two highly regarded technical tomes on assembly-language and graphics programming and was now a part of Microsoft’s Windows NT team. When Carmack, who had cut his teeth on Abrash’s writings, invited him out of the blue to come to Dallas and do Quake with him, Bill Gates himself tried to dissuade his employee. “You might not like it down there,” he warned. Abrash was, after all, pushing 40, a staid sort with an almost academic demeanor, while id was a nest of hyperactive arrested adolescence on a permanent sugar high. But he went anyway, because he was pretty sure Carmack was a genius, and because Carmack seemed to Abrash a bit lonely, working all night every night with only his computer for company. Abrash thought he saw in Quake a first glimmer of a new form of virtual existence that companies like Meta are still chasing eagerly today: “a pretty complicated, online, networked universe,” all in glorious embodied 3D. “We do Quake, other companies do other games, people start building worlds with our format and engine and tools, and these worlds can be glommed together via doorways from one to another. To me this sounds like a recipe for the first real cyberspace, which I believe will happen the way a real space station or habitat probably would — by accretion.”

He may not have come down if he had known precisely what he was getting into; he would later compare making Quake to “being strapped onto a rocket during takeoff in the middle of a hurricane.” The project proved a tumultuous, exhausting struggle that very nearly broke id as a cohesive company, even as the money from DOOM was continuing to roll in. (id’s annual revenues reached $15.6 million in 1995, a very impressive figure for what was still a relatively tiny company, with a staff numbering only a few dozen.)

Romero envisioned a game that would be as innovative in its gameplay as in its technology, one built largely around sword-fighting and other forms of hand-to-hand combat rather than gun play — the same style of combat that LucasArts had decided was too impractical for Dark Forces. Some of his early descriptions make Quake sound more like a full-fledged CRPG in the offing than another straightforward action game. But it just wouldn’t come together, according to some of Romero’s colleagues because he failed to communicate his expectations to them, which left them suspecting that even he wasn’t quite sure what he was trying to make.

Carmack finally stepped in and ordered his design team to make Quake essentially a more graphically impressive DOOM. Romero accepted the decision outwardly, but seethed inwardly at this breach of longstanding id etiquette; Carmack had always made the engines, then given Romero free rein to turn them into games. Romero largely checked out, opening a door that ambitious newcomers like American McGee and Tim Willits, who had come up through the thriving DOOM modding community, didn’t hesitate to push through. The offices of id had always been as hyper-competitive as a DOOM deathmatch, but now the atmosphere was becoming a toxic stew of buried resentments.

In a misguided attempt to fix the bad vibes, Carmack, whose understanding of human nature was as shallow as his understanding of computer graphics was deep, announced one day that he had ordered a construction crew in to knock down all of the walls, so that everybody could work together from a single “war room.” One for all and all for one, and all that. The offices of the most profitable games studio in the world were transformed into a dystopian setting perfect for a DOOM clone, as described by a wide-eyed reporter from Wired magazine who came for a visit: “a maze of drywall and plastic sheeting, with plaster dust everywhere, loose acoustic tiles, and cables dangling from the ceiling. Almost every item not directly related to the completion of Quake was gone. The only privacy to be found was between the padded earpieces of headphones.”

Wired magazine’s August 1996 cover, showing John Carmack flanked by John Romero and Adrian Carmack, marked the end of an era. By the time it appeared on newsstands, Romero had already been fired.

Needless to say, it didn’t have the effect Carmack had hoped for. In his book-length history of id’s early life and times, journalist David Kushner paints a jittery, unnerving picture of the final months of Quake‘s development: they “became a blur of silent and intense all-nighters, punctuated by the occasional crash of a keyboard against a wall. The construction crew had turned the office into a heap. The guys were taking their frustrations out by hurling computer parts into the drywall like knives.” Michael Abrash is more succinct: “A month before shipping, we were sick to death of working on Quake.” And level designer Sandy Petersen, the old man of the group, who did his best to keep his head down and stay out of the intra-office cold war, is even more so: “[Quake] was not fun to do.”

Quake was finally finished in June of 1996. It would prove a transitional game in more ways than one, caught between where games had recently been and where they were going. Still staying true to that odd spirit of hacker idealism that coexisted with his lust for ever faster Ferraris, Carmack insisted that Quake be made available as shareware, so that people could try it out before plunking down its full price. The game accordingly got a confusing, staggered release, much to the chagrin of its official publisher GT Interactive. To kick things off, the first eight levels went up online. Shortly after, there appeared in stores a $10 CD of the full game that had to be unlocked by paying id an additional $50 in order to play beyond the eighth level. Only after that, in August of 1996, did the game appear in a conventional retail edition.

Predictably enough, it all turned into a bit of a fiasco. Crackers quickly reverse-engineered the algorithms used for generating the unlocking codes, which were markedly less sophisticated than the ones used to generate the 3D graphics on the disc. As a result, hundreds of thousands of people were able to get the entirety of the most hotly anticipated game of the year for $10. Meanwhile even many of those unwilling or unable to crack their shareware copies decided that eight levels was enough for them, especially given that the unregistered version could be used for multiplayer deathmatches. Carmack’s misplaced idealism cost id and GT Interactive millions, poisoning relations between them; the two companies soon parted ways.

So, the era of shareware as an underground pipeline of cutting-edge games came to an end with Quake. From now on, id would concentrate on boxed games selling for full price, as would all of their fellow survivors from that wild and woolly time. Gaming’s underground had become its establishment.

But its distribution model wasn’t the only sense in which Quake was as much a throwback as a step forward. It held fast as well to Carmack’s indifference to the fictional context of id’s games, as illustrated by his famous claim that the story behind a game was no more important than the story behind a porn movie. It would be blatantly incorrect to claim that the DOOM clones which flooded the market between 1994 and 1996 represented some great explosion of the potential of interactive narrative, but they had begun to show some interest, if not precisely in elaborate set-piece storytelling in the way of adventure games, at least in the appeal of setting and texture. Dark Forces had been a pioneer in this respect, what with its between-levels cut scenes, its relatively fleshed-out main character, and most of all its environments that really did look and feel like the Star Wars films, from their brutalist architecture to John Williams’s unmistakable score. Even Duke Nukem 3D had the character of Duke, plus a distinctively seedy, neon-soaked post-apocalyptic Los Angeles for him to run around in. No one would accuse it of being an overly mature aesthetic vision, but it certainly was a unified one.

Quake, on the other hand, displayed all the signs of its fractious process of creation, of half a dozen wayward designers all pulling in different directions. From a central hub, you took “slipgates” into alternate dimensions that contained a little bit of everything on the designers’ not-overly-discriminating pop-culture radar, from zombie flicks to Dungeons & Dragons, from Jaws to H.P. Lovecraft, from The Terminator to heavy-metal music, and so wound up not making much of a distinct impression at all.

Most creative works are stamped with the mood of the people who created them, no matter how hard the project managers try to separate the art from the artists. With its color palette dominated by shocks of orange and red, DOOM had almost literally burst off the monitor screen with the edgy joie de vivre of a group of young men whom nobody had expected to amount to much of anything, who suddenly found themselves on the verge of remaking the business of games in their own unkempt image. Quake felt tired by contrast. Even its attempts to blow past the barriers of good taste seemed more obligatory than inspired; the Satanic symbolism, elaborate torture devices, severed heads, and other forms of gore were outdone by other games that were already pushing the envelope even further. This game felt almost somber — not an emotion anyone had ever before associated with id. Its levels were slower and emptier than those of DOOM, with a color palette full of mournful browns and other earth tones. Even the much-vaunted soundtrack wound up rather underwhelming. It was bereft of the melodic hooks that had made Nine Inch Nails’s previous output more palatable for radio listeners than that of most other “extreme” bands; it was more an exercise in sound design than music composition. One couldn’t help but suspect that Trent Reznor had held back all of his good material for his band’s next real record.

At its worst, Quake felt like a tech demo waiting for someone to turn it into an actual game, proving that John Carmack needed John Romero as badly as Romero needed him. But that once-fruitful relationship was never to be rehabilitated: Carmack fired Romero within days of finishing Quake. The two would never work together again.

It was truly the end of an era at id. Sandy Petersen was soon let go as well, Michael Abrash went back to the comfortable bosom of Microsoft, and Jay Wilbur quit for the best of all possible reasons: because his son asked him, “How come all the other daddies go to the baseball games and you never do?” All of them left as exhausted as Quake looks and feels.

Of course, there was nary a hint of Quake‘s infelicities to be found in the press coverage that greeted its release. Even more so than most media industries, the games industry has always run on enthusiasm, and it had no desire at this particular juncture to eat its own by pointing out the flaws in the most important PC game ever made. The coverage in the magazines was marked by a cloying fan-boy fawning that was becoming ever more sadly prominent in gamer culture. “We are not even worthy to lick your toenails free of grit and fluffy sock detritus,” PC Zone wrote in a public letter to id. “We genuflect deeply and offer our bare chests for you to stab with a pair of scissors.” (Eww! A sense of proportion is as badly lacking as a sense of self-respect…) Even the usually sober-minded (by gaming-journalism standards) Computer Gaming World got a little bit creepy: “Describing Quake is like talking about sex. It must be experienced to be fully appreciated.”

Still, I would be a poor historian indeed if I called all the hyperbole of 1996 entirely unjustified. The fact is that the passage of time has tended to emphasize Quake‘s weaknesses, which are mostly in the realm of design and aesthetics, whilst obscuring its contemporary strengths, which were in the realm of technology. Although not quite the first game to graft a true 3D engine onto ultra-fast-action gameplay — Interplay’s Descent beat it to the market by more than a year — it certainly did so more flexibly and credibly than anything else to date, even if Carmack still wasn’t above cheating a bit when push came to shove. (By no means is the Quake engine entirely free of tricksy 2D sprites in places where proper 3D models are just too expensive to render.)
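
For those who like to peek under the hood, the essence of what the move to true 3D bought is that the view direction now depends on a pitch angle as well as a yaw angle. Here is a minimal sketch of the idea in C, illustrating only the general principle and not anything from Quake’s actual source:

#include <math.h>

typedef struct { double x, y, z; } vec3;

/* Build a view direction from yaw (turning left and right) and pitch
   (looking up and down). A 2.5D engine effectively pins pitch at zero,
   which is precisely why its walls can be drawn as neat vertical strips. */
vec3 view_direction(double yaw, double pitch)
{
    vec3 v;
    v.x = cos(pitch) * cos(yaw);
    v.y = cos(pitch) * sin(yaw);
    v.z = sin(pitch);
    return v;
}

Once pitch becomes a free variable, every wall, floor, and ceiling has to be treated as an arbitrarily oriented polygon in space, which is where the real computational bill comes due.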

Nevertheless, it’s difficult to fully convey today just how revolutionary the granular details of Quake seemed in 1996: the way you could look up and down and all around you with complete freedom; the way its physics engine made guns kick so that you could almost feel it in your mouse hand; the way you could dive into water and experience the visceral sensation of actually swimming; the way the wood paneling of its walls glinted realistically under the overhead lighting. Such things are commonplace today, but Quake paved the way. Most of the complaints I’ve raised about it could be mitigated by the simple expedient of not even bothering with the lackluster single-player campaign, of just playing it with your mates in deathmatch.

But even if you preferred to play alone, Quake was a sign of better things to come. “It goes beyond the game and more into the engine and the possibilities,” says Rob Smith, who watched the Quake mania come and go as the editor of PC Gamer magazine. “Quake presented options to countless designers. The game itself doesn’t make many ‘all-time’ lists, but its impact [was] as a game changer for 3D gaming, [an] engine that allowed other game makers to express themselves.” For with the industry’s Master of 3D Algorithms John Carmack having shown what was possible and talking as freely as ever about how he had achieved it, with Michael Abrash soon to write an entire book about how he and Carmack had made the magic happen, more games of this type, ready and able to harness the technology of true 3D to more exciting designs, couldn’t be far behind. “We’ve pretty much decided that our niche is in first-person futuristic action games,” said John Carmack. “We stumble when we get away from the techno stuff.” The industry was settling into a model that would remain in place for years to come: id would show what was possible with the technology of 3D graphics, then leave it to other developers to bend it in more interesting directions.

Soon enough, then, titles like Jedi Knight and Half-Life would push the genre once known as DOOM clones, now trading under the more sustainable name of the first-person shooter, in more sophisticated directions in terms of storytelling and atmosphere, without losing the essence of what made their progenitors so much fun. They will doubtless feature in future articles.

Next time, however, I want to continue to focus on the technology, as we turn to another way in which Quake was a rough draft for a better gaming future: months after its initial release, it became one of the first games to display the potential of hardware acceleration for 3D graphics, marking the beginning of a whole new segment of the microcomputer industry, one worth many billions of dollars today.






(Sources: the books Rocket Jump: Quake and the Golden Age of First-Person Shooters by David L. Craddock, The Graphics Programming Black Book by Michael Abrash, Masters of DOOM: How Two Guys Created an Empire and Transformed Pop Culture by David Kushner, Dungeons and Dreamers: The Rise of Computer Game Culture from Geek to Chic by Brad King and John Borland, Principles of Three-Dimensional Computer Animation by Michael O’Rourke, and Computer Graphics from Scratch: A Programmer’s Introduction by Gabriel Gambetta. PC Zone of May 1996; Computer Gaming World of July 1996 and October 1996; Wired of August 1996 and January 2010. Online sources include Michael Abrash’s “Ramblings in Realtime” for Blue’s News.

Quake is available as a digital purchase at GOG.com, as is Star Wars: Dark Forces. Duke Nukem 3D can be found on Steam.)

 
 


Wing Commander IV

It’s tough to put a neat label on Wing Commander IV: The Price of Freedom. On the one hand, it was a colossally ambitious and expensive project — in fact, the first computer game in history with a budget exceeding $10 million. On the other, it was a somewhat rushed, workmanlike game, developed in half the time of Wing Commander III using the same engine and tools. That these two things can simultaneously be true is down to the strange economics of mid-1990s interactive movies.



Origin Systems and Chris Roberts, the Wing Commander franchise’s development studio and mastermind respectively, wasted very little time embarking on the fourth numbered game in the series after finishing up the third one in the fall of 1994. Within two weeks, Roberts was hard at work on his next story outline. Not long after the holiday season was over and it was clear that Wing Commander III had done very well indeed for itself, his managers gave him the green light to start production in earnest, on a scale that even a dreamer like him could hardly have imagined a few years earlier.

Like its predecessor, Wing Commander IV was destined to be an oddly bifurcated project. The “game” part of the game — the missions you actually fly from the cockpit of a spaceborne fighter — was to be created in Origin’s Austin, Texas, offices by a self-contained and largely self-sufficient team of programmers and mission designers, using the existing flight engine with only modest tweaks, without a great deal of day-to-day communication with Roberts himself. Meanwhile the latter would spend the bulk of 1995 in Southern California, continuing his career as Hollywood’s most unlikely and under-qualified movie director, shooting a script created by Frank DePalma and Terry Borst from his own story outline. It was this endeavor that absorbed the vast majority of a vastly increased budget.

For there were two big, expensive changes on this side of the house. One was a shift away from the green-screen approach of filming real actors on empty sound stages, with the scenery painted in during post-production by pixel artists; instead Origin had its Hollywood partners Crocodile Productions build traditional sets, no fewer than 37 of them in all. The other was the decision to abandon videotape in favor of 35-millimeter stock, the same medium on which feature films were shot. This was a dubiously defensible decision on practical grounds, what with the sharply limited size and resolution of the computer-monitor screens on which Roberts’s movie would be seen, but it says much about where the young would-be auteur’s inspirations and aspirations lay. “My goal is to bring the superior production values of Hollywood movies to the interactive realm,” he said in an interview. Origin would wind up paying Crocodile $7.7 million in all in the pursuit of that lofty goal.

The hall of the Terran Assembly was one of the more elaborate of the Wing Commander IV sets, showing how far the series had come but also in a way how far it still had to go, what with its distinctly plastic, stage-like appearance. It will be seen on film in a clip later on in this article.

These changes served only to distance the movie part of Wing Commander from the game part that much more; now the folks in Austin didn’t even have to paint backgrounds for Roberts’s film shoot. More than ever, the two halves of the whole were water and oil rather than water and wine. All told, it’s doubtful whether the flying-and-shooting part of Wing Commander IV absorbed much more than 10 percent of the total budget.

Origin was able to hire most of the featured actors from last time out to return for Wing Commander IV. Once again, Mark Hamill, one of the most sensible people in Hollywood, agreed to head up the cast as Colonel Blair, the protagonist and the player’s avatar, for a salary of $419,100 for the 43-day shoot. (“A lot of actors spend their whole lives wanting to be known as anything,” he said when delicately asked if he ever dwelt upon his gradual, decade-long slide down through the ranks of the acting profession, from starring as Luke Skywalker in the Star Wars blockbusters to starring in videogames. “I always thought I should be happy for what I have instead of being unhappy for what I don’t have. So, you know, if things are going alright with your family… I don’t know, not really. I think it’s good.”) Likewise, Tom Wilson ($117,300) returned to play Blair’s fellow pilot and frenemy Maniac; Malcolm McDowell ($285,500) again played the stiffly starched Admiral Tolwyn; and John Rhys-Davies ($52,100) came back as the fighter jock turned statesman Paladin. After the rest of the cast and incidental expenses were factored in, the total bill for the actors came to just under $1.4 million.

Far from being taken aback by the numbers involved, Origin made them a point of pride. If anything, it inflated them; the total development cost of $12 million which was given to magazines like Computer Gaming World over the course of one of the most extensive pre-release hype campaigns the industry had ever seen would appear to be a million or two over the real figure, based on what I’ve been able to glean from the company’s internal budgeting documents. Intentionally or not, the new game’s subtitle made the journalists’ headlines almost too easy to write: clearly, the true “price of freedom” was $12 million. The award for the most impassioned preview must go to the British edition of PC Gamer, which proclaimed that the game’s eventual release would be “one of the most important events of the twentieth century.” On an only slightly more subdued note, Computer Gaming World noted that “if Wing Commander III was like Hollywood, this game is Hollywood.” The mainstream media got in on the excitement as well: CNN ran a segment on the work in progress, Newsweek wrote it up, and Daily Variety was correct in calling it “the most expensive CD-ROM production ever” — never mind a million or two here or there. Mark Hamill and Malcolm McDowell earned some more money by traveling the morning-radio and local-television circuit in the final weeks before the big release.


Wing Commander IV was advertised on television at a time when that was still a rarity for computer games. The advertisements blatantly spoiled what was intended to be a major revelation about the real villain of the story. (You have been warned!)


The game was launched on February 8, 1996, in a gala affair at the Beverly Hills Planet Hollywood, with most of the important cast members in attendance to donate their costumes — “the first memorabilia from a CD-ROM game to be donated to the internationally famous restaurant,” as Origin announced proudly. (The restaurant itself appears to have been less enthused; the costumes were never put on display after the party, and seem to be lost now.) The assembled press included representatives of CNN, The Today Show, HBO, Delta Airlines’s in-flight magazine, and the Associated Press among others. In the weeks that followed, Chris Roberts and Mark Hamill did a box-signing tour in conjunction with Incredible Universe, a major big-box electronics chain of the time.

Tom Wilson, Malcolm McDowell, and Mark Hamill at the launch party.

The early reviews were positive, and not just those in the nerdy media. “The game skillfully integrates live-action video with computer-generated graphics and sophisticated gameplay. Has saving the universe ever been this much fun?” asked Newsweek, presumably rhetorically. Entertainment Weekly called Wing Commander IV “a movie game that takes CD-ROM warfare into the next generation,” giving it an A- on its final report card. The Salt Lake City Tribune said that it had “a cast that would make any TV-movie director jealous — and more than a few feature-film directors as well. While many games tout themselves as interactive movies, Wing Commander IV is truly deserving of the title — a pure joy to watch and play.” The Detroit Free Press said that “at times, it was like watching an episode of a science-fiction show.”

The organs of hardcore gaming were equally fulsome. Australia’s Hyper magazine lived up to its name (Hyperventilate? Hyperbole?) with the epistemologically questionable assertion that “if you don’t play this then you really don’t own a computer.” Computer Gaming World, still the United States’s journal of record, was almost as effusive, writing that “as good as the previous installment was, it served only as a rough prototype for the polished chrome that adorns Wing Commander IV. This truly is the vanguard of the next generation of electronic entertainment.”

Surprisingly, it was left to PC Gamer, the number-two periodical in the American market, normally more rather than less hype-prone than its older and somewhat stodgier competitor, to inject a note of caution into the critical discourse, by acknowledging how borderline absurd it was on the face of it to release a game in which 90 percent of the budget had gone into the cut scenes.

How you feel about Wing Commander IV: The Price of Freedom is going to depend a lot on how you felt about Wing Commander III and the direction the series seems to be headed in.

When the original Wing Commander came out, it was a series of incredible, state-of-the-art space-combat sequences, tied together with occasional animated cut scenes. Today, Wing Commander IV seems more like a series of incredible, full-motion-video cut scenes tied together with occasional space-combat sequences. You can see the shift away from gameplay and toward multimedia flash in one of the ads for Wing Commander IV; seven of the eight little “bullet points” that list the game’s impressive new features are devoted to improvements in the quality of the video. Only the last point says anything about actual gameplay. If the tail’s not wagging the dog yet, it’s getting close.

For all its cosmetic improvements, Wing Commander IV feels just a little hollow. I can’t help thinking about what the fourth Wing Commander game might be like if the series had moved in the opposite direction, making huge improvements in the actual gameplay, rather than spending more and more time and effort on the stuff in between.

Still, these concerns were only raised parenthetically; even PC Gamer‘s reviewer saw fit to give the game a rating of 90 percent after unfurrowing his brow.



Today, however, the imbalance described above has become even more difficult to overlook, and seems even more absurd. As my regular readers know, narrative-oriented games are the ones I tend to be most passionate about; I’m the farthest thing from a Chris Crawford, insisting that the inclusion of any set-piece story line is a betrayal of interactive entertainment’s potential. My academic background is largely in literary studies, which perhaps explains why I tend to want to read games like others do books. And yet, with all that said, I also recognize that a game needs to give its player something interesting to do.

I’m reminded of an anecdote from Steve Diggle, a guitarist for the 1970s punk band Buzzcocks. He tells of seeing the keyboardist for the progressive-rock band Yes performing with “a telephone exchange of electronic things that nobody could afford or relate to. At the end, he brought an alpine horn out — because he was Swiss. It was a long way from Little Richard. I thought, ‘Something’s got to change.'” There’s some of the same quality to Wing Commander IV. Matters have gone so far out on a limb that one begins to suspect the only thing left to be done is just to burn it all down and start over.

But we do strive to be fair around here, so let’s try to evaluate the movie and the game of Wing Commander IV on their own merits before we address their imperfect union.

Chris Roberts is not a subtle storyteller; his influences are always close to the surface. The first three Wing Commander games were essentially a retelling of World War II in the Pacific, with the Terran Confederation for which Blair flies in the role of the United States and its allies and the evil feline Kilrathi in that of Japan. Now, with the alien space cats defeated once and for all, Roberts has moved on to the murkier ethical terrain of the Cold War, where battles are fought in the shadows and friend and foe are not so easy to distinguish. Instead of being lauded like the returning Greatest Generation were in the United States after World War II, Blair and his comrades who fought the good fight against the Kilrathi are treated more like the soldiers who came back from Vietnam. We learn that we’ve gone from rah-rah patriotism to something else the very first time we see Blair, when he meets a down-on-his-luck fellow veteran in a bar and can, at your discretion as the player, give him a few coins to help him out. Shades of gray are not really Roberts’s forte; earnest guy that he is, he prefers the primary-color emotions. Still, he’s staked out his dramatic territory and now we have to go with it.

Having been relegated to the reserves after the end of the war with the Kilrathi, Blair has lately been running a planetside farm, but he’s called back to active duty to deal with a new problem on the frontiers of the Terran Confederation: a series of pirate raids in the region of the Border Worlds, a group of planets that is allied with the Confederation but has always preferred not to join it formally. Because the attacks are all against Confederation vessels rather than those of the Border Worlds, it is assumed that the free-spirited inhabitants of the latter are behind them. I trust that it won’t be too much of a spoiler if I reveal here that the reality is far more sinister.

By all means, we should give props to Roberts for not just finding some way to bring the Kilrathi back as humanity’s existential threat. They are still around, and even make an appearance in Wing Commander IV, but they’ve seen the error of their ways with Confederation guidance and are busily rebuilding their society on more peaceful lines. (The parallels with World War II-era — and now postwar — Japan, in other words, still hold true.)

For all the improved production values, the Kilrathi in Wing Commander IV still look as ridiculous as ever, more cuddly than threatening.

The returns from Origin’s $9 million investment in the movie are front and center. An advantage of working with real sets instead of green screens is the way that the camera is suddenly allowed to move, making the end result look less like something filmed during the very earliest days of cinema and more like a product of the post-Citizen Kane era. One of the very first scenes is arguably the most impressive of them all. The camera starts on the ceiling of a meeting hall, looking directly down at the assembled dignitaries, then slowly sweeps to ground level, shifting as it moves from a vertical to a horizontal orientation. I’d set this scene up beside the opening of Activision’s Spycraft — released at almost the same time as Wing Commander IV, as it happens — as the most sophisticated that this generation of interactive movies ever got by the purely technical standards of film-making. (I do suspect that Wing Commander IV‘s relative adroitness is not so much down to Chris Roberts as to its cinematographer, a 21-year Hollywood veteran named Eric Goldstein.)


The acting, by contrast, is on about the same level as Wing Commander III: professional if not quite passionate. Mark Hamill’s dour performance is actually among the least engaging. (This is made doubly odd by the fact that he had recently been reinventing himself as a voice actor, through a series of portrayals — including a memorable one in the game Gabriel Knight: Sins of the Fathers — that are as giddy and uninhibited as his Colonel Blair isn’t.) On the other hand, it’s a pleasure to hear Malcolm McDowell and John Rhys-Davies deploy their dulcet Shakespearian-trained voices on even pedestrian (at best) dialog like this. But the happiest member of the cast must be Tom Wilson, whose agent’s phone hadn’t exactly been ringing off the hook in recent years; his traditional-cinema career had peaked with his role as the cretinous villain Biff in the Back to the Future films. Here he takes on the similarly over-the-top role of Maniac, a character who had become a surprise hit with the fans in Wing Commander III, and sees his screen time increased considerably in the fourth game as a result. As comic-relief sidekicks go, he’s no Sancho Panza, but he does provide a welcome respite from Blair’s always prattling on, a little listlessly and sleepy-eyed at times, about duty and honor and what hell war is (such hell that Chris Roberts can’t stop making games about it).

That said, the best humor in Wing Commander IV is of the unintentional kind. There’s a sort of Uncanny Valley in the midst of this business of interactive movies, as there is in so many creative fields. When the term was applied to games that merely took some inspiration from cinema, perhaps with a few (bad) actors mouthing some lines in front of green screens, it was easier to accept fairly uncritically. But the closer games like this one come to being real movies, the more their remaining shortcomings seem to stand out, and, paradoxically, the farther from their goal they seem to be. The reality is that 37 sets isn’t many by Hollywood standards — and most of these are cheap, sparse, painfully plastic-looking sets at that. Like in those old 1960s episodes of Star Trek, everybody onscreen visibly jumps — not in any particular unison, mind you — when the camera shakes to indicate an explosion and the party-supply-store smoke machines start up. The ray guns they shoot each other with look like gaudy plastic toys that Wal Mart would be ashamed to stock, while the accompanying sound effects would have been rejected as too cheesy by half by the producers of Battlestar Galactica.

All of this is understandable, even forgivable. A shooting budget of $9 million may have been enormous in game terms, but it was nothing by the standards of a Hollywood popcorn flick. (The 1996 film Star Trek: First Contact, for example, had five times the budget of Wing Commander IV, and it was not even an especially expensive example of its breed.) In the long run, interactive movies would find their Uncanny Valley impossible to bridge. Those who made them believed that they were uniquely capable of attracting a wider, more diverse audience than the people who typically played games in the mid-1990s. That proposition may have been debatable, but we’ll take it at face value. The problem was that, in order to attract these folks, they had to look like more than C-movies with aspirations of reaching B status. And the games industry’s current revenues simply didn’t give them any way to get from here to there. Wing Commander IV is a prime case in point: the most expensive game ever made still looked like a cheap joke by Hollywood standards.

The spaceships of the far future are controlled by a plastic steering wheel that looks like something you’d find hanging off of a Nintendo console. Pity the poor crew member whose only purpose in life seems to be standing there holding on to it and fending off the advances of Major Todd “Maniac” “Sexual Harassment is Hilarious!” Marshall.

Other failings of Wing Commander IV, however, are less understandable and perchance less forgivable. It’s sometimes hard to believe that this script was the product of professional screenwriters, given the quantity of dialog that seems lifted from a Saturday Night Live sketch, and which often had my wife and me rolling on the floor when we played the game together recently. (Or rather, when I played and she watched and laughed.) “Just because we operate in the void of space, is loyalty equally weightless?” Malcolm McDowell somehow manages to intone in that gorgeously honed accent of his without smirking. A young woman mourning the loss of her beau — as soon as you saw that these two had a thing going, you knew he was doomed, by the timeless logic of war movies — chooses the wrong horse as her metaphor and then just keeps on riding it out into the rhetorical sagebrush: “He’s out there along with my heart. Both no more than space dust. People fly through him every day and don’t even know it.”

Then there’s the way that everyone, excepting only Blair, is constantly referred to only by his or her call sign. This doesn’t do much to enhance the stateliness of a formal military funeral: “Some may think that Catscratch will be forgotten. They’re wrong. He’ll stay in our hearts always.” There’s the way that all of the men are constantly saluting each other at random moments, as if they’re channeling all of the feelings they don’t know how to express into that act — saluting to keep from crying, saluting as a way to avoid saying, “I love you, man!,” saluting whenever the screenwriters don’t know what the hell else to have them do. (Of course, they all do it so sloppily that anyone who really was in the military will be itching to jump through the monitor and smack them into shape.) And then there’s the ranks and titles, which sound like something children on a playground — or perhaps (ahem!) someone else? — came up with: Admiral Tolwyn gets promoted to “Space Marshal,” for Pete’s sake.

I do feel just a little bad about making fun of all this so much, because Chris Roberts’s heart is clearly in the right place. At a time when an increasing number of games were appealing only to the worst sides of their players, Wing Commander IV at least gave lip service to the ties that bind, the things we owe to one another. It’s not precisely wrong in anything it says, even if it does become a bit one-note in that tedious John Wayne kind of way. Deep into the game, you discover that the sinister conspiracy you’ve been pursuing involves a new spin on the loathsome old arguments of eugenics, those beliefs that some of us have better genes than others and are thus more useful, valuable human beings, entitled to things that their inferior counterparts are not. Wing Commander IV knows precisely where it stands on this issue — on the right side. But boy, can its delivery be clumsy. And its handling of a more complex social issue like the plight of war veterans trying to integrate back into civilian society is about as nuanced as the old episodes of Magnum, P.I. that probably inspired it.

But betwixt and between all of the speechifying and saluting, there is still a game to play, consisting of about 25 to 30 missions worth of space-combat action, depending on the choices you make from the interactive movie’s occasional menus and how well you fly the missions themselves. The unsung hero of Wing Commander IV must surely be one Anthony Morone, who bore the thankless title of “Game Director,” meaning that he was the one who oversaw the creation of the far less glamorous game part of the game back in Austin while Chris Roberts was off in Hollywood shooting his movie. He did what he could with the limited time and resources at his disposal.

I noted above how the very way that this fourth game was made tended to pull the two halves of its personality even farther apart. That’s true on one level, but it’s also true that Morone made some not entirely unsuccessful efforts to push back against that centrifugal drift. Some of the storytelling now happens inside the missions themselves — something Wing Commander II, the first heavily plot-based entry in the series, did notably well, only to have Wing Commander III forget about it almost completely. Now, though, it’s back, such that your actions during the missions have a much greater impact on the direction of the movie. For example, at one point you’re sent to intercept some Confederation personnel who have apparently turned traitor. In the course of this mission, you learn what their real motivations are, and, if you think they’re good ones, you can change sides and become their escort rather than their attacker.

Indeed, there are quite a few possible paths through the story line and a handful of different endings, based on both the choices you make from those menus that pop up from time to time during the movie portions and your actions in the heat of battle. In this respect too, Wing Commander IV is more ambitious and more sophisticated than Wing Commander III.

A change in Wing Commander IV that feels very symbolic is the removal of any cockpit graphics. In the first game, seeing your pilot avatar manipulate the controls and seeing evidence of damage in your physical surroundings added enormously to the sense of verisimilitude. Now, all that has been discarded without a second thought by a game with other priorities.

But is it enough? It’s hard to escape a creeping sense of ennui as you play this game. The flight engine and mission design still lag well behind LucasArts’s 1994 release TIE Fighter, a game that has aged much better than this one in all of its particulars. Roughly two out of every three missions here still don’t have much to do with the plot and aren’t much more than the usual “fly between these way points and shoot whatever you find there” — a product of the need to turn Roberts’s movie into a game that lasts longer than a few hours, in order to be sure that players feel like they have gotten their $50 worth. Worse, the missions are poorly balanced, being much more difficult than those in the previous game; enemy missiles are brutally overpowered, now virtually guaranteed to kill you with one hit. The sharply increased difficulty feels more accidental than intentional, a product of the compressed development schedule and a resultant lack of play-testing. However it came about, it pulls directly against Origin’s urgent need to attract more — read, more casual — gamers to the series in order to justify its escalating budgets. Here as in so many other places in this game, the left hand didn’t know what the right hand was doing, to the detriment of both.

In the end, then, neither the movie nor the game of Wing Commander IV can fully stand up on its own, and in combination they tend to clash more than they create any scintillating synergy. One senses when playing through the complete package that Origin’s explorations in this direction have indeed reached a sort of natural limit akin to that alpine-horn-playing keyboard player, that the only thing left to do now is to back up and try something else.


The magazines may have been carried away by the hype around Wing Commander IV, but not all ordinary gamers were. For example, one by the name of Robert Fletcher sent Origin the following letter:

I have noticed that the game design used by Origin has stayed basically the same. Wing Commander IV is a good example of a game design that has shown little growth. If one were to strip away the film clips, there would be a bare-bones game. The game would look and play like a game from the early 1980s. A very simple branching story line, with a little arcade action.

With all the muscle and talent at Origin’s command, it makes me wonder if Origin is really trying to push the frontier of game design. I know a little of what it takes to develop a game, from all the articles I have read (and I have read many). Many writers and developers are calling for their peers to get back to pushing the frontier of game design, over the development of better graphics.

Wing Commander IV has the best graphics I have seen, and it will be a while before anyone will match this work of art. But as a game, Wing Commander IV makes a better movie.

In its April 1996 issue — notice that date! — Computer Gaming World published an alleged preview of Origin’s plans for Wing Commander V. Silly though the article is, it says something about the reputation that Chris Roberts and his franchise were garnering among gamers like our Mr. Fletcher for pushing the envelope of money and technology past the boundaries of common sense, traveling far out on a limb that was in serious danger of being cut off behind them.

With Wing Commander IV barely a month old, Origin has already announced incredible plans for the next game in the highly successful series. In another first for a computer-game company, Origin says it will design small working models of highly maneuverable drones which can be launched into space, piloted remotely, and filmed. The craft will enable Wing V to have “unprecedented spaceflight realism and true ‘star appeal,'” said a company spokesman.

Although the next game in the science-fiction series sounds more like fiction than science, Origin’s Chris Roberts says it’s the next logical step for his six-year-old creation. “If you think about it,” he says, “Wing Commander [I] was the game where we learned the mechanics of space fighting. We made lots of changes and improvements in Wing II. With Wing III, we raised the bar considerably with better graphics, more realistic action, full-motion video, and big-name stars in video segments. In Wing IV, we upped the ante again with real sets, more video, and, in my opinion, a much better story. We’ve reached the point of using real stars and real sets — now it’s time to take our act on location: real space.”

Analysts say it’s nearly impossible to estimate the cost of such an undertaking. Some put figures at between $100 million and $10 billion, just to deploy a small number of remotely pilotable vehicles beyond Earth’s atmosphere. Despite this, Origin’s Lord British (Richard Garriott) claims that he has much of the necessary financial support from investors. Says Garriott, “When we told [investors] what we wanted to do for Wing Commander V, they were amazed. We’re talking about one of man’s deepest desires — to break free of the bonds of Earth. We know it seems costly in comparison with other games, but this is unlike anything that’s ever been done. I don’t see any problem getting the financial backing for this project, and we expect to recoup the investment in the first week. You’re going to see a worldwide release on eight platforms in 36 countries. It’s going to be a huge event. It’ll dwarf even Windows 95.”

Tellingly, some fans believed the announcement was real, writing Origin concerned letters about whether this was really such a good use of its resources.

Still, the sense of unease about Origin’s direction was far from universal. In a sidebar that accompanied its glowing review of Wing Commander IV in that same April 1996 issue, Computer Gaming World asked on a less satirical note, “Is it time to take interactive movies seriously?” The answer according to the magazine was yes: “Some will continue to mock the concept of ‘Siliwood,’ but the marriage of Hollywood and Silicon Valley is definitely real and here to stay. In this regard, no current game charts a more optimistic path to the future of multimedia entertainment than Wing Commander IV.” Alas, the magazine’s satire would prove more prescient than this straightforward opinion piece. Rather than the end of the beginning of the era of interactive movies, Wing Commander IV would go down in history as the beginning of the end, a limit of grandiosity beyond which further progress was impossible.

The reason came down to the cold, hard logic of dollars and cents, working off of a single data point: Wing Commander IV sold less than half as many copies as Wing Commander III. Despite the increased budget and improved production values, despite all the mainstream press coverage, despite the gala premiere at Planet Hollywood, it just barely managed to break even, long after its initial release. I believe the reason why had everything to do with that Uncanny Valley I described for you. Those excited enough by the potential of the medium to give these interactive movies the benefit of the doubt had already done so, and even many of these folks were now losing interest. Meanwhile the rest of the world was, at best, waiting for such productions to mature enough that they could sit comfortably beside real movies, or even television. But this was a leap that even Origin Systems, a subsidiary of Electronic Arts, the biggest game publisher in the country, was financially incapable of making. And as things currently stood, the return on investment on productions even the size of Wing Commander IV — much less still larger — simply wasn’t there.

During this period, a group of enterprising Netizens took it upon themselves to compile a weekly “Internet PC Games Chart” by polling thousands of their fellow gamers on what they were playing just at that moment. Wing Commander IV is present on the lists they published during the spring of 1996, rising as high as number four for a couple of weeks. But the list of games that consistently place above it is telling: Command & Conquer, Warcraft II, DOOM II, Descent, Civilization II. Although some of them do have some elements of story to bind their campaigns together and deliver a long-form single-player experience, none of them aspires to full-blown interactive movie-dom (not even Command & Conquer, which does feature real human actors onscreen giving its mission briefings). In fact, no games meeting that description are ever to be found anywhere in the top ten at the same time as Wing Commander IV.

Thanks to data like this, it was slowly beginning to dawn on the industry’s movers and shakers that the existing hardcore gamers — the people actually buying games today, and thereby sustaining their companies — were less interested in a merger of Silicon Valley and Hollywood than they themselves were. “I don’t think it’s necessary to spend that much money to suspend disbelief and entertain the gamer,” said Jim Namestka of Dreamforge Intertainment by way of articulating the emerging new conventional wisdom. “It’s alright to spend a lot of money on enhancing the game experience, but a large portion spent instead on huge salaries for big-name actors… I question whether that’s really necessary.”

I’ve written quite a lot in recent articles about 1996 as the year that essentially erased the point-and-click adventure game as one of the industry’s marquee genres. Wing Commander IV isn’t one of those, of course, even if it does look a bit like one at times, when you’re wandering around a ship talking to your crew mates. Still, the Venn diagram of the interactive movie does encompass games like Wing Commander IV, just as it does games like, say, Phantasmagoria, the biggest adventure hit of 1995, which sold even more copies than Wing Commander III. In 1996, however, no game inside that Venn diagram became a million-selling breakout hit. The best any could manage was a middling performance relative to expectations, as was the case for Wing Commander IV. And so the retrenchment began.

It would have been financially foolish to do anything else. The titles that accompanied and often bested Wing Commander IV on those Internet PC Games Charts had all cost vastly less money to make and yet sold as well or better. id Software’s Wolfenstein 3D and DOOM, the games that had started the shift away from overblown storytelling and extended multimedia cut scenes and back to the nuts and bolts of gameplay, had been built by a tiny team of scruffy outsiders working on a shoestring; call this the games industry’s own version of Buzzcocks versus Yes.

The shift away from interactive movies didn’t happen overnight. At Origin, the process of bargaining with financial realities would lead to one more Wing Commander game before the franchise was put out to pasture, still incorporating real actors in live-action cut scenes, but on a less lavish, more sustainable — read, cheaper — scale. The proof was right there in the box: Wing Commander: Prophecy, which but for a last-minute decision by marketing would have been known as Wing Commander V, shipped on three CDs in late 1997 rather than the six of Wing Commander IV. By that time, the whole franchise was looking hopelessly passé in a sea of real-time strategy and first-person shooters whose ethic was to get you into the action fast and keep you there, without any clichéd meditations about the hell that is war. Wing Commander IV had proved to be the peak of the interactive-movie mountain rather than the next base camp which Chris Roberts had imagined it to be.

This is not to say that digital interactive storytelling as a whole died in 1996. It just needed to find other, more practical and ultimately more satisfying ways to move forward. Some of those would take shape in the long-moribund CRPG genre, which enjoyed an unexpected revival close to the decade’s end. Adventure games too would soldier on, but on a smaller scale more appropriate to their reduced commercial circumstances, driven now by passion for the medium rather than hype, painted once again in lovely pixel art instead of grainy digitized video. For that matter, even space simulators would enjoy a golden twilight before falling out of fashion for good, thanks to several titles that kicked against what Wing Commander had become by returning the focus to what happened in the cockpit.

All of these developments have left Wing Commander IV standing alone and exposed, its obvious faults only magnified that much more by its splendid isolation. It isn’t a great game, nor even all that good a game, but it isn’t a cynical or unlikable one either. Call it a true child of Chris Roberts: a gawky chip off the old block, with too much money and talent and yet not quite enough.



Did you enjoy this article? If so, please think about pitching in to help me make many more like it. You can pledge any amount you like.



(Sources: the book Origin’s Official Guide to Wing Commander IV: The Price of Freedom by Melissa Tyler; Computer Gaming World of February 1995, May 1995, December 1995, April 1996, and July 1997; Strategy Plus of December 1995; the American PC Gamer of September 1995 and May 1996; Origin’s internal newsletter Point of Origin of September 8 1995, January 12 1996, February 12 1996, April 5 1996, and May 17 1996; Retro Gamer 59. Online sources include the various other internal Origin documents, video clips, pictures, and more hosted at Wing Commander News and Mark Asher’s CNET GameCenter columns from March 24 1999 and October 29 1999. And, for something completely different, Buzzcocks being interviewed at the British Library in 2016. RIP Pete Shelley.

Wing Commander IV: The Price of Freedom is available from GOG.com as a digital purchase.)

 


Spycraft: The Great Game, Part 2

Warning: this article spoils the ending of Spycraft: The Great Game!

On January 6, 1994, Activision announced in a press release that it was “teaming up with William Colby, the former head of the Central Intelligence Agency, to develop and publish espionage-thriller videogames.” Soon after, Colby brought his good friend Oleg Kalugin into the mix as well. With the name-brand, front-of-the-box talent for Spycraft: The Great Game — and, if all went swimmingly, its sequels — thus secured, it was time to think about who should do the real work of making it.

Even as late as 1994, Activision’s resurrection from its near-death experience of 1991 was still very much a work in progress. The company was chronically understaffed in relation to its management’s ambitions. To make matters worse, much of the crew that had made Return to Zork, including that project’s mastermind William Volk, had just left. (On balance, this may not have been such a bad thing; that game is so unfair and obtuse as to come off almost as a satire of player-hostile adventure-game design.)

Luckily, Activision’s base in Los Angeles left it well situated, geographically speaking, to become a hotbed of interactive movie-making. Bobby Kotick hired Alan Gershenfeld, a former film critic and logistical enabler for Hollywood, to spearhead his efforts in that direction. Realizing that he still needed help with the interactive part of interactive movies, Gershenfeld in turn took the unusual step of reaching out to Bob Bates, co-founder of the Virginia-based rival studio and publisher Legend Entertainment, to see if he would be interested in designing Spycraft for Activision.

He was very interested. One reason for this was that Legend lived perpetually hand to mouth in a sea of bigger fish, and couldn’t afford to look askance at paying work of almost any description. But another, better one was that he was a child of the Washington Beltway with a father who had been employed by the National Security Agency. Bates had read his first spy novel before starting high school. Ever since, his literary consumption had included plenty of Frederick Forsyth, Robert Ludlum, and John Le Carré. It was thus with no small excitement that he agreed to spend 600 hours creating a script and design document for an espionage game, which Legend’s programmers and artists might also end up playing a role in bringing to fruition if all went well.

At this time, writers of espionage fiction and techno-thrillers were still trying to figure out what the recent ending of the Cold War meant for their trade. Authors like those Bates had grown up reading were trying out international terrorist gangs, mafiosi, and drug runners as replacements for that handy all-purpose baddie the Soviet Union. Activision faced the same problem with Spycraft. One alternative — the most logical one in a way, given the time spans of its two star advisors’ intelligence careers — was to look to the past, to make the game a work of historical fiction. But the reality was that there was little appetite for re-fighting the Cold War in the popular culture of the mid-1990s; that would have to wait until a little later, until the passage of time had given those bygone days of backyard fallout shelters and duck-and-cover drills a glow of nostalgia to match that of radioactivity. In the meanwhile, Activision wanted something fresh, something with the sort of ripped-from-the-headlines relevance that Ken Williams liked to talk about.

Bates settled on a story line involving Boris Yeltsin’s Russia, that unstable fledgling democracy whose inheritance from the Soviet Union encompassed serious organized-crime and corruption problems along with the ongoing potential to initiate thermonuclear Armageddon any time it chose to do so. He prepared a 25,000-word walkthrough of a plot whose broad strokes would survive into the finished game. With the names of all of the real-world leaders involved changed in order to keep the lawyers at bay, it hinged on a race for the Russian presidency involving a moderate, Yeltsin-like incumbent and two right-wing opposition candidates. When one of the latter is assassinated, it redounds greatly to the benefit of his counterpart; the two right-wingers had otherwise looked likely to split the vote between themselves and hand the presidency back to the incumbent. So, there are reasons for suspicion from the get-go, and the surviving opposition candidate’s established ties with the Russian Mafia only give more of them. That said, all of this would presumably be a matter for Russia’s internal security police alone — if only the assassination hadn’t been carried out with an experimental CIA weapon, a new type of sniper rifle that can fire a deadly accurate and brutally lethal package of flechettes over long distances. It seems that there is a mole in the agency, possibly one with an agenda to incriminate the United States in the killing.

On the one hand, one can see in this story line some of the concerns that William Colby and Oleg Kalugin were expressing in the press at the time. On the other, they were hardly alone in identifying Russia’s internal political instability as a threat to the whole world, what with that country’s enormous nuclear arsenal. Bates himself says that he quickly realized that Activision was content to use Colby and Kalugin essentially as a commercial license, much like it would a hit movie or book. In the more than six months that he worked on Spycraft, he met Colby in person only one time, at his palatial Georgetown residence. (“It was clear that he was wealthy. He was very old-school. Circumspect, as you might imagine.”) Kalugin he never met at all. Fortunately, Legend’s niche in recent years had become the adaptation of commercial properties into games, and thus Bates had become very familiar with playing in other people’s universes, as it were. The milieu inhabited by Colby and Kalugin, as described by the two men in their memoirs, became in an odd sort of way just another of these pocket universes.

In other ways, however, Bates proved less suited to the game Activision was imagining. He was as traditionalist as adventure-game designers came, having originally founded Legend with the explicit goal of making it the heir to Infocom’s storied legacy. Activision’s leadership kept complaining that his design was not exciting enough, not “explosive” enough, too “tame.” To spice it up, they brought in an outside consultant named James Adams, a British immigrant to the United States who had written seven nonfiction books on the worlds of espionage and covert warfare along with three fictional thrillers. In the early fall of 1994, Bates, Adams, and some of Activision’s executives had a conversation which is seared into Bates’s memory like nothing else involving Spycraft.

They were saying it wasn’t intense or exciting enough. We were just kicking around ideas, and as a joke I said, “Well, we could always do a torture scene.”

And they said, “Yes! Yes!”

And I said, “No! No! I’m kidding. We’re not going to do that.”

And they said, “Yes, we really want to do that.”

And I said, “No. I am not putting the player in a position where they have to commit an act of torture. I just won’t do that.” At that point, the most violent thing I’d ever put into a game was having a boar charge onto a spear in Arthur.

Shortly after this discussion, Bates accepted Activision’s polite thanks for his contributions along with his paycheck for 600 hours of his time, and bowed out to devote himself entirely to Legend’s own games once again. Neither he nor his company had any involvement with Spycraft after that. His name doesn’t even appear in the finished game’s credits.

James Adams now took over full responsibility for the convoluted script, wrestling it into shape for production to begin in earnest by the beginning of 1995. The final product was released on Leap Day, 1996. It isn’t the game Bates would have made, but neither is it the uniformly thoughtless, exploitative one he might have feared it would become when he walked away. What appears for long stretches to be a rah-rah depiction of the CIA — exactly what you might expect from a game made in partnership with one of the agency’s former directors — betrays from time to time an understanding of the moral bankruptcy of the spy business that is more John Le Carré than Ian Fleming. In the end, it sends you away with a distinctly queasy feeling about the things you’ve done and the logic you’ve used to justify them. All due credit goes to James Adams for delivering a game that’s more subtle than the one Activision — and probably Colby and Kalugin as well — thought they were getting.

But let’s table that topic for the moment, while I first go over the ways in which Spycraft also succeeds in being an unusually fun interactive procedural, the digital equivalent of a page-turning airport read.

Being a product of its era, Spycraft relies heavily on canned video clips of real actors. It’s distinguished, however, by the unusual quality of same, thanks to what must have been a substantial budget and to the presence of movie-making veterans like Alan Gershenfeld on Activision’s payroll. It was Gershenfeld who hired Ken Berris, an experienced director of music videos and commercials, to run the video shoots; he may not have been Steven Spielberg, but he was a heck of a lot more qualified than most people who fancied themselves interactive-movie auteurs. Most of those other games were shot like the movies of the 1930s, with the actors speaking their lines on a static sound stage before a fixed camera. Berris, by contrast, has seen Citizen Kane; he mostly shoots on location rather than in front of green screens that are waiting to be filled in with computer graphics later, and his environments are alive, with a camera that moves through them. Spycraft‘s bravura opening sequence begins with a single long take shown from your point of view as you sign in at CIA headquarters and walk deeper into the building. I will go so far as to say that this painstakingly choreographed and shot high-wire act, involving several dozen extras moving through a space along with the camera and hitting their marks just so, might be the most technically impressive live-action video sequence I’ve ever seen in a game. It wouldn’t appear at all out of place in a prestige television show or a feature film. Suffice to say that it’s light years beyond the hammy amateurism of something like The 7th Guest, a sign of how far the industry had come in only a few years, just before the collapse of the adventure market put an end to the era of big-budget live-action interactive movies for better or for worse.


There are no stars among the journeyman cast of supporting players, but there are at least a few faces and voices that might ring a bell somewhere at the back of your memory, thanks to their regular appearances in commercials, television shows, and films. Although some of the actors are better than others, by the usual B-movie standards of the 1990s games industry the performances as a whole are first rate. Both William Colby and Oleg Kalugin also appear in the game, playing themselves. Colby becomes an advisor of sorts to you, popping up from time to time to offer insights on your investigations; Kalugin has only one short and rather pointless cameo, dropping into the office for a brief aside when you’re meeting with another agent of Russia’s state-security apparatus. Both men acquit themselves unexpectedly well in their roles, undemanding though they may be. I can only conclude that all those years of pretending to be other people while engaged in the espionage trade must have been good training for acting in front of a camera.

You play a rookie CIA agent who is identified only as “Thorn.” You never actually appear onscreen; everything is shown from your first-person perspective. Thus you can imagine yourself to be of any gender, race, or appearance that you like. Spycraft still shows traces of the fairly conventional adventure-game structure it would doubtless have had if Bob Bates had continued as its lead designer: you have an inventory that you need to dig into from time to time, and will occasionally find yourself searching rooms and the like, using an interface not out of keeping with that found in Legend’s own contemporaneous graphic adventures, albeit built from still photographs rather than hand-drawn pixel art.

A lock pick should do the trick here…

But those parts of the game take up a relatively small part of your time. Mostly, Thorn lives in digital rather than meat space, reading and responding to a steady stream of emails, poking around in countless public and private databases, and using a variety of computerized tools that have come along to transform the nature of spying since the Cold War heyday of Colby and Kalugin. These tools — read, “mini-games” — take the place of the typical adventure game’s set-piece puzzles. In the course of playing Spycraft, you’ll have to ferret out license-plate numbers and the like from grainy satellite images; trace the locations of gunmen by analyzing bullet trajectories (this requires the use of the aptly named “Kennedy Assassination Tool”); identify faces captured by surveillance cameras; listen to phone taps; decode secret messages hidden in Usenet post headers; doctor photographs; trace suspects’ travels using airline-reservation systems and Department of Treasury banknote databases; even run a live exfiltration operation over a digital link-up.

The tactical exfiltration mini-game is the most ambitious of them all, reminding me of a similar one in Sid Meier’s Covert Action, another espionage game whose design approach is otherwise the exact opposite of Spycraft‘s. It’s good enough that I kind of wish it was used more than once.

These mini-games serve their purpose well. If most of them are too simplistic to be very compelling in the long term, well, they don’t need to be; most of them only turn up once. Their purpose is to trip you up just long enough to give you a thrill of triumph when you figure them out and are rocketed onward to the next plot twist. Spycraft is meant to be an impressionistic thrill ride, what Rick Banks of Artech Digital Productions liked to call an “aesthetic simulation” back in the 1980s. If you find yourself complaining that you’re almost entirely on rails, you’re playing the wrong game; the whole point of Spycraft is the subjective experience of living out a spy movie, not presenting you with “interesting decisions” of the sort favored by more purist game designers like Sid Meier.

In Spycraft, you roam a simulated version of cyberspace using a Web-browser interface, complete with “Home,” “Back,” and “Forward” buttons — a rather remarkable inclusion, considering how new the very notion of browsing the Web still was when this game was released in February of 1996. The game even included a real online component: some of the sites you could access through the game received live updates if your computer was connected to the real Internet. Thankfully, nothing critical to completing the game was communicated in this way, for these sites are all, needless to say, long gone today.

As is par for the course with spy stories, the plot just keeps getting more and more tangled, perhaps too much so for its own good. Just in case the murder of a Russian presidential candidate with a weapon stolen from the CIA isn’t enough for you, other threads eventually emerge, involving a gang of terrorists who are attempting to secure a live nuclear bomb and a plan to assassinate the president of the United States when he comes to Russia to sign a nuclear-arms-control agreement. You’re introduced to at least 50 different names, many of them with multiple aliases — again, this is a spy story — in the handful of hours it will take you to play the game. The fact that you spend most of your time at such a remove from them — shuffling through their personnel files and listening to them over phone taps rather than meeting them face to face — only makes it that much harder to keep them all straight, much less feel any real emotional investment in them. There are agents, double agents, triple agents, and, I’m tempted to say, quadruple agents around every corner.

I must confess that I really have no idea how well it all hangs together in the end. Just thinking about it makes my head hurt. I suppose it doesn’t really matter all that much; as I said, there’s only one path through the game, with minimal deviations allowed. Should you ever feel stuck, forward progress is just a matter of rummaging around until you find that email you haven’t read yet, that phone number you haven’t yet dialed, or that mini-game you haven’t yet completed successfully. Spycraft never demands that you understand its skein of conspiracies and conspirators, only that you jump through the series of hoops it sets before you in order to help your alter ego Thorn understand it. And that’s enough to deliver the impressionistic thrill ride it wants to give you.

The plot is as improbable as it is gnarly, making plenty of concessions to the need to entertain; it strains credibility to say the least that a rookie agent would be assigned to lead three separate critical investigations at the same time. And yet the game does demonstrate that it knows a thing or two about the state of the world. Indeed, it can come across as almost eerily prescient today, and not only for its recognition that a hollowed-out Russia with an aggressively revanchist leader could become every bit as great a threat to the democratic West as the Soviet Union once was. It also recognizes what an incredible tool for mass surveillance and oppression the Internet and other forms of networked digital technology were already becoming in 1996, seventeen years before the stunning revelations by Edward Snowden about the activities of the United States’s own National Security Agency. And then there is the torture so unwittingly proposed by Bob Bates, which did indeed make it into the game, some seven years before the first rumors began to emerge that the real CIA was engaging in what it called “enhanced interrogation techniques” in the name of winning the War on Terror.

Let’s take a moment now to look more closely at how Spycraft deals with this fraught subject in particular. Doing so should begin to show how this game is more morally conflicted than its gung-ho surface presentation might lead you to expect.

Let me first make one thing very clear: you don’t have to engage in torture to win Spycraft. This is one of the few places where you do have a measure of agency in choosing your path. The possibility of employing torture as a means to your ends is introduced about a third of the way into the game, after your colleagues have captured one Ying Chungwang, a former operative for North Korea, now a mercenary on the open market who has killed several CIA agents at the behest of various employers. She’s the Bonnie to another rogue operative’s Clyde. Your superiors suggest that you might be able to turn her by convincing her that her lover has also been captured and has betrayed her; this you can do by making a fake photograph of him looking relaxed and cooperative in custody. But there may also be another way to turn her, a special gadget hidden in the basement of the American embassy in Moscow, involving straps, electrodes, and high-voltage wiring. Most of your superiors strongly advise against using it: “There’s something called the Geneva Convention, Thorn, and we’d like to abide by it. Simply put, what you’re considering is illegal. Let’s not get dirty on this one.” Still, one does have to wonder why they keep it around if they’re so opposed to it…

Coincidentally or not, the photo-doctoring mini-game is easily the most frustrating of them all, an exercise in trial and error that’s made all the worse by the fact that you aren’t quite sure what you’re trying to create in the first place. You might therefore feel an extra temptation to just say screw it and head on down to the torture chamber. If you do, another, more chilling sort of mini-game ensues, in which you must pump enough electric current through your victim to get her to talk, without turning the dial so high that you kill her. “It burns!” she screams as you twist the knob. If you torture like Goldilocks — not too little, not too much — she breaks down eventually and tells you everything you want to know. And that’s that. Nobody ever mentions what happened in that basement again.

What are we to make of this? We might wish that the game would deliver Thorn some sort of comeuppance for this horrid deed. Maybe Ying could give you bad intelligence just to stop the pain, or you could get automatically hauled away to prison as soon as you leave the basement, as does happen if you kill her by using too much juice. But if there’s one thing we can learn from the lives of Colby and Kalugin, it’s that such an easy, cause-and-effect moral universe isn’t the one inhabited by spies. Yes, torture does often yield bad intelligence; in the 1970s, Colby claimed this was a reason the CIA was not in the habit of using it, a utilitarian argument which has been repeated again and again in the decades since to skeptics who aren’t convinced that the agency’s code of ethics alone would be enough to cause it to resist the temptation. Yet torture is not unique in being fallible; other interrogation techniques have weaknesses of their own, and can yield equally bad intelligence. The decision to torture or not to torture shouldn’t be based on its efficacy or lack thereof. Doing so just leads us back to the end-justifies-the-means utilitarianism that permitted the CIA and the KGB to commit so many outrages, with the full complicity of upstanding patriots like Colby and Kalugin who were fully convinced that everything they did was for the greater good. In the end, the decision not to torture must be a matter of moral principle if we are ever to trust the people making it.

Then again, if you had hold of an uncooperative member of a terrorist cell that was about to detonate an atomic bomb in a major population center, what would you do? This is where the slippery slope begins. The torture scene in Spycraft is deeply disturbing, but I don’t think that James Adams put it there strictly for the sake of sensationalism. Ditto the lack of consequences that follow. In the real world, virtue must often be its own reward, and the wages of sin are often a successful career. I think I’m glad that Spycraft recognizes this and fails to engage in any tit-for-tat vision of temporal justice — disturbed, yes, but oddly proud of the game at the same time. I’m not sure that I would have had the guts to put torture in there myself, but I’m convinced by some of the game’s other undercurrents that it was put there for purposes other than shock value. (Forgive the truly dreadful pun…)

Let’s turn the clock back to the very beginning of the game for an example. The first thing you see when you click the “New Game” button is the CIA’s official Boy Scout-esque values statement: “We conduct ourselves according to the highest standards of integrity, morality, and honor, and to the spirit and letter of our law and constitution.” Meanwhile a gruffer, more cynical voice is telling you how it really is: “Some things the president shouldn’t know. For a politician, ignorance can be the key to survival, so the facts might be… flexible. The best thing you can do is to treat your people right… and watch every move they make.” It’s a brilliant juxtaposition, culminating in the irony that is the agency’s hilariously overwrought Biblical motto: “And ye shall know the truth, and the truth shall make you free.” And then we’re walking into CIA headquarters, an antiseptic place filled with well-scrubbed, earnest-looking people, and that note of moral ambiguity is forgotten for the nonce as we “build the team” for a new “op.”


But as you play on, the curtain keeps wafting aside from time to time to reveal another glimpse of an underlying truth that you — or Thorn, at least — may not have signed on for. One who has seen this truth and not been set free is a spy known as Birdsong, a mole in the Russian defense establishment who first started leaking secrets to the CIA because he was alarmed by some of his more reactionary colleagues and genuinely thought it was the right thing to do. He gets chewed up and spit out by both sides. “I can tell the truth from lies no more,” he says in existential despair. “Everything is blurry. This has been hell. Everyone has betrayed me and I have betrayed everyone.” Many an initially well-meaning spy in the real world has wound up saying the same.

And then — and most of all — there’s the shocking, unsatisfying, but rather amazingly brave ending of the game. By this point, the plot has gone through more twists and turns than a Klein bottle, and the CIA has decided it would prefer for the surviving Russian opposition candidate to win the election after all, because only he now looks likely to sign the arms-control treaty that the American president whom the CIA serves so desperately desires. Unfortunately, one Yuri, a dedicated and incorruptible Russian FSB agent who has been helping you throughout your investigations, is still determined to bring the candidate down for his entanglements with the Russian Mafia. In the very last interactive scene of the game, you can choose to let Yuri take the candidate into custody and uphold the rule of law in a country not much known for it, which will also result in the arms-control agreement failing to go through and you getting drummed out of the CIA. Or you can shoot your friend Yuri in cold blood, allowing the candidate to become the new president of Russia and escape any sort of reckoning for his crimes — but also getting the arms-control agreement passed, and getting yourself a commendation.

As adventure-game endings go, it’s the biggest slap in the face to the player since Infocom’s Infidel, upending her moral universe at a stroke. It becomes obvious now, if we still doubted it, that James Adams appreciates very well the perils of trying to achieve worthy goals by unworthy means. Likewise, he appreciates the dangers that are presented to a free society by a secretive institution like the CIA — an arrogant institution, which too often throughout its history has been convinced that it is above the moral reckoning of tedious ground dwellers. Perhaps he even sees how a man like William Colby could become a reflection of the agency he served, could be morally and spiritually warped by it until it had cost him his family and his faith. “Uniquely in the American bureaucracy,” wrote Colby in his memoir, “the CIA understood the necessity to combine political, psychological, and paramilitary tools to carry out a strategic concept of pressure on an enemy or to strengthen an incumbent.” When you begin to believe that only you and “your” people are “uniquely” capable of understanding anything, you’ve started down a dangerous road indeed, one that before long will allow you to do almost anything in the name of some ineffable greater good, using euphemisms like “pressure” in place of “assassinate,” “strengthen an incumbent” in place of “interfere in a sovereign foreign country’s elections” — or, for that matter, “enhanced interrogation techniques” in place of “torture.”

Spycraft is a fascinating, self-contradictory piece of work, slick but subversive, escapist but politically aware, simultaneously carried away by the fantasy of being a high-tech spy with gadgets and secrets to burn and painfully aware of the yawning ethical abyss that lies at the end of that path. Like the trade it depicts, the game sucks you in, then it repulses you. Nevertheless, you should by all means play it. And as you do so, be on the lookout for the other points of friction where it seems to be at odds with its own box copy.

Spycraft wasn’t a commercial success. It arrived too late for that, at the beginning of the year that rather broke the back of interactive movies and adventure games in general. Thus the Spycraft II that is boldly promised during the end credits never appeared. Luckily, Activision was in a position to absorb the failure of their conflicted spy game. For the company was already changing with the times, riding high on the success of Mechwarrior 2, a 3D action game in which you drive a giant robot into combat. “How about a big mech with an order to fry?” ran its tagline; this was the very definition of pure escapism. Mainstream gaming, it turned out, was not destined to be such a ripped-from-the-headlines affair after all.



I do wonder sometimes whether Colby and Kalugin ever knew what a bleak note their one and only game ended on. Somehow I suspect not. It was, after all, just another business deal to them, another way of cashing in on the careers they had put behind them. Their respective memoirs tell us that both were very, very smart men, but neither comes across as overly introspective. I’m not sure they would even recognize what a telling commentary Spycraft‘s moral bleakness is on their own lives.

It was just two months after the game’s release that William Colby disappeared from his vacation home. When his body turned up on May 6, 1996, those few people who had both bought the game and been following the manhunt were confronted with an eyebrow-raising coincidence. For it just so happens that the CIA’s flechette gun isn’t the only experimental weapon you encounter in the course of the game. Later on, an even more devious one turns up, a sort of death ray that can kill its victims without leaving a mark on them — that causes them to die from what appears to be a massive heart attack. The coroner who examined Colby’s body insisted that he must have had a “cardiovascular incident,” despite having no previous history of heart disease. Hmm…

The case of Colby’s demise has never been officially reopened, but one more theory has been added to those of death by misadventure and death by murder since 1996. His son Carl Colby, who made a documentary film about his father in 2011, believes that he took his own life purposefully. “I think he’d had enough of this life,” he reveals at the end of his film. “He called me two weeks before he died, asking for my absolution for his not doing enough for my sister Catherine when she was so ill. When his body was found, he was carrying a picture of my sister.” In a strange way, it does seem consistent with this analytical, distant man, for whom brutal necessities were a stock in trade, to calmly eat his dinner, get into his canoe, paddle out from shore, and drown himself.

Oleg Kalugin, on the other hand, lived on. Russia’s new President Vladimir Putin, a former KGB agent himself, opened a legal case against Kalugin shortly after he took office, charging him with disclosing in his 1994 memoir “sources and methods” that he had sworn an oath to keep secret. Kalugin was already living in the United States at that time, and has not dared to return to his homeland since. Since 2002, when a Russian court pronounced him guilty as charged, he has lived under the shadow of a lengthy prison sentence, or worse, should the Russian secret police ever succeed in taking him into custody. In light of the fate that has befallen so many other prominent critics of Russia’s current regime, one has to assume that he continues to watch his back carefully even today, at age 88. You can attempt to leave the great game, but the great game never leaves you.

(Sources: the book Game Plan by Alan Gershenfeld, Mark Loparco, and Cecilia Barajas; the documentary film The Man Nobody Knew: In Search of My Father, CIA Spymaster William Colby; Sierra On-Line’s newsletter InterAction of Summer 1993; Questbusters of February 1994; Electronic Entertainment of December 1995; Mac Addict of September 1996; Next Generation of February 1996; Computer Gaming World of July 1996; New York Times of January 6 1994 and June 27 2002. And thanks as always to Bob Bates for taking the time to talk to me about his long career in games.

Spycraft: The Great Game is available as a digital purchase at GOG.com.)

 
 


Spycraft: The Great Game, Part 1 (or, Parallel Spies)

Police recover William Colby’s body on the coast of Maryland, May 6, 1996.

The last people known to have seen William Colby alive are a cottage caretaker and his sister. They bumped into the former head of the CIA early on the evening of April 27, 1996, as he was watering the willow trees around his vacation home on Neale Sound in Maryland, about 60 miles south of Washington, D.C. The trio chatted together for a few minutes about the fine weather and about the repairs Colby had spent the day doing to his sailboat, which was moored in the marina on Cobb Island, just across the sound. Then the caretaker and his sister went on their way. Everything seemed perfectly normal to them.

The next morning, a local handyman, his wife, and their two children out on the water in their motorboat spotted a bright green canoe washed up against a spit of land that extended from the Maryland shore. The canoe appeared to be abandoned. Moving in to investigate, they found that it was full of sand. This was odd, thought the handyman; he had sailed past this same place the day before without seeing the canoe, and yet so much sand could hardly have collected in it naturally over the course of a single night. It was almost as if someone had deliberately tried to sink the canoe. Oh, well; finders keepers. It really was a nice little boat. He and his family spent several hours shoveling out the sand, then towed the canoe away with them.

In the meantime, Colby’s next-door neighbor was surprised not to see him out and about. The farthest thing from a layabout, the wiry 76-year-old was usually up early, puttering about with something or other around his cottage or out on the sound. Yet now he was nowhere to be seen outside and didn’t answer his door, even though his car was still in the driveway and the neighbor thought she could hear a radio playing inside the little house. Peeking around back, she saw that Colby’s green canoe was gone. At first, she thought the mystery was solved. But as the day wore on and he failed to return, she grew more and more concerned. At 7:00 that evening, she called the police.

When they arrived, the police found that both doors to the cottage were unlocked. The radio was indeed turned on, as was Colby’s computer. Even weirder, a half-eaten meal lay in the sink, surrounded by unwashed dishes and half a glass of white wine. It wasn’t at all like the man not to clean up after himself. And his wallet and keys were also lying there on the table. Why on earth would he go out paddling without them?

Inquiries among the locals soon turned up Colby’s canoe and the story of its discovery. Clearly something was very wrong here. The police ordered a search. Two helicopters, twelve divers, and 100 volunteers in boats pulling drag-lines behind them scoured the area, while CIA agents also arrived to assist the investigation into the disappearance of one of their own; their presence was nothing to be alarmed at, they assured everyone, just standard procedure. Despite the extent of the search effort, it wasn’t until the morning of May 6, nine days after he was last seen, that William Colby’s body was found washed up on the shore, just 130 feet from where the handyman had found his canoe, but on the other side of the same spit of land. It seemed that Colby must have gone canoeing on the sound, then fallen overboard and drowned. He was 76 years old, after all.

But the handyman who had found the canoe, who knew these waters and their currents as well as anyone, didn’t buy this. He was sure that the body could not have gotten so separated from the canoe as to wind up on the opposite side of the spit. And why had it taken it so long to wash up on shore? Someone must have gone out and planted it there later on, he thought. Knowing Colby’s background, and having seen enough spy movies to know what happened to inconvenient witnesses in cases like this one, he and his family left town and went into hiding.

The coroner noticed other oddities. Normally a body that has been in the water a week or more is an ugly, bloated sight. But Colby’s was bizarrely well-preserved, almost as if it had barely spent any time in the water at all. And how could the divers and boaters have missed it for so long, so close to shore as it was?

Nonetheless, the coroner concluded that Colby had probably suffered a “cardiovascular incident” while out in his canoe, fallen into the water, and drowned. This despite the fact that he had had no known heart problems, and was in general in a physical shape that would have made him the envy of many a man 30 years younger than he was. Nor could the coroner explain why he had chosen to go canoeing long after dark, something he was most definitely not wont to do. (It had been dusk already when the caretaker and his sister said goodbye to him, and he had presumably sat down to his dinner after that.) Why had he gone out in such a rush, leaving his dinner half-eaten and his wine half-drunk, leaving his radio and computer still turned on, leaving his keys and wallet lying there on the table? It just didn’t add up in the eyes of the locals and those who had known Colby best.

But that was that. Case closed. The people who lived around the sound couldn’t help but think about the CIA agents lurking around the police station and the morgue, and wonder at everyone’s sudden eagerness to put a bow on the case and be done with it…


Unusually for a septuagenarian retired agent of the security state, William Colby had also been a game developer, after a fashion at least. In fact, at the time of his death a major game from a major publisher that bore his name very prominently right on the front of the box had just reached store shelves. This article and the next will partly be the story of the making of that game. But they will also be the story of William Colby himself, and of another character who was surprisingly similar to him in many ways despite being his sworn enemy for 55 years — an enemy turned friend who consulted along with him on the game and appeared onscreen in it alongside him. Then, too, they will be an inquiry into some of the important questions the game raises but cannot possibly begin to answer.


Sierra’s Police Quest: Open Season, created with the help of controversial former Los Angeles police chief Daryl Gates, was one of the few finished products to emerge from a brief-lived vision of games as up-to-the-minute, ripped-from-the-headlines affairs. Spycraft: The Great Game was another.

Activision’s Spycraft: The Great Game is the product of a very specific era of computer gaming, when “multimedia” and “interactive movies” were among the buzzwords of the zeitgeist. Most of us who are interested in gaming history today are well aware of the set of technical and aesthetic approaches these terms imply: namely, games built from snippets of captured digitized footage of real actors, with interactivity woven as best the creators can manage between these dauntingly large chunks of static content.

There was a certain ideology that sometimes sprang up in connection with this inclusion of real people in games, a belief that it would allow games to become relevant to the broader culture in a way they never had before, tackling stories, ideas, and controversies that ordinary folks were talking about around their kitchen tables. At the margins, gaming could almost become another form of journalism. Ken Williams, the founder and president of Sierra On-Line, was the most prominent public advocate for this point of view, as exemplified by his decision to make a game with Daryl F. Gates, the chief of police for Los Angeles during the riots that convulsed that city in the spring of 1992. Williams, writing during the summer of 1993, just as the Gates game was being released:

I want to find the top cop, lawyer, airline pilot, fireman, race-car driver, politician, military hero, schoolteacher, white-water rafter, mountain climber, etc., and have them work with us on a simulation of their world. Chief Gates gives us the cop game. We are working with Emerson Fittipaldi to simulate racing, and expect to announce soon that Vincent Bugliosi, the lawyer who locked up Charles Manson, will be working with us to do a courtroom simulation. My goal is that products in the Reality Role-Playing series will be viewed as serious simulations of real-world events, not games. If we do our jobs right, this will be the closest most of us will ever get to seeing the world through these people’s eyes.

It sounded good in theory, but would never get all that far in practice, for a whole host of reasons: a lack of intellectual bandwidth and sufficient diversity of background in the games industry to examine complex social questions in an appropriately multi-faceted way (the jingoistic Gates game is a prime case in point here); a lack of good ideas for turning such abstract themes into rewarding forms of interactivity, especially when forced to work with the canned video snippets that publishers like Sierra deemed an essential part of the overall vision; the expense of the games themselves, the expense of the computers needed to run them, and the technical challenges involved in getting them running, which in combination created a huge barrier to entry for newcomers from outside the traditional gamer demographics; and, last but not least, the fact that those existing gamers who did meet all the prerequisites were generally perfectly happy with more blatantly escapist entertainments, thank you very much. Tellingly, none of the game ideas Ken Williams mentions above ever got made. And I must admit that this failure does not strike me as any great loss for world culture.

That said, Williams, being the head of one of the two biggest American game publishers, had a lot of influence on the smaller ones when he prognosticated on the future of the industry. Among the latter group was Activision, a toppled giant which had been rescued from the dustbin of bankruptcy in 1991 by a young wheeler-dealer named Bobby Kotick. His version of the company got fully back onto its feet the same year that Williams wrote the words above, thanks largely to Return to Zork, a cutting-edge multimedia evocation of the Infocom text adventures of yore, released at the perfect time to capitalize on a generation of gamers’ nostalgia for those bygone days of text and parsers (whilst not actually asking them to read much or to type out their commands, of course).

With that success under their belts, Kotick and his cronies thought about what to do next. Adventure games were hot — Myst, the bestselling adventure of all time, was released at the end of 1993 — and Ken Williams’s ideas about famous-expert-driven “Reality Role-Playing” were in the air. What might they do with that? And whom could they get to help them do it?

They hit upon espionage, a theme that, in contrast to many of those outlined by Williams, seemed to promise a nice balance of ripped-from-the-headlines relevance with interesting gameplay potential. Then, when they went looking for the requisite famous experts, they hit the mother lode with William Colby, the head of the CIA from September of 1973 to January of 1976, and Oleg Kalugin, who had become the youngest general in the history of the First Chief Directorate of the Soviet Committee for State Security, better known as the KGB, in 1974.

I’ll return to Spycraft itself in due course. But right now, I’d like to examine the lives of these two men, which parallel one another in some perhaps enlightening ways. Rest assured that in doing so I’m only following the lead of Activision’s marketers; they certainly wanted the public to focus first and foremost on the involvement of Colby and Kalugin in their game.


William Colby (center), looking every inch the dashing war hero in Norway just after the end of World War II.

William Colby was born in St. Paul, Minnesota on January 4, 1920. He was the only child of Elbridge Colby, a former soldier and current university professor who would soon rejoin the army as an officer and spend the next 40 years in the service. His family was deeply Catholic — his father thanks to a spiritual awakening and conversion while a student at university, his mother thanks to long family tradition. The son too absorbed the ethos of a stern but loving God and the necessity of serving Him in ways both heavenly and worldly.

The little family bounced around from place to place, as military families generally do. They wound up in China for three years starting in 1929, where young Bill learned a smattering of Chinese and was exposed for the first time to the often compromised ethics of real-world politics, in this case in the form of the United States’s support for the brutal dictatorship of Chiang Kai-shek. Colby’s biographer Randall Bennett Woods pronounces his time in China “one of the formative influences of his life.” It was, one might say, a sort of preparation for the many ugly but necessary alliances — necessary as Colby would see them, anyway — of the Cold War.

At the age of seventeen, Colby applied to West Point, but was rejected because of poor eyesight. He settled instead for Princeton, where Albert Einstein was among the many prominent thinkers then in residence. Colby spent the summer of 1939 holidaying in France, returning home just after the fateful declarations of war in early September, never imagining that the idyllic environs in which he had bicycled and picnicked and practiced his French on the local girls would be occupied by the Nazis well before another year had passed. Back at Princeton, he made the subject of his senior thesis the ways in which France’s weakness had allowed the Nazi threat on its doorstep to grow unchecked. This too was a lesson that would dominate his worldview throughout the decades to come. After graduating, Colby received his officer’s commission in the United States Army, under the looming shadow of a world war that seemed bound to engulf his own country sooner or later.

When war did come on December 7, 1941, he was working as an artillery instructor at Fort Sill in Oklahoma. To his immense frustration, the Army thought he was doing such a good job in that role that it was inclined to leave him there. “I was afraid the war would be over before I got a chance to fight,” he writes in his memoir. He therefore leaped at the opportunity when he saw an advertisement on a bulletin board for volunteers to become parachutists with the 82nd Airborne. He tried to pass the entrance physical by memorizing the eye chart. The doctor wasn’t fooled, but let him in anyway: “I guess your eyesight is good enough for you to see the ground.”

Unfortunately, he broke his ankle in a training jump, and was forced to watch, crestfallen, as his unit shipped out to Europe without him. Then opportunity came calling again, in a chance to join the new Office of Strategic Services (OSS), the forerunner of the CIA. Just as the CIA would later on, the OSS had two primary missions: foreign intelligence gathering and active but covert interference. Colby was to be dropped behind enemy lines, whence he would radio back reports of enemy troop movements and organize resistance among the local population. It would be, needless to say, an astonishingly dangerous undertaking. But that was the way Colby wanted it.

William Colby finally left for Britain in December of 1943, aboard the British luxury liner Queen Elizabeth, now refitted to serve as a troop transport. It was in a London bookstore that he first encountered another formative influence, the book Seven Pillars of Wisdom by T.E. Lawrence — the legendary Lawrence of Arabia, who had convinced the peoples of the Middle East to rise up against their Turkish overlords during the last world war. Lawrence’s book was, Colby would later say, an invaluable example of “an outsider operat[ing] within the political framework of a foreign people.” It promptly joined the Catholic Bible as one of the two texts Colby carried with him everywhere he went.

As it happened, he had plenty of time for reading: the weeks and then months passed in Britain, and still there came no orders to go into action. There was some talk of using Colby and his fellow American commandos to sow chaos during the run-up to D-Day, but this role was given to British units in the end. Instead Colby watched from the sideline, seething, as the liberation of France began. Then, out of the blue, action orders came at last. On the night of August 14, 1944, Colby and two exiled French soldiers jumped out of a B-24 bomber flying over central France.

The drop was botched; the men landed fifteen miles away from the intended target, finding themselves smack dab in the middle of a French village instead of out in the woods. Luckily, there were no Germans about, and the villagers had no desire to betray them. There followed a hectic, doubtless nerve-wracking month, during which Colby and his friends made contact with the local resistance forces and sent back to the advancing Allied armies valuable information about German troop movements and dispositions. Once friendly armies reached their position, the commandos made their way back to the recently liberated Paris, thence to London. It had been a highly successful mission, with more than enough danger and derring-do to suffice for one lifetime in the eyes of most people. But for Colby it all felt a bit anticlimactic; he had never even discharged his weapon at the enemy. Knowing that his spoken German wasn’t good enough to carry out another such mission behind the rapidly advancing Western European front, Colby requested a transfer to China.

He got another offer instead. Being an accomplished skier, he was asked to lead 35 commandos into the subarctic region of occupied Norway, to interdict the German supply lines there. Naturally, he agreed.

The parachute drop that took place on the night of March 24, 1945, turned into another botched job. Only fifteen of the 35 commandos actually arrived; the other planes strayed far off course in the dark and foggy night, accidentally dropping their passengers over neutral Sweden, or giving up and not dropping them at all. But Colby was among the lucky (?) fifteen who made it to their intended destination. Living off the frigid land, he and his men set about dynamiting railroad tracks and tunnels. This time, he got to do plenty of shooting, as his actions frequently brought him face to face with the Wehrmacht.

On the morning of May 7, word came through on the radio that Adolf Hitler was dead and his government had capitulated; the war in Europe was over. Colby now set about accepting the surrender of the same German occupiers he had recently been harassing. While the operation he had led was perhaps of doubtful necessity in the big picture of a war that Germany was already well along the path to losing, no one could deny that he had demonstrated enormous bravery and capability. He was awarded the Silver Star.

Gung ho as ever, Colby proposed to his superiors upon returning to London that he lead a similar operation into Francisco Franco’s Spain, to precipitate the downfall of that last bastion of fascism in Europe. Having been refused this request, he returned to the United States, still seeming a bit disappointed that it had all ended so quickly. Here he asked for and was granted a discharge from the Army, asked for and was granted the hand in marriage of his university sweetheart Barbara Heinzen, and asked for and was granted a scholarship to law school. He wrote on his application that he hoped to become a lawyer in the cause of organized labor. (Far from the fire-breathing right-wing extremist some of his later critics would characterize him to be, Colby would vote Democrat throughout his life, maintaining a center-left orientation when it came to domestic politics at least.)


Oleg Kalugin at age seventeen, a true believer in Joseph Stalin and the Soviet Communist Party.

While the war hero William Colby was seemingly settling into a more staid time of life, another boy was growing up in the heart of the nation that Colby and most other Americans would soon come to regard as their latest great enemy. Born on September 6, 1934, in Leningrad (the once and future Saint Petersburg), Oleg Kalugin was, like Colby, an only child of a couple with an ethic of service, the son of a secret-police agent and a former factory worker, both of them unimpeachably loyal to communism; the boy’s grandmother caused much shouting and hand-wringing in the family when she spirited him away to have him baptized in a furtive Orthodox ceremony in a dark basement. That piece of deviancy notwithstanding, Little Oleg was raised to see Joseph Stalin as his god on earth, the one and only savior of his people.

On June 22, 1941, he was “hunting maybugs with a pretty girl,” as he writes, when he saw a formation of airplanes roar overhead and drop a load of bombs not far away. The war had come to his country, six months before it would reach that of William Colby. With the German armies nearing Leningrad, he and his mother fled to the Siberian city of Omsk while his father stayed behind to fight. They returned to a devastated hometown in the spring of 1944. Oleg’s father had survived the terrible siege, but the boy had lost all of his grandparents — including that gentle soul who had caused him to be baptized — along with four uncles to either starvation or enemy bullets.

Kalugin remained a true believer after the Great Patriotic War was over, joining the Young Communist League as soon as he was eligible at the age of fourteen. At seventeen, he decided to join the KGB; it “seemed like the logical place for a person with my academic abilities, language skills, and fervent desire to fight class enemies, capitalist parasites, and social injustice.” Surprisingly, his father, who had seen rather too much of what Soviet-style class struggle really meant over the last couple of decades, tried to dissuade him. But the boy’s mind was made up. He entered Leningrad’s Institute of Foreign Languages, a shallow front for the training of future foreign agents, in 1952.

When Stalin died in March of the following year, the young zealot wrote in his diary that “Stalin isn’t dead. He cannot die. His physical death is just a formality, one that needn’t deprive people of their faith in the future. The fact that Stalin is still alive will be proven by our country’s every new success, both domestically and internationally.” He was therefore shocked when Stalin’s successor, Nikita Khrushchev, delivered a speech that roundly condemned the country’s erstwhile savior as a megalomaniac and a mass-murderer who had cynically corrupted the ideology of Marx and Lenin to serve his own selfish ends. It was Kalugin’s initiation into the reality that the state he so earnestly served was less than incorruptible and infallible.

Nevertheless, he kept the faith, moving to Moscow for advanced training in 1956. In 1958, he was selected on the basis of his aptitude for English to go to the United States as a graduate student. “Just lay the foundation for future work,” his superiors told him. “Buy yourself good maps. Improve your English. Find out about their way of life. Communicate with people and make as many friends as possible.” Kalugin’s joyous reaction to this assignment reflects the ambivalence with which young Soviets like him viewed the United States. It was, they fervently believed, the epicenter of the imperialism, capitalism, racism, and classism they hated, and must ultimately be destroyed for that reason. Yet it was also the land of jazz and rock and roll, of fast cars and beautiful women, with a standard of living so different from anything they had ever known that it might as well have been Shangri-La. “I daydreamed constantly about America,” Kalugin admits. “The skyscrapers of New York and Chicago, the cowboys of the West…” He couldn’t believe he was being sent there, and on a sort of paid vacation at that, with few concrete instructions other than to experience as much of the American way of life as he could. Even his sadness about leaving behind the nice Russian girl he had recently married couldn’t overwhelm his giddy excitement.


William Colby in Rome circa 1955, with his son Carl and daughter Catherine.

As Oleg Kalugin prepared to leave for the United States, William Colby was about to return to that same country, where he hadn’t lived for the past seven years. He had become a lawyer as planned and joined the National Labor Relations Board to further the cause of organized labor, but his tenure there had proved brief. In 1950, he was convinced to join the new CIA, the counterweight to the KGB on the world stage. He loved his new “band of brothers,” filled as he found it to be with “adventuresome spirits who believed fervently that the communist threat had to be met aggressively, innovatively, and courageously.”

In April of 1951, he took his family with him on his first foreign assignment, under the cover identity of a mid-level diplomatic liaison in Stockholm, Sweden. His real purpose was to build and run an intelligence operation there. (All embassies were nests of espionage in those days, as they still are today.) “The perfect operator in such operations is the traditional gray man, so inconspicuous that he can never catch the waiter’s eye in a restaurant,” Colby wrote. He was — or could become — just such a man, belying his dashing commando past. Small wonder that he proved very adept at his job. The type of spying that William Colby did was, like all real-world espionage, more John le Carré than Ian Fleming, an incrementalist milieu inhabited by just such quiet gray men as him. Dead-letter drops, secret codes, envelopes stuffed with cash, and the subtle art of recruitment without actually using that word — the vast majority of his intelligence contacts would have blanched at the label of “spy,” having all manner of other ways of describing to themselves and others what they did — were now his daily stock in trade.

In the summer of 1953, Colby and his family left Stockholm for Rome. Still riven by discontent and poverty that the Marshall Plan had never quite been able to quell, with a large and popular communist party that promised the people that it alone could make things better, Italy was considered by both the United States and the Soviet Union to be the European country most in danger of changing sides in the Cold War through the ballot box, making this assignment an unusually crucial one. Once again, Colby performed magnificently. Through means fair and occasionally slightly foul, he propped up Italy’s Christian Democratic Party, the one most friendly to American interests. His wife and five young children would remember these years as their happiest time together, with the Colosseum visible outside their snug little apartment’s windows, with the trappings of their Catholic faith all around them. The sons became altar boys, learning to say Mass in flawless Latin, and Barbara amazed guests with her voluble Italian, which was even better than her husband’s.

She and her children would gladly have stayed in Rome forever, but after five years there her husband was growing restless. The communist threat in Italy had largely dissipated by now, thanks to an improving economy that made free markets seem more of a promise than a threat, and Colby was itching to continue the shadowy struggle elsewhere. In 1958, he was recalled to the States to begin preparing for a new, more exotic assignment: to the tortured Southeast Asian country of Vietnam, which had recently won its independence from France, only to become a battleground between the Western-friendly government of Ngo Dinh Diem and a communist insurgency led by Ho Chi Minh.


Oleg Kalugin (center) at Columbia University, 1958.

While Colby was hitting the books at CIA headquarters in Langley, Virginia, in preparation for his latest assignment, Kalugin was doing the same as a philology student on a Fulbright scholarship to New York City’s Columbia University. (Fully half of the eighteen exchange students who traveled with him were also spies-in-training.) A natural charmer, he had no trouble ingratiating himself with the native residents of the Big Apple as he had been ordered to do.

He went home when his one-year scholarship expired, but returned to New York City one year after that, to work as a journalist for Radio Moscow. Now, however, his superiors expected a bit more from him. Despite the wife and young daughter he had left behind, he seduced a string of women who he believed could become valuable informants — so much so that American counter-espionage agents, who were highly suspicious of him, labeled him a “womanizer” and chalked it up as his most obvious weakness, should they ever be in need of one to exploit. (For his part, Kalugin writes that “I always told my officers, male and female, ‘Don’t be afraid of sex.’ If they found themselves in a situation where making love with a foreigner could help our work, I advised them to hop into bed.”)

Kalugin’s unlikely career as Radio Moscow’s foreign correspondent in New York City lasted almost four years in all. He covered — with a pro-Soviet spin, naturally — the election of President John F. Kennedy, the trauma of the Bay of Pigs Invasion and the Cuban Missile Crisis, and the assassination of Kennedy by a man with Soviet ties. He was finally called home in early 1964, his superiors having decided he was now attracting too much scrutiny from the Americans. He found returning to the dingy streets of Moscow from the Technicolor excitement of New York City to be rather dispiriting. “Worshiping communism from afar was one thing. Living in it was another thing altogether,” he writes wryly, echoing sentiments shared by many an idealistic Western defector for the cause. Shortly after his return, the reform-minded Nikita Khrushchev was ousted in favor of Leonid Brezhnev, a man who looked as tired as the rest of the Soviet Union was beginning to feel. It was hard to remain committed to the communist cause in such an environment as this, but Kalugin continued to do his best.


William Colby, looking rather incongruous in his typical shoe salesman’s outfit in a Vietnamese jungle.

William Colby might have been feeling similar sentiments somewhere behind that chiseled granite façade of his. For he was up to his eyebrows in the quagmire that was Vietnam, the place where all of the world’s idealism seemed to go to die.

When he had arrived in the capital of Saigon in 1959, with his family in tow as usual, he had wanted to treat this job just as he had his previous foreign postings, to work quietly behind the scenes to support another basically friendly foreign government with a communist problem. But Southeast Asia was not Europe, as he learned to his regret — even if the Diem family were Catholic and talked among themselves in French. There were systems of hierarchy and patronage inside the leader’s palace that baffled Colby at every turn. Diem himself was aloof, isolated from the people he ruled, while Ho Chi Minh, who already controlled the northern half of the country completely and had designs on the rest of it, had enormous populist appeal. The type of espionage Colby had practiced in Sweden and Italy — all mimeographed documents and furtive meetings in the backs of anonymous cafés — would have been useless against such a guerilla insurgency even if it had been possible. Which it was not: the peasants fighting for and against the communists were mostly illiterate.

Colby’s thinking gradually evolved, to encompass the creation of a counter-insurgency force that could play the same game as the communists. His mission in the country became less an exercise in pure espionage and overt and covert influencing than one in paramilitary operations. He and his family left Vietnam for Langley in the summer of 1962, but the country was still to fill a huge portion of Colby’s time; he was leaving to become the head of all of the CIA’s Far Eastern operations, and there was no hotter spot in that hot spot of the world than Vietnam. Before departing, the entire Colby family had dinner with President Diem in his palace, whose continental cuisine, delicate furnishings, and manicured gardens could almost lead one to believe one was on the French Riviera rather than in a jungle in Southeast Asia. “We sat there with the president,” remembers Barbara. “There was really not much political talk. Yet there was a feeling that things were not going well in that country.”

Sixteen months later — in fact, just twenty days before President Kennedy was assassinated — Diem was murdered by the perpetrators of a military coup that had gone off with the tacit support of the Americans, who had grown tired of his ineffectual government and felt a change was needed. Colby was not involved in that decision, which came down directly from the Kennedy White House to its ambassador in the country. But, good soldier that he was, he accepted it after it had become a fait accompli. He even agreed to travel to Vietnam in the immediate aftermath, to meet with the Vietnamese generals who had perpetrated the coup and assure them that they had powerful friends in Washington. Did he realize in his Catholic heart of hearts that his nation had forever lost the moral high ground in Vietnam on the day of Diem’s murder? We cannot say.

The situation escalated quickly under the new President Lyndon Johnson, as more and more American troops were sent to fight a civil war on behalf of the South Vietnamese, a war which the latter didn’t seem overly inclined to fight for themselves. Colby hardly saw his family now, spending months at a stretch in the country. Lawrence of Arabia’s prescription for winning over a native population through ethical persuasion and cultural sensitivity was proving unexpectedly difficult to carry out in Vietnam, most of whose people seemed just to want the Americans to go away. It appeared that a stronger prescription was needed.

Determined to put down the Viet Cong — communist partisans in the south of the country who swarmed over the countryside, killing American soldiers and poisoning their relations with the locals — Colby introduced a “Phoenix Program” to eliminate them. It became without a doubt the biggest of all the moral stains on his career. The program’s rules of engagement were not pretty to begin with, allowing for the extra-judicial execution of anyone believed to be in the Viet Cong leadership in any case where arresting him was too “hard.” But it got entirely out of control in practice, as described by James S. Olson and Randy W. Roberts in their history of the war: “The South Vietnamese implemented the program aggressively, but it was soon laced with corruption and political infighting. Some South Vietnamese politicians identified political enemies as Viet Cong and sent Phoenix hit men after them. The pressure to identify Viet Cong led to a quota system that incorrectly labeled many innocent people the enemy.” Despite these self-evident problems, the Americans kept the program going for years, saying that its benefits were worth the collateral damage. Olson and Roberts estimate that at least 20,000 people lost their lives as a direct result of Colby’s Phoenix Program. A large proportion of them — possibly even a majority — were not really communist sympathizers at all.

In July of 1971, Colby was hauled before the House Committee on Government Operations by two prominent Phoenix critics, Ogden Reid and Pete McCloskey (both Republicans). It is difficult to absolve him of guilt for the program’s worst abuses on the basis of his circuitous, lawyerly answers to their straightforward questions.

Reid: Can you state categorically that Phoenix has never perpetrated the premeditated killing of a civilian in a noncombat situation?

Colby: No, I could not say that, but I do not think it happens often. Individual members of it, subordinate people in it, may have done it. But as a program, it is not designed to do that.

McCloskey: Did Phoenix personnel resort to torture?

Colby: There were incidents, and they were treated as an unjustifiable offense. If you want to get bad intelligence, you use bad interrogation methods. If you want to get good intelligence, you had better use good interrogation methods.


Oleg Kalugin (right) receives from Bulgarian security minister Dimitri Stoyanov the Order of the Red Star, thanks largely to his handling of John Walker. The bespectacled man standing between and behind the two is Yuri Andropov, then the head of the KGB, who would later become the fifth supreme leader of the Soviet Union.

During the second half of the 1960s, Oleg Kalugin spent far more time in the United States than did William Colby. He returned to the nation that had begun to feel like more of a home than his own in July of 1965. This time, however, he went to Washington, D.C., instead of New York City. His new cover was that of a press officer for the Soviet Foreign Ministry; his real job was that of a deputy director in the KGB’s Washington operation. He was to be a spy in the enemy’s city of secrets. “By all means, don’t treat it as a desk job,” he was told.

Kalugin took the advice to heart. He had long since developed a nose for those who could be persuaded to share their country’s deepest secrets with him, long since recognized that the willingness to do so usually stemmed from weakness rather than strength. Like a lion on the hunt, he had learned to spot the weakest prey — the nursers of grudges and harborers of regrets; the sexually, socially, or professionally frustrated — and isolate them from the pack of their peers for one-on-one persuasion. At one point, he came upon a secret CIA document that purported to explain the psychology of those who chose to spy for that yin to his own service’s yang. He found it to be so “uncannily accurate” a description of the people he himself recruited that he squirreled it away in his private files, and quoted from it in his memoir decades later.

Acts of betrayal, whether in the form of espionage or defection, are almost in every case committed by morally or psychologically unsteady people. Normal, psychologically stable people — connected with their country by close ethnic, national, cultural, social, and family ties — cannot take such a step. This simple principle is confirmed by our experience of Soviet defectors. All of them were single. In every case, they had a serious vice or weakness: alcoholism, deep depression, psychopathy of various types. These factors were in most cases decisive in making traitors out of them. It would only be a slight exaggeration to say that no [CIA] operative can consider himself an expert in Soviet affairs if he hasn’t had the horrible experience of holding a Soviet friend’s head over the sink as he poured out the contents of his stomach after a five-day drinking bout.

What follows from that is that our efforts must mostly be directed against weak, unsteady members of Soviet communities. Among normal people, we should pay special attention to the middle-aged. People that age are starting their descent from their psychological peak. They are no longer children, and they suddenly feel the acute realization that their life is passing, that their ambitions and youthful dreams have not come true in full or even in part. At this age comes the breaking point of a man’s career, when he faces the gloomy prospect of pending retirement and old age. The “stormy forties” are of great interest to an [intelligence] operative.

It’s great to be good, but it’s even better to be lucky. John Walker, the spy who made Kalugin’s career, shows the truth in this dictum. He was that rarest of all agents in the espionage trade: a walk-in. A Navy officer based in Norfolk, Virginia, he drove into Washington one day in late 1967 with a folder full of top-secret code ciphers on the seat of his car next to him, looked up the address of the Soviet embassy in the directory attached to a pay phone, strode through the front door, plunked his folder down on the front desk, and said matter-of-factly, “I want to see the security officer, or someone connected with intelligence. I’m a naval officer. I’d like to make some money, and I’ll give you some genuine stuff in return.” Walker was hastily handed a down payment, ushered out of the embassy, and told never under any circumstances to darken its doors again. He would be contacted in other ways if his information checked out.

Kalugin was fortunate enough to be ordered to vet the man. The picture he filled in was sordid, but it passed muster. Thirty years old when his career as a spy began, Walker had originally joined the Navy to escape being jailed for four burglaries he committed as a teenager. A born reprobate, he had once tried to convince his wife to become a prostitute in order to pay off the gambling debts he had racked up. Yet he could also be garrulous and charming, and had managed to thoroughly conceal his real self from his Navy superiors. A fitness report written in 1972, after he had already been selling his country’s secrets for almost five years, calls him “intensely loyal, taking great pride in himself and the naval service, fiercely supporting its principles and traditions. He possesses a fine sense of personal honor and integrity, coupled with a great sense of humor.” Although he was only a warrant officer in rank, he sat on the communications desk at Norfolk, handling radio traffic with submarines deployed all over the world. It was hard to imagine a more perfect posting for a spy. And this spy required no counseling, needed no one to pretend to be his friend, to talk him down from crises of conscience, or to justify himself to himself. Suffering from no delusions as to who and what he was, all he required was cold, hard cash. A loathsome human being, he was a spy handler’s dream.

Kalugin was Walker’s primary handler for two years, during which he raked in a wealth of almost unbelievably valuable information without ever meeting the man face to face. Walker was the sort of asset who turns up “once in a lifetime,” in the words of Kalugin himself. He became the most important of all the spies on the Kremlin’s payroll, even recruiting several of his family members and colleagues to join his ring. “K Mart has better security than the Navy,” he laughed. He would continue his work long after Kalugin’s time in Washington was through. Throughout the 1970s and into the 1980s, Navy personnel wondered at how the Soviets always seemed to know where their ships and submarines were and where their latest exercises were planned to take place. Not until 1985 was Walker finally arrested. In a bit of poetic justice, the person who turned him in to the FBI was his wife, whom he had been physically and sexually abusing for almost 30 years.

The luster which this monster shed on Kalugin won him the prestigious Order of the Red Star and then, in 1974, promotion to the rank of KGB general while he was still just shy of his 40th birthday, making him the youngest such general in the post-World War II history of the service. By that time, he was back in Moscow again, having been recalled in January of 1970, once again because it was becoming common knowledge among the Americans that his primary work in their country was that of a spy. He was too hot now to be given any more long-term foreign postings. Instead he worked out of KGB headquarters in Moscow, dealing with strategic questions and occasionally jetting off to far-flung trouble spots to be the service’s eyes and ears on the ground. “I can honestly say that I loved my work,” he writes in his memoir. “My job was always challenging, placing me at the heart of the Cold War competition between the Soviet Union and the United States.” As ideology faded, the struggle against imperialism had become more of an intellectual fascination — an intriguing game of chess — than a grand moral crusade.


William Colby testifies before Congress, 1975.

William Colby too was now back in his home country on a more permanent basis, having been promoted to executive director of the CIA — the third highest position on the agency’s totem pole — in July of 1971. Yet he was suffering through what must surely have been the most personally stressful period of his life since he had dodged Nazis as a young man behind enemy lines.

In April of 1973, his 23-year-old daughter Catherine died of anorexia. Her mental illness was complicated, as they always are, but many in the family believed it to have been aggravated by being the daughter of the architect of the Phoenix Program, a man who was in the eyes of much of her hippie generation Evil Incarnate. His marriage was now, in the opinion of his biographer Randall Bennett Woods, no more than a “shell.” Barbara blamed him not only for what he had done in Vietnam but for failing to be there with his family when his daughter needed him most, for forever skipping out on them with convenient excuses about duty and service on his lips.

Barely a month after Catherine’s death, Colby got a call from Alexander Haig, chief of staff in Richard Nixon’s White House: “The president wants you to take over as director of the CIA.” It ought to have been the apex of his professional life, but somehow it didn’t seem that way under current conditions. At the time, the slow-burning Watergate scandal was roiling the CIA almost more than the White House. Because all five of the men who had been arrested attempting to break into the Democratic National Committee’s headquarters the previous year had connections to the CIA, much of the press was convinced it had all been an agency plot. Meanwhile accusations about the Phoenix Program and other CIA activities, in Vietnam and elsewhere, were also flying thick and fast. The CIA seemed to many in Congress to be an agency out of control, ripe only for dismantling. And of course Colby was still processing the loss of his daughter amidst it all. It was a thankless promotion if ever there was one. Nevertheless, he accepted it.

Colby would later claim that he knew nothing of the CIA’s many truly dirty secrets before stepping into the top job. These were the ones that other insiders referred to as the “family jewels”: its many bungled attempts to assassinate Fidel Castro, before and after he became the leader of Cuba, as well as various other sovereign foreign leaders; the coups it had instigated against lawfully elected foreign governments; its experiments with mind control and psychedelic drugs on unwilling and unwitting human subjects; its unlawful wiretapping and surveillance of scores of Americans; its longstanding practice of opening mail passing between the United States and less-than-friendly nations. That Colby could have risen so high in the agency without knowing these secrets and many more seems dubious on the face of it, but it is just possible; the CIA was very compartmentalized, and Colby had the reputation of being a bit of a legal stickler, just the type who might raise awkward objections to such delicate necessities. “Colby never became a member of the CIA’s inner club of mandarins,” claims the agency’s historian Harold Ford. But whether he knew about the family jewels or not beforehand, he was stuck with them now.

Perhaps in the hope that he could make the agency’s persecutors go away if he threw them just a little red meat, Colby came clean about some of the dodgy surveillance programs. But that only whetted the public’s appetite for more revelations. For as the Watergate scandal gradually engulfed the White House and finally brought down the president, as it became clear that the United States had invested more than $120 billion and almost 60,000 young American lives into South Vietnam only to see it go communist anyway, the public’s attitude toward institutions like the CIA was not positive; a 1975 poll placed the CIA’s approval rating at 14 percent. President Gerald Ford, the disgraced Nixon’s un-elected replacement, was weak and unable to protect the agency. Indeed, a commission chaired by none other than Vice President Nelson Rockefeller laid bare many of the family jewels, holding back only the most egregious incidents of meddling in foreign governments. But even those began to come out in time. Both major political parties had their sights set on future elections, and thus had a strong motivation to blame a rogue CIA for any and all abuses by previous administrations. (Attorney General Robert F. Kennedy, for example, had personally ordered and supervised some of the attempts on Fidel Castro’s life during the early 1960s.)

It was a no-win situation for William Colby. He was called up to testify in Congress again and again, to answer questions in the mold of “When did you stop beating your wife?”, as he put it to colleagues afterward. Everybody seemed to hate him: right-wing hardliners because they thought he was giving away the store (“It is an act of insanity and national humiliation,” said Secretary of State Henry Kissinger, “to have a law prohibiting the president from ordering assassinations”), left-wingers and centrists because they were sure he was hiding everything he could get away with and confessing only to that which was doomed to come out anyway — which was probably true. Colby was preternaturally cool and unflappable at every single hearing, which somehow only made everyone dislike him that much more. Some of his few remaining friends wanted to say that his relative transparency was a product of Catholic guilt — over the Phoenix Program, over the death of his daughter, perchance over all of the CIA’s many sins — but it was hard to square that notion with the rigidly composed, lawyerly presence that spoke in clipped, minimalist phrases before the television cameras. He seemed more like a cold fish than a repentant soul.

On November 1, 1975 — exactly six months after Saigon had fallen, marking the humiliating final defeat of South Vietnam at the hands of the communists — William Colby was called into the White House by President Ford and fired. “There goes 25 years just like that,” he told Barbara when he came home in a rare display of bitterness. His replacement was George Herbert Walker Bush, an up-and-coming Republican politician who knew nothing about intelligence work. President Ford said such an outsider was the only viable choice, given the high crimes and misdemeanors with which all of the rank and file of the CIA were tarred. And who knows? Maybe he was right. Colby stayed on for three more months while his green replacement got up to speed, then left public service forever.


An Oleg Kalugin campaign poster from 1990, after he reinvented himself as a politician. “Let’s vote for Oleg Kalugin!” reads the caption.

Oleg Kalugin was about to suffer his own fall from grace. According to his account, his rising star flamed out when he ventured out on a limb to support a defector from the United States, one of his own first contacts as a spy handler, who was now accused of stealing secrets for the West. The alleged double agent was sent to a Siberian prison despite Kalugin’s advocacy. Suspected now of being a CIA mole himself, Kalugin was reassigned in January of 1980 to a dead-end job as deputy director of the KGB’s Leningrad branch, where he would be sure not to see too much valuable intelligence. You live by the sword, you die by the sword; duplicity begets suspicions of duplicity, such that spies always end up eating their own if they stay in the business long enough.

Again according to Kalugin himself, it was in Leningrad that his nagging doubts about the ethics and efficacy of the Soviet system — the same ones that had been whispering at the back of his mind since the early 1960s — rose to a roar which he could no longer ignore. “It was all an elaborately choreographed farce, and in my seven years in Leningrad I came to see that we had created not only the most extensive totalitarian state apparatus in history but also the most arcane,” he writes. “Indeed, the mind boggled that in the course of seven decades our communist leaders had managed to construct this absurd, stupendous, arcane ziggurat, this terrifyingly centralized machine, this religion that sought to control all aspects of life in our vast country.” We might justifiably wonder that it took him so long to realize this, and note with some cynicism that his decision to reject the system he had served all his life came only after that system had already rejected him. He even confesses that, when Leonid Brezhnev died in 1982 and was replaced by Yuri Andropov, a former head of the KGB who had always thought highly of Kalugin, he wasn’t above dreaming of a return to the heart of the action in the intelligence service. But it wasn’t to be. Andropov soon died, to be replaced by another tired old man named Konstantin Chernenko who died even more quickly, and then Mikhail Gorbachev came along to accidentally dismantle the Soviet Union in the name of saving it.

In January of 1987, Kalugin was given an even more dead-end job, as a security officer in the Academy of Sciences in Moscow. From here, he watched the extraordinary events of 1989, as country after country in the Soviet sphere rejected its communist government, until finally the Berlin Wall fell, taking the Iron Curtain down with it. Just like that, the Cold War was over, with the Soviet Union the undeniable loser. Kalugin must surely have regarded this development with mixed feelings, given what a loyal partisan he had once been for the losing side. Nevertheless, on February 26, 1990, he retired from the KGB. After picking up his severance check, he walked a few blocks to the Institute of History and Archives, where a group of democracy activists had set up shop. “I want to help the democratic movement,” he told them, in a matter-of-fact tone uncannily similar to that of John Walker in a Soviet embassy 22 years earlier. “I am sure that my knowledge and experience will be useful. You can use me in any capacity.”

And so Oleg Kalugin reinvented himself as an advocate for Russian democracy. A staunch supporter of Boris Yeltsin and his post-Soviet vision for Russia, he became an outspoken opponent of the KGB, which still harbored in its ranks many who wished to return the country to its old ways. He was elected to the Supreme Soviet in September of 1990, in the first wave of free and fair elections ever held in Russia. When some of his old KGB colleagues attempted a coup in August of 1991, he was out there manning the barricades for democracy. The coup was put down — just.


William Colby in his later years, enjoying his sailboat, one of his few sources of uncalculated joy.

William Colby too had to reinvent himself after the agency he served declared that it no longer needed him. He wrote a circumspect, slightly anodyne memoir about his career; its title of Honorable Men alone was enough to tell the world that it wasn’t the tell-all book from an angry spy spurned that many might have been hoping for. He consulted for the government on various issues for larger sums than he had ever earned as a regular federal employee, appeared from time to time as an expert commentator on television, and wrote occasional opinion pieces for the national press, most commonly about the ongoing dangers posed by nuclear weapons and the need for arms-control agreements with the Soviet Union.

In 1982, at the age of 62, this stiff-backed avatar of moral rectitude fell in love with a pretty, vivacious 37-year-old, a former American ambassador to Grenada named Sally Shelton. It struck those who knew him as almost a cliché of a mid-life crisis, of the sort that the intelligence services had been exploiting for decades — but then, clichés are clichés for a reason, aren’t they? “I thought Bill Colby had all the charisma of a shoe clerk,” said one family friend. “Sally is a very outgoing woman, even flamboyant. She found him a sex object, and with her he was.” The following year, Colby asked his wife Barbara for a divorce. She was taken aback, even if their marriage hadn’t been a particularly warm one in many years. “People like us don’t get a divorce!” she exclaimed — meaning, of course, upstanding Catholic couples of the Greatest Generation who were fast approaching their 40th wedding anniversary. But there it was. Whatever else was going on behind that granite façade, it seemed that Colby felt he still had some living to do.

None of Colby’s family attended the marriage ceremony, or had much to do with him thereafter. He lost not only his family but his faith: Sally Shelton had no truck with Catholicism, and after he married her he went to church only for weddings and funerals. Was the gain worth the loss? Only Colby knew the answer.


Old frenemies: Oleg Kalugin and William Colby flank Ken Berris, who directed the Spycraft video sequences.

Oleg Kalugin met William Colby for the first time in May of 1991, when both were attending the same seminar in Berlin — appropriately enough, on the subject of international terrorism, the threat destined to steal the attention of the CIA and the Russian FSB (the successor to the KGB) as the Cold War faded into history. The two men had dinner together, then agreed to be jointly interviewed on German television, a living symbol of bygones becoming bygones. “What do you think of Mr. Colby as a leading former figure in U.S. intelligence?” Kalugin was asked.

“Had I had a choice in my earlier life, I would have gladly worked under Mr. Colby,” he answered. The two became friends, meeting up whenever their paths happened to cross in the world.

And why shouldn’t they be friends? They had led similar lives in so many ways. Both were ambitious men who had justified their ambition as a call to service, then devoted their lives to it, swallowing any moral pangs they might have felt in the process, until the people they served had rejected them. In many ways, they had more in common with one another than with the wives and children they had barely seen for long stretches of their lives.

And how are we to judge these two odd, distant men, both so adept at the art of concealment as to seem hopelessly impenetrable? “I am not emotional,” Colby said to a reporter during his turbulent, controversy-plagued tenure as director of the CIA. “I admit it. Oh, don’t watch me like that. You’re looking for something underneath which isn’t there. It’s all here on the surface, believe me.”

Our first instinct might be to scoff at such a claim; surely everyone has an inner life, a tender core they dare reveal only to those they love best. But maybe we should take Colby at his word; maybe doing so helps to explain some things. As Colby and Kalugin spouted their high-minded ideals about duty and country, they forgot those closest to them, the ones who needed them most of all, apparently believing that they possessed some undefined special qualities of character or a special calling that exempted them from all that. Journalist Neil Sheehan once said of Colby that “he would have been perfect as a soldier of Christ in the Jesuit order.” There is something noble but also something horrible about such devotion to an abstract cause. One has to wonder whether it is a crutch, a compensation for some piece of a personality that is missing.

Certainly there was an ultimate venality, an amorality to these two men’s line of work, as captured in the subtitle of the computer game they came together to make: “The Great Game.” Was it all really just a game to them? It would seem so, at least at the end. How else could Kalugin blithely state that he would have “gladly” worked with Colby, forgetting the vast gulf of ideology that lay between them? Tragically, the ante in their great game was all too often human lives. Looking back on all they did, giving all due credit to their courage and capability, it seems clear to me that the world would have been better off without their meddling. The institutions they served were full of people like them, people who thought they knew best, who thought they were that much cleverer than the rest of the world and had a right to steer its course from the shadows. Alas, they weren’t clever enough to see how foolish and destructive their arrogance was.

“My father lived in a world of secrets,” says William’s eldest son Carl Colby. “Always watching, listening, his eye on the door. He was tougher, smarter, smoother, and could be crueler than anybody I ever knew. I’m not sure he ever loved anyone, and I never heard him say anything heartfelt.” Was William Colby made that way by the organization he served, or did he join the organization because he already was that way? It’s impossible to say. Yet we must be sure to keep these things in mind when we turn in earnest to the game on which Colby and Kalugin allowed their names to be stamped, and find out what it has to say about the ethical wages of being a spy.

(Sources: the books Legacy of Ashes: The History of the CIA by Tim Weiner, The Sword and the Shield: The Mitrokhin Archive and the Secret History of the KGB by Christopher Andrew and Vasili Mitrokhin, Lost Crusader: The Secret Wars of CIA Director William Colby by John Prados, Spymaster: My Thirty-Two Years in Intelligence and Espionage against the West by Oleg Kalugin, Where the Domino Fell: America and Vietnam, 1945-2010, sixth edition by James S. Olson and Randy Roberts, Shadow Warrior: William Egan Colby and the CIA by Randall B. Woods, Honorable Men: My Life in the CIA by William Colby and Peter Forbath, and Lost Victory: A Firsthand Account of America’s Sixteen-Year Involvement in Vietnam by William Colby and James McCargar; the documentary film The Man Nobody Knew: In Search of My Father, CIA Spymaster William Colby; Sierra On-Line’s newsletter InterAction of Summer 1993; Questbusters of February 1994. Online sources include “Who Murdered the CIA Chief?” by Zalin Grant at Pythia Press.)

 
