
Trinity Postscript: Selling Tragedy

Like A Mind Forever Voyaging, Trinity seemed destined to become a casualty of an industry that just wasn’t equipped to appreciate what it was trying to do. Traditional game-review metrics like “fun” or “value for money” only cheapened it, while reviewers lacked the vocabulary to even begin to really address its themes. Most were content to simply mention, in passing and often with an obvious unease, that those themes were present. In Computer Gaming World, for instance, Scorpia said that it was “not for the squeamish,” would require of the player “some unpleasant actions,” and that it was “overall a serious game, not a light-hearted one,” before moving on to the firmer ground of puzzle hints. And that was downright thoughtful in comparison to Shay Addams’s review for Questbusters, which tried in a weird and clunky way to be funny in all the ways that Trinity doesn’t: “It blowed up real good!” runs the review’s tagline; the review goes on to ask whether we’ll be eating “fission chips” in Kensington Gardens after the missiles drop. (Okay, that one’s dumb enough to be worth a giggle…) But the review’s most important point is that Trinity is “mainly a game” again after the first Interactive Fiction Plus title, A Mind Forever Voyaging, had so disappointed: “The puzzles are back!”

Even Infocom themselves weren’t entirely sure how to sell or even how to talk about Trinity. The company’s creative management had been unstintingly supportive of Brian Moriarty while he was making the game, but “marketing,” as he said later, “was a little more concerned/disturbed. They didn’t quite know what to make of it.” The matrix of genres didn’t have a slot for “Historical Tragedy.” In the end they slapped a “Fantasy” label on it, although it doesn’t take a long look at Trinity and the previous games to wear that label — the Zork and Enchanter series — to realize that one of these things is not quite like the others.

Moriarty admits to “a few tiffs” with marketing over Trinity, but he was a reasonable guy who also understood that Infocom needed to sell their games and that, while the occasional highbrow press from the likes of The New York Times Book Review had been nice and all, the traditional adventure-game market was the only place they had yet succeeded in consistently doing that. Thus in interviews and other promotions for Trinity he did an uncomfortable dance, trying to talk seriously about the game and the reasons he wrote it while also trying not to scare away people just looking for a fun text adventure. The triangulations can be a bit excruciating: “It isn’t a gloomy game, but it does have a dark undertone to it. It’s not like it’s the end of the world.” (Actually, it is.) Or: “It’s kind of a dark game, but it’s also, I like to think, kind of a fun game too.” (With a ringing endorsement like “I like to think it’s kind of a fun game,” how could anyone resist?)

Trinity‘s commercial saving grace proved to be a stroke of serendipity having nothing to do with any of its literary qualities. The previous year Commodore had released what would prove to be their last 8-bit computer, the Commodore 128. Despite selling quite well, the machine had attracted very little software support. The cause, ironically, was also the reason it had done so well in comparison to the Plus/4, Commodore’s previous 8-bit machine. The 128, you see, came equipped with a “64 Mode” in which it was 99.9 percent compatible with the Commodore 64. Forced to choose between a modest if growing 128 user base and the massive 64 user base through which they could also rope in all those 128 users, almost all publishers, with too many incompatible machines to support already, made the obvious choice.

Infocom’s Interactive Fiction Plus system was, however, almost unique in the entertainment-software industry in running on the 128 in its seldom-used (at least for games) native mode. And all those new 128 owners were positively drooling for a game that actually took advantage of the capabilities of their shiny new machines. A Mind Forever Voyaging and Trinity arrived simultaneously on the Commodore 128 when the Interactive Fiction Plus interpreter was ported to that platform in mid-1986. But the puzzleless A Mind Forever Voyaging was a bit too outré for most gamers’ tastes. Plus it was older, and thus not getting the press or the shelf space that Trinity was. Trinity, on the other hand, fit the bill of “game I can use to show off my 128” just well enough, even for 128 users who might otherwise have had little interest in an all-text adventure game. Infocom’s sales were normally quite evenly distributed across the large range of machines they supported, but Trinity‘s were decidedly lopsided in favor of the Commodore 128. Those users’ numbers were enough to push Trinity to the vicinity of 40,000 in sales, not a blockbuster — especially by the standards of Infocom’s glory years — but enough to handily outdo not just A Mind Forever Voyaging but even more traditional recent games like Spellbreaker. Like the Cold War Trinity chronicles, it could have been much, much worse.

 
Trinity

During 1983, the year that Brian Moriarty first conceived the idea of a text adventure about the history of atomic weapons, the prospect of nuclear annihilation felt more real, more terrifyingly imaginable to average Americans, than it had in a long, long time. The previous November had brought the death of longtime Soviet General Secretary Leonid Brezhnev and the ascension to power of Yuri Andropov. Brezhnev had been a corrupt, self-aggrandizing old rascal, but also a known, relatively safe quantity, content to pin medals on his own chest and tool around in his collection of foreign cars while the Soviet Union settled into a comfortable sort of stagnant stability around him. Andropov, however, was, to the extent he was known at all, considered a bellicose Party hardliner. He had enthusiastically played key roles in the brutal suppression of both the 1956 Hungarian Revolution and the 1968 Prague Spring.

Ronald Reagan, another veteran Cold Warrior, welcomed Andropov into office with two of the most famous speeches of his Presidency. On March 8, 1983, in a speech before the National Association of Evangelicals, he declared the Soviet Union “an evil empire.” Echoing Hannah Arendt’s depiction of Adolf Eichmann, he described Andropov and his colleagues as “quiet men with white collars and cut fingernails and smooth-shaven cheeks who do not need to raise their voice,” committing outrage after outrage “in clean, carpeted, warmed, and well-lighted offices.” Having thus drawn an implicit parallel between the current Soviet leadership and the Nazis against which most of them had struggled in the bloodiest war in history, Reagan dropped some big news on the world two weeks later. At the end of a major televised address on the need for engaging in the largest peacetime military buildup in American history, he announced a new program that would soon come to be known as the Strategic Defense Initiative, or Star Wars: a network of satellites equipped with weaponry to “intercept and destroy strategic ballistic missiles before they reach our own territory or that of our allies.” While researching and building SDI, which would “take years, probably decades, of effort on many fronts” with “failures and setbacks just as there will be successes and breakthroughs” — the diction was oddly reminiscent of Kennedy’s Moon challenge — the United States would in the meantime be deploying a new fleet of Pershing II missiles to West Germany, capable of reaching Moscow in less than ten minutes whilst literally flying under the radar of all of the Soviet Union’s existing early-warning systems. To the Soviet leadership, it looked like the Cuban Missile Crisis in reverse, with Reagan in the role of Khrushchev.

Indeed, almost from the moment that Reagan had taken office, the United States had begun playing chicken with the Soviet Union, deliberately twisting the tail of the Russian bear via feints and probes in the border regions. “A squadron would fly straight at Soviet airspace and their radars would light up and units would go on alert. Then at the last minute the squadron would peel off and go home,” remembers former Undersecretary of State William Schneider. Even as Reagan was making his Star Wars speech, one of the largest of these deliberate provocations was in progress. Three aircraft-carrier battle groups along with a squadron of B-52 bombers all massed less than 500 miles from Siberia’s Kamchatka Peninsula, home of many vital Soviet military installations. If the objective was to make the Soviet leadership jittery — leaving aside for the moment the issue of whether making a country with millions of kilotons of thermonuclear weapons at its disposal jittery is really a good thing — it certainly succeeded. “Every Soviet official one met was running around like a chicken without a head — sometimes talking in conciliatory terms and sometimes talking in the most ghastly and dire terms of real hot war — of fighting war, of nuclear war,” recalls James Buchan, at the time a correspondent for the Financial Times, of his contemporaneous visit to Moscow. Many there interpreted the speeches and the other provocations as setting the stage for premeditated nuclear war.

And so over the course of the year the two superpowers blundered closer and closer to the brink of the unthinkable on the basis of an almost incomprehensible mutual misunderstanding of one another’s national characters and intentions. Reagan and his cronies still insisted on taking the Marxist rhetoric to which the Soviet Union paid lip service at face value when in reality any serious hopes for fomenting a worldwide revolution of the proletariat had ended with Khrushchev, if not with Stalin. As the French demographer Emmanuel Todd wrote in 1976, the Soviet Union’s version of Marxism had long since been transformed “into a collection of high-sounding but irrelevant rhetoric.” Even the Soviet Union’s 1979 invasion of Afghanistan, interpreted by not just the Reagan but also the Carter administration as a prelude to further territorial expansion into the Middle East, was actually a reactionary move founded, like so much the Soviet Union did during this late era of its history, on insecurity rather than expansionist bravado: the new Afghan prime minister, Hafizullah Amin, was making noises about abandoning his alliance with the Soviet Union in favor of one with the United States, raising the possibility of an American client state bordering on the Soviet Union’s soft underbelly. To imagine that this increasingly rickety artificial construct of a nation, which couldn’t even feed itself despite being in possession of vast tracts of some of the most arable land on the planet, was capable of taking over the world was bizarre indeed. Meanwhile, to imagine that the people around him would actually allow Reagan to launch an unprovoked first nuclear strike even if he was as unhinged as some in the Soviet leadership believed him to be is to fundamentally misunderstand America and Americans.

On September 1, 1983, this mutual paranoia took its toll in human lives. Korean Air Lines Flight 007, on its way from New York City to Seoul, drifted hundreds of miles off-course due to the pilot’s apparent failure to change an autopilot setting. It flew over the very same Kamchatka Peninsula the United States had been so aggressively probing. Deciding enough was enough, the Soviet air-defense commander in charge scrambled fighters and made the tragic decision to shoot the plane down without ever confirming that it really was the American spy plane he suspected it to be. All 269 people aboard were killed. The Soviet leadership then made the colossally awful decision to deny that they had shot down the plane; then to admit that, well, okay, maybe they had shot it down, but it had all been an American trick to make their country look bad. If Flight 007 had been an American plot, the Soviets could hardly have played better into the Americans’ hands. Reagan promptly pronounced the downing “an act of barbarism” and “a crime against humanity,” and the rest of the world nodded along, thinking maybe there was some truth to this Evil Empire business after all. Throughout the fall dueling search parties haunted the ocean around the Kamchatka Peninsula, sometimes aggressively shadowing one another in ways that could easily have led to real shooting warfare. The Soviets found the black box first, then quickly squirreled it away and denied its existence; it clearly confirmed that Flight 007 was exactly the innocent if confused civilian airliner the rest of the world was saying it had been.

The superpowers came as close to the brink of war as they ever would — arguably closer than during the much more famed Cold War flash point of the Cuban Missile Crisis — that November. Despite a “frenzied” atmosphere of paranoia in Moscow, which some diplomats described as “pre-war,” the Reagan administration made the decision to go ahead with another provocation in the form of Able Archer 83, an elaborately realistic drill simulating the command-and-control process leading up to a real nuclear strike. The Soviets had long suspected that the West might attempt to launch a real attack under the cover of a drill. Now, watching Able Archer unfold, with many in the Soviet military claiming that it likely represented the all-out nuclear strike the world had been dreading for so long, the leaderless Politburo squabbled over what to do while a dying Andropov lay in hospital. Nuclear missiles were placed on hair-trigger alert in their silos; aircraft loaded with nuclear weapons stood fueled and ready on their tarmacs. One itchy trigger finger or overzealous politician over the course of the ten-day drill could have resulted in apocalypse. Somehow, it didn’t happen.

On November 20, nine days after the conclusion of Able Archer, the ABC television network aired a first-run movie called The Day After. Directed by Nicholas Meyer, fresh off the triumph of Star Trek II, it told the story of a nuclear attack on the American heartland of Kansas. If anything, it soft-pedaled the likely results of such an attack; as a disclaimer in the end credits noted, a real attack would likely be so devastating that there wouldn’t be enough people left alive and upright to make a story. Still, it was brutally uncompromising for a program that aired on national television during the family-friendly hours of prime time. Viewed by more than 100 million shocked and horrified people, The Day After became one of the landmark events in American television history and a landmark of social history in its own right. Many of the viewers, myself among them, were children. I can remember having nightmares about nuclear hellfire and radiation sickness for weeks afterward. The Day After seemed a fitting capstone to such a year of brinksmanship and belligerence. The horrors of nuclear war were no longer mere abstractions. They felt palpably real.

This, then, was the atmosphere in which Brian Moriarty first conceived of Trinity, a text adventure about the history of atomic weaponry and a poetic meditation on its consequences. Moriarty was working during 1983 for A.N.A.L.O.G. magazine, editing articles and writing reviews and programs for publication as type-in listings. Among these were two text adventures, Adventure in the Fifth Dimension and Crash Dive!, that did what they could within the limitations of their type-in format. Trinity, however, needed more, and so it went unrealized during Moriarty’s time at A.N.A.L.O.G. But it was still on his mind during the spring of 1984, when Konstantin Chernenko was settling in as Andropov’s replacement — one dying, idea-bereft old man replacing another, a metaphor for the state of the Soviet Union if ever there was one — and Moriarty was settling in as the newest addition to Infocom’s Micro Group. And it was still there six months later, when the United States and the Soviet Union were agreeing to resume arms-control talks the following year — Reagan had become more open to the possibility following his own viewing of The Day After, thus making Meyer’s film one of the few with a real claim to having directly influenced the course of history — and Moriarty was agreeing to do an entry-level Zorkian fantasy as his first work as an Imp.

Immediately upon completion of his charming Wishbringer in May of 1985, Moriarty was back to his old obsession, which looked at last to have a chance of coming to fruition. The basic structure of the game had long been decided: a time-jumping journey through a series of important events in atomic history that would begin with you escaping a near-future nuclear strike on London and end with you at the first test of an atomic bomb in the New Mexico desert on July 16, 1945 — the Trinity test. In a single feverish week he dashed off the opening vignette in London’s Kensington Gardens, a lovely if foreboding sequence filled with mythic signifiers of the harrowing journey that awaits you. He showed it first to Stu Galley, one of the least heralded of the Imps but one possessed of a quiet passion for interactive fiction’s potential and a wisdom about its production that made him a favorite source of advice among his peers. “If you can sustain this, you’ll have something,” said Galley in his usual understated way.

Thus encouraged, Moriarty could lobby in earnest for his ambitious, deeply serious atomic-age tragedy. Here he caught a lucky break: Wishbringer became one of Infocom’s last substantial hits. While no one would ever claim that the Imps were judged solely on the commercial performance of their games, it certainly couldn’t hurt to have written a hit when your next proposal came up for review. The huge success of The Hitchhiker’s Guide to the Galaxy, for instance, probably had a little something to do with Infocom’s decision to green-light Steve Meretzky’s puzzleless experiment A Mind Forever Voyaging. Similarly, this chance to develop the commercially questionable Trinity can be seen, at least partially, as a reward to Moriarty for providing Infocom with one of the few bright spots of a pretty gloomy 1985. They even allowed him to make it the second game (after A Mind Forever Voyaging) written for the new Interactive Fiction Plus virtual machine that allowed twice the content of the normal system at the expense of abandoning at least half the platforms for which Infocom’s games were usually sold. Moriarty would need every bit of the extra space to fulfill his ambitions.

The marker at the site of the Trinity test, as photographed by Moriarty on his 1985 visit.

He plunged enthusiastically into his research, amassing a bibliography some 40 items long that he would eventually publish, in a first and only for Infocom, in the game’s manual. He also reached out personally to a number of scientists and historians for guidance, most notably Ferenc Szasz of the University of New Mexico, who had just written a book about the Trinity test. That July he took a trip to New Mexico to visit Szasz as well as Los Alamos National Laboratory and other sites associated with early atomic-weapons research, including the Trinity site itself on the fortieth anniversary of that fateful day. His experience of the Land of Enchantment affected him deeply, and in turn affected the game he was writing. In an article for Infocom’s newsletter, he described the weird Strangelovean enthusiasm he found for these dreadful gadgets at Los Alamos with an irony that echoes that of “The Illustrated Story of the Atom Bomb,” the gung-ho comic that would accompany the game itself.

“The Lab” is Los Alamos National Laboratory, announced by a sign that stretches like a CinemaScope logo along the fortified entrance. One of the nation’s leading centers of nuclear-weapons research. The birthplace of the atomic bomb.

The Bradbury Museum occupies a tiny corner in the acres of buildings, parking lots, and barbed-wire fences that comprise the Laboratory. Its collection includes scale models of the very latest in nuclear warheads and guided missiles. You can watch on a computer as animated neutrons blast heavy isotopes to smithereens. The walls are adorned with spectacular color photographs of fireballs and mushroom clouds, each respectfully mounted and individually titled, like great works of art.

I watched a teacher explain a neutron-bomb exhibit to a group of schoolchildren. The exhibit consists of a diagram with two circles. One circle represents the blast radius of a conventional nuclear weapon; a shaded ring in the middle shows the zone of lethal radiation. The other circle shows the relative effects of a neutron bomb. The teacher did her best to point out that the neutron bomb’s “blast” radius is smaller, but its “lethal” radius is proportionally much larger. The benefit of this innovation was not explained, but the kids listened politely.

Trinity had an unusually if not inordinately long development cycle for an Infocom game, stretching from Moriarty’s first foray into Kensington Gardens in May of 1985 to his placing of the finishing touches on the game almost exactly one year later; the released story file bears a compilation datestamp of May 8, 1986. During that time, thanks to the arrival of Mikhail Gorbachev and Perestroika and a less belligerent version of Ronald Reagan, the superpowers crept back a bit from the abyss into which they had stared in 1983. Trinity, however, never wavered from its grim determination that it’s only a matter of time until these Pandorean toys of ours lead to the apocalyptic inevitable. Perhaps we’re fooling ourselves; perhaps it’s still just a matter of time before the wrong weapon in the wrong hands leads, accidentally or on purpose, to nuclear winter. If so, may our current blissful reprieve at least stretch as long as possible.

I’m not much interested in art as competition, but it does feel impossible to discuss Trinity without comparing it to Infocom’s other most obviously uncompromising attempt to create literary Art, A Mind Forever Voyaging. If pressed to name a single favorite from the company’s rich catalog, I would guess that a majority of hardcore Infocom fans would likely name one of these two games. As many of you probably know already, I’m firmly in the Trinity camp myself. While A Mind Forever Voyaging is a noble experiment that positively oozes with Steve Meretzky’s big old warm-and-fuzzy heart, it’s also a bit mawkish and one-note in its writing and even its themes. It’s full of great ideas, mind you, but those ideas often aren’t explored — when they’re explored at all — in all that thoughtful of a way. And I must confess that the very puzzleless design that represents its most obvious innovation presents something of a pacing problem for me. Most of the game is just wandering around under-implemented city streets looking for something to record, an experience that leaves me at an odd disconnect from both the story and the world. Mileages of course vary greatly here (otherwise everyone would be a Trinity person), but I really need a reason to get my hands dirty in a game.

One of the most noteworthy things about Trinity, by contrast, is that it is — whatever else it is — a beautifully crafted traditional text adventure, full of intricate puzzles to die for, exactly the sort of game for which Infocom is renowned and which they did better than anyone else. If A Mind Forever Voyaging is a fascinating might-have-been, a tangent down which Infocom would never venture again, Trinity feels like a culmination of everything the 18 games not named A Mind Forever Voyaging that preceded it had been building toward. Or, put another way, if A Mind Forever Voyaging represents the adventuring avant garde, a bold if problematic new direction, Trinity is a work of classicist art, a perfectly controlled, mature application of established techniques. There’s little real plot to Trinity; little character interaction; little at all really that Infocom hadn’t been doing, albeit in increasingly refined ways, since the days of Zork. If we want to get explicit with the comparisons, we might note that the desolate magical landscape where you spend much of the body of Trinity actually feels an awful lot like that of Zork III, while the vignettes you visit from that central hub parallel Hitchhiker’s design. I could go on, but suffice to say that there’s little obviously new here. Trinity‘s peculiar genius is to be a marvelous old-school adventure game while also being beautiful, poetic, and even philosophically profound. It manages to embed its themes within its puzzles, implicating you directly in the ideas it explores rather than leaving you largely a wandering passive observer as does A Mind Forever Voyaging.

To my thinking, then, Trinity represents the epitome of Infocom’s craft, achieved some nine years after a group of MIT hackers first saw Adventure and decided they could make something even better. There’s a faint odor of anticlimax that clings to just about every game that would follow it, worthy as most of those games would continue to be on their own terms (Infocom’s sense of craft would hardly allow them to be anything else). Some of the Imps, most notably Dave Lebling, have occasionally spoken of a certain artistic malaise that gripped Infocom in its final years, one that was separate from and perhaps more fundamental than all of the other problems with which they struggled. Where to go next? What more was there to really do in interactive fiction, given the many things, like believable characters and character interactions and parsers that really could understand just about anything you typed, that they still couldn’t begin to figure out how to do? Infocom was never, ever going to be able to top Trinity on its own traditionalist terms and really didn’t know how, given the technical, commercial, and maybe even psychological obstacles they faced, to rip up the mold and start all over again with something completely new. Trinity is the top of the mountain, from which they could only start down the other side if they couldn’t find a completely new one to climb. (If we don’t mind straining a metaphor to the breaking point, we might even say that A Mind Forever Voyaging represents a hastily abandoned base camp.)

Given that I think Trinity represents Infocom’s artistic peak (you fans of A Mind Forever Voyaging and other games are of course welcome to your own opinions), I want to put my feet up here for a while and spend the first part of this new year really digging into the history and ideas it evokes. We’re going to go on a little tour of atomic history with Trinity by our side, a series of approaches to one of the most important and tragic — in the classical sense of the term; I’ll go into what I mean by that in a future article — moments of the century just passed, that explosion in the New Mexico desert that changed everything forever. We’ll do so by examining the same historical aftershocks of that “fulcrum of history” (Moriarty’s words) as does Trinity itself, like the game probing deeper and moving back through time toward their locus.

I think of Trinity almost as an intertextual work. “Intertextuality,” like many fancy terms beloved by literary scholars, isn’t really all that hard a concept to understand. It simply refers to a work that requires that its reader have a knowledge of certain other works in order to gain a full appreciation of this one. While Moriarty is no Joyce or Pynchon, Trinity evokes huge swathes of history and lots of heady ideas in often abstract, poetic ways, using very few but very well-chosen words. The game can be enjoyed on its own, but it gains so very much resonance when we come to it knowing something about all of this history. Why else did Moriarty include that lengthy bibliography? In lieu of that 40-item reading list, maybe I can deliver some of the prose you need to fully appreciate Moriarty’s poetry. And anyway, I think this stuff is interesting as hell, which is a pretty good justification in its own right. I hope you’ll agree, and I hope you’ll enjoy the little detour we’re about to make before we continue on to other computer games of the 1980s.

(This and the next handful of articles will all draw from the same collection of sources, so I’ll just list them once here.

On the side of Trinity the game and Infocom, we have, first and foremost as always, Jason Scott’s Get Lamp materials. Also the spring 1986 issue of Infocom’s newsletter, renamed The Status Line by that point thanks to legal threats from The New York Times; the September/October 1986 and November 1986 Computer Gaming World; the August 1986 Questbusters; and the August 1986 Computer and Video Games.

As far as atomic history, I find I’ve amassed a library almost as extensive as Trinity‘s bibliography. Standing in its most prominent place we have Richard Rhodes’s magisterial “atomic trilogy” The Making of the Atomic Bomb, Dark Sun, and Arsenals of Folly. There’s also Command and Control by Eric Schlosser; The House at Otowi Bridge by Peggy Pond Church; The Nuclear Weapons Encyclopedia; Now It Can Be Told by Leslie Groves; Hiroshima by John Hersey; The Day the Sun Rose Twice by Ferenc Morton Szasz; Enola Gay by Gordon Thomas; and Prompt and Utter Destruction by J. Samuel Walker. I can highly recommend all of these books for anyone who wants to read further in these subjects.)

 
 


Putting the “J” in the RPG, Part 1: Dorakue!


Fair warning: this article includes some plot spoilers of Final Fantasy I through VI.

The videogame industry has always run on hype, but the amount of it that surrounded Final Fantasy VII in 1997 was unparalleled in its time. This new game for the Sony PlayStation console was simply inescapable. The American marketing teams of Sony and Square Corporation, the game’s Japanese developer and publisher, had been given $30 million with which to elevate Final Fantasy VII to the same status as the Super Marios of the world. They plastered Cloud, Aerith, Tifa, Sephiroth, and the game’s other soon-to-be-iconic characters onto urban billboards, onto the sides of buses, and into the pages of glossy magazines like Rolling Stone, Playboy, and Spin. Commercials for the game aired round the clock on MTV, during NFL games and Saturday Night Live, even on giant cinema screens in lieu of more traditional coming-attractions trailers. “They said it couldn’t be done in a major motion picture,” the stentorian announcer intoned. “They were right!” Even if you didn’t care a whit about videogames, you couldn’t avoid knowing that something pretty big was going down in that space.

And if you did care… oh, boy. The staffs of the videogame magazines, hardly known for their sober-mindedness in normal times, worked themselves up to positively orgasmic heights under Square’s not-so-gentle prodding. GameFan told its readers that Final Fantasy VII would be “unquestionably the greatest entertainment product ever created.”

The game is ridiculously beautiful. Analyze five minutes of gameplay in Final Fantasy VII and witness more artistic prowess than most entire games have. The level of detail is absolutely astounding. These graphics are impossible to describe; no words are great enough. Both map and battle graphics are rendered to a level of detail completely unprecedented in the videogame world. Before Final Fantasy VII, I couldn’t have imagined a game looking like this for many years, and that’s no exaggeration. One look at a cut scene or call spell should handily convince you. Final Fantasy VII looks so consistently great that you’ll quickly become numb to the power. Only upon playing another game will you once again realize just how fantastic it is.

But graphics weren’t all that the game had going for it. In fact, they weren’t even the aspect that would come to most indelibly define it for most of its players. No… that thing was, for the very first time in a mainstream console-based videogame with serious aspirations of becoming the toppermost of the poppermost, the story.

I don’t have any room to go into the details, but rest assured that Final Fantasy VII possesses the deepest, most involved story line ever in an RPG. There’s few games that have literally caused my jaw to drop at plot revelations, and I’m most pleased to say that Final Fantasy VII doles out these shocking, unguessable twists with regularity. You are constantly motivated to solve the latest mystery.

So, the hype rolled downhill, from Square at the top to the mass media, then on to the hardcore gamer magazines, and finally to ordinary owners of PlayStations. You would have to have been an iconoclastic PlayStation owner indeed not to be shivering with anticipation as the weeks counted down toward the game’s September 7 release. (Owners of other consoles could eat their hearts out; Final Fantasy VII was a PlayStation exclusive.)

Just last year, a member of an Internet gaming forum still fondly recalled how

the lead-up for the US launch of this game was absolutely insane, and, speaking personally, it is the most excited about a game I think I had ever been in my life, and nothing has come close since then. I was only fifteen at the time, and this game totally overtook all my thoughts and imagination. I had never even played a Final Fantasy game before, and I didn’t even like RPGs, yet I would spend hours reading and rereading all the articles from all the gaming magazines I had, inspecting all the screenshots and being absolutely blown away at the visual fidelity I was witnessing. I spent multiple days/hours with my Sony Discman listening to music and drawing the same artwork that was in all the mags. It was literally a genre- and generation-defining game.

Those who preferred to do their gaming on personal computers rather than consoles might be excused for scoffing at all these breathless commentators who seemed to presume that Final Fantasy VII was doing something that had never been done before. If you spent your days playing Quake, Final Fantasy VII‘s battle graphics probably weren’t going to impress you overmuch; if you knew, say, Toonstruck, even the cut scenes might strike you as pretty crude. And then, too, computer-based adventure games and RPGs had been delivering well-developed long-form interactive narratives for many years by 1997, most recently with a decidedly cinematic bent more often than not, with voice actors in place of Final Fantasy VII‘s endless text boxes. Wasn’t Final Fantasy VII just a case of console gamers belatedly catching on to something computer gamers had known all along, and being forced to do so in a technically inferior fashion at that?

Well, yes and no. It’s abundantly true that much of what struck so many as so revelatory about Final Fantasy VII really wasn’t anywhere near as novel as they thought it was. At the same time, though, the aesthetic and design philosophies which it applied to the abstract idea of the RPG truly were dramatically different from the set of approaches favored by Western studios. They were so different, in fact, that the RPG genre in general would be forever bifurcated in gamers’ minds going forward, as the notion of the “JRPG” — the Japanese RPG — entered the gaming lexicon. In time, the label would be applied to games that didn’t actually come from Japan at all, but that evinced the set of styles and approaches so irrevocably cemented in the Western consciousness under the label of “Japanese” by Final Fantasy VII.

We might draw a parallel with what happened in music in the 1960s. The Beatles, the Rolling Stones, and all the other Limey bands who mounted the so-called “British Invasion” of their former Colonies in 1964 had all spent their adolescence steeped in American rock and roll. They took those influences, applied their own British twist to them, then sold them back to American teenagers, who screamed and fainted in the concert halls like Final Fantasy VII fans later would in the pages of the gaming magazines, convinced that the rapture they were feeling was brought on by something genuinely new under the sun — which in the aggregate it was, of course. It took the Japanese to teach Americans how thrilling and accessible — even how emotionally moving — the gaming genre they had invented could truly be.



The roots of the JRPG can be traced back not just to the United States but to a very specific place and time there: to the American Midwest in the early 1970s, where and when Gary Gygax and Dave Arneson, a pair of stolid grognards who would have been utterly nonplussed by the emotional histrionics of a Final Fantasy VII, created a “single-unit wargame” called Dungeons & Dragons. I wrote quite some years ago on this site that their game’s “impact on the culture at large has been, for better or for worse, greater than that of any single novel, film, or piece of music to appear during its lifetime.” I almost want to dismiss those words now as the naïve hyperbole of a younger self. But the thing is, I can’t; I have no choice but to stand by them. Dungeons & Dragons really was that earthshaking, not only in the obvious ways — it’s hard to imagine the post-millennial craze for fantasy in mass media, from the Lord of the Rings films to Game of Thrones, ever taking hold without it — but also in subtler yet ultimately more important ones, in the way it changed the role we play in our entertainments from that of passive spectators to active co-creators, making interactivity the watchword of an entire age of media.

The early popularity of Dungeons & Dragons coincided with the rise of accessible computing, and this proved a potent combination. Fans of the game with access to PLATO, a groundbreaking online community rooted in American universities, moved it as best they could onto computers, yielding the world’s first recognizable CRPGs. Then a couple of PLATO users named Robert Woodhead and Andrew Greenberg made a game of this type for the Apple II personal computer in 1981, calling it Wizardry. Meanwhile Richard Garriott was making Ultima, a different take on the same broad concept of “Dungeons & Dragons on a personal computer.”

By the time Final Fantasy VII stormed the gates of the American market so triumphantly in 1997, the cultures of gaming in the United States and Japan had diverged so markedly that one could almost believe they had never had much of anything to do with one another. Yet in these earliest days of digital gaming — long before the likes of the Nintendo Entertainment System, when Japanese games meant only coin-op arcade hits like Space Invaders, Pac-Man, and Donkey Kong in the minds of most Americans — there was in fact considerable cross-pollination. For Japan was the second place in the world after North America where reasonably usable, pre-assembled, consumer-grade personal computers could be readily purchased; the Japanese Sharp MZ80K and Hitachi MB-6880 trailed the American Trinity of 1977 — the Radio Shack TRS-80, Apple II, and Commodore PET — by less than a year. If these two formative cultures of computing didn’t talk to one another, whom else could they talk to?

Thus pioneering American games publishers like Sierra On-Line and Brøderbund forged links with counterparts in Japan. A Japanese company known as Starcraft became the world’s first gaming localizer, specializing in porting American games to Japanese computers and translating their text into Japanese for the domestic market. As late as the summer of 1985, Roe R. Adams III could write in Computer Gaming World that Sierra’s sprawling twelve-disk-side adventure game Time Zone, long since written off at home as a misbegotten white elephant, “is still high on the charts after three years” in Japan. Brøderbund’s platformer Lode Runner was even bigger, having swum like a salmon upstream in Japan, being ported from home computers to coin-op arcade machines rather than the usual reverse. It had even spawned the world’s first e-sports league, whose matches were shown on Japanese television.

At that time, the first Wizardry game and the second and third Ultima had only recently been translated and released in Japan. And yet if Adams was to be believed (and he was not an entirely disinterested observer: he was already working with Robert Woodhead on Wizardry IV, and had in fact accompanied him to Japan in this capacity), both games already

have huge followings. The computer magazines cover Lord British [Richard Garriott’s nom de plume] like our National Inquirer would cover a television star. When Robert Woodhead of Wizardry fame was recently in Japan, he was practically mobbed by autograph seekers. Just introducing himself in a computer store would start a near-stampede as people would run outside to shout that he was inside.

Robert Woodhead with Japanese Wizardry fans.

The Wizardry and Ultima pump had been primed in Japan by a game called The Black Onyx, created the year before in their image for the Japanese market by an American named Henk Rogers. (A man with an international perspective if ever there was one, Rogers would later go on to fame and fortune as the man who brought Tetris out of the Soviet Union.) But his game was quickly eclipsed by the real deals that came directly out of the United States.

Wizardry in particular became a smashing success in Japan, even as a rather lackadaisical attitude toward formal and audiovisual innovation on the part of its masterminds was already condemning it to also-ran status against Ultima and its ilk in the United States. It undoubtedly helped that Wizardry was published in Japan by ASCII Corporation, that country’s nearest equivalent to Microsoft, with heaps of marketing clout and distributional muscle to bring to bear on any challenge. So, while the Wizardry series that American gamers knew petered out in somewhat anticlimactic fashion in the early 1990s after seven games (it would be briefly revived for one final game, the appropriately named Wizardry 8, in 2001), it spawned close to a dozen Japanese-exclusive titles later in that decade alone, plus many more after the millennium, such that the franchise remains to this day far better known by everyday gamers in Japan than it is in the United States. Robert Woodhead himself spent two years in Japan in the early 1990s working on what would have been a Wizardry MMORPG, if it hadn’t proved to be just too big a mouthful for the hardware and telecommunications infrastructure at his disposal.

Box art helps to demonstrate Wizardry‘s uncanny legacy in Japan. Here we see the original 1981 American release of the first game.

And here we have a Japan-only Wizardry from a decade later, self-consciously echoing a foreboding, austere aesthetic that had become more iconic in Japan than it had ever been in its home country. (American Wizardry boxes from the period look nothing like this, being illustrated in a more conventional, colorful epic-fantasy style.)

Much of the story of such cultural exchanges inevitably becomes a tale of translation. In its original incarnation, the first Wizardry game had had the merest wisp of a plot. In this as in all other respects it was a classic hack-and-slash dungeon crawler: work your way down through ten dungeon levels and kill the evil wizard, finito. What background context there was tended to be tongue-in-cheek, more Piers Anthony than J.R.R. Tolkien; the most desirable sword in the game was called the “Blade of Cuisinart,” for Pete’s sake. Wizardry‘s Japanese translators, however, took it all in with wide-eyed earnestness, missing the winking and nodding entirely. They saw a rather grim, austere milieu a million miles away from the game that Americans knew — a place where a Cuisinart wasn’t a stainless-steel food processor but a portentous ancient warrior clan.

When the Japanese started to make their own Wizardry games, they continued in this direction, to almost hilarious effect if one knew the source material behind their efforts; it rather smacks of the post-apocalyptic monks in A Canticle for Leibowitz making a theology for themselves out of the ephemeral advertising copy of their pre-apocalyptic forebears. A franchise that had in its first several American releases aspired to be about nothing more than killing monsters for loot — and many of them aggressively silly monsters at that — gave birth to audio CDs full of po-faced stories and lore, anime films and manga books, a sprawling line of toys and miniature figures, even a complete tabletop RPG system. But, lest we Westerners begin to feel too smug about all this, know that the same process would eventually come to work in reverse in the JRPG field, with nuanced Japanese writing being flattened out and flat-out misunderstood by clueless American translators.

The history of Wizardry in Japan is fascinating by dint of its sheer unlikeliness, but the game’s importance on the global stage actually stems more from the Japanese games it influenced than from the ones that bore the Wizardry name right there on the box. For Wizardry, along with the early Ultima games, happened to catch the attention of Koichi Nakamura and Yuji Horii, a software-development duo who had already made several games together for a Japanese publisher called Enix. “Horii-san was really into Ultima, and I was really into Wizardry,” remembers Nakamura. This made sense. Nakamura was the programmer of the pair, naturally attracted to Wizardry‘s emphasis on tactics and systems. Horii, on the other hand, was the storytelling type, who wrote for manga magazines in addition to games, and was thus drawn to Ultima‘s quirkier, more sprawling world and its spirit of open-ended exploration. The pair decided to make their own RPG for the Japanese market, combining what they each saw as the best parts of Wizardry and Ultima.

Yuji Horii in the 1980s. Little known outside his home country, he is a celebrity inside its borders. In his book on Japanese videogame culture, Chris Kohler calls him a Steven Spielberg-like figure there, in terms both of name recognition and the style of entertainment he represents.

This was interesting, but not revolutionary in itself; you’ll remember that Henk Rogers had already done essentially the same thing in Japan with The Black Onyx before Wizardry and Ultima ever officially arrived there. Nevertheless, the choices Nakamura and Horii made as they set about their task give them a better claim to the title of revolutionaries on this front than Rogers enjoys. They decided that making a game that combined the best of Wizardry and Ultima really did mean just that: it did not mean, that is to say, throwing together every feature of each which they could pack in and calling it a day, as many a Western developer might have. They decided to make a game that was simpler than either of its inspirations, much less the two of them together.

Their reasons for doing so were artistic, commercial, and technical. In the realm of the first, Horii in particular just didn’t like overly complicated games; he was the kind of player who would prefer never to have to glance at a manual, whose ideal game intuitively communicated to you everything you needed to know in order to play it. In the realm of the second, the pair was sure that the average Japanese person, like the average person in most countries, felt the same as Horii; even in the United States, Ultima and Wizardry were niche products, and Nakamura and Horii had mass-market ambitions. And in the realm of the third, they were sharply limited in how much they could put into their RPG anyway, because they intended it for the Nintendo Famicom console, where their entire game — code, data, graphics, and sound — would have to fit onto a 64 K cartridge in lieu of floppy disks and would have to be steerable using an eight-button controller in lieu of a keyboard. Luckily, Nakamura and Horii already had experience with just this sort of simplification. Their most recent output had been inspired by the adventure games of American companies like Sierra and Infocom, but had replaced those games’ text parsers with controller-friendly multiple-choice menus.
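To make that design shift concrete, here is a minimal sketch, in modern Python rather than anything Nakamura and Horii actually shipped on the Famicom, of the difference between a free-text parser prompt and a controller-friendly command menu. The command list and prompts are invented purely for illustration.

```python
# Illustrative only: a parser-style prompt versus a controller-friendly menu.
# Command names and prompts are invented for this sketch.

COMMANDS = ["TALK", "LOOK", "SEARCH", "ITEM", "SPELL", "STATUS"]

def parser_input() -> str:
    """Keyboard-era approach: accept free text, which must then be parsed."""
    return input("> ").strip().upper()        # e.g. "ASK INNKEEPER ABOUT KING"

def menu_input() -> str:
    """Console-era approach: the player only ever picks from a short list."""
    for i, cmd in enumerate(COMMANDS, start=1):
        print(f"{i}. {cmd}")
    while True:
        choice = input("Select 1-6: ").strip()   # stands in for d-pad + A button
        if choice.isdigit() and 1 <= int(choice) <= len(COMMANDS):
            return COMMANDS[int(choice) - 1]
        print("Invalid selection.")

if __name__ == "__main__":
    print("You chose:", menu_input())
```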

In deciding to put American RPGs through the same wringer, they established one of the core attributes of the JRPG sub-genre: generally speaking, these games were and would remain simpler than their Western counterparts, which sometimes seemed to positively revel in their complexity as a badge of honor. Another attribute emerged fully-formed from the writerly heart of Yuji Horii. He crafted an unusually rich, largely linear plot for the game. Rather than being a disadvantage, he thought linearity would make this new style of console game “more accessible to consumers”: “We really focused on ensuring people would be able to experience the fun of the story.”

He called upon his friends at the manga magazines to help him illustrate his tale with large, colorful figures in that distinctly Japanese style that has become so immediately recognizable all over the world. At this stage, it was perhaps more prevalent on the box than in the game itself, the Famicom’s graphical fidelity being what it was. Nonetheless, another precedent that has held true in JRPGs right down to the present day was set by the overall visual aesthetic of this, the canonical first example of the breed. Ditto its audio aesthetic, which took the form of a memorable, melodic, eminently hummable chip-tune soundtrack. “From the very beginning, we wanted to create a warm, inviting world,” says Horii.

Dragon Quest. Ultima veterans will almost expect to meet Lord British on his throne somewhere. With its overhead view and its large over-world full of towns to be visited, Dragon Quest owed even more to Ultima than it did to Wizardry — unsurprisingly so, given that the former was the American RPG which its chief creative architect Yuji Horii preferred.

Dragon Quest was released on May 27, 1986. Console gamers — not only those in Japan, but anywhere on the globe — had never seen anything like it. Playing this game to the end was a long-form endeavor that could stretch out over weeks or months; you wrote down an alphanumeric code the game provided to you on exit, then entered that code when you returned in order to jump back to wherever you had left off.
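Dragon Quest’s actual password format isn’t documented here, but the general technique behind any password save is simple enough to sketch: pack a small amount of game state into an integer, append a checksum, and spell the result in a restricted alphabet. The Python below uses invented field names, sizes, and symbols; it is an illustration of the idea, not the game’s real scheme.

```python
# A minimal sketch (not Dragon Quest's real format) of a password save system
# that packs a tiny game state into a short alphanumeric code.

ALPHABET = "ABCDEFGHJKLMNPQRSTUVWXYZ23456789"  # 32 symbols -> 5 bits each

def encode_state(level: int, gold: int, flags: int) -> str:
    """Pack level (0-255), gold (0-65535), and 8 progress flags into a password."""
    state = (level & 0xFF) | ((gold & 0xFFFF) << 8) | ((flags & 0xFF) << 24)
    checksum = state % 251                      # crude integrity check
    packed = (state << 8) | checksum            # 40 bits total
    chars = []
    for _ in range(8):                          # 8 symbols * 5 bits = 40 bits
        chars.append(ALPHABET[packed & 0x1F])
        packed >>= 5
    return "".join(reversed(chars))

def decode_state(password: str):
    """Recover (level, gold, flags) from a password, rejecting corrupted codes."""
    packed = 0
    for ch in password:
        packed = (packed << 5) | ALPHABET.index(ch)
    state, checksum = packed >> 8, packed & 0xFF
    if state % 251 != checksum:
        raise ValueError("invalid password")
    return state & 0xFF, (state >> 8) & 0xFFFF, (state >> 24) & 0xFF

# Round trip: an eight-character code captures the whole (tiny) game state.
code = encode_state(level=12, gold=3400, flags=0b00010110)
assert decode_state(code) == (12, 3400, 0b00010110)
```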

That said, the fact that the entire game state could be packed into a handful of numbers and letters does serve to illustrate just how simple Dragon Quest really was at bottom. By the standards of only a few years later, much less today, it was pretty boring. Fighting random monsters wasn’t so much a distraction from the rest of the game as the only thing available to do; the grinding was the game. In 2012, critic Nick Simberg wondered at “how willing we were to sit down on the couch and fight the same ten enemies over and over for hours, just building up gold and experience points”; he compared Dragon Quest to “a child’s first crayon drawing, stuck with a magnet to the fridge.”

And yet, as the saying goes, you have to start somewhere. Japanese gamers were amazed and entranced, buying 1 million copies of Dragon Quest in its first six months, over 2 million copies in all. And so a new sub-genre was born, inspired by American games but indelibly Japanese in a way The Black Onyx had not been. Many or most of the people who played and enjoyed Dragon Quest had never even heard of its original wellspring Dungeons & Dragons.

We all know what happens when a game becomes a hit on the scale of Dragon Quest. There were sequels — two within two years of the first game, then three more in the eight years after them, as the demands of higher production values slowed down Enix’s pace a bit. Wizardry was big in Japan, but it was nothing compared to Dragon Quest, which sold 2.4 million copies in its second incarnation, followed by an extraordinary 3.8 million copies in its third. Middle managers and schoolmasters alike learned to dread the release of a new entry in the franchise, as about half the population of Japan under a certain age would invariably call in sick that day. When Enix started bringing out the latest games on non-business days, a widespread urban legend said this had been done in accordance with a decree from the Japanese Diet, which demanded that “henceforth Dragon Quest games are to be released on Sunday or national holidays only”; the urban legend wasn’t true, but the fact that so many people in Japan could so easily believe it says something in itself. Just as the early American game Adventure lent its name to an entire genre that followed it, the Japanese portmanteau word for “Dragon Quest” — Dorakue — became synonymous with the RPG in general there, such that when you told someone you were “playing dorakue” you might really be playing one of the series’s countless imitators.

Giving any remotely complete overview of these dorakue games would require dozens of articles, along with someone to write them who knows far more about them than I do. But one name is inescapable in the field. I refer, of course, to Final Fantasy.


Hironobu Sakaguchi in 1991.

Legend has it that Hironobu Sakaguchi, the father of Final Fantasy, chose that name because he thought that the first entry in the eventual franchise would be the last videogame he ever made. A former professional musician with numerous and diverse interests, Sakaguchi had been working for the Japanese software developer and publisher Square for a few years already by 1987, designing and programming Famicom action games that he himself found rather banal and that weren’t even selling all that well. He felt ready to do something else with his life, was poised to go back to university to try to figure out what that thing ought to be. But before he did so, he wanted to try something completely different at Square.

Another, less dramatic but probably more accurate version of the origin story has it that Sakaguchi simply liked the way the words “final” and “fantasy” sounded together. At any rate, he convinced his managers to give him half a dozen assistants and six months to make a dorakue game.

In another unexpected link between East and West, one of his most important assistants was Nasir Gebelli, an Iranian who had fled his country’s revolution for the United States in 1979 and become a game-programming rock star on the Apple II. After the heyday of the lone-wolf bedroom auteur began to fade there, Doug Carlston, the head of Brøderbund, brokered a job for him with his friends in Japan. There he maximized the Famicom’s potential in the same way he had that of the Apple II, despite not speaking a word of Japanese when he arrived. (“We’d go to a restaurant and no matter what he’d order — spaghetti or eggs — they’d always bring out steak,” Sakaguchi laughs.) Gebelli would program the first three Final Fantasy games almost all by himself.

 

Final Fantasy I.

The very first Final Fantasy may not have looked all that different from Dragon Quest at first glance — it was still a Famicom game, after all, with all the audiovisual limitations that implies — but it had a story line that was more thematically thorny and logistically twisted than anything Yuji Horii might have come up with. As it began, you found yourself in the midst of a quest to save a princess from an evil knight, which certainly sounded typical enough to anyone who had ever played a dorakue game before. In this case, however, you completed that task within an hour, only to learn that it was just a prologue to the real plot. In his book-length history and study of the aesthetics of Japanese videogames, Chris Kohler detects an implicit message here: “Final Fantasy is about much more than saving the princess. Compared to the adventure that is about to take place, saving a princess is merely child’s play.” In fact, only after the prologue was complete did the opening credits finally roll, thus displaying another consistent quality of Final Fantasy: its love of unabashedly cinematic drama.

Still, for all that it was more narratively ambitious than what had come before, the first Final Fantasy can, like the first Dragon Quest, seem a stunted creation today. Technical limitations meant that you still spent 95 percent of your time just grinding for experience. “Final Fantasy may have helped build the genre, but it didn’t necessarily know exactly how to make it fun,” acknowledges Aidan Moher in his book about JRPGs. And yet when it came to dorakue games in the late 1980s, it seemed that Sakaguchi’s countrymen were happy to reward even the potential for eventual fun. They made Final Fantasy the solid commercial success that had heretofore hovered so frustratingly out of reach of its creator; it sold 400,000 copies. Assured that he would never have to work on a mindless action game again, Sakaguchi agreed to stay on at Square to build upon its template.

Final Fantasy II, which was released exactly one year after the first game in December of 1988 and promptly doubled its sales, added more essential pieces to what would become the franchise’s template. Although labelled and marketed as a sequel, its setting, characters, and plot had no relation to what had come before. Going forward, it would remain a consistent point of pride with Sakaguchi to come up with each new Final Fantasy from whole cloth, even when fans begged him for a reunion with their favorite places and people. In a world as afflicted with sequelitis as ours, he can only be commended for sticking to his guns.

In another sense, though, Final Fantasy II was notable for abandoning a blank slate rather than embracing it. For the first time, its players were given a pre-made party full of pre-made personalities to guide rather than being allowed to roll their own. Although they could rename the characters if they were absolutely determined to do so — this ability would be retained as a sort of vestigial feature as late as Final Fantasy VII — they were otherwise set in stone, the better to serve the needs of the set-piece story Sakaguchi wanted to tell. This approach, which many players of Western RPGs did and still do regard as a betrayal of one of the core promises of the genre, would become commonplace in JRPGs. Few contrasts illustrate so perfectly the growing divide between these two visions of the RPG: the one open-ended and player-driven, sometimes to a fault; the other tightly scripted and story-driven, again sometimes to a fault. In a Western RPG, you write a story for yourself; in a JRPG, you live a story that someone else has already written for you.

Consider, for example, the two lineages’ handling of mortality. If one of your characters dies in battle in a Western RPG, it might be difficult and expensive, or in some cases impossible, to restore her to life; in this case, you either revert to an earlier saved state or you just accept her death as another part of the story you’re writing and move on to the next chapter with an appropriately heavy heart. In a JRPG, on the other hand, death in battle is never final; it’s almost always easy to bring a character who gets beaten down to zero hit points back to life. What are truly fatal, however, are pre-scripted deaths, the ones the writers have deemed necessary for storytelling purposes. Final Fantasy II already contained the first of these; years later, Final Fantasy VII would be host to the most famous of them all, a death so shocking that you just have to call it that scene and everyone who has ever played the game will immediately know what you’re talking about. To steal a phrase from Graham Nelson, the narrative always trumps the crossword in JRPGs; they happily override their gameplay mechanics whenever the story they wish to tell demands it, creating an artistic and systemic discontinuity that’s enough to make Aristotle roll over in his grave. Yet a huge global audience of players is not bothered at all by it — not if the story is good enough.

But we’ve gotten somewhat ahead of ourselves; the evolution of the 1980s JRPG toward the modern-day template came in fits and starts rather than a linear progression. Final Fantasy III, which was released in 1990, actually returned to a player-generated party, and yet the market failed to punish it for its conservatism. Far from it: it sold 1.4 million copies.

Final Fantasy IV, on the other hand, chose to double down on the innovations Final Fantasy II had deployed, and sold in about the same numbers as Final Fantasy III. Released in July of 1991, it provided you with not just a single pre-made party but an array of characters who moved in and out of your control as the needs of the plot dictated, thereby setting yet another longstanding precedent for the series going forward. Ditto the nature of the plot, which leaned into shades of gray as never before. Chris Kohler:

The story deals with mature themes and complex characters. In Final Fantasy II, the squeaky-clean main characters were attacked by purely evil dark knights; here, our main character is a dark knight struggling with his position, paid to kill innocents, trying to reconcile loyalty to his kingdom with his sense of right and wrong. He is involved in a sexual relationship. His final mission for the king turns out to be a mass murder: the “phantom monsters” are really just a town of peaceful humans whose magic the corrupt king has deemed dangerous. (Note the heavy political overtones.)

Among Western RPGs, only the more recent Ultima games had dared to deviate so markedly from the absolute-good-versus-absolute-evil tales of everyday heroic fantasy. (In fact, the plot of Final Fantasy IV bears a lot of similarities to that of Ultima V…)

Ever since Final Fantasy IV, the series has been filled with an inordinate number of moody young James Deans and long-suffering Natalie Woods who love them.

Final Fantasy IV was also notable for introducing an “active-time battle system,” a hybrid between the turn-based systems the series had previously employed and real-time combat, designed to provide some of the excitement of the latter without completely sacrificing the tactical affordances of the former. (In a nutshell, if you spend too long deciding what to do when it’s your turn, the enemies will jump in and take another turn of their own while you dilly-dally.) It too would remain a staple of the franchise for many installments to come.
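To make that nutshell a little more concrete, here is a minimal sketch of such a charge gauge in present-day Python. It is only an illustration of the general idea described above, not Square’s actual implementation; the class and function names are invented for the purpose.

class Combatant:
    def __init__(self, name, speed):
        self.name = name
        self.speed = speed        # a faster combatant refills its gauge sooner
        self.gauge = 0.0          # 0.0 (just acted) up to 100.0 (ready to act)

    def tick(self, dt):
        # The gauge keeps filling every frame, whether or not the player
        # has made up their mind yet.
        self.gauge = min(100.0, self.gauge + self.speed * dt)

    def ready(self):
        return self.gauge >= 100.0

def battle_frame(combatants, dt, choose_action):
    # One frame of the battle loop. choose_action(c) returns a callable for a
    # ready combatant, or None if the player is still deciding; the enemy AI
    # always returns one at once, which is how dilly-dallying costs you turns.
    for c in combatants:
        c.tick(dt)
        if c.ready():
            action = choose_action(c)
            if action is not None:
                c.gauge = 0.0     # the turn is spent; start refilling
                action()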

Final Fantasy V, which was released in December of 1992, was like Final Fantasy III something of a placeholder or even a retrenchment, dialing back on several of the fourth game’s innovations. It sold almost 2.5 million copies.

Both the fourth and fifth games had been made for the Super Famicom, Nintendo’s 16-bit successor to its first console, and sported correspondingly improved production values. But most JRPG fans agree that it was with the sixth game — the last for the Super Famicom — that all the pieces finally came together into a truly frictionless whole. Indeed, a substantial and vocal minority will tell you that Final Fantasy VI rather than its immediate successor is the best Final Fantasy ever, balanced perfectly between where the series had been and where it was going.

Final Fantasy VI abandoned conventional epic-fantasy settings for a steampunk milieu out of Jules Verne. As we’ll see in a later article, Final Fantasy VII‘s setting would deviate even more from the norm. This creative restlessness is one of the series’s best traits, standing it in good stead in comparison to the glut of nearly indistinguishably Tolkienesque Western RPGs of the 1980s and 1990s.

From its ominous opening-credits sequence on, Final Fantasy VI strained for a gravitas that no previous JRPG had approached, and arguably succeeded in achieving it at least intermittently. It played out on a scale that had never been seen before; by the end of the game, more than a dozen separate characters had moved in and out of your party. Chris Kohler identifies the game’s main theme as “love in all its forms — romantic love, parental love, sibling love, and platonic love. Sakaguchi asks the player, what is love and where can we find it?”

Before that scene in Final Fantasy VII, Hironobu Sakaguchi served up a shocker of equal magnitude in Final Fantasy VI. Halfway through the game, the bad guys win despite your best efforts and the world effectively ends, leaving your party wandering through a post-apocalyptic World of Ruin like the characters in a Harlan Ellison story. The effect this had on some players’ emotions could verge on traumatizing — heady stuff for a videogame on a console still best known worldwide as the cuddly home of Super Mario. For many of its young players, Final Fantasy VI was their first close encounter on their own recognizance — i.e., outside of compulsory school assignments — with the sort of literature that attempts to move beyond tropes to truly, thoughtfully engage with the human condition.

It’s easy for an old, reasonably well-read guy like me to mock Final Fantasy VI‘s highfalutin aspirations, given that they’re stuffed into a game that still resolves at the granular level into bobble-headed figures fighting cartoon monsters. And it’s equally easy to scoff at the heavy-handed emotional manipulation that has always been part and parcel of the JRPG; subtle the sub-genre most definitely is not. Nonetheless, meaningful literature is where you find it, and the empathy it engenders can only be welcomed in a world in desperate need of it. Whatever else you can say about Final Fantasy and most of its JRPG cousins, the messages these games convey are generally noble ones, about friendship, loyalty, and the necessity of trying to do the right thing in hard situations, even when it isn’t easy to figure out what the right thing is. While these messages are accompanied by plenty of violence in the abstract, it is indeed abstracted — highly stylized and, what with the bifurcation between game and story that is so prevalent in the sub-genre, often oddly divorced from the games’ core themes.

Released in April of 1994, Final Fantasy VI sold 2.6 million copies in Japan. By this point the domestic popularity of the Final Fantasy franchise as a whole was rivaled only by that of Super Mario and Dragon Quest; two of the three biggest gaming franchises in Japan, that is to say, were dorakue games. In the Western world, however, the picture was quite different.

In the United States, the first-generation Nintendo Famicom was known as the Nintendo Entertainment System, the juggernaut of a console that rescued videogames in the eyes of the wider culture from the status of a brief-lived fad to that of a long-lived entertainment staple, on par with movies in terms of economics if not cachet. Yet JRPGs weren’t a part of that initial success story. The first example of the breed didn’t even reach American shores until 1989. It was, appropriately enough, the original Dragon Quest, the game that had started it all in Japan; it was renamed Dragon Warrior for the American market, due to a conflict with an old American tabletop RPG by the name of Dragonquest whose trademarks had been acquired by the notoriously litigious TSR of Dungeons & Dragons fame. Enix did make some efforts to modernize the game, such as replacing the password-based saving system with a battery that let you save your state to the cartridge itself. (This same method had been adopted by Final Fantasy and most other post-Dragon Quest JRPGs on the Japanese market as well.) But American console gamers had no real frame of reference for Dragon Warrior, and even the marketing geniuses of Nintendo, which published the game itself in North America, struggled to provide them one. With cartridges piling up in Stateside warehouses, they were reduced to giving away hundreds of thousands of copies of Dragon Warrior to the subscribers of Nintendo Power magazine. For some of these, the game came as a revelation seven years before Final Fantasy VII; for most, it was an inscrutable curiosity that was quickly tossed aside.

Final Fantasy I, on the other hand, received a more encouraging reception in the United States when it arrived there in 1990: it sold 700,000 copies, 300,000 more than it had managed in Japan. Nevertheless, with the 8-bit Nintendo console reaching the end of its lifespan, Square didn’t bother to export the next two games in the series. It did export Final Fantasy IV for the Super Famicom — or rather the Super Nintendo Entertainment System, as it was known in the West. The results were disappointing in light of the previous game’s reception, so much so that Square didn’t export Final Fantasy V.[5] This habit of skipping over parts of the series led to a confusing state of affairs whereby the American Final Fantasy II was the Japanese Final Fantasy IV and the American Final Fantasy III was the Japanese Final Fantasy VI. The latter game shifted barely one-fourth as many copies in the three-times larger American marketplace as it had in Japan — not disastrous numbers, but still less than the first Final Fantasy had managed.

The heart of the problem was translation, in both the literal sense of the words on the screen and a broader cultural sense. Believing with some justification that the early American consoles from Atari and others had been undone by a glut of substandard product, Nintendo had long made a science out of the polishing of gameplay, demanding that every prospective release survive an unrelenting testing gauntlet before it was granted the “Nintendo Seal of Quality” and approved for sale. But the company had no experience or expertise in polishing text to a similar degree. In most cases, this didn’t matter; most Nintendo games contained very little text anyway. But RPGs were the exception. The increasingly intricate story lines which JRPGs were embracing by the early 1990s demanded good translations by native speakers. What many of them actually got was something very different, leaving even those American gamers who wanted to fall in love baffled by the Japanese-English-dictionary-derived word salads they saw before them. And then, too, many of the games’ cultural concerns and references were distinctly Japanese, such that even a perfect translation might have left Americans confused. It was, one might say, the Blade of Cuisinart problem in reverse.

To be sure, there were Americans who found all of the barriers to entry into these deeply foreign worlds to be more bracing than intimidating, who took on the challenge of meeting the games on their own terms, often emerging with a lifelong passion for all things Japanese. At this stage, though, they were the distinct minority. In Japan and the United States alike, the conventional wisdom through the mid-1990s was that JRPGs didn’t and couldn’t sell well overseas; this was regarded as a fact of life as fundamental as the vagaries of climate. (Thanks to this belief, none of the mainline Final Fantasy games to date had been released in Europe at all.) It would take Final Fantasy VII and a dramatic, controversial switch of platforms on the part of Square to change that. But once those things happened… look out. The JRPG would conquer the world yet.


Where to Get It: Remastered and newly translated versions of the Japanese Final Fantasy I, II, III, IV, V, and VI are available on Steam. The Dragon Quest series has been converted to iOS and Android apps, just a search away on the Apple and Google stores.





Sources: the books Pure Invention: How Japan Made the Modern World by Matt Alt, Power-Up: How Japanese Video Games Gave the World an Extra Life by Chris Kohler, Fight, Magic, Items: The History of Final Fantasy, Dragon Quest, and the Rise of Japanese RPGs in the West by Aidan Moher, and Atari to Zelda: Japan’s Videogames in Global Contexts by Mia Consalvo. GameFan of September 1997; Retro Gamer 69, 108, and 170; Computer Gaming World of September 1985 and December 1992.

Online sources include Polygon‘s authoritative “Final Fantasy 7: An Oral History”; “The Long Life of the Original Wizardry” by guest poster Alex on The CRPG Addict blog; “Wizardry: Japanese Franchise Outlook” by Sam Derboo at Hardcore Gaming 101, plus an interview with Robert Woodhead, conducted by Jared Petty at the same site; “Wizardry‘s Wild Ride from West to East” at VentureBeat; “The Secret History of AnimEigo” at that company’s homepage; Robert Woodhead’s slides from a presentation at the 2022 KansasFest Apple II convention; a post on tabletop Wizardry at the Japanese Tabletop RPG blog; and “Dragon Warrior: Aging Disgracefully” by Nick Simberg at (the now-defunct) DamnLag.

Footnotes
1 Adams was not an entirely disinterested observer. He was already working with Robert Woodhead on Wizardry IV, and had in fact accompanied him to Japan in this capacity.
2 A man with an international perspective if ever there was one, Rogers would later go on to fame and fortune as the man who brought Tetris out of the Soviet Union.
3 It would be briefly revived for one final game, the appropriately named Wizardry 8, in 2001.
4 In another unexpected link between East and West, one of his most important assistants became Nasir Gebelli, an Iranian who had fled his country’s revolution for the United States in 1979 and become a game-programming rock star on the Apple II. After the heyday of the lone-wolf bedroom auteur began to fade there, Doug Carlston, the head of Brøderbund, brokered a job for him with his friends in Japan. There he maximized the Famicom’s potential in the same way he had that of the Apple II, despite not speaking a word of Japanese when he arrived. (“We’d go to a restaurant and no matter what he’d order — spaghetti or eggs — they’d always bring out steak,” Sakaguchi laughs.) Gebelli would program the first three Final Fantasy games almost all by himself.
5 Square did release a few spinoff games under the Final Fantasy label in the United States and Europe as another way of testing the Western market: Final Fantasy Legend and Final Fantasy Adventure for the Nintendo Game Boy handheld console, and Final Fantasy: Mystic Quest for the Super Nintendo. Although none of them were huge sellers, the Game Boy titles in particular have their fans even today.
 

A Dialog in Real Time (Strategy)

At the end of the 1990s, the two most popular genres in computer gaming were the first-person shooter and the real-time strategy game. They were so dominant that most of the industry’s executives seemed to want to publish little else. And yet at the beginning of the decade neither genre even existed.

The stories of how the two rose to such heady heights are a fascinating study in contrasts, of how influences in media can either go off like an explosion in a TNT factory or like the slow burn of a long fuse. Sometimes something appears and everyone knows instantly that it’s just changed everything; when the Beatles dropped Sgt. Pepper’s Lonely Hearts Club Band in 1967, there was no doubt that the proverbial goalposts in rock music had just been shifted. Other times, though, influence can take years to make itself felt, as was the case for another album of 1967, The Velvet Underground & Nico, about which Brian Eno would later famously say that it “only sold 10,000 copies, but everyone who bought it formed a band.”

Games are the same. Gaming’s Sgt. Pepper was DOOM, which came roaring up out of the shareware underground at the tail end of 1993 to sweep everything from its path, blowing away all of the industry’s extant conventional wisdom about what games would become and what role they would play in the broader culture. Gaming’s Velvet Underground, on the other hand, was the avatar of real-time strategy, which came to the world in the deceptive guise of a sequel in the fall of 1992. Dune II: The Building of a Dynasty sported its Roman numeral because its transnational publisher had gotten its transatlantic cables crossed and accidentally wound up with two separate games based on Frank Herbert’s epic 1965 science-fiction novel: one made in Paris, the other in Las Vegas. The former turned out to be a surprisingly evocative and playable fusion of adventure and strategy game, but it was the latter that would quietly — oh, so quietly in the beginning! — shift the tectonic plates of gaming.

For Dune II, which was developed by Westwood Studios and published by Virgin Games, really was the first recognizable implementation of the genre of real-time strategy as we have come to know it since. You chose one of three warring trading houses to play, then moved through a campaign made up of a series of set-piece scenarios, in which your first goal was always to make yourself an army by gathering resources and using them to build structures that could churn out soldiers, tanks, aircraft, and missiles, all of which you controlled by issuing them fairly high-level orders: “go here,” “harvest there,” “defend this building,” “attack that enemy unit.” Once you thought you were strong enough, you could launch your full-on assault on the enemy — or, if you weren’t quick enough, you might find yourself trying to fend off his attack. What made it so different from most of the strategy games of yore was right there in the name: in the fact that it all played out in real time, at a pace that ranged from the brisk to the frantic, making it a test of your rapid-fire mousemanship and your ability to think on your feet. Bits and pieces of all this had been seen before — perhaps most notably in Peter Molyneux and Bullfrog’s Populous and the Sega Genesis game Herzog Zwei — but Dune II was where it all came together to create a gaming paradigm for the ages.

That said, Dune II was very much a diamond in the rough, a game whose groundbreaking aspirations frequently ran up against the brick wall of its limitations. It’s likely to leave anyone who has ever played almost any other real-time-strategy game seething with frustration. It runs at a resolution of just 320 X 200, giving only the tiniest window into the battlefield; it only lets you select and control one unit at a time, making coordinated attacks and defenses hard to pull off; its scenarios are somewhat rote exercises, differing mainly in the number of enemy hordes they throw against you as you advance through the campaign rather than the nature of the terrain or your objectives. Even its fog of war is wonky: the whole battlefield is blank blackness until one of your units gets within visual range, after which you can see everything that goes on there forevermore, whether any of your units can still lay eyes on it or not. And it has no support whatsoever for the multiplayer free-for-alls that are for many or most players the biggest draw of the genre.

Certainly Virgin had no inkling that they had a nascent ludic revolution on their hands. They released Dune II with more of a disinterested shrug than a fulsome fanfare, having expended most of their promotional energies on the other Dune, which had come out just a few months earlier. It’s a testimony to the novelty of the gameplay experience that it did as well as it did. It didn’t become a massive hit, but it sold well enough to earn its budget back and then some on the strength of reasonably positive reviews — although, again, no reviewer had the slightest notion that he was witnessing the birth of what would be one of the two hottest genres in gaming six years in the future. Even Westwood seemed initially to regard Dune II as a one-and-done. They wouldn’t release another game in the genre they had just invented for almost three years.

But the gaming equivalent of all those budding bedroom musicians who listened to that Velvet Underground record was also out there in the case of Dune II. One hungry, up-and-coming studio in particular decided there was much more to be done with the approach it had pioneered. And then Westwood themselves belatedly jumped back into the fray. Thanks to the snowball that these two studios got rolling in earnest during the mid-1990s, the field of real-time strategy would be well and truly saturated by the end of the decade, the yin to DOOM‘s yang. This, then, is the tale of those first few years of these two studios’ competitive dialog, over the course of which they turned the real-time strategy genre from a promising archetype into one of gaming’s two biggest, slickest crowd pleasers.


Blizzard is one of the most successful studios in the history of gaming, so much so that it now lends its name to the Activision Blizzard conglomerate, with annual revenues in the range of $7.5 billion. In 1993, however, it was Westwood, flying high off the hit dungeon crawlers Eye of the Beholder and Lands of Lore, that was by far the more recognizable name. In fact, Blizzard wasn’t even known yet as Blizzard.

The company had been founded in late 1990 by Allen Adham and Mike Morhaime, a couple of kids fresh out of university, on the back of a $15,000 loan from Morhaime’s grandmother. They called their venture Silicon & Synapse, setting it up in a hole-in-the-wall office in Costa Mesa, California. They kept the lights on initially by porting existing games from one platform to another for publishers like Interplay — the same way, as it happened, that Westwood had gotten off the ground almost a decade before. And just as had happened for Westwood, Silicon & Synapse gradually won opportunities to make their own games once they had proven themselves by porting those of others. First there was a little auto-racing game for the Super Nintendo called RPM Racing, then a pseudo-sequel to it called Rock ‘n’ Roll Racing, and then a puzzle platformer called The Lost Vikings, which appeared for the Sega Genesis, MS-DOS, and the Commodore Amiga in addition to the Super Nintendo. None of these titles took the world by storm, but they taught Silicon & Synapse what it took to create refined, playable, mass-market videogames from scratch. All three of those adjectives have continued to define the studio’s output for the past 30 years.

It was now mid-1993; Silicon & Synapse had been in business for more than two and a half years already. Adham and Morhaime wanted to do something different — something bigger, something that would be suitable for computers only rather than the less capable consoles, a real event game that would get their studio’s name out there alongside the Westwoods of the world. And here there emerged another of their company’s future trademarks: rather than invent something new from whole or even partial cloth, they decided to start with something that already existed, but make it better than ever before, polishing it until it gleamed. The source material they chose was none other than Westwood’s Dune II, now relegated to the bargain bins of last year’s releases, but a perennial after-hours favorite at the Silicon & Synapse offices. They all agreed as to the feature they most missed in Dune II: a way to play it against other people, like you could its ancestor Populous. The bane of most multiplayer strategy games was their turn-based nature, which left you waiting around half the time while your buddy was playing. Real-time strategy wouldn’t have this problem of downtime.

That became the design brief for Warcraft: Orcs & Humans: remake Dune II but make it even better, and then add a multiplayer feature. And then, of course, actually try to sell the thing in all the ways Virgin had not really tried to sell its inspiration.

To say that Warcraft was heavily influenced by Dune II hardly captures the reality. Most of the units and buildings to hand have a direct correspondent in Westwood’s game. Even the menu of icons on the side of the screen is a virtual carbon copy — or at least a mirror image. “I defensively joked that, while Warcraft was certainly inspired by Dune II, [our] game was radically different,” laughs Patrick Wyatt, the lead programmer and producer on the project. “Our radar mini-map was in the upper left corner of the screen, whereas theirs was in the bottom right corner.”

In the same spirit of change, Silicon & Synapse replaced the desert planet of Arrakis with a fantasy milieu pitting, as the subtitle would suggest, orcs against humans. The setting and the overall look of Warcraft owe almost as much to the tabletop miniatures game Warhammer as the gameplay does to Dune II; a Warhammer license was seriously considered, but ultimately rejected as too costly and potentially too restrictive. Years later, Wyatt’s father would give him a set of Warhammer miniatures he’d noticed in a shop: “I found these cool toys and they reminded me a lot of your game. You might want to have your legal department contact them because I think they’re ripping you off.”

Suffice to say, then, that Warcraft was even more derivative than most computer games. The saving grace was the same that it would ever be for this studio: that they executed their mishmash of influences so well. The squishy, squint-eyed art is stylized like a cartoon, a wise choice given that the game is still limited to a resolution of just 320 X 200, so that photo-realism is simply not on the cards. The overall look of Warcraft has more in common with contemporary console games than the dark, gritty aesthetic that was becoming so popular on computers. The guttural exclamations of the orcs and the exaggerated Monty Python and the Holy Grail-esque accents of the humans, all courtesy of regular studio staffers rather than outside voice actors, become a chorus line as you order them hither and yon, making Dune II seem rather stodgy and dull by comparison. “We felt too many games took themselves too seriously,” says Patrick Wyatt. “We just wanted to entertain people.”

Slavishly indebted though it is to Dune II in all the broad strokes, Warcraft doesn’t neglect to improve on its inspiration in those nitty-gritty details that can make the difference between satisfaction and frustration for the player. It lets you select up to four units and give them orders at the same time by simply dragging a box around them, a quality-of-life addition whose importance is difficult to overstate, one so fundamental that no real-time-strategy game from this point forward would dare not to include it. Many more keyboard shortcuts are added, a less technically impressive addition but one no less vital to the cause of playability when the action starts to heat up. There are now two resources you need to harvest, lumber and gold, in place of Dune II‘s all-purpose spice. Units are now a little more intelligent about interpreting your orders, such that they no longer blithely ignore targets of opportunity, or let themselves get mauled to death without counterattacking just because you haven’t explicitly told them to. Scenario design is another area of marked improvement: whereas every Dune II scenario is basically the same drill, just with ever more formidable enemies to defeat, Warcraft‘s are more varied and arise more logically out of the story of the campaign, including a couple of special scenarios with no building or gathering at all, where you must return a runaway princess to the fold (as the orcs) or rescue a stranded explorer (as the humans).

The orc on the right who’s stroking his “sword” looks so very, very wrong — and this screenshot doesn’t even show the animation…

And, as the cherry on top, there was multiplayer support. Patrick Wyatt finished his first, experimental implementation of it in June of 1994, then rounded up a colleague in the next cubicle over so that they could become the first two people ever to play a full-fledged real-time-strategy game online. “As we started the game, I felt a greater sense of excitement than I’d ever known playing any other game,” he says.

It was just this magic moment, because it was so invigorating to play against a human and know that it wasn’t some stupid AI. It was a player who was smart and doing his absolute best to crush you. I knew we were making a game that would be fun, but at that moment I knew the game would absolutely kick ass.

While work continued on Warcraft, the company behind it was going through a whirlwind of changes. Recognizing at long last that “Silicon & Synapse” was actually a pretty terrible name, Adham and Morhaime changed it to Chaos Studios, which admittedly wasn’t all that much better, in December of 1993. Two months later, they got an offer they couldn’t refuse: Davidson & Associates, a well-capitalized publisher of educational software that was looking to break into the gaming market, offered to buy the freshly christened Chaos for the princely sum of $6.75 million. It was a massive over-payment for what was in all truth a middling studio at best, such that Adham and Morhaime felt they had no choice but to accept, especially after Davidson vowed to give them complete creative freedom. Three months after the acquisition, the founders decided they simply had to find a decent name for their studio before releasing Warcraft, their hoped-for ticket to the big leagues. Adham picked up a dictionary and started leafing through it. He hit pay dirt when his eyes flitted over the word “blizzard.” “It’s a cool name! Get it?” he asked excitedly. And that was that.

So, Warcraft hit stores in time for the Christmas of 1994, with the name of “Blizzard Entertainment” on the box as both its developer and its publisher — the wheels of the latter role being greased by the distributional muscle of Davidson & Associates. It was not immediately heralded as a game that would change everything, any more than Dune II had been; real-time strategy continued to be more of a slowly growing snowball than the ton of bricks to the side of the head that the first-person shooter had been. Computer Gaming World magazine gave Warcraft a cautious four stars out of five, saying that “if you enjoy frantic real-time games and if you don’t mind a linear structure in your strategic challenges, Warcraft is a good buy.” At the same time, the extent of the game’s debt to Dune II was hardly lost on the reviewer: “It’s a good thing for Blizzard that there’s no precedent for ‘look and feel’ lawsuits in computer entertainment.”[1]

Warcraft would eventually sell 400,000 units, bettering Dune II‘s numbers by a factor of four or more. As soon as it became clear that it was doing reasonably well, Blizzard started on a sequel.


Out of everyone who looked at Warcraft, no one did so with more interest — or with more consternation at its close kinship with Dune II — than the folks at Westwood. “When I played Warcraft, the similarities between it and Dune II were pretty… blatant, so I didn’t know what to think,” says the Westwood designer Adam Isgreen. Patrick Wyatt of Blizzard got the impression that his counterparts “weren’t exactly happy” at the slavish copying when they met up at trade shows, though he “reckoned they should have been pleased that we’d taken their game as a base for ours.” Only gradually did it become clear why Warcraft‘s existence was a matter of such concern for Westwood: because they themselves had finally decided to make another game in the style of Dune II.

The game that Westwood was making could easily have wound up looking even more like the one that Blizzard had just released. The original plan was to call it Command & Conquer: Fortress of Stone and to set it in a fantasy world. (Westwood had been calling their real-time-strategy engine “Command & Conquer” since the days of promoting Dune II.) “It was going to have goldmines and wood for building things. Sound familiar?” chuckles Westwood’s co-founder Louis Castle. “There were going to be two factions, humans and faerie folk… pretty fricking close to orcs versus humans.”

Some months into development, however, Westwood decided to change directions, to return to a science-fictional setting closer to that of Dune II. For they wanted their game to be a hit, and it seemed to them that fantasy wasn’t the best guarantee of such a thing: CRPGs were in the doldrums, and the most recent big strategy release with a fantasy theme, MicroProse’s cult-classic-to-be Master of Magic, hadn’t done all that well either. Foreboding near-future stories, however, were all the rage; witness the stellar sales of X-COM, another MicroProse strategy game of 1994. “We felt that if we were going to make something that was massive,” says Castle, “it had to be something that anybody and everybody could relate to. Everybody understands a tank; everybody understands a guy with a machine gun. I don’t have to explain to them what this spell is.” Westwood concluded that they had made the right decision as soon as they began making the switch in software: “Tanks and vehicles just felt better.” The game lost its subtitle to become simply Command & Conquer.

While the folks at Blizzard were plundering Warhammer for their units and buildings, those at Westwood were trolling the Jane’s catalogs of current military hardware and Soldier of Fortune magazine. “We assumed that anything that was talked about as possibly coming was already here,” says Castle, “and that was what inspired the units.” The analogue of Dune II‘s spice — the resource around which everything else revolved — became an awesomely powerful space-borne element come to earth known as tiberium.

Westwood included most of the shortcuts and conveniences that Blizzard had built into Warcraft, but went one or two steps further more often than not. For example, they also made it possible to select multiple units by dragging a box around them, but in their game there was no limit to the number of units that could be selected in this way. The keyboard shortcuts they added not only let you quickly issue commands to units and buildings, but also jump around the map instantly to custom viewpoints you could define. And up to four players rather than just two could now play together at once over a local network or the Internet, for some true mayhem. Then, too, scenario design was not only more varied than in Dune II but was even more so than in Warcraft, with a number of “guerilla” missions in the campaigns that involved no resource gathering or construction. It’s difficult to say to what extent these were cases of parallel innovation and to what extent they were deliberate attempts to one-up what Warcraft had done. It was probably a bit of both, given that Warcraft was released a good nine months before Command & Conquer, giving Westwood plenty of time to study it.

But other innovations in Command & Conquer were without any precedent. The onscreen menus could now be toggled on and off, for instance, a brilliant stroke that gave you a better view of the battlefield when you really needed it. Likewise, Westwood differentiated the factions in the game in a way that had never been done before. Whereas the different houses in Dune II and the orcs and humans in Warcraft corresponded almost unit for unit, the factions in Command & Conquer reflected sharply opposing military philosophies, demanding markedly different styles of play: the establishment Global Defense Initiative had slow, strong, and expensive units, encouraging a methodical approach to building up and husbanding your forces, while the terroristic Brotherhood of Nod had weaker but faster and cheaper minions better suited to madcap kamikaze rushes than carefully orchestrated combined-arms operations.

Yet the most immediately obvious difference between Command & Conquer and Warcraft was all the stuff around the game. Warcraft had been made on a relatively small budget with floppy disks in mind. It sported only a brief opening cinematic, after which scenario briefings consisted of nothing but scrolling text and a single voice over a static image. Command & Conquer, by contrast, was made for CD-ROM from the outset, by a studio with deeper pockets that had invested a great deal of time and energy into both 3D animation and full-motion video, that trendy art of incorporating real-world actors and imagery into games. The much more developed story line of Command & Conquer is forwarded by little between-mission movies that, if not likely to make Steven Spielberg nervous, are quite well-done for what they are, featuring as they do mostly professional performers — such as a local Las Vegas weatherman playing a television-news anchorman — who were shot by a real film crew in Westwood’s custom-built blue-screen studio. Westwood’s secret weapon here was Joseph Kucan, a veteran theater director and actor who oversaw the film shoots and personally played the charismatic Nod leader Kane so well that he became the very face of Command & Conquer in the eyes of most gamers, arguably the most memorable actual character ever associated with a genre better known for its hordes of generic little automatons. Louis Castle reckons that at least half of Command & Conquer‘s considerable budget went into the cut scenes.

The game was released with high hopes in August of 1995. Computer Gaming World gave it a pretty good review, four stars out of five: “The entertainment factor is high enough and the action fast enough to please all but the most jaded wargamers.”

The gaming public would take to it even more than that review might imply. But in the meantime…


As I noted in an earlier article, numbered sequels weren’t really commonplace for strategy games prior to the mid-1990s. Blizzard had originally imagined Warcraft as a strategy franchise of a different stripe: each game bearing the name would take the same real-time approach into a completely different milieu, as SSI was doing at the time with their “5-Star General” series of turn-based strategy games that had begun with Panzer General and continued with the likes of Fantasy General and Star General. But Blizzard soon decided to make their sequel a straight continuation of the first game, an approach to which real-time strategy lent itself much more naturally than more traditional styles of strategy game; the set-piece story of a campaign could, after all, always be continued using all the ways that Hollywood had long since discovered for keeping a good thing going. The only snafu was that either the orcs or the humans could presumably have won the war in the first game, depending on which side the player chose. No matter: Blizzard decided the sequel would be more interesting if the orcs had been the victors and ran with that.

Which isn’t to say that building upon its predecessor’s deathless fiction was ever the real point of Warcraft II: Tides of Darkness. Blizzard knew now that they had a competitor in Westwood, and were in any case eager to add to the sequel all of the features and ideas that time had not allowed them to include in the first game. There would be waterways and boats to sail on them, along with oil, a third resource, one that could only be mined at sea. Both sides would get new units to play with, while elves, dwarves, trolls, ogres, and goblins would join the fray as allies of one of the two main racial factions. The interface would be tweaked with another welcome shortcut: selecting a unit and right-clicking somewhere would cause it to carry out the most logical action there without having to waste time choosing from a menu. (After all, if you selected a worker unit and sent him to a goldmine, you almost certainly wanted him to start collecting gold. Why should you have to tell the game the obvious in some more convoluted fashion?)

But perhaps the most vital improvement was in the fog of war. The simplistic implementations of same seen in the first Warcraft and Command & Conquer were inherited from Dune II: areas of the map that had been seen once by any of your units were revealed permanently, even if said units went away or were destroyed. Blizzard now made it so that you would see only a back-dated snapshot of areas currently out of your units’ line of sight, reflecting what was there the last time one of your units had eyes on them. This innovation, no mean feat of programming on the part of Patrick Wyatt, brought a whole new strategic layer to the game. Reconnaissance suddenly became something you had to think about all the time, not just once.
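For readers who like to see such things spelled out, here is a minimal sketch of that “remembered snapshot” approach in present-day Python. It is not Blizzard’s code, needless to say; the grid layout and the units’ tiles_in_sight() method are stand-ins invented purely for illustration.

UNSEEN, REMEMBERED, VISIBLE = 0, 1, 2

class FogOfWar:
    def __init__(self, width, height):
        self.width, self.height = width, height
        self.state = [[UNSEEN] * width for _ in range(height)]
        # A per-tile snapshot of what was there the last time it was seen.
        self.memory = [[None] * width for _ in range(height)]

    def update(self, units, live_map):
        # Anything visible last frame decays into a remembered snapshot...
        for y in range(self.height):
            for x in range(self.width):
                if self.state[y][x] == VISIBLE:
                    self.state[y][x] = REMEMBERED
        # ...then every tile currently in a friendly unit's line of sight is
        # revealed again, and its snapshot refreshed.
        for unit in units:
            for x, y in unit.tiles_in_sight():
                self.state[y][x] = VISIBLE
                self.memory[y][x] = live_map[y][x]

    def tile_to_draw(self, x, y, live_map):
        if self.state[y][x] == VISIBLE:
            return live_map[y][x]      # the live, up-to-date view
        if self.state[y][x] == REMEMBERED:
            return self.memory[y][x]   # a possibly stale, back-dated snapshot
        return None                    # never seen: still blank blackness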

Other improvements were not so conceptually groundbreaking, but no less essential for keeping ahead of the Joneses (or rather the Westwoods). For example, Blizzard raised the screen-resolution stakes, from 320 X 200 to 640 X 480, even as they raised the number of people who could play together online from Command & Conquer‘s four to eight. And, while there was still a limit on the number of units you could select at one time using Blizzard’s engine, that limit at least got raised from the first Warcraft‘s four to nine.

The story and its presentation, however, didn’t get much more elaborate than last time out. While Westwood was hedging its bets by keeping one foot in the “interactive movie” space of games like Wing Commander III, Blizzard was happy to “just” make Warcraft a game. The two series were coming to evince very distinct personalities and philosophies, just as gamers were sorting themselves into opposing groups of fans — with a large overlap of less partisan souls in between them, of course.

Released in December of 1995, Warcraft II managed to shake Computer Gaming World free of some of its last reservations about the burgeoning genre of real-time strategy, garnering four and a half stars out of five: “If you enjoy fantasy gaming, then this is a sure bet for you.” It joined Command & Conquer near the top of the bestseller lists, becoming the game that well and truly made Blizzard a name to be reckoned with, a peer in every sense with Westwood.

Meanwhile, and despite the sometimes bitter rivalry between the two studios and their fans, Command & Conquer and Warcraft II together made real-time strategy into a commercial juggernaut. Both games became sensations, with no need to shrink from comparison to even DOOM in terms of their sales and impact on the culture of gaming. Each eventually sold more than 3 million copies, numbers that even the established Westwood, much less the upstart Blizzard, had never dreamed of reaching before, enough to enshrine both games among the dozen or so most popular computer games of the entire 1990s. More than three years after real-time strategy’s first trial run in Dune II, the genre had arrived for good and all. Both Westwood and Blizzard rushed to get expansion packs of additional scenarios for their latest entries in the genre to market, even as dozens of other developers dropped whatever else they were doing in order to make real-time-strategy games of their own. Within a couple of years, store shelves would be positively buckling under the weight of their creations — some good, some bad, some more imaginative, some less so, but all rendered just a bit anonymous by the sheer scale of the deluge. And yet even the most also-ran of the also-rans sold surprisingly well, which explained why they just kept right on coming. Not until well into the new millennium would the tide begin to slacken.


With Command & Conquer and Warcraft II, Westwood and Blizzard had arrived at an implementation of real-time strategy that even the modern player can probably get on with. Yet there is one more game that I just have to mention here because it’s so loaded with a quality that the genre is known for even less than its characters: that of humor. Command & Conquer: Red Alert is as hilarious as it is unexpected, the only game of this style that’s ever made me laugh out loud.

Red Alert was first envisioned as a scenario pack that would move the action of its parent game to World War II. But two things happened as work progressed on it: Westwood decided it was different enough from the first game that it really ought to stand alone, and, as designer Adam Isgreen says, “we found straight-up history really boring for a game.” What they gave us instead of straight-up history is bat-guano insane, even by the standards of videogame fictions.

We’re in World War II, but in a parallel timeline, because Albert Einstein — why him? I have no idea! — chose to travel back in time on the day of the Trinity test of the atomic bomb and kill Adolf Hitler. Unfortunately, all that’s accomplished is to make world conquest easier for Joseph Stalin. Now Einstein is trying to save the democratic world order by building ever more powerful gadgets for its military. Meanwhile the Soviet Union is experimenting with the more fantastical ideas of Nikola Tesla, which in this timeline actually work. So, the battles just keep getting crazier and crazier as the game wears on, with teleporters sending units jumping instantly from one end of the map to the other, Tesla coils zapping them with lightning, and a fetching commando named Tanya taking out entire cities all by herself when she isn’t chewing the scenery in the cut scenes. Those actually display even better production values than the ones in the first game, but the script has become pure, unadulterated camp worthy of Mel Brooks, complete with a Stalin who ought to be up there singing and dancing alongside Der Führer in Springtime for Hitler. Even our old friend Kane shows up for a cameo. It’s one of the most excessive spectacles of stupidity I’ve ever seen in a game… and one of the funniest.

Joseph Stalin gets rough with an underling. When you don’t have the Darth Vader force grip, you have to do things the old-fashioned way…

Up there at the top is the killer commando Tanya, who struts across the battlefield with no regard for proportion.

Released in the dying days of 1996, Red Alert didn’t add that much that was new to the real-time-strategy template, technically speaking; in some areas such as fog of war, it still lagged behind the year-old Warcraft II. Nonetheless, it exudes so much joy that it’s by far my favorite of the games I’ve written about today. If you ask me, it would have been a better gaming world had the makers of at least a few of the po-faced real-time-strategy games that followed looked here for inspiration. Why not? Red Alert too sold in the multiple millions.






(Sources: the book Stay Awhile and Listen, Book I by David L. Craddock; Computer Gaming World of January 1995, March 1995, December 1995, March 1996, June 1996, September 1996, December 1996, March 1997, June 1997, and July 1997; Retro Gamer 48, 111, 128, and 148; The One of January 1993; the short film included with the Command & Conquer: The First Decade game collection. Online sources include Patrick Wyatt’s recollections at his blog Code of Honor, Dan Griliopoulos’s collection of interviews with Westwood alumni at Funambulism, Soren Johnson’s interview with Louis Castle for his Designer’s Notes podcast, and Richard Moss’s real-time-strategy retrospective for Ars Technica.

Warcraft: Orcs & Humans and Warcraft II: Tides of Darkness are available as digital purchases at GOG.com. The first Command & Conquer and Red Alert are available in remastered versions as a bundle from Steam.)

Footnotes
1 This statement was actually not correct; makers of standup arcade games of the classic era and the makers of Tetris had successfully cowed the cloning competition in the courts.
 


The Next Generation in Graphics, Part 2: Three Dimensions in Hardware

Most of the academic papers about 3D graphics that John Carmack so assiduously studied during the 1990s stemmed from, of all times and places, the Salt Lake City, Utah, of the 1970s. This state of affairs was a credit to one man by the name of Dave Evans.

Born in Salt Lake City in 1924, Evans was a physicist by training and an electrical engineer by inclination, who found his way to the highest rungs of computing research by way of the aviation industry. By the early 1960s, he was at the University of California, Berkeley, where he did important work in the field of time-sharing, taking the first step toward the democratization of computing by making it possible for multiple people to use one of the ultra-expensive big computers of the day at the same time, each of them accessing it through a separate dumb terminal. During this same period, Evans befriended one Ivan Sutherland, who deserves perhaps more than any other person the title of Father of Computer Graphics as we know them today.

For, in the course of earning his PhD at MIT, Sutherland developed a landmark software application known as Sketchpad, the first interactive computer-based drawing program of any stripe. Sketchpad did not do 3D graphics. It did, however, record its user’s drawings as points and lines on a two-dimensional plane. The potential for adding a third dimension to its Flatland-esque world — a Z coordinate to go along with X and Y — was lost on no one, least of all Sutherland himself. His 1963 thesis on Sketchpad rocketed him into the academic stratosphere.

Sketchpad in action.

In 1964, at the ripe old age of 26, Sutherland succeeded J.C.R. Licklider as head of the computer division of the Defense Department’s Advanced Research Projects Agency (ARPA), the most remarkable technology incubator in computing history. Alas, he proved ill-suited to the role of administrator: he was too young, too introverted — just too nerdy, as a later generation would have put it. But during the unhappy year he spent there before getting back to the pure research that was his real passion, he put the University of Utah on the computing map, largely as a favor to his friend Dave Evans.

Evans may have left Salt Lake City more than a decade before, but he remained a devout Mormon, who found the counterculture values of the Berkeley of the 1960s rather uncongenial. So, he had decided to take his old alma mater up on an offer to come home and build a computer-science department there. Sutherland now awarded said department a small ARPA contract, one fairly insignificant in itself. What was significant was that it brought the University of Utah into the ARPA club of elite research institutions that were otherwise clustered on the coasts. An early place on the ARPANET, the predecessor to the modern Internet, was not the least of the perks which would come its way as a result.

Evans looked for a niche for his university amidst the august company it was suddenly joining. The territory of time-sharing was pretty much staked; extensive research in that field was already going full steam ahead at places like MIT and Berkeley. Ditto networking and artificial intelligence and the nuts and bolts of hardware design. Computer graphics, though… that was something else. There were smart minds here and there working on them — count Ivan Sutherland as Exhibit Number One — but no real research hubs dedicated to them. So, it was settled: computer graphics would become the University of Utah’s specialty. In what can only be described as a fantastic coup, in 1968 Evans convinced Sutherland himself to abandon the East Coast prestige of Harvard, where he had gone after leaving his post as the head of ARPA, in favor of the Mormon badlands of Utah.

Things just snowballed from there. Evans and Sutherland assembled around them an incredible constellation of bright young sparks, who over the course of the next decade defined the terms and mapped the geography of the field of 3D graphics as we still know it today, writing papers that remain as relevant today as they were half a century ago — or perchance more so, given the rise of 3D games. For example, the two most commonly used algorithms for calculating the vagaries of light and shade in 3D games stem directly from the University of Utah: Gouraud shading was invented by a Utah student named Henri Gouraud in 1971, while Phong shading was invented by another named Bui Tuong Phong in 1973.

But of course, lots of other students passed through the university without leaving so indelible a mark. One of these was Jim Clark, who would still be semi-anonymous today if he hadn’t gone on to become an entrepreneur who co-founded two of the most important tech companies of the late twentieth century.



When you’ve written as many capsule biographies as I have, you come to realize that the idea of the truly self-made person is for the most part a myth. Certainly almost all of the famous names in computing history were, long before any of their other qualities entered into the equation, lucky: lucky in their time and place of birth, in their familial circumstances, perhaps in (sad as it is to say) their race and gender, definitely in the opportunities that were offered to them. This isn’t to disparage their accomplishments; they did, after all, still need to have the vision to grasp the brass ring of opportunity and the talent to make the most of it. Suffice to say, then, that luck is a prerequisite but the farthest thing from a guarantee.

Every once in a while, however, I come across someone who really did almost literally make something out of nothing. One of these folks is Jim Clark. If today as a soon-to-be octogenarian he indulges as enthusiastically as any of his Old White Guy peers in the clichéd trappings of obscene wealth, from the mansions, yachts, cars, and wine to the Victoria’s Secret model he has taken for a fourth wife, he can at least credibly claim to have pulled himself up to his current station in life entirely by his own bootstraps.

Clark was born in 1944, in a place that made Salt Lake City seem like a cosmopolitan metropolis by comparison: the small Texas Panhandle town of Plainview. He grew up dirt poor, the son of a single mother living well below the poverty line. Nobody expected much of anything from him, and he obliged their lack of expectations. “I thought the whole world was shit and I was living in the middle of it,” he recalls.

An indifferent student at best, he was expelled from high school his junior year for telling a teacher to go to hell. At loose ends, he opted for the classic gambit of running away to sea: he joined the Navy at age seventeen. It was only when the Navy gave him a standardized math test, and he scored the highest in his group of recruits on it, that it began to dawn on him that he might actually be good at something. Encouraged by a few instructors to pursue his aptitude, he enrolled in correspondence courses to fill his free time when out plying the world’s oceans as a crewman on a destroyer.

Ten years later, in 1971, the high-school dropout, now six years out of the Navy and married with children, found himself working on a physics PhD at Louisiana State University. Clark:

I noticed in Physics Today an article that observed that physicists getting PhDs from places like Harvard, MIT, Yale, and so on didn’t like the jobs they were getting. And I thought, well, what am I doing — I’m getting a PhD in physics from Louisiana State University! And I kept thinking, well, I’m married, and I’ve got these obligations. By this time, I had a second child, so I was real eager to get a good job, and I just got discouraged about physics. And a friend of mine pointed to the University of Utah as having a computer-graphics specialty. I didn’t know much about it, but I was good with geometry and physics, which involves a lot of geometry.

So, Clark applied for a spot at the University of Utah and was accepted.

But, as I already implied, he didn’t become a star there. His 1974 thesis was entitled “3D Design of Free-Form B-Spline Surfaces”; it was a solid piece of work addressing a practical problem, but not anything to really get the juices flowing. Afterward, he spent half a decade bouncing around from campus to campus as an adjunct professor: the Universities of California at Santa Cruz and Berkeley, the New York Institute of Technology, Stanford. He was fairly miserable throughout. As an academic of no special note, he was hired primarily as an instructor rather than a researcher, and he wasn’t at all cut out for the job, being too impatient, too irascible. Proving the old adage that the child is the father of the man, he was fired from at least one post for insubordination, just like that angry teenager who had once told off his high-school teacher. Meanwhile he went through not one but two wives. “I was in this kind of downbeat funk,” he says. “Dark, dark, dark.”

It was now early 1979. At Stanford, Clark was working right next door to Xerox’s famed Palo Alto Research Center (PARC), which was inventing much of the modern paradigm of computing, from mice and menus to laser printers and local-area networking. Some of the colleagues Clark had known at the University of Utah were happily ensconced over there. But he was still on the outside looking in. It was infuriating — and yet he was about to find a way to make his mark at last.

Hardware engineering at the time was in the throes of a revolution and its backlash, over a technology that went by the mild-mannered name of “Very Large Scale Integration” (VLSI). The integrated circuit, which packed multiple transistors onto a single microchip, had been invented at Texas Instruments at the end of the 1950s, and had become a staple of computer design already during the following decade. Yet those early implementations often put only a relative handful of transistors on a chip, meaning that they still required lots of chips to accomplish anything useful. A turning point came in 1971 with the Intel 4004, the world’s first microprocessor — i.e., the first time that anyone put the entire brain of a computer on a single chip. Barely remarked at the time, that leap would result in the first kit computers being made available for home users in 1975, followed by the Trinity of 1977, the first three plug-em-in-and-go personal computers suitable for the home. Even then, though, there were many in the academic establishment who scoffed at the idea of VLSI, which required a new, in some ways uglier approach to designing circuitry. In a vivid illustration that being a visionary in some areas doesn’t preclude one from being a reactionary in others, many of the folks at PARC were among the scoffers. Look how far we’ve come doing things one way, they said. Why change?

A PARC researcher named Lynn Conway was enraged by such hidebound thinking. A rare female hardware engineer, she had made scant progress to date getting her point of view through to the old boys’ club that surrounded her at PARC. So, broadening her line of attack, she wrote a paper about the basic techniques of modern chip design, and sent it out to a dozen or so universities along with a tempting offer: if any students or faculty wished to draw up schematics for a chip of their own and send them to her, she would arrange to have the chip fabricated in real silicon and sent back to its proud parent. The point of it all was just to get people to see the potential of VLSI, not to push forward the state of the art. And indeed, just as she had expected, almost all of the designs she received were trivially simple by the standards of even the microchip industry of 1979: digital timekeepers, adding machines, and the like. But one was unexpectedly, even crazily complex. Alone among the submissions, it bore a precautionary notice of copyright, from one James Clark. He called his creation the Geometry Engine.

The Geometry Engine was the first and, it seems likely, only microchip that Jim Clark ever personally attempted to design in his life. It was created in response to a fundamental problem that had been vexing 3D modelers since the very beginning: that 3D graphics required shocking quantities of mathematical calculations to bring to life, scaling almost exponentially with the complexity of the scene to be depicted. And worse, the type of math they required was not the type that the researchers’ computers were especially good at.

Wait a moment, some of you might be saying. Isn’t math the very thing that computers do? It’s right there in the name: they compute things. Well, yes, but not all types of math are created equal. Modern computers are also digital devices, meaning they are naturally equipped to deal only with discrete things. Like the game of DOOM, theirs is a universe of stair steps rather than smooth slopes. They like integer numbers, not decimals. Even in the 1960s and 1970s, they could approximate the latter through a storage format known as floating point, but they dealt with these floating-point numbers at least an order of magnitude slower than they did whole numbers, as well as requiring a lot more memory to store them. For this reason, programmers avoided them whenever possible.

And it actually was possible to do so a surprisingly large amount of the time. Most of what computers were commonly used for could be accomplished using only whole numbers — for example, by using Euclidean division, which yields a quotient and a remainder, in place of decimal division. Even financial software could be built with integers alone, counting the total number of cents rather than representing dollars and cents as floating-point values. 3D-graphics software, however, was one place where you just couldn’t get around them. Creating a reasonably accurate mathematical representation of an analog 3D space forced you to use floating-point numbers. And this in turn made 3D graphics slow.
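
To make the trick concrete, here is a minimal sketch in C (my own illustration, not code from the period) of a ledger kept entirely in integer cents, with Euclidean division used only when the total needs to be displayed.

#include <stdio.h>

/* Keep money as a whole number of cents and use integer division plus the
   remainder, rather than a floating-point count of dollars. */
int main(void)
{
    long total_cents = 0;

    /* Three line items, priced in cents, so no decimals are ever needed. */
    total_cents += 1999;   /* $19.99 */
    total_cents += 250;    /*  $2.50 */
    total_cents += 7;      /*  $0.07 */

    /* Euclidean division recovers dollars and cents only for display. */
    long dollars = total_cents / 100;
    long cents   = total_cents % 100;

    printf("Total: $%ld.%02ld\n", dollars, cents);  /* prints: Total: $22.56 */
    return 0;
}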

Jim Clark certainly wasn’t the first person to think about designing a specialized piece of hardware to lift some of the burden from general-purpose computer designs, an add-on optimized for doing the sorts of mathematical operations that 3D graphics required and nothing else. Various gadgets along these lines had been built already, starting a decade or more before his Geometry Engine. Clark was the first, however, to think of packing it all onto a single chip — or at worst a small collection of them — that could live on a microcomputer’s motherboard or on a card mounted in a slot, that could be mass-produced and sold in the thousands or millions. His description of his “slave processor” sounded disarmingly modest (not, it must be said, a quality for which Clark is typically noted): “It is a four-component vector, floating-point processor for accomplishing three basic operations in computer graphics: matrix transformations, clipping, and mapping to output-device coordinates [i.e., going from an analog world space to pixels in a digital raster].” Yet it was a truly revolutionary idea, the genesis of the graphical processing units (GPUs) of today, which are in some ways more technically complex than the CPUs they serve. The Geometry Engine still needed to use floating-point numbers — it was, after all, still a digital device — but the old engineering doctrine that specialization yields efficiency came into play: it was optimized to do only floating-point calculations, and only a tiny subset of all the ones possible at that, just as quickly as it could.
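
For those who like to see the idea in concrete terms, here is a rough sketch in C of the three operations Clark names (transform, clip, and map to device coordinates), as one might write them in software today. The identity matrix, the trivial near-plane test, and the 1024 X 768 output size are placeholder assumptions of mine, not details of the Geometry Engine itself.

#include <stdio.h>

/* A four-component (homogeneous) vector of the kind Clark describes. */
typedef struct { float x, y, z, w; } Vec4;

/* Matrix transformation: multiply the vector by a 4x4 matrix. */
static Vec4 transform(const float m[4][4], Vec4 v)
{
    Vec4 r;
    r.x = m[0][0]*v.x + m[0][1]*v.y + m[0][2]*v.z + m[0][3]*v.w;
    r.y = m[1][0]*v.x + m[1][1]*v.y + m[1][2]*v.z + m[1][3]*v.w;
    r.z = m[2][0]*v.x + m[2][1]*v.y + m[2][2]*v.z + m[2][3]*v.w;
    r.w = m[3][0]*v.x + m[3][1]*v.y + m[3][2]*v.z + m[3][3]*v.w;
    return r;
}

int main(void)
{
    /* An identity matrix stands in for a full model-view-projection transform. */
    const float mvp[4][4] = {
        {1, 0, 0, 0},
        {0, 1, 0, 0},
        {0, 0, 1, 0},
        {0, 0, 0, 1},
    };
    Vec4 v = transform(mvp, (Vec4){ 0.5f, -0.25f, 0.0f, 1.0f });

    if (v.w <= 0.0f)          /* crude stand-in for clipping */
        return 0;

    /* Perspective divide, then map from [-1,1] space to a 1024x768 raster. */
    float ndc_x = v.x / v.w, ndc_y = v.y / v.w;
    int px = (int)((ndc_x + 1.0f) * 0.5f * 1024.0f);
    int py = (int)((1.0f - ndc_y) * 0.5f * 768.0f);
    printf("pixel: (%d, %d)\n", px, py);   /* prints: pixel: (768, 480) */
    return 0;
}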

The Geometry Engine changed Clark’s life. At last, he had something exciting and uniquely his. “All of these people started coming up and wanting to be part of my project,” he remembers. Always an awkward fit in academia, he turned his thinking in a different direction, adopting the mindset of an entrepreneur. “He reinvented his relationship to the world in a way that is considered normal only in California,” writes journalist Michael Lewis in a book about Clark. “No one who had been in his life to that point would be in it ten years later. His wife, his friends, his colleagues, even his casual acquaintances — they’d all be new.” Clark himself wouldn’t hesitate to blast his former profession in later years with all the fury of a professor scorned.

I love the metric of business. It’s money. It’s real simple. You either make money or you don’t. The metric of the university is politics. Does that person like you? Do all these people like you enough to say, “Yeah, he’s worthy?”

But by whatever metric, success didn’t come easy. The Geometry Engine and all it entailed proved a harder sell with the movers and shakers in commercial computing than it had with his colleagues at Stanford. It wasn’t until 1982 that he was able to scrape together the funding to found a company called Silicon Graphics, Incorporated (SGI), and even then he was forced to give 85 percent of his company’s shares to others in order to make it a reality. Then it took another two years after that to actually ship the first hardware.

The market segment SGI was targeting is one that no longer really exists. The machines it made were technically microcomputers, being built around microprocessors, but they were not intended for the homes of ordinary consumers, nor even for the cubicles of ordinary office workers. These were much higher-end, more expensive machines than those, even if they could fit under a desk like one of them. They were called workstation computers. The typical customer spent tens or hundreds of thousands of dollars on them in the service of some highly demanding task or another.

In the case of the SGI machines, of course, that task was almost always related to graphics, usually 3D graphics. Their expense wasn’t bound up with their CPUs; in the beginning, these were fairly plebeian chips from the Motorola 68000 series, the same line used in such consumer-grade personal computers as the Apple Macintosh and the Commodore Amiga. No, the justification of their high price tags rather lay with their custom GPUs, which even in 1984 already went far beyond the likes of Clark’s old Geometry Engine. An SGI GPU was a sort of black box for 3D graphics: feed it all of the data that constituted a scene on one side, and watch a glorious visual representation emerge at the other, thanks to an array of specialized circuitry designed for that purpose and no other.

Now that it had finally gotten off the ground, SGI became very successful very quickly. Its machines were widely used in staple 3D applications like computer-aided design (CAD) and flight simulation, whilst also opening up new vistas in video and film production. They drove the shift in Hollywood from special effects made using miniature models and stop-motion techniques dating back to the era of King Kong to the extensive use of computer-generated imagery (CGI) that we see even in the purportedly live-action films of today. (Steven Spielberg and George Lucas were among SGI’s first and best customers.) “When a moviegoer rubbed his eyes and said, ‘What’ll they think of next?’,” writes Michael Lewis, “it was usually because SGI had upgraded its machines.”

The company peaked in the early 1990s, when its graphics workstations were the key to CGI-driven blockbusters like Terminator 2 and Jurassic Park. Never mind the names that flashed by in the opening credits; everyone could agree that the computer-generated dinosaurs were the real stars of Jurassic Park. SGI was bringing in over $3 billion in annual revenue and had close to 15,000 employees by 1993, the year that movie was released. That same year, President Bill Clinton and Vice President Al Gore came out personally to SGI’s offices in Silicon Valley to celebrate this American success story.

SGI’s hardware subsystem for graphics, the beating heart of its business model, was known in 1993 as the RealityEngine2. This latest GPU was, wrote Byte magazine in a contemporary article, “richly parallel,” meaning that it could do many calculations simultaneously, in contrast to a traditional CPU, which could only execute one instruction at a time. (Such parallelism is the reason that modern GPUs are so often used for some math-intensive non-graphical applications, such as crypto-currency mining and machine learning.) To support this black box and deliver to its well-heeled customers a complete turnkey solution for all their graphics needs, SGI had also spearheaded an open programming standard for 3D applications, known as the Open Graphics Library, or OpenGL. Even the CPUs in its latest machines were SGI’s own; it had purchased a maker of same called MIPS Technologies in 1990.
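
To give a flavor of what programming against OpenGL looked like, here is a minimal example in C using the library’s classic immediate-mode calls; the venerable GLUT toolkit, my choice purely for the window plumbing, handles the setup.

#include <GL/glut.h>

/* Draw a single triangle with OpenGL's classic immediate-mode calls. */
static void display(void)
{
    glClear(GL_COLOR_BUFFER_BIT);
    glBegin(GL_TRIANGLES);
        glColor3f(1.0f, 0.0f, 0.0f); glVertex3f(-0.6f, -0.5f, 0.0f);
        glColor3f(0.0f, 1.0f, 0.0f); glVertex3f( 0.6f, -0.5f, 0.0f);
        glColor3f(0.0f, 0.0f, 1.0f); glVertex3f( 0.0f,  0.6f, 0.0f);
    glEnd();
    glutSwapBuffers();
}

int main(int argc, char **argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB);
    glutCreateWindow("OpenGL immediate mode");
    glutDisplayFunc(display);
    glutMainLoop();
    return 0;
}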

But all of this success did not imply a harmonious corporation. Jim Clark was convinced that he had been hard done by back in 1982, when he was forced to give up 85 percent of his brainchild in order to secure the funding he needed, then screwed over again when he was compelled by his board to give up the CEO post to a former Hewlett Packard executive named Ed McCracken in 1984. The two men had been at vicious loggerheads for years; Clark, who could be downright mean when the mood struck him, reduced McCracken to public tears on at least one occasion. At one memorable corporate retreat intended to repair the toxic atmosphere in the board room, recalls Clark, “the psychologist determined that everyone else on the executive committee was passive aggressive. I was just aggressive.”

Clark claims that the most substantive bone of contention was McCracken’s blasé indifference to the so-called low-end market, meaning all of those non-workstation-class personal computers that were proliferating in the millions during the 1980s and early 1990s. If SGI’s machines were advancing by leaps and bounds, these consumer-grade computers were hopscotching on a rocket. “You could see a time when the PC would be able to do the sort of graphics that [our] machines did,” says Clark. But McCracken, for one, couldn’t see it, was content to live fat and happy off of the high prices and high profit margins of SGI’s current machines.

He did authorize some experiments at the lower end, but his heart was never in it. In 1990, SGI deigned to put a limited subset of the RealityEngine smorgasbord onto an add-on card for Intel-based personal computers. Calling the result IrisVision, the company hopefully talked up its price of “under $5000,” which really was absurdly low by SGI’s usual standards. What with its complete lack of software support and its way-too-high price for this marketplace, IrisVision went nowhere, whereupon McCracken took the failure as a vindication of his position. “This is a low-margin business, and we’re a high-margin company, so we’re going to stop doing that,” he said.

Despite McCracken’s indifference, Clark eventually managed to broker a deal with Nintendo to make a MIPS microprocessor and an SGI GPU the heart of the latter’s Nintendo 64 videogame console. But he quit after yet another shouting match with McCracken in 1994, two years before it hit the street.

He had been right all along about the inevitable course of the industry, however undiplomatically he may have stated his case over the years. Personal computers did indeed start to swallow the workstation market almost at the exact point in time that Clark bailed. The profits from the Nintendo deal were rich, but they were largely erased by another of McCracken’s pet projects, an ill-advised acquisition of the struggling supercomputer maker Cray. Meanwhile, with McCracken so obviously more interested in selling a handful of supercomputers for millions of dollars each than millions upon millions of consoles for a few hundred dollars each, a group of frustrated SGI employees left the company to help Nintendo make the GameCube, the followup to the Nintendo 64, on their own. It was all downhill for SGI after that, bottoming out in a 2009 bankruptcy and liquidation.

As for Clark, he would go on to a second entrepreneurial act as remarkable as his first, abandoning 3D graphics to make a World Wide Web browser with Marc Andreessen. We will say farewell to him here, but you can read the story of his second company Netscape’s meteoric rise and fall elsewhere on this site.



Now, though, I’d like to return to the scene of SGI’s glory days, introducing in the process three new starring players. Gary Tarolli and Scott Sellers were talented young engineers who were recruited to SGI in the 1980s; Ross Smith was a marketing and business-development type who initially worked for MIPS Technologies, then ended up at SGI when it acquired that company in 1990. The three became fast friends. Being of a younger generation, they didn’t share the contempt for everyday personal computers that dominated among their company’s upper management. Whereas the latter laughed at the primitiveness of games like Wolfenstein 3D and Ultima Underworld, if they bothered to notice them at all, our trio saw a brewing revolution in gaming, and thought about how much it could be helped along by hardware-accelerated 3D graphics.

Convinced that there was a huge opportunity here, they begged their managers to get into the gaming space. But, still smarting from the recent failure of IrisVision, McCracken and his cronies rejected their pleas out of hand. (One of the small mysteries in this story is why their efforts never came to the attention of Jim Clark, why an alliance was never formed. The likely answer is that Clark had, by his own admission, largely removed himself from the day-to-day running of SGI by this time, being more commonly seen on his boat than in his office.) At last, Tarolli, Sellers, Smith, and some like-minded colleagues ran another offer up the flagpole. You aren’t doing anything with IrisVision, they said. Let us form a spinoff company of our own to try to sell it. And much to their own astonishment, this time management agreed.

They decided to call their new company Pellucid — not the best name in the world, sounding as it did rather like a medicine of some sort, but then they were still green at all this. The technology they had to peddle was a couple of years old, but it still blew just about anything else in the MS-DOS/Windows space out of the water, being able to display 16 million colors at a resolution of 1024 X 768, with 3D acceleration built-in. (Contrast this with the SVGA card found in the typical home computer of the time, which could do 256 colors at 640 X 480, with no 3D affordances.) Pellucid rebranded the old IrisVision the ProGraphics 1024. Thanks to the relentless march of chip-fabrication technology, they found that they could now manufacture it cheaply enough to be able to sell it for as little as $1000 — still pricey, to be sure, but a price that some hardcore gamers, as well as others with a strong interest in having the best graphics possible, might just be willing to pay.

The problem, the folks at Pellucid soon came to realize, was a well-nigh intractable deadlock between the chicken and the egg. Without software written to take advantage of its more advanced capabilities, the ProGraphics 1024 was just another SVGA graphics card, selling for a ridiculously high price. So, consumers waited for said software to arrive. Meanwhile software developers, looking at an installed base that was still effectively nonexistent, saw no reason to begin supporting the card. Breaking this logjam would require a concerted public-relations and developer-outreach effort, the likes of which the shoestring spinoff couldn’t possibly afford.

They thought they had done an end-run around the problem in May of 1993, when they agreed, with the blessing of SGI, to sell Pellucid kit and caboodle to a major up-and-comer in consumer computing known as Media Vision, which at the time sold “multimedia upgrade kits” consisting of CD-ROM drives and sound cards. But Media Vision’s ambitions knew no bounds: they intended to branch out into many other kinds of hardware and software. With proven people like Stan Cornyn, a legendary hit-maker from the music industry, on their management rolls and with millions and millions of dollars on hand to fund their efforts, Media Vision looked poised to dominate.

It seemed the perfect landing place for Pellucid; Media Vision had all the enthusiasm for the consumer market that SGI had lacked. The new parent company’s management said, correctly, that the ProGraphics 1024 was too old by now and too expensive to ever become a volume product, but that 3D acceleration’s time would come as soon as the current wave of excitement over CD-ROM and multimedia began to ebb and people started looking for the next big thing. When that happened, Media Vision would be there with a newer, more reasonably priced 3D card, thanks to the people who had once called themselves Pellucid. It sounded pretty good, even if in the here and now it did seem to entail more waiting around than anything else.

The ProGraphics 1024 board in Media Vision livery.

There was just one stumbling block: “Media Vision was run by crooks,” as Scott Sellers puts it. In April of 1994, a scandal erupted in the business pages of the nation’s newspapers. It turned out that Media Vision had been an experiment in “fake it until you make it” on a gigantic scale. Its founders had engaged in just about every form of malfeasance imaginable, creating a financial house of cards whose honest revenues were a minuscule fraction of what everyone had assumed them to be. By mid-summer, the company had blown away like so much dust in the wind, still providing income only for the lawyers who were left to pick over the corpse. (At least two people would eventually be sent to prison for their roles in the conspiracy.) The former Pellucid folks were left as high and dry as everyone else who had gotten into bed with Media Vision. All of their efforts to date had led to the sale of no more than 2000 graphics cards.

That same summer of 1994, a prominent Silicon Valley figure named Gordon Campbell was looking for interesting projects in which to invest. Campbell had earned his reputation as one of the Valley’s wise men through a company called Chips and Technologies (C&T), which he had co-founded in 1984. One of those hidden movers in the computer industry, C&T had largely invented the concept of the chipset: chips or small collections of them that could be integrated directly into a computer’s motherboard to perform functions that used to be placed on add-on cards. C&T had first made a name for itself by reducing IBM’s bulky nineteen-chip EGA graphics card to just four chips that were cheaper to make and consumed less power. Campbell’s firm thrived alongside the cost-conscious PC clone industry, which by the beginning of the 1990s was rendering IBM itself, the very company whose products it had once so unabashedly copied, all but irrelevant. Onboard video, onboard sound, disk controllers, basic firmware… you name it, C&T had a cheap, good-enough-for-the-average-consumer chipset to handle it.

But now Campbell had left C&T “in pursuit of new opportunities,” as they say in Valley speak. Looking for a marketing person for one of the startups in which he had invested a stake, he interviewed a young man named Ross Smith who had SGI on his résumé — always a plus. But the interview didn’t go well. Campbell:

It was the worst interview I think I’ve ever had. And so finally, I just turned to him and I said, “Okay, your heart’s not in this interview. What do you really want to do?”

And he kind of looks surprised and says, well, there are these two other guys, and we want to start a 3D-graphics company. And the next thing I know, we had set up a meeting. And we had, over a lot of beers, a discussion which led these guys to all come and work at my office. And that set up the start of 3Dfx.

It seemed to all of them that, after all of the delays and blind alleys, it truly was now or never to make a mark. For hardware-accelerated 3D graphics were already beginning to trickle down into the consumer space. In standup arcades, games like Daytona USA and Virtua Fighter were using rudimentary GPUs. Ditto the Sega Saturn and the Sony PlayStation, the latest in home-videogame consoles, both of which were on the verge of release in Japan, with American debuts expected in 1995. Meanwhile the software-only, 2.5D graphics of DOOM were taking the world of hardcore computer gamers by storm. The men behind 3Dfx felt that the next move must surely seem obvious to many other people besides themselves. The only reason the masses of computer-game players and developers weren’t clamoring for 3D graphics cards already was that they didn’t yet realize what such gadgets could do for them.

Still, they were all wary of getting back into the add-on board market, where they had been burned so badly before. Selling products directly to consumers required retail access and marketing muscle that they still lacked. Instead, following in the footsteps of C&T, they decided to sell a 3D chipset only to other companies, who could then build it into add-on boards for personal computers, standup-arcade machines, whatever they wished.

At the same time, though, they wanted their technology to be known, in exactly the way that the anonymous chipsets made by C&T were not. In the pursuit of this aspiration, Gordon Campbell found inspiration from another company that had become a household name despite selling very little directly to consumers. Intel had launched the “Intel Inside” campaign in 1990, just as the era of the PC clone was giving way to a more amorphous commodity architecture. The company introduced a requirement that the makers of computers which used its CPUs include the Intel Inside logo on their packaging and on the cases of the computers themselves, even as it made the same logo the centerpiece of a standalone advertising campaign in print and on television. The effort paid off; Intel became almost as identified with the Second Home Computer Revolution in the minds of consumers as was Microsoft, whose own logo showed up on their screens every time they booted into Windows. People took to calling the emerging duopoly the “Wintel” juggernaut, a name which has stuck around to this day.

So, it was decided: a requirement to display a similarly snazzy 3Dfx logo would be written into that company’s contracts as well. The 3Dfx name itself was a vast improvement over Pellucid. As time went on, 3Dfx would continue to display a near-genius for catchy branding: “Voodoo” for the chipset itself, “GLide” for the software library that controlled it. All of this reflected a business savvy the likes of which hadn’t been seen from Pellucid, one that was a credit both to Campbell’s steady hand and to the accumulating experience of the other three partners.

But none of it would have mattered without the right product. Campbell told his trio of protégés in no uncertain terms that they were never going to make a dent in computer gaming with a $1000 video card; they needed to get the price down to a third of that at the most, which meant the chipset itself could cost the manufacturers who used it in their products not much more than $100 a pop. That was a tall order, especially considering that gamers’ expectations of graphical fidelity weren’t diminishing. On the contrary: the old Pellucid card hadn’t even been able to do 3D texture mapping, a failing that gamers would never accept post-DOOM.

It was left to Gary Tarolli and Scott Sellers to figure out what absolutely had to be in there, such as the aforementioned texture mapping, and what they could get away with tossing overboard. Driven by the remorseless logic of chip-fabrication costs, they wound up going much farther with the tossing than they ever could have imagined when they started out. There could be no talk of 24-bit color or unusually high resolutions: 16-bit color (offering a little over 65,000 onscreen shades) at a resolution of 640 X 480 would be the limit.[1] Likewise, they threw out the capability of handling any polygons except for the simplest of them all, the humble triangle. For, they realized, you could make almost any solid you liked by combining triangular surfaces together. With enough triangles in your world — and their chipset would let you have up to 1 million of them — you needn’t lament the absence of the other polygons all that much.
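
A short sketch may help show why the restriction mattered less than it sounds: any convex polygon can be fed to a triangles-only rasterizer as a fan of triangles sharing one vertex. The emit_triangle() routine below is a hypothetical stand-in of my own, not a real Voodoo entry point.

#include <stdio.h>

typedef struct { float x, y, z; } Vertex;

/* Hypothetical stand-in for whatever the hardware actually accepts; here it
   just reports the corners of each triangle it is handed. */
static void emit_triangle(Vertex a, Vertex b, Vertex c)
{
    printf("triangle: (%.0f,%.0f) (%.0f,%.0f) (%.0f,%.0f)\n",
           a.x, a.y, b.x, b.y, c.x, c.y);
}

/* Any convex polygon becomes a fan of triangles sharing its first vertex. */
static void emit_convex_polygon(const Vertex *v, int count)
{
    for (int i = 1; i + 1 < count; i++)
        emit_triangle(v[0], v[i], v[i + 1]);
}

int main(void)
{
    /* A square is fed to the triangles-only rasterizer as two triangles. */
    Vertex quad[4] = { {0,0,0}, {1,0,0}, {1,1,0}, {0,1,0} };
    emit_convex_polygon(quad, 4);
    return 0;
}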

Sellers had another epiphany soon after. Intel’s latest CPU, to which gamers were quickly migrating, was the Pentium. It had a built-in floating-point co-processor which was… not too shabby, actually. It should therefore be possible to take the first phase of the 3D-graphics pipeline — the modeling phase — out of the GPU entirely and just let the CPU handle it. And so another crucial decision was made: they would concern themselves only with the rendering or rasterization phase, which was a much greater challenge to tackle in software alone, even with a Pentium. Another huge piece of the puzzle was thus neatly excised — or rather outsourced back to the place where it was already being done in current games. This would have been heresy at SGI, whose ethic had always been to do it all in the GPU. But then, they were no longer at SGI, were they?
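
The division of labor is perhaps easiest to picture as code. The sketch below is schematic and entirely my own: transform_to_screen() stands in for the modeling work done on the Pentium’s floating-point unit, while card_rasterize_triangle() stands in for the hand-off to the chipset’s rendering hardware; neither corresponds to a real function in the Voodoo’s actual programming interface.

#include <stdio.h>

typedef struct { float x, y, z; } Vtx;             /* world space */
typedef struct { int x, y; float depth; } ScreenVtx;

/* Phase 1: runs in software on the CPU's floating-point unit.
   (A trivial fixed projection stands in for the real modeling math.) */
static ScreenVtx transform_to_screen(Vtx v)
{
    ScreenVtx s = { (int)(v.x * 320 + 320), (int)(240 - v.y * 240), v.z };
    return s;
}

/* Phase 2: in reality this is where the card takes over and fills in the
   pixels; here it only reports what it was asked to draw. */
static void card_rasterize_triangle(ScreenVtx a, ScreenVtx b, ScreenVtx c)
{
    printf("rasterize (%d,%d) (%d,%d) (%d,%d)\n", a.x, a.y, b.x, b.y, c.x, c.y);
}

int main(void)
{
    Vtx tri[3] = { {-0.5f, -0.5f, 1}, {0.5f, -0.5f, 1}, {0, 0.5f, 1} };
    ScreenVtx a = transform_to_screen(tri[0]);   /* CPU's job */
    ScreenVtx b = transform_to_screen(tri[1]);
    ScreenVtx c = transform_to_screen(tri[2]);
    card_rasterize_triangle(a, b, c);            /* card's job */
    return 0;
}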

Undoubtedly their bravest decision of all was to throw out any and all 2D-graphics capabilities — i.e., the neat rasters of pixels used to display Windows desktops and word processors and all of those earlier, less exciting games. Makers of Voodoo boards would have to include a cable to connect the existing, everyday graphics cards inside their customers’ machines to their new 3D ones. When you ran non-3D applications, the Voodoo card would simply pass the video signal on to the monitor unchanged. But when you fired up a 3D game, it would take over from the other board. A relay inside made a distinctly audible click when this happened. Far from regarding the noise as a bug, gamers would soon come to consider it a feature. “Because you knew it was time to have fun,” as Ross Smith puts it.

It was a radical plan, to be sure. These new cards would be useful only for games, would have no other purpose whatsoever; there would be no justifying this hardware purchase to the parents or the spouse with talk of productivity or educational applications. Nevertheless, the cost savings seemed worth it. After all, almost everyone who initially went out to buy the new cards would already have a perfectly good 2D video card in their computer. Why make them pay extra to duplicate those functions?

The final design used just two custom chips. One of them, internally known as the T-Rex (Jurassic Park was still in the air), was dedicated exclusively to the texture mapping that had been so conspicuously missing from the Pellucid board. Another, called the FBI (“Frame Buffer Interface”), did everything else required in the rendering phase. Add to this pair a few less exciting off-the-shelf chips and four megabytes’ worth of RAM chips, put it on a board with the appropriate connectors, and you had yourself a 3Dfx Voodoo GPU.

Needless to say, getting this far took some time. Tarolli, Sellers, and Smith spent the last half of 1994 camped out in Campbell’s office, deciding what they wanted to do and how they wanted to do it and securing the funding they needed to make it happen. Then they spent all of 1995 in offices of their own, hiring about a dozen people to help them, praying all the time that no other killer product would emerge to make all of their efforts moot. While they worked, the Sega Saturn and Sony PlayStation did indeed arrive on American shores, becoming the first gaming devices equipped with 3D GPUs to reach American homes in quantity. The 3Dfx crew were not overly impressed by either console — and yet they found the public’s warm reception of the PlayStation in particular oddly encouraging. “That showed, at a very rudimentary level, what could be done with 3D graphics with very crude texture mapping,” says Scott Sellers. “And it was pretty abysmal quality. But the consumers were just eating it up.”

They got their first finished chipsets back from their Taiwanese fabricator at the end of January 1996, then spent Super Bowl weekend soldering them into place and testing them. There were a few teething problems, but in the end everything came together as expected. They had their 3D chipset, at the beginning of a year destined to be dominated by the likes of Duke Nukem 3D and Quake. It seemed the perfect product for a time when gamers couldn’t get enough 3D mayhem. “If it had been a couple of years earlier,” says Gary Tarolli, “it would have been too early. If it had been a couple of years later, it would have been too late.” As it was, they were ready to go at the Goldilocks moment. Now they just had to sell their chipset to gamers — which meant they first had to sell it to game developers and board makers.



Did you enjoy this article? If so, please think about pitching in to help me make many more like it. You can pledge any amount you like.



(Sources: the books The Dream Machine by M. Mitchell Waldrop, Dealers of Lightning: Xerox PARC and the Dawn of the Computer Age by Michael A. Hiltzik, and The New New Thing: A Silicon Valley Story by Michael Lewis; Byte of May 1992 and November 1993; InfoWorld of April 22 1991 and May 31 1993; Next Generation of October 1997; ACM’s Computer Graphics journal of July 1982; Wired of January 1994 and October 1994. Online sources include the Computer History Museum’s “oral histories” with Jim Clark, Forest Baskett, and the founders of 3Dfx; Wayne Carlson’s “Critical History of Computer Graphics and Animation”; “Fall of Voodoo” by Ernie Smith at Tedium; Fabian Sanglard’s reconstruction of the workings of the Voodoo 1 chips; “Famous Graphics Chips: 3Dfx’s Voodoo” by Dr. Jon Peddie at the IEEE Computer Society’s site; and an internal technical description of the Voodoo technology archived at bitsavers.org.)

Footnotes

1 A resolution of 800 X 600 was technically possible using the Voodoo chipset, but using this resolution meant that the programmer could not use a vital affordance known as Z-buffering. For this reason, it was almost never seen in the wild.