Indiana Jones and the Fate of Atlantis (or, Of Movies and Games and Whether the Twain Shall Meet)

You ask why there are movements in movie history. Why all of a sudden there are great Japanese films, or great Italian films, or great Australian films, or whatever. And it’s usually because there are a number of people that cross-pollinated each other.

— Francis Ford Coppola

Over the course of the late 1970s and early 1980s, George Lucas and Steven Spielberg remade the very business of film-making for better or for worse, shifting its focus from message movies and character dramas to the special-effects-heavy, escapist blockbusters that still drive Hollywood’s profits to this day. These events, which we might call the rise of the culture of the blockbuster, have long since been enshrined into the canonical history of film, filed under the names of these two leading lights.

Yet no two personalities could possibly have brought about such a dramatic shift on their own. Orbiting around Lucas and Spielberg was an entire galaxy of less prominent talents whose professional lives were profoundly affected by their association with these two new faces of modern Hollywood. They were the fellow travelers who helped Lucas and Spielberg to change the movie industry, all without ever quite being aware that they were part of any particular movement at all. Among the group were names like John Milius, Walter Murch, Willard Huyck, Randall Kleiser, Matthew Robbins, and Hal Barwood. “I can’t speak for the others,” says Robbins, “but it was my impression that nobody had the foggiest idea that there was any ‘next wave’ coming. Nobody had set their sights — except perhaps for George [Lucas].”

Out of this group of slightly lesser but undeniably accomplished lights, Hal Barwood is our special person of interest for today. He first met George Lucas in the mid-1960s, when the two were students together in the University of Southern California’s film program. Lucas, who had only recently abandoned his original dream of driving race cars in favor of this new one of making movies, was shy almost to the point of complete inarticulateness, and was far more comfortable futzing over a Moviola editing machine than he was trying to cajole live actors into doing his bidding. Nevertheless, there was a drive to this awkward young man that gradually revealed itself over the course of a longer acquaintance, and Barwood — on the surface, a far more assertive, impressive personality — soon joined Lucas’s loose clique in the role of follower rather than leader. Flying in the face of a Hollywood culture which valued realistic character dramas above all else, Lucas, Barwood, and their pals loved science fiction and fantasy, didn’t consider escapism to be a dirty word, and found the visual aesthetics of film to be every bit as interesting as their actors’ performances.

The bond thus forged would remain strong for many years. When Lucas, through the intermediation of his friend and mentor Francis Ford Coppola, got the chance to direct an actual feature film, Barwood added the first professional film credit to his own CV by helping out with the special effects on what became THX-1138, a foreboding, low-budget work of dystopic science fiction that was released to little attention in 1971. When Lucas hit it big two years later with his very different second film, the warmly nostalgic coming-of-age story American Graffiti, he bought a big old ramshackle mansion in San Anselmo, California, with the first of the proceeds, and Barwood became one of several fellow travelers who joined him and his wife in this first headquarters of a nascent corporate entity called Lucasfilm.

George Lucas, standing at far right, discusses Star Wars with his sounding board in the mid-1970s. Steve Spielberg is wearing the orange cap, and Hal Barwood sits on the fence behind him.

Working together with his good friend Matthew Robbins, Barwood wrote a science-fiction script called Star Dancing during the same period. The two commissioned a former Boeing technical illustrator and CBS animator named Ralph McQuarrie to paint some concept art, thereby to help them pitch the project to the studios. In the end, though, Star Dancing never went anywhere — until Lucas, who was toying with ideas for a second, more crowd-pleasing science-fiction movie of his own, saw McQuarrie’s paintings and was greatly inspired by them, hiring him to develop the vision further for what would become Star Wars. McQuarrie’s concept art had much to do with the eventual green-lighting of Star Wars by 20th Century Fox, and he would continue to shape the look of that film and its two sequels over the long course of their production.

Barwood and Robbins, for their part, became two of the eight people entrusted to read the first draft of the film’s script. They and the others in that San Anselmo house then proceeded to slowly shape the Star Wars script we know today over the course of draft after draft.

Even as they helped Lucas with Star Wars, Barwood and Robbins were still trying to make it as screenwriters in their own right. They sold their first script, a chase caper called The Sugarland Express, to George Lucas’s up-and-coming pal Steven Spielberg, a more recently arrived member of the San Anselmo collective; he turned it into his feature-film directorial debut in 1974. More screenwriting followed, including an uncredited rewrite of the 1977 Spielberg blockbuster Close Encounters of the Third Kind, the first film to benefit in a big way commercially from the new interest in science fiction ignited by Star Wars, which had been released about six months prior to it.

Yet the pair found screenwriting to be an inherently frustrating profession in an industry which regarded the director as a movie’s ultimate creative voice. “In writing, you’re always watching directors ruin your stuff,” says Barwood. “As a writer, you have a certain flavor, style, and emphasis in mind when you write the script, and you’re always shocked when the director comes back with something else. There’s a tendency to want to get your hands on the controls and do it yourself.” Accordingly, Matthew Robbins personally directed the duo’s 1978 comedy Corvette Summer, starring Mark Hamill — Luke Skywalker himself — in his first big post-Star Wars role. The film was a commercial success, even if the reviews weren’t great; it turned out that there was only so much you could do with Hamill, the very archetype of an actor who’s good in one role and one role only.

The duo’s next big project was their most ambitious, time-consuming, and expensive undertaking yet. Dragonslayer was a fantasy epic based loosely on the legend of St. George and the dragon, and was once again directed by Robbins. The special effects were provided by George Lucas’s Industrial Light and Magic. In fact, Dragonslayer became the very first outside, non-Star Wars project the famous effects house took on. Being at least as challenging as anything in any of the Star Wars films, the Dragonslayer effects took them some eighteen months to complete.

Dragonslayer’s pedigree was such that it was widely heralded in the Hollywood trade press as a “surefire success” prior to its release. But it had the misfortune to arrive in theaters on June 26, 1981, two weeks after Lucas and Spielberg’s new blockbuster collaboration, Raiders of the Lost Ark. The latter film shared with Dragonslayer the distributor Paramount Pictures. “Paramount was quite satisfied to go through the summer with the money they were going to get from Raiders of the Lost Ark,” says Barwood, “and paid no attention to our movie. They just dropped it. They just forgot about it.” Dragonslayer flopped utterly, badly damaging the duo’s reputation inside Hollywood as purveyors of marketable cinema.

Barwood got his one and only chance to direct a feature film in 1985, when he took the reins of Warning Sign, yet another screenplay by himself and Robbins. In a telling sign of the damage Dragonslayer’s failure had done to their careers, this latest film, a fairly predictable thriller about a genetic-engineering project run amok, had a budget of about one-quarter what the duo had had to work with on their fantasy epic. It garnered mediocre reviews and box-office receipts, and no one seemed eager to entrust its first-time director with more movies. Barwood and Robbins, both frustrated by the directions their careers had taken since Corvette Summer, decided their long creative partnership had run its course.

George Lucas (far left) and some of his old compatriots at Skywalker Ranch in the mid-1980s. Matthew Robbins is third from left, Hal Barwood fourth from left.

So, Hal Barwood found himself at something of a loose end as the 1980s drew to a close. He was still friendly with George Lucas, if perhaps not quite the bosom buddy he once had been, and he still knew many of the most powerful people in the movie industry, starting with Steven Spielberg — who had gradually shown himself to be, even more so than Lucas, the personification of the new, blockbuster-oriented Hollywood, his prolific career cruising along with hit after hit. But, as Spielberg basked in his success, Barwood had parted ways with his partner and seen his directorial debut become a bust. He hadn’t had a hand in a real hit since 1978, and he hadn’t sold a script at all in quite some time. Just to rub salt into the wounds, his old compatriot Matthew Robbins managed to score another modest hit of his own at last just after the breakup, with the distinctly Spielbergian science-fiction comedy Batteries Not Included, which he directed and whose screenplay he had written with others.

Perhaps it was time for Hal Barwood to try something completely different. He had actually been mulling over his future in this cutthroat industry for some time already. During the promotional tour for Warning Sign, he had made a rather odd comment to Starlog magazine, accompanied by what his interviewer described as a “nervous laugh”: “If movies don’t pan out for me, I have a second career lurking around the corner in entertainment software, working on animated computer games, which I’m doing right now. They’re very sophisticated, animated adventure games.”

Barwood had in fact been fascinated by computers for a long time, ever since he had first encountered the hulking number-crunching monstrosities of the 1960s at university. In 1980, while hanging around the set of Dragonslayer, he had programmed his first game on a state-of-the-art HP-41C calculator as a way of passing the time between takes. Soon after, he joined the PC Revolution, buying an Apple II and starting to tinker. He worked for years on a CRPG for that machine in his spare time — worked for so long that it was still in progress when the Apple II games market began to collapse. Undaunted, he moved on to a Macintosh, where he programmed a storyboarding system for movie makers like himself in HyperCard, selling it as shareware under the name of StoryCard.

With all this experience behind him, it was natural for Barwood now to consider a future in games instead of movies — a future in an industry where the budgets were smaller, the competitors were fewer, and it was much easier to come up with an idea and actually see it through from beginning to end in something like its original form. All of this made quite a contrast to the industry where he had cut his teeth. “The movie business is very difficult for most of us,” he says. “We don’t usually get a majority of our projects to completion. Most of our dreams turn into screenplays, but they stall out at that stage.”

Given his long association with George Lucas, Barwood decided to talk to Lucasfilm Games. They were more than happy to have him, and could certainly relate to the reasons that brought him to their doorstep — for, like Barwood, they had had a somewhat complicated life of it to date in the long shadow cast by Lucas.


Before there was Lucasfilm Games, there was the Lucasfilm Computer Division, founded in 1979 to experiment with computer animation and digital effects, technologies with obvious applications for the making of special-effects-heavy films. Lucasfilm Games had been almost literally an afterthought, an outgrowth of the Computer Division that was formed in 1982, a time when George Lucas and Lucasfilm were flying high and throwing money about willy-nilly.

In those days, a hit computer game, one into which Lucasfilm Games had poured their hearts and souls, might be worth about as much to the parent company’s bottom line as a single Jawa action figure — such was the difference in scale between the computer-games industry of the early 1980s and the other markets where Lucasfilm was a player. George Lucas personally had absolutely no interest in or understanding of games, which didn’t do much for the games division’s profile inside his company. And, most frustrating of all for the young developers who came to work for The House That Star Wars Built, they weren’t allowed to make Star Wars games — nor, for that matter, even Indiana Jones games — thanks to Lucas having signed away those rights to others at the height of the Atari VCS fad. Noah Falstein, one of those young developers, would later characterize this situation as “the best thing that could have happened” to them, as it forced them to develop original fictions instead — leading, he believes, to better, more original games. At the time, however, it couldn’t help but frustrate that the only Lucasfilm properties the games division had access to were middling fare like Labyrinth.

Still, somebody inside Lucasfilm apparently believed in the low-profile division’s potential, for it survived the great bloodletting of the mid-1980s. In response to George Lucas’s expensive divorce settlement and the realization that, with the Star Wars trilogy now completed, there would be no more enormous guaranteed paydays in the future, the company’s executives, with Lucas’s tacit blessing, took an axe to many of their more uncertain or idealistic ventures at that time. Among the divisions that were sold off was the rest of the Computer Division that had indirectly spawned Lucasfilm Games; it would go on to worldwide fame and fortune under the name of Pixar. As for the games people: in 1986, they got to move into some of the vacant space all the downsizing had opened up at Skywalker Ranch, Lucasfilm’s sprawling summer camp cum corporate campus in Marin County, California.

The wall separating Lucasfilm Games from the parent company’s most desirable intellectual properties finally began to fall at the end of the 1980s, when the games people were given access to… no, not yet to Star Wars, but to the next best thing: Indiana Jones, George Lucas’s other great cinematic success story. Raiders of the Lost Ark, the first breakneck tale of the adventurous 1930s archaeologist, as conceived by Lucas and passed on to Steven Spielberg to direct, had become the highest-grossing film of 1981 by nearly a factor of two over its nearest competitor; as we’ve seen, it had trampled less fortunate rivals like poor Hal Barwood’s Dragonslayer into the dust during that year’s summer-blockbuster season. A 1984 sequel, Indiana Jones and the Temple of Doom, had done nearly as well. Now a third and presumably final film, to be called Indiana Jones and the Last Crusade, was in the offing. With the earlier licensing deals they had made for the property now expired, the parent company wanted their games division to make an adventure game out of it.

Indiana Jones and the Last Crusade: The Graphic Adventure was designed by Noah Falstein, David Fox, and Ron Gilbert, all of whom had worked on previous Lucasfilm adventure games, and written using the division’s established SCUMM adventuring engine. This committee approach to the game’s design is typical of the workaday nature of the project as a whole. The designers were given a copy of the movie’s shooting script, and were expected not to deviate too much from it. Ron Gilbert, a comedy writer by disposition and talent, found the need to play it relatively straight particularly frustrating, but it seems safe to say that all of the designers’ creative instincts were somewhat hemmed in by the project’s fixed rules. The end result, while competently executed, hasn’t the same vim and vinegar as Maniac Mansion, the first SCUMM adventure, nor even Zak McKracken and the Alien Mindbenders, the rather less satisfying second SCUMM game.

The boxing scene which opens the game of Indiana Jones and the Last Crusade was a part of the film’s original script which was cut in the editing room. Scenes like these make the game almost of more interest to film historians than to game historians, serving as a window into the movie as it was conceived by its screenwriter Jeffrey Boam.

Perhaps the most interesting aspect of the game arises from the fact that its designers were adapting from the shooting script rather than the finished movie, which they got to see in the Skywalker Ranch theater only when their own project was in the final stages of bug-swatting and polishing. They actually implemented parts of Jeffrey Boam’s script for the movie far more faithfully than Steven Spielberg wound up doing, including numerous scenes — like the boxing sequence at the very beginning of the game — that wound up being cut from the movie. Nevertheless, the game suffers from the fundamental problem of all such overly faithful adaptations from other media: if you’ve seen the movie — and it seemed safe to assume that just about everybody who played the game had seen the movie — what’s the point in walking through the same story again in game form? The designers went to considerable lengths to accommodate curious (or cantankerous) players who make different choices from those of Indiana Jones in the movie, turning their choices into viable alternative pathways rather than mere game overs. But there was only so much they could do in even that respect, given the constraints under which they labored.

Of course, licensed games exist first and foremost because licenses sell games, not because they lead to better ones. Indiana Jones and the Last Crusade became the cinematic blockbuster of the year upon its release in May of 1989, and the game also did extremely well when it hit stores a few weeks later. Having taken this first step into the territory of Lucasfilm’s biggest franchises and been so amply rewarded for it, the people at the games division naturally wanted to keep the good times going. There were likely to be no more Indiana Jones movies; Harrison Ford, the series’s famously prickly star, was publicly declaring himself to be through with playing the character. But did that necessarily mean that there couldn’t be more Indiana Jones games? With the license now free and clear for their use, no one at Lucasfilm Games saw any reason to assume so.


It was at this juncture that Hal Barwood entered the picture, interested in trying a new career in games on for size. Just about any developer in the industry would have jumped at the chance to bring someone like him aboard. Talk of a merger of games with cinema to create a whole new genre of mass-media entertainment — the interactive movie, preferably published on CD-ROM complete with voice acting and perhaps even real-world video footage — dominated games-industry conferences and magazine editorials alike as the 1990s began. But for all their grandiose talk, the game developers clustered in Northern California were all too aware that they lacked the artistic respectability necessary to tempt most of those working with traditional film and video in the southern part of the state into working with them on interactive projects. Hal Barwood might have been no more than a mid-tier Hollywood player at best, but suffice to say that there wasn’t exactly a surfeit of other Hollywood veterans of any stripe who were willing to work on games.

In an online CompuServe conference involving many prominent adventure-game designers that was held on August 24, 1990, not long after Barwood’s arrival at Lucasfilm Games, Noah Falstein could hardly keep himself from openly taunting his competitors about what he had and they didn’t:

One way we’re trying to incorporate real stories into games is to use real storytellers. Next year, we have a game coming out by Hal Barwood, who’s been a successful screenwriter, director, and producer for years. His most well-known movies probably are the un-credited work he did on Close Encounters and Dragonslayer, which he co-wrote and produced. He’s also programmed his own Apple II games in 6502 assembly in his spare time. I’ve already learned a great deal about pacing, tension, character, and other “basic” techniques that come naturally — or seem to — to him. I highly recommend such collaborations to you all. I think we’ve got a game with a new level of story on the way.

Falstein was in a position to learn so much from Barwood because he had been assigned to work with him as his design partner on his first project — the idea being that Barwood would take care of all the “basic” cinematic techniques Falstein enthuses about above, while Falstein would keep the project on track and make sure it worked as a playable adventure game, with soluble puzzles and all the rest.

Ironically given what Raiders of the Lost Ark had done to Dragonslayer, Hal Barwood’s one big chance to become a truly major Hollywood player, the game in question was to be another Indiana Jones game — albeit one with an original story, not bound to any movie. The initial plan had been to sift through the pile of rejected scripts for the third Indiana Jones film and select a likely candidate from them for adaptation into a game. But it turned out that the scripts were all pretty bad, or at least not terribly suitable for interactive adaptation.

The Azores is one of the many exotic locations Indy visits in Indiana Jones and the Fate of Atlantis, all of which are brought to life with aplomb by Lucasfilm Games’s accomplished art team.

So, Barwood and Falstein decided to invent their own story, and thus went looking for legends of lost civilizations that might be worthy of an intrepid archeologist who had already found the Lost Ark of the Covenant and the Holy Grail. “George [Lucas] has established a criterion for Indiana Jones adventures,” said Barwood, “and it’s basically that he should only find things that actually existed — or at least could have existed.” The fabled sunken island of Atlantis seemed the right mixture of myth and history. Barwood:

Our eyes fell upon Atlantis because not only is it an ancient myth known by almost everyone, but it also has wonderful credentials, in that it was first mentioned by Plato a couple thousand years ago. In addition to that, in the early part of this century, the idea was taken over by spiritualists and mystics, who attributed to the Atlanteans this fantastic technology, with airships flying 100 miles per hour, powered by vrill and firestone. When we found this out, we thought to ourselves, “Does this sound as interesting as the Holy Grail? Yes, it does.” Even though it’s a myth, the myth is grounded in a wonderful collection of lore.

The legend’s wide-ranging wellsprings would allow them to send Indy traipsing between exotic locations scattered over much of the world: New York City, Iceland, Guatemala, the Azores, Algiers, Monte Carlo, Crete, Santorini, finally ending up under the ocean at the site of Atlantis itself.

The game known as Indiana Jones and the Fate of Atlantis hits all the notes familiar to anyone who has seen an Indiana Jones film. It seems that the mythical Atlanteans were quite a clever lot, having harnessed energies inconceivable to modern scientists. The Nazis have gotten wind of this, and are fast piecing together the clues that will let them find the undersea site of Atlantis, enter it, and take the technology for themselves, thereby to conquer the world. It’s a fine premise for a globetrotting story of thrills and spills — silly but no more silly than any of those that got made into movies. There’s even a female sparring partner/love interest for Indy, just like in the films. This time her name is Sophia, and she’s a former archaeologist who’s become a professional psychic, much to the scientific-minded Indy’s dismay. Let the battle of barbs begin!

Barwood’s first interactive script really is a good one, with deftly drawn plot beats and characters that, if not exactly deep, nobly fulfill their genre-story purposes as engines of action, tension, or comic relief. Other game writers of the early 1990s weren’t always or even usually all that adept at such basic techniques of fiction as “pacing, tension, and character.” To see how painful a game can be that wants to be like the Indiana Jones movies but lacks the writers to pull it off, one need look no further than Dynamix’s Heart of China, with its humor that lands with the most leaden of thuds, its hero who wants to be a charming rogue but misplaced his charm, and its dull supporting characters who are little more than crude ethnic stereotypes. When you play Fate of Atlantis, by contrast, you feel yourself to be in the hands of a writer who knows exactly what he’s doing.

The game is full of callbacks to the movies’ most famous catchphrases.

That said, the degree to which Indiana Jones and the Fate of Atlantis is truly cinematic can be and too often is overstated. Games are not movies — an obvious point that game developers of the early 1990s frequently lost sight of in their rush to make their interactive movies. Hal Barwood, the Hollywood veteran brought into Lucasfilm Games to apply his cinematic expertise, was ironically far more aware of this fact than were many other game designers who lacked his experience in the medium they were all so eager to ape. Speaking to the science-fiction writer and frequent games-industry commentator Orson Scott Card in 1990, Barwood made some telling observations:

“The companies making animated games keep talking as if games resembled movies,” he [Barwood] said. “But they don’t resemble movies all that much.”

He granted some resemblances, of course, especially with animated film. The dependence on artists; the trickle rate of production, where you’re producing the game or film at the rate of only minutes, or even seconds, of usable footage a day; and the dominant role of the editing process.

Still, though, when it comes to the art of composing a game, inventing it, he said, “What it really resembles is theater. Plays.”

Why? Because as with a play, you have only a few settings you can work with, and they can usually be viewed from only a single angle and at the same distance. You can’t do any meaningful work with closeups (to design and program genuine realistic facial expressions just isn’t worth the huge investment in time and disk space). It’s so hard to make actions clear that you must either rely on dialogue, like most plays, or show only the simplest, most obvious actions.

In movies, it’s just the opposite. You control the pace and rhythm of film by cutting and shifting the action from place to place. The camera never gazes at any one thing for long.

The computer games of this era which most clearly did understand the kinetic language of cinema — the language of a roving camera and a keen-eyed editor — weren’t any of the avowed interactive movies that were being presented in the form of plot- and dialog-heavy adventure games, but rather the comparatively minimalist action games Prince of Persia and Another World, both essentially one-man productions that employed few to no words. Both of these games were aesthetically masterful, but somewhat more problematic in terms of providing their players with interesting forms of interactivity, thus inadvertently illustrating some of the drawbacks of fetishizing movies as the ideal aesthetic model for games.

All the other people who thought they were making interactive movies were “filming” their productions the way only the very earliest movie directors had filmed, before a proper language of film had been created: through a single static “camera.” The end results were anything but cinematic in the way a fellow like Hal Barwood, steeped throughout his life in the language of film, understood that term. His long experience in film-making allowed him to see the essential fact that games were not movies. They might borrow the occasional technique from cinema, but games were a medium — or, perhaps better stated, a matrix of mediums, only one of which was the point-and-click adventure game — with their own unique sets of aesthetic affordances. Countless game developers seemed to be using the term “interactive movie” to designate any game that had a lot of story, but the qualities of being cinematic and being narrative-oriented were really orthogonal to one another.

As in the first Indiana Jones movie, a Nazi submarine features prominently in Fate of Atlantis.

In later years, Hal Barwood would describe a narrative-driven game as more akin to a vintage Russian novel than a movie, a “continuous experience in a fictional world”: something the player lives with over a period of days or weeks, working through it at her own pace, mulling it over even when she isn’t actively sitting in front of the computer. The control or lack thereof of pacing is a critical distinction: a game which leaves any reasonable scope of agency to its player must necessarily cede much or all of the control of pacing to her. And yet pacing is absolutely key to the final effect of any movie, so much so that the director may very well spend months in the editing room after all the shooting is done, trying to get the pacing just right. The game designer doesn’t have anything like the same measure of direct control over the player’s experience, and so must deliver a very different sort of fiction.

Indiana Jones and the Fate of Atlantis works because it understands these differences in media. It plays with the settings, characters, and themes of the Indiana Jones movies to fine effect, but never forgets that it’s an adventure game. The driving mechanic of an adventure game — the solving of intellectual puzzles — is quite distinct from that which drives the movies, and the plot points must be adapted to match. Can you imagine the cinematic Indy sticking wads of chewing gum to the soles of his shoes so as to climb up a coal chute? Or using a bathroom plunger as a handy replacement for a missing control lever inside a Nazi submarine? Puzzles of this nature inevitably make Indy himself into a rather different sort of personality — one a little less cool and collected, a little more prone to be the butt of the joke rather than the source of it. It’s clear from the very opening scenes of Fate of Atlantis, an innovative interactive credits sequence in which poor Indy must endure pratfall after pratfall, that this isn’t quite the same character we thrilled to in the movies. Harrison Ford would have walked out if asked to play a series of scenes like these.

This phenomenon, which we might call the Guybrush Threepwoodization of the hero, is very common in adventure games adapted from other media, given that the point-and-click adventure game as a medium wants always to collapse back into comedy as its default setting. (See, for example, the Neuromancer game, which similarly turns the cool cat Case from its source novel into a put-upon loser, and winds up becoming a pretty great game in the process.) Barwood and Falstein as well decided that Indiana Jones must adapt to the adventure-game genre rather than the adventure-game genre adapting to Indiana Jones. This was most definitely the right approach to take, and is the overarching reason why this game succeeds when so many other interactive adaptations fail.

The one place where the otherwise traditionalist Fate of Atlantis clearly does try to do something new has nothing directly to do with its source material. It rather takes the form of three possible narrative through lines, depending on the player’s individual predilections. After playing through the introductory stages of the game, you’re given a choice between a “Team” path, where Indy and Sophia travel together and cooperate to solve problems; a “Wits” path, where Indy travels largely alone and solves problems using his noggin; and a “Fists” path, where Indy travels alone and solves some though by no means all of his problems using more, shall we say, active measures, which translate into little action-oriented minigames for the player. The last is seemingly the closest to the spirit of the films, but is, tellingly, almost universally agreed to be the least interesting way to play the game.

The Team path, like all of them, has its advantages and disadvantages. It’s great to have Sophia around to help out with things — until she falls down a pit and needs your help getting out.

Although the message would get a little muddled once the game reached stores — “three games in one!” was a tagline few marketers could resist — Barwood and Falstein’s primary motivation in making these separate paths wasn’t to create a more replayable game, but rather a more accessible one. Lucasfilm Games always placed great emphasis on giving their players a positive, non-frustrating experience. Different players would prefer to play in different ways, Barwood and Falstein reasoned, and their game should adapt to that. “Socially-oriented” players — possibly including the female players they were always hoping to reach — would enjoy the Team path with its constant banter and pronounced romantic tension between Indy and Sophia; the stereotypically hardcore, cerebral adventure gamers would enjoy the Wits path; those who just wanted to get through the story and check out all the exotic destinations could go for the Fists path.

Falstein liked to call Fate of Atlantis a “self-tuning game.” In this spirit, until very late in development the branching pathways were presented not as an explicit choice but rather as a more subtle in-game personality test. Early on, Indy needs to get into a theater even though he doesn’t have a ticket. There are three ways to accomplish this: talking his way past the guard at the door; puzzling his way through a maze of boxes to find a hidden fire-escape ladder; or simply sucker-punching the guard. Thus would the player’s predilections be determined. In the interest of transparency and as a sop to replayability, however, the personality test wound up being replaced by a simple menu for choosing your pathway.

The first substantial interactive scene in the game, taking place outside and inside a theater in New York City where Indy’s adventuring partner-to-be Sophia is giving a lecture, was intended to function as a personality test of sorts, determining whether the player was sent down the Team, Wits, or Fists path. In the end, though, its findings were softened to a mere recommendation preceding an explicit choice of paths which is offered to the player at its conclusion.

The idea of multiple pathways turned out not to be as compelling in practice as in theory. Most players took it more as an invitation to play the game three times than as an opportunity to play it once their own way, and were disappointed to discover that the branching pathways encompass only about 60 percent of the game as a whole; the first 10 percent or so, as well as the lengthy climax, are the same no matter which pathway you choose. Nor are the Team and Wits pathways different enough from one another to give the game all that much of a different flavor; they both ultimately come down to solving a series of logic puzzles. The designers’ time would probably have been better spent making one pathway through the game that combined elements of the Team and Wits pathways. Lucasfilm Games never tried anything similar again. The branching pathways were an experiment, and in that sense at least they served their purpose.

A substantial but by no means enormous game, Indiana Jones and the Fate of Atlantis nevertheless spent some two years in development, a lengthy span of time indeed by the standards of the early 1990s, and at least three times as long as the Last Crusade game had taken to make. The protracted development cycle wasn’t a symptom of acrimony, lack of focus, or disorganization, as such things so often tend to be. It was rather a byproduct of the three pathways and, most of all, of Lucasfilm Games’s steadfast commitment to getting everything right, prioritizing quality and polish over release dates in that way that always set them apart from the majority of their peers.

Shortly before the belated release of Fate of Atlantis in the summer of 1992, Lucasfilm Games became LucasArts. The slicker, less subservient appellation was a sign of their rising status within the hierarchy of their parent company, as their games sold in bigger quantities and became a substantial revenue stream in their own right, less and less dwarfed by the money that could be made in movies. Those changing circumstances would prove a not-unmixed blessing for them, forcing them to move out of the rustic environs of Skywalker Ranch and shed much of the personality of a quirky artists’ collective for that of a more hard-nosed media enterprise. On the other hand, at least they’d finally get to make Star Wars games…

But that’s an article for another day. I should conclude this one by noting that Indiana Jones and the Fate of Atlantis [2] was greeted with superlative reviews and equally strong sales; even Steven Spielberg, who unlike his friend George Lucas was a big fan of games, played through it and reportedly enjoyed it very much. A year after the original floppy-disk-based release, LucasArts made a “talkie” version for CD-ROM. Getting Harrison Ford to play Indiana Jones was, as you might imagine, out of the question, but they found a credible soundalike, and handled the voice acting as a whole with their usual commitment to quality, recruiting professional voice talent in Hollywood and recording them in the state-of-the-art facilities of Skywalker Sound.

While hard sales numbers for LucasArts’s adventure games have never surfaced to my knowledge, Noah Falstein claims that Indiana Jones and the Fate of Atlantis sold the most of all of them — a claim I can easily imagine to be correct, given its rapturous critical reception and the intrinsic appeal of its license. Today, it tends to be placed just half a step down from the most-loved of the LucasArts adventures, lacking perhaps some of the manic inspiration of the studio’s completely original creations. Nonetheless, it’s a fine, fine game, well worth playing through twice or thrice — at least its middle section, where the pathways diverge — to experience all it has to offer. This game adapted from a movie franchise, one which succeeds precisely by not trying to be a movie, marked a fine start for Hal Barwood’s new career.

(Sources: the books The Secret History of Star Wars by Michael Kaminski and Droidmaker: George Lucas and the Digital Revolution by Michael Rubin; LucasArts’s Adventurer magazine of Fall 1991, Spring 1992, and Spring 1993; Starlog of July 1981, September 1981, August 1982, May 1985, September 1985, November 1985, December 1985, and February 1988; Amiga Format of February 1992; Compute! of February 1991; Computer Gaming World of September 1992; CU Amiga of June 1992; Electronic Games of October 1992; MacWorld of June 1989; Next Generation of October 1998; PC Review of September 1992; PC Zone of January 2000; Questbusters of September 1992; Zero of August 1991 and March 1992. Online sources include Arcade Attack‘s interviews with Noah Falstein and Hal Barwood; Noah Falstein’s Reddit AMA; MCV‘s articles on “The Early Days of LucasArts”; Noah Falstein’s presentation on LucasArts at Øredev 2017.

Indiana Jones and the Fate of Atlantis is available for purchase on GOG.com.)


  1. An Action Game was also published under the auspices of Lucasfilm Games, but its development was outsourced to a British house. 

  2. Once again, there was also a Fate of Atlantis action game, made in Britain with a particular eye to the 8-bit machines in Europe which couldn’t run the adventure game. And once again, it garnered little attention in comparison to its big brother. 

 

Posted on September 28, 2018 in Digital Antiquaria, Interactive Fiction

 


The Gateway Games of Legend (Preceded by the Legend of Gateway)

Frederik Pohl was still a regular speaker at science-fiction conventions in 2008.

Frederik Pohl, who died on September 2, 2013, at age 93, had one of the most multifaceted careers in the history of written science fiction. Almost uniquely, he played major roles in all three of the estates that constitute science fiction’s culture: the first estate of the creators, in which he wrote stories and novels over a span of many decades; the second estate of the publishers and other business interests, in which he served as a highly respected and influential agent, editor, and anthologist over a similar period of time; and the third estate of fandom, in which his was an important voice from the very dawn of the pulp era, and for which he never lost his enthusiasm, attending science-fiction conventions and casting his votes on fan committees right up to the end.

Growing up between the world wars in Brooklyn, New York, Pohl discovered the nascent literary genre of science fiction in 1930 at the age of 10, when he stumbled upon an issue of Science Wonder Stories. From that moment on, he spent his time at every opportunity with the likes of Edgar Rice Burroughs’s Princess of Mars and Doc Smith’s Lensmen — catnip for any red-blooded young boy with any sense of wonder at all. In comparison to other young science-fiction fanatics, however, Pohl stood out for his personableness, his ambition, his spirit of innovation, and his sheer commitment to the things he loved. He became a founding member of the Brooklyn Science Fiction League, one of the earliest instances of organized science-fiction fandom anywhere in the country, and by the ripe old age of 13 or so had become a prolific editor and publisher of fanzines, many of which enjoyed a total circulation reaching all the way into two figures.

The world of science fiction was indeed still a small one, but that had its advantages in terms of access, especially when one was fortunate enough to live in the pulp publishing capital that was New York City. The boundaries between science-fiction fan and the “profession” of science-fiction writer were porous, and by the latter half of the 1930s Pohl was hobnobbing with such luminaries as Isaac Asimov and Cyril Kornbluth in an informal club of like-minded souls who called themselves the Futurians. He stumbled into the job of acting as the Futurians’ literary agent, which entailed buying stamps and envelopes in bulk, mailing off his friends’ stories to every pulp publisher in the Big Apple, and collecting lots of rejection slips alongside the occasional letter of acceptance in the return post.

In 1939, a 19-year-old Frederik Pohl got himself an editor’s job at the pulp house Popular Publications by virtue of knocking on their door and asking for one. He was given responsibility for Astonishing and Super Science Stories, second-tier magazines that paid their writers a penny per word and trafficked in the stories that weren’t good enough for John W. Campbell’s Astounding, the class of the field. Most of the authors whose stories Pohl accepted are justifiably forgotten today, but he did get his hands every now and then on a sub-par offering from the likes of a Robert A. Heinlein or L. Sprague de Camp that Campbell had rejected; Pohl, alas, was in no position to be so choosy.

But then along came the Second World War to put everything on hold for a while. Pohl wound up joining the Army Air Force, and was rewarded with what he freely described as a “cushy” war experience, working as a meteorologist for a B-24 squadron based in Italy. When he returned from Europe, he returned to publishing as well but, initially, not to science fiction. Now a married man with familial responsibilities, he worked for a few years as an advertising copywriter, then as an editor for the book adjuncts to the magazines Popular Science and Outdoor Life; this constitutes the only substantial period of his entire professional life spent outside science fiction.

Yet the pull of science fiction remained strong, and in the early 1950s Pohl resumed his old role of literary agent for his writer buddies, albeit now on a slightly more professional footing. The locus of science-fiction profits was moving from the pulps to paperback novels and short-story collections in book form; thus Pohl became an editor for Ballantine’s new line of science-fiction paperbacks. By this point, the name of Frederik Pohl, while still fairly obscure to most readers, was known to everyone inside the community of science-fiction writers. He really was on a first-name basis with everyone who was anyone in the field, from hard science fiction’s holy trinity of Isaac Asimov, Robert A. Heinlein, and Arthur C. Clarke to lyrical science fiction’s patron saint Ray Bradbury.

In 1960, a 41-year-old Pohl accepted what was destined to become his most influential behind-the-scenes role of all when he agreed to become editor of a troubled ten-year-old also-ran of a magazine called Galaxy Science Fiction. “The pay was miserable,” he would later remember. “The work was never-ending. It was the best job I ever had in my life.”

At that time, science fiction was on the cusp of a new era, as a more culturally, racially, sexually, and stylistically diverse generation of up-and-coming writers — the so-called “New Wave” — began to arrive on the scene with a new interest in prose quality and formal experimentation, alongside an interest in exploring the future in terms of human psychology rather than technology alone. Many or most of the old guard who had cut their teeth in the pulp era, whose politics tended to veer conservative in predictable middle-aged-white-male fashion, greeted this invasion of beatnik radicals with dismay and contempt. The card-carrying John Birch Society member John W. Campbell, who was still editing Astounding — or rather, as it had recently been renamed, Analog Science Fiction — was particularly vocal in his criticism of all this new-fangled nonsense.

Frederik Pohl, however, was different from most of his peers. He had always read widely outside the field of science fiction as well as inside it, and was as comfortable discussing the stylistic experiments of James Joyce and Marcel Proust as he was the clockwork plots of Doc Smith. And as for politics… well, he had spent four years as a card-carrying member of the American Communist Party — take that, John Campbell! — and even after disillusionment with the Soviet Union of Josef Stalin had put an end to that phase he had retained his leftward bent.

In short: Frederik Pohl welcomed the new arrivals and their new ideas with open arms, making Galaxy a haven for works at the cutting edge of modern science fiction, superseding Campbell’s increasingly musty-smelling Analog as the genre’s journal of record. He had to, as he later put it, “encourage, coax, and sometimes browbeat” his charges to get the very best work out of them, but together they changed the face of science fiction. Indeed, it was arguably helping other writers be their best selves that constituted this multifariously talented man’s most remarkable talent of all. Perhaps his most difficult yet rewarding writer was the famously irascible Harlan Ellison, who burst to prominence in the pages of Galaxy and If, its sister publication, with stories whose names were as scintillatingly trippy as their contents: “‘Repent, Harlequin!’ Said the Ticktockman,” “I Have No Mouth, and I Must Scream,” “The Beast That Shouted Love at the Heart of the World.” Such stories were painfully shaped over the course of a series of bloody rows between editor and writer. Most readers would agree that Ellison’s later fiction has never approached the quality of these early stories, churned out under the editorship of Frederik Pohl.

Burned out at last by the job of editing Galaxy, Pohl stepped down at the end of the 1960s, a decade that had transformed the culture of science fiction every bit as much as it had the larger American culture that surrounded it. In the following decade, however, he continued to push the boundaries as an editor for Bantam Books. It was entirely thanks to him that Bantam in 1975 published Samuel R. Delany’s experimental masterpiece or colossal con job — depending on the beholder — Dhalgren, nearly 900 pages of digressive, circular prose heavily influenced by James Joyce’s equally controversial Finnegans Wake. Whatever else you could say about it, science fiction had come a long way from the days of Science Wonder Stories and Edgar Rice Burroughs.

All of which is to say that Frederik Pohl would have made a major impact on the field of science fiction had he never written a word of his own. In actuality, though, he managed to combine all of the work I’ve described to this point with an ebbing and flowing output of original short stories and novels, beginning with, of all things, a rather awkwardly adolescent poem called “Elegy to a Dead Satellite: Luna,” which appeared in Amazing Stories in 1937. Through the ensuing decades, Pohl was regarded as a competent but second-tier writer, the kind who could craft a solid tale but seldom really dazzled. Yet he kept at it; if nothing else, continuing to work as a writer in his own right gave him a feeling for what the more high-profile writers he represented and edited were going through. In 1967, he even switched roles with his frenemy Harlan Ellison by contributing a story to the latter’s Dangerous Visions anthology, a collection of deliberately provocative stories — the sorts of things that could never, ever have gotten into print in earlier years — from New Wave writers and adventurous members of the old guard; it went on to become what many critics consider the most important and influential science-fiction anthology of all time.

But even Pohl’s contribution there — “The Day After the Day the Martians Came,” a parable about the eternal allure of racism and xenophobia that was well-taken then and now but far less provocative than many of the anthology’s other stories — didn’t really change perceptions of him as a fine editor with a sideline in writing rather than the opposite. That shift didn’t happen until a decade later, when the now 58-year-old Pohl published a novel called Gateway. Coming at a time when the most important work of the vast majority of his pulpy peers was well behind them, Pohl’s 21st solely-authored or co-authored novel constitutes the most unlikely story of a late blooming in the history of science fiction.

Described in the broadest strokes, Gateway sounds like the sort of rollicking space opera which John W. Campbell would have loved to publish back in the heyday of Astounding. In our solar system’s distant past, when the primitive ancestors of humanity had yet to discover fire, an advanced star-faring race, later to be dubbed the Heechee by humans, visited, only to abandon their bases an unknown period of time later. As humans begin to explore and settle the solar system in our own near future, they discover a deserted Heechee space station in an elliptical orbit around our sun. They find that the station still contains bays full of hundreds of small spaceships, and discover the hard way that, at the press of a mysterious button, these spaceships sweep their occupants away on a non-negotiable faster-than-light journey to some other corner of the galaxy, then (hopefully) back to Earth at the press of another button; for this reason, they name the station Gateway, as in, “Gateway to the Stars.” Many of the destinations the spaceships visit are pointless; some, such as the interior of a black hole, are deadly. Sometimes, though, the spaceships travel to habitable planets and/or to planets containing other artifacts of Heechee technology, worth a pretty penny to scientists, engineers, and collectors back on Earth.

Earth itself is not in very good shape socially, culturally, or environmentally. Overpopulation and runaway capitalism have all but ruined the planet and created an underclass of have-nots who make up the vast majority of the population, working in unappetizing industries like “food shale mines.” The so-called Gateway Corporation, which has taken charge of the station, runs a lottery for people interested in climbing into a Heechee spaceship, pressing a button, and seeing where it takes them. Possibly they can end up rich; more likely, they might wind up dead, their bodies left to decay hundreds of light years from home. But, conditions being what they are among the teeming masses, there’s no shortage of volunteers ready and willing to take such a long shot. These intrepid — or, rather, desperate — explorers are known as the Gateway “prospectors.”


That, then, is the premise — a premise offering a universe of possibility to any writer with an ounce of the old pulpy space-opera spirit. Who are (or were) the Heechee? Why did they disappear? Did they intend for humans to discover their technology and start using it to explore the galaxy, or is that just a happy (?) accident? Will the two races meet someday? Or, if you like, table all those Big Mysteries for some series finale off in the far distance. Just the premise of flying off to parts unknown in all these Heechee spaceships admits of an infinite variety of adventures. Gene Roddenberry may have once famously pitched Star Trek as “Wagon Train to the Stars,” but the starship Enterprise has got nothing on this idea.

Here’s the thing, though: having come up with this spectacular idea that the likes of a Doc Smith could have spent an entire career milking, Frederik Pohl perversely refused to turn it into the straightforward tales of interstellar adventure that it was crying out to become. Gateway engages with it instead only in the most subversively oblique fashion. Half of the novel consists of a series of therapy sessions involving a robot psychologist and a Gateway prospector named Robinette Broadhead who’s neither conventionally adventurous nor even terribly likable. Robinette is the only survivor — under somewhat suspicious circumstances — of a recent five-person prospecting expedition. He’s now rich, but he’s also a deeply damaged soul, just one of the many who inhabit Gateway, a rather squalid place beset by rampant drug abuse, a symptom of the literal dead-enders who inhabit it between prospecting voyages. We spend far more time exploring the origins and outcomes of Robinette’s various psycho-sexual hangups than we do gallivanting about the stars. It’s as if we wandered into a Star Trek movie and got an Ingmar Bergman film that just happens to be set in space instead. Gateway is a shameless bait-and-switch of a novel. Robinette Broadhead, I’m afraid, lost his sense of wonder a long time ago, and it seems that he took Frederik Pohl’s as well.

The best way to understand Gateway may be through the lens of the times in which it was written: this is very much a novel of the 1970s, that long, hazy morning after to the rambunctious 1960s. The counterculture of the earlier decade had focused on collective struggles for social justice, but the 1970s turned inward to focus on the self. Images of feminist activists like Betty Friedan shouting through bullhorns at rallies were replaced in the media landscape with Mary Tyler Moore’s sitcom character Mary Richards, the career gal who really did have it all; rollicking songs of mass protest were replaced by the navel-gazing singer-songwriter movement; the term Me Generation was coined, and suddenly everyone seemed to be in therapy of one kind or another, trying to sort out their personal issues instead of trying to fix society writ large. Meanwhile a pair of global oil crises, acid rain, and the thick layer of smog that hovered continually over Hollywood — the very city of dreams itself — were driving home for the first time what a fragile place this planet of ours actually is. Oh, well… on the brighter side, if you were into that sort of thing, lots of people were having lots and lots of casual sex, still enjoying the libertine sexual mores of the 1960s before the specter of AIDS would rear its head and put an end to all that as well in the following decade.

It’s long been a truism among science-fiction critics that this genre which is ostensibly about our many possible futures usually has far more interesting things to say about the various presents that create it. And nowhere is said truism more true than in the case of Gateway. For better or for worse, all of the aspects of fashionable 1970s culture which I’ve just mentioned fairly leap off its pages: the therapy and accompanying obsessive self-examination, the warnings about ecology and environment, the sex. It was so in tune with its times that the taste-makers of science fiction, who so desperately wanted their favored literary genre to be relevant, able to hold its head up proudly alongside any other, rewarded the novel mightily. Gateway won pretty much everything it was possible for a science-fiction novel to win, including its year’s Hugo and Nebula, the most prestigious awards in the genre; it sold far better than anything else Frederik Pohl had ever written; it made Pohl, four decades on from publishing that first awkward adolescent poem in Amazing Stories, a truly hot author at last.

The modern critical opinion tends to be more mixed. In fact, Gateway stands today as one of the more polarizing science-fiction novels ever written. Plenty of readers find its betrayal of its brilliant space-operatic setup unforgivable, and/or find its unlikable, self-absorbed protagonist insufferable, and/or find its swinging-70s social mores and dated ideas about technology simply silly. I confess that I myself largely belong to this group, although more for the latter two reasons than the first. Other readers, though, continue to find something hugely compelling about the novel that’s never quite come through for me. And yet even some of this group might agree that some aspects of Gateway haven’t aged terribly well. With some of the best writers in the world now embracing or at least acknowledging science fiction as a literary form as valid as any other, the desperate need to prove the genre’s literary bona fides at every turn that marked the 1960s and 1970s no longer exists. Gateway today feels like it’s trying just a bit too hard.

In at least one sense, Gateway did turn into a case of business as usual for a popular genre novel: Frederik Pohl published three sequels plus a collection of Gateway short stories during the 1980s. These gradually peeled back the layers of mystery to reveal who the Heechee were, why they had once come to our solar system, and why they had left, using the same oblique approach that had so delighted and infuriated readers of the first book. None of them had the same lightning-in-a-bottle quality as that first book, however, and Pohl’s reputation gradually declined back to join the mid-tier authors with whom he had always been grouped prior to 1977. Perhaps in the long run that was simply where he belonged — a solid writer of readable, enjoyable fiction, but not one overly likely to shift any paradigms inside a reader’s psyche.

At any rate, such was the position in which Pohl found himself in early 1991, when Legend Entertainment came calling with a plan to make a computer game out of Gateway.


As a tiny developer and publisher in a fast-growing, competitive industry, Legend was always doomed to lead a somewhat precarious existence. Nevertheless, by the first months of 1991 they had managed to establish themselves fairly well as the only company still making boxed parser-driven adventure games — the natural heir to Infocom, co-founded by an ex-Infocom author named Bob Bates and publishing games written not only by him but also by Steve Meretzky, the most famous Infocom author of all. Spellcasting 101, the latter’s fantasy farce that had become Legend’s debut product the previous year, was selling quite well, and a sequel was already in the works, as was Timequest, a more serious-minded time-travel epic from Bates.

Taking stock of the situation, Legend realized that they needed to increase the number of games they cranked out in order to consolidate their position. Their problem was that they only had two game designers to call upon, both of whom had other distractions to deal with in addition to the work of designing new Legend adventure games: Bates was kept busy by the practical task of running the company, while Meretzky was working from home as a freelancer, and as such was also doing other projects for other companies. A Legend “Presentation to Stockholders” dated May of 1991 makes the need clear: “We need to find new game authors,” it states under the category of “Product Issues.” Luckily, there was already someone to hand — in fact, someone who had played a big part in drawing up the very document in question — who very much wanted to design a game.

Mike Verdu had been Bates’s partner in Legend Entertainment from the very beginning. Although not yet out of his twenties, he was already an experienced entrepreneur who had founded, run, and then sold a successful business. He still held onto his day job with ASC, the computer-services firm with many Defense Department contracts which had acquired the aforementioned business, even as he was devoting his evenings and weekends to Legend. Verdu:

I was the business guy. I was the CFO, the COO, the guy who went and got money and made sure we didn’t run out of it, who figured out the production plans for the products, tried to get them done on time, figured out the milestone plans and the software-development plans. I was a product guy inasmuch as I was helping to hire programmers and putting them to work, but I wasn’t a game designer, and I wasn’t writing code or being the creative director on products. And I really wanted to do that.

So, there was this moment when I had to decide between continuing to work with ASC and doing Legend part time or doing Legend full time. I decided to do Legend full time. But as a condition of that, I said, “I’d like to be a part of the teams that are actually making the games.”

But I didn’t believe I had the chops to create a whole world and write a game from scratch. I was sort of looking for a world I could tell a story in. So I talked to Bob about licensing. I was incredibly passionate about Frederik Pohl’s novels. So we talked about Gateway, and Bob made the connection and negotiated the deal. It went so much smoother and easier than I thought it would. I was so excited!

The negotiations were doubtless sped along by the fact that the bloom was already somewhat off the rose when it came to Gateway. The novel’s sequels had been regarded by even many fans of the original as a classic case of diminishing returns, and the whole body of work, which so oozed that peculiar malaise of the 1970s, felt rather dated when set up next to hipper, slicker writers of the 1980s like William Gibson. Nobody, in short, was clamoring to license Gateway for much of anything by this point, so a deal wasn’t overly hard to strike.

Just like that, Mike Verdu had his world to design his game in, and Legend was about to embark on their first foray into a type of game that would come to fill much of their catalog in subsequent years: a literary license. For this first time out, they were fortunate enough to get the best kind of literary license, short of the vanishingly rare case of one where an active, passionate author is willing to serve as a true co-creator: the kind where the author doesn’t appear to be all that interested in or even aware of the project’s existence. Mike Verdu never met or even spoke to Frederik Pohl in the process of making what would turn out to be two games based on his novels. He got all the benefits of an established world to play in with none of the usual drawbacks of having to ask for approval on every little thing.

Yet the Gateway project didn’t remain Verdu’s baby alone for very long. Bates and Verdu, eager to expand their stable of game designers yet further, hit upon the idea of using it as a sort of training ground for other current Legend employees who, like Verdu, dreamed of breaking into a different side of the game-development business. Verdu agreed to divide his baby into three pieces, taking one for himself and giving the others to Glen Dahlgren, a Legend programmer, and to Michael Lindner, the company’s music-and-sound guru. All would work on their parts under the loose supervision of the experienced Bob Bates, who stood ready to gently steer them back on course if they started to stray. Verdu:

We learned how to write code. We learned the craft of interactive-fiction design from Bob, then we would huddle as a group and hash out the storylines and puzzles for our respective sections of the game, then try to tie them all together. That was one of the best times of my career, turning from a defense-industry executive into a game designer who could write code and bring a game to life. Magical… incredibly great!

You were writing, compiling, and testing in this constant iteration. You would write something, then you would see the results, then repeat. I think that was the most powerful flow state I’ve ever been in. Hours would just evaporate. I’d look up at four in the morning and there’d be nobody in the office: Good God, where did the last eight hours go? It was a wonderful creative process.

It was an unorthodox, perhaps even disjointed way to make a game, but the Legend Trade School for Game Design worked out beautifully. When it shipped in the summer of 1992, Gateway was by far the best thing Legend had done to that point: a big, generous, well-polished game, with lots to see and do, a nice balance between plot and free-form exploration, and meticulously fair puzzle design. It’s the adventure-game equivalent of a summer beach read, a page turner that just keeps rollicking along, ratcheting up the excitement all the while. It isn’t a hard game, but you wouldn’t want it to be; this is a game where you just want to enjoy the ride, not scratch your head for long periods of time over its puzzles. It even looks much better than the occasionally garish-looking Legend games which came before it, thanks to the company’s belated embrace of 256-color VGA graphics and their growing comfort working with multimedia elements.

You might already be sensing a certain incongruity between this description of Gateway the game and my earlier description of Gateway the novel. And, indeed, said incongruity is very much present. A conventional object-oriented adventure game is hardly the right medium for delving deep into questions of individual psychology. A player of a game needs a through line to follow, a set of concrete goals to achieve; this explains why adventure games share their name with adventure fiction rather than literary fiction. Do you remember how I described Gateway the novel as setting up a perfect space-opera premise, only to obscure it behind therapy sessions and a disjointed, piecemeal approach to its narrative? Well, Gateway the game becomes the very space opera that the novel seemed to promise us, only to jerk it away: a big galaxy-spanning romp that Doc Smith could indeed have been proud of. Mike Verdu, the designer most responsible for the overarching structure of the game, jettisoned Pohl’s sad-sack protagonist along with all of his other characters. He also dispensed with the foreground plot, such as it is, about personal guilt and responsibility that drives the novel. What he was left with was the glorious wide-frame premise behind it all.

The game begins with you, a lucky (?) lottery winner from the troubled Earth, arriving at Gateway Station to take up the job of prospector. In its first part, written by Mike Verdu, you acclimate to life on the station, complete your flight training, and go on your initial prospecting mission. In the second part, written by Michael Lindner, you tackle a collection of prospecting destinations in whatever order you prefer, visiting lots of alien environments and assembling clues about who the Heechee were and why they’ve disappeared. In part three, written by Glen Dahlgren, you have to avert a threat to Earth posed by another race of aliens known as the Assassins — that race being the reason, you’ve only just discovered to your horror, that the Heechee went into hiding in the first place. The plot as a whole is expansive and improbable and, yes, more than a little silly. In other words, it’s space opera at its best. There’s nothing wrong with a little pure escapism from time to time.

Gateway the game thus becomes, in my opinion anyway, an example of a phenomenon more common than one might expect in creative media: the adaptation that outdoes its source material. It doesn’t even try to carry the same literary or thematic weight that the novel rather awkwardly stumbles along under, but by way of compensation it’s a heck of a lot more fun. As an adaptation, it fails miserably if one’s criterion for success is capturing the unadulterated flavor and spirit of the source material. As a standalone adventure game, however, it’s a rollicking success.

Legend had signed a two-game deal with Frederik Pohl right from the start, and had always intended to develop a sequel to Gateway if its sales made that idea viable. And so, when the first Gateway sold a reasonable 35,000 units or so, Gateway II: Homeworld got the green light. Michael Lindner had taken on another project of his own by this point, so Mike Verdu and Glen Dahlgren divided the sequel between just the two of them, each taking two of the sequel’s four parts.

Reaching stores almost exactly one year after its predecessor, Gateway II became both the last parser-driven adventure Legend published and the last boxed game of that description from any publisher — a melancholy milestone for anyone who had grown up with Infocom and their peers during the previous decade. The text adventure would live on, but it would do so outside the conventional computer-game industry, in the form of games written by amateurs and moonlighters that were distributed digitally and usually given away rather than sold. Never again would anyone be able to make a living from text adventures.

As era enders go, though, Gateway II: Homeworld is pretty darn spectacular, with all the same strengths as its predecessor. In its climax, you finally meet the Heechee themselves on their hidden homeworld — thus the game’s subtitle — and save the Earth one final time while you’re at it. It’s striking to compare the driving plot of this game with the static collections of environments and puzzles that had been the text adventures of ten years before. The medium had come a long way from the days of Zork. This isn’t to say that Legend’s latter-day roller-coaster text adventures, sporting music, cut scenes, and heaps of illustrations, were intrinsically superior to the traditional approach — but they certainly were impressive in their degree of difference, and in how much fun they still are to play in their own way.

One thing that Zork and the Gateway games do share is the copious amounts of love and passion that went into making them. Unlike so many licensed games, the Gateway games were made for the right reasons, made by people who genuinely loved the universe of the novels and were passionate about bringing it to life in an interactive medium.

For Mike Verdu, Michael Lindner, and Glen Dahlgren, the Gateway games did indeed mark the beginning of new careers as game designers, at Legend and elsewhere. The story of Verdu, the business executive who became a game designer, is particularly compelling — almost as compelling, one might even say, as that of Frederik Pohl, the mid-tier author, agent, and editor who briefly became the hottest author in science fiction almost five decades after he decided to devote his life to his favorite literary genre, in whatever capacity it would have him. Both men’s stories remind us that, for the lucky among us at least, life is long, and as rich as we care to make it, and it’s a shame to spend it all doing just one thing.

Gateway and Gateway II: Homeworld in Pictures


Gateway employs Legend’s standard end-stage-commercial-text-adventure interface, with music and sound and graphics and several screen layouts to choose from, straining to satisfy everyone from the strongly typing-averse to the purists who still scoff at anything more elaborate than a simple stream of text and a blinking command prompt.

Mike Verdu wanted a license to give him an established world to play with. Having gotten his wish, he used it well. Gateway puts enormous effort into making its environment a rich, living place, building upon what is found in Frederik Pohl’s novels. Much of this has nothing to do with the puzzles or other gameplay elements; it’s there strictly to add to the experience as a piece of fiction. Thanks to an unlimited word count and heaps of new multimedia capabilities, it outdoes anything Infocom could ever have dreamed of doing in this respect.

We spend a big chunk of Gateway II in a strange alien spaceship — the classic “Big Dumb Object” science-fiction plot, reminding us not just of classic novels but of earlier text adventures like Infocom’s Starcross and Telarium’s adaptation of Rendezvous with Rama. In fact, there are some oddly specific echoes of the former game, such as a crystal rod and a sort of zoo of alien lifeforms to deal with. That said, you’ll never mistake one game for the other. Starcross is minimalist in spirit and presentation, a cerebral exercise in careful exploration and puzzle-solving, while Gateway II is just a big old fun-loving thrill ride, full of sound and color, that rarely slows down enough to let you take a breath. I love them both equally.

Many of the illustrations in Gateway II in particular really are lovely to look at, especially when one considers the paucity of resources at Legend’s disposal in comparison to bigger adventure developers like Sierra and LucasArts. There were obviously some fine artists employed by Legend, with a keen eye for doing more with less.

Some of the cut scenes in Gateway II are 3D-modeled. Such scenes were becoming more and more common in games by 1993, as computing hardware advanced and developers began to experiment with a groundbreaking product called 3D Studio. The 3D Revolution, which would change the look and to a large extent the very nature of games as the decade wore on, was already looming in the near distance.

The parser disappeared from Legend’s games not so much all at once as over a series of stages. By Gateway II, the last Legend game to be ostensibly parser-based, conversations and even some puzzles had become purely point-and-click affairs for the sake of convenience and variety. It already feels like you spend almost as much time mousing around as you do typing, even if you don’t choose to use the (cumbersome) onscreen menus of verbs and nouns to construct your commands for the parser. Having come this far, it was a fairly straightforward decision for Legend to drop the parser entirely in their next game. Thus do most eras end — not with a bang but with a barely recognized whimper. At least the parser went out on a high note…

(Sources: I find Frederik Pohl’s memoir The Way the Future Was, about his life spent in science fiction, more compelling than his actual fiction, as I do The Way the Future Blogs, an online journal which he maintained for the last five years or so of his life, filling it with precious reminiscences about his writing, his fellow authors, his nearly century-spanning personal life, and his almost equally lengthy professional career in publishing and fandom. I’m able to tell the Legend Entertainment side of this story in detail thanks entirely to Bob Bates and Mike Verdu, both of whom sat down for long interviews, the former of whom also shared some documents from those times.

Feel free to download the games Gateway and Gateway II, packaged to be as easy as possible to get running under DOSBox on your modern computer, from right here. As noted in the article proper, they’re great rides that are well worth your time, two of the standout gems of Legend’s impressive catalog.)


Posted by on September 21, 2018 in Digital Antiquaria, Interactive Fiction


Shades of Gray

Ladies and gentlemen, come and see. This isn’t a country here but an epic failure factory, an excuse for a place, a weed lot, an abyss for tightrope walkers, blindman’s bluff for the sightless saddled with delusions of grandeur, proud mountains reduced to dust dumped in big helpings into the cruciform maws of sick children who crouch waiting in the hope of insane epiphanies, behaving badly and swamped besides, bogged down in their devil’s quagmires. Our history is a corset, a stifling cell, a great searing fire.

— Lyonel Trouillot

What’s to be done about Haiti?

Generations have asked that question about the first and most intractable poster child for postcolonial despair, the poorest country in North or South America now and seemingly forever, a place whose corruption and futility manages to make the oft-troubled countries around it look like models of good governance. Nowhere does James Joyce’s description of history as “a nightmare from which I am trying to awake” feel more apt. Indeed, Haiti stands as perhaps the ultimate counterargument to the idealistic theory of history as progress. Here history really is just one damned thing after another — differing slightly in the details, but always the same at bottom.

But why should it be this way? What has been so perplexing and infuriating about Haiti for so long is that there seems to be no real reason for its constant suffering. Long ago, when it was still a French colony, it was known as the “Pearl of the Caribbean,” and was not only beautiful but rich; at the time of the American Revolution, it was richer than any one of the thirteen British American colonies. Those few who bother to visit Haiti today still call it one of the most beautiful places of all in the beautiful region that is the Caribbean. Today the Dominican Republic, the nation with which Haiti shares the island of Hispaniola, is booming, the most popular tourist spot in the Caribbean, with the fastest-growing economy anywhere in North or South America. But Haiti, despite being blessed with all the same geographic advantages, languishes in poverty next door, seething with resentment over its condition. It’s as if the people of Haiti have been cursed by one of the voodoo gods to which some of them still pray to act out an eternal farce of chaos, despair, and senseless violence.

Some scenes from the life of Haiti…

…you are a proud Mandingue hunter in a hot West African land. But you’re not hunting. You’re being hunted — by slavers, both black and white. You run, and run, and run, until your lungs are near to bursting. But it’s no use. You’re captured and chained like an animal, and thrust into the dank hold of a sailing ship. Hundreds of your countrymen and women are here — hungry, thirsty, some beaten and maimed by your captors. All are terrified for themselves and their families, from whom they’ve been cruelly separated. Many die on the long voyage. But when it’s over, you wonder if perhaps they were the lucky ones…

The recorded history of the island of Hispaniola begins with the obliteration of the people who had always lived there. The Spanish conquistadors arrived on the island in the fifteenth century, bringing with them diseases against which the native population, known as the Taíno, had no resistance, along with a brutal regime of forced labor. Within two generations, the Taíno were no more. They left behind only a handful of words which entered the European vocabulary, like “hammock,” “hurricane,” “savanna,” “canoe,” “barbecue,” and “tobacco.” The Spanish, having lost their labor force, shrugged their shoulders and largely abandoned Hispaniola.

But in the ensuing centuries, Europeans developed a taste for sugar, which could be produced in large quantities only in the form of sugarcane, which in turn grew well only in tropical climates like those of the Caribbean. Thus the abandoned island of Hispaniola began to have value again. The French took possession of the western third of the island — the part known as Haiti today — with the Treaty of Ryswick, which ended the Nine Years’ War in 1697. France officially incorporated its new colony of Saint-Domingue on Hispaniola the same year.

Growing sugarcane demanded backbreaking labor under the hot tropical sun, work of a kind judged unsuitable for any white man. And so, with no more native population to enslave, the French began to import slaves from Africa. Their labor turned Saint-Domingue in a matter of a few decades from a backwater into one of the jewels of France’s overseas empire. In 1790, the year of the colony’s peak, 48,000 slaves were imported to join the 500,000 who were already there. It was necessary to import slaves in such huge numbers just to maintain the population in light of the appalling death toll of those working in the fields; little Saint-Domingue alone imported more slaves over the course of its history than the entirety of the eventual United States.

…you’re a slave, toiling ceaselessly in a Haitian cane field for your French masters. While they live bloated with wealth, you and your fellows know little but hardship and pain. Brandings, floggings, rape, and killing are everyday events. And for the slightest infraction, a man could be tortured to death by means limited only by his owners’ dark imaginations. What little comfort you find is in the company of other slaves, who, at great risk to themselves, try to keep the traditions of your lost homeland alive. And there is hope — some of your brothers could not be broken, and have fled to the hills to live free. These men, the Maroons, are said to be training as warriors, and planning for your people’s revenge. Tonight, you think, under cover of darkness, you will slip away to join them…

The white masters of Saint-Domingue, who constituted just 10 percent of the colony’s population, lived in terror of the other 90 percent, and this fear contributed to the brutality with which they punished the slightest sign of recalcitrance on the part of their slaves. Further augmenting their fears of the black Other was the slaves’ foreboding religion of voodoo: a blending of the animistic cults they had brought with them from tribal Africa with the more mystical elements of Catholicism — all charms and curses, potions and spells, trailing behind it persistent rumors of human sacrifice.

Even very early in the eighteenth century, some slaves managed to escape into the wilderness of Hispaniola, where they formed small communities that the white men found impossible to dislodge. Organized resistance, however, took a long time to develop.

Legend has it that the series of events which would result in an independent nation on the western third of Hispaniola began on the night of August 21, 1791, when a group of slave leaders secretly gathered at a hounfour — a voodoo temple — just outside the prosperous settlement of Cap‑Français. Word of the French Revolution had reached the slaves, and, with mainland France in chaos, the time seemed right to strike here in the hinterlands of empire. A priestess slit the throat of a sacrificial pig, and the head priest said that the look and taste of the pig’s blood indicated that Ogun and Ghede, the gods of war and death respectively, wanted the slaves to rise up. Together the leaders drank the blood under a sky that suddenly broke into storm, then sneaked back onto their individual plantations at dawn to foment revolution.

That, anyway, is the legend. There’s good reason to doubt whether the hounfour actually happened, but the revolution certainly did.

…you are in the middle of a bloody revolution. You are a Maroon, an ex-slave, fighting in the only successful slave revolt in history. You have only the most meager weapons, but you and your comrades are fighting for your very lives. There is death and destruction all around you. Once-great plantation houses lie in smouldering ruins. Corpses, black and white, litter the cane fields. Ghede walks among them, smiling and nodding at his rich harvest. He sees you and waves cheerfully…

The proudest period of Haiti’s history — the one occasion on which Haiti actually won something — began before a nation of that name existed, when the slaves of Saint-Domingue rose up against their masters, killing or driving them off their plantations. After the French were dispensed with, the ex-slaves continued to hold their ground against Spanish and English invaders who, concerned about what an example like this could mean for other colonies, tried to bring them to heel.

In 1798, a well-educated, wily former slave named Toussaint Louverture consolidated control of the now-former French colony. He spoke both to his own people and to outsiders using the language of the Enlightenment, drawing from the American Declaration of Independence and the French Declaration of the Rights of Man and the Citizen, putting a whole new face on this bloody revolution that had supposedly been born at a voodoo hounfour on a hot jungle night.

Toussaint Louverture was frequently called the black George Washington in light of the statesmanlike role he played for his people. He certainly looked the part. Would Haiti’s history have been better had he lived longer? We can only speculate.

…and you are battling Napoleon’s armies, Europe’s finest, sent to retake the jewel of the French empire. You have few resources, but you fight with extraordinary courage. Within two years, sixty thousand veteran French troops have died, and your land is yours again. The French belong to Ghede, who salutes you with a smirk…

Napoleon had now come to power in France, and was determined to reassert control over his country’s old empire even as he set about conquering a new one. In 1802, he sent an army to retake the colony of Saint-Domingue. Toussaint Louverture was tricked, captured, and shipped to France, where he soon died in a prison cell. But his comrades in arms, helped along by a fortuitous outbreak of yellow fever among the French forces and by a British naval blockade stemming from the wars back in Europe, defeated Napoleon’s finest definitively in November of 1803. The world had little choice but to recognize the former colony of Saint-Domingue as a predominantly black independent nation-state, the first of its type.

With Louverture dead, however, there was no one to curb the vengeful instincts of the former slaves who had defeated the French after such a long, hard struggle. It was perfectly reasonable that the new nation would take for its name Haiti — the island of Hispaniola’s name in the now-dead Taíno language — rather than the French appellation of Saint-Domingue. Less reasonable were the words of independent Haiti’s first leader, and first in its long line of dictators, Jean-Jacques Dessalines, who said that “we should use the skin of a white man as a parchment, his skull for an inkwell, his blood for ink, and a bayonet for a pen.” True to his words, he proceeded to carry out systematic genocide on the remaining white population of Haiti, destroying in the process all of the goodwill that had accrued to the new country among progressives and abolitionists in the wider world. His vengeance cost Haiti both much foreign investment that might otherwise have been coming its way and the valuable contribution the more educated remaining white population, by no means all of whom had been opposed to the former slaves’ cause, might have been able to make to its economy. A precedent had been established which holds to this day: of Haiti being its own worst enemy, over and over again.

…a hundred years of stagnation and instability flash by your eyes. As your nation’s economic health declines, your countrymen’s thirst for coups d’état grows. Seventeen of twenty-four presidents are overthrown by guile or force of arms, and Ghede’s ghastly armies swell…

So, Haiti, having failed from the outset to live up to the role many had dreamed of casting it in as the first enlightened black republic, remained poor and inconsequential, mired in corruption and violence, as its story devolved from its one shining moment of glory into the cruel farce it remains to this day. The arguable lowlight of Haiti’s nineteenth century was the reign of one Faustin Soulouque, who had himself crowned Emperor Faustin I — emperor of what? — in 1849. American and European cartoonists had a field day with the pomp and circumstance of Faustin’s “court.” He was finally exiled to Jamaica in 1859, after he had tried and failed to invade the Dominican Republic (an emperor has to start somewhere, right?), extorted money from the few well-to-do members of Haitian society and defaulted on his country’s foreign debt in order to finance his palace, and finally gotten himself overthrown by a disgruntled army officer. Like the vast majority of Haiti’s leaders down through the years, he left his country in even worse shape than he found it.

Haiti’s Emperor Faustin I was a hit with the middle-brow reading public in the United States and Europe.

…you are a student, protesting the years-long American occupation of your country. They came, they said, to thwart Kaiser Wilhelm’s designs on the Caribbean, and to help the Haitian people. But their callous rule soon became morally and politically bankrupt. Chuckling, Ghede hands you a stone and you throw it. The uprising that will drive the invaders out has begun…

In 1915, Haiti was in the midst of one of its periodic paroxysms of violence. Jean Vilbrun Guillaume Sam, the country’s sixth president in the last four years, had managed to hold the office for just five months when he was dragged out of the presidential palace into the street and torn limb from limb by a mob. The American ambassador to Haiti, feeling that the country had descended into a state of complete anarchy that could spread across the Caribbean, pleaded with President Woodrow Wilson to intervene. Fearing that Germany and its allies might exploit this chaos on the United States’s doorstep if and when his own country should enter the First World War on the opposing side, Wilson agreed. On July 28, 1915, a small force of American sailors occupied the Haitian capital of Port-au-Prince almost without firing a shot — a far cry from Haiti’s proud struggle for independence against the French. Haiti was suddenly a colony again, although its new colonizers did promise that the occupation was temporary. It was to last just long enough to set the country on its feet and put a sound system of government in place.

When the Americans arrived in Haiti, they found its people’s lives not all that much different from the way they had lived at the time of Toussaint Louverture. Here we see the capital city of Port-au-Prince, the most “developed” place in the country.

The American occupation wound up lasting nineteen years, during which the occupiers did much practical good in Haiti. They paved more than a thousand miles of roadway; built bridges and railway lines and airports and canals; erected power stations and radio stations, schools and hospitals. Yet, infected with the racist attitudes toward their charges that were all too typical of the time, they failed at the less concrete tasks of instilling a respect for democracy and the rule of law. They preferred to make all the rules themselves by autocratic decree, giving actual Haitians only a token say in goings-on in their country. This prompted understandable anger and a sort of sullen, passive resistance among Haitians to all of the American efforts at reform, occasionally flaring up into vandalism and minor acts of terrorism. When the Americans, feeling unappreciated and generally hard-done-by, left Haiti in 1934, it didn’t take the country long to fall back into the old ways. Within four years President Sténio Vincent had declared himself dictator for life. But he was hardly the only waxing power in Haitian politics.

…a tall, ruggedly handsome black man with an engaging smile.

He is speaking to an assembled throng in a poverty-stricken city neighborhood. He tells moving stories about his experiences as a teacher, journalist, and civil servant. You admire both his skillful use of French and Creole, and his straightforward ideas about government. With eloquence and obvious sincerity, he speaks of freedom, justice and opportunity for all, regardless of class or color. His trenchant, biting criticisms of the establishment delight the crowd of longshoremen and laborers.

“Latin America and the Caribbean already have too many dictators,” he says. “It is time for a truly democratic government in Haiti.” The crowd roars out its approval…

The aspect of Haitian culture which had always baffled the Americans the most was the fact that this country whose population was 99.9 percent black was nevertheless riven by racism as pronounced as anywhere in the world. The traditional ruling class was the mulattoes: Haitians who could credit their lighter skin to white blood dating back to the old days of colonization, and/or to the fact that they and their ancestors hadn’t spent long years laboring in the sun. They made up perhaps 10 percent of the population, and spoke and governed in French. The rest of the population was made up of the noir Haitians: the darker-skinned people who constituted the working class. They spoke only the Haitian Creole dialect for the most part, and thus literally couldn’t understand most of what their country’s leaders said. In the past, it had been the mulattoes who killed one another to determine who ruled Haiti, while the noir Haitians just tried to stay out of the way.

In the 1940s, however, other leaders came forward to advance the cause of the “black” majority of the population; these leaders became known as the noiristes. Among the most prominent of them was Daniel Fignolé, a dark-skinned Haitian born, like most of his compatriots, into extreme poverty in 1913. Unlike most of them, he managed to educate himself by dint of sheer hard work, became political at the sight of the rampant injustice and corruption all around him, and came to be known as the “Moses of Port-au-Prince” for the fanatical loyalty he commanded among the stevedores, factory workers, and other unskilled laborers in and around the capital. Fignolé emphasized again and again that he was not a Marxist — an ideology that had been embraced by some of the mulattoes and was thus out of bounds for any good noiriste. Yet he did appropriate the Marxist language of proletariat and bourgeoisie, and left no doubt which side of that divide he was fighting for. For years, he remained an agitating force in Haitian politics without ever quite breaking through to real power. Then came the tumultuous year of 1957.

Daniel Fignolé, the great noiriste advocate for the working classes of Haiti.

…but you’re now a longshoreman in Port-au-Prince, and your beloved Daniel Fignolé has been ousted after just nineteen days as Provisional President. Rumors abound that he has been executed by Duvalier and his thugs. You’re taking part in a peaceful, if noisy, demonstration demanding his return. Suddenly, you’re facing government tanks and troops. Ghede rides on the lead tank, laughing and clapping his hands in delight. You shout your defiance and pitch a rock at the tank. The troops open fire, and machine-gun bullets rip through your chest…

One Paul Magloire, better known as Bon Papa, had been Haiti’s military dictator since 1950. The first few years of his reign had gone relatively well; his stridently anticommunist posturing won him some measure of support from the United States, and Haiti briefly even became a vacation destination to rival the Dominican Republic among sun-seeking American tourists. But when a devastating hurricane struck Hispaniola in 1954 and millions of dollars in international aid disappeared in inimitable Haitian fashion without ever reaching the country’s people, the mood among the elites inside the country who had been left out of that feeding frenzy began to turn against Bon Papa. On December 12, 1956, he resigned his office by the hasty expedient of jumping into an airplane and getting the hell out of Dodge before he came to share the fate of Jean Vilbrun Guillaume Sam. The office of the presidency, a hot potato if ever there was one, then passed through three more pairs of hands in the next six months, while an election campaign to determine Haiti’s next permanent leader took place.

Of course, in Haiti election campaigns were fought with fists, clubs, knives, guns, bombs, and, most of all, rampant, pervasive corruption at every level. Still, in a rare sign of progress of a sort in Haitian politics, the two strongest candidates were both noiristes promising to empower the people rather than the mulatto elites. They were Daniel Fignolé and François Duvalier, the latter a frequent comrade-in-arms of the former during the struggles of the last twenty years who had now become a rival. Duvalier was an unusually quiet, even diffident-seeming personality by the standards of Haitian politics, so much so that many doubted his fortitude and his intelligence alike. But Duvalier commanded enormous loyalty in the countryside, where he had worked for years as a doctor, often in tandem with American charitable organizations. Meanwhile Fignolé’s urban workers remained as committed to him as ever, and clashes between the supporters of the two former friends were frequent and often violent.

The workers around Port-au-Prince pledged absolute allegiance to Daniel Fignolé. He liked to call them his wuolo konmpresé — his “steamrollers,” always ready to take to the streets for a rally, a demonstration, or just a good old fight.

But then, on May 25, 1957, Duvalier unexpectedly threw his support behind a bid to make his rival the latest provisional president while the election ran its course, and Fignolé marched into the presidential palace surrounded by his cheering supporters. In a stirring speech on the palace steps, he promised a Haitian “New Deal” in the mold of Franklin D. Roosevelt’s American version.

The internal machinations of Haitian politics are almost impossible for an outsider to understand, but many insiders have since claimed that Duvalier, working in partnership with allies he had quietly made inside the military, had set Fignolé up for a fall, contriving to remove him from the business of day-to-day campaigning and thereby shore up his own support while making sure his presidency was always doomed to be a short one even by Haitian standards. At any rate, on the night of June 14, 1957 — just nineteen days after he had assumed the post — a group of army officers burst into Fignolé’s office, forced him to sign a resignation letter at gunpoint, and then tossed him into an airplane bound for the United States, exiling him on pain of death should he ever return to Haiti.

The deposing of Fignolé ignited another spasm of civil unrest among his supporters in Port-au-Prince, but their violence was met with even more violence by the military. There were reports of soldiers firing machine guns into the crowds of demonstrators. Hundreds if not thousands of people were killed in the capital, even as known agitators were rounded up en masse and thrown into prison and the offices of newspapers and magazines supporting Fignolé’s cause were ransacked and closed. On September 22, 1957, it was announced that François Duvalier had been elected president by the people of Haiti.

Inside the American government, opinion was divided about the latest developments in Haiti. The CIA was convinced that, despite Fignolé’s worrisome leftward orientation, his promised socialist democracy was a better, more stable choice for the United States’s close neighbor than a military junta commanded by Duvalier. The agency thus concocted a scheme to topple Duvalier’s new government, which was to begin with the assassination of his foreign minister, Louis Raimone, on an upcoming visit to Mexico City to negotiate an arms deal. But the CIA’s plans accidentally fell into the hands of one Austin Garriot, an academic doing research for his latest book in Washington, D.C. Garriot passed the plans on to J. Edgar Hoover’s FBI, which protested strongly that any attempt to overthrow Duvalier would be counter to international law, and which emphasized as well that Duvalier had declared himself strongly pro-American and anti-Soviet. With the top ranks of the FBI threatening to expose the illegal assassination plot to other parts of the government if the scheme was continued, the CIA had no choice but to quietly abandon it. Duvalier remained in power, unmolested.

He had promised his supporters a bright future…

…before a shining white city atop a hill. A sign welcomes you to Duvalierville. As you walk through the busy streets, well-dressed, cheerful people greet you as they pass by. You are struck by the abundance of goods and services offered, and the cleanliness and order that prevails. Almost every wall is adorned with a huge poster of a frail, gray-haired black man wearing a dark suit and horn-rimmed glasses.

Under the figure are the words: “Je suis le drapeau Haitien, Uni et Indivisible. François Duvalier.”

Everyone you ask about the man says the same thing: “We all love Papa Doc. He’s our president for life now, and we pray that he will live forever.”

Instead the leader who became known as Papa Doc, this quiet country doctor, proved to be another case study in the banality of evil. During his fourteen years in power, an estimated 60,000 people were executed upon his personal extra-judicial decree. The mulatto elite, who constituted the last remnants of Haiti’s educated class and thus could be a dangerous threat to his rule, were a particular target; purge after purge cut a bloody swath through their ranks. When Papa Doc died in 1971, his son Jean-Claude Duvalier — Baby Doc — took over for another fifteen years. The world became familiar with the term “Haitian boat people” as the Duvaliers’ desperate victims took to the sea in the most inadequate of crafts. For them, any shred of hope for a better life was worth grasping at, no matter what the risk.

…you find yourself at sea, in a ragged little boat. Every inch of space is crowded with humanity. They’re people you know and care about deeply. You have no food or water, but you have something more precious — hope. In your native Haiti, your life has become intolerable. The poverty, the fear, the sudden disappearances of so many people — all have driven you to undertake this desperate journey into the unknown.

A storm arises, and your small boat is battered by the waves and torn apart. One by one, your friends, your brothers, your children slip beneath the roiling water and are lost. You cling to a rotten board as long as you can, but you know that your dream of freedom is gone. “Damn you, Duvalier,” you scream as the water closes over your head…



And now I have to make a confession: not quite all of the story I’ve just told you is true. That part about the CIA deciding to intervene in Haitian politics, only to be foiled by the FBI? It never happened (as far as I know, anyway). That part, along with all of the quoted text above, is rather lifted from a fascinating and chronically underappreciated work of interactive fiction from 1992: Shades of Gray.

Shades of Gray was the product of a form of collaboration which would become commonplace in later years, but which was still unusual enough in 1992 that it was remarked upon in virtually every mention of the game: the seven people who came together to write it had never met one another in person, only online. The project began when a CompuServe member named Judith Pintar, who had just won the 1991 AGT Competition with her CompuServe send-up Cosmoserve, put out a call for collaborators to make a game for the next iteration of the Competition. Mark Baker, Steve Bauman, Belisana, Hercules, Mike Laskey, and Cindy Yans wound up joining her, each writing a vignette for the game. Pintar then wrote a central spine to bind all these pieces together. The end result was so much more ambitious than anything else made for that year’s AGT Competition that organizer David Malmberg created a “special group effort” category just for it — which, being the only game in said category, it naturally won.

Yet Shades of Gray’s unusual ambition wasn’t confined to its size or number of coauthors. It’s also a game with some serious thematic heft.

The idea of using interactive fiction to make a serious literary statement was rather in abeyance in the early 1990s. Infocom had always placed a premium on good writing, and had veered at least a couple of times into thought-provoking social and historical commentary with A Mind Forever Voyaging and Trinity. But neither of those games had been huge sellers, and Infocom’s options had always been limited by the need to please a commercial audience who mostly just wanted more fun games like Zork from them, not deathless literary art. Following Infocom’s collapse, amateur creators working with development systems like AGT and TADS likewise confined almost all of their efforts to making games in the mold of Zork — unabashedly gamey games, with lots of puzzles to solve and an all-important score to accumulate.

On the surface, Shades of Gray may not seem a radical departure from that tradition; it too sports lots of puzzles and a score. Scratch below the surface, though, and you’ll find a text adventure with more weighty thoughts on its mind than any since 1986’s Trinity (a masterpiece of a game which, come to think of it, also has puzzles and a score, thus proving these elements are hardly incompatible with literary heft).

It took the group who made Shades of Gray much discussion to arrive at its central theme, which Judith Pintar describes as one of “moral ambiguity”: “We wanted to show that life and politics are nuanced.” You are cast in the role of Austin Garriot, a man whose soul has become unmoored from his material being for reasons that aren’t ever — and don’t really need to be — clearly explained. With the aid of a gypsy fortune teller and her Tarot deck, you explore the impulses and experiences that have made you who you are, presented in the form of interactive vignettes carved from the stuff of symbolism and memory and history. Moral ambiguity does indeed predominate, in echoes of the ancient Athens of Antigone, the Spain of the Inquisition, the United States of the Civil War and the Joseph McCarthy era. In the most obvious attempt to present contrasting viewpoints, you visit Sherwood Forest twice, playing once as Robin Hood and once as the poor, put-upon Sheriff of Nottingham, who’s just trying to maintain the tax base and instill some law and order.

> examine chest
The chest is solidly made, carved from oak and bound together with strips
of iron. It contains the villagers' taxes -- money they paid so you could
defend them against the ruffians who inhabit the woods. Unfortunately, the
outlaws regularly attack the troops who bring the money to Nottingham, and
generally steal it all.

Because you can no longer pay your men-at-arms, no one but you remains to protect the local villagers. The gang is taking full advantage of this, attacking whole communities from their refuge in Sherwood Forest. You are alone, but you still have a duty to perform.

Especially in light of the contrasting Robin Hood vignettes, it would be all too easy for a reviewer like me to neatly summarize the message of Shades of Gray as something like “there are two sides to every story” or “walk a mile in my shoes before you condemn me.” And, to be sure, that message is needed more than ever today, not least by the more dogmatic members of our various political classes. Yet to claim that that’s all there is to Shades of Gray is, I think, to do it a disservice. Judith Pintar, we should remember, described its central theme as moral ambiguity, which is a more complex formulation than just a generalized plea for empathy. There are no easy answers in Shades of Gray — no answers at all really. It tells us that life is complicated, and moral right is not always as easy to determine as we might wish.

Certainly that statement applies to the longstanding question with which I opened this article: What to do about Haiti? In the end, it’s the history of that long-suffering country that comes to occupy center stage in Shades of Gray’s exploration of… well, shades of gray.

Haiti’s presence in the game is thanks to the contributor whose online handle was Belisana.1 It’s an intriguingly esoteric choice of subject matter for a game written in its time and place, especially given that none of the contributors, Belisana included, had any personal connection to Haiti. She rather began her voyage into Haitian history with a newspaper clipping, chanced upon in a library, from that chaotic year of 1957. She included a lightly fictionalized version of it in the game itself:

U.S. AID TO HAITI REDUCED TWO-THIRDS

PORT-AU-PRINCE, Haiti, Oct. 8 — The United States government today shut down two-thirds of its economic aid to Haiti. The United States Embassy sources stressed that the action was not in reprisal against the reported fatal beating of a United States citizen last Sunday.

The death of Shibley Matalas was attributed by Col. Louis Raimone, Haitian Foreign Minister, to a heart attack. Three U.S. representatives viewed Mr. Matalas’ body. Embassy sources said they saw extensive bruises, sufficient to be fatal.

Through my own archival research, I’ve determined that in the game Belisana displaced the date of the actual incident by one week, from October 1 to October 8, and that she altered the names of the principals: Shibley Matalas was actually named Shibley Talamas, and Louis Raimone was Louis Roumain. The incident in question occurred after François Duvalier had been elected president of Haiti but three weeks before he officially assumed the office. The real wire report, as printed in the Long Beach Press Telegram, tells a story too classically Haitian not to share in full.

Yank in Haitian Jail Dies, U.S. Envoy Protests

Port-au-Prince, Haiti (AP) — Americans were warned to move cautiously in Haiti today after Ambassador Gerald Drew strongly protested the death of a U.S. citizen apparently beaten while under arrest. The death of Shibley Talamas, 30-year-old manager of a textile factory here, brought the United States into the turmoil which followed the presidential election Sept. 22 in the Caribbean Negro republic.

Drew protested Monday to Col. Louis Roumain, foreign minister of the ruling military junta. The ambassador later cautioned Americans to be careful and abide by the nation’s curfew.

Roumain had gone to the U.S. Embassy to present the government’s explanation of Talamas’ death, which occurred within eight hours of his arrest.

The ambassador said Roumain told him Talamas, son of U.S. citizens of Syrian extraction, was arrested early Sunday afternoon in connection with the shooting of four Haitian soldiers. The soldiers were killed by an armed band Sunday at Kenscoff, a mountain village 14 miles from this capital city.

Drew said Roumain “assured me that Talamas was not mistreated.”

While being questioned by police, Talamas tried to attack an officer and to reach a nearby machine gun, Roumain told Drew. He added that Talamas then was handcuffed and immediately died of a heart attack.

The embassy said three reliable sources reported Talamas was beaten sufficiently to kill him.

One of these sources said Talamas’ body bore severe bruises about the legs, chest, shoulders, and abdomen, and long incisions that might have been made in an autopsy.

A Haitian autopsy was said to have confirmed that Talamas died of a heart attack. The location of the body remained a mystery. It was not delivered immediately to relatives.

Talamas, 300-pound son of Mr. and Mrs. Antoine Talamas, first was detained in the suburb of Petionville. Released on his promise to report later to police, he surrendered to police at 2 p.m. Sunday in the presence of two U.S. vice-consuls. His wife, Frances Wilpula Talamas, formerly of Ashtabula, Ohio, gave birth to a child Sunday.

Police said they found a pistol and shotgun in Talamas’ business office. Friends said he had had them for years.

Before seeing Roumain Monday, Drew tried to protest to Brig. Gen. Antonio Kébreau, head of the military junta, but failed in the attempt. An aide told newsmen that Kébreau could not see them because he had a “tremendous headache.”

Drew issued a special advisory to personnel of the embassy and U.S. agencies and to about 400 other Americans in Haiti. He warned them to stay off the streets during the curfew — 8 p.m. to 4 a.m. — except for emergencies and official business.

Troops and police have blockaded roads and sometimes prevented Americans getting to and from their homes. Americans went to their homes long ahead of the curfew hour Monday night. Some expressed fear that Talamas’ death might touch off other incidents.

Calm generally prevailed in the country. Police continued to search for losing presidential candidate Louis Déjoie, missing since the election. His supporters have threatened violence and charged that the military junta rigged the election for Dr. François Duvalier, a landslide winner in unofficial returns.

Official election results will be announced next Tuesday. Duvalier is expected to assume the presidency Oct. 14.

The Onion, had it existed at the time, couldn’t have done a better job of satirizing the farcical spectacle of a Haitian election. And yet all this appeared in a legitimate news report, from the losing candidate who mysteriously disappeared to the prisoner who supposedly dropped dead of a heart attack as soon as his guards put the handcuffs on him — not to mention the supreme leader with a headache, which might just be my favorite detail of all. Again: what does one do with a place like this, a place so corrupt for so long that corruption has become inseparable from its national culture?

But Shades of Gray is merciless. In the penultimate turn, it demands that you answer that question — at least this one time, in a very specific circumstance. Still playing the role of the hapless academic Austin Garriot, you’ve found a briefcase with all the details of the CIA’s plot to kill the Haitian foreign minister and initiate a top-secret policy of regime change in the country. The CIA’s contracted assassin, the man who lost the briefcase in the first place, is a cold fish named Charles Calthrop. He’s working together with Michel Matalas, vengeance-seeking brother of the recently deceased Shibley Matalas (né Talamas), and David Thomas, the CIA’s bureau chief in Haiti; they all want you to return the briefcase to them and forget that you ever knew anything about it. But two FBI agents, named Smith and Wesson (ha, ha…), have gotten wind of the briefcase’s contents, and want you to give it to them instead so they can stop the conspiracy in its tracks.

So, you are indeed free to take the course of action I’ve already described: give the briefcase to the FBI, and thereby foil the plot and strike a blow for international law. This will cause the bloody late-twentieth-century history of Haiti that we know from our own timeline to play out unaltered, as Papa Doc consolidates his grip on the country unmolested by foreign interventions.

Evil in a bow tie: François Duvalier at the time of the 1957 election campaign. Who would have guessed that this unassuming character would become the worst single Haitian monster of the twentieth century?

Or you can choose not to turn over the briefcase, to let the CIA’s plot take its course. And what happens then? Well, this is how the game describes it…

Smith and Wesson were unable to provide any proof of the CIA’s involvement in Raimone’s killing, and they were censured by Hoover for the accusation.

The following Saturday, Colonel Louis Raimone died from a single rifle shot through the head as he disembarked from a plane in Mexico City. His assassin was never caught, nor was any foreign government ever implicated.

It was estimated that the shot that killed Raimone was fired from a distance of 450 yards, from a Lee Enfield .303 rifle. Very few professionals were capable of that accuracy over that distance; Charles Calthrop was one of the few, and the Lee Enfield was his preferred weapon.

Duvalier didn’t survive long as president. Without the riot equipment that Raimone had been sent to buy, he was unable to put down the waves of unrest that swept the country. The army switched its allegiance to the people, and he was overthrown in March 1958.

Duvalier lived out the rest of his life in exile in Paris, and died in 1964.

Daniel Fignolé returned to govern Haiti after Duvalier was ousted, and introduced an American-style democracy. He served three 5-year terms of office, and was one of Kennedy’s staunchest allies during the Cuban missile crisis. He is still alive today, an elder statesman of Caribbean politics.

His brother’s death having been avenged, Michel Matalas returned to his former job as a stockman in Philadelphia. He joined the army and died in Vietnam in 1968. His nephew, Shibley’s son Mattieu, still lives in Haiti.

David Thomas returned to Haiti in his role as vice-consul, and became head of the CIA’s Caribbean division. He provided much of the intelligence that allowed Kennedy to bluff the Russians during the Cuban missile crisis before returning to take up a senior post at Langley.

What we have here, then, is a question of ends versus means. In the universe of Shades of Gray, at least, carrying out an illegal assassination and interfering in another sovereign country’s domestic politics leads to a better outcome than the more straightforwardly ethical course of abiding by international law.

Ever since it exited World War II as the most powerful country in the world, the United States has been confronted with similar choices time and time again. It’s for this reason that Judith Pintar calls her and her colleagues’ game “a story about American history as much as it is about Haiti.” While its interference in Haiti on this particular occasion does appear to have been limited or nonexistent in our own timeline, we know that the CIA has a long history behind it of operations just like the one described in the game, most of which didn’t work out nearly so well for the countries affected. And we also know that such operations were carried out by people who really, truly believed that their ends did justify their means. What can we do with all of these contradictory facts? Shades of gray indeed.

Of course, Shades of Gray is a thought experiment, not a serious study in geopolitical outcomes. There’s very good reason to question whether the CIA, who saw Daniel Fignolé as a dangerously left-wing leader, would ever have allowed him to assume power once again; having already chosen to interfere in Haitian politics once, a second effort to keep Fignolé out of power would only have been that much easier to justify. (This, one might say, is the slippery slope of interventionism in general.) Even had he regained and subsequently maintained his grip on the presidency, there’s reason to question whether Fignolé would really have become the mechanism by which true democracy finally came to Haiti. The list of Haitian leaders who once seemed similarly promising, only to disappoint horribly, is long; it includes that arguably greatest Haitian monster of all, the mild-mannered country doctor named François Duvalier, alongside such more recent disappointments as Jean-Bertrand Aristide. Perhaps Haiti’s political problems really are cultural problems, and as such are not amenable to fixing by any one person. Or, as many a stymied would-be reformer has speculated over the years, perhaps there really is just something in the water down there, or a voodoo curse in effect, or… something.

So, Shades of Gray probably won’t help us solve the puzzle of Haiti. It does, however, provide rich food for thought on politics and ethics, on the currents of history and the winds of fate — and it’s a pretty good little text adventure too. Its greatest weakness is the AGT development system that was used to create it, whose flexibility is limited and whose parser leaves much to be desired. “Given a better parser and the removal of some of the more annoying puzzles,” writes veteran interactive-fiction reviewer Carl Muckenhoupt, “this one would easily rate five stars.” I don’t actually find the puzzles all that annoying, but do agree that the game requires a motivated player willing to forgive and sometimes to work around the flaws of its engine. Any player willing to do so, though, will be richly rewarded by this milestone in interactive-fiction history, the most important game in terms of the artistic evolution of the medium to appear between Infocom’s last great burst of formal experiments in 1987 and the appearance of Graham Nelson’s landmark Curses! in 1993. Few games in all the years of text-adventure history have offered more food for thought than Shades of Gray — a game that refuses to provide incontrovertible answers to the questions it asks, and is all the better for it.

In today’s Haiti, meanwhile, governments change constantly, but nothing ever changes. The most recent election as of this writing saw major, unexplained discrepancies between journalists’ exit polling and the official results, accompanied by the usual spasms of violence in the streets. Devastating earthquakes and hurricanes in recent years have only added to the impression that Haiti labors under some unique curse. On the bright side, however, it has been nearly a decade and a half since the last coup d’état, which is pretty good by Haitian standards. You’ve got to start somewhere, right?

(Sources: the books Red & Black in Haiti: Radicalism, Conflict, and Political Change 1934-1957 by Matthew J. Smith, Haiti: The Tumultuous History — From Pearl of the Caribbean to Broken Nation by Philippe Girard, and Haiti: The Aftershocks of History by Laurent Dubois; Life of June 3, 1957; Long Beach Press Telegram of October 1, 1957. My huge thanks go to Judith Pintar for indulging me with a long conversation about Shades of Gray and other topics. You can read more of our talk elsewhere on this site.

You can download Shades of Gray from the IF Archive. You can play it using the included original interpreter through DOSBox, or, more conveniently, with a modern AGT interpreter such as AGiliTY or — best of all in my opinion — the multi-format Gargoyle.)


  1. I do know her real name, but don’t believe it has ever been published in connection with Shades of Gray, and therefore don’t feel comfortable “outing” her here. 

 

Posted by on September 14, 2018 in Digital Antiquaria, Interactive Fiction

 


Agrippa (A Book of the Dead)

Is it the actor or the drama
Playing to the gallery?
Or is it but the character
Of any single member of the audience
That forms the plot
of each and every play?

“Hanging in the Gallery” by Dave Cousins

I was introduced to the contrast between art as artifact and art as experience by an episode of Northern Exposure, a television show which meant a great deal to my younger self. In “Burning Down the House,” Chris in the Morning, deejay for the town of Cicely, Alaska, has decided to fling a living cow through the air using a trebuchet. Why? To create a “pure moment.”

“I didn’t know what you are doing was art,” says Shelley, the town’s good-hearted bimbo. “I thought it had to be in a frame, or like Jesus and Mary and the saints in church.”

“You know, Shell,” answers Chris in his insufferable hipster way, “the human soul chooses to express itself in a profound profusion of ways, not just the plastic arts.”

“Plastic hearts?”

“Arts! Plastic arts! Like sculpture, painting, charcoal. Then there’s music and poetry and dance. Lots of people, Susan Sontag notwithstanding, include photography.”

“Slam dancing?”

“Insofar as it reflects the slam dancer’s inner conflict with society through the beat… yeah, sure, why not? You see, Shelley, what I’m dealing with is the aesthetics of the transitory. I’m creating tomorrow’s memories, and, as memories, my images are as immortal as art which is concrete.”

Certain established art forms — those we generally refer to as the performing arts — have this quality baked into them in an obvious way. Keith Richards of the Rolling Stones once made the seemingly arrogant pronouncement that his band was “the greatest rock-and-roll band in the world” — but later modified his statement by noting that “on any given night, it’s a different band that’s the greatest rock-and-roll band in the world.” It might be the Rolling Stones playing before an arena full of 20,000 fans one night, and a few sweaty teenagers playing for a cellar full of twelve the next. It has nothing to do with the technical skill of the musicians; music is not a skills competition. A band rather becomes the greatest rock-and-roll band in the world the moment when the music goes someplace that transcends notes and measures. This is what the ancient Greeks called the kairos moment: the moment when past and future and thought itself fall away and there are just the band, the audience, and the music.

But what of what Chris in the Morning calls the “plastic arts,” those oriented toward producing some physical (or at least digital) artifact that will remain in the world long after the artist has died? At first glance, the kairos moment might seem to have little relevance here. Look again, though. Art must always be an experience, in the sense that there is a viewer, a reader, or a player who must experience it. And the meaning it takes on for that person — or lack thereof — will always be profoundly colored by where she was, who she was, when she was at the time. You can, in other words, find your own transitory transcendence inside the pages of a book just as easily as you can in a concert hall.

The problem with the plastic arts is that it’s too easy to destroy the fragile beauty of that initial impression. It’s too easy to return to the text trying to recapture the transcendent moment, too easy to analyze it and obsess over it and thereby to trample it into oblivion.

But what if we could jettison the plastic permanence from one of the plastic arts, creating something that must live or die — like a rock band in full flight or Chris in the Morning’s flying cow — only as a transitory transcendence? What if we could write a poem which the reader couldn’t return to and fuss over and pin down like a butterfly in a display case? What if we could write a poem that the reader could literally only read one time, that would flow over her once and leave behind… what? As it happens, an unlikely trio of collaborators tried to do just that in 1992.



Very early that year, a rather strange project prospectus made the rounds of the publishing world. Its source was Kevin Begos, Jr., who was known, to whatever extent he was known at all, as a publisher of limited-edition art books for the New York City gallery set. This new project, however, was something else entirely, and not just because it involved the bestselling science-fiction author William Gibson, who was already ascending to a position in the mainstream literary pantheon as “the prophet of cyberspace.”

Kevin Begos Jr., publisher of museum-quality, limited edition books, has brought together artist Dennis Ashbaugh (known for his large paintings of computer viruses and his DNA “portraits”) and writer William Gibson (who coined the term cyberspace, then explored the concept in his award-winning books Neuromancer, Count Zero, and Mona Lisa Overdrive) to produce a collaborative Artist’s Book.

In an age of artificial intelligence, recombinant genetics, and radical, technologically-driven cultural change, this “Book” will be as much a challenge as a possession, as much an enigma as a “story”.

The Text, encrypted on a computer disc along with a Virus Program written especially for the project, will mutate and destroy itself in the course of a single “reading”. The Collector/Reader may either choose to access the Text, thus setting in motion a process in which the Text becomes merely a Memory, or preserve the Text unread, in its “pure” state — an artifact existing exclusively in cyberspace.

Ashbaugh’s etchings, which allude to the potent allure and taboo of Genetic Manipulation, are both counterpoint and companion-piece to the Text. Printed on beautiful rag paper, their texture, odor, form, weight, and color are qualities unavailable to the Text in cyberspace. (The etchings themselves will undergo certain irreparable changes following their initial viewing.)

This Artist’s Book (which is not exactly a “book” at all) is cased in a wrought metal box, the Mechanism, which in itself becomes a crucial, integral element of the Text. This book-as-object raises unique questions about Art, Time, Memory, Possession—and the Politics of Information Control. It will be the first Digital Myth.

William Gibson had been friends with Dennis Ashbaugh for some time, ever since the latter had written him an admiring letter a few years after his landmark novel Neuromancer was published. The two men worked in different mediums, but they shared an interest in the transformations that digital technology and computer networking were bringing to society. They corresponded regularly, although they met only once in person.

Yet it was neither Gibson the literary nor Ashbaugh the visual artist who conceived their joint project’s central conceit; it was instead none other than the author of the prospectus above, publisher Kevin Begos, Jr., another friend of Ashbaugh. Ashbaugh, who like Begos was based in New York City, had been looking for a way to collaborate with Gibson, and came to his publisher friend looking for ideas that might be compelling enough to interest such a high-profile science-fiction writer, who lived all the way over in Vancouver, Canada, just about as far away as it was possible to get from New York City and still be in North America. “The idea kind of came out of the blue,” says Begos: “to do a book on a computer disk that destroys itself after you read it.” Gibson, Begos thought, would be the perfect writer to whom to pitch such a project, for he innately understood the kairos moment in art; his writing was thoroughly informed by the underground rhythms of the punk and new-wave music scenes. And, being an acknowledged fan of experimental literature like that written by his hero William S. Burroughs, he wasn’t any stranger to conceptual literary art of the sort which this idea of a self-destroying text constituted.

Even so, Begos says that it took him and Ashbaugh a good six to nine months to convince Gibson to join the project. Even after agreeing to participate, Gibson proved to be the most passive of the trio by far, providing the poem that was to destroy itself early on but then doing essentially nothing else after that. It’s thus ironic and perhaps a little unfair that the finished piece remains today associated almost exclusively with the name of William Gibson. If one person can be said to be the mastermind of the project as a whole, that person must be Kevin Begos, Jr., not William Gibson.

Begos, Ashbaugh, and Gibson decided to call their art project Agrippa (A Book of the Dead), adopting the name Gibson gave to his poem for the project as a whole. Still, there was, as the prospectus above describes, much more to it than the single self-immolating disk which contained the poem. We can think of the whole artwork as being split into two parts: a physical component, provided by Ashbaugh, and a digital component, provided by Gibson, with Begos left to tie them together. Both components were intended to be transitory in their own ways. (Their transcendence, of course, must be in the eye of the beholder.)

Begos said that he would make and sell just 455 copies of the complete work, ranging in price from $450 for the basic edition to $7500 for a “deluxe copy in a bronze case.” The name of William Gibson lent what would otherwise have been just a wacky avant-garde art project a great deal of credibility with the mainstream press. It was discussed far and wide in the spring and summer of 1992, finding its way into publications like People, Entertainment Weekly, Esquire, and USA Today long before it existed as anything but a set of ideas inside the minds of its creators. A reporter for Details magazine repeated the description of a Platonic ideal of Agrippa that Begos relayed to him from his fond imagination:

‘Agrippa’ comes in a rough-hewn black box adorned with a blinking green light and an LCD readout that flickers with an endless stream of decoded DNA. The top opens like a laptop computer, revealing a hologram of a circuit board. Inside is a battered volume, the pages of which are antique rag-paper, bound and singed by hand.

Like a frame of unprocessed film, ‘Agrippa’ begins to mutate the minute it hits the light. Ashbaugh has printed etchings of DNA nucleotides, but then covered them with two separate sets of drawings: One, in ultraviolet ink, disappears when exposed to light for an hour; the other, in infrared ink, only becomes visible after an hour in the light. A paper cavity in the center of the book hides the diskette that contains Gibson’s fiction, digitally encoded for the Macintosh or the IBM.

[…]

The disk contained Gibson’s poem Agrippa: “The story scrolls on the screen at a preset pace. There is no way to slow it down, speed it up, copy it, or remove the encryption that ultimately causes it to disappear.” Once the text scrolled away, the disk got wiped, and that was that. All that would be left of Agrippa was the reader’s memory of it.
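The run-once behavior described above is simple to model. The following is a toy sketch in Python, not the actual 1992 Macintosh code; the `run_once` name, the pacing, and the wipe-by-overwrite approach are all invented for illustration of the basic idea.

```python
import time

def run_once(path: str, delay: float = 0.5) -> None:
    """Scroll a text at a fixed pace, then destroy it behind the reader."""
    with open(path, "r") as f:
        lines = f.readlines()
    for line in lines:
        print(line, end="")
        time.sleep(delay)  # preset pace: no slowing down, speeding up, or rewinding
    # Overwrite the file in place, so the text survives only in memory.
    with open(path, "w") as f:
        f.write("\x00" * sum(len(line) for line in lines))
```

The real disk, of course, also had to resist copying and hex-editing, which is where the encryption discussed below came in; this sketch captures only the one-reading-then-gone contract.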

The three tricksters delighted over the many paradoxes of their self-destroying creation with punk-rock glee. Ashbaugh laughed about having to send two copies of it to the copyright office — because to register it for a copyright, you had to read it, but when you read it you destroyed it. Gibson imagined some musty academic of the future trying to pry the last copy out of the hands of a collector so he could read it — and thereby destroy it definitively for posterity. He described it as “a cruel joke on book collectors.”

As I’ve already noted, Ashbaugh’s physical side of the Agrippa project was destined to be overshadowed by Gibson’s digital side, to the extent that the former is barely remembered at all today. Part of the problem was the realities of working with physical materials, which conspired to undo much of the original vision for the physical book. The LCD readout and the circuit-board hologram fell by the wayside, as did Ashbaugh’s materializing and de-materializing pictures. (One collector has claimed that the illustrations “fade a bit” over time, but one does have to wonder whether even that is wishful thinking.)

But the biggest reason that one aspect of Agrippa so completely overshadowed the other was ironically the very thing that got the project noticed at all in so many mainstream publications: William Gibson’s fame in comparison to his unknown collaborators. People magazine didn’t even bother to mention that there was anything to Agrippa at all beyond the disk; “I know Ashbaugh was offended by that,” says Begos. Unfortunately obscured by this selective reporting was an intended juxtaposition of old and new forms of print, a commentary on evolving methods of information transmission. Begos was as old-school as publishers got, working with a manual printing press not very dissimilar from the one invented by Gutenberg; each physical edition of Agrippa was a handmade objet d’art. Yet all most people cared about was the little disk hidden inside it.

So, even as the media buzzed with talk about the idea of a digital poem that could only be read once, Begos had a hell of a time selling actual, physical copies of the book. As of December of 1992, a few months after it went to press, Begos said he still had about 350 copies of it sitting around waiting for buyers. It seems unlikely that most of these were ever sold; they were quite likely destroyed in the end, simply because the demand wasn’t there. Begos relates a typical anecdote:

There was a writer from a newspaper in the New York area who was writing something on Agrippa. He was based out on Long Island and I was based in Manhattan. He sent a photographer to photograph the book one afternoon. And he’d done a phone interview with me, though I don’t remember if he called Gibson or not. He checked in with me after the photographer had come to make sure that it had gone alright, and I said yes. I said, “Well aren’t you coming by; don’t you want to see the book?” He said “No; you know, the traffic’s really bad; you know, I just don’t have time.” He published his story the next day, and there was nothing wrong with it, but I found that very odd. It probably would have taken him an hour to drive in, or he could have waited a few days. But some people, they almost seemed resistant to seeing the whole package.

It’s inevitable, given the focus of this site, that our interest too will largely be captured by the digital aspect of the work. Yet the physical artwork — especially the full-fledged $7500 edition — certainly is an interesting creation in its own right. Rather than looking sleek and modern, as one might expect from the package framing a digital text from the prophet of cyberpunk, it looks old — mysteriously, eerily old. “There’s a little bit of a dark side to the Gibson story and the whole mystery about it and the whole notion of a book that destroys itself, a text that destroys itself after you read it,” notes Begos. “So I thought that was fitting.” It smacks of ancient tomes full of forbidden knowledge, like H.P. Lovecraft’s Necronomicon, or the Egyptian Book of the Dead to which its parenthetical title seems to pay homage. Inside was to be found abstract imagery and, in lieu of conventional text, long strings of numbers and characters representing the gene sequence of the fruit fly. And then of course there was the disk, nestled into its little pocket at the back.

The deluxe edition of Agrippa is housed in this box, made out of fiberglass and paper and “distressed” by hand.

The book is inside a shroud and another case. Its title has been burned into it by hand.

The book’s 64 hand-cut pages combine long chunks of the fruit-fly genome alongside Dennis Ashbaugh’s images evocative of genetics — and occasional images, such as the pistol above, drawn from Gibson’s poem “Agrippa.”

The last 20 pages have been glued together — as usual, by hand — and a pocket cut out of them to hold the disk.

But it was, as noted, the contents of the disk that really captured the public’s imagination, and that’s where we’ll turn our attention now.

William Gibson’s contribution to the project is an autobiographical poem of approximately 300 lines and 2000 words. The poem called “Agrippa” is named after something far more commonplace than its foreboding packaging might imply. “Agrippa” was actually the brand name of a type of photo album which was sold by Kodak in the early- and mid-twentieth century. Gibson’s poem begins as he has apparently just discovered such an artifact — “a Kodak album of time-burned black construction paper” — in some old attic or junk room. What follows is a meditation on family and memory, on the roots of things that made William Gibson the man he is now. There’s a snapshot of his grandfather’s Appalachian sawmill; there’s a pistol from some semi-forgotten war; there’s a picture of downtown Wheeling, West Virginia, 1917; there’s a magazine advertisement for a Rocket 88; there’s the all-night bus station in Wytheville, Virginia, where a young William Gibson used to go to buy cigarettes for his mother, and from which a slightly older one left for Canada to avoid the Vietnam draft and take up the life of an itinerant hippie.

Gibson is a fine writer, and “Agrippa” is a lovely, elegiac piece of work which stands on its own just fine as plain old text on the page when it’s divorced from all of its elaborate packaging and the work of conceptual art that was its original means of transmission. (Really, it does: go read it.) It was also the least science-fictional thing he had written to date — quite an irony in light of all the discussion about publication in the age of cyberspace that swirled around it. But then, the ironies truly pile up in layers when it comes to this artistic project. It was ironically appropriate that William Gibson, a famously private person, should write something so deeply personal only in the form of a poem designed to disappear as soon as it had been read. And perhaps the supreme irony was this disappearing poem’s interest in the memories encoded by permanent artifacts like an old photo album, an old camera, or an old pistol. This interest in the way that everyday objects come to embody our collective memory would go on to become a recurring theme in Gibson’s later, more mature, less overtly cyberpunky novels. See, for example, the collector of early Sinclair microcomputers who plays a prominent role in 2003’s Pattern Recognition, in my opinion Gibson’s best single novel to date.

But of course it wasn’t as if the public’s interest in Agrippa was grounded in literary appreciation of Gibson’s poem, any more than it was in artistic appreciation of the physical artwork that surrounded it. All of that was rather beside the point of the mainstream narrative — and thus we still haven’t really engaged with the reason that Agrippa was getting write-ups in the likes of People magazine. Beyond the star value lent the project by William Gibson, all of the interest in Agrippa was spawned by this idea of a text — it could have been any text packaged in any old way, if we’re being brutally honest — that consumed itself as it was being read. This aspect of it seemed to have a deep resonance with things that were currently happening in society writ large, even if few could clarify precisely what those things were in a world perched on the precipice of the Internet Age. And, for all that the poem itself belied his reputation as a writer of science fiction, this aspect of Agrippa also resonated with the previous work of William Gibson, the mainstream media’s go-to spokesman for the (post)modern condition.

Enter, then, the fourth important contributor to Agrippa, a shadowy character who has chosen to remain anonymous to this day and whom we shall therefore call simply the Hacker. He apparently worked at Bolt, Beranek, and Newman, a Boston consulting firm with a rich hacking heritage (Will Crowther of Adventure fame had worked there), and was a friend of Dennis Ashbaugh. Kevin Begos, Jr., contracted with him to write the code for Gibson’s magical disappearing poem. “Dealing with the hacker who did the program has been like dealing with a character from one of your books,” wrote Begos to Gibson in a letter.

The Hacker spent most of his time not coding the actual display of the text — a trivial exercise — but rather devising an encryption scheme to make it impenetrable to the inevitable army of hex-editor-wielding compatriots who would try to extract the text from the code surrounding it. “The encryption,” he wrote to Begos, “has a very interesting feature in that it is context-sensitive. The value, both character and numerical, of any given character is determined by the characters next to it, which from a crypto-analysis or code-breaking point of view is an utter nightmare.”
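The Hacker’s description of context sensitivity can be made concrete with a small sketch. What follows is a hypothetical illustration in Python, not a reconstruction of his actual scheme: chaining each output byte through the previous one means the same plaintext character encrypts differently depending on what surrounds it, which is exactly the property that frustrates simple frequency analysis.

```python
def encrypt(data: bytes, key: int = 0x5A) -> bytes:
    # Mix each byte with the previous ciphertext byte, so a character's
    # encrypted value depends on its context, not just its identity.
    out, prev = [], key
    for b in data:
        c = b ^ prev
        out.append(c)
        prev = c
    return bytes(out)

def decrypt(data: bytes, key: int = 0x5A) -> bytes:
    # Reverse the chaining: XOR each byte with the preceding ciphertext byte.
    out, prev = [], key
    for c in data:
        out.append(c ^ prev)
        prev = c
    return bytes(out)
```

A run of identical plaintext bytes comes out of `encrypt` as varying ciphertext bytes, illustrating why a hex-editor-wielding attacker can’t just scan the disk for recognizable English text.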

The Hacker also had to devise a protection scheme to prevent people from simply copying the disk, then running the program from the copy. He tried to add digitized images of some of Ashbaugh’s art to the display, which would have had a welcome unifying effect on an artistic statement that too often seemed to reflect the individual preoccupations of Begos, Ashbaugh, and Gibson rather than a coherent single vision. In the end, however, he gave that scheme up as technically unfeasible. Instead he settled for a few digitized sound effects and a single image of a Kodak Agrippa photo album, displayed as the title screen before the text of the poem began to scroll. Below you can see what he ended up creating, exactly as it would have appeared to someone foolhardy enough to put the disk into her Macintosh back in 1992.


The denizens of cyberspace, many of whom regarded William Gibson more as a god than a prophet, were naturally intrigued by Agrippa from the start, not least thanks to the implicit challenge it presented to crack the protection and thus turn this artistic monument to impermanence into its opposite. The Hacker sent Begos samples of the debates raging on the pre-World Wide Web Internet as early as April of 1992, months before the book’s publication.

“I just read about William Gibson’s new book Agrippa (The Book of the Dead),” wrote one netizen. “I understand it’s going to be published on disk, with a virus that prevents it from being printed out. What do people think of this idea?”

“I seem to recall reading that this stuff about the virus-loaded book was an April Fools joke started here on the Internet,” replied another. “But nobody’s stopped talk about it, and even Tom Maddox, who knows Gibson, seemed to confirm its existence. Will the person who posted the original message please confirm or confess? Was this an April Fools joke or not?”

The Tom Maddox in question, who was indeed personally acquainted with Gibson, replied that the disappearing text “was part of a limited-edition, expensive artwork that Gibson believes was totally subscribed before ‘publication.’ Someone will publish it in more accessible form, I believe (and it will be interesting to see what the cyberpunk audience makes of it — it’s an autobiographical poem, about ten pages long).”

“What a strange world we live in,” concluded another netizen. Indeed.

The others making Agrippa didn’t need the Hacker to tell them with what enthusiasm the denizens of cyberspace would attack his code, vying for the cred that would come with being the first to break it. John Perry Barlow, a technology activist and co-founder of the Electronic Frontier Foundation, told Begos that unidentified “friends of his vow to buy and then run Agrippa through a Cray supercomputer to capture the code and crack the program.”

And yet for the first few months after the physical book’s release it remained uncracked. The thing was just so darn expensive, and the few museum curators and rare-books collectors who bought copies neither ran in the same circles as the hacking community nor were likely to entrust their precious disks to one of them.

Interest in the digital component of Agrippa remained high in the press, however, and, just as Tom Maddox had suspected all along, the collaborators eventually decided to give people unwilling to spend hundreds or thousands of dollars on the physical edition a chance to read — and to hear — William Gibson’s poem through another ephemeral electronic medium. On December 9, 1992, the Americas Society of New York City hosted an event called “The Transmission,” in which the magician and comedian Penn Jillette read the text of the poem as it scrolled across a big screen, bookended by question-and-answer sessions with Kevin Begos, Jr., the only member of the artistic trio behind Agrippa to appear at the event. The proceedings were broadcast via a closed-circuit satellite hookup to, as the press release claimed, “a street-corner shopfront on the Lower East Side, the Michael Carlos Museum in Atlanta, the Kitchen in New York City, a sheep farm in the Australian Outback, and others.” Continuing with the juxtaposition of old and new that had always been such a big thematic part of the Agrippa project — if a largely unremarked one — the press release pitched the event as a return to the days when catching a live transmission of one form or another had been the only way to hear a story, an era that had been consigned to the past by the audio- and videocassette.

When did you last hear Hopalong Cassidy on his NBC radio program? When did you last read to your children around a campfire? Have you been sorry that your busy schedule prevented a visit to the elders’ mud hut in New Guinea, where legends of times past are recounted? Have you ever looked closely at your telephone cable to determine exactly how voices and images can come out of the tiny fibers?

Naturally, recording devices were strictly prohibited at the event. Agrippa was still intended to be an ephemeral kairos moment, just like the radio broadcasts of yore.

Of course, it had always been silly to imagine that all traces of the poem could truly be blotted from existence after it had been viewed and/or heard by a privileged few. After all, people reading it on their monitor screens at home could buy video cameras too. Far from denying this reality, Begos imagined an eventual underground trade in fuzzy Agrippa videotapes, much like the bootleg concert tapes traded among fans of Bob Dylan and the Grateful Dead. Continuing with the example set by those artists, he imagined the bootleg trade being more likely to help than to hurt Agrippa’s cultural cachet. But it would never come to that — for, despite Begos’s halfhearted precautions, the Transmission itself was captured as it happened.

Begos had hired a trio of student entrepreneurs from New York University’s Interactive Television Program to run the technical means of transmission of the Transmission. They went by the fanciful names of “Templar, Rosehammer, and Pseudophred” — names that could have been found in the pages of a William Gibson novel, and that should therefore have set off warning bells in the head of one Kevin Begos, Jr. Sure enough, the trio slipped a videotape into the camera broadcasting the proceedings. The very next morning, the text of the poem appeared on an underground computer bulletin board called MindVox, preceded by the following introduction:

Hacked & Cracked by
-Templar-
Rosehammer & Pseudophred
Introduction by Templar

When I first heard about an electronic book by William Gibson… sealed in an ominous tome of genetic code which smudges to the touch… which is encrypted and automatically self-destructs after one reading… priced at $1,500… I knew that it was a challenge, or dare, that would not go unnoticed. As recent buzzing on the Internet shows, as well as many overt attempts to hack the file… and the transmission lines… it’s the latest golden fleece, if you will, of the hacking community.

I now present to you, with apologies to William Gibson, the full text of AGRIPPA. It, of course, does not include the wonderful etchings, and I highly recommend purchasing the original book (a cheaper version is now available for $500). Enjoy.

And I’m not telling you how I did it. Nyah.

As Matthew Kirschenbaum, the foremost scholar of Agrippa, points out, there’s a delicious parallel to be made with the opening lines of Gibson’s 1981 short story “Johnny Mnemonic,” the first fully realized piece of cyberpunk literature he or anyone else ever penned: “I put the shotgun in an Adidas bag and padded it out with four pairs of tennis socks, not my style at all, but that was what I was aiming for: If they think you’re crude, go technical; if they think you’re technical, go crude. I’m a very technical boy. So I decided to get as crude as possible.” Templar was happy to let people believe he had reverse-engineered the Hacker’s ingenious encryption, but in reality his “hack” had consisted only of a fortuitous job contract and a furtively loaded videotape. Whatever works, right? “A hacker always takes the path of least resistance,” said Templar years later. “And it is a lot easier to ‘hack’ a person than a machine.”

Here, then, is one more irony to add to the collection. Rather than John Perry Barlow’s Cray supercomputer, rather than some genius hacker Gibson would later imagine had “cracked the supposedly uncrackable code,” rather than the “international legion of computer hackers” which the journal Cyberreader later claimed had done the job, Agrippa was “cracked” by a cameraman who caught a lucky break. Within days, it was everywhere in cyberspace. Within a month, it was old news online.

Before Kirschenbaum uncovered the real story, it had indeed been assumed for years, even by the makers of Agrippa, that the Hacker’s encryption had been cracked, and that this had led to its widespread distribution on the Internet — led to this supposedly ephemeral text becoming as permanent as anything in our digital age. In reality, though, it appears that the Hacker’s protection wasn’t cracked at all until long after it mattered. In 2012, the University of Toronto sponsored a contest to crack the protection, which was won in fairly short order by one Robert Xiao. Without taking anything away from his achievement, it should be noted that he had access to resources — including emulators, disk images, and exponentially more sheer computing power — of which someone trying to crack the program on a real Macintosh in 1992 could hardly even have conceived. No protection is unbreakable, but the Hacker’s was certainly unbreakable enough for its purpose.

And so, with Xiao’s exhaustive analysis of the Hacker’s protection (“a very straightforward in-house ‘encryption’ algorithm that encodes data in 3-byte blocks”), the last bit of mystery surrounding Agrippa has been peeled away. How, we might ask at this juncture, does it hold up as a piece of art?
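To give a flavor of what a “straightforward in-house algorithm that encodes data in 3-byte blocks” might look like, here is a deliberately made-up stand-in in Python: the block size matches Xiao’s description, but the particular rotation and XOR mask are invented for illustration and bear no relation to the real cipher’s internals.

```python
MASK = 0x3F  # arbitrary mask, chosen purely for this illustration

def encode(data: bytes) -> bytes:
    # Pad to a multiple of 3, then rotate and mask each 3-byte block.
    data += bytes((-len(data)) % 3)
    out = bytearray()
    for i in range(0, len(data), 3):
        a, b, c = data[i:i + 3]
        out += bytes([c ^ MASK, a ^ MASK, b ^ MASK])
    return bytes(out)

def decode(data: bytes) -> bytes:
    # Undo the mask and rotation block by block.
    out = bytearray()
    for i in range(0, len(data), 3):
        x, y, z = data[i:i + 3]
        out += bytes([y ^ MASK, z ^ MASK, x ^ MASK])
    return bytes(out)
```

A scheme like this is trivial to break once you can inspect the program at leisure in an emulator, which is precisely the advantage Xiao had over any would-be cracker of 1992.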

My own opinion is that, when divorced from its cultural reception and judged strictly as a self-standing artwork of the sort we might view in a museum, it doesn’t hold up all that well. This was a project pursued largely through correspondence by three artists who were all chasing somewhat different thematic goals, and it shows in the end result. It’s very hard to construct a coherent narrative of why all of these different elements are put together in this way. What do Ashbaugh’s DNA texts and paintings really have to do with Gibson’s meditation on family memory? (Begos made a noble attempt to answer that question at the Transmission, claiming that recordings of DNA strands would somehow become the future’s version of family snapshots — but if you’re buying that, I have some choice swampland to sell you.) And then, why is the whole thing packaged to look like H.P. Lovecraft’s Necronomicon? Rather than a unified artistic statement, Agrippa is a hodgepodge of ideas that too often pull against one another.

But is it really fair to divorce Agrippa so completely from its cultural reception all those years ago? Or, to put it another way, is it fair to judge Agrippa the artwork based solely upon Agrippa the slightly underwhelming material object? Matthew Kirschenbaum says that “the practical failure to realize much of what was initially planned for Agrippa allowed the project to succeed by leaving in its place the purest form of virtual work — a meme rather than an artifact.” He goes on to note that Agrippa is “as much conceptual art as anything else.” I agree with him on both points, as I do with the online commenter from back in the day who called it “a piece of emergent performance art.” If art truly lives in our memory and our consciousness, then perhaps our opinion of Agrippa really should encompass the whole experience, including its transmission and its reception. Certainly this is the theory that underlies the whole notion of conceptual art —  whether the artwork in question involves flying cows or disappearing poems.

It’s ironic — yes, there’s that word again — to note that Agrippa was once seen as an ominous harbinger of the digital future in the way that it showed information, divorced from physical media, simply disappearing into the ether, when the reality of the digital age has led to exactly the opposite problem, with every action we take and every word we write online being compiled into a permanent record of who we supposedly are — a slate which we can never wipe clean. And this digital permanence has come to apply to the poem “Agrippa” as well, which today is never more than a search query away. Gibson:

The whole thing really was an experiment to see just what would happen. That whole Agrippa project was completely based on “let’s do this. What will happen?” Something happens. “What’s going to happen next?”

It’s only a couple thousand words long, and dangerously like poetry. Another cool thing was getting a bunch of net-heads to sit around and read poetry. I sort of liked that.

Having it wind up in permanent form, sort of like a Chinese Wall in cyberspace… anybody who wants to can go and read it, if they take the trouble. Free copies to everyone. So that it became, really, at the last minute, the opposite of the really weird, elitist thing many people thought it was.

So, Agrippa really was as uncontrollable and unpredictable for its creators as it was for anyone else. Notably, nobody made any money whatsoever off it, despite all the publicity and excitement it generated. In fact, Begos calls it a “financial disaster” for his company; the fallout soon forced him to abandon publishing altogether.

“Gibson thinks of it [Agrippa] as becoming a memory, which he believes is more real than anything you can actually see,” said Begos in a contemporary interview. Agrippa did indeed become a collective kairos moment for an emerging digital culture, a memory that will remain with us for a long, long time to come. Chris in the Morning would be proud.

(Sources: the book Mechanisms: New Media and the Forensic Imagination by Matthew G. Kirschenbaum; Starlog of September 1994; Details of June 1992; New York Times of November 18, 1992. Most of all, The Agrippa Files of The University of California Santa Barbara, a huge archive of primary and secondary sources dealing with Agrippa, including the video of the original program in action on a vintage Macintosh.)

 

Posted on September 7, 2018 in Digital Antiquaria, Interactive Fiction

 


The Games of Windows

There are two stories to be told about games on Microsoft Windows during the operating environment’s first ten years on the market. One of them is extremely short, the other a bit longer and far more interesting. We’ll dispense with the former first.

During the first half of the aforementioned decade — the era of Windows 1 and 2 — the big game publishers, like most of their peers making other kinds of software, never looked twice at Microsoft’s GUI. Why should they? Very few people were even using the thing.

Yet even after Windows 3.0 hit the scene in 1990 and makers of other kinds of software stampeded to embrace it, game publishers continued to turn up their noses. The Windows API made life easier in countless ways for makers of word processors, spreadsheets, and databases, allowing them to craft attractive applications with a uniform look and feel. But it certainly hadn’t been designed with games in mind; they were so far down on Microsoft’s list of priorities as to be nonexistent. Games were in fact the one kind of software in which uniformity wasn’t a positive thing; gamers craved diverse experiences. As a programmer, you couldn’t even force a Windows game to go full-screen. Instead you were stuck all the time inside the borders of the window in which it ran; this, needless to say, didn’t do much for immersion. It was true that Windows’s library for programming graphics, known as the Graphics Device Interface, or GDI, liberated programmers from the tyranny of the hardware — from needing to program separate modules to interact properly with every video standard in the notoriously diverse MS-DOS ecosystem. Unfortunately, though, GDI was slow; it was fine for business graphics, but unusable for most of the popular game genres.

For all these reasons, game developers, alone among makers of software, stuck obstinately with MS-DOS throughout the early 1990s, even as everything else in mainstream computing went all Windows, all the time. It wouldn’t be until after the first decade of Windows was over that game developers would finally embrace it, helped along both by a carrot (Microsoft was finally beginning to pay serious attention to their needs) and a stick (the ever-expanding diversity of hardware on the market was making the MS-DOS bare-metal approach to programming untenable).

End of story number one.

The second, more interesting story about games on Windows deals with different kinds of games from the ones the traditional game publishers were flogging to the demographic who were happy to self-identify as gamers. The people who came to play these different kinds of games couldn’t imagine describing themselves in those terms — and, indeed, would likely have been somewhat insulted if you had suggested it to them. Yet they too would soon be putting in millions upon millions of hours every year playing games, albeit more often in antiseptic adult offices than in odoriferous teenage bedrooms. Whatever the label, the fact was that they were still playing games, and playing them enough to make Windows, that allegedly game-unfriendly operating environment, quite probably the most successful gaming platform of the early 1990s in terms of sheer number of person-hours spent playing. And all the while the “hardcore” gamers barely even noticed this most profound democratization of computer gaming that the world had yet seen.



Microsoft Windows, like its inspiration the Apple Macintosh, used what’s known as a skeuomorphic interface — an interface built out of analogues to real-world objects, such as paper documents, a desktop, and a trashcan — to present a friendlier face of computing to people who may have been uncomfortable with the blinking command prompt of yore. It thus comes as little surprise that most of the early Windows games were skeuomorphic as well, being computerized versions of non-threateningly old-fashioned card and board games. In this, they were something of a throwback to the earliest days of personal computing in general, when hobbyists passed around BASIC versions of these same hoary classics, whose simple designs constituted some of the only ones that could be made to fit into the minuscule memories of the first microcomputers. With Windows, it seemed, the old had become new again, as computer gaming started over to try to capture a whole new demographic.

The very first game ever programmed to run in Windows is appropriately prototypical. When Tandy Trower took over the fractious and directionless Windows project at Microsoft in January of 1985, he found that a handful of applets that weren’t, strictly speaking, a part of the operating environment itself had already been completed. These included a calculator, a rudimentary text editor, and a computerized version of a board game called Reversi.

Reversi is an abstract game for two players that looks a bit like checkers and plays like a faster-paced, simplified version of the Japanese classic Go. Its origins are somewhat murky, but it was first popularized as a commercial product in late Victorian England. In 1971, an enterprising Japanese businessman made a couple of minor changes to the rules of this game that had long been considered in the public domain, patented the result, and started selling it as Othello. Under this name, it enjoys modest worldwide popularity to this day. Under both of its names, it also became an early favorite on personal computers, where its simple rules and relatively constrained possibility space lent themselves well to the limitations of programming in BASIC on a 16 K computer; Byte magazine, the bible of early microcomputer hackers, published a type-in Othello as early as its October 1977 issue.

A member of the Windows team named Chris Peters had decided to write a new version of the game under its original (and non-trademarked) name of Reversi in 1984, largely as one of several experiments — proofs of concept, if you will — into Windows application programming. Tandy Trower then pushed to get some of his team’s experimental applets, among them Reversi, included with the first release of Windows in November of 1985:

When the Macintosh was announced, I noted that Apple bundled a small set of applications, which included a small word processor called MacWrite and a drawing application called MacPaint. In addition, Lotus and Borland had recently released DOS products called Metro and SideKick that consisted of a small suite of character-based applications that could be popped up with a keyboard combination while running other applications. Those packages included a simple text editor, a calculator, a calendar, and a business-card-like database. So I went to [Bill] Gates and [Steve] Ballmer with the recommendation that we bundle a similar set of applets with Windows, which would include refining the ones already in development, as well as a few more to match functions comparable to these other products.

Interestingly, MacOS did not include any games among its suite of applets, apart from a minimalist sliding-number puzzle that filled all of 600 bytes. Apple, whose Apple II was found in more schools and homes than businesses and who were therefore viewed with contempt by much of the conservative corporate computing establishment, ran scared from any association of their latest machine with games. But Microsoft, on whose operating system MS-DOS much of corporate America ran, must have felt they could get away with a little more frivolity.

Still, Windows Reversi didn’t ultimately make much of an impression on anyone. Reversi in general was a game more suited to the hacker mindset than the general public, lacking the immediate appeal of a more universally known design, while the execution of this particular version was competent but no more. And then, of course, very few people bought Windows 1 in the first place.

For a long time thereafter, Microsoft gave little thought to making more games for Windows. Reversi stuck around unchanged in the only somewhat more successful Windows 2, and was earmarked to remain in Windows 3.0 as well. Beyond that, Microsoft had no major plans for Windows gaming. And then, in one of the stranger episodes in the whole history of gaming, they were handed the piece of software destined to become almost certainly the most popular computer game of all time, reckoned in terms of person-hours played: Windows Solitaire.

The idea of a single-player card game, perfect for passing the time on long coach or railway journeys, had first spread across Europe and then the world during the nineteenth century. The game of Solitaire — or Patience, as it is still more commonly known in Britain — is really a collection of many different games that all utilize a single deck of everyday playing cards. The overarching name is, however, often used interchangeably with the variant known as Klondike, by far the most popular form of Solitaire.

Klondike Solitaire, like the game’s many other variants, has qualities that make it attractive for computer adaptation on a platform that gives limited scope for programmer ambition. Depending on how one chooses to define such things, a “game” of Solitaire is arguably more of a puzzle than an actual game, and that’s a good thing in this context: the fact that this is a truly single-player endeavor means that the programmer doesn’t have to worry about artificial intelligence at all. In addition, the rules are simple, and playing cards are fairly trivial to represent using even the most primitive computer graphics. Unsurprisingly, then, Solitaire was another favorite among the earliest microcomputer game developers.

It was for all the same reasons that a university student named Wes Cherry, who worked at Microsoft as an intern during the summer of 1988, decided to make a version of Klondike Solitaire for Windows that was similar to one he had spent a lot of time playing on the Macintosh. (Yes, even when it came to the games written by Microsoft’s interns, Windows could never seem to escape the shadow of the Macintosh.) There was, according to Cherry himself, “nothing great” about the code of the game he wrote; it was neither better nor worse than a thousand other computerized Solitaire games. After all, how much could you really do with Solitaire one way or the other? It either worked or it didn’t. Thankfully, Cherry’s did, and even came complete with a selection of cute little card backs, drawn by his girlfriend Leslie Kooy. Asked what was the hardest aspect of writing the game, he points today to the soon-to-be-iconic cascade of cards that accompanied victory: “I went through all kinds of hoops to get that final cascade as fast as possible.” (Here we have a fine example of why most game programmers held Windows in such contempt…) At the end of his summer internship, he put his Solitaire on a server full of games and other little experiments that Microsoft’s programmers had created while learning how Windows worked, and went back to university.

Months later, some unknown manager at Microsoft sifted through the same server and discovered Cherry’s Solitaire. It seems that Microsoft had belatedly started looking for a new game — something more interesting than Reversi — to include with the upcoming Windows 3.0, which they intended to pitch as hard to consumers as to businesspeople. They now decided that Solitaire ought to be that game. So, they put it through a testing process, getting Cherry to fix the bugs they found from his dorm room in return for a new computer. Meanwhile Susan Kare, the famed designer of MacOS’s look who was now working for Microsoft, gave Leslie Kooy’s cards a bit more polishing.

And so, when Windows 3.0 shipped in May of 1990, Solitaire was included. According to Microsoft, its purpose was to teach people how to use a GUI in a fun way, but that explanation was always something of a red herring. The fact was that computing was changing, machines were entering homes in big numbers once again, and giving people a fun game to play as part of an otherwise serious operating environment was no longer anathema. Certainly huge numbers of people would find Solitaire more than compelling enough as an end unto itself.

The ubiquity that Windows Solitaire went on to achieve — and still maintains to a large extent to this day1 — is as difficult to overstate as it is to quantify. Microsoft themselves soon announced it to be the “most used” Windows application of all, easily besting heavyweight businesslike contenders like Word, Excel, Lotus 1-2-3, and WordPerfect. The game became a staple of office life all over the world, to be hauled out during coffee breaks and down times, to be kept always lurking minimized in the background, much to the chagrin of officious middle managers. By 1994, a Washington Post article would ask, only half facetiously, if Windows Solitaire was sowing the seeds of “the collapse of American capitalism.”

“Yup, sure,” says Frank Burns, a principal in the region’s largest computer bulletin board, the MetaNet. “You used to see offices laid out with the back of the video monitor toward the wall. Now it’s the other way around, so the boss can’t see you playing Solitaire.”

“It’s swallowed entire companies,” says Dennis J. “Gomer” Pyles, president of Able Bodied Computers in The Plains, Virginia. “The water-treatment plant in Warrenton, I installed [Windows on] their systems, and the next time I saw the client, the first thing he said to me was, ‘I’ve got 2000 points in Solitaire.'”

Airplanes full of businessmen resemble not board meetings but video arcades. Large gray men in large gray suits — lugging laptops loaded with spreadsheets — are consumed by beating their Solitaire scores, flight attendants observe.

Some companies, such as Boeing, routinely remove Solitaire from the Windows package when it arrives, or, in some cases, demand that Microsoft not even ship the product with the game inside. Even PC Magazine banned game-playing during office hours. “Our editor wanted to lessen the dormitory feel of our offices. Advertisers would come in and the entire research department was playing Solitaire. It didn’t leave the best impression,” reported Tin Albano, a staff editor.

Such articles have continued to crop up from time to time in the business pages ever since — as, for instance, the time in 2006 when New York City Mayor Michael Bloomberg summarily terminated an employee for playing Solitaire on the job, creating a wave of press coverage both positive and negative. But the crackdowns have always been to no avail; it’s as hard to imagine the modern office without Microsoft Solitaire as it is to imagine it without Microsoft Office.

Which isn’t to say that the Solitaire phenomenon is limited to office life. My retired in-laws, who have quite possibly never played another computer game in either of their lives, both devote hours every week to Solitaire in their living room. A Finnish study from 2007 found it to be the favorite game of 36 percent of women and 13 percent of men; no other game came close to those numbers. Even more so than Tetris, that other great proto-casual game of the early 1990s, Solitaire is, to certain types of personality at any rate, endlessly appealing. Why should that be?

To begin to answer that question, we might turn to the game’s pre-digital past. Whitmore Jones’s Games of Patience for One or More Players, a compendium of many Solitaire variants, was first published in 1898. Its introduction is fascinating, presaging much of the modern discussion about Microsoft Solitaire and casual gaming in general.

In days gone by, before the world lived at the railway speed as it is doing now, the game of Patience was looked upon with somewhat contemptuous toleration, as a harmless but dull amusement for idle ladies, and was ironically described as “a roundabout method of sorting the cards”; but it has gradually won for itself a higher place. For now, when the work, and still more the worries, of life have so enormously increased and multiplied, the value of a pursuit interesting enough to absorb the attention without unduly exciting the brain, and so giving the mind a rest, as it were, a breathing space wherein to recruit its faculties, is becoming more and more recognised and appreciated.

In addition to illustrating how concerns about the pace of contemporary life and nostalgia for the good old days are an eternal part of the human psyche, this passage points to the heart of Solitaire’s appeal, whether played with real cards or on a computer: the way that it can “absorb the attention without unduly exciting the brain.” It’s the perfect game to play when killing time at the end of the workday, as a palate cleanser between one task and another, or, as in the case of my in-laws, as a semi-active accompaniment to the idle practice of watching the boob tube.

Yet Solitaire isn’t a strictly rote pursuit even for those with hundreds of hours of experience playing it; if it were, it would have far less appeal. Indeed, it isn’t even particularly fair. About 20 percent of shuffles will result in a game that isn’t winnable at all, and Wes Cherry’s original computer implementation at least does nothing to protect you from this harsh mathematical reality. Still, when you get stuck there’s always that “Deal” menu option waiting for you up there in the corner, a tempting chance to reshuffle the cards and try your hand at a new combination. So, while Solitaire is the very definition of a low-engagement game, it’s also a game that has no natural end point; somehow the “Deal” option looks equally tempting whether you’ve just won or just lost. After being sucked in by its comfortable similarity to an analog game of cards almost everyone of a certain age has played, people can and do proceed to keep playing it for a lifetime.

As in the case of Tetris, there’s room to debate whether spending so many hours upon such a repetitive activity as playing Solitaire is psychologically healthy. For my own part, I avoid it and similar “time waster” games as just that — a waste of time that doesn’t leave me feeling good about myself afterward. By way of another perspective, though, there is this touching comment that was once left by a Reddit user to Wes Cherry himself:

I just want to tell you that this is the only game I play. I have autism and don’t game due to not being able to cope with the sensory processing – but Solitaire is “my” game.

I have a window of it open all day, every day and the repetitive clicking is really soothing. It helps me calm down and mentally function like a regular person. It makes a huge difference in my quality of life. I’m so glad it exists. Never thought there would be anyone I could thank for this, but maybe I can thank you. *random Internet stranger hugs*

Cherry wrote Solitaire in Microsoft’s offices on company time, and thus it was always destined to be their intellectual property. He was never paid anything at all, beyond a free computer, for creating the most popular computer game in history. He says he’s fine with this. He’s long since left the computer industry, and now owns and operates a cider distillery on Vashon Island in Puget Sound.

The popularity of Solitaire convinced Microsoft, if they needed convincing, that simple games like this had a place — potentially a profitable place — in Windows. Between 1990 and 1992, they released four “Microsoft Entertainment Packs,” each of which contained seven little games of varying degrees of inspiration, largely cobbled together from more of the projects coded by their programmers in their spare time. These games were the polar opposite of the ones being sold by traditional game publishers, which were growing ever more ambitious, with increasingly elaborate storylines and increasing use of video and sound recorded from the real world. The games from Microsoft were instead cast in the mold of Cherry’s Solitaire: simple games that placed few demands on either their players or the everyday office computers Microsoft envisioned running them, as indicated by the blurbs on the boxes: “No more boring coffee breaks!”; “You’ll never get out of the office!” Bruce Ryan, the manager placed in charge of the Entertainment Packs, later summarized the target demographic as “loosely supervised businesspeople.”

The centerpiece of the first Entertainment Pack was a passable version of Tetris, created under license from Spectrum Holobyte, who owned the computer rights to the game. Wes Cherry, still working out of his dorm room, provided a clone of another older puzzle game called Pipe Dream to be the second Entertainment Pack’s standard bearer; he was even compensated this time, at least modestly. As these examples illustrate, the Entertainment Packs weren’t conceptually ambitious in the least, being largely content to provide workmanlike copies of established designs from both the analog and digital realms. Among the other games included were Solitaire variants other than Klondike, a clone of the Activision tile-matching hit Shanghai, a 3D Tic-tac-toe game, a golf game (for the ultimate clichéd business-executive experience), and even a version of John Horton Conway’s venerable study of cellular life cycles, better known as the game of Life. (One does have to wonder what bored office workers made of that.)

Established journals of record like Computer Gaming World barely noticed the Entertainment Packs, but they sold more than half a million copies in two years, equaling or besting the numbers of the biggest hardcore hits of the era, such as the Wing Commander series. Yet even that impressive number rather understates the popularity of Microsoft’s time wasters. Given that they had no copy protection, and given that they would run on any computer capable of running Windows, the Entertainment Packs were by all reports pirated at a mind-boggling rate, passed around offices like cakes baked for the Christmas potluck.

For all their success, though, nothing on any of the Entertainment Packs came close to rivaling Wes Cherry’s original Solitaire game in terms of sheer number of person-hours played. The key factor here was that the Entertainment Packs were add-on products; getting access to these games required motivation and effort from the would-be player, along with — at least in the case of the stereotypical coffee-break player from Microsoft’s own promotional literature — an office environment easygoing enough that one could carry in software and install it on one’s work computer. Solitaire, on the other hand, came already included with every fresh Windows installation, so long as an office’s system administrators weren’t savvy and heartless enough to seek it out and delete it. The archetypal low-effort game, its popularity was enabled by the fact that it also took no effort whatsoever to gain access to it. You just sort of stumbled over it while trying to figure out this new Windows thing that the office geek had just installed on your faithful old computer, or when you saw your neighbor in the next cubicle playing and asked what the heck she was doing. Five minutes later, it had its hooks in you.

It was therefore significant when Microsoft added a new game — or rather an old one — to 1992’s Windows 3.1. Minesweeper had actually debuted as part of the first Entertainment Pack, where it had become a favorite of quite a number of players. Among them was none other than Bill Gates himself, who became so addicted that he finally deleted the game from his computer — only to start getting his fix on his colleagues’ machines. (This creates all sorts of interesting fuel for the imagination. How do you handle it when your boss, who also happens to be the richest man in the world, is hogging your computer to play Minesweeper?) Perhaps due to the CEO’s patronage, Minesweeper became part of Windows’s standard equipment in 1992, replacing the unloved Reversi.

Unlike Solitaire and most of the Entertainment Pack games, Minesweeper was an original design, written by staff programmers Robert Donner and Curt Johnson in their spare time. That said, it does owe something to the old board game Battleship, to very early computer games like Hunt the Wumpus, and in particular to a 1985 computer game called Relentless Logic. You click on squares in a grid to uncover their contents, which can be one of three things: nothing at all, indicating that neither this square nor any of its adjacent squares contain mines; a number, indicating that this square is clear but said number of its adjacent squares do contain mines; or — unlucky you! — an actual mine, which kills you, ending the game. Like Solitaire, Minesweeper straddles the line — if such a line exists — between game and puzzle, and it isn’t a terribly fair take on either: while the program does protect you to the extent that the first square you click will never contain a mine, it’s possible to get into a situation through no fault of your own where you can do nothing but play the odds on your next click. But, unlike Solitaire, Minesweeper does have more of the trappings of a conventional videogame, including a timer which encourages you to play quickly to achieve the maximum score.
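
Those mechanics are simple enough to sketch in a few lines of code. What follows is a minimal, hypothetical Python illustration of the rules just described, not a reconstruction of Donner and Johnson’s actual program; the function names are my own invention, and the sketch omits the original’s first-click protection, which quietly relocates a mine if your opening click would otherwise land on one.

```python
def adjacent_mines(mines, row, col):
    """Count mines in the (up to eight) squares surrounding (row, col).

    `mines` is a grid of booleans: True where a mine lurks.
    """
    rows, cols = len(mines), len(mines[0])
    count = 0
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if dr == dc == 0:
                continue  # skip the square itself
            r, c = row + dr, col + dc
            if 0 <= r < rows and 0 <= c < cols and mines[r][c]:
                count += 1
    return count


def reveal(mines, revealed, row, col):
    """Reveal a square, returning False if it held a mine (game over).

    A square whose count is zero automatically reveals its neighbors,
    flooding outward until it reaches a border of numbered squares --
    the satisfying chain reaction familiar from the Windows original.
    """
    if mines[row][col]:
        return False
    stack = [(row, col)]
    while stack:
        r, c = stack.pop()
        if (r, c) in revealed:
            continue
        revealed.add((r, c))
        if adjacent_mines(mines, r, c) == 0:
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    rr, cc = r + dr, c + dc
                    if (0 <= rr < len(mines) and 0 <= cc < len(mines[0])
                            and not mines[rr][cc]
                            and (rr, cc) not in revealed):
                        stack.append((rr, cc))
    return True
```

On a three-by-three board with a single mine in one corner, clicking the opposite corner floods open all eight safe squares at once, since the zero squares chain outward until they hit the numbered border around the mine.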

Doubtless because of those more overt videogame trappings, Minesweeper never became quite the office fixture that Solitaire did. Those who did get sucked in by it, however, found it even more addictive, perhaps not least because it does demand a somewhat higher level of engagement. It too became an iconic part of life with Microsoft Windows, and must rank high on any list of most-played computer games of all time, if the data only existed to compile such a thing. After all, it did enjoy one major advantage over even Solitaire for office workers with uptight bosses: it ran in a much smaller window, and thus stood out far less on a crowded screen when peering eyes glanced into one’s cubicle.

Microsoft included a third game with Windows for Workgroups 3.1, a variant intended for a networked office environment. True to that theme, Hearts was a version of the evergreen card game which could be played against computer opponents, but which was most entertaining when played together by up to four real people, all on separate computers. Its popularity was somewhat limited by the fact that it came only with Windows for Workgroups, but, again, that adjective is relative. By any normal computer-gaming standard, Hearts was hugely popular indeed for quite some years, serving for many people as their introduction to the very concept of online gaming — a concept destined to remake much of the landscape of computer gaming in general in years to come. Certainly I can remember many a spirited Hearts tournament at my workplaces during the 1990s. The human, competitive element always made Hearts far more appealing to me than the other games I’ve discussed in this article.

But whatever your favorite happened to be, the games of Windows became a vital part of a process I’ve been documenting in fits and starts over the last year or two of writing this history: an expansion of the demographics that were playing games, accomplished not by making parents and office workers suddenly fall in love with the massive, time-consuming science-fiction or fantasy epics upon which most of the traditional computer-game industry remained fixated, but rather by meeting them where they lived. Instead of five-course meals, Microsoft provided ludic snacks suited to busy lives and limited attention spans. None of the games I’ve written about here are examples of genius game design in the abstract; their genius, to whatever extent it exists, is confined to worming their way into the psyche in a way that can turn them into compulsions. Yet, simply by being a part of the software that just about everybody, with the exception of a few Macintosh stalwarts, had on their computers in the 1990s, they got hundreds of millions of people playing computer games for the first time. The mainstream Ludic Revolution, encompassing the gamification of major swaths of daily life, began in earnest on Microsoft Windows.

(Sources: the book A Casual Revolution: Reinventing Video Games and Their Players by Jesper Juul; Byte of October 1977; Computer Gaming World of September 1992; Washington Post of March 9 1994; New York Times of February 10 2006; online articles at Technologizer, The Verge, B3TA, Reddit, Game Set Watch, Tech Radar, Business Insider, and Danny Glasser’s personal blog.)


  1. The game got a complete rewrite for Windows Vista in 2006. Presumably any traces of Wes Cherry’s original code that might have been left were excised at that time. Beginning with Windows 8 in 2012, a standalone Klondike Solitaire game was no longer included as a standard part of every Windows installation — a break with more than twenty years of tradition. Perhaps due to the ensuing public outcry, the advertising-supported Microsoft Solitaire Collection did become a component of Windows 10 upon the latter’s release in 2015. 
