Spycraft: The Great Game, Part 1 (or, Parallel Spies)

Police recover William Colby’s body on the coast of Maryland, May 6, 1996.

The last people known to have seen William Colby alive are a cottage caretaker and his sister. They bumped into the former head of the CIA early on the evening of April 27, 1996, watering the willow trees around his vacation home on Neale Sound in Maryland, about 60 miles south of Washington, D.C. The trio chatted together for a few minutes about the fine weather and about the repairs Colby had spent the day doing to his sailboat, which was moored in the marina on Cobb Island, just across the sound. Then the caretaker and his sister went on their way. Everything seemed perfectly normal to them.

The next morning, a local handyman, his wife, and their two children out on the water in their motorboat spotted a bright green canoe washed up against a spit of land that extended from the Maryland shore. The canoe appeared to be abandoned. Moving in to investigate, they found that it was full of sand. This was odd, thought the handyman; he had sailed past this same place the day before without seeing the canoe, and yet so much sand could hardly have collected in it naturally over the course of a single night. It was almost as if someone had deliberately tried to sink the canoe. Oh, well; finders keepers. It really was a nice little boat. He and his family spent several hours shoveling out the sand, then towed the canoe away with them.

In the meantime, Colby’s next-door neighbor was surprised not to see him out and about. The farthest thing from a layabout, the wiry 76-year-old was usually up early, puttering about with something or other around his cottage or out on the sound. Yet now he was nowhere to be seen outside and didn’t answer his door, even though his car was still in the driveway and the neighbor thought she could hear a radio playing inside the little house. Peeking around back, she saw that Colby’s green canoe was gone. At first, she thought the mystery was solved. But as the day wore on and he failed to return, she grew more and more concerned. At 7:00 that evening, she called the police.

When they arrived, the police found that both doors to the cottage were unlocked. The radio was indeed turned on, as was Colby’s computer. Even weirder, a half-eaten meal lay in the sink, surrounded by unwashed dishes and half a glass of white wine. It wasn’t at all like the man not to clean up after himself. And his wallet and keys were also lying there on the table. Why on earth would he go out paddling without them?

Inquiries among the locals soon turned up Colby’s canoe and the story of its discovery. Clearly something was very wrong here. The police ordered a search. Two helicopters, twelve divers, and 100 volunteers in boats pulling drag-lines behind them scoured the area, while CIA agents also arrived to assist the investigation into the disappearance of one of their own; their presence was nothing to be alarmed at, they assured everyone, just standard procedure. Despite the extent of the search effort, it wasn’t until the morning of May 6, nine days after he was last seen, that William Colby’s body was found washed up on the shore, just 130 feet from where the handyman had found his canoe, but on the other side of the same spit of land. It seemed that Colby must have gone canoeing on the sound, then fallen overboard and drowned. He was 76 years old, after all.

But the handyman who had found the canoe, who knew these waters and their currents as well as anyone, didn’t buy this. He was sure that the body could not have gotten so separated from the canoe as to wind up on the opposite side of the spit. And why had the body taken so long to wash up on shore? Someone must have gone out and planted it there later on, he thought. Knowing Colby’s background, and having seen enough spy movies to know what happened to inconvenient witnesses in cases like this one, he and his family left town and went into hiding.

The coroner noticed other oddities. Normally a body that has been in the water a week or more is an ugly, bloated sight. But Colby’s was bizarrely well-preserved, almost as if it had barely spent any time in the water at all. And how could the divers and boaters have missed it for so long, so close to shore as it was?

Nonetheless, the coroner concluded that Colby had probably suffered a “cardiovascular incident” while out in his canoe, fallen into the water, and drowned. This despite the fact that he had had no known heart problems, and was generally in the sort of physical shape that would have made him the envy of many a man 30 years younger than he was. Nor could the coroner explain why he had chosen to go canoeing long after dark, something he was most definitely not wont to do. (It had been dusk already when the caretaker and his sister said goodbye to him, and he had presumably sat down to his dinner after that.) Why had he gone out in such a rush, leaving his dinner half-eaten and his wine half-drunk, leaving his radio and computer still turned on, leaving his keys and wallet lying there on the table? It just didn’t add up in the eyes of the locals and those who had known Colby best.

But that was that. Case closed. The people who lived around the sound couldn’t help but think about the CIA agents lurking around the police station and the morgue, and wonder at everyone’s sudden eagerness to put a bow on the case and be done with it…


Unusually for a septuagenarian retired agent of the security state, William Colby had also been a game developer, after a fashion at least. In fact, at the time of his death a major game from a major publisher that bore his name very prominently right on the front of the box had just reached store shelves. This article and the next will partly be the story of the making of that game. But they will also be the story of William Colby himself, and of another character who was surprisingly similar to him in many ways despite being his sworn enemy for 55 years — an enemy turned friend who consulted along with him on the game and appeared onscreen in it alongside him. Then, too, they will be an inquiry into some of the important questions the game raises but cannot possibly begin to answer.


Sierra’s Police Quest: Open Season, created with the help of controversial former Los Angeles police chief Daryl Gates, was one of the few finished products to emerge from a short-lived vision of games as up-to-the-minute, ripped-from-the-headlines affairs. Spycraft: The Great Game was another.

Activision’s Spycraft: The Great Game is the product of a very specific era of computer gaming, when “multimedia” and “interactive movies” were among the buzzwords of the zeitgeist. Most of us who are interested in gaming history today are well aware of the set of technical and aesthetic approaches these terms imply: namely, games built from snippets of captured digitized footage of real actors, with interactivity woven as best the creators can manage between these dauntingly large chunks of static content.

There was a certain ideology that sometimes sprang up in connection with this inclusion of real people in games, a belief that it would allow games to become relevant to the broader culture in a way they never had before, tackling stories, ideas, and controversies that ordinary folks were talking about around their kitchen tables. At the margins, gaming could almost become another form of journalism. Ken Williams, the founder and president of Sierra On-Line, was the most prominent public advocate for this point of view, as exemplified by his decision to make a game with Daryl F. Gates, the chief of police for Los Angeles during the riots that convulsed that city in the spring of 1992. Williams, writing during the summer of 1993, just as the Gates game was being released:

I want to find the top cop, lawyer, airline pilot, fireman, race-car driver, politician, military hero, schoolteacher, white-water rafter, mountain climber, etc., and have them work with us on a simulation of their world. Chief Gates gives us the cop game. We are working with Emerson Fittipaldi to simulate racing, and expect to announce soon that Vincent Bugliosi, the lawyer who locked up Charles Manson, will be working with us to do a courtroom simulation. My goal is that products in the Reality Role-Playing series will be viewed as serious simulations of real-world events, not games. If we do our jobs right, this will be the closest most of us will ever get to seeing the world through these people’s eyes.

It sounded good in theory, but would never get all that far in practice, for a whole host of reasons: a lack of intellectual bandwidth and sufficient diversity of background in the games industry to examine complex social questions in an appropriately multi-faceted way (the jingoistic Gates game is a prime case in point here); a lack of good ideas for turning such abstract themes into rewarding forms of interactivity, especially when forced to work with the canned video snippets that publishers like Sierra deemed an essential part of the overall vision; the expense of the games themselves, the expense of the computers needed to run them, and the technical challenges involved in getting them running, which in combination created a huge barrier to entry for newcomers from outside the traditional gamer demographics; and, last but not least, the fact that those existing gamers who did meet all the prerequisites were generally perfectly happy with more blatantly escapist entertainments, thank you very much. Tellingly, none of the game ideas Ken Williams mentions above ever got made. And I must admit that this failure does not strike me as any great loss for world culture.

That said, Williams, being the head of one of the two biggest American game publishers, had a lot of influence on the smaller ones when he prognosticated on the future of the industry. Among the latter group was Activision, a toppled giant which had been rescued from the dustbin of bankruptcy in 1991 by a young wheeler-dealer named Bobby Kotick. His version of the company got fully back onto its feet the same year that Williams wrote the words above, thanks largely to Return to Zork, a cutting-edge multimedia evocation of the Infocom text adventures of yore, released at the perfect time to capitalize on a generation of gamers’ nostalgia for those bygone days of text and parsers (whilst not actually asking them to read much or to type out their commands, of course).

With that success under their belts, Kotick and his cronies thought about what to do next. Adventure games were hot — Myst, the bestselling adventure of all time, was released at the end of 1993 — and Ken Williams’s ideas about famous-expert-driven “Reality Role-Playing” were in the air. What might they do with that? And whom could they get to help them do it?

They hit upon espionage, a theme that, in contrast to many of those outlined by Williams, seemed to promise a nice balance of ripped-from-the-headlines relevance with interesting gameplay potential. Then, when they went looking for the requisite famous experts, they hit the mother lode with William Colby, the head of the CIA from September of 1973 to January of 1976, and Oleg Kalugin, who had become the youngest general in the history of the First Chief Directorate of the Soviet Committee for State Security, better known as the KGB, in 1974.

I’ll return to Spycraft itself in due course. But right now, I’d like to examine the lives of these two men, which parallel one another in some perhaps enlightening ways. Rest assured that in doing so I’m only following the lead of Activision’s marketers; they certainly wanted the public to focus first and foremost on the involvement of Colby and Kalugin in their game.


William Colby (center), looking every inch the dashing war hero in Norway just after the end of World War II.

William Colby was born in St. Paul, Minnesota on January 4, 1920. He was the only child of Elbridge Colby, a former soldier and current university professor who would soon rejoin the army as an officer and spend the next 40 years in the service. His family was deeply Catholic — his father thanks to a spiritual awakening and conversion while a student at university, his mother thanks to long family tradition. The son too absorbed the ethos of a stern but loving God and the necessity of serving Him in ways both heavenly and worldly.

The little family bounced around from place to place, as military families generally do. They wound up in China for three years starting in 1929, where young Bill learned a smattering of Chinese and was exposed for the first time to the often compromised ethics of real-world politics, in this case in the form of the United States’s support for the brutal dictatorship of Chiang Kai-shek. Colby’s biographer Randall Bennett Woods pronounces his time in China “one of the formative influences of his life.” It was, one might say, a sort of preparation for the many ugly but necessary alliances — necessary as Colby would see them, anyway — of the Cold War.

At the age of seventeen, Colby applied to West Point, but was rejected because of poor eyesight. He settled instead for Princeton, a university whose faculty included Albert Einstein among many other prominent thinkers. Colby spent the summer of 1939 holidaying in France, returning home just after the fateful declarations of war in early September, never imagining that the idyllic environs in which he had bicycled and picnicked and practiced his French on the local girls would be occupied by the Nazis well before another year had passed. Back at Princeton, he made the subject of his senior thesis the ways in which France’s weakness had allowed the Nazi threat on its doorstep to grow unchecked. This too was a lesson that would dominate his worldview throughout the decades to come. After graduating, Colby received his officer’s commission in the United States Army, under the looming shadow of a world war that seemed bound to engulf his own country sooner or later.

When war did come on December 7, 1941, he was working as an artillery instructor at Fort Sill in Oklahoma. To his immense frustration, the Army thought he was doing such a good job in that role that it was inclined to leave him there. “I was afraid the war would be over before I got a chance to fight,” he writes in his memoir. He therefore leaped at the opportunity when he saw an advertisement on a bulletin board for volunteers to become parachutists with the 82nd Airborne. He tried to pass the entrance physical by memorizing the eye chart. The doctor wasn’t fooled, but let him in anyway: “I guess your eyesight is good enough for you to see the ground.”

Unfortunately, he broke his ankle in a training jump, and was forced to watch, crestfallen, as his unit shipped out to Europe without him. Then opportunity came calling again, in a chance to join the new Office of Strategic Services (OSS), the forerunner of the CIA. Just as the CIA would later on, the OSS had two primary missions: foreign intelligence gathering and active but covert interference. Colby was to be dropped behind enemy lines, whence he would radio back reports of enemy troop movements and organize resistance among the local population. It would be, needless to say, an astonishingly dangerous undertaking. But that was the way Colby wanted it.

William Colby finally left for Britain in December of 1943, aboard the British luxury liner Queen Elizabeth, now refitted to serve as a troop transport. It was in a London bookstore that he first encountered another formative influence, the book Seven Pillars of Wisdom by T.E. Lawrence — the legendary Lawrence of Arabia, who had convinced the peoples of the Middle East to rise up against their Turkish overlords during the last world war. Lawrence’s book was, Colby would later say, an invaluable example of “an outsider operat[ing] within the political framework of a foreign people.” It promptly joined the Catholic Bible as one of the two texts Colby carried with him everywhere he went.

As it happened, he had plenty of time for reading: the weeks and then months passed in Britain, and still there came no orders to go into action. There was some talk of using Colby and his fellow American commandos to sow chaos during the run-up to D-Day, but this role was given to British units in the end. Instead Colby watched from the sidelines, seething, as the liberation of France began. Then, out of the blue, action orders came at last. On the night of August 14, 1944, Colby and two exiled French soldiers jumped out of a B-24 bomber flying over central France.

The drop was botched; the men landed fifteen miles away from the intended target, finding themselves smack dab in the middle of a French village instead of out in the woods. Luckily, there were no Germans about, and the villagers had no desire to betray them. There followed a hectic, doubtless nerve-wracking month, during which Colby and his friends made contact with the local resistance forces and sent back to the advancing Allied armies valuable information about German troop movements and dispositions. Once friendly armies reached their position, the commandos made their way back to the recently liberated Paris, thence to London. It had been a highly successful mission, with more than enough danger and derring-do to suffice for one lifetime in the eyes of most people. But for Colby it all felt a bit anticlimactic; he had never even discharged his weapon at the enemy. Knowing that his spoken German wasn’t good enough to carry out another such mission behind the rapidly advancing Western European front, Colby requested a transfer to China.

He got another offer instead. Being an accomplished skier, he was asked to lead 35 commandos into the subarctic region of occupied Norway, to interdict the German supply lines there. Naturally, he agreed.

The parachute drop that took place on the night of March 24, 1945, turned into another botched job. Only fifteen of the 35 commandos actually arrived; the other planes strayed far off course in the dark and foggy night, accidentally dropping their passengers over neutral Sweden, or giving up and not dropping them at all. But Colby was among the lucky (?) fifteen who made it to their intended destination. Living off the frigid land, he and his men set about dynamiting railroad tracks and tunnels. This time, he got to do plenty of shooting, as his actions frequently brought him face to face with the Wehrmacht.

On the morning of May 7, word came through on the radio that Adolf Hitler was dead and his government had capitulated; the war in Europe was over. Colby now set about accepting the surrender of the same German occupiers he had recently been harassing. While the operation he had led was perhaps of doubtful necessity in the big picture of a war that Germany had already been well along the path to losing, no one could deny that he had demonstrated enormous bravery and capability. He was awarded the Silver Star.

Gung ho as ever, Colby proposed to his superiors upon returning to London that he lead a similar operation into Francisco Franco’s Spain, to precipitate the downfall of that last bastion of fascism in Europe. Having been refused this request, he returned to the United States, still seeming a bit disappointed that it had all ended so quickly. Here he asked for and was granted a discharge from the Army, asked for and was granted the hand in marriage of his university sweetheart Barbara Heinzen, and asked for and was granted a scholarship to law school. He wrote on his application that he hoped to become a lawyer in the cause of organized labor. (Far from the fire-breathing right-wing extremist some of his later critics would make him out to be, Colby would vote Democrat throughout his life, maintaining a center-left orientation when it came to domestic politics at least.)


Oleg Kalugin at age seventeen, a true believer in Joseph Stalin and the Soviet Communist Party.

While the war hero William Colby was seemingly settling into a more staid time of life, another boy was growing up in the heart of the nation that Colby and most other Americans would soon come to regard as their latest great enemy. Born on September 6, 1934, in Leningrad (the once and future Saint Petersburg), Oleg Kalugin was, like Colby, an only child of a couple with an ethic of service, the son of a secret-police agent and a former factory worker, both of whose loyalty to communism was unimpeachable; the boy’s grandmother caused much shouting and hand-wringing in the family when she spirited him away to have him baptized in a furtive Orthodox ceremony in a dark basement. That piece of deviancy notwithstanding, little Oleg was raised to see Joseph Stalin as his god on earth, the one and only savior of his people.

On June 22, 1941, he was “hunting maybugs with a pretty girl,” as he writes, when he saw a formation of airplanes roar overhead and drop a load of bombs not far away. The war had come to his country, six months before it would reach that of William Colby. With the German armies nearing Leningrad, he and his mother fled to the Siberian city of Omsk while his father stayed behind to fight. They returned to a devastated hometown in the spring of 1944. Oleg’s father had survived the terrible siege, but the boy had lost all of his grandparents — including that gentle soul who had caused him to be baptized — along with four uncles to either starvation or enemy bullets.

Kalugin remained a true believer after the Great Patriotic War was over, joining the Young Communist League as soon as he was eligible at the age of fourteen. At seventeen, he decided to join the KGB; it “seemed like the logical place for a person with my academic abilities, language skills, and fervent desire to fight class enemies, capitalist parasites, and social injustice.” Surprisingly, his father, who had seen rather too much of what Soviet-style class struggle really meant over the last couple of decades, tried to dissuade him. But the boy’s mind was made up. He entered Leningrad’s Institute of Foreign Languages, a shallow front for the training of future foreign agents, in 1952.

When Stalin died in March of the following year, the young zealot wrote in his diary that “Stalin isn’t dead. He cannot die. His physical death is just a formality, one that needn’t deprive people of their faith in the future. The fact that Stalin is still alive will be proven by our country’s every new success, both domestically and internationally.” He was therefore shocked when Stalin’s successor, Nikita Khrushchev, delivered a speech that roundly condemned the country’s erstwhile savior as a megalomaniac and a mass-murderer who had cynically corrupted the ideology of Marx and Lenin to serve his own selfish ends. It was Kalugin’s initiation into the reality that the state he so earnestly served was less than incorruptible and infallible.

Nevertheless, he kept the faith, moving to Moscow for advanced training in 1956. In 1958, he was selected on the basis of his aptitude for English to go to the United States as a graduate student. “Just lay the foundation for future work,” his superiors told him. “Buy yourself good maps. Improve your English. Find out about their way of life. Communicate with people and make as many friends as possible.” Kalugin’s joyous reaction to this assignment reflects the ambivalence with which young Soviets like him viewed the United States. It was, they fervently believed, the epicenter of the imperialism, capitalism, racism, and classism they hated, and must ultimately be destroyed for that reason. Yet it was also the land of jazz and rock and roll, of fast cars and beautiful women, with a standard of living so different from anything they had ever known that it might as well have been Shangri-La. “I daydreamed constantly about America,” Kalugin admits. “The skyscrapers of New York and Chicago, the cowboys of the West…” He couldn’t believe he was being sent there, and on a sort of paid vacation at that, with few concrete instructions other than to experience as much of the American way of life as he could. Even his sadness about leaving behind the nice Russian girl he had recently married couldn’t overwhelm his giddy excitement.


William Colby in Rome circa 1955, with his son Carl and daughter Catherine.

As Oleg Kalugin prepared to leave for the United States, William Colby was about to return to that same country, where he hadn’t lived for seven years. He had become a lawyer as planned and joined the National Labor Relations Board to forward the cause of organized labor, but his tenure there had proved brief. In 1950, he was convinced to join the new CIA, the counterweight to the KGB on the world stage. He loved his new “band of brothers,” filled as he found it to be with “adventuresome spirits who believed fervently that the communist threat had to be met aggressively, innovatively, and courageously.”

In April of 1951, he took his family with him on his first foreign assignment, under the cover identity of a mid-level diplomatic liaison in Stockholm, Sweden. His real purpose was to build and run an intelligence operation there. (All embassies were nests of espionage in those days, as they still are today.) “The perfect operator in such operations is the traditional gray man, so inconspicuous that he can never catch the waiter’s eye in a restaurant,” Colby wrote. He was — or could become — just such a man, belying his dashing commando past. Small wonder that he proved very adept at his job. The type of spying that William Colby did was, like all real-world espionage, more John le Carré than Ian Fleming, an incrementalist milieu inhabited by just such quiet gray men as him. Dead-letter drops, secret codes, envelopes stuffed with cash, and the subtle art of recruitment without actually using that word — the vast majority of his intelligence contacts would have blanched at the label of “spy,” having all manner of other ways of defining what they did to themselves and others — were now his daily stock in trade.

In the summer of 1953, Colby and his family left Stockholm for Rome. Still riven by discontent and poverty that the Marshall Plan had never quite been able to quell, with a large and popular communist party that promised the people that it alone could make things better, Italy was considered by both the United States and the Soviet Union to be the European country most in danger of changing sides in the Cold War through the ballot box, making this assignment an unusually crucial one. Once again, Colby performed magnificently. Through means fair and occasionally slightly foul, he propped up Italy’s Christian Democratic Party, the one most friendly to American interests. His wife and five young children would remember these years as their happiest time together, with the Colosseum visible outside their snug little apartment’s windows, with the trappings of their Catholic faith all around them. The sons became altar boys, learning to say Mass in flawless Latin, and Barbara amazed guests with her voluble Italian, which was even better than her husband’s.

She and her children would gladly have stayed in Rome forever, but after five years there her husband was growing restless. The communist threat in Italy had largely dissipated by now, thanks to an improving economy that made free markets seem more of a promise than a threat, and Colby was itching to continue the shadowy struggle elsewhere. In 1958, he was recalled to the States to begin preparing for a new, more exotic assignment: to the tortured Southeast Asian country of Vietnam, which had recently won its independence from France, only to become a battleground between the Western-friendly government of Ngo Dinh Diem and a communist insurgency led by Ho Chi Minh.


Oleg Kalugin (center) at Columbia University, 1958.

While Colby was hitting the books at CIA headquarters in Langley, Virginia, in preparation for his latest assignment, Kalugin was doing the same as a philology student on a Fulbright scholarship to New York City’s Columbia University. (Fully half of the eighteen exchange students who traveled with him were also spies-in-training.) A natural charmer, he had no trouble ingratiating himself with the native residents of the Big Apple as he had been ordered to do.

He went home when his one-year scholarship expired, but returned to New York City one year after that, to work as a journalist for Radio Moscow. Now, however, his superiors expected a bit more from him. Despite the wife and young daughter he had left behind, he seduced a string of women who he believed could become valuable informants — so much so that American counter-espionage agents, who were highly suspicious of him, labeled him a “womanizer” and chalked it up as his most obvious weakness, should they ever be in need of one to exploit. (For his part, Kalugin writes that “I always told my officers, male and female, ‘Don’t be afraid of sex.’ If they found themselves in a situation where making love with a foreigner could help our work, I advised them to hop into bed.”)

Kalugin’s unlikely career as Radio Moscow’s foreign correspondent in New York City lasted almost four years in all. He covered — with a pro-Soviet spin, naturally — the election of President John F. Kennedy, the trauma of the Bay of Pigs Invasion and the Cuban Missile Crisis, and the assassination of Kennedy by a man with Soviet ties. He was finally called home in early 1964, his superiors having decided he was now attracting too much scrutiny from the Americans. He found returning to the dingy streets of Moscow from the Technicolor excitement of New York City to be rather dispiriting. “Worshiping communism from afar was one thing. Living in it was another thing altogether,” he writes wryly, echoing sentiments shared by many an idealistic Western defector for the cause. Shortly after his return, the reform-minded Nikita Khrushchev was ousted in favor of Leonid Brezhnev, a man who looked as tired as the rest of the Soviet Union was beginning to feel. It was hard to remain committed to the communist cause in such an environment as this, but Kalugin continued to do his best.


William Colby, looking rather incongruous in his typical shoe salesman’s outfit in a Vietnamese jungle.

William Colby might have been feeling similar sentiments somewhere behind that chiseled granite façade of his. For he was up to his eyebrows in the quagmire that was Vietnam, the place where all of the world’s idealism seemed to go to die.

When he had arrived in the South Vietnamese capital of Saigon in 1959, with his family in tow as usual, he had wanted to treat this job just as he had his previous foreign postings, to work quietly behind the scenes to support another basically friendly foreign government with a communist problem. But Southeast Asia was not Europe, as he learned to his regret — even if the Diem family were Catholic and talked among themselves in French. There were systems of hierarchy and patronage inside the leader’s palace that baffled Colby at every turn. Diem himself was aloof, isolated from the people he ruled, while Ho Chi Minh, who already controlled the northern half of the country completely and had designs on the rest of it, had enormous populist appeal. The type of espionage Colby had practiced in Sweden and Italy — all mimeographed documents and furtive meetings in the backs of anonymous cafés — would have been useless against such a guerilla insurgency even if it had been possible. Which it was not: the peasants fighting for and against the communists were mostly illiterate.

Colby’s thinking gradually evolved, to encompass the creation of a counter-insurgency force that could play the same game as the communists. His mission in the country became less an exercise in pure espionage and overt and covert influencing than one in paramilitary operations. He and his family left Vietnam for Langley in the summer of 1962, but the country was still to fill a huge portion of Colby’s time; he was leaving to become the head of all of the CIA’s Far Eastern operations, and there was no hotter spot in that hot spot of the world than Vietnam. Before departing, the entire Colby family had dinner with President Diem in his palace, whose continental cuisine, delicate furnishings, and manicured gardens could almost lead one to believe one was on the French Riviera rather than in a jungle in Southeast Asia. “We sat there with the president,” remembers Barbara. “There was really not much political talk. Yet there was a feeling that things were not going well in that country.”

Sixteen months later — in fact, just twenty days before President Kennedy was assassinated — Diem was murdered by the perpetrators of a military coup that had gone off with the tacit support of the Americans, who had grown tired of his ineffectual government and felt a change was needed. Colby was not involved in that decision, which came down directly from the Kennedy White House to its ambassador in the country. But, good soldier that he was, he accepted it after it had become a fait accompli. He even agreed to travel to Vietnam in the immediate aftermath, to meet with the Vietnamese generals who had perpetrated the coup and assure them that they had powerful friends in Washington. Did he realize in his Catholic heart of hearts that his nation had forever lost the moral high ground in Vietnam on the day of Diem’s murder? We cannot say.

The situation escalated quickly under the new President Lyndon Johnson, as more and more American troops were sent to fight a civil war on behalf of the South Vietnamese, a war which the latter didn’t seem overly inclined to fight for themselves. Colby hardly saw his family now, spending months at a stretch in the country. Lawrence of Arabia’s prescription for winning over a native population through ethical persuasion and cultural sensitivity was proving unexpectedly difficult to carry out in Vietnam, most of whose people seemed just to want the Americans to go away. It appeared that a stronger prescription was needed.

Determined to put down the Viet Cong — communist partisans in the south of the country who swarmed over the countryside, killing American soldiers and poisoning their relations with the locals — Colby introduced a “Phoenix Program” to eliminate them. It became without a doubt the biggest of all the moral stains on his career. The program’s rules of engagement were not pretty to begin with, allowing for the extra-judicial execution of anyone believed to be in the Viet Cong leadership in any case where arresting him was too “hard.” But it got entirely out of control in practice, as described by James S. Olson and Randy W. Roberts in their history of the war: “The South Vietnamese implemented the program aggressively, but it was soon laced with corruption and political infighting. Some South Vietnamese politicians identified political enemies as Viet Cong and sent Phoenix hit men after them. The pressure to identify Viet Cong led to a quota system that incorrectly labeled many innocent people the enemy.” Despite these self-evident problems, the Americans kept the program going for years, saying that its benefits were worth the collateral damage. Olson and Roberts estimate that at least 20,000 people lost their lives as a direct result of Colby’s Phoenix Program. A large proportion of them — possibly even a majority — were not really communist sympathizers at all.

In July of 1971, Colby was hauled before the House Committee on Government Operations by two prominent Phoenix critics, Ogden Reid and Pete McCloskey (both Republicans). It is difficult to absolve him of guilt for the program’s worst abuses on the basis of his circuitous, lawyerly answers to their straightforward questions.

Reid: Can you state categorically that Phoenix has never perpetrated the premeditated killing of a civilian in a noncombat situation?

Colby: No, I could not say that, but I do not think it happens often. Individual members of it, subordinate people in it, may have done it. But as a program, it is not designed to do that.

McCloskey: Did Phoenix personnel resort to torture?

Colby: There were incidents, and they were treated as an unjustifiable offense. If you want to get bad intelligence, you use bad interrogation methods. If you want to get good intelligence, you had better use good interrogation methods.


Oleg Kalugin (right) receives the Order of the Red Star from Bulgarian security minister Dimitri Stoyanov, thanks largely to his handling of John Walker. The bespectacled man standing between and behind the two is Yuri Andropov, then the head of the KGB, who would later become the fifth supreme leader of the Soviet Union.

During the second half of the 1960s, Oleg Kalugin spent far more time in the United States than did William Colby. He returned to the nation that had begun to feel like more of a home than his own in July of 1965. This time, however, he went to Washington, D.C., instead of New York City. His new cover was that of a press officer for the Soviet Foreign Ministry; his real job was that of a deputy director in the KGB’s Washington operation. He was to be a spy in the enemy’s city of secrets. “By all means, don’t treat it as a desk job,” he was told.

Kalugin took the advice to heart. He had long since developed a nose for those who could be persuaded to share their country’s deepest secrets with him, long since recognized that the willingness to do so usually stemmed from weakness rather than strength. Like a lion on the hunt, he had learned to spot the weakest prey — the nursers of grudges and harborers of regrets; the sexually, socially, or professionally frustrated — and isolate them from the pack of their peers for one-on-one persuasion. At one point, he came upon a secret CIA document that purported to explain the psychology of those who chose to spy for that yin to his own service’s yang. He found it to be so “uncannily accurate” a description of the people he himself recruited that he squirreled it away in his private files, and quoted from it in his memoir decades later.

Acts of betrayal, whether in the form of espionage or defection, are almost in every case committed by morally or psychologically unsteady people. Normal, psychologically stable people — connected with their country by close ethnic, national, cultural, social, and family ties — cannot take such a step. This simple principle is confirmed by our experience of Soviet defectors. All of them were single. In every case, they had a serious vice or weakness: alcoholism, deep depression, psychopathy of various types. These factors were in most cases decisive in making traitors out of them. It would only be a slight exaggeration to say that no [CIA] operative can consider himself an expert in Soviet affairs if he hasn’t had the horrible experience of holding a Soviet friend’s head over the sink as he poured out the contents of his stomach after a five-day drinking bout.

What follows from that is that our efforts must mostly be directed against weak, unsteady members of Soviet communities. Among normal people, we should pay special attention to the middle-aged. People that age are starting their descent from their psychological peak. They are no longer children, and they suddenly feel the acute realization that their life is passing, that their ambitions and youthful dreams have not come true in full or even in part. At this age comes the breaking point of a man’s career, when he faces the gloomy prospect of pending retirement and old age. The “stormy forties” are of great interest to an [intelligence] operative.

It’s great to be good, but it’s even better to be lucky. John Walker, the spy who made Kalugin’s career, shows the truth in this dictum. He was that rarest of all agents in the espionage trade: a walk-in. A Navy officer based in Norfolk, Virginia, he drove into Washington one day in late 1967 with a folder full of top-secret code ciphers on the seat of his car next to him, looked up the address of the Soviet embassy in the directory attached to a pay phone, strode through the front door, plunked his folder down on the front desk, and said matter-of-factly, “I want to see the security officer, or someone connected with intelligence. I’m a naval officer. I’d like to make some money, and I’ll give you some genuine stuff in return.” Walker was hastily handed a down payment, ushered out of the embassy, and told never under any circumstances to darken its doors again. He would be contacted in other ways if his information checked out.

Kalugin was fortunate enough to be ordered to vet the man. The picture he filled in was sordid, but it passed muster. Thirty years old when his career as a spy began, Walker had originally joined the Navy to escape being jailed for four burglaries he committed as a teenager. A born reprobate, he had once tried to convince his wife to become a prostitute in order to pay off the gambling debts he had racked up. Yet he could also be garrulous and charming, and had managed to thoroughly conceal his real self from his Navy superiors. A fitness report written in 1972, after he had already been selling his country’s secrets for almost five years, calls him “intensely loyal, taking great pride in himself and the naval service, fiercely supporting its principles and traditions. He possesses a fine sense of personal honor and integrity, coupled with a great sense of humor.” Although he was only a warrant officer in rank, he sat on the communications desk at Norfolk, handling radio traffic with submarines deployed all over the world. It was hard to imagine a more perfect posting for a spy. And this spy required no counseling, needed no one to pretend to be his friend, to talk him down from crises of conscience, or to justify himself to himself. Suffering from no delusions as to who and what he was, all he required was cold, hard cash. A loathsome human being, he was a spy handler’s dream.

Kalugin was Walker’s primary handler for two years, during which he raked in a wealth of almost unbelievably valuable information without ever meeting the man face to face. Walker was the sort of asset who turns up “once in a lifetime,” in the words of Kalugin himself. He became the most important of all the spies on the Kremlin’s payroll, even recruiting several of his family members and colleagues to join his ring. “K Mart has better security than the Navy,” he laughed. He would continue his work long after Kalugin’s time in Washington was through. Throughout the 1970s and into the 1980s, Navy personnel wondered at how the Soviets always seemed to know where their ships and submarines were and where their latest exercises were planned to take place. Not until 1985 was Walker finally arrested. In a bit of poetic justice, the person who turned him in to the FBI was his wife, whom he had been physically and sexually abusing for almost 30 years.

The luster which this monster shed on Kalugin led to the awarding of the prestigious Order of the Red Star, and then, in 1974, his promotion to the rank of KGB general while still just shy of his 40th birthday, making him the youngest such in the post-World War II history of the service. By that time, he was back in Moscow again, having been recalled in January of 1970, once again because it was becoming common knowledge among the Americans that his primary work in their country was that of a spy. He was too hot now to be given any more long-term foreign postings. Instead he worked out of KGB headquarters in Moscow, dealing with strategic questions and occasionally jetting off to far-flung trouble spots to be the service’s eyes and ears on the ground. “I can honestly say that I loved my work,” he writes in his memoir. “My job was always challenging, placing me at the heart of the Cold War competition between the Soviet Union and the United States.” As ideology faded, the struggle against imperialism had become more of an intellectual fascination — an intriguing game of chess — than a grand moral crusade.


William Colby testifies before Congress, 1975.

William Colby too was now back in his home country on a more permanent basis, having been promoted to executive director of the CIA — the third highest position on the agency’s totem pole — in July of 1971. Yet he was suffering through what must surely have been the most personally stressful period of his life since he had dodged Nazis as a young man behind enemy lines.

In April of 1973, his 23-year-old daughter Catherine died of anorexia. Her mental illness was complicated, as they always are, but many in the family believed it to have been aggravated by being the daughter of the architect of the Phoenix Program, a man who was in the eyes of much of her hippie generation Evil Incarnate. His marriage was now, in the opinion of his biographer Randall Bennett Woods, no more than a “shell.” Barbara blamed him not only for what he had done in Vietnam but for failing to be there with his family when his daughter needed him most, for forever skipping out on them with convenient excuses about duty and service on his lips.

Barely a month after Catherine’s death, Colby got a call from Alexander Haig, chief of staff in Richard Nixon’s White House: “The president wants you to take over as director of the CIA.” It ought to have been the apex of his professional life, but somehow it didn’t seem that way under current conditions. At the time, the slow-burning Watergate scandal was roiling the CIA almost more than the White House. Because all five of the men who had been arrested attempting to break into the Democratic National Committee’s headquarters the previous year had connections to the CIA, much of the press was convinced it had all been an agency plot. Meanwhile accusations about the Phoenix Program and other CIA activities, in Vietnam and elsewhere, were also flying thick and fast. The CIA seemed to many in Congress to be an agency out of control, ripe only for dismantling. And of course Colby was still processing the loss of his daughter amidst it all. It was a thankless promotion if ever there was one. Nevertheless, he accepted it.

Colby would later claim that he knew nothing of the CIA’s many truly dirty secrets before stepping into the top job. These were the ones that other insiders referred to as the “family jewels”: its many bungled attempts to assassinate Fidel Castro, before and after he became the leader of Cuba, as well as various other sovereign foreign leaders; the coups it had instigated against lawfully elected foreign governments; its experiments with mind control and psychedelic drugs on unwilling and unwitting human subjects; its unlawful wiretapping and surveillance of scores of Americans; its longstanding practice of opening mail passing between the United States and less-than-friendly nations. That Colby could have risen so high in the agency without knowing these secrets and many more seems dubious on the face of it, but it is just possible; the CIA was very compartmentalized, and Colby had the reputation of being a bit of a legal stickler, just the type who might raise awkward objections to such delicate necessities. “Colby never became a member of the CIA’s inner club of mandarins,” claims the agency’s historian Harold Ford. But whether he knew about the family jewels or not beforehand, he was stuck with them now.

Perhaps in the hope that he could make the agency’s persecutors go away if he threw them just a little red meat, Colby came clean about some of the dodgy surveillance programs. But that only whetted the public’s appetite for more revelations. For as the Watergate scandal gradually engulfed the White House and finally brought down the president, as it became clear that the United States had invested more than $120 billion and almost 60,000 young American lives into South Vietnam only to see it go communist anyway, the public’s attitude toward institutions like the CIA was not positive; a 1975 poll placed the CIA’s approval rating at 14 percent. President Gerald Ford, the disgraced Nixon’s unelected replacement, was weak and unable to protect the agency. Indeed, a commission chaired by none other than Vice President Nelson Rockefeller laid bare many of the family jewels, holding back only the most egregious incidents of meddling in foreign governments. But even those began to come out in time. Both major political parties had their sights set on future elections, and thus had a strong motivation to blame a rogue CIA for any and all abuses by previous administrations. (Attorney General Robert F. Kennedy, for example, had personally ordered and supervised some of the attempts on Fidel Castro’s life during the early 1960s.)

It was a no-win situation for William Colby. He was called up to testify in Congress again and again, to answer questions in the mold of “When did you stop beating your wife?”, as he put it to colleagues afterward. Everybody seemed to hate him: right-wing hardliners because they thought he was giving away the store (“It is an act of insanity and national humiliation,” said Secretary of State Henry Kissinger, “to have a law prohibiting the president from ordering assassinations”), left-wingers and centrists because they were sure he was hiding everything he could get away with and confessing only to that which was doomed to come out anyway — which was probably true. Colby was preternaturally cool and unflappable at every single hearing, which somehow only made everyone dislike him that much more. Some of his few remaining friends wanted to say that his relative transparency was a product of Catholic guilt — over the Phoenix Program, over the death of his daughter, perchance over all of the CIA’s many sins — but it was hard to square that notion with the rigidly composed, lawyerly presence that spoke in clipped, minimalist phrases before the television cameras. He seemed more like a cold fish than a repentant soul.

On November 1, 1975 — exactly six months after Saigon had fallen, marking the humiliating final defeat of South Vietnam at the hands of the communists — William Colby was called into the White House by President Ford and fired. “There goes 25 years just like that,” he told Barbara when he came home in a rare display of bitterness. His replacement was George Herbert Walker Bush, an up-and-coming Republican politician who knew nothing about intelligence work. President Ford said such an outsider was the only viable choice, given the high crimes and misdemeanors with which all of the rank and file of the CIA were tarred. And who knows? Maybe he was right. Colby stayed on for three more months while his green replacement got up to speed, then left public service forever.


An Oleg Kalugin campaign poster from 1990, after he reinvented himself as a politician. “Let’s vote for Oleg Kalugin!” reads the caption.

Oleg Kalugin was about to suffer his own fall from grace. According to his account, his rising star flamed out when he ventured out on a limb to support a defector from the United States, one of his own first contacts as a spy handler, who was now accused of stealing secrets for the West. The alleged double agent was sent to a Siberian prison despite Kalugin’s advocacy. Suspected now of being a CIA mole himself, Kalugin was reassigned in January of 1980 to a dead-end job as deputy director of the KGB’s Leningrad branch, where he would be sure not to see too much valuable intelligence. You live by the sword, you die by the sword; duplicity begets suspicions of duplicity, such that spies always end up eating their own if they stay in the business long enough.

Again according to Kalugin himself, it was in Leningrad that his nagging doubts about the ethics and efficacy of the Soviet system — the same ones that had been whispering at the back of his mind since the early 1960s — rose to a roar which he could no longer ignore. “It was all an elaborately choreographed farce, and in my seven years in Leningrad I came to see that we had created not only the most extensive totalitarian state apparatus in history but also the most arcane,” he writes. “Indeed, the mind boggled that in the course of seven decades our communist leaders had managed to construct this absurd, stupendous, arcane ziggurat, this terrifyingly centralized machine, this religion that sought to control all aspects of life in our vast country.” We might justifiably wonder that it took him so long to realize this, and note with some cynicism that his decision to reject the system he had served all his life came only after that system had already rejected him. He even confesses that, when Leonid Brezhnev died in 1982 and was replaced by Yuri Andropov, a former head of the KGB who had always thought highly of Kalugin, he wasn’t above dreaming of a return to the heart of the action in the intelligence service. But it wasn’t to be. Andropov soon died, to be replaced by another tired old man named Konstantin Chernenko who died even more quickly, and then Mikhail Gorbachev came along to accidentally dismantle the Soviet Union in the name of saving it.

In January of 1987, Kalugin was given an even more dead-end job, as a security officer in the Academy of Sciences in Moscow. From here, he watched the extraordinary events of 1989, as country after country in the Soviet sphere rejected its communist government, until finally the Berlin Wall fell, taking the Iron Curtain down with it. Just like that, the Cold War was over, with the Soviet Union the undeniable loser. Kalugin must surely have regarded this development with mixed feelings, given what a loyal partisan he had once been for the losing side. Nevertheless, on February 26, 1990, he retired from the KGB. After picking up his severance check, he walked a few blocks to the Institute of History and Archives, where a group of democracy activists had set up shop. “I want to help the democratic movement,” he told them, in a matter-of-fact tone uncannily similar to that of John Walker in a Soviet embassy 22 years earlier. “I am sure that my knowledge and experience will be useful. You can use me in any capacity.”

And so Oleg Kalugin reinvented himself as an advocate for Russian democracy. A staunch supporter of Boris Yeltsin and his post-Soviet vision for Russia, he became an outspoken opponent of the KGB, which still harbored in its ranks many who wished to return the country to its old ways. He was elected to the Supreme Soviet in September of 1990, in the first wave of free and fair elections ever held in Russia. When some of his old KGB colleagues attempted a coup in August of 1991, he was out there manning the barricades for democracy. The coup was put down — just.


William Colby in his later years, enjoying his sailboat, one of his few sources of uncalculated joy.

William Colby too had to reinvent himself after the agency he served declared that it no longer needed him. He wrote a circumspect, slightly anodyne memoir about his career; its title of Honorable Men alone was enough to tell the world that it wasn’t the tell-all book from an angry spy spurned that some might have been hoping for. He consulted for the government on various issues for larger sums than he had ever earned as a regular federal employee, appeared from time to time as an expert commentator on television, and wrote occasional opinion pieces for the national press, most commonly about the ongoing dangers posed by nuclear weapons and the need for arms-control agreements with the Soviet Union.

In 1982, at the age of 62, this stiff-backed avatar of moral rectitude fell in love with a pretty, vivacious 37-year-old, a former American ambassador to Grenada named Sally Shelton. It struck those who knew him as almost a cliché of a mid-life crisis, of the sort that the intelligence services had been exploiting for decades — but then, clichés are clichés for a reason, aren’t they? “I thought Bill Colby had all the charisma of a shoe clerk,” said one family friend. “Sally is a very outgoing woman, even flamboyant. She found him a sex object, and with her he was.” The following year, Colby asked his wife Barbara for a divorce. She was taken aback, even if their marriage hadn’t been a particularly warm one in many years. “People like us don’t get a divorce!” she exclaimed — meaning, of course, upstanding Catholic couples of the Greatest Generation who were fast approaching their 40th wedding anniversary. But there it was. Whatever else was going on behind that granite façade, it seemed that Colby felt he still had some living to do.

None of Colby’s family attended the marriage ceremony, or had much to do with him thereafter. He lost not only his family but his faith: Sally Shelton had no truck with Catholicism, and after he married her he went to church only for weddings and funerals. Was the gain worth the loss? Only Colby knew the answer.


Old frenemies: Oleg Kalugin and William Colby flank Ken Berris, who directed the Spycraft video sequences.

Oleg Kalugin met William Colby for the first time in May of 1991, when both were attending the same seminar in Berlin — appropriately enough, on the subject of international terrorism, the threat destined to steal the attention of the CIA and the Russian FSB (the successor to the KGB) as the Cold War faded into history. The two men had dinner together, then agreed to be jointly interviewed on German television, a living symbol of bygones becoming bygones. “What do you think of Mr. Colby as a leading former figure in U.S. intelligence?” Kalugin was asked.

“Had I had a choice in my earlier life, I would have gladly worked under Mr. Colby,” he answered. The two became friends, meeting up whenever their paths happened to cross in the world.

And why shouldn’t they be friends? They had led similar lives in so many ways. Both were ambitious men who had justified their ambition as a call to service, then devoted their lives to it, swallowing any moral pangs they might have felt in the process, until the people they served had rejected them. In many ways, they had more in common with one another than with the wives and children they had barely seen for long stretches of their lives.

And how are we to judge these two odd, distant men, both so adept at the art of concealment as to seem hopelessly impenetrable? “I am not emotional,” Colby said to a reporter during his turbulent, controversy-plagued tenure as director of the CIA. “I admit it. Oh, don’t watch me like that. You’re looking for something underneath which isn’t there. It’s all here on the surface, believe me.”

Our first instinct might be to scoff at such a claim; surely everyone has an inner life, a tender core they dare reveal only to those they love best. But maybe we should take Colby at his word; maybe doing so helps to explain some things. As Colby and Kalugin spouted their high-minded ideals about duty and country, they forgot those closest to them, the ones who needed them most of all, apparently believing that they possessed some undefined special qualities of character or a special calling that exempted them from all that. Journalist Neil Sheehan once said of Colby that “he would have been perfect as a soldier of Christ in the Jesuit order.” There is something noble but also something horrible about such devotion to an abstract cause. One has to wonder whether it is a crutch, a compensation for some piece of a personality that is missing.

Certainly there was an ultimate venality, an amorality to these two men’s line of work, as captured in the subtitle of the computer game they came together to make: “The Great Game.” Was it all really just a game to them? It would seem so, at least at the end. How else could Kalugin blithely state that he would have “gladly” worked with Colby, forgetting the vast gulf of ideology that lay between them? Tragically, the ante in their great game was all too often human lives. Looking back on all they did, giving all due credit to their courage and capability, it seems clear to me that the world would have been better off without their meddling. The institutions they served were full of people like them, people who thought they knew best, who thought they were that much cleverer than the rest of the world and had a right to steer its course from the shadows. Alas, they weren’t clever enough to see how foolish and destructive their arrogance was.

“My father lived in a world of secrets,” says William’s eldest son Carl Colby. “Always watching, listening, his eye on the door. He was tougher, smarter, smoother, and could be crueler than anybody I ever knew. I’m not sure he ever loved anyone, and I never heard him say anything heartfelt.” Was William Colby made that way by the organization he served, or did he join the organization because he already was that way? It’s impossible to say. Yet we must be sure to keep these things in mind when we turn in earnest to the game on which Colby and Kalugin allowed their names to be stamped, and find out what it has to say about the ethical wages of being a spy.

(Sources: the books Legacy of Ashes: The History of the CIA by Tim Weiner, The Sword and the Shield: The Mitrokhin Archive and the Secret History of the KGB by Christopher Andrew and Vasili Mitrokhin, Lost Crusader: The Secret Wars of CIA Director William Colby by John Prados, Spymaster: My Thirty-Two Years in Intelligence and Espionage against the West by Oleg Kalugin, Where the Domino Fell: America and Vietnam, 1945-2010, sixth edition by James S. Olson and Randy Roberts, Shadow Warrior: William Egan Colby and the CIA by Randall B. Woods, Honorable Men: My Life in the CIA by William Colby and Peter Forbath, and Lost Victory: A Firsthand Account of America’s Sixteen-Year Involvement in Vietnam by William Colby and James McCargar; the documentary film The Man Nobody Knew: In Search of My Father, CIA Spymaster William Colby; Sierra On-Line’s newsletter InterAction of Summer 1993; Questbusters of February 1994. Online sources include “Who Murdered the CIA Chief?” by Zalin Grant at Pythia Press.)

 


Doing Windows, Part 11: The Internet Tidal Wave

On August 6, 1991, when Microsoft was still in the earliest planning stages of creating the operating system that would become known as Windows 95, an obscure British researcher named Tim Berners-Lee, working out of the Conseil Européen pour la Recherche Nucléaire (CERN) in Switzerland, put the world’s first publicly accessible website online. For years to come, these two projects would continue to evolve separately, blissfully unconcerned by if not unaware of one another’s existence. And indeed, it is difficult to imagine two computing projects with more opposite personalities. Mirroring its co-founder and CEO Bill Gates, Microsoft was intensely pragmatic and maniacally competitive. Tim Berners-Lee, on the other hand, was a classic academic, a theorist and idealist rather than a businessman. The computers on which he and his ilk built the early Web ran esoteric operating systems like NeXTSTEP and Unix, or at their most plebeian MacOS, not Microsoft’s mass-market workhorse Windows. Microsoft gave you tools for getting everyday things done, while the World Wide Web spent the first couple of years of its existence as little more than an airy proof of concept, to be evangelized by wide-eyed adherents who often appeared to have read one too many William Gibson novels. Forbes magazine was soon to anoint Bill Gates the world’s richest person, his reward for capturing almost half of the international software market; the nascent Web was nowhere to be found in the likes of Forbes.

Those critics who claim that Microsoft was never a visionary company — that it instead thrived by letting others innovate, then swooping in and taking over the markets thus opened — love to point to its history with the World Wide Web as Exhibit Number One. Despite having a role which presumably demanded that he stay familiar with all leading-edge developments in computing, Bill Gates by his own admission never even heard of the Web until April of 1993, twenty months after that first site went up. And he didn’t actually surf the Web for himself until another six months after that — perhaps not coincidentally, shortly after a Windows version of NCSA Mosaic, the user-friendly graphical browser that made the Web a welcoming place even for those whose souls didn’t burn with a passion for information theory, had finally been released.

Gates focused instead on a different model of online communication, one arguably more in keeping with his instincts than was the free and open Web. For almost a decade and a half by 1993, various companies had been offering proprietary dial-up services aimed at owners of home computers. These came complete with early incarnations of many of the staples of modern online life: email, chat lines, discussion forums, online shopping, online banking, online gaming, even online dating. They were different from the Web in that they were walled gardens that provided no access to anything that lay beyond the big mainframes that hosted them. Yet within their walls lived bustling communities whose citizens paid their landlords by the minute for the privilege of participation.

The 500-pound gorilla of this market had always been CompuServe, which had been in the business since the days when a state-of-the-art home computer had 16 K of memory and used cassette tapes for storage. Of late, however, an upstart service called America Online (AOL) had been making waves. Under Steve Case, its wunderkind CEO, AOL aimed its pitch straight at the heart of Middle America rather than the tech-savvy elite. Over the course of 1993 alone, it went from 300,000 to 500,000 subscribers. But that was only the beginning if one listened to Case. For a second Home Computer Revolution, destined to be infinitely more successful and long-lasting than the first, was now in full swing, powered along by the ease of use of Windows 3 and by the latest consumer-grade hardware, which made computing faster and more aesthetically attractive than it had ever been before. AOL’s quick and easy custom software fit in perfectly with these trends. Surely this model of the online future — of curated content offered up by a firm whose stated ambition was to be the latest big player in mass media as a whole; of a subscription model that functioned much like the cable television which the large majority of Americans were already paying for — was more likely to take hold than the anarchic jungle that was the World Wide Web. It was, at any rate, a model that Bill Gates could understand very well, and naturally gravitated toward. Never one to leave cash on the table, he started asking himself how Microsoft could get a piece of this action as well.

Steve Case celebrates outside the New York Stock Exchange on March 19, 1992, the day America Online went public.

Gates proceeded in his standard fashion: in May of 1993, he tried to buy AOL outright. But Steve Case, who nursed dreams of becoming a media mogul on the scale of Walt Disney or Jack Warner, turned him down flat. At this juncture, Russ Siegelman, a 33-year-old physicist-by-education whom Gates had made his point man for online strategy, suggested a second classically Microsoft solution to the dilemma: they could build their own online service that copied AOL in most respects, then bury their rival with money and sheer ubiquity. They could, Siegelman suggested, make their own network an integral part of the eventual Windows 95, make signing up for it just another step in the installation process. How could AOL possibly compete with that? It was the first step down a fraught road that would lead to widespread outrage inside the computer industry and one of the most high-stakes anti-trust investigations in the history of American business — but for all that, the broad strategy would prove very, very effective once it reached its final form. It had a ways still to go at this stage, though, targeting as it did AOL instead of the Web.

Gates put Siegelman in charge of building Microsoft’s online service, which was code-named Project Marvel. “We were not thinking about the Internet at all,” admits one of the project’s managers. “Our competition was CompuServe and America Online. That’s what we were focused on, a proprietary online service.” At the time, there were exactly two computers in Microsoft’s sprawling Redmond, Washington, campus that were connected to the Internet. “Most college kids knew much more than we did because they were exposed to it,” says the Marvel manager. “If I had wanted to connect to the Internet, it would have been easier for me to get into my car and drive over to the University of Washington than to try and get on the Internet at Microsoft.”

It came down to the old “not invented here” syndrome that dogs so many large institutions, as well as the fact that the Web and the Internet on which it lived were free, and Bill Gates tended to hold that which was free in contempt. Anyone who attempted to help him over his mental block — and there were more than a few of them at Microsoft — was greeted with an all-purpose rejoinder: “How are we going to make money off of free?” The biggest revolution in computing since the arrival of the first pre-assembled personal computers back in 1977 was taking place all around him, and Gates seemed constitutionally incapable of seeing it for what it was.

In the meantime, others were beginning to address the vexing question of how you made money out of free. On April 4, 1994, Marc Andreessen, the impetus behind the NCSA Mosaic browser, joined forces with Jim Clark, a veteran Silicon Valley entrepreneur, to found Netscape Communications for the purpose of making a commercial version of the Mosaic browser. A team of programmers, working without consulting the Mosaic source code so as to avoid legal problems, soon did just that, and uploaded Netscape Navigator to the Web on October 13, 1994. Distributed under the shareware model, with a $39 licensing fee requested but not demanded after a 90-day trial period was up, the new browser was installed on more than 10 million computers within nine months.

AOL’s growth had continued apace despite the concurrent explosion of the open Web; by the time of Netscape Navigator’s release, the service had 1.25 million subscribers. Yet Steve Case, no one’s idea of a hardcore techie, was ironically faster to see the potential — or threat — of the Web than was Bill Gates. He adopted a strategy in response that would make him for a time at least a superhero of the business press and the investor set. Instead of fighting the Web, AOL would embrace it — would offer its own Web browser to go along with its proprietary content, thereby adding a gate to its garden wall and tempting subscribers with the best of both worlds. As always for AOL, the whole package would be pitched toward neophytes, with a friendly interface and lots of safeguards — “training wheels,” as the tech cognoscenti dismissively dubbed them — to keep the unwashed masses safe when they did venture out into the untamed wilds of the Web.

But Case needed a browser of his own in order to execute his strategy, and he needed it in a hurry. He needed, in short, to buy a browser rather than build one. He saw three possibilities. One was to bring Netscape and its Navigator into the AOL fold. Another was a small company called Spyglass, a spinoff of the National Center for Supercomputing Applications (NCSA) which was attempting to commercialize the original NCSA Mosaic browser. And the last was a startup called Booklink Technologies, which was making a browser from scratch.

Netscape was undoubtedly the superstar of the bunch, but that didn’t help AOL’s cause any; Marc Andreessen and Jim Clark weren’t about to sell out to anyone. Spyglass, on the other hand, struck Case as an unimaginative Johnny-come-lately that was trying to shut the barn door long after the horse called Netscape had busted out. That left only Booklink. In November of 1994, AOL paid $30 million for the company. The business press scoffed, deeming it a well-nigh flabbergasting over-payment. But Case would get the last laugh.

While AOL was thus rushing urgently to “embrace and extend” the Web, to choose an ominous phrase normally associated with Microsoft, the latter was dawdling along more lackadaisically toward a reckoning with the Internet. During that same busy fall of 1994, IBM released OS/2 3.0, which was marketed as OS/2 Warp in the hope of lending it some much-needed excitement. By either name, it was the latest iteration of an operating system that IBM had originally developed in partnership with Microsoft, an operating system that had once been regarded by both companies as nothing less than the future of mainstream computing. But since the pair’s final falling out in 1991, OS/2 had become an irrelevancy in the face of the Windows juggernaut, winning a measure of affection only in some hacker circles and a few other specialized niches. Despite its snazzy new name and despite being an impressive piece of software from a purely technical perspective, OS/2 Warp wasn’t widely expected to change those fortunes before its release, and this lack of expectations proved well-founded afterward. Yet it was a landmark in another way, being the first operating system to include a Web browser as an integral component, in this case a program called Web Explorer, created by IBM itself because no one else seemed much interested in making a browser for the unpopular OS/2.

This appears to have gotten some gears turning in Bill Gates’s head. Microsoft already planned to include more networking tools than ever before in Windows 95. They had, for example, finally decided to bow to customer demand and build right into the operating system TCP/IP, the networking protocol that allowed a computer to join the Internet; Windows 3 required the installation of a third-party add-on for the same purpose. (“I don’t know what it is, and I don’t want to know what it is,” said Steve Ballmer, Gates’s right-hand man, to his programmers on the subject of TCP/IP. “[But] my customers are screaming about it. Make the pain go away.”) Maybe a Microsoft-branded Web browser for Windows 95 would be a good idea as well, if they could acquire one without breaking the bank.

Just days after AOL bought Booklink for $30 million, Microsoft agreed to give $2 million to Spyglass. In return, Spyglass would give Microsoft a copy of the Mosaic source code, which it could then use as the basis for its own browser. But, lest you be tempted to see this transaction as evidence that Gates’s opinions about the online future had already undergone a sea change by this date, know that the very day this deal went down was also the one on which he chose to publicly announce Microsoft’s own proprietary AOL competitor, to be known as simply the Microsoft Network, or MSN. At most, Gates saw the open Web at this stage as an adjunct to MSN, just as it would soon become to AOL. MSN would come bundled into Windows 95, he told the assembled press, so that anyone who wished to could become a subscriber at the click of a mouse.

The announcement caused alarm bells to ring at AOL. “The Windows operating system is what the dial tone is to the phone industry,” said Steve Case. He thus became neither the first nor the last of Gates’s rivals to hint at the need for government intervention: “There needs to be a level playing field on which companies compete.” Some pundits projected that Microsoft might sign up 20 million subscribers to MSN before 1995 was out. Others — the ones whom time would prove to have been more prescient — shook their heads and wondered how Microsoft could still be so clueless about the revolutionary nature of the World Wide Web.

AOL leveraged the Booklink browser to begin offering its subscribers Web access very early in 1995, whereupon its previously robust rate of growth turned downright torrid. By November of 1995, it would have 4 million subscribers. The personable and photogenic Steve Case became a celebrity in his own right, to the point of starring in a splashy advertising campaign for The Gap’s line of khakis; the man and the pants represented respectively the personification and the uniform of the trend in corporate America toward “business casual.” Meanwhile Case’s company became an indelible part of the 1990s zeitgeist. “You’ve got mail!,” the words AOL’s software spoke every time a new email arrived — something that was still very much a novel experience for many subscribers — was featured as a sample in a Prince song, and eventually became the name of a hugely popular romantic comedy starring Tom Hanks and Meg Ryan. CompuServe and AOL’s other old rivals in the proprietary space tried to compete by setting up Internet gateways of their own, but were never able to negotiate the transition from one era of online life to another with the same aplomb as AOL, and gradually faded into irrelevancy.

Thankfully for Microsoft’s shareholders, Bill Gates’s eyes were opened before his company suffered the same fate. At the eleventh hour, with what were supposed to be the final touches being put onto Windows 95, he made a sharp swerve in strategy. He grasped at last that the open Web was the here, the now, and the future, the first major development in mainstream consumer computing in years that hadn’t been more or less dictated by Microsoft — but be that as it may, the Web wasn’t going anywhere. On May 26, 1995, he wrote a memo to every Microsoft employee that exuded an all-hands-on-deck sense of urgency. Gates, the longstanding Internet agnostic, had well and truly gotten the Internet religion.

I want to make clear that our focus on the Internet is critical to every part of our business. The Internet is the most important single development to come along since the IBM PC was introduced in 1981. It is even more important than the arrival of [the] graphical user interface (GUI). The PC analogy is apt for many reasons. The PC wasn’t perfect. Aspects of the PC were arbitrary or even poor. However, a phenomena [sic] grew up around the IBM PC that made it a key element of everything that would happen for the next fifteen years. Companies that tried to fight the PC standard often had good reasons for doing so, but they failed because the phenomena overcame any weakness that [the] resistors identified.

Over the last year, a number of people [at Microsoft] have championed embracing TCP/IP, hyperlinking, HTML, and building clients, tools, and servers that compete on the Internet. However, we still have a lot to do. I want every product plan to try and go overboard on Internet features.

Everything changed that day. Instead of walling its campus off from the Internet, Microsoft put the Web at every employee’s fingertips. Gates himself sent his people lists of hot new websites to explore and learn from. The team tasked with building the Microsoft browser, who had heretofore labored in under-staffed obscurity, suddenly had all the resources of the company at their beck and call. The fact was, Gates was scared; his fear oozes palpably from the aggressive language of the memo above. (Other people talked of “joining” the Internet; Gates wanted to “compete” on it.)

But just what was he so afraid of? A pair of data points provides us with some clues. Three days before he wrote his memo, a new programming language and run-time environment had taken the industry by storm. And the day after he did so, a Microsoft executive named Ben Slivka sent out a memo of his own with Gates’s blessing, bearing the odd title of “The Web Is the Next Platform.” To understand what Slivka was driving at, and why Bill Gates took it as such an imminent existential threat to his company’s core business model, we need to back up a few years and look at the origins of the aforementioned programming language.


Bill Joy, an old-school hacker who had made fundamental contributions to the Unix operating system, was regarded as something between a guru and an elder statesman by 1990s techies, who liked to call him “the other Bill.” In early 1991, he shared an eye-opening piece of his mind at a formal dinner for select insiders. Microsoft was then on the ascendant, he acknowledged, but they were “cruising for a bruising.” Sticking with the automotive theme, he compared their products to the American-made cars that had dominated until the 1970s — until the Japanese had come along peddling cars of their own that were more efficient, more reliable, and just plain better than the domestic competition. He said that the same fate would probably befall Microsoft within five to seven years, when a wind of change of one sort or another came along to upend the company and its bloated, ugly products. Just four years later, people would be pointing to a piece of technology from his own company Sun Microsystems as the prophesied agent of Microsoft’s undoing.

Sun had been founded in 1982 to leverage the skills of Joy along with those of a German hardware engineer named Andy Bechtolsheim, who had recently built an elegant desktop computer inspired by the legendary Alto machines of Xerox’s Palo Alto Research Center. Over the remainder of the 1980s, Sun made a good living as the premier maker of Unix-based workstations: computers that were a bit too expensive to be marketed to even the most well-heeled consumers, but were among the most powerful of their day that could be fit onto or under a single desktop. Sun possessed a healthy antipathy for Microsoft, for all of the usual reasons cited by the hacker contingent: they considered Microsoft’s software derivative and boring, considered the Intel hardware on which it ran equally clunky and kludgy (Sun first employed Motorola chips, then processors of their own design), and loathed Microsoft’s intensely adversarial and proprietorial approach to everything it touched. For some time, however, Sun’s objections remained merely philosophical; occupying opposite ends of the market as they did, the two companies seldom crossed one another’s paths. But by the end of the decade, the latest Intel hardware had advanced enough to be comparable with that being peddled by Sun. And by the time that Bill Joy made his prediction, Sun knew that something called Windows NT was in the works, knew that Microsoft would be coming in earnest for the high-end-computing space very soon.

About six months after Joy played the oracle, Sun’s management agreed to allow one of their star programmers, a fellow named James Gosling, to form a small independent group in order to explore an idea that had little obvious connection to the company’s main business. “When someone as smart as James wants to pursue an area, we’ll do our best to provide an environment,” said Chief Technology Officer Eric Schmidt.

James Gosling

The specific “area” — or, perhaps better said, problem — that Gosling wanted to address was one that still exists to a large extent today: the inscrutability and lack of interoperability of so many of the gadgets that power our daily lives. The problem would be neatly crystallized almost five years later by one of the milquetoast jokes Jay Leno made at the Windows 95 launch, about how the VCR in even Bill Gates’s living room was still blinking “12:00” because he had never figured out how to set the thing’s clock. What if everything in your house could be made to talk together, wondered Gosling, so that setting one clock would set all of them — so that you didn’t have to have a separate remote control for your television and your VCR, each with about 80 buttons whose functions you didn’t understand and never, ever pressed? “What does it take to watch a videotape?” he mused. “You go plunk, plunk, plunk on all of these things in certain magic sequences before you can actually watch your videotape! Why is it so hard? Wouldn’t it be nice if you could just slide the tape into the VCR, [and] the system sort of figures it out: ‘Oh, gee, I guess he wants to watch it, so I ought to power up the television set.'”

But when Gosling and his colleagues started to ponder how best to realize their semi-autonomous home of the future, they tripped over a major stumbling block. While it was true that more and more gadgets were becoming “smart,” in the sense of incorporating programmable microprocessors, the details of their digital designs varied enormously. Each program to link each individual model of, say, VCR into the home network would have to be written, tested, and debugged from scratch. Unless, that is, the program could be made to run in a virtual machine.

A virtual machine is an imaginary computer which a real computer can be programmed to simulate. It permits a “write once, run everywhere” approach to software: once a given real computer has an interpreter for a given virtual machine, it can run any and all programs that have been or will be written for that virtual machine, albeit at some cost in performance.
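
For the technically curious, a minimal sketch may make the concept concrete. The little interpreter below is written in Java (fittingly enough, given where this story is headed), but its handful of invented opcodes has nothing to do with real Java bytecode; it is only meant to show how a single interpreter loop lets the same imaginary “program” run unchanged on any computer that can host it.

```java
// A toy stack-based virtual machine. The opcodes are invented for this
// illustration and bear no relation to the real Java bytecode.
import java.util.ArrayDeque;
import java.util.Deque;

public class ToyVM {
    static final int PUSH = 0, ADD = 1, MUL = 2, PRINT = 3, HALT = 4;

    public static void run(int[] program) {
        Deque<Integer> stack = new ArrayDeque<>();
        int pc = 0; // program counter: an index into the program array
        while (true) {
            switch (program[pc++]) {
                case PUSH:  stack.push(program[pc++]); break; // next word is the operand
                case ADD:   stack.push(stack.pop() + stack.pop()); break;
                case MUL:   stack.push(stack.pop() * stack.pop()); break;
                case PRINT: System.out.println(stack.peek()); break;
                case HALT:  return;
                default:    throw new IllegalStateException("unknown opcode");
            }
        }
    }

    public static void main(String[] args) {
        // The same array of numbers is a complete "program" that behaves
        // identically wherever the interpreter runs: it computes (2 + 3) * 4.
        run(new int[] { PUSH, 2, PUSH, 3, ADD, PUSH, 4, MUL, PRINT, HALT });
    }
}
```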

Like almost every other part of the programming language that would eventually become known as Java, the idea of a virtual machine was far from new in the abstract. (“In some sense, I would like to think that there was nothing invented in Java,” says Gosling.) For example, a decade before Gosling went to work on his virtual machine, the Apple Pascal compiler was already targeting one that ran on the lowly Apple II, even as the games publisher Infocom was distributing its text adventures across dozens of otherwise incompatible platforms thanks to its Z-Machine.

Unfortunately, Gosling’s new implementation of this old concept proved unable to solve by itself the original problem for which it had been invented. Even Wi-Fi didn’t exist at this stage, much less the likes of Bluetooth. Just how were all of these smart gadgets supposed to actually talk to one another, to say nothing of pulling down the regular software updates which Gosling envisioned as another benefit of his project? (Building a floppy-disk drive into every toaster was an obvious nonstarter.) After reluctantly giving up on their home of the future, the team pivoted for a while toward “interactive television,” a would-be on-demand streaming system much like our modern Netflix. But Sun had no real record in the consumer space, and cable-television providers and other possible investors were skeptical.

While Gosling was trying to figure out just what this programming language and associated runtime environment he had created might be good for, the World Wide Web was taking off. In July of 1994, a Sun programmer named Patrick Naughton did something that would later give Bill Gates nightmares: he wrote a fairly bare-bones Web browser in Java, more for the challenge than anything else. A couple of months later there came the eureka moment: Naughton and another programmer named Jonathan Payne made it possible to run other Java programs, or “applets” as they would soon be known, right inside their browser. They stuck one of the team’s old graphical demos on a server and clicked the appropriate link, whereupon they were greeted with a screen full of dancing Coca-Cola cans. Payne found it “breathtaking”: “It wasn’t just playing an animation. It was physics calculations going on inside a webpage!”

In order to appreciate his awe, we need to understand what a static place the early Web was. HTML, the “language” in which pages were constructed, was an abbreviation for “Hypertext Markup Language.” In form and function, it was more akin to a typesetting specification than a Turing-complete programming language like C or Pascal or Java; the only form of interactivity it allowed for was the links that took the reader from static page to static page, while its only visual pizazz came in the form of static in-line images (themselves a relatively recent addition to the HTML specification, thanks to NCSA Mosaic). Java stood to change all that at a stroke. If you could embed programs running actual code into your page layouts, you could in theory turn your pages into anything you wanted them to be: games, word processors, spreadsheets, animated cartoons, stock-market tickers, you name it. The Web could almost literally come alive.
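
To make the contrast concrete, here is a minimal sketch of what an applet looked like in practice once Java arrived, written against the mid-1990s java.applet API (long since deprecated in modern Java) and using an invented class name. A static HTML page merely pointed at the compiled code; the browser’s bundled Java virtual machine fetched it and ran it right inside the page.

```java
// HelloApplet.java -- a sketch in the style of a mid-1990s Java applet.
// A webpage embedded the compiled class with a tag along the lines of:
//
//   <applet code="HelloApplet.class" width="400" height="100"></applet>
//
// The browser downloaded the .class file and ran it in its built-in
// Java virtual machine, sandboxed from the rest of the user's computer.

import java.applet.Applet;
import java.awt.Graphics;
import java.util.Date;

public class HelloApplet extends Applet {
    @Override
    public void paint(Graphics g) {
        // Unlike static HTML, this text is produced by code running
        // inside the page at the very moment it is drawn.
        g.drawString("Drawn by a program, not markup. It is now " + new Date(), 20, 50);
    }
}
```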

The potential was so clearly extraordinary that Java went overnight from a moribund project on the verge of the chopping block to Sun’s top priority. Even Bill Joy, now living in blissful semi-retirement in Colorado, came back to Silicon Valley for a while to lend his prodigious intellect to the process of turning Java into a polished tool for general-purpose programming. There was still enough of the old-school hacker ethic left at Sun that management bowed to the developers’ demand that the language be made available for free to individual programmers and small businesses; Sun would make its money on licensing deals with bigger partners, who would pay for the Java logo on their products and the right to distribute the virtual machine. The potential of Java certainly wasn’t lost on Netscape’s Marc Andreessen, who had long been leading the charge to make the Web more visually exciting. He quickly agreed to pay Sun $750,000 for the opportunity to build Java into the Netscape Navigator browser. In fact, it was Andreessen who served as master of ceremonies at Java’s official coming-out party at a SunWorld conference on May 23, 1995 — i.e., three days before Bill Gates wrote his urgent Internet memo.

What was it that so spooked him about Java? On the one hand, it represented a possible if as-yet unrealized challenge to Microsoft’s own business model of selling boxed software on floppy disks or CDs. If people could gain access to a good word processor just by pointing their browsers to a given site, they would presumably have little motivation to invest in Microsoft Office, the company’s biggest cash cow after Windows. But the danger Java posed to Microsoft might be even more extreme. The most maximalist predictions, which were being trumpeted all over the techie press in the weeks after the big debut, had it that even Windows could soon become irrelevant courtesy of Java. This is what Microsoft’s own Ben Slivka meant when he said that “the Web is the next platform.” The browser itself would become the operating system from the perspective of the user, being supported behind the scenes only by the minimal amount of firmware needed to make it go. Once that happened, a new generation of cheap Internet devices would be poised to replace personal computers as the world now knew them. With all software and all of each person’s data being stored in the cloud, as we would put it today, even local hard drives might become passé. And then, with Netscape Navigator and Java having taken over the role of Windows, Microsoft might very well join IBM, the very company it had so recently displaced from the heights of power, in the crowded field of computing’s has-beens.

In retrospect, such predictions seem massively overblown. Officially labeled beta software, Java was in reality more like an alpha release at best at the time it was being celebrated as the Paris to Microsoft’s Achilles, being painfully crash-prone and slow. And even when it did reach a reasonably mature form, the reality of it would prove considerably less than the hype. One crippling weakness that would continue to plague it was the inability of a Java applet to communicate with the webpage that spawned it; applets ran in Web browsers, but weren’t really of them, being self-contained programs siloed off in a sandbox from the pages that hosted them. Meanwhile the prospects of applications like online word processing, or even online gaming in Java, were sharply limited by the fact that at least 95 percent of Web users were accessing the Internet on dial-up connections, over which even the likes of a single high-resolution photograph could take minutes to load. A word processor like the one included with Microsoft Office would require hours of downloading every time you wanted to use it, assuming it was even possible to create such a complex piece of software in the fragile young language. Java never would manage to entirely overcome these issues, and would in the end enjoy its greatest success in other incarnations than that of the browser-embedded applet.
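
(Some rough arithmetic makes the bandwidth problem vivid: a 28.8-kilobit-per-second modem, fast by 1995 standards, delivers at most about 3.5 kilobytes per second, so a hypothetical 25-megabyte word processor would need on the order of two hours to download even over a flawless connection.)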

Still, cooler-headed reasoning like this was not overly commonplace in the months after the SunWorld presentation. By the end of 1995, Sun’s stock price had more than doubled on the strength of Java alone, a product yet to see a 1.0 release. The excitement over Java probably contributed as well to Netscape’s record-breaking initial public offering in August. A cavalcade of companies rushed to follow in the footsteps of Netscape and sign Java distribution deals, most of them on markedly more expensive terms. Even Microsoft bowed to the prevailing winds on December 7 and announced a Java deal of its own. (BusinessWeek magazine described it as a “capitulation.”) That all of this was happening alongside the even more intense hype surrounding the release of Windows 95, an operating system far more expansive than any that had come out of Microsoft to date but one that was nevertheless of a very traditionalist stripe at bottom, speaks to the confusion of these go-go times when digital technology seemed to be going anywhere and everywhere at once.

Whatever fear and loathing he may have felt toward Java, Bill Gates had clearly made his peace with the fact that the Web was computing’s necessary present and future. The Microsoft Network duly debuted as an icon on the default Windows 95 desktop, but it was now pitched primarily as a gateway to the open Web, with just a handful of proprietary features; MSN was, in other words, little more than yet another Internet service provider, of the sort that were popping up all over the country like dandelions after a summer shower. Instead of the 20 million subscribers that some had predicted (and that Steve Case had so feared), it attracted only about 500,000 customers by the end of the year. This left it no more than one-eighth as large as AOL, which had by now completed its own deft pivot from proprietary online service of the 1980s type to the very face of the World Wide Web in the eyes of countless computing neophytes.

Yet if Microsoft’s first tentative steps onto the Web had proved underwhelming, people should have known from the history of the company — and not least from the long, checkered history of Windows itself — that Bill Gates’s standard response to failure and rejection was simply to try again, harder and better. The real war for online supremacy was just getting started.

(Sources: the books Overdrive: Bill Gates and the Race to Control Cyberspace by James Wallace, The Silicon Boys by David A. Kaplan, Architects of the Web by Robert H. Reid, Competing on Internet Time: Lessons from Netscape and Its Battle with Microsoft by Michael Cusumano and David B. Yoffie, dot.con: The Greatest Story Ever Sold by John Cassidy, Stealing Time: Steve Case, Jerry Levin, and the Collapse of AOL Time Warner by Alec Klein, Fools Rush In: Steve Case, Jerry Levin, and the Unmaking of AOL Time Warner by Nina Munk, and There Must be a Pony in Here Somewhere: The AOL Time Warner Debacle by Kara Swisher.)

 
 


Titanic Visions, Part 3: An Adventure Out of Time

It’s disarmingly easy to underestimate Titanic: Adventure Out of Time, by far the best-selling game in history about the doomed luxury liner. At first glance, after all, it looks like just another of the lifeless multimedia Myst clones that were cluttering up store shelves in such quantities in the mid-1990s. Meanwhile the studio behind it was known as CyberFlix, a name which positively reeks of the era when equally misbegotten “interactive movies” were all the rage. And indeed, CyberFlix really was founded by folks convinced that the future of games would be a collision between Hollywood and Silicon Valley.

But the prime mover behind the operation, a 30-something Tennessean named Bill Appleton, wasn’t just another of the clueless bandwagon jumpers who were using off-the-shelf middleware packages like Macromedia Director to cobble together dodgy games where the video clips took center stage and the interactivity was an afterthought. On the contrary, Appleton knew how to make innovative technology of his own, and had a lengthy resumé to prove it. His early software oeuvre was the ironic polar opposite of interactive movies, those ultimate end-user products that seemed designed to convince the human being behind the monitor that she couldn’t possibly create anything like this. In the beginning, Appleton was all about empowering people to make stuff for themselves.

A youthful overachiever from Oak Ridge, Tennessee, Appleton studied painting and philosophy at university before settling on economics. He was weeks away from earning his master’s degree in that field from Vanderbilt University in the spring of 1984, when he saw an Apple Macintosh for the first time. Like any number of other curious minds who hadn’t heretofore taken much interest in computers, he allowed all of his plans for his life to be utterly derailed by the encounter. He dropped out of university, moved back into his parents’ basement, and rededicated his life to making the Mac do amazing things.

He created an adventure game called Enchanted Scepters, which combined vestiges of the text adventures that were popular on other platforms at the time with simple pictures, sounds, and mouse-driven interactions. In this sense, it was similar to such other early Mac graphic adventures as ICOM Simulations’s Deja Vu, although considerably less refined. The real stroke of genius came when Appleton, a year after releasing the game itself through a small publisher called Silicon Beach Software, packaged up all of the tools he had used to make it and released them as well, under the name of World Builder. The do-it-yourself toolkit spawned a small but dedicated amateur community of adventure makers and players that persisted well into the 1990s. Appleton also adapted World Builder into another product called Course Builder, aimed at educators who wanted to create interactive experiences for the classroom.

With its ethos of empowering a fairly non-technical end user to create original multimedia content, Course Builder especially was veering into the territory soon to be staked out by HyperCard, Apple’s own revolutionary hypertext-authoring system. It’s thus no surprise that, when that software did debut in 1987, Appleton first greeted it as a threat. He quickly decided, however, to adopt the old adage of can’t beat ’em, join ’em — or rather enhance ’em. He moved to Silicon Valley, took control of a team of programmers hired by Silicon Beach Software, and made SuperCard, a system that could run existing HyperCard “stacks” as-is, but that added a whole slew of additional native features to the environment. It attracted some interest in the Macintosh world, but proved unable to compete with HyperCard’s huge existing user base, the result of being bundled with every single new Mac. So, Appleton turned back to games. Hooking up with a Chicago-based developer and publisher called Reactor, he made a beat-em-up game in the tradition of Karateka called Creepy Castle, then embarked on an action-packed 3D extravaganza called Screaming Metal, only for Reactor to go out of business midway through development.

It was thus a thoroughly frustrated Bill Appleton who returned to Tennessee in 1992. His eight years in software had resulted in a pair of cults in the form of the World Builder and SuperCard communities, but he hadn’t ever managed to hit the commercial bullseye he was aiming for. He was a man of significant ambition, and the status of cult hero just wasn’t good enough for him. “I’ve built a lot [of programs] for Silicon Valley,” he said, then went on to air his grievances using the precious diction of a sniffy artiste: “This isn’t about money or power or technology. It’s about art. I’m an artist, and I’ve got to be able to control my work.” Like Bob Dylan and The Band retreating to that famous pink house in Woodstock, he decided he could do so as easily right there in Tennessee as anywhere else.

Appleton recruited a few other bright sparks, none of them your prototypical computer nerds. There were Scott Scheinbaum, a musician and composer who had spent the last fifteen years playing in various local rock bands and working in record stores to make ends meet; Jamie Wicks, an accomplished young visual artist, described by a friend from school as “the quiet guy who sits next to you in class and draws pictures of monsters”; Andrew Nelson, a journalist by education who had grown tired of writing puff pieces for glossy lifestyle magazines; and Eric Quist, an attorney and childhood friend of Appleton. “Bill inoculated [sic] us with his vision of becoming multimedia superstars and taking over the world,” says Scheinbaum. The five of them hatched their plans for world domination in Appleton’s basement before officially founding CyberFlix in May of 1993, with Appleton as the majority stakeholder and decider-in-chief. The division of labor on their games broke down obviously enough: Appleton would be the programmer, Scheinbaum the composer and sound-effects man, Wicks the pixel artist and 3D modeller, Nelson the designer and writer, and Quist the business guy. In fact, by this time they had their first game just about ready to go.

Lunicus

It went by the name of Lunicus. More of a tech demo than a carefully designed game, it began as a graphic adventure that took place on the titular Moonbase Lunicus, only to turn into a frantic corridor shooter, a slightly more sophisticated Wolfenstein 3D that came complete with a pounding rock-and-roll soundtrack. But everyone seemed to agree that its most impressive feature was the sheer speed with which it unspooled from the CD-ROM, thanks to some proprietary software technology developed by Appleton. Called a “mindblower” by no less a pundit than Steven Levy (author of the seminal book Hackers), the game sold 50,000 copies on the Macintosh, then was picked up by Paramount Interactive and ported to Microsoft Windows, where it did rather less well in the face of much stiffer competition. A still-faster follow-up called Jump Raven did even better in a Mac marketplace that was starving for just this style of flashy action game, selling by some reports almost 100,000 copies.

Jump Raven

CyberFlix was riding high, basking in the glowing press they were receiving inside the small and fairly insular milieu of Mac gaming. Being so thoroughly immersed in that world could distort the founders’ perspective. Jump Raven “was the fastest thing on the Mac,” says one early CyberFlix employee. “And that was back when the Mac was going to take over everything.”

CyberFlix moved into a snazzy loft in the center of Knoxville, Tennessee, and set about burnishing their hipster credibility by throwing parties for the downtown set, with live bands and open bars. Knoxville wasn’t quite the country-bumpkin town that East and West Coast media sometimes like to stereotype it as; its three largest employers were the University of Tennessee, the Tennessee Valley Authority, and the Oak Ridge National Laboratory. Many people in local government and business were eager to see CyberFlix as the progenitors of a new line for the city in multimedia. In a major publicity coup, Newsweek magazine was enticed to come down and write a two-page feature on the company, in which the wide-eyed reporter said that Appleton had become “something of a legend” during his time in Silicon Valley — this was something of a stretch — and called the house in whose basement the founding quintet had gotten together a literal log cabin. Others, however, were less credulous. One consultant who was hired to help the company work out a proper business plan remembers that Appleton “absolutely would not listen. He would sit and seem to listen, and then he was off to something else. It was exasperating.”

One of the things he was off to was CyberFlix’s next big game, a Western homage or send-up — the distinction is never clear, and therein lie many of the game’s problems — called Dust: A Tale of the Wired West. It was to be an unadulterated adventure game, whose action elements were limited to a few anodyne mini-games. CyberFlix used mostly employees and friends to play its characters — and therein lies another of the problems. Like many games of its technological ilk and era, Dust lacks the courage of its convictions, resulting in a fatal case of split personality. It seems that CyberFlix first intended to tell a fairly serious story. But as the amateurish acting and the limitations of their tools presented themselves, it drifted further and further into camp as a sort of defense mechanism, albeit without excising the would-be “dramatic” beats that had already been laid down. The result was, as Arinn Dembo noted in a scathing review for Computer Gaming World magazine, a comedy with dramatic relief, an approach that doesn’t work nearly as well as the opposite. Dembo’s concluding paragraphs are so well-stated, and apply so well not just to this game but to many other adventure games, that I’d like to quote them here.

The confusion in the design of this game brings up a general point, which is this: if you want to use dramatic elements in any narrative, you have to earn them. That means taking your subject seriously, even if it is “just a computer game.” Someone has to go to the trouble of fashioning characters deeper than your average mud puddle (and that includes giving them names that aren’t farcical), and writing dialog for them that sounds like something a real person might say.

If, on the other hand, your intention is to satirize the form and make fun of its tropes and limitations, you lay your cards on the table from the start; you don’t try to tap into drama that you don’t deserve. It’s either Blazing Saddles or The Unforgiven — you can’t mix the two. Computer-game writers need to learn that comedy is not a fallback position, something you do when you don’t believe you’re competent to sustain the drama. Satire and farce can be done well, and I’m not against them, but I’m against using them as a screen for poor storytelling.

All of this was made even more problematic by the way that even the jokes usually failed to land. The name of Dust, for example, was intended as a strained “ironic play” on the name of Myst. But literally no one cottoned onto this until a peeved-sounding CyberFlix employee revealed it in an interview.

Dust: A Tale of the Wired West

The same CyberFlix representative said that, of 90 publications that reviewed Dust, 88 of them recommended it. If so, I managed to stumble on both of the exceptions, and, unfortunately for CyberFlix, they were both biggies: the aforementioned Computer Gaming World, the journal of record among the hardcore set, and Entertainment Weekly, a major taste-maker among the mainstream-entertainment set which the company wanted desperately to reach. Released in late 1995, Dust sold only 30,000 copies between its Macintosh and Windows incarnations. In the aftermath of its failure, CyberFlix was forced to take on more plebeian contract work, such as porting software from Windows to Mac and implementing pre-written design briefs for educational products. Other folks at the company turned to simpler, less expensive sorts of original games. For many both inside and outside of CyberFlix were now beginning to wonder whether interactive movies were really destined to be the future of mainstream entertainment after all. But CyberFlix had one more big game of the old style still in them — the one that would write them into gaming history as something more than just another flash in the pan from the 1990s multimedia boom.



It must be conceded that Titanic: Adventure Out of Time did not have a very auspicious gestation. Its mastermind Andrew Nelson admits that he was prompted to make it by a logic far more plebeian than any of the grand philosophical meditations about fate and hubris that the great ship’s sinking has so often inspired. Back when CyberFlix was just getting off the ground, he’d had an interesting conversation with his sister-in-law: she “was intrigued with these new CD-ROM games, but she had heard they take forever and she didn’t have that much time.”  Soon after, he read a magazine article about the Titanic, which noted that the ship had sunk two and a half hours after hitting the iceberg. That seemed like just about the right amount of time for an interactive movie that could appeal to busy adults like his sister-in-law. He decided to take the idea up with Bill Appleton and his other colleagues.

Initially, he didn’t have any more luck than Steve Meretzky had enjoyed at Infocom or Legend with his own Titanic concept. Appleton was particularly unenthusiastic. But Nelson kept hammering away at him, and finally, after Appleton’s own brainchild of Dust had proved a bust, he got his way. The company would go all-in on one last big adventure game.

The project may have been born out of practical commercial reasoning, but that didn’t keep it from taking on a more idealistic personality now. Nelson and many of those around him became full-bore Titanic fanatics. “We read all the books, listened to tapes of survivors, looked at 750 different pictures,” says Scott Scheinbaum. They laid out their virtual ship from the builder’s blueprints that had been used for the original — the very same documents, in fact, that James Cameron and friends were using to build their Titanic replica out of real steel down in Mexico at the very same time, although no one at CyberFlix was aware of this. Computer games which are labelled as “historical” tend to be strategic war games, exercises in moving abstract units around abstract fields of battle. CyberFlix was attempting a different kind of historical re-creation — a living, immersive view of history that dropped you right into the past as an individual on the scene.

In that spirit, Nelson and his colleagues set out to present as accurate a reproduction of the ship as the resources at their disposal would allow; again, they took their own endeavor as seriously as James Cameron was taking his. They tried to make every detail of every room as authentic as possible, knowing all the while that, while a movie director’s cameras had the luxury of gliding quickly over the surface of things, their players would be able to move around of their own free will in the spaces CyberFlix created and linger over what they saw to their heart’s content. This only made it that much more important to get things right.

A journalist named J.C. Herz came to visit CyberFlix in Knoxville for part of a book she was writing about videogame culture. She found an office with a “24-hour Kinko’s Copies atmosphere — full of equipment and overworked twentysomethings, simultaneously frenetic and oddly mellow.” She was especially taken by a “photo researcher” named Billy, who in his country boy’s baseball cap looked and talked like Bo Duke from The Dukes of Hazzard. “I do the carpeting for the Titanic,” he told her by way of introduction.

We have a room where you start out in the game, and I’ve outfitted the desk with postcards that you can actually flip over and read, and magazines like Brave New World, and I’ve designed the covers for ’em, so you can pick those up and look at ’em. There’s a lot of detail in there that we don’t even expect people to actually look at. It’s like, if you were just tryin’ to half-ass it and get through it, you might make a lamp, but you might not make the electric cord that goes behind the desk. We’re tryin’ to get all the detail in there. There’s a lot of games that you look at today, and a lot of people don’t take the time and energy to go in and really work with their maps to make ’em look real, so they end up coming out lookin’ plastic or fake. I made it so that when you click on [a] scrapbook, it opens up, and then all the pages are just full of imagery, you know, ephemera, things like that. So I go out and I find all the stuff to go in the scrapbook and put it in there. That’s the fun job. I could spend a day or I could spend a month on that book.

I worked 36 hours in two days last week. But they try to make it as accommodating as possible. We’ve got showers, you know. And they stock the refrigerators with Cokes. Everybody gives you Cokes. They want you gettin’ wired so you stay there all the time. And they got some couches. So, I mean, you can stay here forever.

Another of the employees she met was named Alex, a rough-looking character with a Mohawk haircut, earrings, and tattoos to complement his “lengthy criminal record,” who had recently discovered a latent talent for computer art. He demonstrated that not quite everyone working on the Titanic game shared Billy’s passion for it. Long force of habit kept him talking about the people who ran CyberFlix as The Man, even though they let him get away with just about anything.

Whatever it takes to keep us here. Whatever we want. You can come in looking like a wreck, reeking of booze, whatever, and they’re never gonna fire you for it because they need you.

Luckily enough, they’ve been thoughtful not to force any kind of real schedule on us. Just get in when you can and do your shit. So, I just go to work doing whatever I have to do, build sets and do props, little things here and there where it needs to fit in, do movies and help. While I’ve got big jobs off running on the SGI [graphics workstation], I just jump around and do little different things, 2D work or whatever. As long as it takes is as long as you’ve got to spend, and if you’re here friggin’ eighteen hours a day, so be it.

And it’s kind of very strange for me because until I came up here to do this I was always working construction, my whole life, and I felt sorry for all the poor bastards trapped in air-conditioned prisons all day, and I thought it was so much fun to be roaming around on the job site, getting sun and running and hollering and screaming. And that’s all well and good, but you ain’t never gonna make shit. You’re gonna die poor or you’re gonna die pissing away your social-security check in some stinking little bar, and that’s no good. So, I just decided to take the step and at least do this for a few years to say that I could do it, and make some money out of it. If something went horribly wrong here tomorrow and I got kicked out or fired or I had to leave, I would just throw some things in the truck, get out, and go someplace else and do it. Because this industry is just replicating itself at such a disgusting rate, and everybody’s got something to do. And sure, not everything is quality, but it doesn’t matter. It’s like, you got money? All right, pay me, I’ll do it. Give it up. And then you just do it and move on again.

Of course, a game consists of more than just its graphical presentation, regardless of whether the latter is created lovingly or for reasons of filthy lucre. What, then, was CyberFlix’s Titanic game actually all about, beyond the obvious?

Andrew Nelson named his game Titanic: Adventure Out of Time because it really does involve time travel — which, as readers of the previous article in this series will recognize, is rather an ongoing theme in ludic Titanic fictions. It opens not in 1912 in the North Atlantic but in 1942. You play a former agent in His Majesty’s Secret Service who has fallen on hard times. You’ve been drinking your life away in your dingy flat, still haunted by the mission that destroyed your promising career — an espionage mission which took place aboard the Titanic. (Shades of Graham Nelson’s Jigsaw, although the similarities would appear to be completely coincidental.) Then a German bomb falls on your head, but instead of killing you it opens up a rift in space-time, sending you back to April 14, 1912, to try again.

You arrive in your cabin aboard the Titanic at 9:30 on that fateful evening, two hours before the collision with the iceberg. After an introductory spiel from the ship’s steward, you’re free to start exploring. Indeed, the meticulously re-created ship lies at the heart of this game’s appeal. You can roam freely through First Class, Second Class, and steerage; up to the promenade decks and into the bridge and wireless room; to the ship’s gym, complete with state-of-the-art exercise equipment like the “electric camel”; to the gentlemen’s smoking lounge and the Café Parisien; to the squash court and the Turkish sauna, with its alarmingly named “electric bath”; even down into the boiler rooms and the cargo holds. All of these and more are presented as node-based spaces pre-rendered in first-person 3D — in the superficial style of Myst, in other words. But do remember the opening to this article, when I warned you not to underestimate this game. CyberFlix’s technology was better than that of the vast majority of Myst clones that were flooding the market at this time, and their ambitions for this project at least were higher.

There are in fact only a handful of Myst-style set-piece puzzles here, none of them terribly difficult. Instead of fiddling endlessly with esoteric mechanics in a deserted environment, you spend your time here — when not just taking in the views like a virtual tourist, that is — actually talking to a diverse cast of characters whom you meet scattered all over the ship, who in the aggregate are a pretty good representation of the many nationalities, professions, and social classes that were aboard the real Titanic. Having apparently learned a lesson from Dust, CyberFlix splashed out for mostly professional actors this time. The accents are pretty good, and the voice acting in general is, if not always inspired, serviceable enough by the standard of most productions of this nature and vintage.

Prior to the Titanic‘s tragic rendezvous with the iceberg, Adventure Out of Time runs on plot rather than clock time. That’s to say that time aboard the ship, which you can keep track of via your handy pocket watch, advances in increments of anywhere from five to fifteen minutes only when you complete certain milestones. If you choose to do nothing but wander around taking in the scenery, in other words, you have literally forever in which to do so — which isn’t a bad thing, given how big a part of the game’s appeal this virtual tourism really is. In another testimony to just that reality, CyberFlix included a “tour mode” separate from the game proper, which lets you explore the ship whilst listening to historical commentary. One has to assume that, just as most of the people who bought Myst never got off the first island, most of the people who casually plucked this game off a shop shelf were content just to poke around the Titanic for a while and call it a day.

But let’s assume that you’re one of the minority who chose to go deeper. As noted above, progressing through the milestones doesn’t entail solving logic puzzles so much as it does social ones. You scurry all over the ship, from the top of the crow’s nest to the bowels of the engine rooms, talking to everyone you can find, running fetch quests and conducting third-party diplomacy. It goes without saying that a real person on the real ship could never possibly have covered this much ground in a bare two hours, but it doesn’t really matter. Thankfully, in most situations you can jump from place to place by clicking on a map of the ship given to you by the steward at the beginning of the game. You soon learn that there’s a bewildering amount of stuff going on aboard this version of the Titanic well before it hits the iceberg. British and German and Russian and Serbian spies and double agents are all aboard, intriguing their little hearts out in the name of great-power politics. There’s a jewelry-smuggling ring, a servant girl who’s blackmailing the steel kingpin who got her pregnant, even a former flame of your own begging you to help her out with this and that for old times’ sake. And then there’s a rather mediocre painting being passed around, which the epilogue will reveal is from the hand of an obscure Austrian artist named Adolf Hitler…

Finding out about everything that’s going on aboard will likely require multiple playthroughs. For every time you do something to add minutes to the clock, you run the risk of losing the chance to see things that were taking place during the time window that’s just passed. It’s occasionally possible to get all of the intricate plot machinery fouled up and end up with someone talking to you familiarly about things you know nothing about, but this is relatively unusual. Very few other adventure games have attempted to offer their players such a freewheeling story space as this one, and even fewer have succeeded this well. There are no complete dead ends here that I know of; every player’s story can eventually be brought to a resolution of some kind if she just keeps poking at things long enough.

These two hours before disaster strikes are charged with the dreadful foreknowledge of what’s coming — with the knowledge that, if the law of averages holds true, two out of every three of the people you talk to won’t live to see the dawn. I played this game last winter, when we were in the process of moving house and my wife was already working and staying in another town. Sitting all alone in an empty living room on a cold, dark Scandinavian evening, surrounded by the souvenirs of our life together packed up in moving boxes, now strikes me as the perfect environment in which to appreciate it. Others have similar memories. Andrew Nelson:

People use the word “haunting” a lot to describe this game. And I know the feeling, because late at night while I was checking out if the dialog was working and I was strolling down those hallways — and how they were lit by our designers, and the amazing score that Scott Scheinbaum did, it had a very otherworldly feeling to it. Sometimes even I would get chills walking through it and encountering some of these passengers.

It’s debatable to what extent these feelings are the product of real aesthetic intent and to what extent they’re mere artifacts of the technology used to create the game, not to mention the knowledge we possess that’s external to its world. Yet we shouldn’t be too eager to look askance at any game that manages for whatever reason to evoke feelings in its player that go beyond the primary emotional colors, as this one does. And then, too, some things plainly are done, cleverly and deliberately, to heighten the sense of encroaching doom. For example, little establishing cut scenes play from time to time, showing the ship sailing inexorably onward toward its date with a cruel destiny.

After said destiny comes to a head and the iceberg is struck, everything begins to feel more immediate and urgent, as it should. At this point, plot time goes away in favor of something close to if not quite the same as clock time: the clock ticks a handful of seconds every time you make a move as you attempt to wrap up your espionage mission and get certain vital objects safely off the ship along with your own person. One might say that this is the real stress test for the game as a fiction. Can it muster the gravitas to depict a tragedy as immense as this one in an honest, unflinching way?

Alas, the short answer is no, not really. Some of this can be blamed on technological constraints; a Myst-style engine is better suited to contemplative exploration than the mass chaos the game is now attempting to project. Yet there’s no denying that the writing also fails the test in the breach. One or two of the characters behave just about believably. The most unnervingly realistic reaction comes from a snobby old First Class busybody who has refused to get into the first lifeboat offered to her because it’s “full of people I don’t know,” and because, like so many passengers, she didn’t truly believe the ship would sink. Now she clutches her pearls alone there on the deck and begs forlornly for assurance that surely there will be more lifeboats, won’t there? But the majority of characters fall victim to the old Dust syndrome. Unable or unwilling to stare down tragedy without blinking, the game falls back on jarringly inappropriate comedy. In terms of its fiction, the actual sinking is by far the weakest part of the game; we can feel thankful that this climax takes up a fairly small portion of the full playing time. Still, it does have one practical saving grace: it gives you one last chance to wrap up any loose ends you failed to get to earlier — one last chance, as it turns out, to change history, hopefully for the better.

For in the epilogue the game returns you to 1942 and presents your actions aboard the Titanic as having determined the course of world history over the last 30 years; think of it as the ultimate riposte to Graham Nelson’s claim that the disaster was not any “turning point” in history. The history for which you’re responsible can be much the same as the timeline we know or even worse. There’s an element of black comedy to many of these scenarios, as when you avert both the First World War and the rise of Adolf Hitler (who has vanquished the monster called Envy that was lurking in the depths of his soul by becoming a successful painter selling vacantly pleasant landscapes to middle-class housewives), only to see the entire world get steamrolled by the Soviet Union. It makes me think of dodging the iceberg in Dateline Titanic: “Oh, no! You hit another one!” But in the ideal case, where you’ve chased down every single plot thread and wrapped them all up neatly, history turns out markedly better, with neither a First World War, a Second World War, nor (presumably, in that there is no Soviet Union) a Cold War.

Adventure Out of Time is an impressive piece of work in many respects, standing out not least because it’s so much more ambitious and, well, just better than CyberFlix’s track record before it would ever lead one to expect. It’s possible to finish it with a very different story to tell about your time aboard the Titanic than someone else who has accomplished the same feat. And that is a very rare quality in adventure games.

That said, I can’t quite say that I love this game unabashedly. Its failings in the writing department — its inability to make me really care about any of the characters aboard or to build upon the vague sense of dread it has so masterfully engendered when the time comes for sharper emotions — keep it from joining my own top rank of games. Nevertheless, its rich grounding in real history and the formal ambition it displays mark it as the labor of love it so clearly was. It remains well worth playing as an example of a path seldom taken in adventure games, a welcome example of a game that’s much, much more than it first appears to be.

Its commercial trajectory, on the other hand, is a case study in how those things sometimes don’t matter a whit. Sometimes, all you need to do to have a hit is to get the timing right.


Aboard the Titanic. The eeriness of wandering the doomed ship, which is almost deserted thanks to the limitations of the technology used to re-create it, is what most players seem to remember best about the game.

Penny Pringle, your intelligence contact aboard the ship. Stills of real people in costume were spliced over the computer-generated graphics. Their lips and facial expressions were then painstakingly hand-animated to match their dialog.

One of the relatively few mechanical puzzles involves a decoding machine. More shades of Graham Nelson’s Jigsaw, whose Enigma machine is one of text adventuring’s all-time classic puzzles.

Fans of James Cameron’s movie will recognize the Renault Type CB Coupé de Ville automobile in which Jack and Rose make love for the first and only time. It’s used for less carnal purposes here, as a handy source of illumination in a dark cargo hold. There really was such a vehicle aboard the Titanic, a car that a wealthy American coal and iron heir named William Carter had purchased and was taking home with him after a family vacation in Europe. Unlike their new car, Carter and his family survived the sinking. He filed a claim with the White Star Line and was reimbursed $5000.

Another amusing parallel with the movie is this pair of characters, named Jack and… Shailagh. (Okay, the parallel isn’t perfect.) They’re brother and sister rather than star-crossed lovers, but Jack is as noble as Leonardo DiCaprio’s character, and like him sacrifices himself in the end to save the one he loves.

It all starts to go a bit sideways when the ship starts to actually sink, a tragedy which the game seems constitutionally incapable of facing, instead giving us awkward attempts at comedy.

We always knew how this game was going to end, didn’t we?


The Titanic was already having one of its recurring moments in media when Adventure Out of Time was released in late 1996, under CyberFlix’s own imprint because the old-media mavens that had been serving as their publishers until this point were all bailing out of games in the wake of disappointing sales. One of the biggest literary novels of the year, shortlisted for the Booker Prize, was Beryl Bainbridge’s Every Man For Himself, about a young American who sails aboard the ship and interacts with many historical figures before and on the night of the disaster. The following April, a full-blown song-and-dance musical about the ship opened on Broadway, a dubious proposition on the face of it that would nonetheless run for 804 performances.

Tailwinds like these, along with the eternal recognizability of the Titanic name itself, were enough to lift Adventure Out of Time to sales of 100,000 copies in its first year on the market, despite reviews from the hardcore gaming press that were unenthusiastic at best about a product that was widely dismissed as just another tired Myst clone. “If the ocean were as shallow as Titanic‘s gameplay,” wrote Computer Gaming World in a valiant but confused attempt at clever wordplay, “the real ship would never have sunk.” But such reviews really didn’t matter at all by this point; even during this first year, the people who bought Adventure Out of Time generally weren’t the ones who read the likes of Computer Gaming World. Be that as it may, 100,000 copies sold would no doubt have been the limit of the game’s success, had not James Cameron’s movie dropped on December 19, 1997, just as Adventure Out of Time was getting decidedly long in the tooth by the standards of the novelty-obsessed games industry.

The tide had begun to turn for Cameron’s over-time, over-budget film some weeks before that date, when critics traveled to Tokyo to catch some early screenings. They came back raving about what they proclaimed to be that rarest of beasts, a showy blockbuster that could also make its audience think and feel something that went beyond the adrenal emotions. One critic stated that “Titanic plumbs personal and philosophical story depths not usually found in event-scale movies.” “It is a masterwork of big-canvas storytelling,” said another, “broad enough to entrance and entertain yet precise and delicate enough to educate and illuminate.”

The movie earned $29 million in the United States on its opening weekend, then $35 million the next weekend. Just twelve days after its debut, it was already halfway to earning back its much-mocked $200 million budget from domestic receipts alone. Four weeks after that, that milestone was already $100 million in its rear-view mirror, with Hollywood Reporter declaring that it had “shattered all previous models of film performance at the nation’s theaters.” On February 24, 1998 —  just nine weeks after its release — it officially became the most successful film in history. One week later, its worldwide gross surpassed $1 billion. It was nominated for fourteen Academy Awards and won eleven of them, including those for Best Director and Best Picture.

Titanic was simply inescapable during 1998. When you turned on the radio, there it was, in the form of Celine Dion’s gloriously overwrought theme song; when you turned on the television, someone was bound to be talking about the film and/or the disaster that inspired it; when you went to work, your colleagues were discussing it around the water cooler; when you came home, you found that your teenage daughter had bought yet another poster of Leonardo DiCaprio to watch over her from her bedroom wall. Not everybody loved the film, mind you; some contrary souls dared to point out that the dialog was a bit trite and the love story more than a little contrived. But absolutely everyone had to reckon with it — not least among them its two young stars and its director, condemned to spend the rest of their careers answering as best they could the question of what you did next after you had already made the biggest movie in the history of the world.

All of this redounded to the immense benefit of some modest little CD-ROMs sitting on the shelves of software stores all over the country, due shortly to be sent back to the distributors that had sent them out. Now, thanks to the film, they suddenly started to sell again — to sell faster than they ever had before, so fast that store owners were soon clamoring for more of them from those selfsame distributors, causing a mad scramble at CyberFlix to crank up the presses once again. Adventure Out of Time enjoyed a whole new commercial life, an order of magnitude larger than its first one. Now companies were knocking at CyberFlix’s door to release the game to European and Asian markets; it was localized into seven different languages in a matter of weeks. By the end of 1998, worldwide sales had surpassed 1 million units. Well after the heyday of interactive movies and adventure games in general, it became the very last of its breed to hit that magical milestone.

But, surprisingly in an industry where one profitable game tends to beget another one just like it, CyberFlix never even tried to make anything else like Adventure Out of Time. After the game’s initial release and modest initial success, Andrew Nelson had wanted to continue to plow the same ground, with a game set aboard another glamorous and doomed means of conveyance: the airship Hindenburg. (Adventure Out of Time itself includes a hint about what was gestating in Nelson’s mind, via a Hindenburg ticket stub you can stumble across in your desk drawer in 1942.) “We’ve got this historical-fiction genre nailed,” said Nelson. “We have this new audience of people who never played a computer game.” But Bill Appleton, looking back on a 1996 which hadn’t yielded any huge adventure hits like in earlier years, wasn’t so sure. Nelson finally gave up trying to convince him and left the company in April of 1997, eight months before Cameron’s film changed everything. CyberFlix released only one major game after Adventure Out of Time, a pirate caper called Redjack: Revenge of the Brethren that returned to the model of Lunicus and Jump Raven, combining multimedia-heavy adventure-style gameplay with 3D action. It sank without a trace even as Adventure Out of Time was soaring to new heights; by some accounts, it sold as few as 10,000 copies in all.

That was enough to convince Bill Appleton, an unsentimental realist about the games market, that his company simply wasn’t made for these times. He was able to face what most others in his position would have closed their eyes to: that the success of Adventure Out of Time was sui generis, a fluke driven by a fortuitous happenstance, a stroke of blind luck that would never, ever come again, no matter how great an adventure game they made next time out. For it did nothing to change the fact that the multimedia boom, which had always been more wishful thinking than reality, was over, and the styles of game it had favored were in precipitous decline. So, he set about dismantling his company even as millions were still pouring into it from Adventure Out of Time. Better to pocket that money and go out a winner than to piss a fortune away on some grandiose new production that was as doomed to fail as the Titanic had been doomed to strike that iceberg. It was a brutal decision, but, from a pure business standpoint at least, it’s hard to argue that it was the wrong one.

Still, there are lingering questions about the way Appleton went about it, especially the bonuses of close to $2 million which he awarded to himself over the course of 1998 even as he was busily shedding staff. On November 30 of that year, he announced to the last of his employees that CyberFlix was done as anything but a holding company to collect the last of the revenues from Adventure Out of Time. Then he decamped for Silicon Valley to “build enterprise software for small companies,” never even saying goodbye to the four other dreamers who had once gathered in his cellar. Of them, only visual artist Jamie Wicks stayed in the games industry, going on to work on the hugely popular EA Sports lineup.

Neither Billy nor Alex, those two unlikely game developers interviewed by J.C. Herz when they were making Adventure Out of Time, ever worked in the industry again either. Likewise, Knoxville’s dream of becoming a new locus of artsy high tech died with CyberFlix. A 1999 history of the company’s rise and fall, written by one Jack Neely for the alternative urban newspaper Metro Pulse, describes the old offices standing “empty and silent,” bringing to mind those haunted corridors of the Titanic in Adventure Out of Time.

“This weekend I was in a mall in Atlanta,” said one former CyberFlix employee whom Neely interviewed for his article, “going through [a] store, and they had a copy of [Adventure Out of Time] on the cheap rack. It’s still around. But it’s kind of sad to see it there.” Already by then, the best game by far to come out of CyberFlix had met the inevitable fate of all Titanic productions, just another unmoored piece of ephemera in the ever-growing debris field of pop culture that surrounds the most famous sunken ship in the world.

(Sources: the books Titanic and the Making of James Cameron by Paula Parisi and Joystick Nation by J.C. Herz; InfoWorld of September 14 1987; Compute! of March 1989; Computer Play of April 1989; MacWorld of May 1989, June 1989, April 1992, June 1992, January 1994, and February 1995; Computer Gaming World of August 1993, April 1994, December 1995, and March 1997; MacUser of October 1993 and January 1996; Next Generation of November 1996; Knoxville News Sentinel of November 20 2006; Dragon of February 1987; JOM volume 50 number 1; Knoxville Metro Pulse 942; Newsweek of August 28 1994; Entertainment Weekly of September 22 1995. Online sources include an Adventure Out of Time retrospective at PC Gamer, a Game Developer interview with Andrew Nelson, and Stay Forever’s interview with Andrew Nelson.

Titanic: Adventure Out of Time is available for digital purchase at GOG.com.)

 
 


Titanic Visions, Part 2: A Night to Remember

Why does the sinking of the Titanic have such a stranglehold on our imaginations? The death of more than 1500 people is tragic by any standard, but worse things have happened on the world’s waters, even if we set aside deliberate acts of war. In 1822, for example, the Chinese junk Tek Sing ran into a reef in the South China Sea, drowning all 1600 of the would-be immigrants to Indonesia who were packed cheek-by-jowl onto its sagging deck. In 1948, the Chinese passenger ship Kiangya struck a leftover World War II mine shortly after departing Shanghai, killing as many as 4000 supporters of Chiang Kai-shek’s government who were attempting to flee the approaching Communist armies. In 1987, the Philippine ferry Doña Paz collided with an oil tanker near Manila, killing some 4300 people who were just trying to get home for Christmas.

But, you may object, these were all East Asian disasters, involving people for whom we in the West tend to have less immediate empathy, for a variety of good, bad, and ugly reasons. It’s a fair point. And yet what of the American paddle-wheel steamer Sultana, whose boiler exploded as it plied the Mississippi River in 1865, killing about 1200 people, or only 300 fewer than died on the Titanic?

I’m comfortable assuming that, unless you happen to be a dedicated student of maritime lore or of Civil War-era Americana, you probably don’t know much about any of these disasters. But everyone — absolutely everyone — seems to know at least the basic outline of what happened to the Titanic. Why?

It seems to me that the sinking of the Titanic is one of those rare occasions when History stops being just a succession of one damn thing after another, to paraphrase Arnold Toynbee, and shows some real dramatic flair. The event has enough thematic heft to curl the toes of William Shakespeare: the pride that goeth before a fall (no one will ever dare to call a ship “unsinkable” again); the cruelty of fate (experts have estimated that, if the Titanic somehow could have been raised and put into service once again, it could have made a million more Atlantic crossings without bumping into any more icebergs); the artificiality of money and social status (a form of communism far purer than anything ever implemented in the Soviet Union or China reigned in the Titanic‘s lifeboats); the crucible of character in the breach (some people displayed tremendous, selfless bravery when faced with the ultimate existential impasse of their lives, while others behaved… less well). Unlike the aforementioned shipwrecks, all of which were short, sharp shocks, the sinking of the Titanic was a slow-motion tragedy that took place over the course of two and a half hours. This gave ample space for all of the aforementioned themes to play out. The end result was almost irresistibly dramatic, if you’ll excuse my callousness in writing about it like a film prospectus.

And then, of course, there is the power of the Titanic as a symbol of changing times, as an almost tangible way point in history. The spirit of a century doesn’t always line up neatly with the numbers in our calendars; the terrorist attacks of September 11, 2001, were actually unusual in setting the tone for our muddled, complicated 21st-century existences so soon after we were all cheering our escape from the Y2K crisis and drinking toasts to The End of History on January 1, 2000. By way of contrast, one might say that the nineteenth century didn’t really get going in earnest until Napoleon was defeated once and for all at the Battle of Waterloo in 1815. Similarly, one could say that the sinking of the Titanic in 1912 makes for a much more satisfying fin de siècle than anything that occurred in 1900. On that cold April night in the North Atlantic, an entire worldview sank beneath the waves, a glittering vision of progress as an inevitability, of industry and finance and social refinement as a guarantee against any and all forms of unpleasantness, of war — at least war between the proverbial great powers — as a quaint relic of the past. Less than two and a half years after the Titanic went down, the world was plunged into the bloodiest war it had ever known.

That, anyway, is how we see the sinking of the Titanic today. Many people of our own era are surprised, even though they probably shouldn’t be, that the event’s near-mythic qualities went completely unrecognized at the time; the larger currents of history tend to make sense only in retrospect. While the event was certainly recognized as an appalling tragedy, it was not seen as anything more than that. Rather than trying to interrogate the consciousness of the age, the governments of both Britain and the United States took a more practical tack, endeavoring to get to the bottom of just what had gone wrong, who had been responsible, and how they could prevent anything like this from ever happening again. There followed interminable hearings in the Houses of Parliament and the Capitol Building, while journalists gathered the stories of the 700-odd survivors and wrote them up for a rapt public. But no one wrote or spoke of the event as any sea change in history, and in due course the world moved on. By the time the British luxury liner Lusitania, the queen of the Atlantic-crossing trade prior to the construction of the Titanic and its two sister ships, was torpedoed and sunk by a German submarine on May 7, 1915 — loss of life: 1200 — the Titanic was fading fast from the public consciousness, just another of those damn things that had happened before the present ones.

“Had the Titanic been a mud scow with the same number of useful workingmen on board and had it gone down while engaged in some useful social work,” wrote a muckraking left-wing Kansan newspaper, “the whole country would not have gasped with horror, nor would all the capitalist papers have given pages for weeks to reciting the terrible details.” This was harsh, but undeniably true. The only comfort for our Kansan polemicists, if it was comfort, was that the Titanic looked likely to be forgotten just as completely as that hypothetical mud scow would have been in the fullness of time.

But then, in the 1950s, the Titanic was scooped out of the dustbin of history and turned into an icon for the ages by a 30-something American advertising executive and part-time author named Walter Lord, who had crossed the Atlantic as a boy aboard the Titanic‘s sister the Olympic and been fascinated by the ships’ stories ever since. Lord’s editor was unenthusiastic when he proposed writing the first-ever book-length chronicle of that fateful night, but grudgingly agreed to the project at last, as long as Lord wrote “in terms of the people involved instead of the ship.” Accordingly, Lord interviewed as many of the living survivors and their progeny as he could, then wove their stories together into A Night to Remember, a vividly novelistic minute-by-minute account of the night in question that has remained to this day the classic book about the Titanic, a timeless wellspring of lore and legend. It was Lord, for example, who first told the story of the ship’s band bravely playing on in the hope of comforting their fellow passengers, until the musicians and their music were swallowed by the ocean along with their audience. Ditto the story of the ship’s stoic Captain Edward Smith, who directed his crew to save as many passengers as they could and then to save themselves if possible, while he followed the unwritten law of the sea and went down with his ship. Published in November of 1955, A Night to Remember became an instant bestseller and a veritable cultural sensation. Walter Lord became Homer to the Titanic‘s Trojan War, pumping tragedy full of enough heroism, romance, and melodrama to almost — almost, mind you — make us wish we could have been there.

The book was soon turned into an American teleplay that was reportedly seen by an astonishing 28 million people. “Millions, perhaps, learned about the disaster for the first time,” mused Lord later about the evening it was broadcast. “More people probably thought about the Titanic that night than at any time since 1912.” (Sadly, every trace of this extraordinary cultural landmark has been lost to us because it was shot and broadcast live without ever touching film or videotape, as was the norm in those days). The book then became a lavish British feature film in 1958. Surprisingly, the movie was a failure in the United States. Walter Lord blamed this on poor Stateside distribution on the part of the British producers and a newspaper strike in New York. A more convincing set of causes might begin with its lack of big-name stars, continue with the decision to shoot it in stately black and white rather than garish Technicolor, and conclude with the way it echoed the book in weaving together a tapestry of experiences rather than giving the audience just one or two focal points whom they could get to know well and root for.

Nevertheless, by the end of the 1950s the Titanic had been firmly lodged in the public’s imagination as mythology and metaphor, and it would never show any sign of coming unstuck. The first Titanic fan club — for lack of a better term — was founded in Massachusetts in 1960, whence chapters quickly spread around the country and the world. The club was initially called the Titanic Enthusiasts Society, but the name was changed to the Titanic Historical Society after it was pointed out that being an “enthusiast” of a disaster like this one was perhaps not quite appropriate. But whatever the name under which they traveled, these were obsessive fans in the classic sense, who could sit around for hours debating the minutiae of their favorite ship’s brief but glamorous life in the same way that others of their ilk were dissecting every detail of the starship Enterprise. (Doug Woolley, the first person to propose finding the wreck and raising it back to the surface, was every inch a product of this milieu.)

“The story of the Titanic is a curious one because it rolled on and on,” said Walter Lord decades after writing his seminal book, “becoming more newsworthy as time went by.” Needless to say, A Night to Remember has never come close to going out of print. Even as the 83 survivors who were still around in 1960 died off one by one and the mass-media spotlight shifted from them to the prospects of finding the wreck of the ship on which they had sailed all those years ago, it was always the stories of that one horrible night, with all of their pathos and their bizarre sort of glamour, that undergirded the interest. If there had been no Walter Lord to turn a disaster into a mythology, it would never have occurred to Jack Grimm and Robert Ballard to go in search of the real ship. It was thanks to 30 years of tellings and retellings of the Titanic story that those first pictures of the ship sent up from the depths by Ballard felt like coming face to face with Leviathan. For by the 1980s, you could use the Titanic as a simile, a metaphor, a parable, or just a trope in conversation with absolutely anyone, whether aged 9 or 90, and be certain that they would know what you were talking about. That kind of cultural ubiquity is extremely rare.

Thus we shouldn’t be stunned to learn that this totem of modern culture also inspired the people who made computer games. Even as some of their peers were casting their players as would-be Robert Ballards out to find and explore the wreck, others were taking them all the way back to the night of April 14, 1912, and asking them to make the best of a no-win situation.


The very first Titanic computer game of any stripe that I know of was written by an American named Peter Kirsch, the mastermind of SoftSide magazine’s “Adventure of the Month” club, whose members were sent a new text adventure on tape or disk every single month. Dateline Titanic was the game for May of 1982. Casting you as the ship’s captain, it begins with one of the cruelest fake-outs in any game ever. It seems to let you spot and dodge the deadly iceberg and change the course of history — until the message, “Oh, my God! You hit another one!” pops up. Simple soul that I am, I find this kind of hilarious.

Anyway, we’re back in the same old boat, so to speak. The game does permit you to be a bit less of a romantic old sea dog than the real Captain Smith and to save yourself, although you’re expected to rescue as many passengers as you can first. In an article he wrote for SoftSide a few months after making the game, Kirsch noted that “the days of simply finding treasure and returning it to a storage location are gone forever.” But, stuck as he was with an adventure engine oriented toward exactly this “points for treasures” model, he faced a dilemma when it came time to make his Titanic game. He ended up with a design where, instead of scarfing up treasures and putting them in your display case for safe keeping, you have to grab as many passengers as possible and chunk them into lifeboats.

That said, it’s not a bad little game at all, given the almost unimaginable technological constraints under which it was created. The engine is written in BASIC, and it and the actual game it enables have to be small enough together to fit into as little as 16 K of memory. You can finish the game the first time whilst rescuing no one other than yourself, if necessary, then optimize your path on subsequent playthroughs until you’ve solved all of the puzzles in the right order, collected everyone, and gotten the maximum score; the whole experience is short enough to support this style of try-and-try-again gameplay without becoming too annoying. Whether it’s in good taste to treat a tragedy in this cavalier way is a more fraught question, but then again, it’s hard to imagine any other programmer doing much better under this set of constraints. It’s hard to pay proper tribute to the dead when you have to sweat every word of text you include as if you’re writing a haiku.

(Although Dateline Titanic was made in versions for the Radio Shack TRS-80, Apple II, and Atari 8-bit line, only the last appears to have survived. Feel free to download it from here. Note that you’ll need an Atari emulator such as the one called simply Atari800. And you’ll also need Atari’s BASIC cartridge. Unfortunately, the emulator is not a particularly user-friendly piece of software, with an interface that is entirely keyboard-driven. You access the menu by hitting the F1 key. From here, you want to first mount the BASIC cartridge: “Cartridge Management -> Cartridge.” Press the Escape key until you return to the emulator’s main screen. You should see a “READY” prompt. Now you can run the “.atr” file by pressing F1 again, then choosing “Run Atari Program.” Be patient; it will take the game a moment to start up fully.)


Four years later, in the midst of the full-blown Titanic mania ignited by Robert Ballard’s discovery of the wreck, another Titanic text adventure appeared, again as something other than a standard boxed game. Beyond the Titanic by Scott Miller is interesting today mostly as a case of humble beginnings. After releasing this game and a follow-up text adventure as shareware to little notice and less profit, Miller switched his focus to action games. He and his company Apogee Software then became the primary impetus behind an underground movement which bypassed the traditional publishers and changed the character of gaming dramatically in the early 1990s by providing a more rough-and-ready alternative to said publishers’ obsession with high-concept “interactive movies.” For all that it belongs to a genre whose commercial potential was already on the wane by 1986, Beyond the Titanic does display the keen instinct for branding that would serve Miller so well in later years. The Titanic was a hot topic in 1986, and it was a name in the public domain, so why not make a game about it?

Beyond the Titanic itself is a strange beast, a game which is soundly designed and competently coded but still manages to leave a laughably bad final impression. Miller obviously didn’t bother to do much if any research for his game. Playing the role of a sort of anti-Captain Smith, you escape from the sinking ship all by yourself in one of its lifeboats and leave everyone else to their fate. Luckily for you, in Miller’s world a lifeboat is apparently about the size of a canoe and just as easy for one person to paddle. (In reality, the lifeboats were larger than many ocean-going pleasure boats, being 30 feet long and 9 feet wide.)

Your escape doesn’t mark the end of the game but its real beginning. Now aliens enter the picture, sucking you into a cave complex hidden below the ocean. From this point on, the game lives up to its title by having nothing else to do with the Titanic; the plot eventually sends you into outer space and finally on a trip through time. “Overstuffed” is as kind a descriptor as I can find for both the plot and the writing. This one is best approached in the spirit of an Ed Wood film; Miller tries valiantly to grab hold of the right verbs and adjectives, but they’re forever flitting out of his grasp like fireflies on a summer night. Suffice to say that Beyond the Titanic won’t leave anyone regretting that he abandoned text adventures for greener pastures so quickly.

(Beyond the Titanic has been available for free from Scott Miller’s company 3D Realms since 1998. In light of that, I’ve taken the liberty of hosting a version here that’s almost ready to run on modern computers; just add your platform’s version of DOSBox.)


A relatively more grounded take on the Titanic‘s one and only voyage appeared in 1995 as one of the vignettes in Jigsaw, Graham Nelson’s epic time-travel text adventure, which does have the heft to support its breadth. Indeed, Nelson’s game was the first ever to deliver a reasonably well-researched facsimile of what it was actually like to be aboard the doomed ship before and after it struck the iceberg. A fine writer by any standard, he describes the scenes with the appropriate gravity as you wander a small subsection of the ship’s promenades, staterooms, lounges, and crew areas.

Making a satisfying game out of the sinking of the Titanic presents a challenge for a designer not least in that it really is the very definition of a no-win scenario: to allow the player to somehow avert the disaster would undercut the whole reason we find the ship so fascinating, yet to make a game simply about escaping doesn’t feel all that appropriate either. Many designers, including Scott Miller and now Graham Nelson in a far more effective way, therefore use the sinking ship and all of the associated drama as a springboard for other, original plots. (Because you’re a time traveler in Jigsaw, escape isn’t even an issue for you; you can ride the time stream out of Dodge whenever you feel like it.) Nelson imagines that the fabulously wealthy Benjamin Guggenheim, one of the glitterati who went down with the ship, is also a spy carrying a vital dispatch meant for Washington, D.C. Because Guggenheim, honorable gentleman that he is, would never think of getting into a lifeboat as long as women and children are still aboard the ship, he entrusts you with getting the message into the hands of a co-conspirator whose gender gives her a better chance of surviving: the “rich and beautiful heiress Miss Shutes.”

It must be emphasized that the Titanic is only a vignette in Jigsaw, one of fifteen in the complete game. Thus it comes as no surprise that the espionage plot isn’t all that well developed, or even explained. There are also a few places where Nelson’s background research falls down. The Titanic was not the first vessel ever to send an “SOS” distress signal at sea, as he claims. And, while there was an Elizabeth Shutes aboard the ship, she was a 40-year-old governess employed by a wealthy family, not a twenty-something socialite. On the more amusing side, Jigsaw walkthrough author Bonni Mierzejewska has pointed out that the compass directions aboard the ship would seem to indicate that it’s sailing due east — a good idea perhaps in light of what awaits it on its westward progress, but a decidedly ahistorical one nonetheless.

Still, Jigsaw gets more right than wrong within the limited space it can afford to give the Titanic. I was therefore surprised to learn from Graham Nelson himself just a couple of years ago that “the Titanic sequence is the one I would now leave out.” While it’s certainly a famous event in history and an enduring sign of changing times, he argues, it wasn’t of itself a turning point in history like his other vignettes, at least absent the insertion of the fictional espionage plot: “Rich people drowned, but other rich people took their place, and history wasn’t much dented.” This is true enough, but I for one am glad the Titanic made the cut for one of my favorite text adventures of the 1990s.

(Jigsaw is available for free from the IF Archive. Note that you’ll need a Z-Machine interpreter such as Gargoyle to run it.)



Yet the most intriguing Titanic text adventure of all is undoubtedly the one that never got made. Steve Meretzky, one of Infocom’s star designers, was one of that odd species of Titanic “fan”; his colleagues remember a shelf filled with dozens of books on the subject, and a scale model of the ship he built himself that was “about as big as his office.” Shortly after his very first game for Infocom, the 1983 science-fiction comedy Planetfall, became a hit, Meretzky started pushing to make a Titanic game. Just like the previous two designers in this survey, he felt he had to add another, “winnable” plot line to accompany the ship’s dramatic sinking.

You are a passenger on the Titanic, traveling in Third Class to disguise the importance of your mission: transporting a MacGuffin from London to New York. As the [game] opens and you feel a long, drawn-out shudder pass through the ship, you must begin the process of escaping the restricted Third Class section, retrieving the MacGuffin from the purser’s safe amidst the confusion, and surviving the sinking to complete your delivery assignment. The actual events of those 160 minutes between iceberg and sinking would occur around you. I see this as a game of split-second timing that would require multiple [playthroughs] to optimize your turns in order to solve the puzzles in the shortest possible time. But you could also ignore all the puzzles and simply wander around the ship as a “tourist,” taking in the sights of this amazing event.

To his immense frustration, Meretzky never was able to drum up any enthusiasm for the idea at Infocom. In 1985, he was finally allowed to make a serious game as his reward for co-authoring the third best-selling text adventure in history, but even then his colleagues convinced him to opt for a science-fiction exercise called A Mind Forever Voyaging instead of the Titanic game. The latter remained something of a running joke at Meretzky’s expense for years. “It was almost a cliché,” says his colleague Dave Lebling. “Steve would say, ‘We should do a Titanic game!’ And we would all say, ‘No, no Titanic game. Go away, Steve.’”

The dream didn’t die for Meretzky even after Infocom closed up shop in 1989, and he moved on to design games for Legend Entertainment, a company co-founded by his fellow Infocom alum Bob Bates. Sadly, Bates too saw little commercial potential in a Titanic game, leaving Meretzky stuck in his comedy niche for all four of the games he made for Legend.

And still the fire burned. When Meretzky and Mike Dornbrook, another old Infocom colleague, decided to start their own studio called Boffo Games in 1994, the Titanic game was high on the agenda. The changing times meant that it had by now evolved from a text adventure into a point-and-click graphic adventure, with a fully fleshed-out plot that was to place aboard the ship the Mona Lisa, Leonardo da Vinci’s masterpiece, which really was stolen from the Louvre in 1911. (Ever since the painting was recovered from the thieves two years later, conspiracy theories claiming that the Mona Lisa which was hung once again in the Louvre is a face-saving forgery have abounded.) Meretzky and Dornbrook pitched their Titanic game to anyone and everyone who might be willing to fund it throughout Boffo’s short, frustrating existence, and even created a couple of rooms as a prototype. But they never could get anyone to bite. “We were saying, you know, there’s this new movie coming out,” says Dornbrook. “And it might do well. It will come out about the time the game will. It’s [James] Cameron. He sometimes does good stuff…” But it was to no avail. Meretzky made his very last adventure game to date in 1997, and it had nothing to do with the Titanic.

Instead it was left to another graphic adventure to ride the wave kicked up by the movie Dornbrook mentioned to sales that bettered the combined totals of all of the other Titanic games I’ve mentioned in these last two articles by an order of magnitude. I’ll examine that game in detail in the third and final article in this series. But first, allow me to set the table for its success via the origin story of the highest-grossing movie of the twentieth century.


After the failures of the film versions of A Night to Remember and Raise the Titanic, the Hollywood consensus had become that nothing sank a feature film’s prospects faster than the Titanic. This was weird, given that the book A Night to Remember had spawned a cottage industry in print publishing and a whole fannish subculture to go along with it, but box-office receipts didn’t lie. The movers and shakers of Hollywood could only conclude that the public wanted a happy ending when they handed over their hard-earned money on a Friday night, which spelled doom for any film about one of the most infamously unhappy endings of all time. Even the full-fledged Titanic mania that followed Robert Ballard’s discovery of the wreck failed to sway the conventional wisdom.

But one prominent Hollywood director begged to differ. James Cameron was coming off the twin triumphs of The Terminator and Aliens in 1987, when he saw a National Geographic documentary that prominently featured Ballard’s eye-popping underwater footage of the wreck. An avid scuba diver, Cameron was entranced. He began to imagine a film that could unite the two halves of the Titanic‘s media legacy: the real sunken ship that lay beneath the waves and the glamorously cursed vessel of modern mythology. He jotted his thoughts down in his journal:

Do story with bookends of present-day scene of wreck using submersibles inter-cut with memory of a survivor and re-created scenes of the night of the sinking. A crucible of human values under stress. A certainty of slowly impending doom (metaphor). Division of men doomed and women and children saved by custom of the times. Many dramatic moments of separation, heroism, and cowardice, civility versus animal aggression. Needs a mystery or driving plot element woven through with all this as background.

The last sentence would prove key. Just like Scott Miller, Graham Nelson, and Steve Meretzky in the context of games, Cameron realized that his film couldn’t succeed as a tapestry of tragedy only. If it was to capture a wide audience’s interest, it needed the foreground plot and obvious set of protagonists that the film of A Night to Remember had so sorely lacked.

Yet Cameron’s own Titanic film would be a long time in coming. The melancholy splendor of that National Geographic documentary first did much to inform The Abyss, his moody 1989 movie about an American nuclear submarine’s close encounter with aliens. There then followed two more straightforward action vehicles starring Arnold Schwarzenegger, Terminator 2 and True Lies.

Always, though, his Titanic movie stayed in the back of his mind. By 1995, he had more than a decade’s worth of zeitgeist-defining action flicks behind him, enough to make him the most bankable Hollywood crowd-pleaser this side of Steven Spielberg, with combined box-office receipts to his credit totaling more than $1.7 billion. With his reputation thus preceding him, he finally managed to convince an unusual pairing of 20th Century Fox and Paramount Pictures to share the risk of funding his dream project. Hollywood’s reluctance was by no means incomprehensible. In addition to the Titanic box-office curse, there was the fact that Cameron had never made a film quite like this one before. In fact, no one was making films like this in the 1990s; Cameron was envisioning an old-fashioned historical epic, a throwback to the likes of War and Peace, Cleopatra, and Gone with the Wind, complete with those films’ three-hour-plus running times.

Cameron’s plan for his movie had changed remarkably little from that 1987 journal outline. He still wanted to bookend the main story with shots of the real wreck. He filmed this footage first, borrowing a Russian research vessel and deep-ocean submersible in September of 1995 in order to do so. Then it was time for the really challenging part. The production blasted out a 17-million-gallon pool on Mexico’s Baja coast and replicated the Titanic inside it at almost a one-to-one scale, working from the original builder’s blueprints. The sight of those iconic four smokestacks — the Titanic is the one ship in the world that absolutely everyone can recognize — looming up out of the desert was surreal to say the least, but it was only the beginning of the realization of Cameron’s vision. Everything that came within the view of a camera was fussed over for historical accuracy, right down to the pattern of the wainscotting on the walls.

Still hewing to the old-school formula for Hollywood epics, Cameron decided to make his foreground protagonists a pair of star-crossed lovers from different sides of the economic divide: a prototypical starving artist from Steerage Class and a pampered young woman from First Class. This suited his backers very well; the stereotype-rooted but nevertheless timeless logic of their industry told them that men would come for the spectacle of seeing the ship go down, while women would come for the romance. The lead roles went to Leonardo DiCaprio and Kate Winslet, a pair of uncannily beautiful young up-and-comers. Pop diva Celine Dion was recruited to sing a big, impassioned theme song. For, if it was to have any hope of earning back its budget, this film would need to have something for everyone: action, romance, drama, a dash of comedy, and more than a little bit of sex appeal. (DiCaprio’s character painting Winslet’s in the altogether remains one of the more famous female nude scenes in film history.)

The press at least knew where they were putting their money. When the project passed the $170 million mark to officially become the most expensive movie ever made, they had a field day. The previous holder of the record had been a deliriously misconceived 1995 fiasco called Waterworld, and the two films’ shared nautical theme was lost on no one. Magazines and newspapers ran headlines like “A Sinking Sensation” and “Glub! Glub! Glub!” before settling on calling Titanic — Cameron had decided that that simple, unadorned name was the only one that would suit his film — “the Waterworld of 1997.” By the time it reached theaters on December 19, 1997, six months behind schedule, its final cost had grown to $200 million.

And then? Well, then the press and public changed their tune, much to the benefit of the latest Titanic game.

(Sources: the books Sinkable: Obsession, the Deep Sea, and the Shipwreck of the Titanic by Daniel Stone, Titanic and the Making of James Cameron by Paula Parisi, A Night to Remember by Walter Lord, and The Way It Was: Walter Lord on His Life and Books edited by Jenny Lawrence; SoftSide of August 1982; the Voyager CD-ROM A Night to Remember. The information on Steve Meretzky’s would-be Titanic game is drawn from the full Get Lamp interview archives which Jason Scott so kindly shared with me many years ago now, and from Jason’s “Infocom Cabinet” of vintage documents. Another online source was “7 of the World’s Deadliest Shipwrecks” at Britannica. My thanks to reader Peter Olausson for digging up a vintage newspaper headline that labels the Titanic “unsinkable” and letting me link to it.)

 
 


Byron Preiss’s Games (or, The Promise and Peril of the Electronic Book)

Byron Preiss in 1982 with some of his “Fair People.”

We humans always seek to understand the new in terms of the old. This applies as much to new forms of media as it does to anything else.

Thus at the dawn of the 1980s, when the extant world of media began to cotton onto the existence of computer software that went beyond the strictly utilitarian yet wasn’t the sort of action-oriented videogame being played in coin-op arcades and on home consoles such as the Atari VCS, it went looking for a familiar taxonomic framework by which to understand the newcomer. One of the most popular of the early metaphors was that of the electronic book. For the graphics of the first personal computers were extremely crude, little more than thick lines and blotches of primary colors. Text, on the other hand, was text, whether it appeared on a monitor screen or on a page. Some of the most successful computer games of the first half of the 1980s were those of Infocom, who drove home the literary associations by building their products out of nothing but text, for which they were lauded in glowing features in respected mainstream magazines and newspapers. In the context of the times, it seemed perfectly natural to sell Infocom’s games and others like them in bookstores. (I first discovered these games that would become such an influence on my future on the shelves of my local shopping mall’s B. Dalton bookstore…)

Small wonder, then, that several of the major New York print-publishing houses decided to move into software. As is usually the case in such situations, they were driven by a mixture of hope and fear: hope that they could expand the parameters of what a book could do and be in exciting ways, and fear that, if they failed to do it, someone else would. The result was the brief-lived era of bookware.

Byron Preiss was perhaps the most important of all the individual book people who now displayed an interest in software. Although still very young by the standards of his tweedy industry — he turned 30 in 1983 — he was already a hugely influential figure in genre publishing, with a rare knack for mobilizing others to get lots and lots of truly innovative things done. In fact, long before he did anything with computers, he was already all about “interactivity,” the defining attribute of electronic books during the mid-1980s, as well as “multimedia,” the other buzzword that would be joined to the first in the early 1990s.

Preiss’s Fiction Illustrated line produced some of the world’s first identifiable graphic novels. These were comics that didn’t involve superheroes or cartoon characters, and that were bound and sold as first-run paperbacks rather than flimsy periodicals. Preiss would remain a loyal supporter of comic-book storytelling in all its forms throughout his life.

Preiss rarely published a book that didn’t have pictures; in fact, he deserves a share of the credit for inventing what we’ve come to call the graphic novel, through a series known as Fiction Illustrated which he began all the way back in 1975 as a bright-eyed 22-year-old. His entire career was predicated on the belief that books should be beautiful aesthetic objects in their own right, works of visual as well as literary art that could and should take the reader’s breath away, that reading books should be an intensely immersive experience. He innovated relentlessly in pursuit of that goal. In 1981, for example, he published a collection of stories by Samuel R. Delany that featured “the first computer-enhanced illustrations developed for a science-fiction book.” His non-fiction books on astronomy and paleontology remain a feast for the eyes, as does his Science Fiction Masterworks series of illustrated novels and stories from the likes of Arthur C. Clarke, Fritz Leiber, Philip Jose Farmer, Frank Herbert, and Isaac Asimov.

As part and parcel of his dedication to immersive literature, Preiss also looked for ways to make books interactive, even without the benefit of computers. In 1982, he wrote and published The Secret: A Treasure Hunt, a puzzle book and real-world scavenger hunt in the spirit of Kit Williams’s Masquerade. As beautifully illustrated as one would expect any book with which Preiss was involved to be, it told of “The Fair People,” gnomes and fairies who fled from the Old to the New World when Europeans began to cut down their forests and dam the rivers along which they lived: “They came over and they stayed, and they were happy. But then they saw that man was following the same path [in the Americas] and that what had happened in the Old World would probably happen in the New. So the ones who had already come over and the ones who followed them all decided they would have to go into hiding.” They took twelve treasures with them. “I have been entrusted by the Fair People to reveal the whereabouts of the [treasures] through paintings in the book,” Preiss claimed. “There are twelve treasures hidden throughout North America and twelve color paintings that contain clues to the whereabouts of the treasure. Then, there is a poem for each treasure. So, if you can correctly figure out the poem and the painting, you will find one of the treasures.” Each treasure carried a bounty for the discoverer of $1000. Preiss’s self-professed ultimate goal was to use the interactivity of the scavenger hunt as another tool for immersing the reader, “like in the kids’ books where you choose your own ending.”

The Secret failed to become the sales success or the pop-culture craze that Masquerade had become in Britain three years earlier. Only one of the treasures was found in the immediate wake of its publication, in Chicago in 1983. Yet it had a long shelf life: a second treasure was found in Cleveland more than twenty years later. A 2018 documentary film about the book sparked a renewal of interest, and the following year a third treasure was recovered in Boston. A small but devoted cult continues to search for the remaining ones today, sharing information and theories via websites and podcasts.

In a less enduring but more commercially successful vein, Preiss also published three different lines of gamebooks to feed the hunger ignited by the original Choose Your Own Adventure books of Edward Packard and R.A. Montgomery. Unsurprisingly, his books were much more visual than the typical example of the breed, with illustrations that often doubled as puzzles for the reader to solve. A dedicated nurturer of young writing and illustrating talent, he passed the contracts to make books in these lines and others to up-and-comers who badly needed the cash and the measure of industry credibility they brought with them.

Being a man with a solid claim to the woefully overused title of “visionary,” Preiss was aware of what computers could mean for our relationship with storytelling and information from a very early date. He actually visited Xerox PARC during its 1970s heyday, marveled at the potential he saw there, and told all of his friends that this was the real future of information spaces. Later he became the driving force behind the most concentrated and in many ways the most interesting of all the bookware software projects of the 1980s: the Telarium line of literary adaptations, which turned popular science-fiction, fantasy, and mystery novels into illustrated text adventures. I won’t belabor this subject here because I already wrote histories and reviews of all of the Telarium games years ago for this site. I will say, however, that the line as a whole bears all the hallmarks of a Byron Preiss project, from the decision to include colorful pictures in the games — something Infocom most definitely did not provide — to the absolutely gorgeous packaging, which arguably outdid Infocom’s own high standard for same. (The packaging managed to provide a sensory overload which transcended even the visual; one of my most indelible memories of gaming in my childhood is of the rich smell those games exuded, thanks to some irreplicable combination of cardboard, paper, ink, and paste. Call it my version of Proust’s madeleine.)

Unfortunately, it wasn’t enough; the Telarium games weren’t big sellers, and the line lasted only from 1984 to 1986. Afterward, Preiss went back to his many and varied endeavors in book publishing, while computer games switched their metaphor of choice from interactive novels to interactive movies in response to the arrival of new, more audiovisually capable gaming computers like the Commodore Amiga. Even now, though, Preiss continued to keep one eye on what was going on with computers. For example, he published novelizations of some of Infocom’s games, thus showing that he bore no ill will toward the company that had both inspired his own Telarium line and outlived it. More importantly in the long run, he saw the possibilities in Apple’s HyperCard, with its new way of navigating non-linearly through association among multimedia texts that could include pictures, sound, music, and even movie clips alongside their words. By the turn of the 1990s, Bob Stein’s Voyager Company was starting to make waves with “electronic books” on CD-ROM that took full advantage of all of these affordances. The nature of electronic books had changed since the heyday of the text adventure, but the idea lived on in the abstract.

In fact, the advances in computer technology as the 1990s wore on were so transformative as to give everyone a bad case of mixed metaphors. The traditional computer-games industry, entranced by the new ability to embed video clips of real actors in their creations, was more fixated on interactive movies than ever. At the same time, though, the combination of hypertext with multimedia continued to give life to the notion of electronic books. Huge print publishers like Simon & Schuster and Random House, who had jumped onto the last bookware bandwagon only to bail out when the sales didn’t come, now made new investments in CD-ROM-based software that were an order of magnitude bigger than their last ones, even as huge names in moving pictures, from Disney to The Discovery Channel, were doing the same. The poster child for all of the taxonomical confusion was undoubtedly the pioneering Voyager, a spinoff from the Criterion Collection of classic movies on laserdisc and VHS whose many and varied releases all seemed to live on a liminal continuum between book and movie.

One has to assume that Byron Preiss felt at least a pang of jealousy when he saw the innovative work Voyager was doing. Exactly one decade after launching Telarium, he took a second stab at bookware, with the same high hopes as last time but on a much, much more lavish scale, one that was in keeping with the burgeoning 1990s tech boom. In the spring of 1994, Electronic Entertainment magazine brought the news that the freshly incorporated Byron Preiss Multimedia Company “is planning to flood the CD-ROM market with interactive titles this year.”

They weren’t kidding. Over the course of the next couple of years, Preiss published a torrent of CD-ROMs, enough to make Voyager’s prolific release schedule look downright conservative. There was stuff for the ages in high culture, such as volumes dedicated to Frank Lloyd Wright and Albert Einstein. There was stuff for the moment in pop culture, such as discs about Seinfeld, Beverly Hills 90210, and Melrose Place, not to forget The Sci-Fi Channel Trivia Game. There was stuff reflecting Preiss’s enduring love for comics (discs dedicated to R. Crumb and Jean Giraud) and animation (The Multimedia Cartoon Studio). There were electronic editions of classic novels, from John Steinbeck to Raymond Chandler to Kurt Vonnegut. There was educational software suitable for older children (The Planets, The Universe, The History of the United States), and interactive storybooks suitable for younger ones. There were even discs for toddlers, which line Preiss dubbed “BABY-ROMS.” A lot of these weren’t bad at all; Preiss’s CD-ROM library is almost as impressive as that of Voyager, another testament to the potential of a short-lived form of media that arguably deserved a longer day in the sun before it was undone by the maturation of networked hypertexts on the World Wide Web.

But then there are the games, a field Bob Stein was wise enough to recognize as outside of Voyager’s core competency and largely stay away from. Alas, Preiss was not, and did not.



The first full-fledged game from Byron Preiss Multimedia was an outgrowth of some of Preiss’s recent print endeavors. In the late 1980s, he had the idea of enlisting some of his stable of young writers to author new novels in the universes of aging icons of science fiction whose latest output had become a case of diminishing returns — names like Isaac Asimov, Ray Bradbury, and Arthur C. Clarke. Among other things, this broad concept led to a series of six books by five different authors that was called Robot City, playing with the tropes, characters, and settings of Asimov’s “Robot” stories and novels. In 1994, two years after Asimov’s death, Preiss also published a Robot City computer game. Allow me to quote the opening paragraph of Martin E. Cirulis’s review of same for Computer Gaming World magazine, since it does such a fine job of pinpointing the reasons that so many games of this sort tended to be so underwhelming.

With all the new interest in computer entertainment, it seems that a day doesn’t go by without another company throwing their hat, as well as wads of startup money, into the ring. More often than not, the first thing offered by these companies is an adventure-game title, because of the handy way the genre brings out all the bells and whistles of multimedia. I’m always a big fan of new blood, but a lot of the first offerings get points for enthusiasm, then lose ground and reinvent the wheel. Design and management teams new to the field seem so eager to show us how dumb our old games are that they fail to learn any lessons from the fifteen-odd years of successful and failed games that have gone before. Unfortunately, Robot City, Byron Preiss Multimedia’s initial game release, while impressive in some aspects, suffers from just these kinds of birthing pains.

If anything, Cirulis is being far too kind here. Robot City is a game where simply moving from place to place is infuriating, thanks to a staggeringly awful interface, city streets that are constantly changing into random new configurations, and the developers’ decision to put exterior scenes on one of its two CDs and interior scenes on the other, meaning you can look forward to swapping CDs roughly every five minutes.

Robot City. If you don’t like the look of this city street, rest assured that it will have changed completely next time you walk outside. Why? It’s not really clear… something to do with The Future.

Yet the next game from Byron Preiss Multimedia makes Robot City seem like a classic. I’d like to dwell on The Martian Chronicles just a bit today — not because it’s good, but because it’s so very, very bad, so bad in fact that I find it oddly fascinating.

Another reason for it to pique my interest is that it’s such an obvious continuation of what Preiss had begun with Telarium. One of Telarium’s very first games was an adaptation of the 1953 Ray Bradbury novel Fahrenheit 451. This later game, of course, adapts his breakthrough book The Martian Chronicles, a 1950 “fix-up novel” of loosely linked stories about the colonization — or, perhaps better said, invasion — of Mars by humans. And the two games are of a piece in many other ways once we make allowances for the technological changes in computing between 1984 and 1994.

For example, Bradbury himself gave at least a modicum of time and energy to both game projects, which was by no means always true of the authors Preiss chose to honor with an adaptation of some sort. In the Telarium game, you can call Bradbury up on a telephone and shoot the breeze; in the multimedia one, you can view interview clips of him. In the Telarium game, a special “REMEMBER” verb displays snippets of prose from the novel; in the multimedia one, a portentous narrator recites choice extracts from Bradbury’s Mars stories from time to time as you explore the Red Planet. Then, too, neither game is formally innovative in the least: the Telarium one is a parser-driven interactive fiction, the dominant style of adventure game during its time, while the multimedia game takes all of its cues from Myst, the hottest phenomenon in adventures at the time of its release. (The box even sported a hype sticker which named it the answer to the question of “Where do you go after Myst?”) About the only thing missing from The Martian Chronicles that its predecessor can boast about is Fahrenheit 451‘s gorgeous bespoke packaging. (That ship had largely sailed for computer games by 1994; as the scenes actually shown on the monitor got prettier, the packaging got more uniform and unambitious.)

By way of compensation, The Martian Chronicles emphasizes its bookware bona fides by bearing on its box the name of the book publisher Simon & Schuster, back for a second go-round after failing to make a worthwhile income stream out of publishing games in the 1980s. But sadly, once you get past all the meta-textual elements, what you are left with in The Martian Chronicles is a Myst clone notable only for its unusually extreme level of unoriginality and its utter ineptness of execution.

I must confess that I’ve enjoyed very few of the games spawned by Myst during my life, and that’s still the case today, after I’ve made a real effort to give several of them a fair shake for these histories. It strikes me that the sub-genre is, more than just about any other breed of game I know of, defined by its limitations rather than its allowances. The first-person node-based movement, with its plethora of pre-rendered 3D views, was both the defining attribute of the lineage during the 1990s and an unsatisfying compromise in itself: what you really want to be doing is navigating through a seamless 3D space, but technical limitations have made that impossible, so here you are, lurching around, discrete step by discrete step. In many of these games, movement is not just unsatisfying but actively confusing, because clicking the rotation arrows doesn’t always turn you 90 degrees as you expect it to. I often find just getting around a room in a Myst clone to be a challenge, what with the difficulty of constructing a coherent mental map of my surroundings using the inconsistent movement controls. There inevitably seems to be that one view that I miss — the one that contains something I really, really need. This is what people in the game-making trade sometimes call “fake difficulty”: problems the game throws up in front of you where no problem would exist if you were really in this environment. In other schools of software development, it’s known by the alternative name of terrible interface design.

Yet I have to suspect that the challenges of basic navigation are partially intentional, given that there’s so little else the designer can really do with these engines. Most were built in either HyperCard or the multimedia presentation manager Macromedia Director; the latter was the choice for The Martian Chronicles. These “middleware” tools were easy to work with but slow and limiting. Their focus was the media they put on the screen; their scripting languages were never intended to be used for the complex programming that is required to present a simulated world with any dynamism to it. Indeed, Myst clones are the opposite of dynamic, being deserted, static spaces marked only by the buttons, switches, and set-piece spatial puzzles which are the only forms of gameplay that can be practically implemented using their tool chains. While all types of games have constraints, I can’t think of any other strand of them that makes its constraints the veritable core of its identity. In addition to the hope of selling millions and millions of copies like Myst did, I can’t help but feel that their prevalence during the mid-1990s was to a large extent a reflection of how easy they were to make in terms of programming. In this sense, they were a natural choice for a company like the one Byron Preiss set up, which was more replete with artists and writers from the book trade than with ace programmers from the software trade.

The Martian Chronicles is marked not just by all of the usual Myst constraints but by a shocking degree of laziness that makes it play almost like a parody of the sub-genre. The plot is most kindly described as generic, casting you as the faceless explorer of the ruins of an ancient — and, needless to say, deserted — Martian city, searching for a legendary all-powerful McGuffin. You would never connect this game with Bradbury’s book at all if it weren’t for the readings from it that inexplicably pop up from time to time. What you get instead of the earnest adaptation advertised on the box is the most soul-crushingly dull Myst clone ever: a deserted static environment around which are scattered a dozen or so puzzles which you’ve seen a dozen or more times before. Everything is harder than it ought to be, thanks to a wonky cursor whose hot spot seems to float about its surface randomly, a cursor which disappears entirely whenever an animation loop is playing. This is the sort of game that, when you go to save, requires you to delete the placeholder name of “Save1” character by character before you can enter your own. This game is death by a thousand niggling little aggravations like that one, which taken in the aggregate tell you that no actual human being ever tried to play it before it was shoved into a box and shipped. Even the visuals, the one saving grace of some Myst clones and the defining element of Byron Preiss’s entire career, are weirdly slapdash, making The Martian Chronicles useless even as a tech demo. Telarium’s Fahrenheit 451 had its problems, but it’s Infocom’s Trinity compared to this thing.


It’s telling that many reviewers labeled the fifteen minutes of anodyne interview clips with Ray Bradbury the best part of the game.

Some Myst clones have the virtue of being lovely to look at. Not this one, with views that look like they were vandalized by a two-year-old Salvador Dali wannabe with only two colors of crayon to hand.



Computer Gaming World justifiably savaged The Martian Chronicles. It “is as devoid of affection and skill as any game I have ever seen,” noted Charles Ardai, by far the magazine’s deftest writer, in his one-star review. Two years after its release, Computer Gaming World named it the sixteenth worst game of all time, outdone only by such higher-profile crimes against their players as Sierra’s half-finished Outpost and Cosmi’s DefCon 5, an “authentic SDI simulation” whose level of accuracy was reflected in its name. (DefCon 5 is the lowest level of nuclear threat, not the highest.) As for The Martian Chronicles, the magazine called it “tired, pointless, and insulting to Bradbury’s poetic genius.” Most of the other magazines had little better to say — those, that is, which didn’t simply ignore it. For it was becoming abundantly clear that games like these really weren’t made for the hardcore set who read the gaming magazines. The problem was, it wasn’t clear who they were made for.

Still, Byron Preiss Multimedia continued to publish games betwixt and between their other CD-ROMs for another couple of years. The best of a pretty sorry bunch was probably the one called Private Eye, which built upon the noir novels of Raymond Chandler, one of Preiss’s favorite touchstones. Tellingly, it succeeded — to whatever extent it did — by mostly eschewing puzzles and other traditional forms of game design, being driven instead by conversations and lengthy non-interactive cartoon cut scenes; a later generation might have labeled it a visual novel. Charles Ardai rewarded it with a solidly mediocre review, acknowledging that “it don’t stink up da joint.” Faint praise perhaps, but beggars can’t be choosers.

The Spider-Man game, by contrast, attracted more well-earned vitriol from Ardai: “The graphics are jagged, the story weak, the puzzles laughable (cryptograms, anyone?), and the action sequences so dismal, so minor, so clumsy, so basic, so dull, so Atari 2600 as to defy comment.” Tired of what Ardai called Preiss’s “gold-into-straw act,” even Computer Gaming World stopped bothering with his games after this. That’s a pity in a way; I would have loved to see Ardai fillet Forbes Corporate Warrior, a simplistic DOOM clone that replaced monsters with rival corporations, to be defeated with weapons like Price Bombs, Marketing Missiles, Ad Blasters, Takeover Torpedoes, and Alliance Harpoons, with all of it somehow based on “fifteen years of empirical data from an internationally recognized business-simulation firm.” “Business is war, cash is ammo!” we were told. Again, one question springs to mind. Who on earth was this game for?

Corporate Warrior came out in 1997, near the end of the road for Byron Preiss Multimedia, which, like almost all similar multimedia startups, had succeeded only in losing buckets and buckets of money. Preiss finally cut his losses and devoted all of his attention to paper-based publishing again, a realm where his footing was much surer.

I hasten to add that, for all that he proved an abject failure at making games, his legacy in print publishing remains unimpeachable. You don’t have to talk to many who were involved with genre and children’s books in the 1980s and 1990s before you meet someone whose career was touched by him in a positive way. The expressions of grief were painfully genuine after he was killed in a car accident in 2005. He was called a “nice guy and honest person,” “an original,” “a business visionary,” “one of the good guys,” “a positive force in the industry,” “one of the most likable people in publishing,” “an honest, dear, and very smart man,” “warm and personable,” “charming, sophisticated, and the best dresser in the room.” “You knew one of his books would be something you couldn’t get anywhere else, and [that] it would be amazing,” said one of the relatively few readers who bothered to dig deep enough into the small print of the books he bought to recognize Preiss’s name on an inordinate number of them. Most readers, however, “never think about the guy who put it together. He’s invisible, although it wouldn’t happen without him.”

But regrettably, Preiss was a textbook dilettante when it came to digital games, more intrigued by the idea of them than willing to engage with the practical reality of what goes into a playable one. It must be said that he was far from alone in this. As I already noted, many veterans of other forms of media tried to set up similar multimedia-focused alternatives to conventional gaming, and failed just as abjectly. And yet, dodgy though these games almost invariably were in execution, there was something noble about them in concept: they really were trying to move the proverbial goalposts, trying to appeal to new demographics. What the multimedia mavens behind them failed to understand was that fresh themes and surface aesthetics do not great games make all by themselves; you have to devote attention to design as well. Their failure to do so doomed their games to becoming a footnote in history.

For in the end, games are neither books nor movies; they are their own things, which may occasionally borrow approaches from one or the other but should never delude themselves into believing that they can just stick the adjective “interactive” in front of their preferred inspiration and call it a day. Long before The Martian Chronicles stank up the joint, the very best game designers had come to understand that.


Postscript: On a more positive note…

Because I don’t like to be a complete sourpuss, let me note that the efforts of the multimedia dilettantes of the 1990s weren’t always misbegotten. I know of at least one production in this style that’s well worth your time: The Dark Eye, an exploration of the nightmare consciousness of Edgar Allan Poe that was developed by Inscape and released in 1995. On the surface, it’s alarmingly similar to The Martian Chronicles: a Myst-like presentation created in Macromedia Director, featuring occasional readings from the master’s works. But it hangs together much, much better, thanks to a sharp aesthetic sense and a willingness to eschew conventional puzzles completely in favor of atmosphere — all the atmosphere, I daresay, that you’ll be able to take, given the creepy subject matter. I encourage you to read my earlier review of it and perhaps to check it out for yourself. If nothing else, it can serve as proof that no approach to game-making is entirely irredeemable.

Another game that attempts to do much the same thing as The Martian Chronicles but does it much, much better is Rama, which was developed by Dynamix and released by Sierra in 1996. Here as well, the link to the first bookware era is catnip for your humble author; not only was Arthur C. Clarke adapted by a Telarium game before this one, but the novel chosen for that adaptation was Rendezvous with Rama, the same one that is being celebrated here. As in The Martian Chronicles, the lines between game and homage are blurred in Rama, what with the selection of interview clips in which Clarke himself talks about his storied career and one of the most lauded books it produced. And once again the actual game, when you get around to playing it, is very much in the spirit of Myst.

But Dynamix came from the old school of game development, and were in fact hugely respected in the industry for their programming chops; they wouldn’t have been caught dead using lazy middleware like Macromedia Director. Rama instead runs on a much more sophisticated engine, and was designed by people who had made games before and knew what led to playable ones. It’s built around bone-hard puzzles that often require a mathematical mind comfortable with solving complex equations and translating between different base systems. I must admit that I find it all a bit dry — but then, as I’ve said, games in this style are not usually to my taste; I’ve just about decided that the games in the “real” Myst series are all the Myst I need. Nevertheless, Rama is a vastly better answer to the question of “Where do you go after Myst?” than most of the alternatives. If you like this sort of thing, by all means, check it out. Call it another incarnation of Telarium 2.0, done right this time.

(Sources: Starlog of November 1981, December 1981, November 1982, January 1984, June 1984, April 1986, March 1987, November 1992, December 1992, January 1997, April 1997, February 1999, June 2003, May 2005, and October 2005; Compute!’s Gazette of December 1984; STart of November 1990; InCider of May 1993; Electronic Entertainment of June 1994, December 1994, January 1995, May 1995, and December 1995; MacUser of October 1995; Computer Games Strategy Plus of November 1995; Computer Gaming World of December 1995, January 1996, October 1996, November 1996, and February 1997; Next Generation of October 1996; Chicago Tribune of November 16 1982. Online sources include the announcement of Byron Preiss’s death and the outpouring of memories and sentiment that followed on COMICON.com.

A search on archive.org will reveal a version of The Martian Chronicles that has been modified to run on Windows 10. The Collection Chamber has a version of Rama that’s ready to install and run on Windows 10. Mac and Linux users can import the data files there into their computer’s version of ScummVM.)

 

Posted on September 2, 2022 in Digital Antiquaria, Interactive Fiction

 
