
The Prophet of Cyberspace

William Gibson

William Gibson was born on March 17, 1948, on the coast of South Carolina. An only child, he was just six years old when his father, a middle manager for a construction company, choked on his food and died while away on one of his many business trips. Mother and son moved back to her childhood hometown, a small town in Virginia.

Life there was trying for the young boy. His mother, whom he describes today as “chronically anxious and depressive,” never quite seemed to get over the death of her husband, and never quite knew how to relate to her son. Gibson grew up “introverted” and “hyper-bookish,” “the original can’t-hit-the-baseball kid,” feeling perpetually isolated from the world around him. He found refuge, like so many similar personalities, in the shinier, simpler worlds of science fiction. He dreamed of growing up to inhabit those worlds full-time by becoming a science-fiction writer in his own right.

At age 15, desperate for a new start, Gibson convinced his mother to ship him off to a private school for boys in Arizona. It was by his account as bizarre a place as any of the environments that would later show up in his fiction.

It was like a dumping ground for chronically damaged adolescent boys. There were just some weird stories there, from all over the country. They ranged from a 17-year old, I think from Louisiana, who was like a total alcoholic, man, a terminal, end-of-the-stage guy who weighed about 300 pounds and could drink two quarts of vodka straight up and pretend he hadn’t drunk any to this incredibly great-looking, I mean, beautiful kid from San Francisco, who was crazy because from age 10 his parents had sent him to plastic surgeons because they didn’t like the way he looked.

Still, the clean desert air and the forced socialization of life at the school seemed to do him good. He began to come out of his shell. Meanwhile the 1960s were starting to roll, and young William, again like so many of his peers, replaced science fiction with the Beatles, the Beats, and, most of all, William S. Burroughs, the writer who remains his personal literary hero to this day.

William Gibson on the road, 1967

As his senior year at the boys’ school was just beginning, Gibson’s mother died as abruptly as had his father. Left all alone in the world, he went a little crazy. He was implicated in a drug ring at his school — he still insists today that he was innocent — and kicked out just weeks away from graduation. With no one left to go home to, he hit the road like Burroughs and his other Beat heroes, hoping to discover enlightenment through hedonism; when required like all 18-year-olds to register for the draft, he listed as his primary ambition in life the sampling of every drug ever invented. He apparently made a pretty good stab at realizing that ambition, whilst tramping around North America and, a little later, Europe for years on end, working odd jobs in communes and head shops and taking each day as it came. By necessity, he learned the unwritten rules and hierarchies of power that govern life on the street, a hard-won wisdom that would later set him apart as a writer.

In 1972, he wound up married to a girl he’d met on his travels and living in Vancouver, British Columbia, where he still makes his home to this day. As determined as ever to avoid a conventional workaday life, he realized that, thanks to Canada’s generous student-aid program, he could actually earn more money by attending university than he could working some menial job. He therefore enrolled at the University of British Columbia as an English major. Much to his own surprise, the classes he took there and the people he met in them reawakened his childhood love of science fiction and the written word in general, and with them his desire to write. Gibson’s first short story was published in 1977 in a short-lived, obscure little journal occupying some uncertain ground between fanzine and professional magazine; he earned all of $27 from the venture. Juvenilia though it may be, “Fragments of a Hologram Rose,” a moody, plot-less bit of atmospherics about a jilted lover of the near future who relies on virtual-reality “ASP cassettes” to sleep, already bears his unique stylistic stamp. But after writing it he published nothing else for a long while, occupying himself instead with raising his first child and living the life of a househusband while his wife, now a teacher with a Master’s Degree in linguistics, supported the family. It seemed a writer needed to know so much, and he hardly knew where to start learning it all.

It was punk rock and its child post-punk that finally got him going in earnest. Bands like Wire and Joy Division, who proved you didn’t need to know how to play like Emerson, Lake, and Palmer to make daring, inspiring music, convinced him to apply the same lesson to his writing — to just get on with it. When he did, things happened with stunning quickness. His second story, a delightful romp called “The Gernsback Continuum,” was purchased by Terry Carr, a legendary science-fiction editor and taste-maker, for the 1981 edition of his long-running Universe series of paperback short-story anthologies. With that feather in his cap, Gibson began regularly selling stories to Omni, one of the most respected of the contemporary science-fiction magazines. The first story of his that Omni published, “Johnny Mnemonic,” became the manifesto of a whole new science-fiction sub-genre with Gibson as its leading light. The small network of writers, critics, and fellow travelers sometimes called themselves “The Movement,” sometimes “The Mirrorshades Group.” But in the end, the world would come to know them as the cyberpunks.

If forced to name one thing that made cyberpunk different from what had come before, I wouldn’t point to any of the exotic computer technology or the murky noirish aesthetics. I’d rather point to eight words found in Gibson’s 1982 story “Burning Chrome”: “the street finds its own uses for things.” Those words signaled a shift away from past science fiction’s antiseptic idealized futures toward more organic futures extrapolated from the dirty chaos of the contemporary street. William Gibson, a man who out of necessity had learned to read the street, was the ideal writer to become the movement’s standard bearer. While traditional science-fiction writers were interested in technology for its own sake, Gibson was interested in the effect of technology on people and societies.

Cyberpunk, this first science fiction of the street, was responding to a fundamental shift in the focus of technological development in the real world. The cutting-edge technology of previous decades had been deployed as large-scale, outwardly focused projects, often funded with public money: projects like the Hoover Dam, the Manhattan Project, and that ultimate expression of macro-technology, the Apollo moon landing. Even our computers were things filling entire floors, to be programmed and maintained by a small army of lab-coated drones. Golden-age science fiction was right on board with this emphasis on ever greater scope and scale, extrapolating grand voyages to the stars alongside huge infrastructure projects back home.

Not long after macro-technology enjoyed its greatest hurrah in the communal adventure that was Apollo, however, technology began to get personal. In the mid-1970s, the first personal computers began to appear. In 1979, in an event of almost equal significance, Sony introduced the Walkman, a cassette player the size of your hand, the first piece of lifestyle technology that you could carry around with you. The PC and the Walkman begat our iPhones and Fitbits of today. And if we believe what Gibson and the other cyberpunks were already saying in the early 1980s, those gadgets will in turn beget chip implants, nerve splices, body modifications, and artificial organs. The public has become personal; the outward-facing has become inward-facing; the macro spaces have become micro spaces. We now focus on making ever smaller gadgets, even as we’ve turned our attention away from the outer space beyond our planet in favor of drilling down ever further into the infinitesimal inner spaces of genes and cells, into the tiniest particles that form our universe. All of these trends first showed up in science fiction in the form of cyberpunk.

In marked contrast to the boldness of his stories’ content, Gibson was peculiarly cautious, even hesitant, when it came to the process of writing and of making a proper career out of the act. The fact that Neuromancer, Gibson’s seminal first novel, came into being when it did was entirely down to the intervention of Terry Carr, the same man who had kick-started Gibson’s career as a writer of short stories by publishing “The Gernsback Continuum.” When in 1983 he was put in charge of a new “Ace Specials” line of science-fiction paperbacks reserved exclusively for the first novels of up-and-coming writers, Carr immediately thought again of William Gibson. A great believer in Gibson’s talent and potential importance, he cajoled him into taking an advance and agreeing to write a novel; Gibson had considered himself still “four or five years away” from being ready to tackle such a daunting task. “It wasn’t that vast forces were silently urging me to write,” he says. “It’s just that Terry Carr had given me this money and I had to make up some kind of story. I didn’t have a clue, so I said, ‘Well, I’ll plagiarize myself and see what comes of it.'” And indeed, there isn’t that much in 1984’s Neuromancer that would have felt really new to anyone who had read all of the stories Gibson had written in the few years before it. As a distillation of all the ideas with which he’d been experimenting in one 271-page novel, however, it was hard to beat.

 

Neuromancer

The plot is never the most important aspect of a William Gibson novel, and this first one is no exception to that rule. Still, for the record…

Neuromancer takes place at some indeterminate time in the future, in a gritty society where the planet is polluted and capitalism has run amok, but the designer drugs and technological toys are great if you can pay for them. Our hero is Case, a former “console cowboy” who used to make his living inside the virtual reality, or “Matrix,” of a worldwide computer network, battling “ICE” (“Intrusion Countermeasures Electronics”) and pulling off heists for fun and profit. Unfortunately for him, an ex-employer with a grudge has recently fried those pieces of Case’s brain that interface with his console and let him inject himself into “cyberspace.” Left stuck permanently in “meat” space, as the novel opens he’s a borderline suicidal, down-and-out junkie. But soon he’s offered the chance to get his nervous system repaired and get back into the game by a mysterious fellow named Armitage, mastermind of a ragtag gang of outlaws who are investigating mysterious happenings on the Matrix. Eventually they’ll discover a rogue artificial intelligence behind it all — the titular Neuromancer.

Given that plot summary, we can no longer avoid addressing the thing for which William Gibson will always first and foremost be known, whatever his own wishes on the matter: he’s the man who invented the term “cyberspace,” as well as the verb “to surf” it and with them much of the attitudinal vector that accompanied the rise of the World Wide Web in the 1990s. It should be noted that both neologisms actually predate Neuromancer in Gibson’s work, dating back to 1982’s “Burning Chrome.” And it should most definitely be noted that he was hardly the first to stumble upon many of the ideas behind the attitude. We’ve already chronicled some of the developments in the realms of theory and practical experimentation that led to the World Wide Web. And in the realm of fiction, a mathematician and part-time science-fiction writer named Vernor Vinge had published True Names, a novella describing a worldwide networked virtual reality of its own, in 1981; its plot also bears some striking similarities to that of Gibson’s later Neuromancer. But Vinge was (and is) a much more prosaic writer than Gibson, hewing more to science fiction’s sturdy old school of Asimov, Clarke, and Heinlein. He could propose the idea of a worldwide network and then proceed to work it out with much more technical rigorousness than Gibson could ever dream of mustering, but he couldn’t hope to make it anywhere near as sexy.

For many the most inexplicable thing about Gibson’s work is that he should ever have come up with all this cyberspace stuff in the first place. As he took a certain perverse delight in explaining to his wide-eyed early interviewers, in his real-world life Gibson was something of a Luddite even by the standards of the 1980s. He had, for instance, never owned or used a computer at the time he wrote his early stories and Neuromancer; he wrote of his sleek high-tech futures on a clunky mechanical typewriter dating from 1927. (Gibson immortalized it within Neuromancer itself by placing it in disassembled form on the desk of Julius Deane, an underworld kingpin Case visits early in the novel.) And I’ve seen no evidence that Gibson was aware of True Names prior to writing “Burning Chrome” and Neuromancer, much less the body of esoteric and (at the time) obscure academic literature on computer networking and hypertext.

Typically, Gibson first conceived the idea of the Matrix not from reading tech magazines and academic journals, as Vinge did in conceiving his own so-called “Other Plane,” but on the street, while gazing through the window of an arcade. Seeing the rapt stares of the players made him think they believed in “some kind of actual space behind the screen, someplace you can’t see but you know is there.” In Neuromancer, he describes the Matrix as the rush of a drug high, a sensation with which his youthful adventures in the counterculture had doubtless left him intimately familiar.

He closed his eyes.

Found the ridged face of the power stud.

And in the bloodlit dark behind his eyes, silver phosphenes boiling in from the edge of space, hypnagogic images jerking past like film compiled from random frames. Symbols, figures, faces, a blurred, fragmented mandala of visual information.

Please, he prayed, now –

A gray disk, the color of Chiba sky.

Now –

Disk beginning to rotate, faster, becoming a sphere of paler gray. Expanding —

And flowed, flowered for him, fluid neon origami trick, the unfolding of his distanceless home, his country, transparent 3D chessboard extending to infinity. Inner eye opening to the stepped scarlet pyramid of the Eastern Seaboard Fission Authority burning beyond the green cubes of Mitsubishi Bank of America, and high and very far away he saw the spiral arms of military systems, forever beyond his reach.

And somewhere he was laughing, in a white-painted loft, distant fingers caressing the deck, tears of release streaking his face.

Much of the supposedly “futuristic” slang in Neuromancer is really “dope dealer’s slang” or “biker’s talk” Gibson had picked up on his travels. Aside from the pervasive role played by the street, he has always listed the most direct influences on Neuromancer as the cut-up novels of his literary hero William S. Burroughs, the noirish detective novels of Dashiell Hammett, and the deliciously dystopian nighttime neon metropolis of Ridley Scott’s film Blade Runner, which in its exploration of subjectivity, the nature of identity, and the influence of technology on same hit many of the same notes that became staples of Gibson’s work. That so much of the modern world seems to be shaped in Neuromancer‘s image says much about Gibson’s purely intuitive but nevertheless prescient genius — and also something about the way that science fiction can be not only a predictor but a shaper of the future, an idea I’ll return to shortly.

But before we move on to that subject and others we should take just a moment more to consider how unique Neuromancer, a bestseller that’s a triumph of style as much as anything else, really is in the annals of science fiction. In a genre still not overly known for striking or elegant prose, William Gibson is one of the few writers immediately recognizable after just a paragraph or two. If, on the other hand, you’re looking for air-tight world-building and careful plotting, Gibson is definitely not the place to find it. “You’ll notice in Neuromancer there’s obviously been a war,” he said in an interview, “but I don’t explain what caused it or even who was fighting it. I’ve never had the patience or the desire to work out the details of who’s doing what to whom, or exactly when something is taking place, or what’s become of the United States.”

I remember standing in a record store one day with a friend of mine who was quite a good guitar player when Jimi Hendrix’s famous Woodstock rendition of “The Star-Spangled Banner” came over the sound system. “All he does is make a bunch of noise to cover it up every time he flubs a note,” said my friend — albeit, as even he had to agree, kind of a dazzling noise. I sometimes think of that conversation when I read Neuromancer and Gibson’s other early works. There’s an ostentatious, look-at-me! quality to his prose, fueled by, as Gibson admitted, his “blind animal panic” at the prospect of “losing the reader’s attention.” Or, as critic Andrew M. Butler puts it more dryly: “This novel demonstrates great linguistic density, Gibson’s style perhaps blinding the reader to any shortcomings of the novel, and at times distancing us from the characters and what Gibson the author may feel about them.” The actual action of the story, meanwhile, Butler sums up not entirely unfairly as, “Case, the hapless protagonist, stumbles between crises, barely knowing what’s going on, at risk from a femme fatale and being made offers he cannot refuse from mysterious Mr. Bigs.” Again, you don’t read William Gibson for the plot.

Which of course only makes Neuromancer‘s warm reception by the normally plot-focused readers of science fiction all the more striking. But make no mistake: it was a massive critical and commercial success, winning the Hugo and Nebula Awards for its year and, as soon as word spread following its very low-key release, selling like crazy. Unanimously recognized as the science-fiction novel of 1984, it was being labeled the novel of the decade well before the 1980s were actually over; it was just that hard to imagine another book coming out that could compete with its influence. Gibson found himself in a situation somewhat akin to that of Douglas Adams during the same period, lauded by the science-fiction community but never quite feeling a part of it. “Everyone’s been so nice,” he said in the first blush of his success, “but I still feel very much out of place in the company of most science-fiction writers. It’s as though I don’t know what to do when I’m around them, so I’m usually very polite and keep my tie on. Science-fiction authors are often strange, ill-socialized people who have good minds but are still kids.” Politeness or no, descriptions like that weren’t likely to win him many new friends among them. And, indeed, there was a considerable backlash against him by more traditionalist writers and readers, couched in much the same rhetoric that had been deployed against science fiction’s New Wave of writers of twenty years before.

But if we wish to find reasons that so much of science-fiction fandom did embrace Neuromancer so enthusiastically, we can certainly find some that were very practical if not self-serving, and that had little to do with the literary stylings of William S. Burroughs or extrapolations on the social import of technological development. Simply put, Neuromancer was cool, and cool was something that many of the kids who read it decidedly lacked in their own lives. It’s no great revelation to say that kids who like science fiction were and are drawn in disproportionate numbers to computers. Prior to Neuromancer, such kids had few media heroes to look up to; computer hackers were almost uniformly depicted as socially inept nerds in Coke-bottle glasses and pocket protectors. But now along came Case, and with him a new model of the hacker as rock star, dazzling with his Mad Skillz on the Matrix by day and getting hot and heavy with his girlfriend Molly Millions, who seemed to have walked into the book out of an MTV music video, by night. For the young pirates and phreakers who made up the Scene, Neuromancer was the feast they’d never realized they were hungry for. Cyberpunk ideas, iconography, and vocabulary were quickly woven into the Scene’s social fabric.

Like much about Neuromancer‘s success, this way of reading it, which reduced it down to a stylish exercise in escapism, bothered Gibson. His book was, he insisted, not about how cool it was to be “hard and glossy” like Case and Molly, but about “what being hard and glossy does to you.” “My publishers keep telling me the adolescent market is where it’s at,” he said, “and that makes me pretty uncomfortable because I remember what my tastes ran to at that age.”

While Gibson may have been uncomfortable with the huge appetite for comic-book-style cyberpunk that followed Neuromancer‘s success, plenty of others weren’t reluctant to forgo any deeper literary aspirations in favor of piling the casual violence and casual sex atop the casual tech. As the violence got ever more extreme and the sex ever more lurid, cyberpunk risked turning into the most insufferable of clichés.

Sensation though cyberpunk was in the rather insular world of written science fiction, William Gibson and the sub-genre he had pioneered filtered only gradually into the world outside of that ghetto. The first cyberpunk character to take to the screen arguably was, in what feels like a very appropriate gesture, a character who allegedly lived within a television: Max Headroom, a curious computerized talking head who became an odd sort of cultural icon for a few years there during the mid- to late-1980s. Invented for a 1985 low-budget British television movie called Max Headroom: 20 Minutes into the Future, Max went on to host his own talk show on British television, to become an international spokesman for the ill-fated New Coke, and finally to star in an American dramatic series which managed to air 14 episodes on ABC during 1987 and 1988. While they lacked anything precisely equivalent to the Matrix, the movie and the dramatic series otherwise trafficked in themes, dystopic environments, and gritty technologies of the street not far removed at all from those of Neuromancer. The ambitions of Max’s creators were constantly curtailed by painfully obvious budgetary limitations as well as the pop-cultural baggage carried by the character himself; by the time of the 1987 television series he had become more associated with camp than serious science fiction. Nevertheless, the television series in particular makes for fascinating viewing for any student of cyberpunk history. (The series endeared itself to Commodore Amiga owners in another way: Amigas were used to create many of the visual effects used on the show, although not, as was occasionally reported, to render Max Headroom himself. He was actually played by an actor wearing a prosthetic mask, with various visual and auditory effects added in post-production to create the character’s trademark tics.)

There are other examples of cyberpunk’s slowly growing influence to be found in the film and television of the late 1980s and early 1990s, such as the street-savvy, darkly humorous low-budget action flick Robocop. But William Gibson’s elevation to the status of Prophet of Cyberspace in the eyes of the mainstream really began in earnest with a magazine called Wired, launched in 1993 by an eclectic mix of journalists, entrepreneurs, and academics. Envisioned as a glossy lifestyle magazine for the hip and tech-conscious — the initial pitch labeled it “the Rolling Stone of technology” — Wired‘s aesthetics were to a large degree modeled on William Gibson. When they convinced him to contribute a rare non-fiction article (on Singapore, which he described as “Disneyland with the death penalty”) to the fourth issue, the editors were so excited that they stuck the author rather than the subject of the article on their magazine’s cover.

Wired

Well-funded and editorially polished in all the ways that traditional technology journals weren’t, Wired was perfectly situated to become mainstream journalism’s go-to resource for understanding the World Wide Web and the technology bubble expanding around it. It was largely through Wired that “cyberspace” and “surfing” became indelible parts of the vocabulary of the age, even as both neologisms felt a long, long way in spirit from the actual experience of using the World Wide Web in those early days, involving as it did mostly text-only pages delivered to the screen at a glacial pace. No matter. The vocabulary surrounding technology has always tended to be grounded in aspiration rather than reality, and perhaps that’s as it should be. By the latter 1990s, Gibson was being acknowledged by even such dowdy organs as The New York Times as the man who had predicted it all five years before the World Wide Web was so much as a gleam in the eye of Tim Berners-Lee.

To ask whether William Gibson deserves his popular status as a prophet is, I would suggest, a little pointless. Yes, Vernor Vinge may have a better claim to the title in the realm of fiction, and certainly people like Vannevar Bush, Douglas Engelbart, Ted Nelson, and even Bill Atkinson of Apple have huge claims on the raw ideas that turned into the World Wide Web. Even within the oeuvre of William Gibson himself, his predictions in other areas of personal technology and society — not least his anticipation of globalization and its discontents — strike me as actually more prescient than his rather vague vision of a global computerized Matrix.

Yet, whether we like it or not, journalism and popular history do tend to condense complexities down to single, easily graspable names, and in this case the beneficiary of that tendency is William Gibson. And it’s not as if he didn’t make a contribution. Whatever the rest did, Gibson was the guy who made the idea of a networked society — almost a networked consciousness — accessible, cool, and fun. In doing so, he turned the old idea of science fiction as prophecy on its head. Those kids who grew up reading Neuromancer became the adults who are building the technology of today. If, with the latest developments in virtual reality, we seem to be inching ever closer to a true worldwide Matrix, we can well ask ourselves who is the influenced and who is the influencer. Certainly Neuromancer‘s effect on our popular culture has been all but incalculable. The Matrix, the fifth highest-grossing film of 1999 and a mind-expanding pop-culture touchstone of its era, borrowed from Gibson to the extent of naming itself after his version of virtual reality. In our own time, it’s hard to imagine current too-cool-for-school television hits like Westworld, Mr. Robot, and Black Mirror existing without the example of Neuromancer (or, at least, without The Matrix and thus by extension Neuromancer). The old stereotype of the closeted computer nerd, if not quite banished to the closet from which it came, does now face strong competition indeed. Cyberpunk has largely faded away as a science-fiction sub-genre or even just a recognized point of view, not because the ideas behind it died but because they’ve become so darn commonplace.

You may have noticed that up to this point I’ve said nothing about the books William Gibson wrote after Neuromancer. That it’s been so easy to avoid doing so says much about his subsequent career, doomed as it is always to be overshadowed by his very first novel. For understandable reasons, the situation hasn’t always sat well with Gibson himself. Already in 1992, he could only wryly reply, “Yeah, and they’ll never let me forget it,” when introduced as the man who invented cyberspace — this well before his mainstream fame as the inventor of the word had really even begun to take off. Writing a first book with the impact of Neuromancer is not an unalloyed blessing.

That said, one must also acknowledge that Gibson didn’t do his later career any favors in getting out from under Neuromancer‘s shadow. Evincing that peculiar professional caution that always sat behind his bold prose, he mined the same territory for years, releasing a series of books whose titles — Count Zero, Mona Lisa Overdrive, Virtual Light — seem as much of a piece as their dystopic settings and their vaguely realized plots. It’s not that these books have nothing to say; it’s rather that almost everything they do say is already said by Neuromancer. His one major pre-millennial departure from form, 1990’s The Difference Engine, is an influential exercise in Victorian steampunk, but also a book whose genesis owed much to his good friend and fellow cyberpunk icon Bruce Sterling, with whom he collaborated on it.

Here’s the thing, though: as he wrote all those somewhat interchangeable novels through the late 1980s and 1990s, William Gibson was becoming a better writer. His big breakthrough came with 2003’s Pattern Recognition, in my opinion the best pure novel he’s ever written. Perhaps not coincidentally, Pattern Recognition also marks the moment when Gibson, who had been steadily inching closer to the present ever since Neuromancer, finally decided to set a story in our own contemporary world. His prose is as wonderful as ever, full of sentence after sentence I can only wish I’d come up with, yet now free of the look-at-me! ostentation of his early work. One of the best ways to appreciate how much subtler a writer Gibson has become is to look at his handling of his female characters. Molly Millions from Neuromancer was every teenage boy’s wet dream come to life. Cayce, the protagonist of Pattern Recognition — her name is a sly nod back to Neuromancer‘s Case — is, well, just a person. Her sexuality is part of her identity, as it is for all of us, but it’s just a part. A strong, capable, intelligent character, she’s not celebrated by the author for any of these qualities. Instead she’s allowed just to be. This strikes me as a wonderful sign of progress — for William Gibson, and perhaps for all of us.

Which isn’t to say that Gibson’s dystopias have turned into utopias. While his actual plots remain as underwhelming as ever, no working writer of today that I’m aware of captures so adroitly the sense of dislocation and isolation that has become such a staple of post-millennial life — paradoxically so in this world that’s more interconnected than ever. If some person from the future or the past asked you how we live now, you could do a lot worse than to simply hand her one of William Gibson’s recent novels.

Whether Gibson is still a science-fiction writer is up for debate and, like so many exercises in labeling, ultimately inconsequential. There remains a coterie of old fans unhappy with the new direction, who complain about every new novel he writes because it isn’t another Neuromancer. By way of compensation, Gibson has come to be widely accepted as a writer of note outside of science-fiction fandom — a writer of note, that is, for something more than being the inventor of cyberspace. That of course doesn’t mean he will ever write another book with the impact of Neuromancer, but Gibson, who never envisioned himself as anything more than a cult writer in the first place, seems to have come to terms at last with the inevitability of the phrases “author of Neuromancer” and “coiner of the term ‘cyberspace’” appearing in the first line of his eventual obituary. Asked in 2007 by The New York Times whether he was “sick of being known as the writer who coined the word ‘cyberspace,’” he said he thought he’d “miss it if it went away.” In the meantime, he has more novels to write. We may not be able to escape our yesterdays, but we always have our today.

(Sources: True Names by Vernor Vinge; Conversations with William Gibson, edited by Patrick A. Smith; Bruce Sterling and William Gibson’s introductions to the William Gibson short-story collection Burning Chrome; Bruce Sterling’s preface to the cyberpunk short-story anthology Mirrorshades; “Science Fiction from 1980 to the Present” by John Clute and “Postmodernism and Science Fiction” by Andrew M. Butler, both found in The Cambridge Companion to Science Fiction; Spin of April 1987; Los Angeles Times of September 12, 1993; The New York Times of August 19, 2007; William Gibson’s autobiography from his website; “William Gibson and the Summer of Love” from the Toronto Dream Project; and of course the short stories and novels of William Gibson.)

 
 


How Jordan Mechner Made a Different Sort of Interactive Movie (or, The Virtues of Restraint)

One can learn much about the state of computer gaming in any given period by looking to the metaphors its practitioners are embracing. In the early 1980s, when interfaces were entirely textual and graphics crude or nonexistent, text adventures like those of Infocom were heralded as the vanguard of a new interactive literature destined to augment or entirely supersede non-interactive books. That idea peaked with the mid-decade bookware boom, when just about every entertainment-software publisher (and a few traditional book publishers) was rushing to sign established authors and books to interactive projects. It then proceeded to collapse just as quickly under the weight of its own self-importance when the games proved less compelling and the public less interested than anticipated.

Prompted by new machines like the Commodore Amiga with their spectacular graphics and sound, the industry reacted to that failure by turning to the movies for media mentorship. This relationship would prove more long-lasting. By the end of the 1980s, companies like Cinemaware and Sierra were looking forward confidently to a blending of Hollywood and Silicon Valley that they believed might just replace the conventional non-interactive movie, not to mention computer games as people had known them to that point. Soon most of the major publishers would be conducting casting calls and hiring sound stages, trying literally to make games out of films. It was an approach fraught with problems — problems that were only slowly and grudgingly acknowledged by these would-be unifiers of Southern and Northern Californian entertainment. Before it ran its course, it spawned lots of really terrible games (and, it must be admitted, against all the odds the occasional good one as well).

Given the game industry’s growing fixation on the movies as the clock wound down on the 1980s, Jordan Mechner would seem the perfect man for the age. Struggling with the blessing or curse of an equally abiding love for both mediums, his professional life had already been marked by constant vacillation between movies and games. Inevitably, his love of film influenced him even when he was making games. But, perhaps because that love was so deep and genuine, he accomplished the blending in a more even-handed, organic way than would most of the multi-CD, multi-gigabyte interactive movies that would soon be cluttering store shelves. Mechner’s most famous game, by contrast, filled just two Apple II disk sides — less than 300 K in total. And yet the cinematic techniques it employs have far more in common with those found in the games of today than do those of its more literal-minded rivals.


 

As a boy growing up in the wealthy hamlet of Chappaqua, New York, Jordan Mechner dreamed of becoming “a writer, animator, or filmmaker.” But those ambitions got modified if not discarded when he discovered computers at his high school. Soon after, he got his hands on his own Apple II for the first time. Honing his chops as a programmer, he started contributing occasional columns on BASIC to Creative Computing magazine at the age of just 14. Yet fun as it was to be the magazine’s youngest contributor, his real reason for learning programming was always to make games. “Games were the only kind of software I knew,” he says. “They were the only kind that I enjoyed. At that time, I didn’t really see any use for a word processor or a spreadsheet.” He fell into the throes of what he describes as an “obsession” to get a game of his own published.

Initially, he did what lots of other game programmers were doing at the time: cloning the big standup-arcade hits for fun and (hopefully) profit. He made a letter-perfect copy of Atari’s Asteroids, changed the titular space rocks to bright bouncing balls in the interest of plausible deniability, and sent the resulting Deathbounce off to Brøderbund for consideration; what with Brøderbund having been largely built on the back of Apple Galaxian, an arcade clone which made no effort whatsoever to conceal its source material, the publisher seemed a very logical choice. But Doug Carlston was now trying to distance his company from such fare for reasons of reputation as well as his fear of Atari’s increasingly aggressive legal threats. Nice guy that he was, he called Mechner personally to explain why Deathbounce wasn’t for Brøderbund. He promised to send Mechner a free copy of Brøderbund’s latest hit, Choplifter, suggesting he think about whether he might be able to apply the programming chops he had demonstrated in Deathbounce to a more original game, as Choplifter‘s creator Dan Gorlin had done. Mechner remembers the conversation as well-nigh life-changing. He had been so immersed in the programming side of making games that the idea of doing an original design had never really occurred to him before: “I didn’t have to copy someone else’s arcade game. I was allowed to design my own!”

Carlston’s phone call came in May of 1982, when Mechner was finishing up his first year at Yale University; undecided about his major, as he was about so much else in his life at the time, he would eventually wind up with a bachelor’s degree in psychology. We’re granted an unusually candid and personal glimpse into his life between 1982 and 1993 thanks to his private journals, which he published (doubtless in a somewhat expurgated form) in 2012. The early years paint a picture of a bright, sensitive young man born into a certain privilege that carries with it the luxury of putting off adulthood for quite some time. He romanticizes chance encounters (“I saw a heartbreakingly beautiful young blonde out of the corner of my eye. She was wearing a blue down vest. As she passed, our eyes met. She smiled at me. As I went out I held the door for her; her fingers grazed mine. Then she was gone.”); frets frequently about cutting classes and generally not being the man he ought to be (“I think Ben is the only person who truly comprehends the depths of how little classwork I do.”); alternates between grand plans accompanied by frenzies of activity and indecision accompanied by long days of utter sloth (“Here’s what I do do: listen to music. Browse in record stores. Read newspapers, magazines, play computer games, stare out the windows. See a lot of movies.”); muses with all the self-obliviousness of youth on whether he would prefer “writing a bestselling novel or directing a blockbusting film,” as if attaining fame and fortune were as simple as deciding on one or the other.

At Yale, film, that other constant of his creative life, came to the fore. He joined every film society he stumbled upon, signed up for every film-studies course in the catalog, and set about “trying to see in four years every film ever made”; Akira Kurosawa’s classic adventure epic Seven Samurai (a major inspiration behind Star Wars among other things) emerged as his favorite of them all. He also discovered an unexpected affinity for silent cinema, which naturally led him to compare that earliest era of film with the current state of computer games, a medium that seemed in a similar state of promising creative infancy. All of this, combined with the example of Choplifter and the karate lessons he was sporadically attending, led to Karateka, the belated fruition of his obsession with getting a game published.

To a surprising degree given his youth and naivete, Mechner consciously designed Karateka as the proverbial Next Big Thing in action games after the first wave of simple quarter munchers, whose market he watched collapse over the two-plus years he spent intermittently working on it. Plenty of fighting games had appeared on the Apple II and other platforms before, some of them very playable; Mechner wasn’t sure he could really improve on their templates when it came to pure game play. What he could do, however, was give his game some of the feel and emotional resonance of cinema. Reasoning that computer games were technically on par with the first decade or two of film in terms of the storytelling tools at his disposal, he mimicked the great silent-film directors in building his story out of the broadest archetypal elements: an unnamed hero must assault a mountain fortress to rescue an abducted princess, fighting through wave after wave of enemies, culminating in a showdown with the villain himself. He energetically cross-cut the interactive fighting sequences with non-interactive scenes of the villain issuing orders to his minions while the princess looks around nervously in her cell — a suspense-building technique from cinema dating back to The Birth of a Nation. He mimicked the horizontal wipes Kurosawa used for transitions in Seven Samurai; mimicked the scrolling textual prologue from Star Wars. When the player lost or won, he printed “THE END” on the screen in lieu of “GAME OVER.” And, indeed, he made it possible, although certainly not easy, to win Karateka and carry the princess off into the sunset. The player was, in other words, playing for bigger stakes than a new high score.

Karateka

The most technically innovative aspect of Karateka — suggested, like much in the game, by Mechner’s very supportive father — involved the actual people on the screen. To make his fighters move as realistically as possible, Mechner made use for the first time in a computer game of an old cartoon-animation technique known as rotoscoping. After shooting some film footage of his karate instructor in action, doing various kicks and punches, Mechner used an ancient Moviola editing machine that had somehow wound up in the basement of the family home to isolate and make prints out of every third frame. He imported the figure at the center of each print into his Apple II by tracing it on a contraption called the VersaWriter. Flipped through in sequence, the resulting sprites appeared to “move” in an unusually fluid and realistic fashion. “When I saw that sketchy little figure walk across the screen,” he wrote in his journal, “looking just like Dennis [his karate instructor], all I could say was ‘ALL RIGHT!’ It was a glorious moment.”

Karateka

Doug Carlston, who clearly saw something special in this earnest kid, was gently encouraging and almost infinitely patient with him. When it looked like Mechner had come up with something potentially great at last, Carlston signed him to a contract and flew him out to California in the summer of 1984 to finish it up with the help of Brøderbund’s in-house staff. Released just a little too late to fully capitalize on the 1984 Christmas rush, Karateka started slowly but gradually turned into a hit, especially once the Commodore 64 port dropped in June of 1985. Once ported to Nintendo for the domestic Japanese market, it proceeded to sell many hundreds of thousands of units, making Jordan Mechner a very flush young man indeed.

So, Mechner, about to somehow manage to graduate despite all the missed assignments and cut classes spent working on Karateka, seemed poised for a fruitful career making games. Yet he continued to vacillate between his twin obsessions. Even as his game, the most significant accomplishment of his young life and one of which anyone could justly be proud, had entered the homestretch, he had written how “I definitely want my next project to be film-related. Videogames have taken up enough of my time for now.” In the wake of his game’s release, the steady stream of royalties therefrom only made it easier to dabble in film.

Mechner spent much of the year after graduating from university back at home in Chappaqua working on his first screenplay. In between writing dialog and wracking himself with doubt over whether he really wanted to do another game at all, he occasionally turned his attention to the idea of a successor to Karateka. Already during that first summer after Yale, he and Gene Portwood, a Brøderbund executive, dreamed up a scenario for just such a beast: an Arabian Nights-inspired story involving an evil sultan, a kidnapped princess, and a young man — the player, naturally — who must rescue her. Karateka in Middle Eastern clothing though it may have been in terms of plot, that was hardly considered a drawback by Brøderbund, given the success of Mechner’s first game.

Seven frames of animation ready to be photocopied and digitized.

Determined to improve upon the rotoscoping of Karateka, Mechner came up with a plan to film a moving figure and use a digitizer to capture the frames into the computer, rather than tracing the figure using the VersaWriter. He spent $2500 on a high-end VCR and video camera that fall, knowing he would return them before his month’s grace period was out (“I feel so dishonest,” he wrote in his journal). The technique he had in the works may have been an improvement over what he had done for Karateka, but it was still very primitive and hugely labor-intensive. After shooting his video, he would play it back on the VCR, pausing it on each frame he wanted to capture. Then he would take a picture of the screen using an ordinary still camera and get the film developed. The next step was to trace the outline of the figure in the photograph using Magic Marker and fill him in using White-Out. Then he would Xerox the doctored photograph to get a black-and-white version with a very clear silhouette of the figure. Finally, he would digitize the photocopy to import it into his Apple II, and erase everything around the figure by hand on the computer to create a single frame of sprite animation. He would then get to go through this process a few hundred more times to get the prince’s full repertoire of movements down.
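For readers curious what all that manual labor was actually accomplishing, here is a minimal modern sketch of the same idea in Python, using the OpenCV library: sample every few frames of a video clip and reduce each one to a hard black-and-white silhouette, roughly the end product of Mechner’s camera, Magic Marker, Xerox, and digitizer chain. It is only an illustrative analogue, not anything he could have run in 1985; the file names, the every-third-frame step (echoing the sampling he used for Karateka), and the threshold value are all assumptions made up for the example.

```python
# A minimal, illustrative analogue of the rotoscoping pipeline described above.
# Assumed inputs: any short video clip; output is a numbered series of
# silhouette images suitable for flipping through as animation frames.
import cv2  # OpenCV


def extract_silhouettes(video_path, out_prefix, step=3, threshold=96):
    """Save every `step`-th frame of the clip as a thresholded silhouette."""
    capture = cv2.VideoCapture(video_path)
    frame_index = 0
    saved = 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break  # end of clip
        if frame_index % step == 0:
            # Grayscale, then a hard threshold: the digital stand-in for the
            # Magic Marker / White-Out / photocopy steps.
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            _, silhouette = cv2.threshold(gray, threshold, 255, cv2.THRESH_BINARY)
            cv2.imwrite(f"{out_prefix}_{saved:04d}.png", silhouette)
            saved += 1
        frame_index += 1
    capture.release()
    return saved


if __name__ == "__main__":
    # "david_running.mov" is a hypothetical file name, not Mechner's footage.
    count = extract_silhouettes("david_running.mov", "prince_frame")
    print(f"captured {count} silhouette frames")
```

Flipped through in sequence, frames produced this way give the same "wiggling and jiggling" illusion of life Mechner describes below, which is exactly why the cleanup step matters less than the sampling.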


On October 20, 1985, Jordan Mechner did his first concrete work on the game that would become Prince of Persia, using his ill-gotten video camera to film his 16-year-old brother David running and jumping through a local parking lot. When he finally got around to buying a primitive black-and-white image digitizer for his trusty Apple II more than six months later, he quickly determined that the footage he’d shot was useless due to poor color separation. Nevertheless, he saw potential magic.

I still think this can work. The key is not to clean up the frames too much. The figure will be tiny and messy and look like crap… but I have faith that, when the frames are run in sequence at 15 fps, it’ll create an illusion of life that’s more amazing than anything that’s ever been seen on an Apple II screen. The little guy will be wiggling and jiggling like a Ralph Bakshi rotoscope job… but he’ll be alive. He’ll be this little shimmering beacon of life in the static Apple-graphics Persian world I’ll build for him to run around in.

For months after that burst of enthusiasm, however, he did little more with the game.

At last in September of 1986, having sent his screenplay off to Hollywood and thus with nothing more to do on that front but wait, Mechner moved out to San Rafael, California, close to Brøderbund’s offices, determined to start in earnest on Prince of Persia. He spent much time over the next few months refining his animation technique, until by Christmas everyone who saw the little running and jumping figure was “bowled over” by him. Yet after that progress again slowed to a crawl, as he struggled to motivate himself to turn his animation demos into an actual game.

And then, on May 4, 1987, came the phone call that would stop the little running prince in his tracks for the better part of a year. A real Hollywood agent called to tell him she “loved” his script for Birthstone, a Spielbergian supernatural comedy/thriller along the lines of Gremlins or The Goonies. Within days of her call, the script was optioned by Larry Turman, a major producer with films like The Graduate on his resume. For months Mechner fielded phone calls from a diverse cast of characters with a diverse cast of suggestions, did endless rewrites, and tried to play the Hollywood game, schmoozing and negotiating and trying not to appear to be the awkward, unworldly kid he still largely was. Only when Birthstone seemed permanently stuck in development hell — “Hollywood’s the only town where you can die of encouragement,” he says wryly, quoting Pauline Kael —  did he give up and turn his attention back to games. Mechner notes today that just getting as far as he did with his very first script was a huge achievement and a great start in itself. After all, he was, if not quite hobnobbing with the Hollywood elite, at least getting rejection letters from such people as Michael Apted, Michael Crichton, and Henry Winkler; such people were reading his script. But he had been spoiled by the success of Karateka. If he wrote another screenplay, there was no guarantee it would get even as far as his first had. If he finished Prince of Persia, on the other hand, he knew Brøderbund would publish it.

And so, in 1988, it was back to games, back to Prince of Persia. Inspired by “puzzly” 8-bit action games like Doug Smith’s Lode Runner and Ed Hobbs’s The Castles of Dr. Creep, his second game was shaping up to be more than just a game of combat. Instead his prince would have to make his way through area after area full of tricks, traps, and perilous drops. “What I wanted to do with Prince of Persia,” Mechner says, “was a game which would have that kind of logical, head-scratching, fast-action, Lode Runner-esque puzzles in a level-based game but also have a story and a character that was trying to accomplish a recognizable human goal, like save a princess. I was trying to merge those two things.” Ideally, the game would play like the iconic first ten minutes of Raiders of the Lost Ark, in which Indiana Jones runs and leaps and dodges and sometimes outwits rather than merely outruns a series of traps. For a long while, Mechner planned to make the hero entirely defenseless, as a sort of commentary on the needless ultra-violence found in so many other games. In the end, he didn’t go that far — the allure of sword-fighting, not to mention commercial considerations, proved too strong — but Prince of Persia was nevertheless shaping up to be a far more ambitious, multi-faceted work than Karateka, boasting much more than just improved running and jumping animations.

With just 128 K of memory to work with on the Apple II, Mechner was forced to make Prince of Persia a modular design, relying on a handful of elements which are repeatedly reused and recombined. Take, for instance, the case of the loose floorboards. The first time they appear, they’re a simple trap: you have to jump over a section of the floor to avoid falling into a pit. Later, they appear on the ceiling, as part of the floor above your own; caught in an apparent cul de sac, you have to jump up and bash the ceiling to open an escape route. Still later, they can be used strategically: to kill guards below you by dropping the floorboards on their heads, or to hold down a pressure plate below you that opens a door on the level on which you’re currently standing. It’s a fine example of a constraint in game design turning into a strength. “There’s a certain elegance to taking an element the player is already familiar with,” says Mechner, “and challenging him to think about it in a different way.”
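To make that idea of modular reuse concrete, here is a toy sketch in Python (emphatically not Mechner’s actual Apple II code; every name in it is invented for illustration) of how a single element like the loose floorboard can mean different things purely because of the context in which the player meets it:

```python
# Toy illustration only: one reusable tile type whose effect depends on context.
from dataclasses import dataclass, field


@dataclass
class Tile:
    kind: str = "solid"  # "solid", "loose", or "empty"


@dataclass
class Level:
    # (x, y) -> Tile, with y increasing downward, as in a side-view platformer.
    grid: dict = field(default_factory=dict)

    def step_on(self, x, y):
        """Walked on from above: a loose board gives way and falls below."""
        if self.grid.get((x, y), Tile()).kind == "loose":
            self.grid[(x, y)] = Tile("empty")
            return f"board at {(x, y)} falls toward {(x, y + 1)}"
        return "solid footing"

    def bash_from_below(self, x, y):
        """Struck from beneath: the same board opens an escape route upward."""
        if self.grid.get((x, y), Tile()).kind == "loose":
            self.grid[(x, y)] = Tile("empty")
            return f"escape route opened at {(x, y)}"
        return "ceiling holds"


# One element, two very different uses, with no new art or new tile logic.
level = Level({(4, 2): Tile("loose")})
print(level.step_on(4, 2))          # the board is a trap underfoot...
level.grid[(4, 2)] = Tile("loose")
print(level.bash_from_below(4, 2))  # ...and an exit when it forms the ceiling
```

The falling board in the sketch could just as easily land on a guard or a pressure plate one row down, which is the sort of recombination the paragraph above describes.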


On July 14, 1989, Mechner shot the final footage for Prince of Persia: the denouement, showing the prince — now played by the game’s project manager at Brøderbund, Brian Ehler — embracing the rescued princess — played by Tina LaDeau, the 18-year-old daughter of another Brøderbund employee, in her prom dress. (“Man, she is a fox,” Mechner wrote in his journal. “Brian couldn’t stop blushing when I had her embrace him.”)

The game shipped for the Apple II on October 6, 1989. And then, despite a very positive review in Computer Gaming World — Charles Ardai called it nothing less than “the Star Wars of its field,” music to the ears of a movie buff like Mechner — it proceeded to sell barely at all: perhaps 500 units a month. It was, everyone at Brøderbund agreed, at least a year too late to hope to sell significant numbers of a game like this on the Apple II, whose only remaining commercial strength was educational software, thanks to the sheer number of the things still installed in American schools. Mechner’s procrastination and vacillation had spoiled this version’s commercial prospects entirely.

Thankfully, the Apple II version wasn’t to be the only one. Brøderbund already had programmers and artists working on ports to MS-DOS and the Amiga, the last two truly viable computer-gaming platforms in North America. Mechner as well turned his attention to the versions for these more advanced machines as soon as the Apple II version was finished. And once again his father pitched in, composing a lovely score for the luxuriously sophisticated sound hardware now at the game’s disposal. “This is going to be the definitive version of Prince of Persia,” Mechner enthused over the MS-DOS version. “With VGA [graphics] and sound card, on a fast machine, it’ll blow the Apple away. It looks like a Disney film. It’s the most beautiful game I’ve ever seen.” Reworked though they were in almost all particulars, at the heart of the new versions lay the same digitized film footage that had made the 8-bit prince run and leap so fluidly.

Prince of Persia

And yet, after it shipped on April 19, 1990, the MS-DOS version also disappointed. Mechner chafed over his publisher’s disinterest in promoting the game; they seemed on the verge of writing it off, noting how the vastly superior MS-DOS version was being regarded as just another port of an old 8-bit game, and thus would likely never be given a fair shake by press or public. True as ever to the bifurcated pattern of his life, he decided to turn back to film. Having tried and failed to get into New York University film school, he resorted to working as a production assistant in movies by way of supporting himself and trying to drum up contacts in the film-making community of New York. Thus the first anniversary of Prince of Persia‘s original release on the Apple II found him schlepping crates around New York City. His career as a game developer seemed to be behind him, and truth be told his prospects as a filmmaker didn’t look a whole lot brighter.

The situation began to reverse itself only after the Amiga version was finished — programmed, as it happened, by Dan Gorlin, the very fellow whose Choplifter had first inspired Mechner to look at his own games differently. In Europe, the Amiga’s stronghold, Prince of Persia was free of the baggage which it carried in North America — few in Europe had much idea of what an Apple II even was — and doubtless benefited from a much deeper and richer tradition on European computers of action-adventures and platform puzzlers. It received ebullient reviews and turned into a big hit on European Amigas, and its reputation gradually leaked back across the pond to turn it at last into a hit in its homeland as well. Thus did Prince of Persia become a slow grower of an international sensation — a very unusual phenomenon in the hits-driven world of videogames, where shelf lives are usually short and retailer patience shorter. Soon came the console releases, along with releases for various other European and Japanese domestic computers, sending total sales soaring to over 2 million units.

By the beginning of 1992, Mechner was far removed from his plight of just eighteen months before. He was drowning in royalties, consulting intermittently with Brøderbund on a Prince of Persia 2 — it was understood that his days in the programming trenches were behind him — and living a globetrotting lifestyle, jaunting from Paris to San Rafael to Madrid to New York as whim and business took him. He was also planning his first film, a short documentary to be shot in Cuba, and already beginning to mull over what would turn into his most ambitious and fascinating game production of all, known at this point only as “the train game.”

Prince of Persia, which despite the merits of that eventual “train game” is and will likely always remain Mechner’s signature work, strikes me most of all as a triumph of presentation. The actual game play is punishingly difficult. Each of its twelve levels is essentially an elaborate puzzle that can only be worked out by dying many times, when you aren’t busy getting trapped in one of its far too many dead ends. Even once you think you have it all worked out, you still need to execute every step with perfect precision, no mean feat in itself. Messing up at any point in the process means starting that level over again from the beginning. And, because you only have one hour of real time to rescue the princess, every failure is extremely costly; a perfect playthrough, accomplished with absolute surety and no hesitations, takes about half an hour, leaving precious little margin for error. At least there is a “save” feature that will let you bookmark each level starting with the third, so you don’t have to replay the whole game every time you screw up — which, believe me, you will, hundreds if not thousands of times before you finally rescue the princess. Beating Prince of Persia fair and square is a project for a summer vacation from those long-gone adolescent days when responsibilities were few and distractions fewer. As a busy adult, I find it too repetitive and too reliant on rote patterns, as well as — let’s be honest here — just too demanding on my aging reflexes. In short, the effort-to-reward ratio strikes me as way out of whack. Of course, I’m sure that, given Prince of Persia’s status as a beloved icon of gaming, many of you have a different opinion.

So, let’s turn back to something on which we can hopefully all agree: the brilliance of that aforementioned presentation, which brings to aesthetic maturity many of the techniques Mechner had first begun to experiment with in Karateka. Rather than using filmed footage as a tool for achieving fluid, lifelike motion, as Mechner had, games in the years immediately following Prince of Persia would be plastered with jarring chunks of poorly acted, poorly staged “full-motion video.” Such spectacles look far more dated today than the restrained minimalism of Prince of Persia. The industry as a whole would take years to wind up back at the place where Jordan Mechner had started: appropriating some of the language of cinema in the service of telling a story and building drama, without trying to turn games into literal interactive movies. Mechner:

Just as theater is its own thing — with its own conventions, things that it does well, things it does badly — so is film, and so [are] computer games. And there is a way to borrow from one medium to another, and in fact that’s what an all-new medium does when it’s first starting out. Film, when it was new, looked like someone set up a camera front and center and filmed a staged play. Then the things that are specific to film — like the moving camera, close-ups, reaction shots, dissolves — all these kinds of things became part of the language of cinema. It’s the same with computer games. To take a long film sequence and to play that on your TV screen is the bad way to make a game cinematic. The computer game is not a VCR. But if you can borrow from the knowledge that we all carry inside our heads of how cuts work, how reaction shots work, what a low angle means dramatically, what it means when the camera suddenly pulls back… We’ve got this whole collective unconscious of the vocabulary of film, and that’s a tremendously valuable tool to bring into computer gaming.

In a medium that has always struggled to tamp down its instinct toward aesthetic maximalism, Mechner’s games still stand out for their concern with balance and proportion. Mechner again:

Visuals are [a] component where it’s often tempting to compromise. You think, “Well, we could put a menu bar across here, we could put a number in the upper right-hand corner of the screen representing how many potions you’ve drunk,” or something. The easy solution is always to do something that as a side effect is going to make the game look ugly. So I took as one of the ground rules going in that the overall screen layout had to be pleasing, had to be strong and simple. So that somebody who was not playing the game but who walked into the room and saw someone else playing it would be struck by a pleasing composition and could stop to watch for a minute, thinking, “This looks good, this looks as if I’m watching a movie.” It really forces you as a designer to struggle to find the best solution for things like inventory. You can’t take the first solution that suggests itself, you have to try to solve it within the constraints you set yourself.

Mechner’s take on visual aesthetics can be seen as a subversion of Ken Williams’s old “ten-foot rule,” which, as you might remember, stated that every Sierra game ought to be visually arresting enough to make someone say “Wow!” when glimpsing it from ten feet away across a crowded shop. Mechner believed that game visuals ought to be more than just striking; they ought to be aesthetically good by the more refined standards of film and the other, even older visual arts. All that time Mechner spent obsessing over films and film-making, which could all too easily be labeled a complete waste of time, actually allowed him to bring something unique to the table, something that made him different from virtually all of his many contemporaries in the interactive-movie business.

There are various ways to situate Jordan Mechner’s work in general and Prince of Persia in particular within the context of gaming history. It can be read as the last great swan song of the Apple II and, indeed, of the entire era of 8-bit computer gaming, at least in North America. It can be read as yet one more example of Brøderbund’s downright bizarre commercial Midas touch, which continued to yield a staggering number of hits from a decidedly modest roster of new releases (Brøderbund also released SimCity in 1989, thus spawning two of the most iconic franchises in gaming history within bare months of one another). It can be read as the precursor to countless cinematic action-adventures and platformers to come, many of whose designers would acknowledge it as a direct influence. In its elegant simplicity, it can even be read as a fascinating outlier from the high-concept complexity that would come to dominate American computer gaming in the very early 1990s. But the reading that makes me happiest is to simply say that Prince of Persia showed how less can be more. There’s no need to take my word for it; just have a look for yourself.


(Sources: Game Design Theory and Practice by Richard Rouse III; The Making of Karateka and The Making of Prince of Persia by Jordan Mechner; Creative Computing of March 1979, September 1979, and May 1980; Next Generation of May 1998; Computer Gaming World of December 1989; Jordan Mechner’s Prince of Persia postmortem from the 2011 Game Developers Conference; “Jordan Mechner: The Man Who Would Be Prince” from Games™; the Jordan Mechner and Brøderbund archives at the Strong Museum of Play.)

 
 


Cinemaware’s Year in the Desert

The last year of the 1980s was also the last that the Commodore Amiga would enjoy as the ultimate American game machine. Even as the low-end computer-game market was being pummeled into virtual nonexistence by the Nintendo Entertainment System, leaving the Amiga with little room into which to expand downward, the heretofore business-centric world of MS-DOS was developing rapidly on the high end, with VGA graphics and sound cards becoming more and more common. The observant could already recognize that these developments, combined with Commodore’s lackadaisical attitude toward improving their own technology, must spell serious trouble for the Amiga in the long run.

But for now, for this one more year, things were still going pretty well. Amiga zealots celebrated loudly and proudly at the beginning of 1989 when news broke that the platform had pushed past the magic barrier of 1 million machines sold. As convinced as ever that world domination was just around the corner for their beloved “Amy,” they believed that number would have to lead to her being taken much more seriously by the big non-gaming software houses. While that, alas, would never happen, sales were just beginning to take off in many of the European markets that would sustain the Amiga well into the 1990s.

This last positive development fed directly into the bottom line of Cinemaware, the American software house most closely identified with the Amiga, to a large extent even in Europe. Cinemaware’s founder Bob Jacob wisely forged close ties with the exploding European Amiga market via a partnership with the British publisher Mirrorsoft. In this way he got Cinemaware’s games wide distribution and promotion throughout Europe, racking up sales across the pond under the Mirrorsoft imprint that often dramatically exceeded those Cinemaware was able to generate under their own label in North America. The same partnership led to another welcome revenue stream: the importation of European games into Cinemaware’s home country. Games like Speedball, by the rockstar British developers the Bitmap Brothers, didn’t have much in common with Cinemaware’s usual high-concept fare, but did feed the appetite for splashy, frenetic, often ultra-violent action among the American youngsters who had recently found Amiga 500s under their Christmas trees.

Yet Cinemaware’s biggest claim to fame remained their homegrown interactive movies — which is not to say that everyone was a fan of their titular cinematic approach to game-making. A steady drumbeat of criticism, much of it far from unjustified, had accompanied the release of each new interactive movie since the days of Defender of the Crown. Take away all of the music and pretty pictures that surrounded their actual game play, went the standard line of attack, and these games were nothing but shallow if not outright broken exercises in strategy attached to wonky, uninteresting action mini-games. Cinemaware clearly took the criticism to heart despite the sales success they continued to enjoy. Indeed, the second half of the company’s rather brief history can to a large extent be read as a series of reactions to that inescapable negative drumbeat, a series of attempts to show that they could make good games as well as pretty ones.

At first, the new emphasis on depth led to decidedly mixed results. Conflating depth with difficulty in a manner akin to the way that so many adventure-game designers conflate difficulty with unfairness, Cinemaware gave the world Rocket Ranger as their second interactive movie of 1988. It had all the ingredients to be great, but was undone by balance issues exactly the opposite of those which had plagued the prototypical Cinemaware game, Defender of the Crown. In short, Rocket Ranger was just too hard, a classic game-design lesson in the dangers of overcompensation and the importance of extensive play-testing to get that elusive balance just right. With two more new interactive movies on the docket for 1989, players were left wondering whether this would be the year when Cinemaware would finally get it right.

Lords of the Rising Sun

Certainly they showed no sign of backing away from their determination to bring more depth to their games. On the contrary, they pushed that envelope still harder with Lords of the Rising Sun, their first interactive movie of 1989. At first glance, it was a very typical Cinemaware confection, a Defender of the Crown set in feudal Japan. Built like that older game from the tropes and names of real history without bothering to be remotely rigorous about any of it, Lords of the Rising Sun is also another strategy game broken up by action-oriented minigames — the third time already, following Defender of the Crown and Rocket Ranger, that Cinemaware had employed this template. This time, however, a concerted effort was made to beef up the strategy game, not least by making it into a much more extended affair. Lords of the Rising Sun became just the second interactive movie to include a save-game feature, and in this case it was absolutely necessary; a full game could absorb many hours. It thus departed more markedly than anything the company had yet done from Bob Jacob’s original vision of fast-playing, non-taxing, ultra-accessible games. Indeed, with a thick manual and a surprising amount of strategic and tactical detail to keep track of, Lords of the Rising Sun can feel more like an SSI game than a typical Cinemaware production once you look past its beautiful audiovisual presentation. Reaching for the skies, and perhaps beyond their grasp, Cinemaware even elected to include the option of playing the game as an exercise in pure strategy, with the action sequences excised.


But sadly, the strategy aspect is as inscrutable as a Zen koan. While Rocket Ranger presents with elegance and grace a simple strategy game that would be immensely entertaining if it wasn’t always kicking your ass, Lords of the Rising Sun is just baffling. You’re expected to move your armies over a map of Japan, recruiting allies where possible, fighting battles to subdue enemies where not. Yet it’s all but impossible to divine any real sense of the overall situation from the display. This would-be strategy game ends up feeling more random than anything else, as you watch your banners wander around seemingly of their own volition, bumping occasionally into other banners that may represent enemies or friends. It suffers mightily from a lack of clear status displays, making it really, really hard to keep track of who wants to do what to whom. If you have the mini-games turned on, the bird’s-eye view is broken up by arcade sequences that are at least as awkward as the strategy game. In the end, Lords of the Rising Sun is just no fun at all.


While it’s very pretty, Lords of the Rising Sun‘s animated, scrolling map is nicer to look at than it is a practical tool for strategizing.

Press and public alike were notably unkind to Lords of the Rising Sun. Claims like Bob Jacob’s that “there is more animation in Lords than has ever been done in any computer game” — a claim as unquantifiable as it was dubious, especially in light of some of Sierra’s recent efforts — did nothing to shake Cinemaware’s reputation for being all sizzle, no steak. Ken St. Andre of Tunnels & Trolls and Wasteland fame, reviewing the game for Questbusters magazine, took Cinemaware to task on its every aspect, beginning with the excruciating picture on the box of a cowering maiden about to fall out of her kimono; he deemed it “an insult to women everywhere and to Japanese culture in particular.” (Such a criticism sounds particularly forceful coming from St. Andre; Wasteland with its herpes-infested prostitutes and all the rest is hardly a bastion of political correctness.) He concluded his review with a zinger so good I wish I’d thought of it: he called the game “a Japanese Noh play.”

Many other reviewers, while less boldly critical, seemed nonplussed by the whole experience — a very understandable reaction to the strategy game’s vagaries. Sales were disappointing in comparison to those of earlier interactive movies, and the game has gone down in history alongside the equally underwhelming S.D.I. as perhaps the least remembered of all the Cinemaware titles.

It Came from the Desert

So, what with the game-play criticisms beginning to affect the bottom line, Cinemaware really needed to deliver something special for their second game of 1989. Thankfully, It Came from the Desert would prove to be the point where they finally got this interactive-movie thing right, delivering at long last a game as nice to play as it is to look at.


It Came from the Desert was the first of the interactive movies not to grow from a seed of an idea planted by Bob Jacob himself. Its originator was rather David Riordan, a newcomer to the Cinemaware fold with an interesting career in entertainment already behind him. As a very young man, he’d made a go of it in rock music, enjoying his biggest success in 1970 with a song called “Green-Eyed Lady,” a #3 hit he co-wrote for the (briefly) popular psychedelic band Sugarloaf. The song remains a perennial on Boomer radio to this day, and its royalties doubtless went a long way toward letting him explore his other creative passions after his music career wound down. He worked in movies for a while, then collaborated with MIT on a project exploring the interactive potential of laser discs. After that, he worked briefly for Lucasfilm Games during their heady early days with Peter Langston at the helm. And from there, he moved on to Atari, where he worked on laser-disc-driven stand-up arcade games until it became obvious that Dragon’s Lair and its spawn had been the flashiest of flashes in the pan.


David Riordan on the job at Cinemaware.

Riordan’s resume points to a clear interest in blending cinematic approaches with interactivity. It thus comes as little surprise that he was immediately entranced when he first saw Defender of the Crown one day at his brother-in-law’s house. It had, he says, “all the movie attributes and approaches that I had been trying to get George Lucas interested in” while still with Lucasfilm. He wrote to Cinemaware, striking up a friendship with Bob Jacob which led him to join the company in 1988. Seeing in Riordan a man who very much shared his own vision for Cinemaware, Jacob relinquished a good deal of the creative control onto which he had heretofore held so tightly. Riordan was placed in charge of the company’s new “Interactive Entertainment Group,” which was envisioned as a production line for cranking out new interactive movies of far greater sophistication than those Cinemaware had made to date. These latest and greatest efforts were to be made available on a whole host of platforms, from their traditional bread and butter the Amiga to the much-vaunted CD-based platforms now in the offing from a number of hardware manufacturers. If all went well, It Came from the Desert would mark the beginning of a whole new era for Cinemaware.


Here we can see — just barely; sorry for this picture’s terrible fidelity — Cinemaware’s interactive-movie scripting tool MasterPlan, running in HyperCard.

Cinemaware spent months building the technology that would allow them to make It Came from the Desert. Riordan’s agenda can best be described as a desire to free game design from the tyranny of programmers. If this new medium was to advance sufficiently to tell really good, interesting interactive stories, he reasoned, its tools would have to become something that non-coding “real” writers could successfully grapple with. Continuing to advance Cinemaware’s movie metaphors, his team developed a game engine that could largely be “scripted” in point-and-click fashion in HyperCard rather than needing to be programmed in any conventional sense. Major changes to the structure of a game could be made without ever needing to write a line of code, simply by editing the master plan of the game in a HyperCard tool Cinemaware called, appropriately enough, MasterPlan. The development process leveraged the best attributes of a number of rival platforms: Amigas ran the peerless Deluxe Paint for the creation of art; Macs ran HyperCard for the high-level planning; fast IBM clones served as the plumbing of the operation, churning through compilations and compressions. It was by anyone’s standards an impressive collection of technology — so impressive that the British magazine ACE, after visiting a dozen or more studios on a sort of grand tour of the American games industry, declared Cinemaware’s development system the most advanced of them all. Cinemaware had come a long way from the days of Defender of the Crown, whose development process had consisted principally of locking programmer R.J. Mical into his office with a single Amiga and a bunch of art and music and not letting him out again until he had a game. “If we ever get a real computer movie,” ACE concluded, “this is where it’s going to come from.”
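
To make the data-driven idea behind MasterPlan a little more concrete, here’s a rough sketch of how a game’s structure can live in an editable table while a small, generic engine simply walks whatever table it’s handed. To be clear, this is purely my own illustration in modern Python: the scene names are invented, and the real MasterPlan was a HyperCard stack whose internals were never published.

```python
# A hypothetical, data-driven "master plan": the game's structure is a plain
# table a writer can edit, while a tiny generic engine merely interprets it.
# Scene names and text below are invented for illustration only.

MASTER_PLAN = {
    "cabin": {
        "text": "You wake in your cabin outside Lizard Breath.",
        "choices": {"drive into town": "town", "inspect the meteor site": "crater"},
    },
    "town": {
        "text": "Main Street is quiet. Too quiet.",
        "choices": {"visit the mayor": "mayor", "return to the cabin": "cabin"},
    },
    "crater": {
        "text": "Something enormous has been digging here...",
        "choices": {"collect a sample and head back": "town"},
    },
    "mayor": {
        "text": "The mayor laughs off your warnings. THE END (for now).",
        "choices": {},
    },
}

def run(plan, start="cabin"):
    """Walk the scene table, prompting for a choice at each node."""
    scene = start
    while True:
        node = plan[scene]
        print(node["text"])
        if not node["choices"]:
            break
        labels = list(node["choices"])
        for i, label in enumerate(labels, 1):
            print(f"  {i}. {label}")
        pick = input("> ").strip()
        if pick.isdigit() and 1 <= int(pick) <= len(labels):
            scene = node["choices"][labels[int(pick) - 1]]

if __name__ == "__main__":
    run(MASTER_PLAN)
```

The payoff of such an arrangement is that a writer reshapes the game by editing the table rather than the engine, which is essentially what made the later Ant-Heads! add-on possible.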

It Came from the Desert

While it’s debatable whether It Came from the Desert quite rises to that standard, it certainly is Cinemaware’s most earnest and successful attempt at crafting a true interactive narrative since King of Chicago. The premise is right in their usual B-movie wheelhouse. Based loosely on the campy 1950s classic Them!, the game takes place in a small desert town with the charming appellation of Lizard Breath that’s beset by an alarming number of giant radioactive ants, product of a recent meteor strike. You play a geologist in town; “the most interesting rocks always end up in the least interesting places,” notes the introduction wryly. Beginning in your cabin, you can move about the town and its surroundings as you will, interacting with its colorful cast of inhabitants via simple multiple-choice dialogs and getting into scrapes of various sorts which lead to the expected Cinemaware action sequences. Your first priority is largely to convince the townies that they have a problem in the first place; this task you can accomplish by collecting enough evidence of the threat to finally gain the attention of the rather stupefyingly stupid mayor. Get that far, and you’ll be placed in charge of the town’s overall defense, at which point a strategic aspect joins the blend of action and adventure to create a heady brew indeed. Your ultimate goal, which you have just fifteen days in total to accomplish, is to find the ants’ main nest and kill the queen.

It Came from the Desert excels in all the ways that most of Cinemaware’s interactive movies excel. The graphics and sound were absolutely spectacular in their day, and still serve very well today; you can well-nigh taste the gritty desert winds. What makes it a standout in the Cinemaware catalog, however, is the unusual amount of attention that’s been paid to the design — to your experience as the player. A heavily plot-driven game like this could and usually did go only one way in the 1980s. You probably know what I’m picturing: a long string of choke points requiring you to be in just the right place at just the right time to avoid being locked out of victory. Thankfully, It Came from the Desert steers well away from that approach. The plot is a dynamic thing rolling relentlessly onward, but your allies in the town are not entirely without agency of their own. If you fail to accomplish something, someone else might just help you out — perhaps not as quickly or efficiently as one might ideally wish, but at least you still feel you have a shot.

And even without the townies’ help, there are lots of ways to accomplish almost everything you need to. The environment as a whole is remarkably dynamic, far from the static set of puzzle pieces so typical of more traditional adventure games of this era and our own. There’s a lot going on under the hood in this one, far more than Cinemaware’s previous games would ever lead one to expect. Over the course of the fifteen days, the town’s inhabitants go from utterly unconcerned about the strange critters out there in the desert to full-on, backs-against-the-wall, fight-or-flight panic mode. By the end, when the ants are roaming at will through the rubble that once was Lizard Breath destroying anything and anyone in their path, the mood feels far more apocalyptic than that of any number of would-be “epic” games. One need only contrast the frantic mood at the end of the game with the dry, sarcastic tone of the beginning — appropriate to an academic stranded in a podunk town — to realize that one really does go on a narrative journey over the few hours it takes to play.

Which brings me to another remarkable thing: you can’t die in It Came from the Desert. If you lose at one of the action games, you wake up in the hospital, where you have the option of spending some precious time recuperating or trying to escape in shorter order via another mini-game. (No, I have no idea why a town the size of Lizard Breath should have a hospital.) In making sure that no individual challenge or decision becomes an all-or-nothing proposition, It Came from the Desert leaves room for the sort of improvisational derring-do that turns a play-through into a memorable, organic story. It’s not precisely that knowledge of past lives isn’t required; you’re almost certain to need several tries to finally save Lizard Breath. Yet each time you play you get to live a complete story, even if it is one that ends badly. Meanwhile you’re learning the lay of the land, learning to play more efficiently and getting steadily better at the action games, which are themselves unusually varied and satisfying by Cinemaware’s often dodgy standards. There are not just many ways to lose It Came from the Desert but also many paths to victory. Win or lose, your story in It Came from the Desert is your story; you get to own it. There’s a save-game feature, but I don’t recommend that you use it except as a bookmark when you really do need to do something else for a while. Otherwise just play along and let the chips fall where they may. At last, here we have a Cinemaware interactive movie that’s neither too easy nor too hard; this one is just right, challenging but not insurmountable.


It Came from the Desert evolves into a strategy game among other things, as you deploy the town’s forces to battle each new ant infestation while you continue the search for the main hive.

Widely and justifiably regarded among the old-school Amiga cognoscenti of today as Cinemaware’s finest hour, It Came from the Desert was clearly seen as something special within Cinemaware as well back in the day; one only has to glance at contemporary comments from those who worked on the game to sense their pride and excitement. There was a sense both inside and outside their offices that Cinemaware was finally beginning to crack a nut they’d been gnawing on for quite some time. Even Ken St. Andre was happy this time. “Cinemaware’s large creative team has managed to do a lot of things very well indeed in this game,” he wrote, “and as a result they have produced a game that looks great, sounds great, moves along at a rapid pace, is filled with off-the-wall humor without being dumb, and is occasionally both gripping and exciting.”

When It Came from the Desert proved a big commercial success, Cinemaware pulled together some ideas that had been left out of the original game due to space constraints, combined them with a plot involving the discovery of a second ant queen, and made it all into a sequel subtitled Ant-Heads!. Released at a relatively low price only as an add-on for the original game — thus foreshadowing a practice that would get more and more popular as the 1990s wore on — Ant-Heads! was essentially a new MasterPlan script that utilized the art and music assets from the original game, a fine demonstration of the power of Cinemaware’s new development system. It upped the difficulty a bit by straitening the time limit from fifteen days to ten, but otherwise played much like the original — which, considering how strong said original had been, suited most people just fine.

It Came from the Desert, along with the suite of tools used to create it, might very well have marked the start of exactly the new era of more sophisticated Cinemaware interactive movies that David Riordan intended it to be. As things shook out, however, it would have more to do with endings than beginnings. Cinemaware would manage just one more of these big productions before being undone by bad decisions, bad luck, and a changing marketplace. We’ll finish up with the story of their visionary if often flawed games soon. In the meantime, by all means go play It Came from the Desert if time and motivation allow. I was frankly surprised at how well it still held up when I tackled it recently, and I think it just might surprise you as well.

(Sources: The One from April 1989, June 1989, and June 1990; ACE from April 1990; Commodore Magazine from November 1988; Questbusters from September 1989, February 1990, and May 1990; Matt Barton’s interview with Bob Jacob on Gamasutra.)

 
 


The Manhole


Because the CD-ROM version of The Manhole sold in relatively small numbers in comparison to the original floppy version, the late Russell Lieblich’s surprisingly varied original soundtrack is too seldom heard today. So, in the best tradition of multimedia computing (still a very new and sexy idea in the time about which I’m writing), feel free to listen while you read.




Were HyperCard “merely” the essential bridge between Ted Nelson’s Xanadu fantasy and the modern World Wide Web, it would stand as one of the most important pieces of software of the 1980s. But, improbably, HyperCard was even more than that. It’s easy to get so dazzled by its early implementation of hypertext that one loses track entirely of the other part of Bill Atkinson’s vision for the environment. True to the Macintosh, “the computer for the rest of us,” Atkinson designed HyperCard as a sort of computerized erector set for everyday users who might not care a whit about hypertext for its own sake. With HyperCard, he hoped, “a whole new body of people who have creative ideas but aren’t programmers will be able to express their ideas or expertise in certain subjects.”

He made good on that goal. An incredibly diverse group of people worked with HyperCard, a group in which traditional hackers were very much the minority. Danny Goodman, the man who became known as the world’s foremost authority on HyperCard programming, was actually a journalist whose earlier experiences with programming had been limited to a few dabblings in BASIC. In my earlier article about hypertext and HyperCard, I wrote how “a professor of music converted his entire Music Appreciation 101 course into a stack.” Well, readers, I meant that literally. He did it himself. Industry analyst and HyperCard zealot Jan Lewis:

You can do things with it [HyperCard] immediately. And you can do sexy things: graphics, animation, sound. You can do it without knowing how to program. You get immediate feedback; you can make a change and see or hear it immediately. And as you go up on the learning curve — let’s say you learn how to use HyperTalk [the bundled scripting language] — again, you can make changes easily and simply and get immediate feedback. It just feels good. It’s fun!

And yet HyperCard most definitely wasn’t a toy. People could and did make great, innovative, commercial-quality software using it. Nowhere is the power of HyperCard — a cultural as well as a technical power — illustrated more plainly than in the early careers of Rand and Robyn Miller.


Rand and Robyn had a very unusual upbringing. The first and third of the four sons of a wandering non-denominational preacher, they spent their childhoods moving wherever their father’s calling took him: from Dallas to Albuquerque, from Hawaii to Haiti to Spokane. They were a classic pairing of left brain and right brain. Rand had taken to computers from the instant he was introduced to them via a big time-shared system whilst still in junior high, and had made programming them into his career. By 1987, the year HyperCard dropped, he was to all appearances settled in life: 28 years old, married with children, living in a small town in East Texas, working for a bank as a programmer, and nurturing a love for the Apple Macintosh (he’d purchased his first Mac within days of the machine’s release back in 1984). He liked to read books on science. His brother Robyn, seven years his junior, was still trying to figure out what to do with his life. He was attending the University of Washington in somewhat desultory fashion as an alleged anthropology major, but devoted most of his energy to drawing pictures and playing the guitar. He liked to read adventure novels.

HyperCard struck Rand Miller, as it did so many, with all the force of a revelation. While he was an accomplished enough programmer to make a living at it, he wasn’t one who particularly enjoyed the detail work that went with the trade. “There are a lot of people who love digging down into the esoterics of compilers and C++, getting down and dirty with typed variables and all that stuff,” he says. “I wanted a quick return on investment. I just wanted to get things done.” HyperCard offered the chance to “get things done” dramatically faster and more easily than any programming environment he had ever seen. He became an immediate convert.


With two small girls of his own, Rand felt keenly the lack of quality children’s software for the Macintosh. He hit upon the idea of making a sort of interactive storybook using HyperCard, a very natural application for a hypertext tool. Lacking the artistic talent to make a go of the pictures, he thought of his little brother Robyn. The two men, so far apart in years and geography and living such different lives, weren’t really all that close. Nevertheless, Rand had a premonition that Robyn would be the perfect partner for his interactive storybook.

But Robyn, who had never owned a computer and had never had any interest in doing so, wasn’t immediately enticed by the idea of becoming a software developer. Getting him just to consider the idea took quite a number of letters and phone calls. At last, however, Robyn made his way down to the Macintosh his parents kept in the basement of the family home in Spokane and loaded up the copy of HyperCard his brother had sent him. There, like so many others, he was seduced by Bill Atkinson’s creation. He started playing around, just to see what he could make. What he made right away became something very different from the interactive storybook, complete with text and metaphorical pages, that Rand had envisioned. Robyn:

I started drawing this picture of a manhole — I don’t even know why. You clicked on it and the manhole cover would slide off. Then I made an animation of a vine growing out. The vine was huge, “Jack and the Beanstalk”-style. And then I didn’t want to turn the page. I wanted to be able to navigate up the vine, or go down into the manhole. I started creating a navigable world by using the very simple tools [of HyperCard]. I created this place.  I improvised my way through this world, creating one thing after another. Pretty soon I was creating little canals, and a forest with stars. I was inventing it as I went. And that’s how the world was born.

For his part, Rand had no problem accepting the change in approach:

Immediately you are enticed to explore instead of turning the page. Nobody sees a hole in the ground leading downward and a vine growing upward and in the distance a fire hydrant that says, “Touch me,” and wants to turn the page. You want to see what those things are. Instead of drawing the next page [when the player clicked a hotspot], he [Robyn] drew a picture that was closer — down in the manhole or above on the vine. It was kind of a stream of consciousness, but it became a place instead of a book. He started sending me these images, and I started connecting them, trying to make them work, make them interactive.


In this fashion, they built the world of The Manhole together: Robyn pulling its elements from the flotsam and jetsam of his consciousness and drawing them on the screen, Rand binding it all together into a contiguous place, and adding sound effects and voice snippets here and there. If they had tried to make a real game of the thing, with puzzles and goals, such a non-designed approach to design would likely have gone badly wrong in a hurry.

Luckily, puzzles and goals were never the point of The Manhole. It was intended always as just an endlessly interesting space to explore. As such, it would prove capable of captivating children and the proverbial young at heart for hours, full as it was of secrets and Easter eggs hidden in the craziest of places. One can play with The Manhole on and off for literally years, and still continue to stumble upon the occasional new thing. Interactions are often unexpected, and unexpectedly delightful. Hop in a rowboat to take a little ride and you might emerge in a rabbit’s teacup. Start watching a dragon’s television — Why does a dragon have a television? Who knows! — and you can teleport yourself into the image shown on the screen to emerge at the top of the world. Search long enough, and you might just discover a working piano you can actually play. The spirit of the thing is perhaps best conveyed by the five books you find inside the friendly rabbit’s home: Alice in Wonderland; The Wind in the Willows; The Lion, the Witch, and the Wardrobe; Winnie the Pooh; and Metaphors of Intercultural Philosophy (“This book isn’t about anything!”). Like all of those books excepting, presumably, the last, The Manhole is pretty wonderful, a perfect blend of sweet cuteness and tart whimsy.


With no contacts whatsoever within the Macintosh software industry, the brothers decided to publish The Manhole themselves via a tiny advertisement in the back of Macworld magazine, taken out under the auspices of Prolog, a consulting company Rand had founded as a moonlighting venture some time before. They rented a tiny booth to show The Manhole publicly for the first time at the Hyper Expo in San Francisco in June of 1988. (Yes, HyperCard mania had gotten so intense that there were entire trade shows dedicated just to it.) There they were delighted to receive a visit from none other than HyperCard’s creator Bill Atkinson, with his daughter Laura in tow; not yet five years old, she had no trouble navigating through their little world. Incredibly, Robyn had never even heard the word “hypertext” prior to the show, had no idea about the decades of theory that underpinned the program he had used, savant-like, to create The Manhole. When he met a band of Ted Nelson’s disgruntled Xanadu disciples on the show floor, come to crash the HyperCard party, he had no idea what they were on about.

But the brothers’ most important Hyper Expo encounter was a meeting with Richard Lehrberg, Vice President for Product Development at Mediagenic,1 who took a copy of The Manhole away with him for evaluation. Lehrberg showed it to William Volk, whom he had just hired away from the small Macintosh and Amiga publisher Aegis to become Mediagenic’s head of technology; he described it to Volk unenthusiastically as “this little HyperCard thing” done by “two guys in Texas.” Volk was much more impressed. He was immediately intrigued by one aspect of The Manhole in particular: the way that it used no buttons or conventional user-interface elements at all. Instead, the pictures themselves were the interface; you could just click where you would and see what happened. It was perhaps a product of Robyn Miller’s sheer naïveté as much as anything else; seasoned computer people, so used to conventional interface paradigms, just didn’t think like that. But regardless of where it came from, Volk thought it was genius, a breaking down of a wall that had heretofore always separated the user from the virtual world. Volk:

The Miller brothers had come up with what I call the invisible interface. They had gotten rid of the idea of navigation buttons, which was what everyone was doing: go forward, go backward, turn right, turn left. They had made the scenes themselves the interface. You’re looking at a fire hydrant. You click on the fire hydrant; the fire hydrant sprays water. You click on the fire hydrant again; you zoom in to the fire hydrant, and there’s a little door on the fire hydrant. That was completely new.

Of course, other games did have you clicking “into” their world to make things happen; the point-and-click adventure genre was evolving rapidly during this period to replace the older parser-driven adventure games. But even games like Déjà Vu and Maniac Mansion, brilliantly innovative though they were, still surrounded their windows into their worlds with a clutter of “verb” buttons, legacies of the genre’s parser-driven roots. The Manhole, however, presented the player with nothing but its world. What with its defiantly non-Euclidean — not to say nonsensical — representation of space and its lack of goals and puzzles, The Manhole wasn’t a conventional adventure game by any stretch. Nevertheless, it pointed the way to what the genre would become, not least in the later works of the Miller brothers themselves.
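
Strip away the HyperCard specifics and the “invisible interface” reduces to a disarmingly simple dispatch problem: keep a list of hotspots for the current scene, map each click to whichever hotspot contains it, and do whatever that hotspot says, with no verbs or buttons in between. What follows is my own back-of-the-napkin illustration of that idea in Python; the scenes, coordinates, and two-click hydrant behavior are invented by way of example rather than lifted from The Manhole’s actual stacks.

```python
# A generic sketch of an "invisible interface": the scene itself is the only
# interface, so the engine just maps a click position to a hotspot's action.
# All scene layouts and coordinates below are made up for illustration.

from dataclasses import dataclass

@dataclass
class Hotspot:
    rect: tuple    # (left, top, right, bottom) in screen coordinates
    action: str    # "animate" or "goto"
    target: str    # animation name or destination scene

SCENES = {
    "street": [
        Hotspot((40, 120, 90, 200), "animate", "hydrant_sprays"),
        Hotspot((40, 120, 90, 200), "goto", "hydrant_closeup"),  # second click zooms in
        Hotspot((200, 150, 260, 220), "goto", "manhole_interior"),
    ],
    "hydrant_closeup": [
        Hotspot((100, 100, 140, 160), "goto", "inside_hydrant_door"),
    ],
}

seen_animations = set()

def handle_click(scene, x, y):
    """Return (action, target) for a click at (x, y), or (None, None) for scenery."""
    for spot in SCENES.get(scene, []):
        left, top, right, bottom = spot.rect
        if left <= x <= right and top <= y <= bottom:
            if spot.action == "animate":
                if spot.target in seen_animations:
                    continue  # animation already played; let the next hotspot take over
                seen_animations.add(spot.target)
            return spot.action, spot.target
    return None, None  # a click on empty scenery does nothing at all

print(handle_click("street", 60, 150))   # first click:  ('animate', 'hydrant_sprays')
print(handle_click("street", 60, 150))   # second click: ('goto', 'hydrant_closeup')
```

Note what happens when a click lands on empty scenery: nothing at all, with no “you can’t do that” message to remind you that you’re operating a piece of software.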

Much of Volk’s working life for the next two years would be spent on The Manhole, by the end of which period he would quite possibly be more familiar with its many nooks and crannies than its own creators were. He became The Manhole‘s champion inside Mediagenic, convincing his colleagues to publish it, thereby bringing it to a far wider audience than the Miller brothers could ever have reached on their own. Released by Mediagenic under their Activision imprint, it became a hit by the modest standards of the Macintosh consumer-software market. Macworld magazine named The Manhole the winner of their “Wild Card” category in a feature article on the best HyperCard stacks, while the Software Publishers Association gave it an “Excellence in Software” award for “Best New Use of a Computer.”


Well aware that The Manhole was collecting a certain chic cachet to itself, Mediagenic/Activision didn’t hesitate to play that angle up in their advertising.

Had that been that, The Manhole would remain historically interesting as both a delightful little curiosity of its era and as the starting point of the hugely significant game-development careers of the Miller brothers. Yet there’s more to the story.

William Volk, frustrated with the endless delays of CD-I and the state of paralysis the entire industry was in when it came to the idea of publishing entertainment software on CD, had been looking for some time for a way to break the logjam. It was Stewart Alsop, an influential tech journalist, who first suggested to Volk that the answer to his dilemma was already part of Mediagenic’s catalog — that The Manhole would be perfect for CD-ROM. Volk was just the person to see such a project through, having already experimented extensively with CD-ROM and CD-I at Aegis as well as Mediagenic. With the permission of the Miller brothers, he recruited Russell Lieblich, Mediagenic’s longstanding guru in all things music- and sound-related, to compose and perform a soundtrack for The Manhole which would play from the CD as the player explored.

An important difference separates the way the music worked in the CD-ROM version of The Manhole from the way it worked in virtually all computer games to appear before it. The occasional brief digitized snippet aside, music in computer games had always been generated on the computer, whether by sound chips like the Commodore 64’s famous SID or entire sound boards like the top-of-its-class Roland MT-32 (we shall endeavor to forget the horrid beeps and squawks that issued from the IBM PC and Apple II’s native sound hardware). But The Manhole’s music, though originally generated entirely or almost entirely on computers in Lieblich’s studio, was then recorded onto CD for digital playback, just like a song on a music CD. This method, made possible only by evolving computer sound hardware and, most importantly, by the huge storage capacity of a CD-ROM, would in the years to come slowly become simply the way that computer-game music was done. Today many big-budget titles hire entire orchestras to record soundtracks as elaborate and ambitious as the ones found in big Hollywood feature films, whilst also including digitized recordings of voices, squealing tires, explosions, and all the inevitable rest. In fact, surprisingly little of the sound present in most modern games is synthesized sound, a situation that has long since relegated elaborate setups like the Roland MT-32 to the status of white elephants; just pipe your digitized recording through a digital-to-analog converter and be done with it already.

As the very first title to go all digitized all the time, The Manhole didn’t have a particularly easy time of it; getting the music to play without breaking up or stuttering as the player explored presented a huge challenge on the Macintosh, a machine whose minimalist design burdened the CPU with all of the work of sound generation. However, Volk and his colleagues got it going at last. Published in the spring of 1989, the CD-ROM version of The Manhole marked a major landmark in the history of computing, the first American game — or, at least, software toy (another big buzzword of the age, as it happens) — to be released on CD-ROM.2 Volk, infuriated with Philips for the chaos and confusion CD-I’s endless delays had wrought in an industry he believed was crying out for the limitless vistas of optical storage, sent them a copy of The Manhole along with a curt note: “See! We did it! We’re tired of waiting!”
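
For what it’s worth, the classic remedy for exactly the kind of breakup and stutter Volk’s team was fighting is double buffering: keep two buffers in flight, playing from one while the next is being refilled from the disc. The sketch below is only a generic illustration of that technique, simulated in Python with made-up buffer sizes and no real audio hardware; I don’t know the details of the actual Macintosh implementation beyond what’s described above.

```python
# Double buffering in miniature: hand one buffer to the sound hardware while
# the CPU refills the other from the (slow) disc. Purely illustrative figures.

BUFFER_SIZE = 8192          # bytes per buffer; an arbitrary number for the sketch

def read_chunk(disc, offset, size):
    """Stand-in for a slow CD-ROM read."""
    return disc[offset:offset + size]

def stream(disc):
    offset = 0
    playing = read_chunk(disc, offset, BUFFER_SIZE)      # prime the first buffer
    offset += len(playing)
    while playing:
        # While the sound hardware drains `playing`, the next buffer must be
        # filled before the current one runs dry, or the listener hears a gap.
        pending = read_chunk(disc, offset, BUFFER_SIZE)
        offset += len(pending)
        yield playing                                     # hand buffer to the "DAC"
        playing = pending

if __name__ == "__main__":
    fake_disc = bytes(100_000)                            # 100 KB of silent samples
    total = sum(len(buf) for buf in stream(fake_disc))
    print(f"streamed {total} bytes in {BUFFER_SIZE}-byte buffers without a gap")
```

Trivial on paper; rather less so when the refilling has to be done from a first-generation CD-ROM drive by a CPU that is also busy drawing the screen.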

And they weren’t done yet. Having gotten The Manhole working on CD-ROM on the Macintosh, Volk and his colleagues at Mediagenic next tackled the daunting task of porting it to the most popular platform for consumer software, MS-DOS — a platform without HyperCard. To address this lack, Mediagenic developed a custom engine for CD-ROM titles on MS-DOS, dubbing it the Multimedia Applications Development Environment, or MADE.3 Mediagenic’s in-house team of artists redrew Robyn Miller’s original black-and-white illustrations in color, and The Manhole on CD-ROM for MS-DOS shipped in 1990.


In my opinion, The Manhole lost some of its unique charm when it was colorized for MS-DOS. The VGA graphics, impressive in their day, look just a bit garish and overdone today in comparison to the classic pen-and-ink style of the original.

The Manhole, idiosyncratic piece of artsy children’s software that it was, could hardly have been expected to break the industry’s optical logjam all on its own. In the end, one has to acknowledge that its CD-ROM incarnation was little more than the floppy version with a soundtrack playing in the background — a nice addition certainly, but perhaps not quite the transformative experience which all of the rhetoric surrounding CD-ROM’s potential might have led one to expect. It would take another few excruciating years for a CD-ROM drive to become a must-have accessory for everyday American computers. Yet every revolution has to start somewhere, and William Volk deserves his full measure of credit for doing what he could to push this one forward in the only way that could ultimately matter: by stepping up and delivering a real, tangible product at long last. As Steve Jobs used to say, “Real artists ship.”

The importance of The Manhole, existing as it does right there at the locus of so much that was new and important in computing in the late 1980s, can be read in so many ways that there’s always a danger of losing some of them in the shuffle. But it should never be forgotten whilst trying to sort through the tangle that this astonishingly creative little world was principally designed by someone who had barely touched a computer in his life before he sat down with HyperCard. That he wound up with something so fascinating is a huge tribute not just to Robyn Miller and his enabling brother Rand, but also to Bill Atkinson’s HyperCard itself. Apple has long since abandoned HyperCard, and we enjoy no precise equivalent to it today. Indeed, its vision of intuitive, non-pretentious, fun programming is one that we’re in danger of losing altogether. Being one who loves the computer most of all as the most exciting tool for creation ever invented, I can’t help but see that as a horrible shame.

The Miller brothers had, as most of you reading this probably know, a far longer future in front of them than HyperCard would get to enjoy. Already well before 1988 was through they had rechristened themselves Cyan Productions, a name that felt much more appropriate for a creative development house than the businesslike Prolog. As Cyan, they made two more pieces of children’s software, Cosmic Osmo and the Worlds Beyond the Mackerel and Spelunx and the Caves of Mr. Seudo. Both were once again made using HyperCard, and both were very much made in the spirit of The Manhole. And like The Manhole both were published on CD-ROM as well as floppy disk; the Miller brothers, having learned much from Mediagenic’s process of moving their first title to CD-ROM, handled the CD-ROM as well as the floppy versions themselves when it came to these later efforts. Opinions are somewhat divided on whether the two later Cyan children’s titles fully recapture the magic that has led so many adults and children alike over the years to spend so much time plumbing the depths of The Manhole. None, however, can argue with the significance of what came next, the Miller brothers’ graduation to games for adults — and, as it happens, another huge milestone in the slow-motion CD-ROM revolution. But that story, like so many others, is one that we’ll have to tell at another time.

(Sources: Amstrad Action of January 1990; Macworld of July 1988, October 1988, November 1988, March 1989, April 1989, and December 1989; Wired of August 1994 and October 1999; The New York Times of November 28 1989. Also the books Myst and Riven: The World of the D’ni by Mark J.P. Wolf and Prima’s Official Strategy Guide: Myst by Rick Barba and Rusel DeMaria, and the Computer Chronicles television episodes entitled “HyperCard,” “MacWorld Special 1988,” “HyperCard Update,” and “Hypertext.” Online sources include Robyn Miller’s Myst postmortem from the 2013 Game Developer’s Conference; Richard Moss’s Ludiphilia podcast; a blog post by Robyn Miller. Finally, my huge thanks to William Volk for sharing his memories and impressions with me in an interview and for sending me an original copy of The Manhole on CD-ROM for my research.

The Manhole: Masterpiece Edition, a remake supervised by the Miller brothers in 1994 which sports much-improved graphics and sound, is available for purchase on Steam.)


  1. Activision was renamed Mediagenic at almost the very instant that Lehrberg first met the Miller brothers. When the name change was greeted with universal derision, Activision/Mediagenic CEO Bruce Davis quickly began backpedaling on his hasty decision. The Manhole, for instance, was released by Mediagenic under their “Activision” label — which was odd because under the new ordering said label was supposed to be reserved for games, and The Manhole was considered children’s software, not a traditional game. I just stick with the name “Mediagenic” in this article as the least confusing way to address a confusing situation. 

  2. The first CD-based software to reach European consumers says worlds about the differences that persisted between American and European computing, and about the sheer can-do ingenuity that so often allowed British programmers in particular to squeeze every last ounce of potential out of hardware that was usually significantly inferior to that enjoyed by their American counterparts. Codemasters, a budget software house based in Warwickshire, came up with a unique shovelware package for the 1989 Christmas season. They transferred thirty old games from cassette to a conventional audio CD, which they then sold along with a special cable to run the output from an ordinary music-CD player into a Sinclair or Amstrad home computer. “Here’s your CD-ROM,” they said. “Have a ball.” By all accounts, Codemasters’s self-proclaimed “CD revolution,” kind of hilarious and kind of brilliant, did quite well for them. When it came to doing more with less in computing, you never could beat the Brits.

  3. MADE’s scripting language was to some extent based on AdvSys, a language for amateur text-adventure creation that never quite took off like the contemporaneous AGT did.

 
 


A Slow-Motion Revolution

CD-ROM

A quick note on terminology before we get started: “CD-ROM” can be used to refer either to the use of CDs as a data-storage format for computers in general or to the Microsoft-sponsored specification for same. I’ll be using the term largely in the former sense in the introduction to this article, in the latter after something called “CD-I” enters the picture. I hope the point of transition won’t be too hard to identify, but my apologies if this leads to any confusion. Sometimes this language of ours is a very inexact thing.



In the first week of March 1986, much of the computer industry converged on Seattle for the first annual Microsoft CD-ROM Conference. Microsoft had anticipated 500 to 600 attendees for the four-day event. Instead more than 1000 showed up, forcing the organizers to turn many of them away at the door of a conference center that by law could only accommodate 800 people. Between the presentations on CD-ROM’s bright future, the attendees wandered through an exhibit hall showcasing the format’s capabilities. The hit of the hall was what was about to become the first CD-ROM product ever offered for sale to the public: the text of all 21 volumes of the Grolier Academic Encyclopedia, some 200 MB in all, on a single disc. It was to be published by KnowledgeSet, a spinoff of Digital Research. Digital’s founder Gary Kildall, apparently forgiving Bill Gates his earlier trespasses in snookering a vital IBM contract out from under his nose, gave the conference’s keynote address.

Kildall’s willingness to forgive and forget in light of the bright optical-storage future that stood before the computer industry seemed very much in harmony with the mood of the conference as a whole. Sentiments often verged on the utopian, with talk of a new “paperless society” abounding, a revolution to rival that of Gutenberg. “The compact disc represents a major discontinuity in the cost of producing and distributing information,” said one Ed Schmid of DEC. “You have to go back to the invention of movable type and the printing press to find something equivalent.” The enthusiasm was so intense and the good vibes among the participants — many of them, like Gates and Kildall, normally the bitterest of enemies — so marked that some came to call the conference “the computer industry’s Woodstock.” If the attendees couldn’t quite smell peace and love in the air, they certainly could smell potential and profit.

All the excitement came down to a single almost unbelievable number: the 650 MB of storage offered by every tiny, inexpensive-to-manufacture compact disc. It’s very, very difficult to fully convey in our current world of gigabytes and terabytes just how inconceivably huge a figure 650 MB actually was in 1986, a time when a 40 MB hard drive was a cavernous, how-can-I-ever-possibly-fill-this-thing luxury found on only the most high-end computers. For developers who had been used to making their projects fit onto floppy disks boasting less than 1 MB of space, the idea of CD-ROM sounded like winning the lottery several times over. You could put an entire 21-volume encyclopedia on one of the things, for Pete’s sake, and still have more than two-thirds of the space left over! Suddenly one of the most nail-biting constraints against which they had always labored would be… well, not so much eased as simply erased. After all, how could anything possibly fill 650 MB?
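
If you want to put rough numbers on that capacity gap, the arithmetic is simple enough. The floppy and hard-drive figures below are the same round numbers used above, not precise specifications for any particular drive:

```python
# Back-of-the-envelope comparison of CD-ROM capacity to the storage of the day.
# 0.88 MB stands in for a typical late-1980s floppy; 40 MB for a high-end hard drive.

CD_ROM_MB = 650
FLOPPY_MB = 0.88
HARD_DRIVE_MB = 40

print(f"one CD-ROM holds roughly {CD_ROM_MB / FLOPPY_MB:.0f} floppy disks' worth of data,")
print(f"or about {CD_ROM_MB / HARD_DRIVE_MB:.0f} of 1986's cavernous 40 MB hard drives")
```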

And just in case that wasn’t enough great news, there was also the fact that the CD was a read-only format. If the industry as a whole moved to CD-ROM as its format of choice, the whole piracy problem, which organizations like the Software Publishers Association ardently believed was costing it billions every year, would dry up and blow away like a dandelion in the fall. Small wonder that the mood at the conference sometimes approached evangelistic fervor. Microsoft, as swept away with it all as anyone, published a collection of the papers that were presented there under the very non-businesslike, non-Microsoft-like title of CD-ROM: The New Papyrus. The format just seemed to demand a touch of rhapsodic poetry.

But the rhapsody wasn’t destined to last very long. The promised land of a software industry built around the effectively unlimited storage capacity of the compact disc would prove infuriatingly difficult to reach; the process of doing so would stretch over the better part of a decade, by the end of which time the promised land wouldn’t seem quite so promising anymore. Throughout that stretch, CD-ROM was always coming in a year or two, always the next big thing right there on the horizon that never quite arrived. This situation, so antithetical to the usual propulsive pace of computer technology, was brought about partly by limitations of the format itself which were all too easy to overlook amid the optimism of that first conference, and partly by a unique combination of external factors that sometimes almost seemed to conspire, perfect-storm-like, to keep CD-ROM out of the hands of consumers.



The compact disc was developed as a format for music by a partnership of the Dutch electronics giant Philips and Japan’s Sony during the late 1970s. Unlike the earlier analog laser-disc format for the storage of video, itself a joint project of Philips and the American media conglomerate MCA, the CD stored information digitally, as long strings of ones and zeros to be passed through digital-to-analog converters and thus turned into rich stereo sound. Philips and Sony published the final specifications for the music CD in 1980, opening up to others who wished to license the technology what would become known as the “Red Book” standard, after the color of the binder in which it was described. The first consumer-oriented CD players began to appear in Japan in 1982, and in the rest of the world the following year. Confined at first to the high-end audiophile market, the CD was, by the time of that first Microsoft CD-ROM Conference in 1986, already well on its way to overtaking the record album and, eventually, the cassette tape to become the most common format for music consumption all over the world.

There were good reasons for the CD’s soaring popularity. Not only did CDs sound better than all but the most expensive audiophile turntables, with a complete absence of hiss or surface noise, but, given that nothing actually touched the surface of a disc when it was being played, they could effectively last forever, no matter how many times you listened to them; “Perfect sound forever!” ran the tagline of an early CD advertising campaign. Then there was the way you could find any song you liked on a CD just by tapping a few buttons, as opposed to trying to drop a stylus on a record at just the right point or rewind and fast-forward a cassette to just the right spot. And then there was the way that CDs could be carried around and stored so much more easily than a record album, plus the way they could hold up to 75 minutes’ worth of music, enough to pack many double vinyl albums onto a single CD. Throw in the lack of a need to change sides to listen to a full album, and seldom has a new media format appeared that was so clearly better than the existing ones in almost all respects.

It didn’t take long for the computer industry to come to see the CD format, envisioned originally strictly as a music medium, as a natural one to extend to other types of data storage. Where the rubber met the road — or the laser met the platter — a CD player was just a mechanism for reading bits off the surface of the disc and sending them on to some other circuitry that knew what to do with them. This circuitry could just as easily be part of a computer as a stereo system.

Such a sanguine view was perhaps a bit overly reductionist. When one started really delving into the practicalities of the CD as a format for data storage, one found a number of limitations, almost all of them drawn directly from the technology’s original purpose as a music-delivery solution. For one thing, CD drives were only capable of reading data off a disc at a rate of 153.6 K per second, this figure being not coincidentally exactly the speed required to stream standard CD sound for real-time playback. Such a throughput was considered pretty good but hardly breathtaking by mid-1980s hard-disk standards; an average 10 MB hard drive of the period might have a transfer rate of about 96 K per second, although high-performance drives could triple or even quadruple that figure.
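That figure of 153.6 K, incidentally, falls straight out of the audio CD’s fixed playback rate: a drive reads 75 sectors per second no matter what is recorded on them, and a data sector of the sort later codified for CD-ROM devotes 2048 of its 2352 bytes to user data, the remainder going to synchronization, headers, and error correction. The arithmetic, spelled out as a minimal sketch:

```python
# Why CD-ROM throughput was pegged at 153.6 K per second
# (standard figures from the audio-CD and CD-ROM specifications).
SECTORS_PER_SECOND = 75       # fixed by real-time audio playback
USER_BYTES_PER_SECTOR = 2048  # payload of a "Mode 1" data sector; the other
                              # 304 bytes hold sync, header, and error correction

throughput = SECTORS_PER_SECOND * USER_BYTES_PER_SECTOR
print(f"{throughput} bytes per second = {throughput / 1000} K per second")  # 153600 -> 153.6
```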

More problematic was a CD drive’s atrocious seek speed — i.e., the speed at which files could be located for reading on a disc. An average 10 MB hard disk of 1986 had a typical seek time of about 100 milliseconds, a worst-case-scenario maximum of about 200 — although, again, high-performance models could improve on those figures by a factor of four. A CD drive, by contrast, had a typical seek time of 500 milliseconds, a maximum of 1000 — one full second. The designers of the music CD hadn’t been particularly concerned by the issue, for a music-CD player would spend the vast majority of its time reading linear streams of sound data. On those occasions when the user did request a certain track found deeper on the disc, even a full second spent by the drive in seeking her favorite song would hardly be begrudged, especially in comparison to the pain of trying to find something on a cassette or a record album. For storage of computer data, however, the slow seek speed gave far more cause for concern.

The Laser Magnetic Storage LaserDrive is typical of the oddball formats that proliferated during the early years of optical data storage. It could hold 1 GB on each side of a double-sided disc. Unfortunately, each disc cost hundreds of dollars, the unit itself thousands.

Given these issues of performance, which promised only to get more marked in comparison to hard drives as the latter continued to get faster, one might well ask why the industry was so determined to adapt the music CD specifically to data storage rather than using Philips and Sony’s work as a springboard to another optical format with affordances more suitable to the role. In fact, any number of companies did choose the latter course, developing optical formats in various configurations and capacities, many even offering the ability to write to as well as read from the disc. (Such units were called “WORM” drives, for “Write Once Read Many”; data, in other words, could be written to their discs, but not erased or rewritten thereafter.) But such drives, manufactured in minuscule quantities as essentially bespoke items, were doomed to be extremely expensive.

The CD, on the other hand, had the advantage of an existing infrastructure dedicated to stamping out the little silver discs and filling them with data. At the moment, that data consisted almost exclusively of encoded music, but the process of making the discs didn’t care a whit what the ones and zeros being burned into them actually represented. CD-ROM would allow the computer industry to piggy-back on an extant, mature technology that was already nearing ubiquity. That was a huge advantage when set against the cost of developing a new format from scratch and setting up a similar infrastructure to turn it out in bulk — not to mention the challenge of getting the chaotic, hyper-competitive computer industry to agree on another format in the first place. For all these reasons, there was surprisingly little debate on whether adapting the music CD to the purpose of data storage was really the best way to go. For better or for worse, the industry hitched its wagon to the CD; its infelicities as a general-purpose data-storage solution would just have to be worked around.

One of the first problems to be confronted was the issue of a logical file format for CD-ROM. The physical layout of the bits on a data CD was largely dictated by the design of the platters themselves and the machinery used to burn data into them. Yet none of that existing infrastructure had anything to say about how a filesystem appropriate for use with a computer should work within that physical layout. Microsoft, understanding that a certain degree of inter-operability was a valuable thing to have even among the otherwise rival platforms that might wind up embracing CD-ROM, pushed early for a standardized logical format. As a preliminary step on the road to that landmark first CD-ROM Conference, they brought together a more intimate group of eleven other industry leaders at the High Sierra Resort and Casino in Lake Tahoe in November of 1985 to hash out a specification. Among those present were Philips, Sony, Apple, and DEC; notably absent was IBM, a clear sign of Microsoft’s growing determination to step out of the shadow of Big Blue and start dictating the direction of the industry in their own right. The so-called “High Sierra” format would be officially published in finalized form in May of 1986.

In the run-up to the first Microsoft CD-ROM Conference, then, everything seemed to be coming together nicely. CD-ROM had its problems, but virtually everyone agreed that it was a tremendously exciting development. For their part, Microsoft had established themselves as the driving force behind the nascent optical revolution, driven by a Bill Gates who was personally passionate about the format and keenly aware that his company, the purveyor of clunky old MS-DOS, needed a cutting-edge project to rival Apple’s, if only for reasons of public relations. And then, just five days before the conference was scheduled to convene — timing that struck very few as accidental — Philips injected a seething ball of chaos into the system via something called CD-I.

CD-I was a different, competing file format for CD data storage. But CD-I was also much, much more. Excited by the success the music CD had enjoyed, Philips, with the tacit support of Sony, had decided to adapt the format into the all-singing, all-dancing, all-around future of home entertainment in the abstract. Philips would be making a CD-I box for the home, based on a minimalist operating system called OS-9 running on a Motorola 68000 processor. But this would be no typical home computer; the user would be able to control CD-I entirely using a VCR-style remote control. CD-I was envisioned as the interactive television of the future, a platform for not only conventional videogames but also lifestyle products of every description, from interactive astronomy lessons to the ultimate in exercise tapes. Philips certainly wasn’t short of ideas:

Think of owning an encyclopedia which presents chosen topics in several different ways. Watching a short audio/video sequence to gain a general background to the topic. Then choosing a word or subject for more in-depth study. Jumping to another topic without losing your place — and returning again after studying the related topic to proceed further. Or watching a cartoon film, concert, or opera with the interactive capabilities of CD-I added. Displaying the score, libretto, or text onscreen in a choice of languages. Or removing one singer or instrument to be able to sing along with the music.

Just as they had with the music CD, Philips would license the specifications to whoever else wanted to make gadgets of their own capable of playing the CD-I discs. They declared confidently that there would be as many CD-I players in the world as phonographs within a few years of the format’s debut, that “in the long run” CD-I “could be every bit as big as the CD-audio market.”

Already at the Microsoft CD-ROM Conference, Philips began aggressively courting developers in the existing computer-games industry to embrace CD-I. Plenty of them were more than happy to do so. Despite the optimism that dominated at the conference, it wasn’t clear how much priority Microsoft, who earned the vast majority of their money from business computing, would really give to more consumer-focused applications of CD-ROM like gaming. Philips, on the other hand, was a giant of consumer electronics. While they paid due lip service to applications of CD-I in areas like corporate training, it was always clear that it would be first and foremost a technology for the living room, one that comprehensively addressed what most believed was the biggest factor limiting the market for conventional computer games: that the machines that ran them were just too fiddly to operate. At the time that CD-I was first announced, the videogame console was almost universally regarded as a dead fad; the machine that would so dramatically reverse that conventional wisdom, the Nintendo Entertainment System, was still an oddball upstart being sold in selected markets only. Thus many game makers saw CD-I as their only viable route out of the back bedroom and into the living room — into the mainstream of home entertainment.

So, when Philips spoke, the game developers listened. Many publishers, including big powerhouses like Activision as well as smaller boutique houses like the 68000 specialists Aegis Development, committed to CD-I projects during 1986, receiving in return a copy of the closely guarded “Green Book” that detailed the inner workings of the system. There was no small pressure to get in on the action quickly, for Philips was promising to ship the first finished CD-I units in time for the Christmas of 1987. Trip Hawkins of Electronic Arts made CD-I a particular priority, forming a whole new in-house development division for the platform. He’d been waiting for a true next-generation mainstream game machine for years. At first, he’d thought the Commodore Amiga would be that machine, but Commodore’s clueless marketing and the Amiga’s high price were making such an outcome look less and less likely. So now he was looking to CD-I, which promised graphics and sound as good as those of the Amiga, along with the all but infinite storage of the unpirateable CD format, and all in a tidy, inexpensive package designed for the living room. What wasn’t to like? He imagined Silicon Valley becoming “the New Hollywood,” imagined a game like Electronic Arts’s hit Starflight remade as a CD-I experience.

You could actually do it just like a real movie. You could hire a costume designer from the movie business, and create special-effects costumes for the aliens. Then you’d videotape scenes with the aliens, and have somebody do a soundtrack for the voices and for the text that they speak in the game.

Then you’d digitize all of that. You could fill up all the space on the disc with animated aliens and interesting sounds. You would also have a universe that’s a lot more interesting to look at. You might have an out-of-the-cockpit view, like Star Trek, with planets that look like planets — rotating, with detailed zooms and that sort of thing.

Such a futuristic vision seemed thoroughly justifiable based on Philips’s CD-I hype, which promised a rich multimedia environment combining CD-quality stereo sound with full-motion video, all at a time when just displaying a photo-realistic still image captured from life on a computer screen was considered an amazing feat. (Among extant personal computers, only the Amiga could manage it.) When developers began to dive into the Green Book, however, they found the reality of CD-I often sharply at odds with the hype. For instance, if you decided to take advantage of the CD-quality audio, you had to tie up the CD drive entirely to stream it, meaning you couldn’t use it to fetch pictures or video or anything else for this supposed rich multimedia environment.

Video playback became an even bigger sore spot that echoed back to those fundamental limitations that had been baked into the CD when it was regarded only as a medium for music delivery. A transfer rate of barely 150 K per second just wasn’t much to work with in terms of streaming video. Developers found themselves stymied by an infuriating Catch-22. If you tried to work with an uncompressed or only modestly compressed video format, you simply couldn’t read it off the disc fast enough to display it in real time. Yet if you tried to use more advanced compression techniques, it became so expensive in terms of computation to decompress the data that the CD-I unit’s 68000 CPU couldn’t keep up. The best you could manage was to play video snippets that only filled a quarter of the screen — not a limitation that felt overly compatible with the idea of CD-I as the future of home entertainment in the abstract. It meant that a game like the old laser-disc-driven arcade favorite Dragon’s Lair, the very sort of thing people tended to think of first when you mentioned optical storage in the context of entertainment, would be impossible with CD-I. The developers who had signed contracts with Philips and committed major resources to CD-I could only soldier on and hope the technology would continue to evolve.

By 1987, then, the CD as a computer format had been split into two camps. While the games industry had embraced CD-I, the powers that were in business computing had jumped aboard the less ambitious, Microsoft-sponsored standard of CD-ROM, which solved issues like the problematic video playback of CD-I by the simple expedient of having nothing at all to say about them. Perhaps the most impressive of the very early CD-ROM products was the Microsoft Bookshelf, which combined Roget’s Thesaurus, The American Heritage Dictionary, The Chicago Manual of Style, The World Almanac and Book of Facts, and Bartlett’s Familiar Quotations alongside spelling and grammar checkers, a ZIP Code directory, and a collection of forms and form letters, all on a single disc — as fine a demonstration of the potential of the new format as could be imagined short of all that rich multimedia that Philips had promised. Microsoft proudly noted that Bookshelf was their largest single product ever in terms of the number of bits it contained and their smallest ever in physical size. Nevertheless, with most drives costing north of $1000 and products to use with them like Microsoft Bookshelf hundreds more, CD-ROM remained a pricey proposition found in vanishingly few homes — and for that matter not in all that many businesses either.

But at least actual products were available in CD-ROM format, which was more than could be said for CD-I. As 1986 turned into 1987, developers still hadn’t received any CD-I hardware at all, being forced to content themselves with printed specifications and examples of the system in action distributed on videotape by Philips. Particularly for a small company like Aegis, which had committed heavily to a game based on Jules Verne’s 20,000 Leagues Under the Sea, for which they had recruited Jim Sachs of Defender of the Crown fame as illustrator, it was turning into a potentially dangerous situation.

The computer industry — even those parts of it now more committed to CD-I than CD-ROM — dutifully came together once again for the second Microsoft CD-ROM Conference in March of 1987. In contrast to the unusual Pacific Northwest sunshine of the previous conference, the weather this year seemed to match the more unsettled mood: three days of torrential downpour. It was a more skeptical and decidedly less Woodstock-like audience who filed into the auditorium one day for a presentation by as unlikely a party as the venerable old American conglomerate General Electric. But in the course of that presentation, the old rapture came back in a hurry, culminating in a spontaneous standing ovation. What had so shocked and amazed the audience was the impossible made real: full-screen video running in real time off a CD drive connected to what to all appearances was an ordinary IBM PC/AT computer. Digital Video Interactive, or DVI, had just made its dramatic debut.

DVI’s origins dated back to 1983, when engineer Larry Ryan of another old-school American company, RCA, had been working on ways to make the old analog laser-disc technology more interactive. Growing frustrated with the limitations he kept bumping against, he proposed to his bosses that RCA dump the laser disc from the equation entirely and embrace digital optical storage. They agreed, and a new project on those lines was begun in 1984. It was still ongoing two years later — just reaching the prototype stage, in fact — when General Electric acquired RCA.

DVI worked by throwing specialized hardware at the problem which Philips had been fruitlessly trying to solve via software alone. By using ultra-intensive compression techniques, it was possible to crunch video playing at a resolution of 256 × 240 — not an overwhelming resolution even by the standards of the day, but not that far below the practical resolution of a typical television set either — down to a size below 153.6 K per second of footage without losing too much quality. This fact was fairly well known, not least to Philips. The bottleneck had always been the cost of decompressing the footage fast enough to get it onto the screen in real time. DVI attacked this problem via a hardware add-on that consisted principally of a pair of semi-autonomous custom chips designed just for the task of decompressing the video stream as quickly as possible. DVI effectively transformed the potential 75 minutes of sound that could be stored on a CD into 75 minutes of video.
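To appreciate why brute-force silicon was called for, a rough calculation helps. The figures below are purely illustrative — 8 bits per pixel and 30 frames per second are assumptions of mine, not the actual DVI pixel formats, which were more elaborate — but the order of magnitude is the point:

```python
# Back-of-the-envelope compression requirement for full-screen video off a CD
# (illustrative assumptions: 8 bits per pixel, 30 frames per second).
WIDTH, HEIGHT = 256, 240
BYTES_PER_PIXEL = 1
FRAMES_PER_SECOND = 30
CD_BUDGET = 153_600            # bytes per second available from the drive

raw_rate = WIDTH * HEIGHT * BYTES_PER_PIXEL * FRAMES_PER_SECOND
print(f"Raw video: {raw_rate / 1_000_000:.1f} MB per second")        # about 1.8 MB/second
print(f"Compression needed: at least {raw_rate / CD_BUDGET:.0f}:1")  # roughly 12:1
```

Squeezing a dozen-to-one or better in real time was exactly the sort of job that general-purpose CPUs of the day could not manage, and dedicated decompression chips could.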

Philosophically, the design bore similarities to the Amiga’s custom chips — similarities which became even more striking when you considered some of the other capabilities that came almost as accidental byproducts of the design. You could, for instance, overlay conventional graphics onto the streaming video by using the computer’s normal display circuitry in conjunction with DVI, just as you could use an Amiga to overlay titles and other graphics onto a “genlocked” feed from a VCR or other video source. But the difference with DVI was that it required no complicated external video source at all, just a CD in the computer’s CD drive. The potential for games was obvious.

In this demonstration of DVI’s potential, the user can explore an ancient Mayan archeological site that’s depicted using real-world video footage, while the icons used as controls are traditional computer graphics.

Still, DVI’s dramatic debut had barely ended before the industry’s doubts began. It seemed clear enough that DVI was technically better than CD-I, at least in the hugely important area of video playback, but General Electric — hardly anyone’s idea of a nimble innovator — offered as yet no clear road map for the technology, no hint of what they really planned to do with it. Should game developers place their CD-I projects on hold to see if something better really was coming in the form of DVI, or should they charge full speed ahead and damn the torpedoes? Some did one, some did the other; some made halfhearted commitments to both technologies, some vacillated between them.

But worst of all was the effect that DVI had on Philips. That presentation threw them into a spin from which they never really recovered. Fearful of getting their clock cleaned in the marketplace by a General Electric product based on DVI, Philips stopped CD-I in its tracks, demanding that a way be found to make it do full-screen video as well. From an original plan to ship the first finished CD-I units in time for Christmas 1987, the timetable slipped to promise the first prototypes for developers by January of 1988. Then that deadline also came and went, and all that developers had received were software emulators. Now the development prototypes were promised by summer 1988, finished units expected to ship in 1989. The delays notwithstanding, Philips still confidently predicted sales in “the tens of millions.” But then world domination was delayed again until 1990, then 1991.

Prototype CD-I units finally began reaching developers in early 1989, years behind schedule.

Wanting CD-I to offer the best of everything, Philips let the project chase its own tail for years, trying to address every actual or potential innovation from every actual or potential rival. The game publishers who had jumped aboard with such enthusiasm in the early days were wracked with doubt upon the announcement of each successive delay. Should they jump off the merry-go-round now and cut their losses, or should they stay the course in the hope that CD-I finally would turn into the revolutionary product Philips had been promising for so long? To this day, you merely have to mention CD-I to even the most mild-mannered old games-industry insider to be greeted with a torrent of invective. Philips’s merry-go-round cost the industry dearly. Some smaller developers who had trusted Philips enough to bet their very survival on CD-I paid the ultimate price. Aegis, for example, went out of business in 1990 with CD-I still vaporware.

While CD-I chased its tail, General Electric, the unwitting instigators of all this chaos, tried to decide in their slow, bureaucratic way what to do with this DVI thing they’d inherited. Thus things were as unsettled as ever on the CD-I and DVI fronts when the third Microsoft CD-ROM Conference convened in March of 1988. The old plain-Jane CD-ROM format, however, seemed still to be advancing slowly but steadily. Certainly Microsoft appeared to be in fine fettle; harking back to the downpour that had greeted the previous year’s conference, they passed out oversized gold umbrellas to everyone — emblazoned, naturally, with the Microsoft logo in huge type. They could announce at their conference that the High Sierra logical format for CD-ROM had been accepted, with some modest modifications to support languages other than English, by the International Organization for Standardization as something that would henceforward be known as “ISO 9660.” (It remains the standard logical format for CD-ROM to this day.) Meanwhile Philips and Sony were about to begrudgingly codify the physical format for CD-ROM, extant already as a de facto standard for several years now, as the Yellow Book, latest addition to a library of binders that was turning into quite the rainbow. Apple, who had previously been resistant to CD-ROM, driven as it was by their arch-rival Microsoft, showed up with an official CD-ROM drive for a Macintosh or even an Apple II, albeit at a typically luxurious Apple price of $1200. Even IBM showed up for the conference this time, albeit with a single computer attached to a non-IBM CD-ROM drive and a carefully noncommittal official stance on all this optical evangelism.
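ISO 9660’s on-disc layout, for what it’s worth, is simple enough to poke at by hand even today. The sketch below is purely illustrative and mine rather than anything presented at the conference — the file name is hypothetical — but it shows how the format’s primary volume descriptor, which always begins at sector 16 of the disc, can be read from a disc image using the well-documented offsets of the standard:

```python
# Minimal sketch: read the ISO 9660 primary volume descriptor from a disc image.
# Volume descriptors start at sector 16; every logical sector is 2048 bytes.
import struct

SECTOR_SIZE = 2048

with open("example.iso", "rb") as f:        # hypothetical disc image
    f.seek(16 * SECTOR_SIZE)                # the primary volume descriptor lives here
    pvd = f.read(SECTOR_SIZE)

assert pvd[0] == 1                          # descriptor type 1 = primary volume descriptor
assert pvd[1:6] == b"CD001"                 # standard identifier required by ISO 9660

volume_id = pvd[40:72].decode("ascii").rstrip()        # 32-byte, space-padded volume name
block_count = struct.unpack_from("<I", pvd, 80)[0]     # little-endian half of the both-endian field
block_size = struct.unpack_from("<H", pvd, 128)[0]     # usually 2048

print(volume_id, block_count * block_size, "bytes on the volume")
```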

As CD-ROM gathered momentum, the stories of DVI and CD-I alike were already beginning to peter out in anticlimax. After doing little with DVI for eighteen long months, General Electric finally sold it to Intel at the end of 1988, explaining that DVI just “didn’t mesh with [their] strategic plans.” Intel began shipping DVI setups to early adopters in 1989, but they cost a staggering $20,000 — a long, long way from a reasonable consumer price point. DVI continued to lurch along into the 1990s, but the price remained too high. Intel, possessed of no corporate tradition of marketing directly to consumers, often seemed little more motivated to turn DVI into a practical product than had been General Electric. Thus did the technology that had caused such a sensation and such disruption in 1987 gradually become yesterday’s news.

Ironically, we can lay the blame for the creeping irrelevancy of DVI directly at the feet of the work for which Intel was best known. As Gordon Moore — himself an Intel man — had predicted decades before, the overall throughput of Intel’s most powerful microprocessors continued to double every two years or so. This situation meant that the problem DVI addressed through all that specialized hardware — that of conventional general-purpose CPUs not having enough horsepower to decompress an ultra-compressed video stream fast enough — wasn’t long for this world. And meanwhile other engineers were attacking the problem from the other side, addressing the standard CD’s reading speed of just 153.6 K per second. They realized that by spinning the disc at an integral multiple of its original speed, a drive’s reading speed (and, to a lesser extent, its seeking speed) could be increased correspondingly. Soon so-called “2X” drives began to appear, capable of reading data at just over 300 K per second, followed in time by “4X” drives, “8X” drives, and whatever unholy figure they’ve reached by today. These developments rendered all of the baroque circuitry of DVI pointless, a solution in search of a problem. Who needed all that complicated stuff?
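The arithmetic behind those “X” ratings is nothing more than a straight multiple of the original rate. A trivial illustration, using nominal figures only, since real-world drives rarely sustained them across a whole disc:

```python
# Nominal throughput of "NX" CD-ROM drives: simple multiples of the
# original 153.6 K/second rate (nominal, best-case figures).
BASE_RATE_K = 153.6

for multiplier in (1, 2, 4, 8):
    print(f"{multiplier}X drive: about {BASE_RATE_K * multiplier:.0f} K per second")
```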

CD-I’s end was even more protracted and ignominious. The absurd wait eventually got to be too much for even the most loyal CD-I developers. One by one, they dropped their projects. It marked a major tipping point when in 1989 Electronic Arts, the most enthusiastic of all the software publishers in the early days of CD-I, closed down the department they had formed to develop for the platform, writing off millions of dollars on the aborted venture. In another telling sign of the times, Greg Riker, the manager of that department, left Electronic Arts to work for Microsoft on CD-ROM.

When CD-I finally trickled onto store shelves just a few weeks shy of Christmas 1991, it was able to display full-screen video of a sort but only in 128 colors, and was accompanied by an underwhelming selection of slapdash games and lifestyle products, most funded by Philips themselves, that were a far cry from those halcyon expectations of 1986. CD-I sales disappointed — immediately, consistently, and comprehensively. Philips, nothing if not persistent, beat the dead horse for some seven years before giving up at last, having sold only 1 million units in total, many of them at fire-sale discounts.

In the end, the big beneficiary of the endless CD-I/DVI standoff was CD-ROM, the simple, commonsense format that had made its public debut well before either of them. By 1993 or so, you didn’t need anything special to play video off a CD at a quality equivalent to or better than what had seemed so amazing in 1987; an up-to-date CPU combined with a 2X CD-ROM drive would do the job just fine. The Microsoft standard had won out. Funny how often that happened in the 1980s and 1990s, isn’t it?

Bill Gates’s reputation as a master Machiavellian being what it is, I’ve heard it suggested that the chaos and indecision which followed the public debut of DVI had been consciously engineered by him — that he had convinced a clueless General Electric to give that 1987 demonstration and later convinced Intel to keep DVI at least ostensibly alive, thus paralyzing Philips long enough for everyday PC hardware and vanilla CD-ROM to win the day, all the while knowing full well that DVI would never amount to anything. That sounds a little far-fetched to this writer, but who knows? Philips’s decision to announce CD-I five days before Microsoft’s CD-ROM Conference had clearly been a direct shot across Bill Gates’s bow, and such challenges did tend not to end well for the challenger. Anything else is, and must likely always remain, mere speculation.

(Sources: Amazing Computing of May 1986; Byte of May 1986, October 1986, April 1987, January 1989, May 1989, and December 1990; Commodore Magazine of November 1988; 68 Micro Journal of August/September 1989; Compute! of February 1987 and June 1988; Macworld of April 1988; ACE of September 1989, March 1990, and April 1990; The One of October 1988 and November 1988; Sierra On-Line’s newsletter of Autumn 1989; PC Magazine of April 29 1986; the premiere issue of AmigaWorld; episodes of the Computer Chronicles television series entitled “Optical Storage Devices,” “CD-ROMs,” and “Optical Storage”; the book CD-ROM: The New Papyrus from the Microsoft Press. Finally, my huge thanks to William Volk, late of Aegis and Mediagenic, for sharing his memories and impressions of the CD wars with me in an interview.)

 