
A Web Around the World, Part 9: A Network of Networks

UCLA will become the first station in a nationwide computer network which, for the first time, will link together computers of different makes and using different machine languages into one time-sharing system. Creation of the network represents a major step in computer technology and may serve as the forerunner of large computer networks of the future. The ambitious project is supported by the Defense Department’s Advanced Research Projects Agency (ARPA), which has pioneered many advances in computer research, technology, and applications during the past decade.

The system will, in effect, pool the computer power, programs, and specialized know-how of about fifteen computer-research centers, stretching from UCLA to MIT. Other California network stations (or nodes) will be located at the Rand Corporation and System Development Corporation, both of Santa Monica; the Santa Barbara and Berkeley campuses of the University of California; Stanford University and the Stanford Research Institute.

The first stage of the network will go into operation this fall as a sub-net joining UCLA, Stanford Research Institute, UC Santa Barbara, and the University of Utah. The entire network is expected to be operational in late 1970.

Engineering professor Leonard Kleinrock, who heads the UCLA project, describes how the network might handle a sample problem:

Programmers at Computer A have a blurred photo which they want to bring into focus. Their program transmits the photo to Computer B, which specializes in computer graphics, and instructs Computer B’s program to remove the blur and enhance the contrast. If B requires specialized computational assistance, it may call on Computer C for help. The processed work is shuttled back and forth until B is satisfied with the photo, and then sends it back to Computer A. The messages, ranging across the country, can flash between computers in a matter of seconds, Dr. Kleinrock says.

Each computer in the network will be equipped with its own interface message processor (IMP), which will double as a sort of translator among the Babel of computer languages and as a message handler and router.

Computer networks are not an entirely new concept, notes Dr. Kleinrock. The SAGE radar defense system of the fifties was one of the first, followed by the airlines’ SABRE reservation system. However, [both] are highly specialized and single-purpose systems, in contrast to the planned ARPA system which will link a wide assortment of different computers for a wide range of unclassified research functions.

“As of now, computer networks are still in their infancy,” says Dr. Kleinrock. “But as they grow up and become more sophisticated, we will probably see the spread of ‘computer utilities,’ which, like present electric and telephone utilities, will serve individual homes and offices across the country.”

— UCLA press release dated July 3, 1969 (which may include the first published use of the term “router”)



In July of 1968, Larry Roberts sent out a request for bids to build the ARPANET’s interface message processors — the world’s very first computer routers. More than a dozen proposals were received in response, some of them from industry heavy hitters like DEC and Raytheon. But when Roberts and Bob Taylor announced their final decision at the end of the year, everyone was surprised to learn that they had given the contract to the comparatively tiny firm of Bolt Beranek and Newman.

BBN, as the company was more typically called, came up in our previous article as well; J.C.R. Licklider was working there at the time he wrote his landmark paper on “human-computer symbiosis.” Formed in 1948 as an acoustics laboratory, BBN moved into computers in a big way during the 1950s, developing in the process a symbiotic relationship of its own with MIT. Faculty and students circulated freely between the university and BBN, which became a hacker refuge, tolerant of all manner of eccentricity and uninterested in such niceties as dress codes and stipulated working hours. A fair percentage of BBN’s staff came to consist of MIT dropouts, young men who had become too transfixed by their computer hacking to keep up with the rest of their coursework.

BBN’s forte was one-off, experimental contracts, not the sort of thing that led directly to salable commercial products but that might eventually do so ten or twenty years in the future. In this sense, the ARPANET was right up their alley. They won the bid by submitting a more thoughtful, detailed proposal than anyone else, even going so far as to rewrite some of ARPA’s own specifications to make the IMPs operate more efficiently.

Like all of the other bidders, BBN didn’t propose to build the IMPs from scratch, but rather to adapt an existing computer for the purpose. Their choice was the Honeywell 516, one of a new generation of robust integrated-circuit-based “minicomputers,” which distinguished themselves by being no larger than the typical refrigerator and being able to run on ordinary household current. Since the ARPANET would presumably need a lot of IMPs if it proved successful, the relatively cheap and commonplace Honeywell model seemed a wise choice.

The Honeywell 516, the computer model which was transformed into the world’s first router.

Still, the plan was to start as small as possible. The first version of the ARPANET to go online would include just four IMPs, linking four research clusters together. Surprisingly, MIT was not to be one of them; it was left out because the other inaugural sites were out West and ARPA didn’t want to pay AT&T for a transcontinental line right off the bat. Instead the Universities of California at Los Angeles and Santa Barbara each got the honor of being among the first to join the ARPANET, as did the University of Utah and the Stanford Research Institute (SRI), an adjunct to Stanford University. ARPA wanted BBN to ship the first turnkey IMP to UCLA by September of 1969, and for all four of the inaugural nodes to be up and running by the end of the year. Meeting those deadlines wouldn’t be easy.

The project leader at BBN was Frank Heart, a man known for his wide streak of technological paranoia — he had a knack for zeroing in on all of the things that could go wrong with any given plan — and for being “the only person I knew who spoke in italics,” as his erstwhile BBN colleague Severo Ornstein puts it. (“Not that he was inflexible or unpleasant — just definite.”) Ornstein himself, having moved up in the world of computing since his days as a hapless entry-level “Crosstelling” specialist on the SAGE project, worked under Heart as the principal hardware architect, while an intense young hacker named Will Crowther, who loved caving and rock climbing almost as much as computers, supervised the coding. At the start, they all considered the Honeywell 516 a well-proven machine, given that it had been on the market for a few years already. They soon learned to their chagrin, however, that no one had ever pushed it as hard as they were now doing; obscure flaws in the hardware nearly derailed the project on more than one occasion. But they got it done in the end. The first IMP was shipped across the country to UCLA right on schedule.

The team from Bolt Beranek and Newman who created the world’s first routers. Severo Ornstein stands at the extreme right, Will Crowther just next to him. Frank Heart is near the center, the only man wearing a necktie.


On July 20, 1969, American astronaut Neil Armstrong stepped onto the surface of the Moon, marking one culmination of that which had begun with the launch of the Soviet Union’s first Sputnik satellite twelve years earlier. Five and a half weeks after the Moon landing, another, much quieter result of Sputnik became a reality. The first public demonstration of a functioning network router was oddly similar to some of the first demonstrations of Samuel Morse’s telegraph, in that it was an exercise in sending a message around a loop that led it right back to the place where it had first come from. A Scientific Data Systems Sigma 7 computer at UCLA sent a data packet to the IMP that had just been delivered, which was sitting right beside it. Then the IMP duly read the packet’s intended destination and sent it back where it had come from, to appear as text on a monitor screen.

There was literally nowhere else to send it, for only one IMP had been built to date and only this one computer was yet possessed of the ability to talk to it. The work of preparing the latter had been done by a team of UCLA graduate students working under Leonard Kleinrock, the man whose 1964 book had popularized the idea of packet switching. “It didn’t look like anything,” remembers Steve Crocker, a member of Kleinrock’s team. But looks can be deceiving; unlike the crowd of clueless politicians who had once watched Morse send a telegraph message in a ten-mile loop around the halls of the United States Congress, everyone here understood the implications of what they were witnessing. The IMPs worked.

Bob Taylor, the man who had pushed and pushed until he found a way to make the ARPANET happen, chose to make this moment of triumph his ironic exit cue. A staunch opponent of the Vietnam War, he had been suffering pangs of conscience over his role as a cog in the military-industrial complex for a long time, even as he continued to believe in the ARPANET’s future value for the civilian world. After Richard Nixon was elected president in November of 1968, he had decided that he would stay on just long enough to get the IMPs finished, by which point the ARPANET as a whole would hopefully be past the stage where cancellation was a realistic possibility. He stuck to that decision; he resigned just days after the first test of an IMP. His replacement was Larry Roberts — another irony, given that Taylor had been forced practically to blackmail Roberts into joining ARPA in the first place. Taylor himself would land at Xerox’s new Palo Alto Research Center, where over the course of the new decade he would help to invent much else that has become an everyday part of our digital lives.

About a month after the test of the first IMP, BBN shipped a second one, this time to the Stanford Research Institute. It was connected to its twin at UCLA by an AT&T long-distance line. Another, local cable was run from it to SRI’s Scientific Data Systems 940 computer, which was normally completely incompatible with UCLA’s Sigma machine despite coming from the same manufacturer. In this case, however, programmers at the two institutions had hacked together a method of echoing text back and forth between their computers — assuming it worked, that is; they had had no way of actually finding out.

On October 29, 1969, a UCLA student named Charley Kline, sitting behind his Sigma 7 terminal, called up SRI on an ordinary telephone to initiate the first real test of the ARPANET. Computer rooms in those days were noisy places, what with all of the ventilation the big beasts required, so the two human interlocutors had to fairly shout into their respective telephones. “I’m going to type an L,” Kline yelled, and did so. “Did you get the L?” His opposite number acknowledged that he had. Kline typed an O. “Did you get the O?” Yes. He typed a G.

“The computer just crashed,” said the man at SRI.

“History now records how clever we were to send such a prophetic first message, namely ‘LO,'” says Leonard Kleinrock today with a laugh. They had been trying to manage “LOGIN,” which itself wouldn’t have been a challenger to Samuel Morse’s “What hath God wrought?” in the eloquence sweepstakes — but then, these were different times.

At any rate, the bug which had caused the crash was fixed before the day was out, and regular communications began. UC Santa Barbara came online in November, followed by the University of Utah in December. Satisfied with this proof of concept, ARPA agreed to embark on the next stage of the project, extending the network to the East Coast. In March of 1970, the ARPANET reached BBN itself. Needless to say, this achievement — computer networking’s equivalent to telephony’s spanning of the continent back in 1915 — went entirely unnoticed by an oblivious public. BBN was followed before the year was out by MIT, Rand, System Development Corporation, and Harvard University.


It would make for a more exciting tale to say that the ARPANET revolutionized computing immediately, but such was not the case. In its first couple of years, the network was neither a raging success nor an abject failure. On the one hand, its technical underpinnings advanced at a healthy clip; BBN steadily refined their IMPs, moving them away from modified general-purpose computers and toward the specialized routers we know today. Likewise, the network they served continued to grow; by the end of 1971, the ARPANET had fifteen nodes. But despite it all, it remained frustratingly underused; a BBN survey conducted about two years in revealed that the ARPANET was running at just 2 percent of its theoretical capacity.

The problem was one of computer communication at a higher level than that of the IMPs. Claude Shannon had told the world that information was information in a networking context, and the minds behind the ARPANET had taken his tautology to heart. They had designed a system for shuttling arbitrary blocks of data about, without concerning themselves overmuch about the actual purpose of said data. But the ability to move raw data from computer to computer availed one little if one didn’t know how to create meaning out of all those bits. “It was like picking up the phone and calling France,” Frank Heart of BBN would later say. “Even if you get the connection to work, if you don’t speak French you’ve got a little problem.”

What was needed were higher-level protocols that could run on top of the ARPANET’s packet switching — a set of agreed-upon “languages” for all of these disparate computers to use when talking with one another in order to accomplish something actually useful. Seeing that no one else was doing so, BBN and MIT finally deigned to provide them. First came Telnet, a protocol to let one log into a remote computer and interact with it at a textual command line just as if one was sitting right next to it at a local terminal. And then came the File Transfer Protocol, or FTP, which allowed one to move files back and forth between two computers, optionally performing useful transformations on them in the process, such as going from EBCDIC to ASCII text encoding or vice versa. It is a testament to how well the hackers behind these protocols did their jobs that both have remained with us to this day. Still, the application that really made the ARPANET come alive — the one that turned it almost overnight from a technological experiment to an indispensable tool for working and even socializing — was the next one to come along.
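
Both protocols are still recognizably themselves half a century later, which means a stock scripting language can speak them. The snippet below is a minimal sketch of an FTP session using Python's standard ftplib module; the server name, directory, and file are hypothetical, chosen purely for illustration.

```python
# A minimal sketch of an FTP session using Python's standard library.
# The host, directory, and file name here are hypothetical.
from ftplib import FTP

with FTP("ftp.example.org") as ftp:          # connect to a (hypothetical) server
    ftp.login()                              # anonymous login
    ftp.cwd("/pub")                          # move to a directory
    ftp.retrlines("LIST")                    # directory listing in text (ASCII) mode
    with open("readme.txt", "wb") as local:  # pull down one file in binary mode
        ftp.retrbinary("RETR readme.txt", local.write)
```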

Jack Ruina was long gone as the head of all of ARPA; that role was now filled by a respected physicist named Steve Lukasik. Lukasik would later remember how Larry Roberts came into his office one day in April of 1972 to try to convince him to use the ARPANET personally. “What am I going to do on the ARPANET?” the non-technical Lukasik asked skeptically.

“Well,” mused Roberts, “you could do email.”

Email wasn’t really a new idea at the time. By the mid-1960s, the largest computer at MIT had hundreds of users, who logged in as many as 30 at a time via local terminals. An undergraduate named Tom Van Vleck noticed that some users had gotten into the habit of passing messages to one another by writing them up in text files with names such as “TO TOM,” then dropping them into a shared directory. In 1965, he created what was probably the world’s first true email system in order to provide them with a more elegant solution. Just like all of the email systems that would follow it, it gave each user a virtual mailbox to which any other user could direct a virtual letter, then see it delivered instantly. Replying, forwarding, address books, carbon copies — all of the niceties we’ve come to expect — followed in fairly short order, at MIT and in many other institutions. Early in 1972, a BBN programmer named Ray Tomlinson took what struck him as the logical next step, by creating a system for sending email between otherwise separate computers — or “hosts,” as they were known in the emerging parlance of the ARPANET.

Thanks to FTP, Tomlinson already had a way of doing the grunt work of moving the individual letters from computer to computer. His biggest dilemma was a question of addressing. It was reasonable for the administrators of any single host to demand that every user have a unique login ID, which could also function as her email address. But it would be impractical to insist on unique IDs across the entire ARPANET. And even if it was possible, how was the computer on which an electronic missive had been composed to know which other computer was home to the intended recipient? Trying to maintain a shared central database of every login for every computer on the ARPANET didn’t strike Tomlinson as much of a solution.

His alternative approach, which he would later describe as no more than “obvious,” would go on to become an icon of the digital age. Each email address would consist of a local user name followed by an “at” sign (@) and the name of the host on which it lived. Just as a paper letter moves from an address in a town, then to a larger postal hub, then onward to a hub in another region, and finally to another individual street address, email would use its suffix to find the correct host on the ARPANET. Once it arrived there, said host could drill down further and route it to the correct user. “Now, there’s a nice hack,” said one of Tomlinson’s colleagues; that was about as effusive as a compliment could get in hacker circles.
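
A few lines of modern Python — nothing like anything Tomlinson actually wrote, and with every host and user name invented for the occasion — are enough to illustrate the routing decision that the user@host convention makes possible.

```python
# Illustrative only: the delivery decision implied by "user@host" addressing.
# The host names and users below are invented.
LOCAL_HOST = "bbn-tenexa"
mailboxes: dict[str, list[str]] = {}

def deliver(address: str, message: str) -> None:
    user, _, host = address.partition("@")      # split at the @ sign
    if host in ("", LOCAL_HOST):                # no host given, or our own host:
        mailboxes.setdefault(user, []).append(message)    # drop it in the local mailbox
    else:                                       # otherwise, hand it off to be carried
        print(f"forwarding to host {host!r} for user {user!r}")   # across the network

deliver("tomlinson@bbn-tenexa", "testing, testing")    # delivered locally
deliver("lukasik@arpa-tip", "status report, please")   # would travel over the ARPANET
```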

Stephen Lukasik, ARPA head and original email-obsessed road warrior.

Steve Lukasik reluctantly allowed Larry Roberts to install an ARPANET terminal in his office for the purpose of reading and writing email. Within days, the skeptic became an evangelist. He couldn’t believe how useful email actually was. He sent out a directive to anyone who was anyone at ARPA, whether their work involved computers or not: all were ordered to accept a terminal in their office. “The way to communicate with me is through electronic mail,” he announced categorically. He soon acquired a “portable” terminal which was the size of a suitcase and weighed 30 pounds, but which came equipped with a modem that would allow him to connect to the ARPANET from any location from which he could finagle access to an ordinary telephone. He became the prototype for millions of professional road warriors to come, dialing into the office constantly from conference rooms, from hotel rooms, from airport lounges. He became perhaps the first person in the world who wasn’t already steeped in computing to make the services the ARPANET could provide an essential part of his day-to-day life.

But he was by no means the last. “Email was the biggest surprise about the ARPANET,” says Leonard Kleinrock. “It was an ad-hoc add-on by BBN, and it just blossomed. And that sucked a lot of people in.” Within a year of Lukasik’s great awakening, three quarters of all the traffic on the ARPANET consisted of emails flying to and fro, and the total volume of traffic on the network had grown by a factor of five and a half.



With a supportive ARPA administrator behind them and applications like email beginning to prove their network’s real-world usefulness, it struck the people who had designed and built the ARPANET that it was time for a proper coming-out party. They settled on the International Conference on Computers and Communications, which was to be held at the Washington, D.C., Hilton hotel in October of 1972. Almost every institution connected to the ARPANET sent representatives toting terminals and demonstration software, while AT&T ran a special high-capacity line into the hotel’s ballroom to get them all online.

More than a thousand people traipsed through the exhibition over the course of two and a half days, taking in several dozen demonstrations of what the ARPANET could do now and might conceivably be able to do in the future. It was the first that some of them had ever heard of the network, or even of the idea of computer networking in general.

One of the demonstrations bore an ironic resemblance to the SAGE system that had first proved that wide-area computer networking could work at all. Leonard Kleinrock:

One of the things that was demonstrated there was a distributed air-traffic-control system. The idea was there would be some air traffic on the system. And as a plane moved out of its region, it would be picked up by another computer in the next region, data would be exchanged, and collectively the computers would be managing airspace over a large region. This package also had the ability to freeze the simulation at any given point, take the program on Machine A, squeeze it over to Machine B, and then continue the simulation with the user being none the wiser. So there were really some sophisticated things going on there.

Severo Ornstein, who was also present, must have been having flashbacks to his early experiences with Crosstelling…

A Stanford researcher named Vint Cerf remembers three principal reactions among the attendees.

The first came from the die-hard circuit-switching people from the telephone industry, who didn’t believe packet switching could possibly work. And they were stunned because it did work. It was almost like the rail industry disbelieving that airplanes could really fly until they saw one in flight. The second group were the people who didn’t know anything about computer communications at all to speak of, and who were sort of overwhelmed by the whole thing. But then there was the third group: the people who were just as excited as little kids because of all these neat things that were going on.

Amidst the demonstrations of email, file transfers, remote logins, even remote software development, the hit of the show was something far more frivolous. Stanford’s AI Lab had brought a program called Parry, a “simulation” — we would call it a chatbot today — of a patient undergoing psychological therapy. They had wired Parry up to talk with another program called Eliza, a computerized psychotherapist created by one Joseph Weizenbaum at MIT back in 1966.

Good evening. Tell me your problems. Please terminate input with a period or a question mark.

People get on my nerves sometimes.

I am not sure I understand you.

You should pay more attention.

Suppose you should pay more attention.

You're entitled to your own opinion.

What makes you think I am entitled to my own opinion?

It bothers me just to be around people in general.

Please go on.

I went to the track at Bay Meadows a while back.

Why do you say that just now?

There's a lot of crooked gambling going on there.

Please go on.

I gamble on horses.

What does that suggest to you?

Do you know anything about bookies?

What makes you think I know anything about bookies?

A bookie didn't pay me off once.

What does that suggest to you?

I beat up the bookie who did not pay off, and I thought he might use his friends in the underworld to get even with me.

What are you really talking about?

Have you ever really thought about the mafia?

Suppose you ever really thought about the mafia.

Let's talk about something else.

And so it went. As Joseph Weizenbaum himself was at pains to point out later in his career, Eliza and Parry were nothing more than clever parlor tricks, a set of rote semantic transformations and randomized catchphrases meant to convey a tissue-thin simulacrum of real understanding, convincing only to the naïve and those actively determined to believe. Their presence here as the shabby best that the strong-AI contingent could offer, surrounded by so many genuinely visionary demonstrations of computing’s humanistic, networked future, ought to have demonstrated to the thoughtful observer how one vision of computing was delivering on its promises while the other manifestly was not. But no matter: the crowd ate it up. It seems there was no shortage of gullible true believers in the Hilton ballroom during those exciting two and a half days.


The International Conference on Computers and Communications provided the ARPANET with some of its first press coverage beyond academic journals. Within computing circles, however, the ARPANET’s existence hadn’t gone unnoticed even by those who, thanks to accidents of proximity, had no opportunity to participate in it. During the early 1970s, would-be ARPANET equivalents popped up in a number of places outside the continental United States. There was ALOHANET, which used radio waves to join the various campuses of the University of Hawaii, which were located on different islands, into one computing neighborhood. There was the National Physical Laboratory (NPL) network in Britain, which served that country’s research community in much the same way that ARPANET served computer scientists in the United States. (The NPL network’s design actually dated back to the mid-1960s, and some of its proposed architecture had influenced the ARPANET, making it arguably more a case of parallel evolution than of learning from example.) Most recently, there was a network known as CYCLADES in development in France.

All of which is to say that computer networking in the big picture was looking more and more like the early days of telephony: a collection of discrete networks that served their own denizens well but had no way of communicating with one another. This wouldn’t do at all; ever since the time when J.C.R. Licklider had been pushing his Intergalactic Computer Network, proponents of wide-area computer networking had had a decidedly internationalist, even utopian streak. As far as they were concerned, the world’s computers — all of the world’s computers, wherever they happened to be physically located — simply had to find a way to talk to one another.

The problem wasn’t one of connectivity in its purest sense. As we saw in earlier articles, telephony had already found ways of going where wires could not easily be strung decades before. And by now, many of telephony’s terrestrial radio and microwave beams had been augmented or replaced by communications satellites — another legacy of Sputnik — that served to bind the planet’s human voices that much closer together. There was no intrinsic reason that computers couldn’t talk to one another over the same links. The real problem was rather that the routers on each of the extant networks used their own protocols for talking among themselves and to the computers they served. The routers of the ARPANET, for example, used something called the Network Control Program, or NCP, which had been codified by a team from UCLA led by Steve Crocker, based upon the early work of BBN hackers like Will Crowther. Other networks used completely different protocols. How were they to make sense of one another? Larry Roberts came to see this as computer networking’s next big challenge.

He happened to have working just under him at ARPA a fellow named Bob Kahn, a bright spark who had already achieved much in computing in his 35 years. Roberts now assigned Kahn the task of trying to make sense of the international technological Tower of Babel that was computer networking writ large. Kahn in turn enlisted Stanford’s Vint Cerf as a collaborator.

Bob Kahn

Vint Cerf

The two theorized and argued with one another and with their academic colleagues for about a year, then published their conclusions in the May 1974 issue of IEEE Transactions on Communications, in an article entitled “A Protocol for Packet Network Intercommunication.” It introduced to the world a new word: the “Internet,” shorthand for Kahn and Cerf’s envisioned network of networks. The linchpin of their scheme was a sort of meta-network of linked “gateways,” special routers that handled all traffic going in and out of the individual networks; if the routers on the ARPANET were that network’s interstate highway system, its gateway would become its international airport. A host wishing to send a packet to a computer outside its own network would pass it to its local gateway using its network’s standard protocols, but would include within the packet information about the particular “foreign” computer it was trying to reach. The gateway would then rejigger the packet into a universal standard format and send it over the meta-network to the gateway of the network to which the foreign computer belonged. Then this gateway would rejigger the packet yet again, into a format suitable for passing over the network behind it to reach its ultimate destination.
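
The gateway’s role can be sketched in a handful of lines; the network numbers and gateway names below are invented for illustration, and the real 1974 design was naturally far more involved.

```python
# An illustrative sketch of a gateway's forwarding decision, as described above.
# Network numbers and gateway names are invented for the example.
LOCAL_NETWORK = 1                                   # the network this gateway fronts
FOREIGN_GATEWAYS = {2: "npl-gateway", 3: "cyclades-gateway"}

def route(dest_network: int, dest_host: int, payload: bytes) -> None:
    if dest_network == LOCAL_NETWORK:
        # traffic for our own network: rejigger into the local format and deliver
        print(f"delivering to local host {dest_host}: {payload!r}")
    else:
        # traffic bound elsewhere: rejigger into the universal format and send it
        # across the meta-network to the gateway of the destination network
        print(f"sending via {FOREIGN_GATEWAYS[dest_network]}: {payload!r}")

route(1, 25, b"a packet that never leaves home")
route(2, 7, b"a packet bound for the NPL network")
```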

Kahn and Cerf detailed a brand-new protocol to allow the gateways on the meta-network to talk among themselves. They called it the Transmission Control Protocol, or TCP. It gave each computer on the networks served by the gateways the equivalent of a telephone number. These “TCP addresses” — which we now call “IP addresses,” for reasons we’ll get to shortly — originally consisted of three fields, each containing a number between 0 and 255. The first field stipulated the network to which the host belonged; think of it as a telephone number’s country code. The other two fields identified the specific computer on that network. “Network identification allows up to 256 distinct networks,” wrote Kahn and Cerf. “This seems sufficient for the foreseeable future. Similarly, the TCP identifier field permits up to 65,536 distinct [computers] to be addressed, which seems more than sufficient for any given network.” Time would prove these statements to be among their few failures of vision.
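
To make the limits in that quotation concrete, here is a small sketch of the arithmetic, treating the address as one 8-bit network field plus 16 bits for the individual computer; the layout is a simplification for illustration, not a reproduction of the actual 1974 packet format.

```python
# Illustrative arithmetic only: an 8-bit network field plus a 16-bit host field.
def pack(network: int, host: int) -> int:
    assert 0 <= network <= 255 and 0 <= host <= 65535
    return (network << 16) | host            # 24 bits in total

def unpack(address: int) -> tuple[int, int]:
    return address >> 16, address & 0xFFFF   # undo the packing

print(unpack(pack(42, 1000)))   # -> (42, 1000)
print(2 ** 8, 2 ** 16)          # the ceilings Kahn and Cerf accepted: 256 and 65536
```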

It wasn’t especially easy to convince the managers of other networks, who came from different cultures and were all equally convinced that their way of doing things was the best way, to accept the standard being shoved in their faces by the long and condescending arm of the American government. Still, the reality was that TCP was as solid and efficient a protocol as anyone could ask for, and there were huge advantages to be had by linking up with the ARPANET, where more cutting-edge computer research was happening than anywhere else. Late in 1975, the NPL network in Britain, the second largest in the world, officially joined up. After that, the Internet began to take on an unstoppable momentum of its own. In 1981, with the number of individual networks on it barreling with frightening speed toward the limit of 256, a new addressing scheme was hastily adopted, one which added a fourth field to each computer’s telephone number to create the format we are still familiar with today.

Amidst all the enthusiasm for communicating across networks, the distinctions between them were gradually lost. The Internet became just the Internet, and no one much knew or cared whether any given computer was on the ARPANET or the NPL network or somewhere else. The important thing was, it was on the Internet. The individual networks’ internal protocols came slowly to resemble that of the Internet, just because it made everything easier from a technical standpoint. In 1978, in a reflection of these trends, the TCP protocol was split into a matched pair of them called TCP/IP. The part that was called the Transmission Control Protocol was equally useful for pushing packets around a network behind a gateway, while the Internet Protocol was reserved for the methods that gateways used to pass packets across network boundaries. (This is the reason that we now refer to IP addresses rather than TCP addresses.) Beginning on January 1, 1983, all computers on the ARPANET were required to use TCP rather than NCP even when they were only talking among themselves behind their gateway.



Alas, by that point ARPA itself was not what it once had been; the golden age of blue-sky computer research on the American taxpayer’s dime had long since faded into history. One might say that the beginning of the end came as early as the fall of 1969, when a newly fiscally conservative United States Congress, satisfied that the space race had been won and the Soviets left in the country’s technological dust once again, passed an amendment to the next year’s Department of Defense budget which specified that any and all research conducted by agencies like ARPA must have “a direct and apparent relationship” to the actual winning of wars by the American military. Dedicated researchers and administrators found that they could still keep their projects alive afterward by providing such justifications in the form of lengthy, perhaps deliberately obfuscated papers, but it was already a far cry from the earlier days of effectively blank checks. In 1972, as if to drive home a point to the eggheads in its ranks who made a habit of straying too far out of their lanes, the Defense Department officially renamed ARPA to DARPA: the Defense Advanced Research Projects Agency.

Late in 1973, Larry Roberts left ARPA. His replacement the following January was none other than J.C.R. Licklider, who had reluctantly agreed to another tour of duty in the Pentagon only when absolutely no one else proved willing to step up.

But, just as this was no longer quite the same ARPA, it was no longer quite the same Lick. He had continued to be a motivating force for computer networking from behind the scenes at MIT during recent years, but his decades of burning the candle at both ends, of living on fast food and copious quantities of Coca Cola, were now beginning to take their toll. He suffered from chronic asthma which left him constantly puffing at an inhaler, and his hands had a noticeable tremor that would later reveal itself to be an early symptom of Parkinson’s disease. In short, he was not the man to revive ARPA in an age of falling rather than rising budgets, of ever increasing scrutiny and internecine warfare as everyone tried to protect their own pet projects, at the expense of those of others if necessary. “When there is scarcity, you don’t have a community,” notes Vint Cerf, who perchance could have made a living as a philosopher if he hadn’t chosen software engineering. “All you have is survival.”

Lick did the best he could, but after Steve Lukasik too left, to be replaced by a tough cookie who grilled everyone who proposed doing anything about their project’s concrete military value, he felt he could hold on no longer. Lick’s second tenure at ARPA ended in September of 1975. Many computing insiders would come to mark that departure as the moment when a door shut forever on this Defense Department agency’s oddly idealistic past. When it came to new projects at least, DARPA from now on would content itself with being exactly what its name said it ought to be. Luckily, the Internet already existed, and had already taken on a life of its own.



Lick wound up back at MIT, the congenial home to which this prodigal son had been regularly returning since 1950. He took his place there among the younger hackers of the Dynamic Modeling Group, whose human-focused approach to computing caused him to favor them over their rivals at the AI Lab. If Lick wasn’t as fast on his feet as he once had been, he could still floor you on occasion with a cogent comment or the perfect question.

Some of the DMG folks who now surrounded him would go on to form Infocom, an obsession of the early years of this website, a company whose impact on the art of digital storytelling can still be felt to this day.[1] One of them was a computer-science student named Tim Anderson, who met the prophet in their ranks often in the humble surroundings of a terminal room:

He signed up for his two hours like everybody else. You’d come in and find this old guy sitting there with a bottle of Coke and a brownie. And it wasn’t even a good brownie; he’d be eating one of those vending-machine things as if that was a perfectly satisfying lunch. Then I also remember that he had these funny-colored glasses with yellow lenses; he had some theory that they helped him see better.

When you learned what he had done, it was awesome. He was clearly the father of us all. But you’d never know it from talking to him. Instead, there was always a sense that he was playing. I always felt that he liked and respected me, even though he had no reason to: I was no smarter than anybody else. I think everybody in the group felt the same way, and that was a big part of what made the group the way it was.

In 1979, Lick penned the last of his periodic prognostications of the world’s networked future, for a book of essays about the abstract future of computing that was published by the MIT Press. As before, he took the year 2000 as the watershed point.

On the whole, computer technology continues to advance along the curve it has followed in its three decades of history since World War II. The amount of information that can be stored for a given period or processed in a given way at unit cost doubles every two years. (The 21 years from 1979 to 2000 yielded ten doublings, for a factor of about 1000.) Wave guides, optical fibers, rooftop satellite antennas, and coaxial cables provide abundant bandwidth and inexpensive digital transmission both locally and over long distances. Computer consoles with good graphics displays and speech input and output have become almost as common as television sets. Some pocket computers are fully programmable, as powerful as IBM 360/40s used to be, and are equipped with both metallic and radio connectors to computer-communication networks.

An international network of digital computer-communication networks serves as the main and essential medium of informational interaction for governments, institutions, corporations, and individuals. The Multinet [i.e., Internet], as it is called, is hierarchical — some of the component networks are themselves networks of networks — and many of the top-level networks are national networks. The many sub-networks that comprise this network of networks are electronically and physically interconnected. Most of them handle real-time speech as well as computer messages, and some handle video.

The Multinet has supplanted the postal system for letters, the dial-telephone system for conversations and teleconferences, standalone batch-processing and time-sharing systems for computation, and most filing cabinets, microfilm repositories, document rooms, and libraries for information storage and retrieval. Many people work at home, interacting with clients and coworkers through the Multinet, and many business offices (and some classrooms) are little more than organized interconnections of such home workers and their computers. People shop through the Multinet, using its funds-transfer functions, and a few receive delivery of small items through adjacent pneumatic-tube networks. Routine shopping and appointment scheduling are generally handled by private-secretary-like programs called OLIVERs which know their masters’ needs. Indeed, the Multinet handles scheduling of almost everything schedulable. For example, it eliminates waiting to be seated at restaurants and, if you place your order through it, it can eliminate waiting to be served…

But for the first time, Lick also chose to describe a dystopian scenario to go along with the utopian one, stating that the former was just as likely as the latter if open standards like TCP/IP, and the spirit of cooperation that they personified, got pushed away in favor of closed networks and business models. If that happened, the world’s information spaces would be siloed off from one another, and humanity would have lost a chance it never even realized it had.

Because their networks are diverse and uncoordinated, recalling the track-gauge situation in the early days of railroading, the independent “value-added-carrier” companies capture only the fringes of the computer-communication market, the bulk of it being divided between IBM (integrated computer-communication systems based on satellites) and the telecommunications companies (transmission services but not integrated computer-communication services, no remote-computing services)…

Electronic funds transfer has not replaced money, as it turns out, because there were too many uncoordinated bank networks and too many unauthorized and inexplicable transfers of funds. Electronic message systems have not replaced mail, either, because there were too many uncoordinated governmental and commercial networks, with no network at all reaching people’s homes, and messages suffered too many failures of transfers…

Looking back on these two scenarios from the perspective of 2022, when we stand almost exactly as far beyond Lick’s watershed point as he stood before it, we can note with gratification that his more positive scenario turned out to be the more correct one; if some niceties such as computer speech recognition didn’t arrive quite on his time frame, the overall network ecosystem he described certainly did. We might be tempted to contemplate at this point that the J.C.R. Licklider of 1979 may have been older in some ways than his 64 years, being a man who had known as much failure as success over the course of a career spanning four and a half impossibly busy decades, and we might be tempted to ascribe his newfound willingness to acknowledge the pessimistic as well as the optimistic to these factors alone.

But I believe that to do so would be a mistake. It is disarmingly easy to fall into a mindset of inevitability when we consider the past, to think that the way things turned out are the only way they ever could have. In truth, the open Internet we are still blessed with today, despite the best efforts of numerous governments and corporations to capture and close it, may never have been a terribly likely outcome; we may just possibly have won an historical lottery. When you really start to dig into the subject, you find that there are countless junctures in the story where things could have gone very differently indeed.

Consider: way back in 1971, amidst the first rounds of fiscal austerity at ARPA, Larry Roberts grew worried about whether he would be able to convince his bosses to continue funding the fledgling ARPANET at all. Determined not to let it die, he entered into serious talks with AT&T about the latter buying the whole kit and caboodle. After months of back and forth, AT&T declined, having decided there just wasn’t any money to be made there. What would have happened if AT&T had said yes, and the ARPANET had fallen into the hands of such a corporation at this early date? Not only digital history but a hugely important part of recent human history would surely have taken a radically different course. There would not, for instance, have ever been a TCP/IP protocol to run the Internet if ARPA had washed their hands of the whole thing before Robert Kahn and Vint Cerf could create it.

And so it goes, again and again and again. It was a supremely unlikely confluence of events, personalities, and even national moods that allowed the ARPANET to come into being at all, followed by an equally unlikely collection of same that let its child the Internet survive down to the present day with its idealism a bit tarnished but basically intact. We spend a lot of time lamenting the horrific failures of history. This is understandable and necessary — but we should also make some time here and there for its crazy, improbable successes.



On October 4, 1985, J.C.R. Licklider finally retired from MIT for good. His farewell dinner that night had hundreds of attendees, all falling over themselves to pay him homage. Lick himself, now 70 years old and visibly infirm, accepted their praise shyly. He seemed most touched by the speakers who came to the podium late in the evening, after the big names of academia and industry: the group of students who had taken to calling themselves “Lick’s kids” — or, in hacker parlance, “lixkids.”

“When I was an undergraduate,” said one of them, “Lick was just a nice guy in a corner office who gave us all a wonderful chance to become involved with computers.”

“I’d felt I was the only one,” recalled another of the lixkids later. “That somehow Lick and I had this mystical bond, and nobody else. Yet during that evening I saw that there were 200 people in the room, 300 people, and that all of them felt that same way. Everybody Lick touched felt that he was their hero and that he had been an extraordinarily important person in their life.”

J.C.R. Licklider died on June 26, 1990, just as the networked future he had so fondly envisioned was about to become a tangible reality for millions of people, thanks to a confluence of three factors: an Internet that was descended from the original ARPANET, itself the realization of Lick’s own Intergalactic Computer Network; a new generation of cheap and capable personal computers that were small enough to sit on desktops and yet could do far more than the vast majority of the machines Lick had had a chance to work on; and a new and different way of navigating texts and other information spaces, known as hypertext theory. In the next article, we’ll see how those three things yielded the World Wide Web, a place as useful and enjoyable for the ordinary folks of the world as it is for computing’s intellectual elites. Lick, for one, wouldn’t have had it any other way.

(Sources: the books A Brief History of the Future: The Origins of the Internet by John Naughton, Where Wizards Stay Up Late: The Origins of the Internet by Katie Hafner and Matthew Lyon, Hackers: Heroes of the Computer Revolution by Steven Levy, From Gutenberg to the Internet: A Sourcebook on the History of Information Technology edited by Jeremy M. Norman, The Dream Machine by M. Mitchell Waldrop, A History of Modern Computing (2nd ed.) by Paul E. Ceruzzi, Communication Networks: A Concise Introduction by Jean Walrand and Shyam Parekh, Computing in the Middle Ages by Severo M. Ornstein, and The Computer Age: A Twenty-Year View edited by Michael L. Dertouzos and Joel Moses.)

Footnotes

[1] In fact, Lick agreed to join Infocom’s board of directors, although his role there was a largely ceremonial one; he was not a gamer himself, and had little knowledge of or interest in the commercial market for home-computer games that had begun to emerge by the beginning of the 1980s. Still, everyone involved with the company remembers that he genuinely exulted at Infocom’s successes and commiserated with their failures, just as he did with those of all of his former students.


A Web Around the World, Part 8: The Intergalactic Computer Network

One could make a strong argument for the manned Moon landing and the Internet as the two greatest technological achievements of the second half of the twentieth century. Remarkably, the roots of both reach back to the same event — in fact, to the very same American government agency, hastily created in response to that event.


A replica of the Sputnik 1 satellite, the source of the beep heard round the world.

Late in the evening of October 4, 1957 (already the small hours of October 5 at the launch site), a rocket blasted off from southern Kazakhstan. Just under half an hour later, at an altitude of about 140 miles, it detached its payload: a silver sphere the size of a soccer ball, from which four antennas extended in vaguely insectoid fashion. Sputnik 1, the world’s first artificial satellite, began to send out a regular beep soon thereafter.

It became the beep heard round the world, exciting a consternation in the West such as hadn’t been in evidence since the first Soviet test of an atomic bomb eight years earlier. In many ways, this panic was even worse than that one. The nuclear test of 1949 had served notice that the Soviet Union had just about caught up with the West, prompting a redoubled effort on the part of the United States to develop the hydrogen bomb, the last word in apocalyptic weaponry. This effort had succeeded in 1952, restoring a measure of peace of mind. But now, with Sputnik, the Soviet Union had done more than catch up to the Western state of the art; it had surpassed it. The implications were dire. Amateur radio enthusiasts listened with morbid fascination to the telltale beep passing overhead, while newspaper columnists imagined the Soviets colonizing space in the name of communism and dropping bombs from there on the heads of those terrestrial nations who refused to submit to tyranny.

The Soviets themselves proved adept at playing to such fears. Just one month after Sputnik 1, they launched Sputnik 2. This satellite had a living passenger: a bewildered mongrel dog named Laika who had been scooped off the streets of Moscow. We now know that the poor creature was boiled alive in her tin can by the unshielded heat of the Sun within a few hours of reaching orbit, but it was reported to the world at the time that she lived fully six days in space before being euthanized by lethal injection. Clearly the Soviets’ plans for space involved more than beeping soccer balls.

These events prompted a predictable scramble inside the American government, a circular firing squad of politicians, bureaucrats, and military brass casting aspersions upon one another as everyone tried to figure out how the United States could have been upstaged so badly. President Dwight D. Eisenhower delivered a major address just four days after Laika had become the first living Earthling to reach space (and to die there). He would remedy the crisis of confidence in American science and technology, he said, by forming a new agency that would report directly to the Secretary of Defense. It would be called the Advanced Research Projects Agency, or ARPA. Naturally, its foremost responsibility would be the space race against the Soviets.

But this mission statement for ARPA didn’t last very long. Many believed that to treat the space race as a purely military endeavor would be unwise; far better to present it to the world as a peaceful, almost utopian initiative, driven by pure science and the eternal human urge to explore. These cooler heads eventually prevailed, and as a result almost the entirety of ARPA’s initial raison d’être was taken away from it in the weeks after its formal creation in February of 1958. A brand new, civilian space agency called the National Aeronautics and Space Administration was formed to carry out the utopian mission of space exploration — albeit more quickly than the Soviets, if you please. ARPA was suddenly an agency without any obvious reason to exist. But the bills to create it had all been signed and office space in the Pentagon allocated, and so it was allowed to shamble on toward destinations that were uncertain at best. It became just another acronym floating about in the alphabet soup of government bureaucracy.

Big government having an inertia all its own, it remained that way for quite some time. While NASA captured headlines with the recruitment of its first seven human astronauts and the inauguration of a Project Mercury to put one of them into space, ARPA, the agency originally slated to have all that glory, toiled away in obscurity with esoteric projects that attracted little attention outside the Pentagon. ARPA had nothing whatsoever to do with computing until mid-1961. At that point — as the nation was smarting over the Soviets stealing its thunder once again, this time by putting a man into space before NASA could — ARPA was given four huge IBM mainframes, leftovers from the SAGE project which nobody knew what to do with, for their hardware design had been tailored for the needs of SAGE alone. The head of ARPA then was a man named Jack Ruina, who just happened to be an electrical engineer, and one who was at least somewhat familiar with the latest developments in computing. Rather than looking a gift horse — or a white elephant — in the mouth, he decided to take his inherited computers as a sign that this was a field where ARPA could do some good. He asked for and was given $10 million per year to study computer-assisted command-and-control systems — basically, for a continuation of the sort of work that the SAGE  project had begun. Then he started looking around for someone to run the new sub-agency. He found the man he felt to be the ideal candidate in one J.C.R. Licklider.


J.C.R. Licklider

Lick was probably the most gifted intuitive genius I have ever known. When I would finally come to Lick with the proof of some mathematical relation, I’d discover that he already knew it. He hadn’t worked it out in detail. He just… knew it. He could somehow envision the way information flowed, and see relations that people who just manipulated the mathematical symbols could not see. It was so astounding that he became a figure of mystery to the rest of us. How the hell does Lick do it? How does he see these things? Talking with Lick about a problem amplified my own intelligence about 30 IQ points.

— William J. McGill, colleague of J.C.R. Licklider at MIT

Joseph Carl Robnett Licklider is one of history’s greatest rarities, a man who changed the world without ever making any enemies. Almost to a person, no one who worked with him had or has a bad word to say about him — not even those who stridently disagreed with him about the approach to computing which his very name came to signify. They prefer to wax rhapsodic about his incisive intellect, his endless good humor, his incomparable ability to inspire and motivate, and perhaps most of all his down-to-earth human kindness — not exactly the quality for which computer brainiacs are most known. He was the kind of guy who, when he’d visit the office soda machine, would always come back with enough Cokes for everyone. When he’d go to sharpen a pencil, he’d ask if anyone else needed theirs sharpened as well. “He could strike up a conversation with anybody,” remembered a woman named Louise Carpenter Thomas who worked with him early in his career. “Waitresses, bellhops, janitors, gardeners… it was a facility I marveled at.”

“I can’t figure it out,” she once told a friend. “He’s too… nice.” She soon decided he wasn’t too good to be true after all; she became his wife.

“Lick,” as he was universally known, wasn’t a hacker in the conventional sense. He was rather the epitome of a big-picture guy. Uninterested in the details of administration of the agencies he ostensibly led and not much more interested in those of programming or engineering at the nitty-gritty level, he excelled at creating an atmosphere that allowed other people to become their best selves and then setting a direction they could all pull toward. One might be tempted to call him a prototype of the modern Silicon Valley “disruptor,” except that he lacked the toxic narcissism of that breed of Steve Jobs wannabees. In fact, Lick was terminally modest. “If someone stole an idea from him,” said his wife Louise, “I’d pound the table and say it’s not fair, and he’d say, ‘It doesn’t matter who gets the credit. It matters that it gets done.'”

His unwillingness to blow his own horn is undoubtedly one of the contributing factors to Lick’s being one of the most under-recognized of computing’s pioneers. He published relatively little, both because he hated to write and because he genuinely loved to see one of his protégés recognized for fleshing out and popularizing one of his ideas. Yet the fact remains that his vision of computing’s necessary immediate future was actually far more prescient than that of many of his more celebrated peers.

To understand that vision and the ways in which it contrasted with that of some of his colleagues, we should begin with Lick’s background. Born in 1915 in St. Louis, Missouri, the son of a Baptist minister, he grew up a boy who was good at just about everything, from sports to mathematics to auto mechanics, but already had a knack for never making anyone feel jealous about it. After much tortured debate and a few abrupt changes of course at university, he finally settled on studying psychology, and was awarded his master’s degree in the field from St. Louis’s Washington University in 1938. According to his biographer M. Mitchell Waldrop, the choice of majors made all the difference in what he would go on to do.

Considering all that happened later, Lick’s youthful passion for psychology might seem like an aberration, a sideline, a long diversion from his ultimate career in computers. But in fact, his grounding in psychology would prove central to his very conception of computers. Virtually all the other computer pioneers of his generation would come to the field in the 1940s and 1950s with backgrounds in mathematics, physics, or electrical engineering, technological orientations that led them to focus on gadgetry — on making the machines bigger, faster, and more reliable. Lick was unique in bringing to the field a deep appreciation for human beings: our capacity to perceive, to adapt, to make choices, and to devise completely new ways of tackling apparently intricate problems. As an experimental psychologist, he found these abilities every bit as subtle and as worthy of respect as a computer’s ability to execute an algorithm. And that was why to him, the real challenge would always lie in adapting computers to the humans who used them, thereby exploiting the strengths of each.

Still, Lick might very well have remained a “pure” psychologist if the Second World War hadn’t intervened. His pre-war research focus had been the psychological processes of human hearing. After the war began, this led him to Harvard University’s Psycho-Acoustic Laboratory, where he devised technologies to allow bomber crews to better communicate with one another inside their noisy airplanes. Thus he found the focus that would mark the rest of his career: the interaction between humans and technology. After moving to MIT in 1950, he joined the SAGE project, where he helped to design the user interface — not that the term yet existed! — which allowed the SAGE ground controllers to interact with the display screens in front of them; among his achievements here was the invention of the light pen. Having thus been bitten by the computing bug, he moved on in 1957 to Bolt Beranek and Newman, a computing laboratory and think tank with close ties to MIT.

He was still there in 1960, when he published perhaps the most important of all his rare papers, a piece entitled “Man-Computer Symbiosis,” in the journal IRE Transactions on Human Factors in Electronics. In order to appreciate what a revolutionary paper it was, we should first step back to look at the view of computing to which it was responding.

The most typical way of describing computers in the mass media of the time was as “giant brains,” little different in qualitative terms from those of humans. This conception of computing would soon be all over pop culture — for example, in the rogue computers that Captain Kirk destroyed on almost a monthly basis on Star Trek, or in the computer HAL 9000, the villain of 2001: A Space Odyssey. A large number of computer researchers who probably ought to have known better subscribed to a more positive take on essentially the same view. Their understanding was that, if artificial intelligence wasn’t yet up to human snuff, it was only a matter of time. These proponents of “strong AI,” such as Stanford University’s John McCarthy and MIT’s own Marvin Minsky, were already declaring by the end of the 1950s that true computer consciousness was just twenty years away. (This would eventually lead to a longstanding joke in hacker culture, that strong AI is always exactly two decades away…) Even such an undeniable genius as Alan Turing, who had been dead six years already when Lick published his paper, had spent much effort devising a “Turing test” that could serve as a determiner of true artificial intelligence, and had attempted to teach a computer to play chess as a sort of opening proof of concept.

Lick, on the other hand, well recognized that to use the completely deterministic and algorithm-friendly game of chess for that purpose was not quite honest; a far better demonstration of artificial intelligence would be a computer that could win at poker, what with all of the intuition and social empathy that game required. But rather than chase such chimeras at all, why not let computers do the things they already do well and let humans do likewise, and teach them both to work together to accomplish things neither could on their own? Many of computing’s leading theorists, Lick implied, had developed delusions of grandeur, moving with inordinate speed from computers as giant calculators for crunching numbers to computers as sentient beings in their own right. They didn’t have to become the latter, Lick understood, to become one of the most important tools humanity had ever invented for itself; there was a sweet spot in between the two extremes. He chose to open his paper with a metaphor from the natural world, describing how fig trees are pollinated by the wasps which feed upon their fruit. “The tree and the insect are thus heavily interdependent,” he wrote. “The tree cannot reproduce without the insect; the insect cannot eat without the tree; they constitute not only a viable but a productive and thriving partnership.” A symbiosis, in other words.

A similar symbiosis could and should become the norm in human-computer interactions, with the humans always in the catbird seat as the final deciders — no Star Trek doomsday scenarios here.

[Humans] will set the goals and supply the motivations. They will formulate hypotheses. They will ask questions. They will think of mechanisms, procedures, and models. They will define criteria and serve as evaluators, judging the contributions of the equipment and guiding the general line of thought. The information-processing equipment, for its part, will convert hypotheses into testable models and then test the models against the data. The equipment will answer questions. It will simulate the mechanisms and models, carry out the procedures, and display the results to the operator. It will transform data, plot graphs. [It] will interpolate, extrapolate, and transform. It will convert static equations or logical statements into dynamic models so that the human operator can examine their behavior. In general, it will carry out the routinizable, clerical operations that fill the intervals between decisions.

Perhaps in a bid not to offend his more grandiose colleagues, Lick did hedge his bets on the long-term prospects for strong artificial intelligence. It might very well arrive at some point, he said, although he couldn’t say whether that would take ten years or 500 years. Regardless, the years before its arrival “should be intellectually and creatively the most exciting in the history of mankind.”

In the end, however, even Lick’s diplomatic skills would prove insufficient to smooth out the differences between two competing visions of computing. By the end of the 1960s, the argument would literally split MIT’s computer-science research in two. One part would become the AI Lab, dedicated to artificial intelligence in its most expansive form; the other, known as the Dynamic Modeling Group, would take its mission statement as well as its name almost verbatim from Lick’s 1960 paper. For all that some folks still love to talk excitedly and/or worriedly of a “Singularity” after which computer intelligence will truly exceed human intelligence in all its facets, the way we actually use computers today is far more reflective of J.C.R. Licklider’s vision than that of Marvin Minsky or John McCarthy.

But all of that lay well in the future at the dawn of the 1960s. Viewing matters strictly through the lens of that time, we can now begin to see why Jack Ruina at ARPA found J.C.R. Licklider and the philosophy of computing he represented so appealing. Most of the generals and admirals Ruina talked to were much like the general public; they still thought of computers as giant brains that would crunch a bunch of data and then unfold for them the secrets of the universe — or at least of the Soviets. “The idea was that you take this powerful computer and feed it all this qualitative information, such as ‘the air-force chief drank two martinis’ or ‘Khrushchev isn’t reading Pravda on Mondays,'” laughed Ruina later. “And the computer would play Sherlock Holmes and reveal that the Russians must be building an MX-72 missile or something like that.” Such hopes were, as Lick put it to Ruina at their first meeting, “asinine.”

SAGE existed already as a shining example of Lick’s take on computers — computers as aids to rather than replacements for human intelligence. Ruina was well aware that command-and-control was one of the most difficult problems in warfare; throughout history, it has often been the principal reason that wars are won or lost. Just imagine what SAGE-like real-time information spaces could do for the country’s overall level of preparedness if spread throughout the military chain of command…

On October 1, 1962, following a long courtship on the part of Ruina, Lick officially took up his new duties in a small office in the Pentagon. Like Lick himself, Ruina wasn’t much for micromanagement; he believed in hiring smart people and stepping back to let them do their thing. Thus he turned over his $10 million per year to Lick with basically no strings attached. Just find a way to make interactive computing better, he told him, preferably in ways useful to the military. For his part, Lick made it clear that “I wasn’t doing battle planning,” as he later remembered. “I was doing the technical substrate that would one day support battle planning.” Ruina said that was just fine with him. Lick had free rein.

Ironically, he never did do much of anything with the leftover SAGE computers that had gotten the whole ball rolling; they were just too old, too specialized, too big. Instead he set about recruiting the smartest people he knew of to do research on the government’s dime, using the equipment found at their own local institutions.

If I tried to describe everything these folks got up to here, I would get hopelessly sidetracked. So, we’ll move directly to ARPA’s most famous computing project of all. A Licklider memo dated April 25, 1963, is surely one of the most important of its type in all of modern history. For it was here that Lick first made his case for a far-flung general-purpose computer network. The memo was addressed to “members and affiliates of the Intergalactic Computer Network,” which serves as an example of Lick’s tendency to attempt to avoid sounding too highfalutin by making the ideas about which he felt most strongly sound a bit ridiculous instead. Strictly speaking, the phrase “Intergalactic Computer Network” didn’t apply to the thing Lick was proposing; the network in question here was rather the human network of researchers that Lick was busily assembling. Nevertheless, a computer network was the topic of the memo, and its salutation and its topic would quickly become conflated. Before it became the Internet, even before it became the ARPANET, everyone would call it the Intergalactic Network.

In the memo, Lick notes that ARPA is already funding a diverse variety of computing projects at an almost equally diverse variety of locations. In the interest of not reinventing the wheel, it would make sense if the researchers involved could share programs and data and correspond with one another easily, so that every researcher could benefit from the efforts of the others whenever possible. Therefore he proposes that all of their computers be tied together on a single network, such that any machine can communicate at any time with any other machine.

Lick was careful to couch his argument in the immediate practical benefits it would afford to the projects under his charge. Yet it arose from more abstract discussions that had been swirling around MIT for years. Lick’s idea of a large-scale computer network was in fact inextricably bound up with his humanist vision for computing writ large. In a stunningly prescient article published in the May 1964 issue of Atlantic Monthly, Martin Greenberger, a professor with MIT’s Sloan School of Management, made the case for a computer-based “information utility” — essentially, for the modern Internet, which he imagined arriving at more or less exactly the moment it really did become an inescapable part of our day-to-day lives. In doing all of this, he often seemed to be parroting Lick’s ideology of better living through human-computer symbiosis, to the point of employing many of the same idiosyncratic word choices.

The range of application of the information utility includes medical-information systems for hospitals and clinics, centralized traffic controls for cities and highways, catalogue shopping from a convenient terminal at home, automatic libraries linked to home and office, integrated management-control systems for companies and factories, teaching consoles in the classroom, research consoles in the laboratory, design consoles in the engineering firm, editing consoles in the publishing office, [and] computerized communities.

Barring unforeseen obstacles, an online interactive computer service, provided commercially by an information utility, may be as commonplace by 2000 AD as a telephone service is today. By 2000 AD, man should have a much better comprehension of himself and his system, not because he will be innately any smarter than he is today, but because he will have learned to use imaginatively the most powerful amplifier of intelligence yet devised.

In 1964, the idea of shopping and socializing through a home computer “sounded a bit like working a nuclear reactor in your home,” as M. Mitchell Waldrop writes. Still, there it was — and Greenberger’s uncannily accurate predictions almost certainly originated with Lick.

Lick himself, however, was about to step back and entrust his dream to others. In September of 1964, he resigned from his post in the Pentagon to accept a job with IBM. There were likely quite a number of factors behind this decision, which struck many of his colleagues at the time as every bit as perplexing as it strikes us today. As we’ve seen, he was not a hardcore techie, and he may have genuinely believed that a different sort of mind would do a better job of managing the projects he had set in motion at ARPA. Meanwhile his family wasn’t overly thrilled with life in their cramped Washington apartment, the best accommodations his government salary could pay for. IBM, on the other hand, compensated its senior employees very generously — no small consideration for a man with two children close to university age. After decades of non-profit service, he may have seen this, reasonably enough, as his chance to finally cash in. Lastly and perhaps most importantly, he probably truly believed that he could do a lot of good for the world at IBM, by convincing this most powerful force in commercial computing to fully embrace his humanistic vision of computing’s potential. That wouldn’t happen in the end; his tenure there would prove short and disappointing. He would find the notoriously conservative culture of IBM impervious to his charms, a thoroughly novel experience for him. But of course he couldn’t have known any of that going in.

Lick’s successor at ARPA was Ivan Sutherland, a young man of just 26 years who had recently created a sensation at MIT with his PhD project, a program called Sketchpad that let a user draw arbitrary pictures on a computer screen using one of the light pens that Lick had helped to invent for SAGE. But Sutherland proved no more interested in the details of administration than Lick had been, even as he demonstrated why a more typical hacker type might not have been the best choice for the position after all, being too fixated on his own experiments with computer graphics to have much time to inspire and guide others. Lick’s idea for a large-scale computer network lay moribund during his tenure. It remained so for almost two full years in all, until Sutherland too left what really was a rather thankless job. His replacement was one Robert Taylor. Critically, this latest administrator came complete with Lick’s passion for networking, along with something of his genius for interpersonal relations.


Robert Taylor, as photographed by Annie Leibovitz in 1972 for a Rolling Stone feature article on Xerox PARC, his destination after leaving ARPA.

Coming across as a veritable stereotype of a laid-back country boy, right down to his laconic Texan accent, Robert Taylor was a disarmingly easy man to underestimate. He was born seventeen years after Lick, but there were some uncanny similarities in their backgrounds. Taylor too grew up far from the intellectual capitals of the nation as the son of a minister. Like Lick, he gradually lost his faith in the course of trying to decide what to do with his life, and like Lick he finally settled on psychology. More or less, anyway; he graduated from the University of Texas at age 25 in 1957 with a bachelor’s degree in psychology and minors in mathematics, philosophy, English, and religion. He was working a “stopgap” job at Martin Marietta in the spring of 1960 when he stumbled across Lick’s article on human-computer symbiosis. It changed his life. “Lick’s paper opened the door for me,” he says. “Over time, I became less and less interested in brain research, and more and more heartily subscribed to the Licklider vision of interactive computing.” The interest led him to NASA the following year, where he helped to design the displays used by ground controllers on the Mercury, Gemini, and Apollo manned-spaceflight programs. In early 1965, he moved to ARPA as Sutherland’s deputy, then took over Sutherland’s job following his departure in June of 1966.

In the course of it all, Taylor got to talk with Lick himself on many occasions. Unsurprisingly given the similarities in their backgrounds and to some extent in their demeanors, the two men hit it off famously. Soon Taylor felt the same zeal that his mentor did for a new, unprecedentedly large and flexible computer network. And once he found himself in charge of ARPA’s computer-research budget, he was in a position to do something about it. He was determined to make Lick’s Intergalactic Network a reality.

Alas, instilling the same determination in the researchers working with ARPA would not be easy. Many of them would later be loath to admit their reluctance, given that the Intergalactic Network would prove to be one of the most important projects in the entire history of computing, but it was there nonetheless. Severo Ornstein, who was working at Lick’s old employer of Bolt Beranek and Newman at this time, confesses to a typical reaction: “Who would want such a thing?” Computer cycles were a precious resource in those days, a commodity which researchers coveted for their personal use as much as Scrooge coveted his shillings. Almost no one was eager to share their computers with people in other cities and states. The strong AI contingent under Minsky and McCarthy, whose experiments not coincidentally tended to be especially taxing on a computer’s resources, were among the loudest objectors. It didn’t help matters that Taylor suffered from something of a respect deficit. Unlike Lick and Sutherland before him, he wasn’t quite of this group of brainy and often arrogant cats which he was attempting to herd, having never made a name for himself through research at one of their universities — indeed, lacking even the all-important suffix “PhD” behind his name.

But Bob Taylor shared one more similarity with J.C.R. Licklider: he was all about making good things happen, not about taking credit for them. If the nation’s computer researchers refused to take him seriously, he would find someone else whom they couldn’t ignore. He settled on Larry Roberts, an MIT veteran who had helped Sutherland with Sketchpad and done much groundbreaking work of his own in the field of computer graphics, such as laying the foundation for the compressed file formats that are used to shuffle billions of images around the Internet today. Roberts had been converted by Lick to the networking religion in November of 1964, when the two were hanging out in a bar after a conference. Roberts:

The conversation was, what was the future? And Lick, of course, was talking about his concept of an Intergalactic Network.

At that time, Ivan [Sutherland] and I had gone farther than anyone else in graphics. But I had begun to realize that everything I did was useless to the rest of the world because it was on the TX-2, and that was a unique machine. The TX-2, [the] CTSS, and so forth — they were all incompatible, which made it almost impossible to move data. So everything we did was almost in isolation. The only thing we could do to get the stuff out into the world was to produce written technical papers, which was a very slow process.

It seemed to me that civilization would change if we could move all this [over a network]. It would be a whole new way of sharing knowledge.

The only problem was that Roberts had no interest in becoming a government bureaucrat. So Taylor, whose drawl masked a steely resolve when push came to shove, did what he had to in order to get his man. He went to the administrators of MIT and Lincoln Lab, which were heavily dependent on government funding, and strongly hinted that said funding might be contingent on one member of their staff stepping away from his academic responsibilities for a couple of years. Before 1966 was out, Larry Roberts reported for duty at the Pentagon, to serve as the technical lead of what was about to become known as the ARPANET.

In March of 1967, as the nation’s adults were reeling from the fiery deaths of three Apollo astronauts on the launchpad and its youth were ushering in the Age of Aquarius, Taylor and Roberts brought together 25 or so of the most brilliant minds in computing in a University of Michigan classroom in the hope of fomenting a different sort of revolution. Despite the addition of Roberts to the networking cause, most of them still didn’t want to be there, and thought this ARPANET business a waste of time. They arrived all too ready to voice objections and obstacles to the scheme, of which there was no shortage.

The computers that Taylor and Roberts proposed to link together were a motley crew by any standard, ranging from the latest hulking IBM mainframes to mid-sized machines from companies like DEC to bespoke hand-built jobs. The problem of teaching computers from different manufacturers — or even different models of computer from the same manufacturer — to share data with one another had only recently been taken up in earnest. Even moving text from one machine to another could be a challenge; it had been just half a decade since a body called the American Standards Association had approved a standard way of encoding alphanumeric characters as binary numbers, constituting the computer world’s would-be equivalent to Morse Code. Known as the American Standard Code for Information Interchange, or ASCII, it was far from universally accepted, with IBM in particular clinging obstinately to an alternative, in-house-developed system known as the Extended Binary Coded Decimal Interchange Code, or EBCDIC. Uploading a text file generated on a computer that used one standard to a computer that used the other would result in gibberish. How were such computers to talk to one another?

The ARPANET would run on ASCII, Taylor and Roberts replied. Those computers that used something else would just have to implement a translation layer for communicating with the outside world.
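For a concrete sense of what such a translation layer amounts to, here is a minimal sketch in Python. It leans on cp037, a later IBM EBCDIC code page that happens to ship with Python’s standard library; the character mappings on 1960s hardware differed, so treat this purely as an illustration of the idea rather than anything the ARPANET itself specified.

```python
# A minimal sketch of the "translation layer" idea: convert EBCDIC-encoded
# bytes to ASCII before handing them to the network, and back again on the
# way in. "cp037" is a later IBM EBCDIC code page used here purely for
# illustration -- the actual mappings on 1960s hardware differed.

def ebcdic_to_ascii(data: bytes) -> bytes:
    """Decode EBCDIC bytes and re-encode as 7-bit ASCII for the network."""
    return data.decode("cp037").encode("ascii")

def ascii_to_ebcdic(data: bytes) -> bytes:
    """Decode ASCII bytes from the network and re-encode for the local host."""
    return data.decode("ascii").encode("cp037")

if __name__ == "__main__":
    local_text = "HELLO, ARPANET".encode("cp037")   # what an IBM host might store
    on_the_wire = ebcdic_to_ascii(local_text)        # what the network would carry
    assert ascii_to_ebcdic(on_the_wire) == local_text
    print(on_the_wire)                               # b'HELLO, ARPANET'
```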

Fair enough. But then, how was the physical cabling to work? ARPA couldn’t afford to string its own wires all over the country, and the ones that already existed were designed for telephones, not computers.

No problem, came the reply. ARPA would be able to lease high-capacity lines from AT&T, and Claude Shannon had long since taught them all that information was information. Naturally, there would be some degree of noise on the lines, but error-checking protocols were by now commonplace. Tests had shown that one could push information down one of AT&T’s best lines at a rate of up to 56,000 baud before the number of corrupted packets reached a point of diminishing returns. So, this was the speed at which the ARPANET would run.

The next objection was the gnarliest. At the core of the whole ARPANET idea lay the stipulation that any computer on the network must be able to talk to any other, just like any telephone was able to ring up any other. But existing wide-area computer networks, such as the ones behind SAGE and SABRE, all operated on the railroad model of the old telegraph networks: each line led to exactly one place. To use the same approach as existing telephone networks, with individual computers constantly dialing up one another through electro-mechanical switches, would be way too inefficient and inflexible for a high-speed data network such as this one. Therefore Taylor and Roberts had another approach in mind.

We learned in the last article about R.W. Hamming’s system of error correction, which worked by sending information down a line as a series of packets, each followed by a checksum. In 1964, in a book entitled simply Communication Nets, an MIT researcher named Leonard Kleinrock extended the concept. There was no reason, he noted, that a packet couldn’t contain additional meta-information beyond the checksum. It could, for example, contain the destination it was trying to reach on a network. This meta-information could be used to pass it from hand to hand through the network in the same way that the postal system used the address on the envelope of a paper letter to guide it to its intended destination. This approach to data transfer over a network would soon become known as “packet switching,” and would prove of incalculable importance to the world’s digital future.[1]
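To make the idea a little more concrete, here is a toy packet in Python: a payload tagged with a destination address and a checksum, so that nodes along the way know where to send it and the receiver can detect corruption. The field names and the CRC-32 checksum are inventions of this sketch, not a description of any real ARPANET packet format.

```python
# A toy illustration of the packet idea described above: a chunk of data
# tagged with meta-information (here just a destination address) plus a
# checksum so the receiver can detect corruption. The field layout is
# invented for this sketch.
import zlib
from dataclasses import dataclass

@dataclass
class Packet:
    destination: str   # where the network should deliver this packet
    payload: bytes     # the actual data being carried
    checksum: int      # integrity check computed by the sender

def make_packet(destination: str, payload: bytes) -> Packet:
    return Packet(destination, payload, zlib.crc32(payload))

def is_intact(packet: Packet) -> bool:
    """Receiver-side check: does the payload still match its checksum?"""
    return zlib.crc32(packet.payload) == packet.checksum

if __name__ == "__main__":
    p = make_packet("ucla", b"blurred photo, please enhance")
    print(is_intact(p))            # True
    p.payload = b"garbled in transit"
    print(is_intact(p))            # False
```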

A “star” network topology, in which every computer communicates with every other by passing packets through a single “Grand Central Station.”

How exactly might packet switching work on the ARPANET? At first, Taylor and Roberts had contemplated using a single computer as a sort of central postal exchange. Every other computer on the ARPANET would be wired into this machine, whose sole duty would be to read the desired destination of each incoming packet and send it there. But the approach came complete with a glaring problem: if the central hub went down for any reason, it would take the whole ARPANET down with it.

A “distributed” network topology in which all of the computers work together to move messages through the system. It lacks a single point of failure, but is much more complicated to implement from a technical perspective.

Instead Taylor and Roberts settled on a radically de-centralized approach. Each computer would be directly connected to no more than a handful of other machines. When it received a packet from one of them, it would check the address. If it was not the intended final destination, it would consult a logical map of the network and send the packet along to the peer computer able to get it there most efficiently; then it would forget all about it and go about its own business again. The advantage of the approach was that, if any given computer went down, the others could route their way around it until it came online again. Thus there would be no easy way to “break” the ARPANET, since there would be no single point of failure. This quality of being de-centralized and self-correcting remains the most important of all the design priorities of the modern Internet.
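The forwarding behavior just described can be sketched in a few lines of Python. The node names and the hand-built next-hop tables (standing in for the “logical map of the network”) are hypothetical, and a real network would build and update such tables dynamically and route around failed links, which this toy ignores.

```python
# A highly simplified sketch of the store-and-forward behavior described
# above. Each node knows only its direct neighbors and a precomputed
# "next hop" table; a real network would build and update these tables
# dynamically and cope with failed links.
from typing import Dict

class Node:
    def __init__(self, name: str, next_hop: Dict[str, "Node"]):
        self.name = name
        self.next_hop = next_hop   # destination name -> neighboring Node

    def receive(self, destination: str, payload: str) -> None:
        if destination == self.name:
            print(f"{self.name}: delivered {payload!r}")
        else:
            relay = self.next_hop[destination]   # consult the logical map
            print(f"{self.name}: forwarding to {relay.name}")
            relay.receive(destination, payload)  # then forget about it

if __name__ == "__main__":
    # Three nodes in a line: ucla -- sri -- utah
    utah = Node("utah", {})
    sri = Node("sri", {"utah": utah})
    ucla = Node("ucla", {"utah": sri, "sri": sri})
    ucla.receive("utah", "HELLO FROM UCLA")
```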

Everyone at the meeting could agree that all of this was quite clever, but they still weren’t won over. The naysayers’ arguments still hinged on how precious computing horsepower was. Every nanosecond a computer spent acting as an electronic postal sorter was a nanosecond that computer couldn’t spend doing other sorts of more useful work. For once, Taylor and Roberts had no real riposte for this concern, beyond vague promises to invest ARPA funds into more and better computers for those who had need of them. Then, just as the meeting was breaking up, with skepticism still hanging palpably in the air, a fellow named Wesley Clark passed a note to Larry Roberts, saying he thought he had a solution to the problem.

It seemed to him, he elaborated to Taylor and Roberts after the meeting, that running the ARPANET straight through all of its constituent machines was rather like running an interstate highway system right through the center of every small town in the country. Why not make the network its own, largely self-contained thing, connected to each computer it served only by a single convenient off- and on-ramp? Instead of asking the computer end-users of the ARPANET to also direct its flow of traffic, one could use dedicated machines as the traffic wardens on the highway itself. These “Interface Message Processors,” or IMPs, would be able to move packets through the system quickly, without taxing the other computers. And they too could allow for a non-centralized, fail-safe network if they were set up the right way. Today IMPs are known as routers, but the principle of their operation remains the same.

A network that uses the IMPs proposed by Wesley Clark. Each IMP sits at the center of a cluster of computers, and is also able to communicate with its peers to send messages to computers on other clusters. A failed IMP actually can take a substantial chunk of the network offline under the arrangement shown here, but redundant IMPs and connections between them all could and eventually would be built into the design.

When Wesley Clark spoke, people listened; his had been an important voice in hacker circles since the days of MIT’s Project Whirlwind. Taylor and Roberts immediately saw the wisdom in his scheme.

The advocacy of the highly respected Clark, combined with the promise that the ARPANET need not cost them computer cycles if it used his approach, was enough to finally bring most of the rest of the research community around. Over the months that followed, while Taylor and Roberts worked out a project plan and budget, skepticism gradually morphed into real enthusiasm. J.C.R. Licklider had by now left IBM and returned to the friendlier confines of MIT, whence he continued to push the ARPANET behind the scenes. The younger generation coming up behind the old guard, especially, tended to be less enamored of the “giant brain” model of computing and more receptive to Lick’s vision, and thus to the nascent ARPANET. “We found ourselves imagining all kinds of possibilities [for the ARPANET],” remembers one Steve Crocker, a UCLA graduate student at the time. “Interactive graphics, cooperating processes, automatic database query, electronic mail…”

In the midst of the building buzz, Lick and Bob Taylor co-authored an article which appeared in the April 1968 issue of the journal Science and Technology. Appropriately entitled “The Computer as a Communications Device,” it included Lick’s most audacious and uncannily accurate prognostications yet, particularly when it came to the sociology, if you will, of its universal computer network of the future.

What will online interactive communities be like? They will consist of geographically separated members. They will be communities not of common location but of common interest [emphasis original]…

Each secretary’s typewriter, each data-gathering instrument, conceivably each Dictaphone microphone, will feed into the network…

You will not send a letter or a telegram; you will simply identify the people whose files should be linked to yours — and perhaps specify a coefficient of urgency. You will seldom make a telephone call; you will ask the network to link your consoles together…

You will seldom make a purely business trip because linking consoles will be so much more efficient. You will spend much more time in computer-facilitated teleconferences and much less en route to meetings…

Available within the network will be functions and services to which you subscribe on a regular basis and others that you call for when you need them. In the former group will be investment guidance, tax counseling, selective dissemination of information in your field of specialization, announcement of cultural, sport, and entertainment events that fit your interests, etc. In the latter group will be dictionaries, encyclopedias, indexes, catalogues, editing programs, teaching programs, testing programs, programming systems, databases, and — most important — communication, display, and modeling programs…

When people do their informational work “at the console” and “through the network,” telecommunication will be as natural an extension of individual work as face-to-face communication is now. The impact of that fact, and of the marked facilitation of the communicative process, will be very great — both on the individual and on society…

Life will be happier for the online individual because the people with whom one interacts most strongly will be selected more by commonality of interests and goals than by accidents of proximity. There will be plenty of opportunity for everyone (who can afford a console) to find his calling, for the whole world of information, with all its fields and disciplines, will be open to him…

For the society, the impact will be good or bad, depending mainly on the question: Will “to be online” be a privilege or a right? If only a favored segment of the population gets to enjoy the advantage of “intelligence amplification,” the network may exaggerate the discontinuity in the spectrum of intellectual opportunity…

On the other hand, if the network idea should prove to do for education what a few have envisioned in hope, if not in concrete detailed plan, and if all minds should prove to be responsive, surely the boon to humankind would be beyond measure…

The dream of a nationwide, perhaps eventually a worldwide web of computers fostering a new age of human interaction was thus laid out in black and white. The funding to embark on at least the first stage of that grand adventure was also there, thanks to the largess of the Cold War military-industrial complex. And solutions had been proposed for the thorniest technical problems involved in the project. Now it was time to turn theory into practice. It was time to actually build the Intergalactic Computer Network.

(Sources: the books A Brief History of the Future: The Origins of the Internet by John Naughton, Where Wizards Stay Up Late: The Origins of the Internet by Katie Hafner and Matthew Lyon, Hackers: Heroes of the Computer Revolution by Steven Levy, From Gutenberg to the Internet: A Sourcebook on the History of Information Technology edited by Jeremy M. Norman, The Dream Machine by M. Mitchell Waldrop, A History of Modern Computing (2nd ed.) by Paul E. Ceruzzi, Communication Networks: A Concise Introduction by Jean Walrand and Shyam Parekh, Communication Nets by Leonard Kleinrock, and Computing in the Middle Ages by Severo M. Ornstein. Online sources include the companion website to Where Wizards Stay Up Late and “The Computers of Tomorrow” by Martin Greenberger on The Atlantic Online.)

Footnotes

1. As Kleinrock himself would hasten to point out, he was not the sole originator of the concept, which has a long and somewhat convoluted history as a theory. His book was, however, the way that many or most of the folks behind the ARPANET first encountered packet switching.

Boffo Games

After Infocom was shut down in 1989, Mike Dornbrook, the mastermind behind the company’s InvisiClues hint books and much else that has become iconic for interactive-fiction fans of a certain generation, was determined to start a company of his own. Indeed, he was so motivated that he negotiated to take much of Infocom’s office furniture in lieu of cash as part of his severance package.

But alas, his entrepreneurial dream seemed vexed. He embarked on a mail-order catalog for maps and travel books — until he learned that Rand-McNally was starting a catalog of its own. He pivoted to offering customized traffic reports for drivers on the go — until it was decided by the authorities in the Boston area where he lived that mobile-phone users would not be allowed to call “premium-rate” numbers like the one he was setting up. So, in January of 1991, he started a regular job at a targeted-marketing and data-processing consultancy that had recently been purchased by American Express. Two years later, he was laid off, but carried his knowledge and contacts into his own data-mining startup. He was still trying to line up enough investment capital to get that company going properly when he got a call from Steve Meretzky, who before becoming a star Infocom designer had been his roommate in a little Boston apartment; in fact, it was Dornbrook who had first introduced Meretzky to the wonders of Zork, thus unleashing him on the world of adventure games.

Unlike Dornbrook, Meretzky had stayed in the games industry since Infocom’s shuttering, designing four adventures for Legend Entertainment and one for Activision from his Boston home. But he had grown tired of working remotely, and dearly missed the camaraderie and creative ferment of life at Infocom. Superhero League of Hoboken, his latest game for Legend (and by far the most inspired of his post-Infocom career in this critic’s opinion), had turned into a particularly frustrating experience for him; delays on the implementation side meant that it was still many months away from seeing the light of day. He had thus decided to start a games studio of his own — and he wanted his old pal Mike Dornbrook to run it for him. “I’ll help you to get it going,” agreed a somewhat reluctant Dornbrook, who after enduring the painful latter years of Infocom wasn’t at all sure he actually wanted to return to the industry.

And so Boffo Games was born. Sadly, all of Dornbrook’s forebodings would prove prescient.



At the time, the hype around multimedia computing was reaching a fever pitch. One of the biggest winners of the era was a Singaporean company called Creative Labs, whose Sound Blaster sound cards had been at the vanguard of a metamorphosis in computer audio since 1989. More recently, they had also begun selling CD-ROM drives, as well as “multimedia upgrade kits”: sound cards and CD-ROM drives in one convenient package, along with a few discs to get purchasers started on their magical journey.

Of late, however, another company had begun making waves in the same market. The Silicon Valley firm Media Vision had first captured headlines in newspaper financial sections in November of 1992, when it raised $45 million in an initial public offering in order to go head to head with Creative. Soon after, Media Vision released their Pro AudioSpectrum 16 sound card, the first to offer 16-bit — i.e., audio-CD-quality — sound playback. It took Creative months to follow suit with the Sound Blaster 16.

In the end, Media Vision would not be remembered for their honesty…

But Media Vision’s ambitions extended well beyond the sound-card and CD-ROM-drive market, which, as most financial analysts well realized, looked likely to plateau and then slowly tail off once everyone who wanted to add multimedia capabilities to an existing computer had done so and new computers were all shipping with these features built-in. To secure their long-term future, Media Vision planned to use their hardware profits to invest heavily in software. By the Christmas buying season of 1993, announced the company’s CEO Paul Jain at the beginning of that same year, they would have ten cutting-edge CD-ROM games on the market. To prove his bona fides, he had recruited to run his games division one Stan Cornyn, a legendary name among music-industry insiders.

Cornyn had been hired by Warner Bros. Records in 1958 to write liner notes, and had gone on to become instrumental in building Warner Music into the biggest record company in the world by the end of the 1980s, with superstars like Madonna and Prince in its stable of artists. During his last years at Warner, Cornyn had headed the Warner New Media spinoff, working on Philips CD-I titles and such other innovations as the CD+G format, which allowed one to place lyrics sheets and pictures on what were otherwise audio CDs. In 1992, he had left Warner. “Corporate [leadership] wanted my company to turn a profit, and I had no idea how our inventions would conquer the world,” he would write later. “That, I left to others.” Instead he decided to reinvent himself as a games-industry executive by signing on with Media Vision. His entrance said much about where the movers and shakers in media believed interactive entertainment was heading. And sure enough, he almost immediately scored a major coup, when he signed press darling Trilobyte to release their much-anticipated sequel to The 7th Guest under the Media Vision banner.

As it happened, Marc Blank, one of the original founders of Infocom, had worked at Warner New Media for a time with Cornyn; he had also remained friendly with both Mike Dornbrook and Steve Meretzky. When he read about Cornyn’s hiring by Media Vision, it all struck Dornbrook as serendipitous. “I thought, ‘Aha!'” he remembers. “‘We have a new person who needs content and has a massive budget, and we have a connection to him.'” It was now the fall of 1993. Media Vision hadn’t published the ten games that Paul Jain had promised by this point — they’d only managed two, neither of them very well-received — but that only made Cornyn that much more eager to sign development deals.

Blank proved as good a go-between as Dornbrook had hoped, and a meeting was arranged for Monday, January 17, 1994, in the Los Angeles offices of Stan Cornyn’s operation. Taking advantage of cheaper weekend airfares, Dornbrook and Meretzky took off from a Boston winter and landed amidst the storied sunshine of Southern California two days before that date. Looking at the pedestrians strolling around in their shorts and flip-flops while he sweated in his winter pullover, Dornbrook said to his friend, “You know, I can kind of see why people want to live out here.”

“You’d never catch me out here,” answered Meretzky, “because of the earthquakes.”

“It would be just our luck, wouldn’t it…” mused Dornbrook.

Fast-forward to 4:30 AM on Monday morning, in the fourth-floor hotel room they were sharing. Dornbrook:

The initial shock threw Steve out of his bed and threw me up in the air. I grabbed onto my mattress and held on for dear life. It was like riding a bucking bronco. The building was shaking and moving in ways I didn’t think a building could survive. I was convinced that at any second the ceiling beams were going to fall on me and crush me. That went on for 35 seconds — which feels like about five minutes in an earthquake. And then it stopped.

We were both fine, but it was pitch black in the room; all the lights were out. But I noticed there was a little red light on the TV. I thought, “Oh, we still have power.” So, I decided to turn the TV on. All my life, the public-broadcast system was telling me, in case of an emergency, they would tell me what to do. While I’m turning it on, Steve is yelling, “We need to get out of here!”

I said, “I want to see what they’re telling us to do.” It was a newsroom in LA, one of the main network stations. The camera was zoomed all the way back in a way you normally didn’t see. There were all these desks, all empty except one. That person was screaming and putting his hands over his head and crawling under the desk — and then the power went out.

I knew the TV station was many, many miles from us. This was not just local; this was a major quake. I’m thinking that the San Andreas Fault might have given way. We might not have water; we’re in a desert. We might be trapped here with no water! So, I crawled into the bathroom and started filling the bathtub with water. Steve is yelling, “What the hell are you doing? We’ve got to get out of here!”

I said, “We need water!”

After the bathtub was full, we got dressed in the dark and worked our way down the hall. We had no way of knowing if there was floor in front of us; it was pitch black. So, I let him go first. He felt his way down the hall, making sure there was a floor there. We got to the exit stairs, and they were pitch black also. We went down step by step, making sure there was another step in front of us, all the way to the first floor.

Then we opened the door into the parking lot, and I remember gasping at the sight. We’re in a desert, it’s dry as can be, and there’s no power for hundreds of miles. You could see stars right down to the horizon. I’ve never seen a sky so clear. It was stunning.

The 1994 Los Angeles earthquake killed 57 people, injured more than 9000, and did tens of billions of dollars of property damage. But the show must go on, as they say in Hollywood. The meeting with the Media Vision games division convened that afternoon in Stan Cornyn’s house, delayed only about six hours by the most violent earthquake in Los Angeles history.

Anyone familiar with my earlier coverage of Steve Meretzky’s career will know that he collected game ideas like some people collect stamps. True to form, he showed up at Cornyn’s house with no fewer than 21 of them, much to the chagrin of Dornbrook, who would have vastly preferred to pitch just one or two: “Because they don’t really have a clue what will work, and they think you do.” On this occasion, though, everyone in the room was feeling giddy from having survived the morning, not to mention the bottles of good wine Cornyn brought up from his cellar, as they listened to Meretzky work through his list. When he was finally finished, Cornyn and his team huddled together for a few minutes, then returned and announced that they’d take eleven of them, thank you very much, and they’d like the first by Christmas at the latest. As a demonstration of good faith while the lawyers wrote up the final contracts, Cornyn handed Dornbrook and Meretzky a check for $20,000. “Get started right now,” he said. “We don’t want you to lose a day.”

After they’d digested this bombshell, Dornbrook and Meretzky asked each other which idea they could possibly bring to fruition in the span of just nine months or so, given that they were literally starting from scratch: no office, no staff, no computers, no development tools, no investors. (Boffo’s founding capital had been exactly $10.) They decided on something called Hodj ‘n’ Podj.

Hodj ‘n’ Podj wasn’t a traditional adventure game, but it was a classic Steve Meretzky project, a game concept which had caught his fancy a long time ago and had remained in his notebook ever since. Its origins reached back to Fooblitzky, the most atypical Infocom game ever: a multiplayer board game that happened to be played on the computer, designed mostly by Mike Berlyn circa 1984. It was a roll-and-move game which revolved around deducing which four of eighteen possible items your character needed to collect in order to win, and then carrying them across the finish line before your competitors did the same with their collections. Played on the company’s big DEC PDP-10, Fooblitzky was a fixture of life inside mid-period Infocom. In late 1985, it became the one and only Infocom product to use their little-remembered cross-platform graphics engine, becoming in the process something of a case study in why such an engine was more problematic than their ubiquitous textual Z-Machine: Fooblitzky shipped only for the IBM PC, the Apple II, and the Atari 8-bit line of computers, running on the last two at the speed of treacle on a cold day and not coming close to utilizing the full graphics capabilities, modest though they may have been, of any of its hosts. A casual family game at a time when such things were virtually unheard of on computers, and a completely silent and graphically underwhelming one at that, it sold only about 7500 copies in all.

Meretzky’s idea, then, was to update Fooblitzky for an era of home computing that ought to be more friendly to it. He would retain the core mechanics — roll and move, deduce and fetch — but would polish up the interface and graphics, write a fresh framing story involving a kidnapped princess in a fairy-tale kingdom, and add one important new element: as you moved around the board, you would have to play puzzle- and/or action-based mini-games to earn the clues, items, and money you needed. The game would run under Windows — no futzing about with MS-DOS IRQ settings and memory managers! — in order to reach beyond the hardcore-gamer demographic who would probably just scoff at it anyway. It seemed a more than solid proposition, with an important practical advantage that shot it right to the top of Boffo’s project list: the mini-games, where the bulk of the programming would be required, were siloed off from one another in such a way that they could be developed by separate teams working in parallel. Thus the project should be finishable in the requested nine months or so.

Back in cold but blessedly stable Boston, Dornbrook and Meretzky rented office space, hired staff, and bought computers on Media Vision’s dime. The final contract arrived, and all still seemed fine, so much so that Dornbrook agreed to wind up his data-mining venture in favor of doing games full time again. Then, one morning in early April, he opened his newspaper to read that Media Vision was being investigated by the Securities and Exchange Commission for serious accounting malfeasance.

In retrospect, the signs had been there all along, as they usually are. The move into software should have raised red flags more than a year earlier. “When a company switches or expands its business line into something completely different, it generally means management fears that growth will slow in the main line,” wrote stock-market guru Kathryn F. Staley as part of the round of Monday-morning quarterbacking that now began. “When they expand into a highly competitive business that costs money for product development (like software game titles) when the base business eats money as well, you sit back and watch for the train wreck to happen.” Herb Greenberg, a financial correspondent for the San Francisco Chronicle, had been sounding the alarm about Media Vision since the summer of 1993, noting how hard it was to understand how the company’s bottom line could look as good as it did; for all the buzz around Media Vision, it was Creative Labs who still appeared to be selling the vast majority of sound cards and CD-ROM drives. But nobody wanted to listen — least of all two Boston entrepreneurs with a dream of starting a games studio that would bring back some of the old Infocom magic. Media Vision’s stock price had stood at $46 on the day of that earthquake-addled meeting in Los Angeles. Four months later, it stood at $5. Two months after that, the company no longer existed.

As the layers were peeled away, it was learned that Paul Jain and his cronies had engaged in a breathtaking range of fraudulent practices to keep the stock price climbing. They’d paid a fly-by-night firm in India to claim to have purchased $6 million worth of hardware that Media Vision had never actually made. They’d stashed inventory they said they had sold in secret warehouses in several states. (This house of cards started to fall when Media Vision’s facilities manager, who was not in on the scheme, asked why she kept getting bills from warehouses she hadn’t known existed.) They’d capitalized the expense of their software projects so as to spread the bills out over many years — a practice that was supposed to be used only for permanent, ultra-expensive infrastructure like factories and skyscrapers. Herb Greenberg revealed in one of his articles that they’d gone so far as to capitalize their corporate Christmas party. After long rounds of government investigations and shareholder lawsuits, Paul Jain and his chief financial officer Steve Allan would be convicted of wire fraud and sentenced to prison in 2000 and 2002 respectively. “This was certainly one of the dirtiest cases I was ever involved in,” said one lawyer afterward. There is no evidence to suggest that Stan Cornyn’s group was aware of any of this, but the revelations nevertheless marked the end of it alongside the rest of Media Vision. Cornyn himself left the games industry, never to return — understandably enough, given the nature of his brief experience there.

Showing amazing fortitude, Dornbrook, Meretzky, and the team of programmers and artists they’d hired just kept their heads down and kept working on Hodj ‘n’ Podj while Media Vision imploded. When the checks stopped coming from their benefactor, the founders quit paying themselves and cut all other expenses to the bone. That October, Hodj ‘n’ Podj was finished on time and under budget, but it was left in limbo while the bankruptcy court sorted through the wreckage of Media Vision. In December, the contract was bought at the bankruptcy fire sale by Virgin Interactive, and against all odds the game reached store shelves under their imprint in March of 1995. (Virgin also wound up with The 11th Hour, the sequel to The 7th Guest — an ironic and rather delicious turn of events for them, given that they had actually been the publisher of The 7th Guest back in the day, only to be abandoned by a starstruck Trilobyte when the time came to make the sequel.)

Hard sales figures for Hodj ‘n’ Podj aren’t available, but we can say with confidence that it wasn’t a big seller. In a 1998 Game Developers Conference presentation, Dornbrook blamed a shakeup at Virgin for its disappointing performance. It seems that the management team that bought it at the bankruptcy sale was excited about it, but another team that replaced the first was less so, and this latter refused to fund any real advertising.

These things were doubtless a major factor in its lack of commercial success, but it would be a bridge too far to call Hodj ‘n’ Podj a neglected classic. Although it’s bug-free and crisply presented, it wears out its welcome way more quickly than it ought to. A big part of the problem is the mini-games, which are one and all reskinned rehashes of hoary old perennials from both the analog and digital realms: Battleship, cryptograms, Solitaire, Kalah, video poker, etc. (“These tired old things are games you could play in your sleep, and a bit of freshening up on the soundtrack does little to encourage you to stay awake,” wrote Charles Ardai, harshly but by no means entirely inaccurately, in his review for Computer Gaming World.) Hodj ‘n’ Podj gives you no reason to explore the entire board, but rather makes the most efficient winning gambit that of simply hanging around the same few areas, playing the mini-games you are best at over and over; this speaks to a game that needed a lot more play-testing to devise ways to force players out of their comfort zones. But its most devastating weakness is the decision to support only two players in a game that positively begs to become a full-blown social occasion; even Fooblitzky allows up to four players. A board filled with half a dozen players, all bumping into and disrupting one another in all kinds of mischievous ways, would make up for a multitude of other sins, but this experience just isn’t possible. Hodj ‘n’ Podj isn’t a terrible game — you and a friend can have a perfectly enjoyable evening with it once or twice per year — but its concept is better than its implementation. Rather than becoming more interesting as you learn its ins and outs, as the best games do — yes, even the “casual” ones — it becomes less so.


The main game board. Whatever else you can say about it, Hodj ‘n’ Podj is beautifully presented, thoroughly belying its hurried assembly by a bunch of short-term hired hands. Its pixel art still looks great today.

Yes, there are riddles, always the last resort of a game designer out of other ideas.

Whack-a-beaver!



After Hodj ‘n’ Podj, the story of Boffo turns into a numbing parade of games that almost were. By Mike Dornbrook’s final tally, 35 of their proposals were met with a high degree of “interest” by some publisher or another; 21 led to “solid commitments”; 17 garnered verbal “promises”; 8 received letters of intent and down payments; 5 led to signed contracts; and 2 games (one of them Hodj ‘n’ Podj) actually shipped. I don’t have the heart to chronicle this cavalcade of disappointment in too much detail. Suffice to say that Boffo chose to deal — or was forced to deal — mostly with the new entities who had entered the market in the wake of CD-ROM rather than the old guard who had built the games industry over the course of the 1980s. As the venture capitalists and titans of traditional media who funded these experiments got nervous about a multimedia revolution that wasn’t materializing on the timetable they had expected, they bailed one by one, leaving Boffo out in the cold. Meanwhile the hardcore gaming market was shifting more and more toward first-person shooters and real-time strategy, at the expense of the adventure games which Steve Meretzky had always created. The most profitable Boffo project ever, notes Dornbrook wryly, was one which disappeared along with Time Warner Interactive, leaving behind only a contract which stipulated that Boffo must be paid for several months of work that they now didn’t need to do.

But Boffo did manage to complete one more game and see it released, and it’s to that project that we’ll turn now. The horrid pun that is its title aside, the thunderingly obvious inspiration for Steve Meretzky’s The Space Bar is the cantina scene from Star Wars, with its dizzying variety of cute, ugly, and just plain bizarre alien races all gathered into one seedy Tatooine bar, boozing, brawling, and grooving to the music. Meretzky wanted to capture the same atmosphere in his game, which would cast its player as a telepathic detective on the trail of a shapeshifting assassin. To solve the case, the player would not only need to interrogate the dozens of aliens hanging out at The Thirsty Tentacle, but enter the minds of some of them to relive their memories. Meretzky:

The main design goal for the project was to create an adventure game which was composed of a lot of smaller adventure games: a novel is to a short-story collection as a conventional adventure game would be to The Space Bar. In addition to just a desire to try something different, I also felt that people had increasingly scarce amounts of [free] time, and that starting an adventure game required setting aside such a huge amount of time, many tens of hours. But if, instead, you could say to yourself, “I’ll just play this ‘chapter’ now and save the rest for later,” it would be easier to justify picking up and starting the game. Secondary design goals were to create a spaceport bar as compelling as the one in the first Star Wars movie, to create a Bogart-esque noir atmosphere, to be really funny, and to prove that you could make a graphic adventure that, like the Infocom text games, could have a lot of “meat on the bones.” As with Hodj ‘n’ Podj, I felt that just a collection of independent games was too loose and required a connecting thread; thus the meta-story involving [the player character] Alien Node’s search for the shapeshifter Ni’Dopal. Empathy Telepathy was just a convenient device for connecting the “short stories” to the meta-story.

In the spring of 1995, the tireless Mike Dornbrook was on the verge of clinching a deal to make this game — and for once it was not a deal with a trend-chasing multimedia dilettante: he had no less enviable a fish than Microsoft on the hook. Then Meretzky learned of a startup called Rocket Science Games that had on its staff one Ron Cobb, a visual-design legend who had crafted the look of such films as Alien, The Terminator, Back to the Future (yes, the DeLorean time machine was his…), The Abyss, and Total Recall, and who had even, according to Hollywood rumor, been the uncredited creator of E.T., Steven Spielberg’s $792 million-grossing extra-terrestrial. But before all of that, Cobb had made his name by doing the cantina scene for Star Wars. It would be crazy to pass up the chance to have him create the aliens in The Space Bar, said Meretzky. Dornbrook thought it was far crazier to turn down a deal with Microsoft in favor of an unproven startup, but he sighed and made the calls. Soon after, Boffo signed a contract with Rocket Science.

Once again, the warning signs were all there, at least in retrospect. Rocket Science’s founder Steve Blank (no relation to Marc Blank) was a fast-talking showman fond of broad comparisons. His company was “Industrial Light & Magic and Disney combined!” he said. Or, even more inexplicably, it was Cream, the 1960s rock supergroup. Tellingly, none of his comparisons betrayed any familiarity with the current games industry. “Rocket Science feels good and looks good, even though when someone asks me to describe it, I’m somewhat at a loss,” said Blank. In most times and places, a founder unable to describe his company is cause for concern among pundits and investors. But in Silicon Valley in 1995, it was no problem as long as its products were to ship on little silver discs. Blank told his interviewers that he was so awash in investment capital that he could run his company for five years without pulling in any revenue at all.

That was the version of Rocket Science which Boffo signed on with, the one which was capturing the cover of Wired magazine. The following year, “I found out that our games are terrible, no one is buying them, our best engineers [have] started leaving, and with 120 people and a huge burn rate, we’re running out of money and about to crash,” Blank later remembered in a mea culpa published in Forbes. The games in question consisted mostly of simplistic arcade-style exercises, not terribly well designed or implemented, threaded between filmed video snippets, not terribly well written or acted. Gamers took one look at them and then returned to their regularly scheduled sessions of DOOM and Warcraft.

Just as they had with Hodj ‘n’ Podj, Boffo kept their heads down and kept working on The Space Bar while Rocket Science was “cratering,” to use Steve Blank’s favorite vernacular. Meretzky did get to work with Ron Cobb on the visual design, which was quite a thrill for him. A seasoned animation team under Bill Davis, Sierra On-Line’s former head of game visuals, created the graphics using a mixture of pixel art and 3D models, with impressive results. Everyone kept the faith, determined to believe that a game as awesome as this one was shaping up to be couldn’t possibly fail — never mind the weakness of Rocket Science, much less the decline of the adventure-game market. As the months went by and the reality of the latter became undeniable, Meretzky and his colleagues started to talk about The Space Bar as the game that would bring adventures back to the forefront of the industry. “We concentrated on making The Space Bar such a winner that everyone would want to work with us going forward,” says Dornbrook.

In the meantime, Rocket Science continued its cratering. The embattled Steve Blank was replaced by Bill Davis in the CEO’s chair in 1996, and this bought the company a bit more money and time from their investors. In the long run, though, this promotion of an animation specialist only emphasized Rocket Science’s core problem: a surfeit of audiovisual genius, combined with a stark lack of people who knew what made a playable game. In April of 1997, the investors pulled the plug. “It’s tragic when a collection of talent like Rocket Science assembled is disbanded,” said Davis. “It’s a great loss to the industry.” Yet said industry failed to mourn. In fact, it barely noticed.

The Space Bar was in its final stages of development when the news came. Boffo’s contract was passed to SegaSoft, the software division of videogame-console maker Sega, who had invested heavily in Rocket Science games for the underwhelming Sega Saturn. Dornbrook and Meretzky couldn’t help but feel a sense of déjà vu. Just as had happened with Hodj ‘n’ Podj, The Space Bar was crawling out from under the wreckage of one publisher into the arms of another who didn’t seem to know quite what to do with it. In the weeks before the game’s release, SegaSoft ran a series of weirdly tone-deaf advertisements for it; for reasons that no one could divine, they were take-offs on the tabloid journalism of The National Enquirer. They were so divorced from the game they claimed to be promoting that the one silver lining, says Dornbrook, was that “at least no one would associate them with our game.”

Unlike Hodj ‘n’ Podj, The Space Bar didn’t prove a commercial disappointment: it turned into an outright bomb. Meretzky still calls its disastrous failure the bitterest single disappointment of his career. Soon after, he and Dornbrook finally gave up and shuttered Boffo. Four years of failure and frustration were enough for anyone.

Dornbrook’s 1998 GDC presentation on the rise and fall of Boffo focused almost exclusively on the little studio’s poor treatment by its larger partners, on the many broken promises and breaches of faith they were forced to endure, until they could endure no more. But at the end of it, he did acknowledge that he might appear to be “blaming all of this on others. Weren’t we also at fault here? Did we have problems on our end?” He concluded that, an unfortunate decision here or there aside — the decision to sign with Rocket Science instead of Microsoft certainly springs to mind — they largely did not. He noted that they never failed to emphasize their biggest strength: “Steve’s a fantastic game designer.”

Does The Space Bar support this contention?

On the surface, the game has much going for it: its rogues’ gallery of misfit aliens is as ingenious and entertaining as you would expect from a meeting of the minds of Steve Meretzky and Ron Cobb; it’s as big and meaty as advertised, packed wall to wall with puzzles; its graphics and voice acting are mostly pretty great; it fills three CDs, and feels like it ought to fill even more. It’s the product of a team that was obviously thinking hard about the limitations of current adventure games and how to move past them — how to make the genre more welcoming to newcomers, as well as tempting once again for those who had gotten tired of the adventure-game status quo and moved on to other things. Among its innovative interface constructs are an auto-map that works wonderfully and a comprehensive logbook that keeps track of suspects, clues, and open puzzles. Dornbrook has called it “a labor of love,” and we have no reason to doubt him.

Nevertheless, it is — and it gives me no pleasure to write this — a flabbergastingly awful game. It plays as if all those intense design discussions Meretzky took part in at Infocom never happened, as if he was not just designing his first adventure game, but was the first person ever to design an adventure game. All the things that Ron Gilbert told the world made adventure games suck almost a decade earlier are here in spades: cul-de-sacs everywhere that can only be escaped by pressing the “restore” button, a need to do things in a certain order when you have no way of knowing what that order is, a need to run through the same boring processes over and over again, a stringent time limit that’s impossible to meet without hyper-optimized play, player deaths that come out of nowhere, puzzles that make sense only in the designer’s head. It’s not just sadistically but incompetently put together as a game. And as a marketplace proposition, it’s utterly incoherent, not to say schizophrenic; how can we possibly square this design with Meretzky’s stated goal of making a more approachable adventure game, one that would be digestible in snack-sized chunks? The Space Bar would seem to be aimed at two completely separate audiences, each the polar opposite of the other; I don’t believe there’s any hidden demographic of casual masochists out there. And there’s no difficulty slider or anything else that serves to bridge the chasm.


One of the oddities of the Boffo story is the sanguine belief on the part of the otherwise savvy Mike Dornbrook that he could use Steve Meretzky’s supposed “star power” to sell games, as demonstrated by his prominent billing here on the cover of the Space Bar box. Meretzky wasn’t any Sid Meier or John Romero; he was a cult figure rather than a household name even among hardcore gamers, adored by a small group of them for his work with Infocom but largely unknown to the rest of them. His last game to sell over 100,000 copies had been Leather Goddesses of Phobos in 1986; his last to manage 50,000, Spellcasting 101 in 1990.

It wouldn’t be a Steve Meretzky game without a bit of this sort of thing…

These aliens are among the funniest. They’re an incredibly advanced and powerful race, but they look like Tiki drinks, and everyone is forever picking them up and trying to sip from them.

The very well-done auto-map.



If The Space Bar sold ten copies, that was ten too many; I hope those ten buyers returned it for a refund. I don’t blame Mike Dornbrook for not being aware of just how terrible a game The Space Bar was; he was way too close to it to be expected to have an objective view under any circumstances, even as he was, as he forthrightly acknowledges, never really much of a gamer after his torrid early romance with Zork had faded into a comfortable conviviality. Still, to analyze the failure of Boffo only in terms of market pressures, bad luck, and perhaps just a few bad business choices is to fail at the task. In addition to all of these other factors, there remains the reality that neither of their two games was actually all that good. Nothing about The Space Bar would lead one to believe that Steve Meretzky is “a fantastic game designer.”

Yet Meretzky could in fact be a fantastic game designer. Back in 2015, writing about his 1987 Infocom game Stationfall, I called him “second to no one on the planet in his ability to craft entertaining and fair puzzles, to weave them together into a seamless whole, and to describe it all concisely and understandably.” I continue to stand by that statement in the context of his games of that era. So, how did we get from Stationfall to The Space Bar?

I belabor this question not because I want to pick on Steve Meretzky, whose half-dozen or so stone-cold classic games are half a dozen more than I can lay claim to, but because I think there’s an important lesson here about the need for collaboration in game design. I tend to see Meretzky’s rather disappointing output during the 1990s — including not only his Boffo games but those he did for Legend and Activision — as another ironic testament to Infocom’s genius for process. Infocom surrounded the designer of each of their games with skeptical, questioning peers, and expected him to work actively with a team of in-house testers who were empowered to do more than just point out bugs and typos, who were allowed to dig into what was fun and unfun, fair and unfair. Meretzky never worked in such an environment again after Infocom — never worked with people who were willing and able to tell him, “Maybe this joke goes on a bit too long, Steve,” or, “Maybe you don’t need to ask the player to go through this dozen-step process multiple times.” The end results perhaps speak for themselves. Sometimes you need colleagues who do more than tell you how fantastic you are.

Steve Meretzky never designed another full-fledged adventure game after The Space Bar. Following a few dissatisfying intermediate steps, he found his way into the burgeoning world of casual social games, distributed digitally rather than as boxed products, where he’s done very well for himself since the turn of the millennium. Meanwhile Mike Dornbrook signed on with a little company called Harmonix that reminded him somewhat of Infocom, being staffed as they were with youthful bright sparks from MIT. After years of refining their techniques for making music interactive for non-musicians, they released something called Guitar Hero in 2005. Both of the principals behind Boffo have enjoyed second acts in the games industry that dwarf their first in terms of number of players reached and number of dollars earned. So, it all worked out okay for them in the end.

(Sources: the books Game Design: Theory and Practice, second edition, by Richard Rouse III, Exploding: The Hits, Hype, Heroes, and Hustlers of the Warner Music Group by Stan Cornyn, Capital Instincts: Life as an Entrepreneur, Financier, and Athlete by Richard L. Brandt, Thomas Weisel, and Lance Armstrong, and The Art of Short Selling by Kathryn F. Staley; Computer Gaming World of May 1995, August 1995, May 1997, and December 1997; Questbusters 116; Computer Games Strategy Plus of August 1996; Wired of November 1994 and July 1997; San Francisco Chronicle of August 29 2000; the June 1993 issue of Sierra’s customer newsletter InterAction. Online sources include a CD Mag interview with Steve Meretzky, an Adventure Classic Gaming interview with Steve Meretzky, a Happy Puppy interview with Steve Meretzky, “Failure and Redemption” by Steve Blank at Forbes, and Mike Dornbrook’s presentation “Look Before You Leap” at the 1998 Game Developers Conference. But my most valuable source of all was Karl Kuras’s more than four-hour (!) interview with Mike Dornbrook for his Video Game Newsroom Time Machine podcast, a truly valuable oral history of the games industry from a unique perspective. Thanks, Karl and Mike!)

 

The Dark Eye

The user-interface constructs that are being developed in computer games are absolutely critical to the advancement of digital culture, as much as it might seem heretical to locate the advancement of civilization in game play. Now, yes, if I thought my worth as a person would be judged in the next century by the body counts I amassed in virtual-fighting games, I guess I’d be worried and dismayed. But if the question is whether a wired world can be serious about art, whether the dynamics of interactive media’s engagement can provide a cultural experience, I think it’s silly to argue that there are inherent reasons why it cannot.

— Michael Nash

Michael Nash

The career arc of Michael Nash between 1991 and 1997 is a microcosm of the boom and bust of non-networked “multimedia computing” as a consumer-oriented proposition. The former art critic was working as a curator at the Long Beach Museum of Art when Bob Stein, founder of The Voyager Company, saw some of the cutting-edge mixed-media exhibitions he was putting together and asked him to come work for Voyager. Nash jumped at the chance, which he saw as a once-in-a-lifetime opportunity to become a curator on a much grander scale.

I was very interested in TV innovators like Ernie Kovacs and Andy Kaufman, in the development of music videos, and in the work of artists using the computer. [I believed] that opportunities can open up for artists at key times in the history of media — artists dream up the kinds of possibilities that push media to envision new things before the significance of these things is generally understood. “Where do you want to go today?” the [technical] architects of the new media ask, because they don’t know. They’re waiting for some great vision to make all this abstract possibility into compelling experiences that will provide shape, purpose, and direction. The potential of the new media to express cultural ideas has increased much faster than the development of new cultural ideas, so the potential is there.

Michael Nash’s official title at Voyager was that of Director of the Criterion Collection, the company’s line of classic films on laser disc — also its one reliably profitable endeavor, the funding engine that powered all of Bob Stein’s more esoteric experiments in interactive multimedia. But roles were fluid at Voyager. “It felt like a lair of tech-enamored bohemians,” remembers Nash. “The company style was 1970s laid-back mixed with intense intellectual ferment and communalism. The work environment was frenetic, at times even a little chaotic.”

As the hype around multimedia reached a fever pitch, everyone who was anyone seemed to want a piece of Voyager. In a typical week, the receptionist might field phone calls from rock star David Bowie, from thriller author Michael Crichton, from counterculture guru Timothy Leary, from cognitive scientist Donald Norman, from Apple CEO John Sculley, from computer scientist Alan Kay, from particle physicist Murray Gell-Mann, from evolutionary biologist Stephen Jay Gould, from classical cellist Yo-Yo Ma, and from film critic Roger Ebert. The star power on the production side of the equation dwarfed the modest sales of Voyager’s CD-ROMs almost to the point of absurdity. (Only two Voyager CD-ROMs would ever crack 100,000 units in total sales, while most failed to manage even 10,000.)

Another of the stars who wound up working with Voyager — a star after a fashion, anyway — was the Residents, a still-extant San Francisco-based collective of musicians and avant-garde conceptual artists whose members have remained anonymous to this day; they dress in disguises whenever they perform live. Delighting in the obliteration of all boundaries of bourgeois good taste, the Residents both deconstruct existing popular music — their infamous 1976 album The Third Reich n’ Roll, for example, re-contextualized dozens of classic postwar hits as Hitler Youth anthems — and perform their own bizarre original songs. Sometimes it’s difficult to know which is which; their 1979 album Eskimo, for instance, purported to be a collection of Inuit folk songs, but was really a put-on from first to last.

During the 1980s, the Residents began to make the visual element of their performances as important as the music, creating some of the most elaborate concert spectacles this side of Pink Floyd. The term “multimedia” had actually enjoyed its first cultural vogue as a label for just this sort of performance, after it was applied to the Exploding Plastic Inevitable shows put on by Andy Warhol and the Velvet Underground in 1966 and 1967. Thus it was rather appropriate for the Residents to embrace the new, digital definition of multimedia when the time came. It was Michael Nash who made the deal to turn the Residents’ 1991 album Freak Show, a song cycle about the lives and loves of a group of circus freaks, into a 1994 Voyager CD-ROM. Nash:

Within alienage, we discover a lot about the paradox of our own alienation. The recognition of difference is the way we establish our identity and the uniqueness of our own point of view. We are drawn to extreme kinds of “alien” identity — freak shows, fanatics, psychotics, serial killers, nightmares, monsters from outer space — because we are fascinated by absolute otherness, lying as it does at the heart of our own sense of self. We never tire of this paradox because it is so charged by opposites: quirky, eccentric, weird, dark, transgressive vision is so different from our own and yet so full of the very thing that makes us different, that gives our identity its integrity. I think it’s a powerful dynamic to draw on in establishing the essential attributes of extraordinary inner realms that distinguish the best work in the field.

Jelly Jack, one of the freaks of Freak Show.

Critics of the capitalistic system though they were, the Residents weren’t above using the Freak Show CD-ROM to sell some other merch — in a suitably ironic way, of course.

Personally, I find the sentiment above — and the tortured grad-school diction in which it’s couched — to be something the best artists grow out of, just as I find raw honesty to produce a higher form of art than the likes of the Residents’ onion of off-putting artificiality and provocation for the sake of it. Tod Browning’s 1932 film Freaks, the obvious inspiration for the Residents’ album and the CD-ROM, offers a more empathetic, compassionate glimpse of circus “aliens” in my opinion. But to each his own: there’s no question that Freak Show was another bold statement from Voyager that interactive CD-ROMs could and should deal with any and all imaginable subject matter.

The same year that Freak Show was released, Michael Nash left Voyager to set up his own multimedia publisher. Freak Show had been one of the few Voyager discs that could be reasonably labeled a game. Now, Nash wanted to move further in that direction with the company he called Inscape. In a testament to both the tenor of the times and his own considerable charisma, HBO and Warner Music Group agreed to invest $2.5 million each in the venture. Any number of existing games publishers would have killed for a nest egg such as that.

But then, Inscape and Michael Nash himself were the polar opposite of all existing stereotypes about computer games. Certainly the dapper, well-spoken Nash could hardly have been less like the scruffy young men of id Software, those makers of DOOM, the biggest hardcore-gaming sensation of the year. The id boys were just the latest of the long line of literal or metaphorical bedroom programmers who had built the games industry as it currently existed, young men who played games and obsessed over the inner workings of the computers that ran them almost to the exclusion of all the rest of life’s rich pageant. Nash, on the other hand, was steeped in a broader, more aesthetically nuanced tradition of arts and humanities, and knew almost nothing about the games that had come before the multimedia boom he found so bracing. In an ideal world, each might have learned from the other: Nash might have pushed the existing game studios to mine some of the rich veins of culture beyond epic fantasy and action-movie science fiction, and they in their turn might have taught Nash how to make good games that made you want to keep coming back to them. In the real world, however, the two camps mostly just sniped snidely at one another — when, that is, they deigned to acknowledge one another’s existence at all. Nash was too busy beating the drum for “radical alternative subversive perspectives, what I call transgressive work” to think much about the more grounded, sober craft of good game design.

Most of Inscape’s output, then, is all too typical of such an entity in such an era. The Residents stayed loyal to Nash after he left Voyager, and helped Inscape to make Bad Day on the Midway, another, modestly more ambitious take on the lives of circus freaks. Meanwhile Nash, who seemed to have a special affinity for avant-garde rock music, also joined forces with the only slightly less subversive but much more commercially successful collective known as Devo — in a reflection of their shared sensibilities, both Devo and the Residents had once recorded radically deconstructed versions of the Rolling Stones classic “Satisfaction” — to make something called Adventures of the Smart Patrol. Such works garnered some degree of praise in their time from organs of higher culture who were determined to see that which they most wished to see in them; writing for The Atlantic, Ralph Lombreglia went so far as to call Smart Patrol “the CD-ROM equivalent of Terry Gilliam’s remarkable film Brazil.” Those who encounter these and other, similar rock-star vanity projects today, from artists as diverse as Prince and Peter Gabriel, are more likely to choose adjectives like “aimless” and “tedious.” (“Will we look back in nostalgia on such titles as Bad Day on the Midway and Adventures of the Smart Patrol?” asked Lombreglia in his 1997 article, which was already mourning the end of the multimedia boom. Well, I’m from the future, Ralph… and no, we really don’t.)

It seems to me that the discipline of game design has often suffered from the same fallacy that dogs writing: the assumption that, because virtually everyone can design a game on some literal level, the gulf between bad and good design is easily bridged, with no special skills or experience required. Most of the products of Inscape and their direct competitors serve as cogent examples of where that fallacy — and its associated disinterest in the process that leads to compelling interactivity, from the concept to the testing phase — can lead you.

In the case of Inscape, however, there is one blessed exception to the rule of trendy multimedia mediocrity. And it’s to that exception, which is known as The Dark Eye, that I’d like to devote the rest of this article.


The Dark Eye was Inscape’s very first game, released in late 1995. It’s an interactive exploration of the macabre world of Edgar Allan Poe — not a particularly easy thing to pull off, which explains why games that use Poe’s writings as a direct inspiration are so rare. When we do encounter traces of him in games, it’s generally through the filter of H.P. Lovecraft, the longstanding poet laureate of ludic horror, who himself acknowledged Poe as his most important literary influence. But Poe, whose short, generally unhappy life ended in 1849, was a vastly better, subtler writer than his twentieth-century disciple, with both a more variegated and empathetic emotional range and an ear for language that utterly eluded Lovecraft. While Poe can occasionally lapse into Lovecraftian turgidity in prose, his poetry is almost uniformly magnificent; works like “The Bells” and “Annabel Lee” positively swing with a musical rhythm that belies his popular reputation as a parched, unremittingly dour soul. Like so much of the best writing, they beg to be read aloud.


The problem with adapting Poe’s stories into a computer game — or into a movie, for that matter — is that their action, such as it is, is so internal. Their narrators, who are generally mentally disturbed if not outright insane and therefore thoroughly unreliable, are always their most fascinating characters. Their stories are constructed as epistles to us the readers; we learn of their protagonists not through dialog or their actions in the physical world, but through the words they write directly to us, explaining themselves to us. Without this dimension, the stories would be fairly banal tales of misfortune and mayhem, pulp rather than fine literature.

Bringing the spirit of Edgar Allan Poe to life on the computer, then, requires getting beyond the realm of the literal in which most digital games exist. It requires an affinity for subtlety and symbolism, and a fearless willingness to deploy them in a medium not terribly known for such things. Fortunately, Michael Nash had a person with just such qualities to hand, in the form of one Russell Lees.

In 1994, Lees was an electrical engineer and aspiring playwright who had little interest in or experience with computer games. But then Nash, a “friend of a friend,” happened to show him Freak Show. He found it endlessly intriguing, and was in fact so enthusiastic that Nash suggested he send him a list of possible projects he might like to make for this new venture called Inscape. One of the suggestions Lees came up with was, he remembers, “dropping into the tales of Poe.” Only after Nash gave the Poe project the green light and Lees found himself suddenly thrust into the unlikely role of game designer did the difficulties inherent in such an endeavor dawn on him: “What have I done? Dropping into the tales of Poe? What does that mean? It’s a completely nonsensical sentence!”

Lees and Inscape eventually decided to present three Poe stories in an interactive format, along with an original tale in his spirit that would serve as a jumping-off and landing place for the player’s explorations of the master’s works. Two of the trio, “The Tell-tale Heart” and “The Cask of Amontillado,” are among Poe’s most famous works of all, the stuff of English-language high-school curricula since time immemorial; the other, “Berenice,” is less commonly read, but is, if you ask me, the most disturbing of the lot. All are intimate tales of psychological obsession and, in two cases, murder. (“Berenice” settles for necrophilia in its stead…)

The game begins with you knocking on the door of your uncle’s house. Once inside, your casual family visit takes on a more serious dimension, when you become the reluctant go-between in a love affair between your beautiful young cousin and your brother — a love affair of which your uncle most definitely does not approve. (The relationship is a presumably deliberate echo of Poe’s courtship and marriage to his own thirteen-year-old cousin Virginia Clemm, whose long, slow death from tuberculosis became the defining event of his life, the catalyst for his final descent into alcoholism, despair, and at last the sweet release of death.) As this frame story plays out, you’re periodically plunged into nightmares and hallucinations in which you enact Poe’s tales. In fact, you enact each of them twice: once in the role of the aggressor, once in that of the victim.

Through it all, The Dark Eye shows the unmistakable influence of the adventure games that other studios were making at the time. The creepily expressive human hand it uses for a mouse cursor, for example, is blatantly stolen from The 7th Guest. But the more pervasive model is Myst: The Dark Eye’s node-based navigation through contiguous environments, first-person viewpoint, and minimalist, inventory-less interface are obvious legacies of that game. The technologies behind it as well are the same as Myst’s: a middleware presentation engine (Macromedia Director in this case), 3D modelers, QuickTime movie clips, all far removed from the heavily optimized bare-metal code which powered games like DOOM (and thus one more reason for fans and programmers of games like that one to hold this one in contempt).

Likewise, all four of the stories that make up The Dark Eye engage in a style of environmental storytelling — or, perhaps better said, backstory-revealing — that will on one level be familiar to players of Myst and its many heirs. And yet it serves a markedly different agenda here. The character you played in Myst was you or whomever else you chose to imagine her to be, a blank slate wandering an alternate multiverse. Not so in The Dark Eye. Lees:

I think coming from a theater background influenced how I thought about it. In my head, “dropping into the tales of Poe” is only interesting if you drop into a character: if you drop into some character’s head. We’re asking the player to not play themself. In many games, the whole idea is that the player gets to be themself, with all kinds of freedom. If you’re playing Grand Theft Auto, you’re you, but a different version of you who can steal cars.

We weren’t interested in that at all. What we were interested in was… you drop into a character, and basically you’re an actor trying to play that character. What does that mean? If you’re a real actor playing the narrator in “The Tell-tale Heart,” for example, you would read through [the script], come up with some backstory for the character, try to flesh the character out so that every line in the performance resonates with a life lived. As the player, you’re not going to get that. So, how do we make up for that in an interactive situation? The way we solved it — and I feel like we did solve it, in fact — was this:

We tried to map that psychological investigation that an actor would bring to a part onto spatial investigation. You’re exploring a space where certain objects have importance to you. It’s not just, I pick up a letter and learn about my character [by reading it]. It’s, I pick up an object that’s important to my character and I hear my character thinking about it, or that object triggers a movie where I see something from my character’s past, or maybe it just plays a little bit of music. So, all these objects are imbued with something from your past. We were trying to “trick” the player into doing a psychological investigation of the part they were playing.

The Dark Eye is interested in enriching your experience of the stories of Edgar Allan Poe, not in giving you a way of changing them; you can’t choose not to plunge the knife into the old man who is murdered in “The Tell-tale Heart.” But you can inhabit the story and the characters in a way interestingly different from, if not necessarily superior to, the way you can understand them through the pages of a book. The best compliment I can give to Russell Lees is that the framing story and the three Poe narratives from the perspective of the victims feel thoroughly of a piece with the three more familiar stories and perspectives. It’s no trivial feat to expand upon the work of a literary master so seamlessly.


The Dark Eye employs many tricks to evoke Edgar Allan Poe’s Gothic nineteenth-century world. As you uncover more story segments, for example, you can return to them from this screen. It’s based upon the pseudo-science of phrenology, of which Poe, like many of his peers, was a great devotee. (“The forehead is broad, with prominent organs of ideality,” he wrote in a typical reference to it, in an 1846 character sketch of his fellow poet William Cullen Bryant.)



Like so many of gaming’s more esoteric art projects, The Dark Eye is a polarizing creation. Some people love it, while others greet it with a veritable rage that seems entirely out of proportion to such a humble relic of a bygone age. It rams smack into one of the fundamental tensions that have dogged adventure games as long as they have existed. Ought you to be playing yourself in these games, or is it acceptable to be asked to play the role of someone else, perhaps even someone you would never wish to be in real life? The question was first thrashed over in the gaming press in 1983, when Infocom released Infidel, a text adventure whose fleshed-out protagonist was almost as unpleasant as a Poe narrator. It has continued to raise its head from time to time ever since.

But there’s even more to the polarization than that. It seems to me that The Dark Eye divides the waters so because, although it bears many of the surface trappings of a traditional adventure game, its goals are ultimately different. While a game like Myst is built around its puzzles, The Dark Eye has quite literally no puzzles at all. In fact, admits Russell Lees, freely acknowledging the worst of the criticism leveled against it,  it has “no gameplay beyond exploration.” You don’t “beat” The Dark Eye, in other words; you explore it. More specifically, you explore its characters’ interior spaces. Watching many gamers engage with it is akin to watching fans of genre fiction confronted with a literary novel, except that here “where’s the puzzles?” stands in for “where’s the plot?” This is not to say that those who appreciate The Dark Eye are better, more refined souls than those who find it aimless and tedious, any more than those who enjoy John Steinbeck are superior to readers of John Grisham. It’s just to say that clashes of expectation can be difficult things to overcome. “We need some new words for works that are interactive but aren’t so much games,” says Lees — a noble if hopeless proposition.

We can see these things play out in the reaction to The Dark Eye from the gaming press after its release. Most reviewers just didn’t know what to do with it. The always articulate Charles Ardai of Computer Gaming World reacted somewhat typically:

As with many of the new “exploration” adventure games, the environment reeks of emptiness, especially at first. But it’s worse here than in most: not only are there too many empty rooms, but you aren’t asked to solve puzzles of any sort, not even the lame brainteasers most games use as filler. Making matters worse, there are hallways you see that, for no apparent reason, the computer doesn’t let you go down; doors the game doesn’t let you open; and characters the game doesn’t let you click on. Even the few objects you run across — a meat cleaver, a paper knife — the game doesn’t let you take.

But, because he is a thoughtful if not infallible critic, Ardai must also acknowledge The Dark Eye to be “a singular, disturbing vision equal to the task of rendering Poe’s nightmare worlds.” He even calls it “brave.”

Instead of puzzles, The Dark Eye gives you atmosphere — all the atmosphere you can inhale, enough atmosphere to send you running to a less pressurized room of your house after spending a while in its company. You witness no actual violence on the screen; the camera always cuts away at the pivotal moment. Yet the game is thoroughly unnerving, more psychologically oppressive than a thousand everyday videogame zombies; this game will creep you the hell out. It’s in the vacant eyes of the stop-motion-animated digitized puppets that are used to represent the other characters; in the way that the soundtrack, provided by associates of avant-rock musician Thomas Dolby, suddenly swells with nerve-jangling ferocity and then fades into silence again just as quickly; in knowing what awaits you as perpetrator or victim in each of the stories, and being unable to stop it.

The crowning touch is the voice of the legendary Beat author William S. Burroughs, a rare instance of stunt casting that worked out perfectly. Michael Nash, who seemed never to have heard of an edgy cultural icon whose involvement in one of his multimedia projects he didn’t want to trumpet in his advertising, sought out and cast Burroughs for the game without Lees even being aware he was attempting to do so. But Lees was very, very happy when he was informed of it. Burroughs plays the part of your crotchety uncle in the game, and also provides two non-interactive Edgar Allan Poe recitals for you to stumble across: of the poem “Annabel Lee,” which you can hear earlier in this article, and of the story “The Masque of the Red Death.” One anecdote which Lees has shared about the three days he spent directing Burroughs’s performances in the author’s Lawrence, Kansas, home is too delicious not to include here.

He liked starting off the day by toking up. We’re in the [sound] booth and he’s lighting up his marijuana and he says, “Do you want a drag?” And I say, “You know, Inscape’s spending a lot of money to send me out here. I think I have to stay on the ball. You go ahead.”

So, he’d start off by getting a little bit high, and that would loosen him up. Then in the afternoon he liked to drink vodka and Sprite. He would start around 3 PM, and things would get a little mushy, but it also brought some interesting performances out.

I have to admit that on the very last day when we were finishing up, he lit up a joint, and I did share it with Bill.

Within two years of these events, the confluence of cultural forces that could produce such an anecdote would be ancient history. Russell Lees was about halfway through the production of a game based on the Tales from the Crypt comic books and television series when Michael Nash sold Inscape to Graphix Zone, a Voyager-like publisher of multimedia CD-ROMs that was scrambling to reinvent itself as a games publisher in a changing world. The attempt wasn’t successful: the conjoined entity, which was known as Ignite Games, disappeared by the end of 1997. Nash went on to a high-profile career as a music executive, and was instrumental in convincing the hidebound powers that were in that industry to reluctantly embrace streaming rather than attempting to sue it out of existence in the post-Napster era. Russell Lees continued to bounce among the worlds of theater, home video, and games for many years, until finding a stable home at last as a staff writer for Ubisoft’s Assassin’s Creed franchise in 2011.

As the fate of the company that developed and published it would indicate, The Dark Eye wasn’t an overly big seller in its day. Yet it’s still remembered fondly in some circles today — and deservedly so. It solves one of the basic paradoxes of licensed works by not attempting to replace the stories on which it’s based, but rather to complement them. If you haven’t read them before playing it — or if you haven’t done so since your school days — you might find yourself wanting to when you’re done. And if you have read them recently, the new perspectives on them which the game opens up might just unnerve you all over again. Then again, you might merely be bored by it all. And that’s okay too; not all art is for everyone.

(Sources: in addition to the Edgar Allan Poe collection that belongs in every real or virtual library — the Penguin one is excellent — the book DVD and the Study of Film: The Attainable Text by Mark Parker and Deborah Parker; Computer Gaming World of April 1996 and May 1996; Electronic Entertainment of August 1995; MacAddict of December 1996; Next Generation of August 1997; Wired of March 1995; Los Angeles Times of July 12 1994 and February 28 1997; American Literature of November 1930. Online sources include “What Happened to Multimedia?” by Ralph Lombreglia in Atlantic Unbound and an accompanying interview with Michael Nash, Emily Rose’s podcast interview with Russell Lees, and Lees’s own website.

The Dark Eye isn’t available for sale, but the CD image can be downloaded from The Macintosh Garden; note that you’ll need StuffIt to decompress it. Unfortunately, it’s a Windows 3.1 application, which means it’s somewhat complicated to get running on modern hardware. But you can do it with a bit of time and patience: Egee has written a very good tutorial on getting Windows 3.1 set up in DOSBox, and you can find the vintage software you’ll need on WinWorld. Another option is to run it on a real or emulated classic Macintosh, as the CD-ROM is a hybrid disc for both Windows and Mac computers. See my article on ten standout Voyager discs for some advice on doing this.)

 
 


I Have No Mouth, and I Must Scream

To the person who [is] contemplating buying this game, what would I say? I would say take your money and give it to the homeless, you’ll do more good. But if you are mad to buy this game, you’ll probably have a hell of a lot of fun playing it, it will probably make you uneasy, and you’ll probably be a smarter person when you’re done playing the game. Not because I’m smarter, but because everything was done to confuse and upset you. I am told by people that it is a game unlike any other game around at the moment and I guess that’s a good thing. Innovation and novelty is a good thing. It would be my delight if this game set a trend and all of the arcade bang-bang games that turn kids into pistol-packing papas and mamas were subsumed into games like this in which ethical considerations and using your brain and unraveling puzzles become the modus operandi. I don’t think it will happen. I don’t think you like to be diverted too much. So I’m actually out here to mess with you, if you want to know it. We created this game to give you all the stuff you think you want, but to put a burr into your side at the same time. To slip a little loco weed into your Coca-Cola. See you around.

— Harlan Ellison

Harlan Ellison made a very successful career out of biting the hands that fed him. The pint-sized dervish burst into literary prominence in the mid-1960s, marching at the vanguard of science fiction’s New Wave. In the pages of Frederik Pohl’s magazine If, he paraded a series of scintillatingly trippy short stories that were like nothing anyone had ever seen before, owing as much to James Joyce and Jack Kerouac as they did to Isaac Asimov and Robert Heinlein. Ellison demanded, both implicitly in his stories and explicitly in his interviews, that science fiction cast off its fetish for shiny technology-fueled utopias and address the semi-mythical Future in a more humanistic, skeptical way. His own prognostications in that vein were almost unrelentingly grim: “‘Repent, Harlequin!’ Said the Ticktockman” dealt with a future society where everyone was enslaved to the ticking of the government’s official clock; “I Have No Mouth, and I Must Scream” told of the last five humans left on a post-apocalyptic Earth, kept alive by an insane artificial intelligence so that it could torture them for all eternity; “A Boy and His Dog” told of a dog who was smarter than his feral, amoral human master, and helped him to find food to eat and women to rape as they roamed another post-apocalyptic landscape. To further abet his agenda of dragging science fiction kicking and screaming into the fearless realm of True Literature, Ellison became the editor of a 1967 anthology called Dangerous Visions, for which he begged a diverse group of established and up-and-coming science-fiction writers to pick a story idea that had crossed their mind but was so controversial and/or provocative that they had never dared send it to a magazine editor — and then to write it up and send it to him instead.

Ellison’s most impactful period in science fiction was relatively short-lived, ending with the publication of the somewhat underwhelming Again, Dangerous Visions in 1972. He obstinately refused to follow the expected career path of a writer in his position: that of writing a big, glossy novel to capitalize on the cachet his short stories had generated. Meanwhile even his output of new stories slowed in favor of more and more non-fiction essays, while those stories that did emerge lacked some of the old vim and vinegar. One cause of this was almost certainly his loss of Frederik Pohl as editor and bête noire. Possessing very different literary sensibilities, the two had locked horns ferociously over the most picayune details — Pohl called Ellison “as much pain and trouble as all the next ten troublesome writers combined” — but Pohl had unquestionably made Ellison’s early stories better. He was arguably the last person who was ever truly able to edit Harlan Ellison.

No matter. Harlan Ellison’s greatest creation of all was the persona of Harlan Ellison, a role he continued to play very well indeed right up until his death in 2018. “He is a test of our credulity,” wrote his fellow science-fiction writer David Gerrold in 1984. “He is too improbable to be real.”

Harlan Ellison on the set of Star Trek with Leonard Nimoy and William Shatner.

The point of origin of Harlan Ellison as science fiction’s very own enfant terrible can be traced back to the episode of Star Trek he wrote in 1966. “The City on the Edge of Forever” is often called the best single episode of the entire original series, but to Ellison it was and forever remained an abomination in its broadcast form. As you may remember, it’s a time-travel story, in which Kirk, Spock, and McCoy are cast back into the Great Depression on Earth, where Kirk falls in love with a beautiful social worker and peace activist, only to learn that he has to let her die in a traffic accident in order to prevent her pacifism from infecting the body politic to such an extent that the Nazis are able to win World War II. As good as the produced version of the episode is, Ellison insisted until his death that the undoctored script he first submitted was far, far better — and it must be acknowledged that at least some of the people who worked on Star Trek agreed with him. In a contemporaneous memo, producer Bob Justman lamented that, following several rounds of editing and rewriting, “there is hardly anything left of the beauty and mystery that was inherent in the screenplay as Harlan originally wrote it.” For his part, Ellison blamed Star Trek creator Gene Roddenberry loudly and repeatedly for “taking a chainsaw” to his script. In a fit of pique, he submitted his undoctored script for a 1967 Writers Guild Award. When it won, he literally danced on the table in front of Roddenberry inside the banquet hall, waving his trophy in his face. Dorothy Fontana, the writer who had been assigned the unenviable task of changing Ellison’s script to fit with the series’s budget and its established characters, was so cowed by his antics that for 30 years she dared not tell him she had done so.

Despite this incident and many another, lower-profile one much like it, Ellison continued to work in Hollywood — as, indeed, he had been doing even before his star rose in literary science-fiction circles. Money, he forthrightly acknowledged, was his principal reason for writing for a medium he claimed to loathe. He liked creating series pilots most of all, he said, “because when they screw those up, they just don’t go on the air. I get paid and I’ve written something nice and it doesn’t have to get ruined.” His boorish behavior in meetings with the top movers and shakers of Hollywood became legendary, as did the lawsuits he fired hither and yon whenever he felt ill-used. Why did Hollywood put up with it? One answer is that Harlan Ellison was at the end of the day a talented writer who could deliver the goods when it counted, who wasn’t unaware of the tastes and desires of the very same viewing public he heaped with scorn at every opportunity. The other is that his perpetual cantankerousness made him a character, and no place loves a character more than Hollywood.

Then again, one could say the same of science-fiction fandom. Countless fans who had read few to none of Ellison’s actual stories grew up knowing him as their genre’s curmudgeonly uncle with the razor wit and the taste for blood. For them, Harlan Ellison was famous simply for being Harlan Ellison. Any lecture or interview he gave was bound to be highly entertaining. An encounter with Ellison became a rite of passage for science-fiction journalists and critics, who gingerly sidled up to him, fed him a line, and then ducked for cover while he went off at colorful and profane length.

Harlan Ellison was a talk-show regular during the 1970s. And small wonder: drop a topic in his slot, and something funny, outrageous, or profound — or all three — was guaranteed to come out.

It’s hard to say how much of Ellison’s rage against the world was genuine and how much was shtick. He frequently revealed in interviews that he was very conscious of his reputation, and hinted at times that he felt a certain pressure to maintain it. And, in keeping with many public figures with outrageous public personas, Ellison’s friends did speak of a warmer side to his private personality, of a man who, once he brought you into his fold, would go to ridiculous lengths to support, protect, and help you.

Still, the flame that burned in Ellison was probably more real than otherwise. He was at bottom a moralist, who loathed the hypocrisy and parsimony he saw all around him. Often described as a futurist, he was closer to a reactionary. Nowhere could one see this more plainly than in his relationship to technology. In 1985, when the personal-computer revolution had become almost old hat, he was still writing on a mechanical typewriter, using reasoning that sounded downright Amish.

The presence of technology does not mean you have to use that technology. Understand? The typewriter that I have — I use an Olympia and I have six of them — is the best typewriter ever made. That’s the level of technology that allows me to do my job best. Electric typewriters and word processors — which are vile in every respect — seem to me to be crutches for bad writing. I have never yet heard an argument for using a word processor that didn’t boil down to “It’s more convenient.” Convenient means lazy to me. Lazy means I can write all the shit I want and bash it out later. They can move it around, rewrite it later. What do I say? Have it right in your head before you sit down, that’s what art is all about. Art is form, art is shape, art is pace, it is measure, it is the sound of music. Don’t write slop and discordancy and think just because you have the technology to cover up your slovenliness that it makes you a better writer. It doesn’t.

Ellison’s attitude toward computers in general was no more nuanced. Asked what he thought about computer entertainment in 1987, he pronounced the phrase “an oxymoron.” Thus it came as quite a surprise to everyone five years later when it was announced that Harlan Ellison had agreed to collaborate on a computer game.



The source of the announcement was a Southern California publisher and developer called Cyberdreams, which had been founded by Pat Ketchum and Rolf Klug in 1990. Ketchum was a grizzled veteran of the home-computer wars, having entered the market with the founding of his first software publisher DataSoft on June 12, 1980. After a couple of years of spinning their wheels, DataSoft found traction when they released a product called Text Wizard, for a time the most popular word processor for Atari’s 8-bit home-computer line. (Its teenage programmer had started on the path to making it when he began experimenting with ways to subtly expand margins and increase line spacings in order to make his two-page school papers look like three…)

Once established, DataSoft moved heavily into games. Ketchum decided early on that working with pre-existing properties was the best way to ensure success. Thus DataSoft’s heyday, which lasted from roughly 1983 to 1987, was marked by a bewildering array of licensed properties: television shows (The Dallas Quest), martial-arts personalities (Bruce Lee), Sunday-comics characters (Heathcliff: Fun with Spelling), blockbuster movies (Conan, The Goonies), pulp fiction (Zorro), and even board games (221 B Baker St.), as well as a bevy of arcade ports and British imports. The quality level of this smorgasbord was hit or miss at best, but Ketchum’s commercial instinct for the derivative proved well-founded for almost half a decade. Only later in the 1980s, when more advanced computers began to replace the simple 8-bit machines that had been the perfect hosts for DataSoft’s cheap and cheerful games, did his somewhat lackadaisical attitude toward the nuts and bolts of his products catch up to him. He then left DataSoft to work for a time at Sullivan Bluth Interactive Media, which made ports of the old laser-disc arcade game Dragon’s Lair for various personal-computing platforms. Then, at the dawn of the new decade, he founded another company of his own with his new partner Rolf Klug.

The new company’s product strategy was conceived as an intriguing twist on that of the last one he had founded. Like DataSoft, Cyberdreams would rely heavily on licensed properties and personalities. But instead of embracing DataSoft’s random grab bag of junk-food culture, Cyberdreams would go decidedly upmarket, a move that was very much in keeping with the most rarefied cultural expectations for the new era of multimedia computing. Their first released product, which arrived in 1992, was called Dark Seed; it was an adventure game built around the striking and creepy techno-organic imagery of the Swiss artist H.R. Giger, best known for designing the eponymous creatures in the 1979 Ridley Scott film Alien. If calling Dark Seed a “collaboration” with Giger is perhaps stretching the point — although Giger licensed his existing paintings to Cyberdreams, he contributed no new art to the game — the end result certainly does capture his fetishistic aesthetic very, very well. Alas, it succeeds less well as a playable game. It runs in real time, meaning events can and will run away from any player who isn’t omniscient enough to be in the exact right spot at the exact right time, while its plot is most kindly described as rudimentary — and don’t even get me started on the pixel hunts. Suffice to say that few games in history have screamed “style over substance” louder than this one. Still, in an age hungry for fodder for the latest graphics cards and equally eager for proof that computer games could be as provocative as any other form of media, it did quite well.

By the time of Dark Seed’s release, Cyberdreams was already working on another game built around the aesthetic of another edgy artist most famous for his contributions to a Ridley Scott film: Syd Mead, who had done the set designs for Blade Runner, along with those of such other iconic science-fiction films as Star Trek: The Motion Picture, TRON, 2010, and the Alien sequel Aliens. CyberRace, the 1993 racing game that resulted from the partnership, was, like its Cyberdreams predecessor, long on visuals and short on satisfying gameplay.

Well before that game was completed — in fact, before even Dark Seed was released — Pat Ketchum had already approached Harlan Ellison to ask whether he could make a game out of his classic short story “I Have No Mouth, and I Must Scream.” Doing so was, if nothing else, an act of considerable bravery, given not only Ellison’s general reputation but his specific opinion of videogames as “an utter and absolute stupid waste of time.” And yet, likely as much to Ketchum’s astonishment as anyone else’s, he actually agreed to the project. Why? That is best left to Ellison to explain in his own inimitable fashion:

The question frequently asked of me is this: “Since it is common knowledge that you don’t even own a computer on which you could play an electronic game this complex, since it is common knowledge that you hate computers and frequently revile those who spend their nights logging onto bulletin boards, thereby filling the air with pointless gibberish, dumb questions that could’ve been answered had they bothered to read a book of modern history or even this morning’s newspaper, and mean-spirited gossip that needs endless hours the following day to be cleaned up; and since it is common knowledge that not only do you type your books and columns and TV and film scripts on a manual typewriter (not even an electric, but an actual finger-driven manual), but that the closest you’ve ever come to playing an actual computer- or videogame is the three hours you wasted during a Virgin Airlines flight back to the States from the UK; where the hell do you get off creating a high-tech cutting-edge enigma like this I Have No Mouth thing?”

To which my usual response would be, “Yo’ Mama!”

But I have been asked to attempt politeness, so I will vouchsafe courtesy and venture some tiny explication of what the eff I’m doing in here with all you weird gazoonies. Take your feet off the table.

Well, it goes back to that Oscar Wilde quote about perversion: “You may engage in a specific perversion once, and it can be chalked up to curiosity. But if you do it again, it must be presumed you are a pervert.”

They came to me in the dead of night, human toads in silk suits, from this giant megapolitan organization called Cyberdreams, and they offered me vast sums of money — all of it in pennies, with strings attached to each coin, so they could yank them back in a moment, like someone trying to outsmart a soft-drink machine with a slug on a wire — and they said, in their whispery croaky demon voices, “Let us make you a vast fortune! Just sell us the rights to use your name and the name of your most famous story, and we will make you wealthy beyond the dreams of mere mortals, or even Aaron Spelling, our toad brother in riches.”

Well, I’d once worked for Aaron Spelling on Burke’s Law, and that had about as much appeal to me as spending an evening discussing the relative merits of butcher knives with O.J. Simpson. So I told the toads that money was something I had no trouble making, that money is what they give you when you do your job well, and that I never do anything if it’s only for money. ‘Cause money ain’t no thang.

Well, for the third time, they then proceeded to do the dance, and sing the song, and hump the drums, and finally got down to it with the fuzzy ramadoola that can snare me: they said, “Well (#4), you’ve never done this sort of thing. Maybe it is that you are not capable of doing this here now thing.”

Never tell me not to go get a tall ladder and climb it and open the tippy-topmost kitchen cabinet in my mommy’s larder and reach around back there at the rear of the topmost shelf in the dark with the cobwebs and the spider-goojies and pull out that Mason jar full of hard nasty petrified chickpeas and strain and sweat to get the top off the jar till I get it open and then take several of those chickpeas and shove them up my nose. Never tell me that. Because as sure as birds gotta swim an’ fish gotta fly, when you come back home, you will find me lying stretched out blue as a Duke Ellington sonata, dead cold with beans or peas or lentils up my snout.

Or, as Oscar Wilde put it: “I couldn’t help it. I can resist anything except temptation.”

And there it is. I wish it were darker and more ominous than that, but the scaldingly dopey truth is that I wanted to see if I could do it. Create a computer game better than anyone else had created a computer game. I’d never done it, and I was desirous of testing my mettle. It’s a great flaw with me. My only flaw, as those who have known me longest will casually attest. (I know where they live.)

Having entered the meeting hoping only to secure the rights to Ellison’s short story, Pat Ketchum thus walked away having agreed to a full-fledged collaboration with the most choleric science-fiction writer in the world, a man destined to persist forevermore in referring to him simply as “the toad.” Whether this was a good or a bad outcome was very much up for debate.

Ketchum elected to pair Ellison with David Sears, a journalist and assistant editor for Compute! magazine who had made Cyberdreams’s acquaintance when he was assigned to write a preview of Dark Seed, then had gone on to write the hint book for the game. Before the deal was consummated, he had been told only that Cyberdreams hoped to adapt “one of” Ellison’s stories into a game: “I was thinking, oh, it could be ‘“Repent, Harlequin!” Said the Ticktockman,’ or maybe ‘A Boy and His Dog,’ and it’s going to be some kind of RPG or something.” When he was told that it was to be “I Have No Mouth, and I Must Scream,” he was taken aback: “I was like, what? There’s no way [to] turn that into a game!” In order to fully appreciate his dismay, we should look a bit more closely at the story in question.

Harlan Ellison often called “No Mouth” “one of the ten most-reprinted stories in the English language,” but this claim strikes me as extremely dubious. Certainly, however, it is one of the more frequently anthologized science-fiction classics. Written “in one blue-white fit of passion,” as Ellison put it, “like Captain Nemo sitting down at his organ and [playing] Toccata and Fugue in D Minor,” it spans no more than fifteen pages or so in the typical paperback edition, but manages to pack quite a punch into that space.

The backstory entails a three-way world war involving the United States, the Soviet Union, and China and their respective allies, with the forces of each bloc controlled by a supercomputer in the name of maximal killing efficiency. That last proved to be a mistake: instead of merely moving ships and armies around, the American computer evolved into a sentient consciousness and merged with its rival machines. The resulting personality was twisted by its birthright of war and violence. Thus it committed genocide on the blighted planet’s remaining humans, with the exception of just five of them, which it kept alive to physically and psychologically torture for its pleasure.  As the story proper opens, it’s been doing so for more than a century. Our highly unreliable narrator is one of the victims, a paranoid schizophrenic named Ted; the others, whom we meet only as the sketchiest of character sketches, are named Gorrister, Benny, Ellen (the lone woman in the group), and Nimdok. The computer calls itself AM, an acronym for its old designation of “Allied Mastercomputer,” but also a riff on Descartes: “I think, therefore I AM.”

The story’s plot, such as it is, revolves around the perpetually starving prisoners’ journey to a place that AM has promised them contains food beyond their wildest dreams. It’s just one more of his cruel jokes, of course: they wind up in a frigid cavern piled high with canned food, without benefit of a can opener. But then something occurs which AM has failed to anticipate: Ted and Ellen finally accept that there is only one true means of escape open to them. They break off the sharpest stalactites they can find and use them to kill the other three prisoners, after which Ted kills Ellen. But AM manages to intervene before Ted can kill himself. Enraged at having his playthings snatched away, AM condemns the very last human on Earth to a fate even more horrific than anything Ted has already endured:

I am a great soft jelly thing. Smoothly rounded, with no mouth, with pulsing white holes filled by fog where my eyes used to be. Rubbery appendages that were once my arms; bulks rounding down into legless humps of slippery matter. I leave a moist trail when I move. Blotches of diseased, evil gray come and go on my surface, as though light is being beamed from within.

Outwardly: dumbly, I shamble about, a thing that could never have been known as human, a thing whose shape is so alien a travesty that humanity becomes more obscene for the vague resemblance.

Inwardly: alone. Here. Living under the land, under the sea, in the belly of AM, whom we created because our time was badly spent and we must have known unconsciously that he could do it better. At least the four of them are safe at last.

AM will be the madder for that. It makes me a little happier. And yet… AM has won, simply… he has taken his revenge…

I have no mouth. And I must scream.

Harlan Ellison was initially insistent that the game version of No Mouth preserve this miserably bleak ending. He declared himself greatly amused by the prospect of “a game that you cannot possibly win.” Less superciliously, he noted that the short story was intended to be, like so much of his work, a moral fable: it was about the nobility of doing the right thing, even when one doesn’t personally benefit — indeed, even when one will be punished terribly for it. To change the story’s ending would be to cut the heart out of its message.

Thus when poor young David Sears went to meet with Ellison for the first time — although Cyberdreams and Ellison were both based in Southern California, Sears himself was still working remotely from his native Mississippi — he faced the daunting prospect of convincing one of the most infamously stubborn writers in the world — a man who had spent decades belittling no less rarefied a character than Gene Roddenberry over the changes to his “City on the Edge of Forever” script — that such an ending just wouldn’t fly in the contemporary games market. The last company to make an adventure game with a “tragic” ending had been Infocom back in 1983, and they’d gotten so much blowback that no one had ever dared to try such a thing again. People demanded games that they could win.

Much to Sears’s own surprise, his first meeting with Ellison went very, very well. He won Ellison’s respect almost immediately, when he asked a question that the author claimed never to have been asked before: “Why are these [people] the five that AM has saved?” The question pointed a way for the game of No Mouth to become something distinctly different from the story — something richer, deeper, and even, I would argue, more philosophically mature.

Ellison and Sears decided together that each of AM’s victims had been crippled inside by some trauma before the final apocalyptic war began, and it was this that made them such particularly delightful playthings. The salt-of-the-earth truck driver Gorrister was wracked with guilt for having committed his wife to a mental institution; the hard-driving military man Benny was filled with self-loathing over his abandonment of his comrades in an Asian jungle; the genius computer scientist Ellen was forever reliving a brutal rape she had suffered at the hands of a coworker; the charming man of leisure Ted was in reality a con artist who had substituted sexual conquest for intimacy. The character with by far the most stains on his conscience was the elderly Nimdok, who had served as an assistant to Dr. Josef Mengele in the concentration camps of Nazi Germany.

You the player would guide each of the five through a surreal, symbolic simulacrum of his or her checkered past, helpfully provided by AM. While the latter’s goal was merely to torture them, your goal would be to cause them to redeem themselves in some small measure, by looking the demons of their past full in the face and making the hard, selfless choices they had failed to make the first time around. If they all succeeded in passing their tests of character, Ellison grudgingly agreed, the game could culminate in a relatively happy ending. Ellison:

This game [says] to the player there is more to the considered life than action. Television tells you any problem can be solved in 30 minutes, usually with a punch in the jaw, and that is not the way life is. The only thing you have to hang onto is not your muscles, or how pretty your face is, but how strong is your ethical behavior. How willing are you to risk everything — not just what’s convenient, but everything — to triumph. If someone comes away from this game saying to himself, “I had to make an extremely unpleasant choice, and I knew I was not going to benefit from that choice, but it was the only thing to do because it was the proper behavior,” then they will have played the game to some advantage.

Harlan Ellison and David Sears were now getting along fabulously. After several weeks spent working on a design document together, Ellison pronounced Sears “a brilliant young kid.” He went out of his way to be a good host. When he learned, for example, that Sears was greatly enamored with Neil Gaiman’s Sandman graphic novels, he called up said writer himself on his speakerphone: “Hi, Neil. This is David. He’s a fan and he’d love to talk to you about your work.” In retrospect, Ellison’s hospitality is perhaps less than shocking. He was in fact helpful and even kind throughout his life to young writers whom he deemed to be worth his trouble. David Sears was obviously one of these. “I don’t want to damage his reputation because I’m sure he spent decades building it up,” says Sears, “but he’s a real rascal with a heart of gold — but he doesn’t tolerate idiots.”

Harlan Ellison prepares to speak at the 1993 Game Developers Conference.

The project had its industry coming-out party at the seventh annual Computer Game Developers Conference in May of 1993. In a measure of how genuinely excited Harlan Ellison was about it, he agreed to appear as one of the most unlikely keynote speakers in GDC history. His speech has not, alas, been preserved for posterity, but it appears to have been a typically pyrotechnic Ellison rant, judging by the angry response of Computer Gaming World editor Johnny L. Wilson, who took Ellison to be just the latest in a long line of clueless celebrity pundits swooping in to tell game makers what they were doing wrong. Like all of the others, Wilson said, Ellison “didn’t really understand technology or the challenges faced daily by his audience [of game developers].” His column, which bore the snarky title of “I Have No Message, but I Must Scream,” went on thusly:

The major thesis of the address seemed to be that the assembled game designers need to do something besides create games. We aren’t quite sure what he means.

If he means to take the games which the assembled designers are already making and infuse them with enough human emotion to bridge the gaps of interpersonal understanding, there are designers trying to accomplish this in many different ways (games with artificial personalities, multiplayer cooperation, and, most importantly, with story).

If he objects to the violence which is so pervasive in both computer and video games, he had best revisit the anarchic and glorious celebration of violence in his own work. Violence is an easy way to express conflict and resolution in any art form. It can also be powerful. That is why we advocate a more careful use of violence in certain games, but do not editorialize against violence per se.

Harlan Ellison says that the computer-game design community should quit playing games with their lives. We think Ellison should stop playing games with his audiences. It’s time to put away his “Bad Melville” impression and use his podium as a “futurist” to challenge his audiences instead of settling for cheap laughs and letting them miss the message.

Harlan Ellison seldom overlooked a slight, whether in print or in person, and this occasion was no exception. He gave Computer Gaming World the rather hilarious new moniker of Video Wahoo Magazine in a number of interviews after Wilson’s editorializing was brought to his attention.

But the other side of Harlan Ellison was also on display at that very same conference. David Sears had told Ellison shortly before he made his speech that he really, really wanted a permanent job in the games industry, not just the contract work he had been getting from Cyberdreams. So, Ellison carried a fishbowl onstage with him, explained to the audience that Sears was smart and creative as heck and urgently needed a job, and told them to drop their business cards in the bowl if they thought they might be able to offer him one. “Three days later,” says Sears, “I had a job at Virgin Games. If he called me today [this interview was given before Ellison’s death] and said, ‘I need you to fix the plumbing in my bathroom,’ I’d be on a plane.”

Ellison’s largess was doubly selfless in that it stopped his No Mouth project in its tracks. With Sears having departed for Virgin Games, it spent at least six months on the shelf while Cyberdreams finished up CyberRace and embarked on a Dark Seed II. Finally Pat Ketchum handed it to a new hire, a veteran producer and designer named David Mullich.

It so happens that we met Mullich long, long ago, in the very early days of these histories. At the dawn of the 1980s, as a young programmer just out of university, he worked for the pioneering educational-software publisher Edu-Ware, whom he convinced to let him make some straight-up games as well. One of these was an unauthorized interactive take on the 1960s cult-classic television series The Prisoner; it was arguably the first commercial computer game in history to strive unabashedly toward the status of Art.

Mullich eventually left Edu-Ware to work for a variety of software developers and publishers. Rather belying his earliest experiments in game design, he built a reputation inside the industry as a steady hand well able to churn out robust and marketable if not always hugely innovative games and educational products that fit whatever license and/or design brief he was given. Yet the old impulse to make games with something to say about the world never completely left him. He was actually in the audience at the Game Developers Conference where Harlan Ellison made his keynote address; in marked contrast to Johnny L. Wilson, he found it bracing and exciting, not least because “I Have No Mouth, and I Must Scream” was his favorite short story of all time. Half a year or so later, Pat Ketchum called Mullich up to ask if he’d like to help Ellison get his game finished. He didn’t have to ask twice; after all those years spent slogging in the trenches of commerce, here was a chance for Mullich to make Art again.

His first meeting with Ellison didn’t begin well. Annoyed at the long delay from Cyberdreams’s side, Ellison mocked him as “another member of the brain trust.” It does seem that Mullich never quite developed the same warm relationship with Ellison that Sears had enjoyed: Ellison persisted in referring to him as “this new David, whose last name I’ve forgotten” even after the game was released. Nonetheless, he did soften his prejudicial first judgment enough to deem Mullich “a very nice guy.” Said nice guy took on the detail work of refining Sears and Ellison’s early design document — which, having been written by two people who had never made a game before, had some inevitable deficiencies — into a finished script that would combine Meaning with Playability, a task his background prepared him perfectly to take on. Mullich estimates that 50 percent of the dialog in the finished game is his, while 30 percent is down to Sears and just 20 percent to Ellison himself. Still, even that level of involvement was vastly greater than that of most established writers who deigned to put their names on games. And of course the core concepts of No Mouth were very much Ellison and Sears’s.

Pat Ketchum had by this point elected to remove Cyberdreams from the grunt work of game development; instead the company would act as a design mill and publisher only. Thus No Mouth was passed to an outfit called The Dreamers Guild for implementation under Mullich’s supervision. That became another long process; the computer game of I Have No Mouth, and I Must Scream wasn’t finally released until late 1995, fully three and a half years after Pat Ketchum had first visited Harlan Ellison to ask his permission to make it.

The latter’s enthusiasm for the project never abated over the course of that time. He bestowed his final gift upon David Mullich and the rest of Cyberdreams when he agreed to perform the role of AM himself. The result is one of the all-time great game voice-acting performances; Ellison, a man who loved to hear himself speak under any and all circumstances, leans into the persona of the psychopathic artificial intelligence with unhinged glee. After hearing him, you’ll never be able to imagine anyone else in the role.


Upon the game’s release, Ellison proved a disarmingly effective and professional spokesman for it; for all that he loved to rail against the stupidity of mainstream commercial media, he had decades of experience as a writer for hire, and knew the requirements of marketing. He wrote a conciliatory, generous, and self-deprecatory letter to Computer Gaming World — a.k.a., Video Wahoo Magazine — after the magazine pronounced No Mouth its Adventure Game of the Year. He even managed to remember David Mullich’s last name therein.

With a bewildering admixture of pleasure and confusion — I’m like a meson which doesn’t know which way to quark — I write to thank you and your staff. Pleasure, because everybody likes to cop the ring as this loopy caravanserie chugs on through Time and Space. Confusion, because — as we both know — I’m an absolute amateur at this exercise. To find myself not only avoiding catcalls and justified laughter at my efforts, but to be recognized with a nod of approval from a magazine that had previously chewed a neat, small hole through the front of my face… well, it’s bewildering.

David Sears and I worked very hard on I Have No Mouth. And we both get our accolades in your presentation. But someone else who had as much or more to do with bringing this project to fruition is David Mullich. He was the project supervisor and designer after David Sears moved on. He worked endlessly, and with what Balzac called “clean hands and composure,” to produce a property that would not shame either of us. It simply would not have won your award had not David Mullich mounted the barricades.

I remember when I addressed the Computer Game Designers’ banquet a couple of years ago, when I said I would work to the limits of my ability on I Have No Mouth, but that it would be my one venture into the medium. Nothing has changed. I’ve been there, done that, and now you won’t have to worry about me making a further pest of myself in your living room.

But for the honor you pay me, I am grateful. And bewildered.

Ellison’s acknowledgment of Mullich’s contribution is well-taken. Too often games that contain or purport to contain Deep Meaning believe this gives them a pass on the fundamentals of being playable and soluble. (For example, I might say, if you’ll allow me just a bit of Ellisonian snarkiness, that a large swath of the French games industry operated on this assumption for many years.) That No Mouth doesn’t fall victim to this fallacy — that it embeds its passion plays within the framework of a well-designed puzzle-driven adventure game — must surely be thanks to Mullich. In this sense, then, Sears’s departure came at the perfect time, allowing the experienced, detail-oriented Mullich to run with the grandiose concept which Sears and Ellison, those two game-design neophytes, had cooked up together. It was, one might say, the best of both worlds.

But, lest things start to sound too warm and fuzzy, know that Harlan Ellison was still Harlan Ellison. In the spring of 1996, he filed a lawsuit against Cyberdreams for unpaid royalties. Having spent his life in books and television, he appears to have failed to understand just how limited the sales prospects of an artsy, philosophical computer game like this one really were, regardless of how many awards it won. (Witness his comparison of Cyberdreams to the television empire of Aaron Spelling in one of the quotes above; in reality, the two operated not so much in different media galaxies as in different universes.) “With the way the retail chain works, Cyberdreams probably hadn’t turned a profit on the game by the time the lawsuit was filed,” noted Computer Gaming World. “We’re not talking sales of Warcraft II here, folks.” I don’t know the details of Ellison’s lawsuit, nor what its ultimate outcome was. But I do know that David Mullich estimates today that No Mouth probably sold only about 40,000 copies in all.

Harlan Ellison didn’t always keep the sweeping promises he made in the heat of the moment; he huffily announced on several occasions that he was forever abandoning television, the medium with which he had spent so much of his career locked in a deadly embrace, only to be lured back in by money and pledges that this time things would be different. He did, however, keep his promise of never making another computer game. And that, of course, makes the one game he did help to make all the more special. I Have No Mouth, and I Must Scream stands out from the otherwise drearily of-its-time catalog of Cyberdreams as a multimedia art project that actually works — works as a game and, dare I say it, as a form of interactive literature. It stands today as a rare fulfillment of the promise that so many saw in games back in those heady days when “multimedia” was the buzzword of the zeitgeist — the promise of games as a sophisticated new form of storytelling capable of the same relevance and resonance as a good novel or movie. This is by no means the only worthwhile thing that videogames can be, nor perhaps even the thing they are best at being; much of the story of gaming during the half-decade after No Mouth’s release is that of a comprehensive rejection of the vision Cyberdreams embodied. The company went out of business in 1997, by which time its artsy-celebrity-driven modus operandi was looking as anachronistic as Frank Sinatra during the heyday of the Beatles.

Nevertheless, I Have No Mouth, and I Must Scream remains one of the best expressions to stem from its confused era, a welcome proof positive that sometimes the starry-eyed multimedia pundits could be right. David Mullich went on to work on such high-profile, beloved games as Heroes of Might and Magic III and Vampire: The Masquerade — Bloodlines, but he still considers No Mouth one of the proudest achievements of a long and varied career that has encompassed the naïvely idealistic and the crassly commercial in equal measure. As well he should: No Mouth is as meaningful and moving today as it was in 1995, a rare example of a game adaptation that can be said not just to capture but arguably to improve on its source material. It endures as a vital piece of Harlan Ellison’s literary legacy.


In I Have No Mouth, and I Must Scream, you explore the traumas of each of the five people imprisoned by the psychotic supercomputer AM, taken in whatever order you like. Finding a measure of redemption for each of them opens up an endgame which offers the same chance for the rest of humanity — a dramatic departure from the infamously bleak ending of the short story on which the game is based.

Each character’s vignette is a surreal evocation of his tortured psyche, but is also full of opportunities for him to acknowledge and thereby cleanse himself of his sins. Harlan Ellison particularly loved this bit of symbolism, involving the wife and mother-in-law of the truck driver Gorrister: he must literally let the two principal women in his life off the hook. (Get it?) Ellison’s innocent delight in interactions like these amused the experienced game designer David Mullich, for whom they were old hat.

In mechanical terms, No Mouth is a fairly typical adventure game of its period. Its engine’s one major innovation can be seen in the character portrait at bottom left. The background here starts out black, then lightens through progressive shades of green as the character in question faces his demons (literally here, in the case of Ted — the game is not always terribly subtle). Ideally, each vignette will conclude with a white background. Be warned: although No Mouth mostly adheres to a no-deaths-and-no-dead-ends philosophy — “dying” in a vignette just gets the character bounced back to his cage, whence he can try again — the best ending becomes impossible to achieve if any of the characters fails to demonstrate a reasonable amount of moral growth in the process of completing his or her vignette.

The computer genius Ellen is terrified of yellow, the color worn by the man who raped her. Naturally, the shade features prominently in AM’s decor.

The professional soldier Benny confronts the graves of the men who died under his command.

If sins can be quantified, then Nimdok, the assistant to Dr. Mengele, surely has the most to atone for. His vignette involves the fable of the Golem of Prague, who defended the city’s Jewish ghetto against the pogroms of the late sixteenth century. Asked whether he risked trivializing the Holocaust by putting it in a game, Harlan Ellison answered in the stridently negative: “Nothing could trivialize the Holocaust. I don’t care whether you mention it in a comic book, on bubble-gum wrappers, in computer games, or write it in graffiti on the wall. Never forget. Never forget.”


People say, “Oh, you’re so prolific.” That’s a remark made by assholes who don’t write. If I were a plumber and I repaired 10,000 toilets, would they say, “Boy, you’re a really prolific plumber?”

If I were to start over, I would be a plumber. I tell that to people, they laugh. They think I’m making it up. It’s not funny. I think a plumber, a good plumber who really cares and doesn’t overcharge and makes sure things are right, does more good for the human race in a given day than 50 writers. In the history of the world, there are maybe, what, 20, 30 books that ever had any influence on anybody, maybe The Analects of Confucius, maybe The History of the Peloponnesian Wars, maybe Uncle Tom’s Cabin. If I ever write anything that is remembered five minutes after I’m gone, I will consider myself having done the job well. I work hard at what I do; I take my work very seriously. I don’t take me particularly seriously. But I take the work seriously. But I don’t think writing is all that inherently a noble chore. When the toilet overflows, you don’t need Dostoevsky coming to your house.

That’s what I would do, I would get myself a job as a plumber. I would go back to bricklaying, which I used to do. I would become an electrician. Not an electrical engineer. I would become an electrician. I would, you know, install a night light in a kid’s nursery, and at the end of the day, if I felt like writing, I would write something. I don’t know what that has to do with the game or anything, but you asked so I told you.

— Harlan Ellison (1934-2018)

(Sources: the books The Way the Future Was by Frederick Pohl, These Are the Voyages: Season One by Marc Cushman with Susan Osborn, The Cambridge Companion to Science Fiction edited by Edward James and Farah Mendlesohn, I Have No Mouth & I Must Scream: Stories by Harlan Ellison, and I Have No Mouth, and I Must Scream: The Official Strategy Guide by Mel Odom; Starlog of September 1977, April 1980, August 1980, August 1984, November 1985, and December 1985; Compute! of November 1992; Computer Gaming World of March 1988, September 1992, July 1993, September 1993, April 1996, May 1996, July 1996, August 1996, November 1996, and June 1999; CU Amiga of November 1992 and February 1993; Next Generation of January 1996; A.N.A.L.O.G. of June 1987; Antic of August 1983; Retro Gamer 183. Online sources include a 1992 Game Informer retrospective on I Have No Mouth, and I Must Scream and a history of Cyberdreams at Game Nostalgia. My thanks also go to David Mullich for a brief chat about his career and his work on No Mouth.

I Have No Mouth, and I Must Scream is available as a digital purchase at GOG.com.)

 
 
