Hitchhiking the Galaxy Infocom-Style

The Hitchhiker's Guide to the Galaxy

Given that Hitchhiker’s is both one of the most commercially successful text adventures ever released and one that oozes with interesting things to talk about, I thought I would look at the experience in more detail than I have any Infocom game in quite some time. As we’ll see, Hitchhiker’s is not least interesting in that it manages to represent both a step forward and a step back for Infocom and the art of interactive fiction. What follows is a sort of guided tour of the game.

As with any Infocom game, the experience of Hitchhiker’s for any original player began long before she put the disk in the drive. It began with the box and its contents. The Hitchhiker’s package is one of the most storied of all from this company that became so famous for their rich packages. It’s bursting with stuff, most of it irrelevant to the actual contents of the disk but all of it fun: an advertising brochure for the titular guidebook;[1]“As seen on Tri-D!” a microscopic space fleet;[2]Easily mistaken for an empty plastic baggie. a set of “peril-sensitive sunglasses”;[3]They turn opaque when danger is at hand to avoid upsetting your delicate sensibilities. The ones in the game package are, naturally, made of black construction paper. a piece of pocket fluff; a set of destruct orders for Arthur Dent’s house and the Earth; the obligatory “Don’t Panic!” button.[4]These were manufactured in huge quantities and given away for some time at trade shows and the like as well as being inserted into game boxes.

Impressive as the packaging is, not all of it was to Douglas Adams’s taste. He hated the gibbering green planet,[5]Or whatever it’s supposed to be. which had been designed and pressed into service by Simon & Schuster’s Pocket Books imprint without any input from him when they first began to publish the books in North America. He briefly kicked up a fuss when he saw it leering at him from the Infocom box as well, but Infocom’s contacts at Simon & Schuster, whom Infocom was considering allowing to buy them at just this time and thus preferred to remain on good terms with, had asked with some urgency that it be there. By the time Adams saw the box there wasn’t really time to change it anyway. And so the planet — and I have to agree with him that it’s pretty hideous — remained.

The game proper begins just where the books and the smorgasbord of other variations of Hitchhiker’s did: with you as Arthur Dent waking up hungover in bed on what is going to be “the worst day of your life.” You immediately get a couple of clues that this is not going to be your typical Infocom game. The first command you must enter is “TURN ON LIGHT,” a typical enough action to take upon waking up in a dark bedroom, perhaps, but one that could momentarily stump a seasoned adventurer, so accustomed to living in an object-oriented world where she can only interact with what the game tells her exists around her; the game, you see, never mentions the presence of a light switch or a light. Hitchhiker’s will continue to occasionally accept or demand inputs that you, the Seasoned Adventurer, just don’t expect to work. And then when you dutifully check your starting inventory, you learn that you have “no tea” and “a splitting headache.” Such mixing of the physical and the metaphysical amongst your possessions will also remain an ongoing theme.

Especially if you’re at all familiar with any of the other Hitchhiker’s manifestations, you probably won’t have too many problems getting out of the house and lying in front of Mr. Prosser’s bulldozer to stop (for the moment) the destruction of your house to make room for a new bypass. If you do misjudge things and get yourself killed, you get an early introduction to another of the game’s thoroughgoing traits: the narrator, or the computerized “game master” if you will, has a voice and personality all his own. Playing Hitchhiker’s is like having a conversation — often an infuriating one — with him.

>stand
The bulldozer driver gives a quick chew of his gum and slams in the clutch. The bulldozer piles into the side of your home.

Your home collapses in a cloud of dust, and a stray flying brick hits you squarely on the back of the head. You try to think of some suitable last words, but what with the confusion of the moment and the spinning of your head, you are unable to compose anything pithy and expire in silence.

>s
You keep out of this, you're dead. An ambulance arrives.

>don't panic
You keep out of this, you're dead and should be concentrating on developing a good firm rigor mortis. You are put in the ambulance, which drives away.

>pray
For a dead person you are talking too much. As the ambulance reaches the mortuary a fleet of Vogon Constructor ships unexpectedly arrives and demolishes the Earth to make way for a new hyperspace bypass.

We are about to give you your score. Put on your peril-sensitive sunglasses now. (Hit RETURN or ENTER when ready.) >

The narrator doesn’t just tell you what’s happening. He chats with you, messes with you, has fun at your expense, occasionally lies to you. This isn’t precisely new ground for Infocom; their games going back to Zork I had occasionally broken from being the neutral voice of the simulation to offer quips and funny responses. But it’s much, much more pronounced here, an integral part of the experience instead of just the odd response to the occasional misguided or intentionally bizarre input. A similar smart-alecky narrative voice was par for the course with other versions of Hitchhiker’s and, indeed, kind of Adams’s default persona as a writer. Certainly anyone who’s ever read a Dickens novel knows that a second- or third-person voice doesn’t mean the narrator isn’t a character in the work. But seeing it here… well, that feels significant for anyone who’d like to see interactive fiction as a defensible literary form. Chalk that up as a big step forward.

So, Ford Prefect arrives just as expected to return your towel preparatory to getting off this rock before the Vogons blow it up to make way for a hyperspace bypass. If you haven’t experienced it already, here you are also introduced to another dominant trait: the game’s often arbitrary cruelty. If you simply take the towel Ford offers, he marches away, the Vogons arrive, and it’s game over. No, you have to refuse the towel to force Ford to actually notice your situation and come to the snap decision to take you with him.[6]Ford Prefect’s name, by the way, is one of the subtler jokes in Hitchhiker’s, and one that was entirely lost on American readers. The Ford Prefect, you see, was once a model of automobile in Britain. When the Betelgeusian Ford Prefect chose the name as “nicely inconspicuous,” he did so because he had, as Adams himself later clarified, “mistaken the dominant life form” on the planet.

Ford convinces Prosser to take your place in front of the bulldozer, just like in the other Hitchhiker’s manifestations. You head for the pub, where you consume beer and peanuts to fortify yourselves for the matter transference beam. Then Prosser wises up and knocks down your house after all, prompting you, reenacting the role of the Arthur you remember from the book, to run screaming out of the pub. On the way you meet a hungry dog, who is key to one of the most legendarily cruel puzzles in the game. You need to feed him. As for why… well, that you won’t learn until much later. Then the Vogons show up with appropriate pomp and circumstance, and you and Ford have a bare instant to make your escape.

You wake up in darkness. The puzzle that follows can almost be read as an homage to the original Adventure, which had a maze for which close reading was similarly key. It’s early evidence of another trait of Hitchhiker’s: a tendency to evoke and occasionally subvert the adventuring heritage that preceded it. For instance, it will later hit you with a blessedly fake maze. As David Lebling recently put it, the form by 1984 was old and established enough that it could be parodied.

>z
You can hear nothing, smell nothing, taste nothing, see nothing, feel nothing, and are not even certain who you are.

>z
There's nothing you can taste, nothing you can see, nothing you can hear, nothing you can feel, nothing you can smell, you do not even know who you are.

>z
There's nothing you can taste, nothing you can see, nothing you can hear, nothing you can feel, nothing you can smell, you do not even know who you are.

>z
You can see nothing, feel nothing, hear nothing, taste nothing, and are not entirely certain who you are.

>smell
(darkness)
It does smell a bit. There's something pungent being waved under your nose. Your head begins to clear. You can make out a shadow moving in the dark.

>examine shadow
The shadow is vaguely Ford Prefect-shaped.

Vogon Hold
This is a squalid room filled with grubby mattresses, unwashed cups, and unidentifiable bits of smelly alien underwear. A door lies to port, and an airlock lies to starboard.
In the corner is a glass case with a switch and a keyboard.
It looks like the glass case contains:
an atomic vector plotter
Along one wall is a tall dispensing machine.

Ford removes the bottle of Santraginean Mineral Water which he's been waving under your nose. He tells you that you are aboard a Vogon spaceship, and gives you some peanuts.

That “tall dispensing machine” marks the most famous puzzle ever to appear in an Infocom game, or in any text adventure by anyone for that matter. A whole mythology sprung up around it. Infocom did a booming business for a while in “I got the babel fish!” tee-shirts, while it’s still mentioned from time to time today — sometimes, one suspects, by folks who actually know it only as a trope — as the ultimate in cruel puzzles. Yet I’ve always been a bit nonplussed by its reputation. Oh, getting the babel fish from dispenser to auditory canal is a difficult, convoluted game of Mouse Trap which is made yet more difficult by the facts that the dispenser has only a limited number of fish and you have only a limited number of turns in which to work before you’re hauled off to the Vogon captain’s poetry reading. Still, solving this puzzle is far from an insurmountable task. You’re given good feedback upon each failure as to exactly what happened to intercept the babel fish on its journey, while your scope of possibility is somewhat limited by the fact that this is still quite early in the game, when there aren’t yet that many objects to juggle. I feel like its reputation probably stems from the fact that it’s met so early in the game. Thus even most casual players did encounter it — and, it being the first really difficult puzzle, and one of the first for which prior knowledge of the other Hitchhiker’s manifestations was of no use, many or most of those players likely never got any further. The Imps have often noted that most people never finished most of the Infocom games they bought. What with its mass appeal to people who knew nothing of Infocom or adventure games thanks to the license as well as its extreme difficulty, one would presume that Hitchhiker’s had an even more abysmal rate of completion than the norm.

Since solving the babel-fish puzzle[7]Or not. is something of a rite of passage for all adventurers, I won’t totally spoil it here. I will note, however, that the very last step, arguably the most difficult of all, was originally even more difficult.

A small upper-half-of-the-room cleaning robot flies into the room, catches the babel fish (which is all the flying junk it can find), and exits.

The original version didn’t have that crucial parenthesis; it was wisely added at the insistence of Mike Dornbrook, who felt the player deserved just a little nudge.

The babel fish, of course, lets you understand the Vogon language, which is in turn key to getting that atomic vector plotter that is for some reason on display under glass amidst the “unidentifiable bits of smelly alien underwear.” Also key to that endeavor is the Vogon poetry reading to which you’re soon subjected.[8]The original Hitchhiker’s radio serial mentions Vogon poetry as the third worst in the universe. The second is that of the Azgoths of Kria, while the first is that of Paul Neil Milne Johnstone of Earth. Rather astoundingly, Johnstone is actually a real person, a bunk mate of Adams’s back at Brentwood School who would keep him awake nights “scratching this awful poetry about swans and stuff.” Now, it was kind of horrible of Adams to call him out like that (and probably kind of horrible for me to tell this story now), but it just keeps getting better. Poor Johnstone, who was apparently an earnest poet into adult life but not endowed with much humor not of the unintentional stripe, wrote a letter to Time Out magazine that’s as funny as just about anything in Hitchhiker’s:

“Unfortunate that Douglas Adams should choose to reopen a minor incident; that it remains of such consequence to him indicates a certain envy, if not paranoia. Manifest that Adams is being base-minded and mean-spirited, but it is surely unnecessary for Steve Grant [a journalist to whom Adams had told the story] to act as a servile conduit for this pettiness.”

With Johnstone’s lawyers beginning to circle, Paul Neil Milne Johnstone became Paula Nancy Millstone Jennings in the book and later adaptations.

What you’re confronted with here is a puzzle far more cruel in my eyes than the babel-fish puzzle. It’s crucial that you get the Vogon captain to extend his reading to two verses; let’s not get into why. Unfortunately, at the end of the first verse he remarks that “you didn’t seem to enjoy my poetry at all” and has you tossed out the airlock. The solution to this conundrum is a bit of lateral thinking that will likely give logical, object-focused players fits: you just have to “ENJOY POETRY.”

>enjoy poetry
You realise that, although the Vogon poetry is indeed astoundingly bad, worse things happen at sea, and in fact, at school. With an effort for which Hercules himself would have patted you on the back, you grit your teeth and enjoy the stuff.

I’m not sure how to feel about this. It’s undeniably clever, and almost worth any pain for the great line “worse things happen at sea, and in fact, at school.” But at heart it’s guess-the-verb, or at least guess-the-phrase, a rather shocking thing to find in an Infocom game of 1984. Now maybe my description of Hitchhiker’s as both progressive and regressive starts to become clearer, as does Dornbrook’s assertion that Adams pushed Meretzky to “break the rules.” A comparison with the babel-fish puzzle shows Hitchhiker’s two puzzling personalities at their extremes. For all its legendary difficulty, the babel-fish puzzle feels to me like a vintage Meretzky puzzle: intricate but logical, responsive to careful reading and experimentation. “ENJOY POETRY,” on the other hand, is all Adams. You either make the necessary intuitive leap or you don’t. If you do, it’s trivial; if you don’t, it’s impossible.

In the session I played before writing this article, something else happened in the midst of the poetry-as-torture-device. Suddenly this long piece of text appeared, apropos of nothing going on at the time:

It is of course well known that careless talk costs lives, but the full scale of the problem is not always appreciated. For instance, at the exact moment you said "look up vogon in guide" a freak wormhole opened in the fabric of the space-time continuum and carried your words far far back in time across almost infinite reaches of space to a distant galaxy where strange and warlike beings were poised on the brink of frightful interstellar battle.

The two opposing leaders were meeting for the last time. A dreadful silence fell across the conference table as the commander of the Vl'Hurgs, resplendent in his black jewelled battle shorts, gazed levelly at the G'Gugvunt leader squatting opposite him in a cloud of green, sweet-smelling steam. As a million sleek and horribly beweaponed star cruisers poised to unleash electric death at his single word of command, the Vl'Hurg challenged his vile enemy to take back what it had said about his mother.

The creature stirred in its sickly broiling vapour, and at that very moment the words "look up vogon in guide" drifted across the conference table. Unfortunately, in the Vl'hurg tongue this was the most dreadful insult imaginable, and there was nothing for it but to wage terrible war for centuries. Eventually the error was detected, but over two hundred and fifty thousand worlds, their peoples and cultures perished in the holocaust.

You have destroyed most of a small galaxy. Please pick your words with greater care.

It incorporates an invalid input I had tried earlier, an attempt to look something up in the in-game version of the Hitchhiker’s Guide using syntax the game didn’t much like.[9]It’s fairly persnickety here; you can only “CONSULT GUIDE ABOUT” things. The little story is funny, especially if you haven’t recently read the novel version of Hitchhiker’s; it’s lifted verbatim from a passing riff near the end of the book, with only your invalid input replacing the novel’s version of Arthur’s comment that “I seem to be having tremendous difficulty with my lifestyle.”[10]Indeed, it seems to go relatively unremarked just how much text in the game is lifted directly from the novel, another artifact perhaps of the sheer difficulty of getting original prose out of Adams. More interesting to me, however, is what it represents conceptually. In incorporating a spurious input into the story in this way, it represents a sort of breaking of the fourth wall — a fascinating development in light of the fact that Infocom had spent a great deal of effort building said wall in the first place. By the time of Hitchhiker’s they scrupulously distinguished between what I’ll refer to as diegetic commands (things that cause things to happen in the storyworld) and non-diegetic — or, if you like, utility — commands (things like “SAVE” or “RESTORE” or, indeed, invalid inputs that don’t affect the storyworld). For instance, time passes in the story and the turn counter advances only in the case of the former. Infocom’s goal had long ago become to separate the undesirable challenge of interacting with the parser from the desirable one of interacting with the storyworld. Now along comes Adams to muddy it all up again. The difference, of course, is that early text adventures confused the layers of interface and simulation because they didn’t entirely know what they were doing. Adams and Meretzky break the formal rules the way artists do — consciously.
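The diegetic/non-diegetic split described above can be sketched in a few lines. This is purely a hypothetical illustration of the design principle, not Infocom's actual ZIL code; the verb lists and class names are invented for the example:

```python
# A toy parser loop illustrating the convention the paragraph describes:
# diegetic commands advance the storyworld's turn counter, while utility
# commands (SAVE, RESTORE) and invalid inputs leave world time untouched.

DIEGETIC_VERBS = {"take", "open", "go", "look"}   # act within the storyworld
UTILITY_VERBS = {"save", "restore", "script"}     # act on the interface layer

class ToyGame:
    def __init__(self):
        self.turns = 0

    def handle(self, command: str) -> str:
        words = command.lower().split()
        verb = words[0] if words else ""
        if verb in UTILITY_VERBS:
            return "utility"        # no world time passes
        if verb in DIEGETIC_VERBS:
            self.turns += 1         # the storyworld ticks forward one turn
            return "diegetic"
        return "invalid"            # parser errors also cost no world time

game = ToyGame()
game.handle("take towel")    # diegetic: counter advances
game.handle("save")          # utility: counter unchanged
game.handle("xyzzy plugh")   # invalid: counter unchanged
print(game.turns)            # → 1
```

What Adams and Meretzky do with the Vl'Hurg incident is deliberately pipe an "invalid" input back into the storyworld anyway, collapsing the very separation this kind of structure exists to enforce.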

That’s not the end of the fellow in the jeweled battle shorts and his buddy; you’ll be meeting them again soon. But in the meantime you’re thrown out of the Vogon airlock[11]Although hopefully not before collecting the essential atomic vector plotter and picked up at the last possible second by the Heart of Gold, thanks to the magic of Infinite Improbability Physics. You end up in Darkness again, a motif that will continue to recur. You come out of it via another of the game’s most storied bits, the first of two incidents of it flat-out lying to you:

>listen
(to darkness)
You hear the deep and distant hum of a star drive coming from far above. There is an exit to port.

>w
You can't go that way.

>n
You can't go that way.

>s
(We were lying about the exit to port.) You emerge from a small doorway...

Entry Bay Number Two
This is an entry bay for the Heart of Gold. A corridor lies aft of here.
There is a sales brochure here.

"This looks like that incredible new Infinite Improbability Drive spaceship, the Heart of Gold!" says Ford, with growing excitement.

"Announcement, announcement. This is Eddie (the shipboard computer). We have just picked up two hitchhikers at an improbability factor of 2 to the 21,914 power to 1 against."

Just as you’d expect from the novel, you soon meet the masters of the Heart of Gold, two-headed party fiend Zaphod Beeblebrox and his Earthling girlfriend Trillian, née Tricia McMillan, whom you (Arthur) once tried to pick up at a party in London only to watch her leave with Zaphod.[12]I’ve always found Zaphod a hilarious character because he was such a walking, talking anachronism even in the early 1980s. He’s just so obviously a creature of the 1970s, from his hippy-dippy diction to his easygoing, lackadaisically stoned take on existence. He’d fit right in in Dazed and Confused. But from here things suddenly diverge from the novel. Your companions all bugger off to the sauna, conveniently removing themselves from the implementation equation and leaving you to explore the Heart of Gold and, eventually, a number of other realities to obtain a collection of tools,[13]Don’t ask. a collection of fluff,[14]Really don’t ask. and, stereotypical Englishman that you are, a good cup of tea. Ford helpfully leaves his copy of the Guide with you; you can “CONSULT” it about an impressive number of things. Some of these entries are just meant for fun, although they are once again often just recycled bits from the book. At least a few, however, are essential reading.

The Heart of Gold also contains the second instance of the game lying to you, this one much more extended.

>u
Corridor, Aft End
This is one end of a short corridor that continues fore along the main deck of the Heart of Gold. Doorways lead to aft and port. In addition, a gangway leads downward.

>s
That entrance leads to the Infinite Improbability Drive chamber. It's supposed to be a terribly dangerous area of the ship. Are you sure you want to go in there?

>yes
Absolutely sure?

>yes
I can tell you don't want to really. You stride away with a spring in your step, wisely leaving the Drive Chamber safely behind you. Telegrams arrive from well-wishers in all corners of the Galaxy congratulating you on your prudence and wisdom, cheering you up immensely.

>s
What? You're joking, of course. Can I ask you to reconsider?

>no
Engine Room
You're in the Infinite Improbability Drive chamber. Nothing happens; there is nothing to see.

>l
Engine Room
I mean it! There's nothing to see here!

>l
Engine Room
Okay, okay, there are a FEW things to see here. This is the room that houses the powerful Infinite Improbability Generator that drives the Heart of Gold. An exit lies fore of here.
Sitting in the corner is a spare, portable Improbability Generator.
There is an ionic diffusion rasp here.
There is a pair of hypersonic pliers here.

(Footnote 10)

>footnote 10
I guess it isn't all that dangerous a place after all.

Those footnotes which pop up from time to time are another of the game’s blizzard of new ideas — rather pointless really, but good fun.[15]Like (hopefully) the ones I’ve included in this article in homage. Or maybe this is my bid for literary greatness via my own version of Pale Fire.

If you experiment and use the Guide wisely, you’ll eventually find a way to transport yourself into about half a dozen little vignettes, sometimes still in the person of Arthur, sometimes in that of one of your three companions currently slumming it in the sauna. I won’t belabor most of these; this article has to end at some point, after all, and if you do play for yourself you deserve to discover something for yourself. But I do want to talk just a bit about one, or rather two that are closely interrelated, because they involve a puzzle often cited as an example of Hitchhiker’s extreme, downright un-Infocom-like cruelty.

One of the vignettes features our friend of the jeweled battle shorts. It seems that he and his erstwhile enemy have worked out the source of the misunderstanding that led to all those centuries of terrible war: a creature from Earth.[16]This would seem to belie the Guide‘s description of Earth as “harmless,” and even the revised description of it as “mostly harmless.” You’re transported onto the bridge of his flagship as the two of them hurtle toward your planet, not yet destroyed by the Vogons in this vignette,[17]There’s a joke, or maybe an aphorism, in there somewhere. “Between a Vl’Hurg and a Vogon,” maybe? with malice in their hearts.

War Chamber
Spread before you, astonishingly enough, is the War Chamber of a star battle cruiser. Through the domed canopy of the ship you can see a vast battle fleet flying in formation behind you through the black, glittering emptiness of space. Ahead is a star system towards which you are hurtling at a terrifying speed.
There is an ultra-plasmic vacuum awl here.

Standing near you are two creatures who are gazing at the star system with terrible hatred in their eyes. One is wearing black jewelled battle shorts, and the other is wreathed in a cloud of green, sweet-smelling steam. They are engaged in conversation.

The fleet continues to hurtle sunwards.

If you’re like, oh, about 95% of players, your journey will end abruptly when the battle fleet, which in a fatal oversight on the part of our militant alien friends turns out to be microscopic by the scale of the Earth, is swallowed by a small dog. To prevent this, you needed to have taken the unmotivated (at the time) step of feeding something to the aforementioned dog way back on Earth in the first act of the game, before the Vogons arrived. Horribly cruel, no? Well, yes and no. Another of the vignettes — they appear in random order, thus justifying Meretzky’s assertion that Hitchhiker’s ends up representing one of the “most ruthlessly nonlinear designs we [Infocom] ever did” — has you replaying the opening sequence of the game again, albeit from the perspective of Ford Prefect. You can also feed the dog there. If you fail at a vignette, meanwhile — and that’s very easy to do — you usually “die,” but that’s not as bad as you might expect. You’re merely returned to the Heart of Gold, and can have another go at it later. This mechanism saves Hitchhiker’s repeatedly, and not least in the case of this puzzle, from accusations of relying on extensive learning by death.

Still, there should be no mistake: Hitchhiker’s is punishingly difficult for even the most experienced of adventurers, the most challenging Infocom release since Suspended and the one with the most elements of, shall we say, questionable fairness since the days of Zork II and Deadline. While it is possible to repeat the vignettes until you solve each overarching challenge, it’s painfully easy to leave small things undone. Having “solved” the vignette in the sense of completing its overarching goal, you’re then locked out of experiencing it again, and thus locked out of victory for reasons that are obscure indeed.[18]Zaphod’s sequence is particularly prone to this, to the extent that I’ll offer a hint: look under the seat! One or two puzzles give no immediate feedback after you solve them, which can lead you to think you’re on the wrong track.[19]I’m thinking particularly of growing the plant here. For virtually the entire game after arriving on the Heart of Gold you labor away with no clear idea what it is you’re really supposed to be accomplishing. Sometimes vital properties of things go undescribed just for the hell of it.[20]I’m speaking particularly of the brilliantly Adamsian “thing your aunt gave you that you don’t know what it is,” of which it’s vital to know — take this as another tip — that you can put things inside it, even though that’s never noted or implied by its description. And then many of these puzzles are… well, they’re just hard, and at least as often hard in the way of “ENJOY POETRY” as in the way of the babel fish. The “Standard” difficulty label on the box, which was placed there purely due to marketing needs, is the cruelest touch of all.

So, we must ask just how Hitchhiker’s became such an aberration in the general trend of Infocom games to become ever fairer and, yes, easier. Meretzky noted that trend in his interview for Get Lamp and was not, either back in the day or at the time of his interview, entirely happy about it. He felt that wrestling with a game for weeks or months until you had that “Eureka!” moment in the bathtub or the middle of a working day was a huge part of the appeal of the original Zork — an appeal that Infocom was gradually diluting. Thus Meretzky and Adams explicitly discussed his opinion that “adventure games were becoming a little too easy,” and that Hitchhiker’s could be a corrective to that. Normally puzzles that were exceptionally difficult had their edges rounded during Infocom’s extensive testing process. But that didn’t happen for Hitchhiker’s to the extent that it normally did, for a couple of reasons. First, many of these puzzles had been written not by any ordinary Imp but by Douglas Adams; for obvious reasons, Infocom was reluctant to step on his toes. Additionally, the testers didn’t have nearly as much time with Hitchhiker’s as with an ordinary Infocom game, thanks to Adams’s procrastination and the resultant delays and Infocom’s determination to get the game out in time for Christmas. The testers did a pretty good job with the purely technical side; even the first release of Hitchhiker’s is not notably buggy. But there wasn’t time for the usual revisions to the design as a whole even had there been a strong motivation to do them from Infocom’s side. Any lack of such motivation was not down to lack of complaining from the testers: Meretzky admits that they “strongly urged that the game be made easier.”

The decision to go ahead with such a cruel design has been second-guessed by folks within Infocom in the years since, especially in light of the declining commercial fortunes of the company’s post-Hitchhiker’s era. Jon Palace presented a pretty good summary of the too-difficult camp’s arguments in his own Get Lamp interview:

Some have argued that The Hitchhiker’s Guide to the Galaxy was one of the biggest mistakes we made because it introduced a huge audience to a relatively difficult game. The difficulty of the game and its design flaws[21]Palace was no fan of the dog-feeding puzzle in particular. may have turned off the largest new audience we could have had. Perhaps we should have made that game a lot easier. It’s very funny, and it’s got some terrific puzzles. But my point is that if it was the first time people were experiencing an Infocom game, because of the names “Hitchhiker’s Guide” and “Douglas Adams,” there was only so much Douglas Adams they could get out of it without working harder than they wanted to.

Steve Meretzky, on the other hand, remains unrepentant, as do Mike Dornbrook and others. Dornbrook’s argument, which strikes me as flawed, is essentially that most people didn’t finish most Infocom games anyway — even the easier ones — so Hitchhiker’s difficulty or hypothetical lack thereof didn’t make much difference. I suppose your attitude toward these issues says much about what you want Infocom’s games to be: accessible interactive stories with a literary bent or intricate puzzle boxes. It’s Graham Nelson’s memorable description of interactive fiction as a narrative at war with a crossword writ large yet again. For my part, I think interactive fiction can be either, an opinion apparently shared by Meretzky himself, the man who went on to write both the forthrightly literary A Mind Forever Voyaging and the unabashed puzzle box that is Zork Zero. Yet I do demand that my puzzle boxes play fair, and find that Hitchhiker’s sometimes fails me here. And while I have no objection to the concept of a tougher Infocom game for the hardcore who cut their teeth on Zork,[22]See 1985’s Spellbreaker, which unlike Hitchhiker’s was explicitly billed as exactly that and does a superb job at it. I’m not sure that Hitchhiker’s should have been that game, for the obvious commercial considerations Palace has just outlined for us.

And yet, and yet… it’s hard to see how some of the more problematic aspects of Hitchhiker’s could be divorced from its more brilliant parts. As a final example of that, I want to talk about — and, yes, spoil — one last puzzle, one of the last in the game in fact. By now you’ve collected all of the various bits and pieces from the vignettes and the narrative of the game has rejoined that of the book; the Heart of Gold has landed on the legendary lost planet of Magrathea. You’ve also managed to brew yourself a nice hot cup of tea. Now you need to get inside the room of Marvin the Paranoid Android to convince him to open the ship’s hatch to let you go exploring.

>s
Corridor, Aft End
This is one end of a short corridor that continues fore along the main deck of the Heart of Gold. Doorways lead to aft and port. In addition, a gangway leads downward.

>w
The screening door is closed.

>open door
The door explains, in a haughty tone, that the room is occupied by a super-intelligent robot and that lesser beings (by which it means you) are not to be admitted. "Show me some tiny example of your intelligence," it says, "and maybe, just maybe, I might reconsider."

>consult guide about intelligence
The Guide checks through its Sub-Etha-Net database and eventually comes up with the following entry:

Thirty million generations of philosophers have debated the definition of intelligence. The most popular definition appears in the Sirius Cybernetics Corporation android manuals: "Intelligence is the ability to reconcile totally contradictory situations without going completely bonkers -- for example, having a stomach ache and not having a stomach ache at the same time, holding a hole without the doughnut, having good luck and bad luck simultaneously, or seeing a real estate agent waive his fee."

>get no tea
no tea: Taken.

>i
You have:
no tea
tea
a flowerpot
The Hitchhiker's Guide
a towel
a thing your aunt gave you which you don't know what it is
a babel fish (in your ear)
your gown (being worn)

>open door
The door is almost speechless with admiration. "Wow. Simultaneous tea and no tea. My apologies. You are clearly a heavy-duty philosopher." It opens respectfully.

I’m not quite sure how you make that intuitive leap precisely fair, but I am pretty sure I wouldn’t want to live without it. Maybe Hitchhiker’s is fine just the way it is. Soon after, you drink that glorious cup of tea, a feat which, in possibly the most trenchant and certainly the funniest piece of social commentary on the nature of Britishness in the entire game, scores you a full 100 of the game’s total of 400 points. Soon after that you step onto the surface of Magrathea, where “almost instantly the most incredible adventure starts which you’ll have to buy the next game to find out about.” That game, of course, would never materialize. The ludic version of Arthur Dent has remained frozen in amber just outside the Heart of Gold for almost thirty years now, giving Hitchhiker’s a claim to one final dubious title: that of the only game in the Infocom canon that doesn’t have an ending.

Crazy and vaguely subversive as it is, Hitchhiker’s would have a massive influence on later works of interactive fiction. Contemporaneous Infocom games are filled with what feels to modern sensibilities like an awful lot of empty rooms that exist only to be mapped and trekked across. Hitchhiker’s, on the other hand, is implemented deeply rather than widely. There are just 31 rooms in the entire game, but virtually every one of them has interesting things to see and do within it. Further, these 31 rooms come not in a single contiguous and unchanging block, but a series of linked dramatic scenes. The Heart of Gold, which contains all of nine rooms, is by far the biggest contiguous area in the game. Hitchhiker’s can thus lay pretty good claim to being the first text adventure to completely abandon the old obsession with geography that defined the likes of Adventure and Zork. Certainly it’s the first Infocom game in which map-making is, even for the most cartographically challenged amongst us, utterly superfluous. This focus on fewer rooms with more to do in them feels rather shockingly modern for a game written in 1984. Ditto the dynamism of most of the scenes, with things always happening around you that demand a reaction. The only place where you can just explore at your leisure is the Heart of Gold.

Many a later game, including such 1990s classics as Curses, Jigsaw, and The Mulldoon Legacy, has used linked vignettes like those in Hitchhiker’s to send the player hopscotching through time and space. More have followed its lead in including books and other materials to be “CONSULT”ed. Even a fair number[23]Not to mention this post. have latched onto the pointless but somehow amusing inclusion of footnotes. Less positively, quite a number of games both inside the interactive-fiction genre and outside of it have tried very hard to mimic Adams’s idiosyncratic brand of humor, generally to less than stellar effect.[24]Tolkien is about the only other generally good author I can think of who has sparked as much bad writing as Adams.

Hitchhiker’s is an original, with a tone and feel unique in the annals of interactive fiction. It breaks the rules and gets away with it. I’m not sure prospective designers should try to copy it in that, but they certainly should play it, as should everyone interested in interactive fiction. It’s easily one of the dozen or so absolutely seminal works in the medium. Fortunately, it’s also the most effortless of all Infocom games to play today, as the BBC has for some years now hosted an online version of it. Yes, there’s lots of graphical gilding around the lily, but at heart it’s still the original text adventure. If you’re interested enough in interactive fiction to make it this far in this article and you still haven’t played it, by all means remedy that right away.

(In addition to the various Get Lamp interviews, Steve Meretzky’s interview in the book Game Design Theory and Practice was very valuable in writing this article.)

Footnotes

1 “As seen on Tri-D!”
2 Easily mistaken for an empty plastic baggie.
3 They turn opaque when danger is at hand to avoid upsetting your delicate sensibilities. The ones in the game package are, naturally, made of black construction paper.
4 These were manufactured in huge quantities and given away for some time at trade shows and the like as well as being inserted into game boxes.
5 Or whatever it’s supposed to be.
6 Ford Prefect’s name, by the way, is one of the subtler jokes in Hitchhiker’s, and one that was entirely lost on American readers. The Ford Prefect, you see, was once a model of automobile in Britain. When the Betelgeusian Ford Prefect chose the name as “nicely inconspicuous,” he did so because he had, as Adams himself later clarified, “mistaken the dominant life form” on the planet.
7 Or not.
8 The original Hitchhiker’s radio serial mentions Vogon poetry as the third worst in the universe. The second is that of the Azgoths of Kria, while the first is that of Paul Neil Milne Johnstone of Earth. Rather astoundingly, Johnstone is actually a real person, a bunk mate of Adams’s back at Brentwood School who would keep him awake nights “scratching this awful poetry about swans and stuff.” Now, it was kind of horrible of Adams to call him out like that (and probably kind of horrible for me to tell this story now), but it just keeps getting better. Poor Johnstone, who was apparently an earnest poet into adult life but not endowed with much humor not of the unintentional stripe, wrote a letter to Time Out magazine that’s as funny as just about anything in Hitchhiker’s:

“Unfortunate that Douglas Adams should choose to reopen a minor incident; that it remains of such consequence to him indicates a certain envy, if not paranoia. Manifest that Adams is being base-minded and mean-spirited, but it is surely unnecessary for Steve Grant [a journalist to whom Adams had told the story] to act as a servile conduit for this pettiness.”

With Johnstone’s lawyers beginning to circle, Paul Neil Milne Johnstone became Paula Nancy Millstone Jennings in the book and later adaptations.

9 It’s fairly persnickety here; you can only “CONSULT GUIDE ABOUT” things.
10 Indeed, it seems to go relatively unremarked just how much text in the game is lifted directly from the novel, another artifact perhaps of the sheer difficulty of getting original prose out of Adams.
11 Although hopefully not before collecting the essential atomic vector plotter.
12 I’ve always found Zaphod a hilarious character because he was such a walking, talking anachronism even in the early 1980s. He’s just so obviously a creature of the 1970s, from his hippy-dippy diction to his easygoing, lackadaisically stoned take on existence. He’d fit right in in Dazed and Confused.
13 Don’t ask.
14 Really don’t ask.
15 Like (hopefully) the ones I’ve included in this article in homage. Or maybe this is my bid for literary greatness via my own version of Pale Fire.
16 This would seem to belie the Guide‘s description of Earth as “harmless,” and even the revised description of it as “mostly harmless.”
17 There’s a joke, or maybe an aphorism, in there somewhere. “Between a Vl’Hurg and a Vogon,” maybe?
18 Zaphod’s sequence is particularly prone to this, to the extent that I’ll offer a hint: look under the seat!
19 I’m thinking particularly of growing the plant here.
20 I’m speaking particularly of the brilliantly Adamsian “thing your aunt gave you that you don’t know what it is,” of which it’s vital to know — take this as another tip — that you can put things inside it, even though that’s never noted or implied by its description.
21 Palace was no fan of the dog-feeding puzzle in particular.
22 See 1985’s Spellbreaker, which unlike Hitchhiker’s was explicitly billed as exactly that and does a superb job at it.
23 Not to mention this post.
24 Tolkien is about the only other generally good author I can think of who has sparked as much bad writing as Adams.
 

Posted by on November 19, 2013 in Digital Antiquaria, Interactive Fiction

 


Infocom: Going It Alone

With Zork on the market and proving to be a major hit, it was time for Infocom to think about the inevitable sequel. The task of preparing it fell to Dave Lebling. At first glance, it looked straightforward enough. He needed only to take the half of the original PDP-10 Zork that had not made it into the PC version, label it Zork II, and be done with it. In actuality, however, it was a little more complicated. The new game would at a minimum require some restructuring. For example, the goal of the PDP-10 Zork, like the PC version, was to deliver a collection of treasures to the white house outside of which the player started the game. Yet in Zork II said house would not exist. Perhaps motivated at first largely by necessity, Lebling began to tinker with the original design. Soon, inspired by the new ZIL technology Infocom had developed to let them port Zork to the PC, technology that was actually more flexible, more powerful, and simpler to work with than the MDL behind the original Zork, Lebling began to dramatically reshape the design, interspersing elements from the original with new areas, puzzles, and characters. In the end, he would use only about half of the leftover PDP-10 material, which in turn would make up about half of Zork II; the other half would be new. Lebling thus became the first implementer to consciously craft an Infocom game, for sale as a commercial product on PCs.

To the outside world, Infocom now began to establish the corporate personality that people would soon come to love almost as much as their games — a chummy, witty inclusiveness that made people who bought the games feel like they had just signed up for a “smart persons club.” Rather than one of the Zork creators or even one of the Infocom shareholders, the organizer and guider of the club was Mike Dornbrook, a recent MIT biology graduate who had come to Zork only in 1980, as the first and most important playtester of the PC version.

More than anyone else around Infocom, Dornbrook was a believer in Zork, convinced it was far more than an interesting hacking exercise, a way to get some money coming in en route to more serious products, or even “just” a really fun game. He saw Zork as something new under the sun, something that could in some small way change the world. He strongly encouraged Infocom to build a community around this nascent new art form. At his behest, the earliest version of Zork included the following message on a note in the artist’s studio:

Congratulations!
You are the privileged owner of a genuine ZORK Great Underground Empire (Part I), a self contained and self maintaining universe. As a legitimate owner, you have available to you both the Movement Assistance Planner (MAP) and Hierarchical Information for Novice Treasure Seekers (HINTS). For information about these and other services, send a stamped, self-addressed, business-size envelope to:

Infocom, Inc.
GUE I Maintenance Division
PO Box 120, Kendall Station
Cambridge, Mass. 02142

Joining the smart-persons club was at this stage still quite a complicated process. The aforementioned self-addressed envelope would be retrieved by Stu Galley, who dutifully visited the post office each day. He then sent back a sheet offering a map for purchase, as well as the ultimate personalized hint service; for a couple of dollars a pop, Infocom would personally answer queries.

The map was adapted from Lebling’s original by Dave Ardito, an artist friend of Galley’s who embellished the lines and boxes with some appropriately adventurous visual flourishes. Dornbrook, who had some experience with printing, used his MIT alumni status to print the maps in the middle of the night on a big printing press that normally produced posters and flyers for upcoming campus events. He enlisted his roommate, Steve Meretzky, to help him.

Meretzky was also an MIT alum, having graduated in 1979 with a degree in construction management. He may have gone to the most important computer-science university in the world, but Meretzky wanted no part of that world. He “despised” computers and hackers. In Get Lamp‘s Infocom feature, Dornbrook described Meretzky’s introduction to Zork. Dornbrook was testing the game, and had borrowed a TRS-80 and brought it home to their apartment, where he set it up on the kitchen table.

He [Meretzky] came in the back door and saw the computer and said, “Away!” as only Steve could. I started telling him, “Steve, you’re going to love this!” I was trying to explain to him how to start the game up, and he puts his hands over his ears and starts screaming so he can’t hear me.

But apparently he heard enough. Over the course of the next several weeks, I started noticing when I’d come home and was about to start testing again that the keyboard might have moved half an inch or my notes had moved slightly. I realized Steve was playing the game but wasn’t willing to admit it. One night he finally broke down and said, “Alright! Alright! I need a hint!” And that was the beginning of the end for Steve.

Meretzky soon signed up as a tester, and also joined Dornbrook in his other Infocom-related projects.

There’s a great interview amongst the Get Lamp extras with David Shaw, an MIT student who wrote for the campus newspaper, whose offices were just above the press Dornbrook and Meretzky were surreptitiously borrowing. Shaw was confused by the fact that the press “always seemed to be running,” even when there were no new campus events to promote: “There were always the same two or three guys down there. They were printing something out that clearly wasn’t a movie poster, but they were also being very cagey about what it was they were printing.” One day Shaw found Dornbrook and Meretzky’s apparent “discard pile” of Zork maps and realized at last what was going on.

While the maps were a team effort, hints fell entirely to Dornbrook. He hand-wrote replies on ordinary paper. After a time he found it to be quite a profitable, if occasionally tedious, endeavor. Because most of the queries were variations on the same handful of questions, crafting personal answers didn’t take as much time as one might expect. (See the Infocom section of the Gallery of Undiscovered Entities for scans of the original maps and, even better, a couple of Dornbrook’s handwritten replies to hint requests.)

Then Dornbrook was accepted into an MBA program at the University of Chicago, scheduled to begin in the fall, meaning of course that he would have to leave Boston and give up day-to-day contact with the Infocom folks. No one else felt equipped to replace Dornbrook, who had by this point become, in reality if not in title, Infocom’s head of public relations. Dornbrook, concerned about what would happen to “his” loyal customers, tried to convince President Joel Berez to hire a replacement. Impossible, Berez replied; the company just didn’t yet have the resources to devote someone to nothing but customer relations. So Dornbrook pitched another idea. He would form a new company, the Zork Users Group, to sell hints, maps, memorabilia, and even Infocom games themselves at a slight discount to eager players who joined his new club, which he would run out of Chicago between classes. Infocom in turn would be relieved of this burden. They could simply refer hint requests to Dornbrook, and worry only about making more and better games. Berez agreed, and ZUG was officially born in October of 1981. It would peak at over 20,000 members — but more about that in future posts.

Through much of 1981, Infocom assumed that Personal Software, publisher of the first Zork, would also publish Zork II. After all, Zork was a substantial hit. And indeed, PS responded positively when Infocom first talked with them about Zork II in April. The two companies went so far as to sign a contract that June. But just a few months later PS suddenly pulled the deal. Further, they also announced that they would be dropping the first Zork as well. What happened? wondered Infocom.

What had happened, of course, was VisiCalc. Dan Fylstra, founder of PS, had nurtured Dan Bricklin and Bob Frankston’s creation from its very early days, donating an Apple II to the pair to help them develop their idea. Once released in October of 1979, VisiCalc transformed the microcomputer industry — and transformed its publisher. PS, formerly a publisher of games and hobbyist programs, was suddenly “the VisiCalc publisher,” one of the hottest up-and-coming companies in the country. As big as Zork was, it didn’t amount to much in comparison to VisiCalc. By 1981 games and hobbyist software made up less than 10 percent of PS’s revenue. Small wonder that Infocom often felt like their game was something of an afterthought for PS. Now the IBM PC was on the horizon, and PS found itself being courted even by the likes of Big Blue themselves, who needed for VisiCalc to be available on their new computer. Just as Microsoft was also doing at this time, PS began to reshape themselves, leaving behind their hacker and hobbyist roots to focus on the exploding market for VisiCalc and other business software. They began doing in-house development for the first time, rolling out a whole line of programs to capitalize on the VisiCalc name: VisiDex, VisiPlot, VisiTrend, VisiTerm, VisiFile. The following year PS would complete their Visification by renaming themselves VisiCorp, en route to disappearing up their own VisiBum in one of the more spectacular flameouts in software history.

In this new paradigm Zork was not just unnecessary but potentially dangerous. Games were anathema to the new army of pinstriped business customers suddenly buying PCs. Companies like PS, who wished to serve them and be taken seriously despite their own questionable hacker origins, thus began to give anything potentially entertaining a wide berth. The games line would have to go, victim of the same paranoia that kept Infocom’s own Al Vezza up at night.

This rejection left Infocom at a crossroads. It wasn’t, mind you, a disaster; there would doubtlessly be plenty of other publishers eager to sign them now that they had a hit game under their belt. Yet they weren’t sure that was the direction they wanted to go. While there was a certain prestige in being published by the biggest software publisher in the world, they had never really been satisfied with PS. They had always felt like a low priority. The awful Zork “barbarian” packaging PS had come up with made one wonder if anyone at PS had actually bothered to play the game, and promotion efforts had felt cursory and disinterested. Certainly PS had never shown the slightest interest in helping Infocom and Dornbrook to build a loyal customer base. If they wanted to build Infocom as a brand, as the best text adventures in the business, why should they have another company’s logo on their boxes?

But of course becoming a publisher would require Infocom to become a “real” company rather than one that did business from a P.O. Box, with more people involved and real money invested. In a choice between keeping Infocom a profitable little sideline or, well, going for it, the Infocom founders chose the latter.

Several of them secured a substantial loan to bankroll the transition. They also secured a fellow named Mort Rosenthal as marketing manager. He lasted less than a year with Infocom, getting himself fired when he overstepped his authority to offer Infocom’s games to Radio Shack at a steep discount that would get them into every single store. Before that, however, he worked wonders, and not just in marketing. A natural wheeler and dealer, he in Stu Galley’s words secured “a time-shared production plant in Randolph, an ad agency in Watertown, an order-taking service in New Jersey, a supplier of disks in California, and so on,” all in a matter of weeks. He also found them their first tiny office above Boston’s historic Faneuil Hall Marketplace. The first two salaried employees to come to work there became Berez, the company’s most prominent business mind, and Marc Blank, the architect of the Z-Machine who had already more than a year before set aside his medical internship and moved back to Boston to take a flyer on the venture.

Showing an instinct for public perception that’s surprising to find in a bunch of hackers, Infocom made one last deal with PS — to buy back PS’s remaining copies of Zork and prevent them from dumping the games onto the market at a discount, thus devaluing the Zork brand. They needed to have Zork II out in time for Christmas, and so worked frantically with the advertising agency Rosenthal had found to craft a whole new look for the series. The motif they came up with was much more appropriate and classy than the old PS barbarian. In fact, it remains the established “look” of Zork to this day.

Ironically for a company whose games were all text, Infocom’s level of visual refinement set them apart, not least in the classic logo that debuted at this time and would remain a fixture for the rest of the company’s life. But speaking of text: in Zork II‘s advertising and packaging we can already see the rhetorical voice that Infocom fans would come to know, a seemingly casual, humorous vibe that nevertheless reflected an immense amount of care — this at a time when most game publishers still seemed to consider even basic grammar of little concern. In comparison to everybody else, Infocom just seemed a little bit classier, a little bit smarter, a little bit more adult. It’s an image that would serve them well.

Next time we’ll accept the invitation above and dive into Zork II itself, which did indeed make it out just in time for Christmas.

 


The Birth of Infocom

As the Dynamic Modeling Group put the final touches on Zork and put it to bed at last, it was beginning to feel like the end of an era at MIT. Marc Blank was about to graduate medical school and begin his residency in Pittsburgh, which would make extensive MIT hacking impossible even given his seemingly superhuman capacities. Others were finishing their own degree programs at MIT, or just running out of justifications for forestalling “real” careers with real salaries by hanging around their alma mater. In fact, a generational exodus was beginning, not just from the DMG but from MIT’s Laboratory for Computer Science and AI Lab in general as well. Pressures from the outside world were intruding on the hacker utopia inside MIT at last, pressures which in the next few years would change it forever. Much of the change stemmed from the invention of the microcomputer.

Most in established institutional hacking environments like MIT were initially unimpressed by what’s come to be called the PC revolution. That’s not so surprising, really. Those early microcomputers were absurdly limited machines. The homebrew hackers who bought (and often built) them were just excited to have unfettered access to something that, however minimally, met the definition of “computer.” Those privileged to find a place at an institution like MIT, however, not only had unfettered or nearly unfettered access to the systems there, but said systems were powerful enough to really do something. What charms did an Altair or even TRS-80 have to compare with sophisticated operating systems like TOPS-10 or TOPS-20 or ITS, with well-structured programming languages like LISP and MDL, with research into AI and natural-language processing, even with networked games like Maze and Trivia and, yes, Zork? The microcomputer world looked like a hopelessly uncultured and untutored one, bereft of a whole hacking tradition stretching back two decades or more. How could anyone try to build complex software using BASIC? When many institutional hackers deigned to notice the new machines at all, it was with withering contempt; Stu Galley called “We hate micros!” the unofficial motto of the DMG. They regarded the micros as little more than toys — the very same reaction as most of the general population.

By the spring of 1979, though, it was becoming increasingly clear to anyone willing to look that the little machines had their uses. WordStar, the first really usable microcomputer word processor, had been out for a year, and was moving more and more CP/M-based machines into offices and even writer’s studies. At the West Coast Computer Faire that May, Dan Bricklin demonstrated for the first time VisiCalc, the world’s first spreadsheet program, which would revolutionize accounting and business-planning practice. “How did you ever do without it?” asked the first pre-release advertisement, hyperbolically but, as it turned out, presciently; a few years later millions would be asking themselves just that question. Unlike WordStar and even Scott Adams’s Adventureland, VisiCalc was not a more limited version of an institutional computing concept implemented on microcomputer hardware. It had been conceived, designed, and implemented entirely on the Apple II, the first genuinely new idea in software to be born on the microcomputer — and a sign of a burgeoning changing of the guard.

The microcomputer brought many, many more users to computers than had ever existed before. That in turn brought more private-industry investment into the field, driven by a new reality: that you could make real money at this stuff. And that knowledge brought big changes to MIT and other institutions of “pure” hacking. Most (in)famously, the AI Lab was riven that winter and spring of 1979 by a dispute between Richard Greenblatt, pretty much the dean of the traditional hacker ethic at MIT, and a more pragmatic administrator named Russell Noftsker. Along with a small team of other hackers and hardware engineers, Greenblatt had developed a small single-user computer — a sort of boutique micro, the first of what would come to be called “workstations” — optimized for running LISP. Believing the design to have real commercial potential, Noftsker approached Greenblatt with a proposal to form a company and manufacture it. Greenblatt initially agreed, but soon proved (at least in Noftsker’s view) unwilling to sacrifice even the most minute hacker principle in the face of business realities. The two split in an ugly way, with Noftsker taking much of the AI Lab with him to implement Greenblatt’s original concept as Symbolics, Inc. Feeling disillusioned and betrayed, Greenblatt eventually left as well to form his own, less successful company, Lisp Machines.

It’s not as if no one had ever founded a company out of MIT before, nor that commerce had never mixed with the idealism of the hackers there. The founders of DEC itself, Ken Olsen and Harlan Anderson, were MIT alumni who had done the basic design for what became DEC’s first machine, the PDP-1, as students there in the mid-1950s. Thereafter, MIT always maintained a cozy relationship with DEC, testing hardware and, most significantly, developing much essential software for the company’s machines — a relationship that was either, depending on how you look at it, a goldmine for the hackers in giving them perpetual access to the latest technology or a brilliant scheme by DEC for utilizing some of the best computing minds of their generation without paying them a dime. Still, what was happening at MIT in 1979 felt qualitatively different. These hackers were almost all software programmers, after all, and the microcomputer market was demonstrating that it was now possible to sell software on its own as prepackaged works, the way you might a record or a book. As a wise man once said, “Money changes everything.” Many MIT hackers were excited by the potential lucre, as evidenced by the fact that many more chose to follow Noftsker than the idealistic Greenblatt out of the university. Only a handful, such as Marvin Minsky and the ever-stubborn Richard Stallman, remained behind and continued to hew relentlessly to the old hacker ethic.

Infocom’s founders were not among the diehards. As shown by their willingness to add (gasp!) security to ITS to protect their Zork source, something that would have drawn howls of protest from Stallman on at least two different levels, their devotion to the hacker ethic of total sharing and transparency was negotiable at best. In fact, Al Vezza and the DMG had been mulling over commercial applications for the group’s creations as far back as 1976. As the 1979 spring semester wrapped up, however, it seemed clear that if this version of the DMG, about to be scattered to the proverbial winds as it was, wanted to do something commercially, the time to get started was now. And quite a lot of others at MIT were doing the same thing, weren’t they? It wouldn’t do to be left behind in an empty lab, as quite literally happened to poor old Richard Stallman. That’s how Al Vezza saw the situation, anyway, and his charges, eager to remain connected and not averse to increasing their modest university salaries, quickly agreed.

And so Infocom was officially founded on June 22, 1979, with ten stockholders. Included were three of the four hackers who had worked on Zork: Tim Anderson, Dave Lebling, and the newly minted Dr. Marc Blank (commuting from his new medical residency in Pittsburgh). There were also five other current or former DMG hackers: Mike Broos, Scott Cutler, Stu Galley, Joel Berez, and Chris Reeve. And then there was Vezza himself and even Licklider, who agreed to join in the same sort of advisory role he had filled for the DMG back at MIT. Each person kicked in whatever funding he could afford, ranging from $400 to $2000, and received an appropriate percentage of the new company’s stock in return. Total startup funds amounted to $11,500. The name was necessarily nondescript, considering that no one knew quite what (if anything) the company would eventually do. The fractured, futuristic compound was much in vogue amongst technology companies of the time — Microsoft, CompuWare, EduWare — and Infocom just followed the trend in choosing the name “least objectionable to everyone.”

As should be clear from the above, Infocom did not exactly begin under auspicious circumstances. I’d call them a garage startup, except that they didn’t even have a garage. Infocom would exist for some months as more of a theoretical company in limbo than an actual business entity. It didn’t even get its first proper mailing address — a P.O. Box — until March of 1980. Needless to say, no one was quitting their day jobs as they met from time to time over the following months to talk about what ought to come next. In August, Mike Broos had already gotten bored with the endeavor and quit, leaving just nine partners. Everyone agreed that they needed something they could put together relatively quickly to sell and really get the company off the ground. More ambitious projects could then follow. But what could they do for that first project?

The hackers trolled through their old projects from MIT, looking for ideas. They kept coming back to the games. There was that Trivia game, but it wouldn’t be practical to store enough questions on a floppy disk to make it worthwhile. More intriguing was the Maze game. Stand-up arcades were booming at the time. If Infocom could build a version of Maze for arcades, they would have something unprecedented. Unfortunately, getting there would require a huge, expensive hardware- as well as software-engineering project. The Infocom partners were clever enough, but they were all software rather than hardware hackers, and money was in short supply. And then of course there was Zork… but there was no way to squeeze a 1 MB adventure game into a 32 K or 48 K microcomputer. Anyway, Vezza wasn’t really comfortable with getting into the games business on any terms, fearing it could tarnish the company’s brand even if only used to raise some early funds and bootstrap the startup. So there was also plenty of discussion of other, more business-like ideas also drawn from the DMG’s project history: a document-tracking system, an email system, a text-processing system.

Meanwhile, Blank was living in Pittsburgh and feeling rather unhappy at being cut off from his old hacking days at MIT. Luckily, he did have at least one old MIT connection there. Joel Berez had worked with the DMG before graduating in 1977. He had spent the last two years living in Pittsburgh and working for his family’s business (which experience perhaps influenced the others to elect him as Infocom’s President in November of 1979). Blank and Berez made a habit of getting together for Chinese food (always the hacker’s staple) and talking about the old times. These conversations kept coming back to Zork. Was it really impossible to even imagine getting the game onto a microcomputer? Soon the conversations turned from nostalgic to technical. As they began to discuss technical realities, other challenges beyond even that of sheer computing capacity presented themselves.

Even if they could somehow get Zork onto a microcomputer, which microcomputer should they choose? The TRS-80 was by far the best early seller, but the Apple II, the Cadillac of the trinity of 1977, was beginning to come on strong now, aided by the new II Plus model and VisiCalc. Next year, and the year after that… who knew? And all of these machines were hopelessly incompatible with one another, meaning that reaching multiple platforms must seemingly entail re-implementing Zork — and any future adventure games they might decide to create — from scratch on each. Blank and Berez cast about for some high-level language that might be relatively portable and acceptable for implementing a new Zork, but they didn’t find much. BASIC was, well, BASIC, and not even all that consistent from microcomputer to microcomputer. There was a promising new implementation of the more palatable Pascal for the Apple II on the horizon, but no word of a similar system on other platforms.

So, if they wanted to be able to sell their game to the whole microcomputer market rather than just a slice of it, they would need to come up with some sort of portable data design that could be made to work on many different microcomputers via an interpreter custom-coded for each model. Creating each interpreter would be a task in itself, of course, but at least a more modest one, and if Infocom should decide to do more games after Zork the labor savings would begin to become very significant indeed. In reaching this conclusion, they followed a line of reasoning already well-trod by Scott Adams and Automated Simulations.
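The scheme they were groping toward can be illustrated with a toy sketch. To be clear, the opcodes and format below are invented for illustration and bear no resemblance to the actual instruction set Infocom would go on to design; the point is only the division of labor, where the game ships as a single portable blob of data while only the small interpreter loop must be rewritten for each machine:

```python
# Toy illustration of the "portable story file + per-machine interpreter" idea.
# These opcodes are invented for this sketch; they are NOT Infocom's real format.
PUSH, ADD, PRINT_NUM, HALT = 0x01, 0x02, 0x03, 0x00

def run(program: bytes) -> list:
    """Interpret a tiny stack-based bytecode program. Only this loop would
    need reimplementing on each new microcomputer; the program itself
    would ship unchanged to every platform."""
    stack, output, pc = [], [], 0
    while True:
        op = program[pc]; pc += 1
        if op == PUSH:           # next byte is a literal operand
            stack.append(program[pc]); pc += 1
        elif op == ADD:          # pop two values, push their sum
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == PRINT_NUM:    # pop the top of stack and "print" it
            output.append(str(stack.pop()))
        elif op == HALT:
            return output

# The same byte string runs on any machine that has an interpreter.
story = bytes([PUSH, 2, PUSH, 3, ADD, PRINT_NUM, HALT])
print(run(story))  # ['5']
```

Write ten interpreters and one game, and you have ten products; write a second game, and you have twenty. That multiplicative payoff is the labor savings described above.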

But then there was still another problem: Zork currently existed only as MDL source, a language which of course had no implementation on any microcomputer. If they didn’t want to rewrite the entire game from scratch — and wasn’t the point of this whole exercise to come up with a product relatively quickly and easily? — they would have to find a way to make that code run on microcomputers.

They had, then, quite a collection of problems. We’ll talk about how they solved every one of them — and pretty brilliantly at that — next time.


The Roots of Infocom

In November of 1980 Personal Software began running the advertisement above in computer magazines, plugging a new game available then on the TRS-80 and a few months later on the Apple II. It’s not exactly a masterpiece of marketing; its garish, amateurish artwork is defensible only in being pretty typical of the era, and the text is remarkably adept at elucidating absolutely nothing that might make Zork stand out from its text-adventure peers. A jaded adventurer might be excused for turning the page on Zork‘s “mazes [that] confound your quest” and “20 treasures” needing to be returned to the “Trophy Case.” Even Scott Adams, not exactly a champion of formal experimentation, had after all seen fit to move on at least from time to time from simplistic fantasy treasure hunts, and Zork didn’t even offer the pretty pictures of On-Line Systems’s otherwise punishing-almost-to-the-point-of-unplayability early games.

In fact, though, Zork represented a major breakthrough in the text-adventure genre — or maybe I should say a whole collection of breakthroughs, from its parser that actually displayed some inkling of English usage in lieu of simplistic pattern matching to the in-game text that for the first time felt crafted by authors who actually cared about the quality of their prose and didn’t find proper grammar and spelling a needless distraction. In one of my favorite parts of Jason Scott’s Get Lamp documentary, several interviewees muse about just how truly remarkable Zork was in the computing world of 1980-81. The consensus is that it was, for a brief window of time, the most impressive single disk you could pull out to demonstrate what your new TRS-80 or Apple II was capable of.

Zork was playing in a whole different league from any other adventure game, a fact that’s not entirely surprising given its pedigree. You’d never guess it from the advertisement above, but Zork grew out of the most storied area of the most important university in computer-science history: MIT. In fact, Zork‘s pedigree is so impressive that it’s hard to know where to begin and harder to know where to end in describing it, hard to avoid getting sucked into an unending computer-science version of “Six Degrees of Kevin Bacon.” To keep things manageable I’ll try as much as I can to restrict myself to people directly involved with Zork or Infocom, the company that developed it. So, let’s begin with Joseph Carl Robnett Licklider, a fellow who admittedly had more of a tangential than direct role in Infocom’s history but who does serve as an illustration of the kind of rarified computer-science air Infocom was breathing.

Born in 1915 in St. Louis, Licklider was a psychologist by trade, but had just the sort of restless intellect that Joseph Weizenbaum would lament the (perceived) loss of in a later generation of scholars at MIT. He received a triple BA degree in physics, mathematics, and psychology from St. Louis’s Washington University at age 22, having also flirted with chemistry and fine arts along the way. He settled down a bit to concentrate on psychology for his MA and PhD, but remained consistently interested in connecting the “soft” science of psychology with the “hard” sciences and with technology. And so, when researching the psychological component of hearing, he learned more about the physical design of the human and animal auditory nervous systems than do many medical specialists. (He once described it as “the product of a superb architect and a sloppy workman.”) During World War II, research into the effects of high altitude on bomber crews led him to get equally involved with the radio technology they used to communicate with one another and with other airplanes.

After stints at various universities, Licklider came to MIT in 1950, initially to continue his researches into acoustics and hearing. The following year, however, the military-industrial complex came calling on MIT to help create an early-warning network for the Soviet bombers they envisioned dropping down on America from over the Arctic Circle. Licklider joined the resulting affiliated institution, Lincoln Laboratory, as head of its human-engineering group, and played a role in the creation of the Semi-Automatic Ground Environment (SAGE), by far the most ambitious application of computer technology conceived up to that point and, for that matter, for many years afterward. Created by MIT’s Lincoln Lab with IBM and other partners, the heart of SAGE was a collection of IBM AN/FSQ-7 mainframes, physically the largest computers ever built (a record that they look likely to hold forever). The system compiled data from many radar stations to allow operators to track a theoretical incoming strike in real time. They could scramble and guide American aircraft to intercept the bombers, enjoying a bird’s eye view of the resulting battle. Later versions of SAGE even allowed them to temporarily take over control of friendly aircraft, guiding them to the interception point via a link to their autopilot systems. SAGE remained in operation from 1959 until 1983, cost more than the Manhattan Project that had opened this whole can of nuclear worms in the first place, and was responsible for huge advances in computer science, particularly in the areas of networking and interactive time-sharing. (On the other hand, considering that the nuclear-bomber threat SAGE had been designed to counter had been largely superseded by the ICBM threat by the time it went operational, its military usefulness is debatable at best.)

During the 1950s most people, including even many of the engineers and early programmers who worked on them, saw computers as essentially huge calculators. You fed in some numbers at one end and got some others out at the other, whether they be the correct trajectory settings for a piece of artillery to hit some target or other or the current balances of a million bank customers. As he watched early SAGE testers track simulated threats in real time, however, Licklider was inspired to a radical new vision of computing, in which human and computer would actively work together, interactively, to solve problems, generate ideas, perhaps just have fun. He took these ideas with him when he left the nascent SAGE project in 1953 to float around MIT in various roles, all the while drifting slowly away from traditional psychology and toward computer science. In 1957 he became a full-time computer scientist when he (temporarily, as it turned out) left MIT for the consulting firm Bolt Beranek and Newman, a company that would play a huge role in the development of computer networking and what we’ve come to know as the Internet. (Loyal readers of this blog may recall that BBN is also where Will Crowther was employed when he created the original version of Adventure as a footnote to writing the code run by the world’s first computerized network routers.)

Licklider, who insisted that everyone, even his undergraduate students, just call him “Lick,” was as smart as he was unpretentious. Speaking in a soft Missouri drawl that could obscure the genius of some of his ideas, he never seemed to think about personal credit or careerism, and possessed not an ounce of guile. When a more personally ambitious colleague stole one of his ideas, Lick would just shrug it off, saying, “It doesn’t matter who gets the credit; it matters that it gets done.” Everyone loved the guy. Much of his work may have been funded by the realpolitik of the military-industrial complex, but Lick was by temperament an idealist. He became convinced that computers could mold a better, more just society. In it, humans would be free to create and to explore their own potential in partnership with the computer, which would take on all the drudgery and rote work. In a surprising prefiguring of the World Wide Web, he imagined a world of “home computer consoles” connected to a larger network that would bring the world into the home — interactively, unlike the passive, corporate-controlled medium of television. He spelled out all of these ideas carefully in a 1960 paper, “Man-Computer Symbiosis,” staking his claim as one of a long line of computing utopianists that would play a big role in the development of more common-man friendly technologies like the BASIC programming language and eventually of the microcomputer itself.

In 1958, the U.S. government formed the Advanced Research Projects Agency in response to alleged Soviet scientific and technological superiority in the wake of their launch of Sputnik, the world’s first satellite, the previous year. ARPA was intended as something of a “blue-sky” endeavor, pulling together scientists and engineers to research ideas and technology that might not be immediately applicable to ongoing military programs, but that might just prove to be in the future. It became Lick’s next stop after BBN: in 1962 he took over as head of their “Information Processing Techniques Office.” He remained at ARPA for just two years, but is credited by many with shifting the agency’s thinking dramatically. Previously ARPA had focused on monolithic mainframes operating as giant batch-processing “answer machines.” From Where Wizards Stay Up Late:

The computer would be fed intelligence information from a variety of human sources, such as hearsay from cocktail parties or observations of a May Day parade, and try to develop a best-guess scenario on what the Soviets might be up to. “The idea was that you take this powerful computer and feed it all this qualitative information, such as ‘The air force chief drank two martinis,’ or ‘Khrushchev isn’t reading Pravda on Mondays,’” recalled Ruina. “And the computer would play Sherlock Holmes and conclude that the Russians must be building an MX-72 missile or something like that.”

“Asinine kinds of things” like this were the thrust of much thinking about computers in those days, including plenty in prestigious universities such as MIT. Lick, however, shifted ARPA in a more manageable and achievable direction, toward networks of computers running interactive applications in partnership with humans — leave the facts and figures to the computer, and leave the conclusions and the decision-making to the humans. This shift led to the creation of the ARPANET later in the decade. And the ARPANET, as everyone knows by now, eventually turned into the Internet. (Whatever else you can say about the Cold War, it brought about some huge advances in computing.) The humanistic vision of computing that Lick championed, meanwhile, remains viable and compelling today as we continue to wait for the strong AI proponents to produce a HAL.

Lick returned to MIT in 1968, this time as the director of the legendary Project MAC. Formed in 1963 to conduct research for ARPA, MAC stood for either (depending on whom you talked to) Multiple Access Computing or Machine Aided Cognition. Those two names also define the focus of its early research: into time-shared systems that let multiple users share resources and use interactive programs on a single machine; and into artificial intelligence, under the guidance of the two most famous AI proponents of all, John McCarthy (inventor of the term itself) and Marvin Minsky. I could write a few (dozen?) more posts on the careers and ideas of these men, fascinating, problematic, and sometimes disturbing as they are. I could say the same about many other early computing luminaries at MIT with whom Lick came into close contact, such as Ivan Sutherland, inventor of the first paint program and, well, pretty much the whole field of computer-graphics research as well as the successor to his position at ARPA. Instead, I’ll just point you (yet again) to Steven Levy’s Hackers for an accessible if necessarily incomplete description of the intellectual ferment at 1960s MIT, and to Where Wizards Stay Up Late by Matthew Lyon and Katie Hafner for more on Lick’s early career as well as BBN, MIT, and our old friend Will Crowther.

Project MAC split into two in 1970, becoming the MIT AI Laboratory and the Laboratory for Computer Science (LCS). Lick stayed with the latter as a sort of grandfather figure to a new generation of young hackers that gradually replaced the old guard described in Levy’s book as the 1970s wore on. His was a shrewd mind always ready to take up their ideas, and one who, thanks to his network of connections in the government and industry, could always get funding for said ideas.

LCS consisted of a number of smaller working groups, one of which was known as the Dynamic Modeling Group. It’s oddly difficult to pin any of these groups down to a single purpose. Indeed, it’s not really possible to do so even for the AI Lab and LCS themselves; plenty of research that could be considered AI work happened at LCS, and plenty that did not comfortably fit under that umbrella took place at the AI Lab. (For instance, Richard Stallman developed the ultimate hacker text editor, EMACS, at the AI Lab — a worthy project certainly but hardly one that had much to do with artificial intelligence.) Groups and the individuals within them were given tremendous freedom to hack on any justifiable projects that interested them (with the un-justifiable of course being left for after hours), a big factor in LCS and the AI Lab’s becoming such beloved homes for hackers. Indeed, many put off graduating or ultimately didn’t bother at all, so intellectually fertile was the atmosphere inside MIT in contrast to what they might find in any “proper” career track in private industry.

The director of the Dynamic Modeling Group was a fellow named Albert (Al) Vezza; he also served as an assistant director of LCS as a whole. And here we have to be a little bit careful. If you know something about Infocom’s history already, you probably recognize Vezza as the uptight corporate heavy of the story, the guy who couldn’t see the magic in the new medium of interactive fiction that the company was pursuing, who insisted on trivializing the game division’s work as a mere source of funding for a “serious” business application, and who eventually drove the company to ruin with his misplaced priorities. Certainly there’s no apparent love lost between the other Infocom alumni and Vezza. An interview with Mike Dornbrook for an MIT student project researching Infocom’s history revealed the following picture of Vezza at MIT:

Where Licklider was charismatic and affectionately called “Lick” by his students, Vezza rarely spoke to LCS members and often made a beeline from the elevator to his office in the morning, shut the door, and never saw anyone. Some people at LCS were unhappy with his managerial style, saying that he was unfriendly and “never talked to people unless he had to, even people who worked in the Lab.”

On the other hand, Lyon and Hafner have this to say:

Vezza always made a good impression. He was sociable and impeccably articulate; he had a keen scientific mind and first-rate administrative instincts.

Whatever his failings, Vezza was much more than an unimaginative empty suit. He in fact had a long and distinguished career which he largely spent furthering some of the ideas first proposed by Lick himself; he appears in Lyon and Hafner’s book, for instance, because he was instrumental in organizing the first public demonstration of the nascent ARPANET’s capabilities. Even after the Infocom years, his was an important voice on the World Wide Web Consortium that defined many of the standards that still guide the Internet today. Certainly it’s a disservice to Vezza that his Wikipedia page consists entirely of his rather inglorious tenure at Infocom, a time he probably considers little more than a disagreeable career footnote. That footnote is of course the main thing we’re interested in, but perhaps we can settle for now on a picture of a man with more of the administrator or bureaucrat than the hacker in him and who was more of a pragmatist than an idealist — and one who had some trouble relating to his charges as a consequence.

Many of those charges had names that Infocom fans would come to know well: Dave Lebling, Marc Blank, Stu Galley, Joel Berez, Tim Anderson, etc., etc. Like Lick, many of these folks came to hacking from unexpected places. Lebling, for instance, obtained a degree in political science before getting sucked into LCS, while Blank commuted back and forth between Boston and New York, where he somehow managed to complete medical school even as he hacked like mad at MIT. One thing, however, most certainly held true of everyone: they were good. LCS didn’t suffer fools gladly — or at all.

One of the first projects of the DMG was to create a new programming language for their own projects, which they named with typical hacker cheekiness “Muddle.” Muddle soon became MDL (MIT Design Language) in response to someone (Vezza?) not so enamoured with the DMG’s humor. It was essentially an improved version of an older programming language developed at MIT by John McCarthy, one which was (and remains to this day) the favorite of AI researchers: LISP.

With MDL on hand, the DMG took on a variety of projects, individually or cooperatively. Some of these had real military applications to satisfy the folks who were ultimately funding all of these shenanigans; Lebling, for instance, spent quite some time on computerized Morse-Code recognition systems. But there were plenty of games, too, in some of which Lebling was also a participant, including the best remembered of them all, Maze. Maze ran over a network, with up to 8 Imlac PDS-1s, very simple minicomputers with primitive graphical capabilities, serving as “clients” connected to a single DEC PDP-10 “server.” Players on the PDS-1s could navigate around a shared environment and shoot at each other — the ancestor of modern games like Counter-Strike. Maze became a huge hit, and a real problem for administrative types like Vezza; not only did a full 8-player game stretch the PDP-10 server to the limit, but it had a tendency to eventually crash entirely this machine that others needed for “real” work. Vezza demanded again and again that it be removed from the systems, but trying to herd the cats at DMG was pretty much a lost cause. Amongst other “fun” projects, Lebling also created a trivia game which allowed users on the ARPANET to submit new questions, leading to an eventual database of thousands.

And then, in the spring of 1977, Adventure arrived at MIT. Like computer-science departments all over the country, work there essentially came to a standstill while everyone tried to solve it; the folks at DMG finally got the “last lousy point” with the aid of a debugging tool. And with that accomplished, they began, like many other hackers in many other places, to think about how they could make a better Adventure. DMG, however, had some tools to hand that would make them almost uniquely suited to the task.


The Rise of POMG, Part 1: It Takes a Village…

No one on their deathbed ever said, “I wish I had spent more time alone with my computer!”

— Dani Bunten Berry

If you ever want to feel old, just talk to the younger generation.

A few years ago now, I met the kids of a good friend of mine for the very first time: four boys between the ages of four and twelve, all more or less crazy about videogames. As someone who spends a lot of his time and earns a lot of his income writing about games, I arrived at their house with high expectations attached.

Alas, I’m afraid I proved a bit of a disappointment to them. The distance between the musty old games that I knew and the shiny modern ones that they played was just too far to bridge; shared frames of reference were tough to come up with. This was more or less what I had anticipated, given how painfully limited I already knew my knowledge of modern gaming to be. But one thing did genuinely surprise me: it was tough for these youngsters to wrap their heads around the very notion of a game that you played to completion by yourself and then put on the shelf, much as you might a book. The games they knew, from Roblox to Fortnite, were all social affairs that you played online with friends or strangers, that ended only when you got sick of them or your peer group moved on to something else. Games that you played alone, without at the very least leaderboards and achievements on hand to measure yourself against others, were utterly alien to them. It was quite a reality check for me.

So, I immediately started to wonder how we had gotten to this point — a point not necessarily better or worse than the sort of gaming that I knew growing up and am still most comfortable with, just very different. This series of articles should serve as the beginning of an answer to that complicated question. Their primary focus is not so much how computer games went multiplayer, nor even how they first went online; those things are in some ways the easy, obvious parts of the equation. It’s rather how games did those things persistently — i.e., permanently, so that each session became part of a larger meta-game, if you will, embedded in a virtual community. Or perhaps the virtual community is embedded in the game. It all depends on how you look at it, and which precise game you happen to be talking about. Whichever way, it has left folks like me, whose natural tendency is still to read games like books with distinct beginnings, middles, and ends, anachronistic iconoclasts in the eyes of the youthful mainstream.

Which, I hasten to add, is perfectly okay; I’ve always found the ditch more fun than the middle of the road anyway. Still, sometimes it’s good to know how the other 90 percent lives, especially if you claim to be a gaming historian…



“Persistent online multiplayer gaming” (POMG, shall we say?) is a mouthful to be sure, but it will have to do for lack of a better descriptor of the phenomenon that has created such a divide between myself and my friend’s children. It’s actually older than you might expect, having first come to be in the 1970s on PLATO, a non-profit computer network run out of the University of Illinois but encompassing several other American educational institutions as well. Much has been written about this pioneering network, which uncannily presaged in so many of its particulars what the Internet would become for the world writ large two decades later. (I recommend Brian Dear’s The Friendly Orange Glow for a book-length treatment.) It should suffice for our purposes today to say that PLATO became host to, among other online communities of interest, an extraordinarily vibrant gaming culture. Thanks to the fact that PLATO games lived on a multi-user network rather than standalone single-user personal computers, they could do stuff that most gamers who were not lucky enough to be affiliated with a PLATO-connected university would have to wait many more years to experience.

The first recognizable single-player CRPGs were born on PLATO in the mid-1970s, inspired by the revolutionary new tabletop game known as Dungeons & Dragons. They were followed by the first multiplayer ones in amazingly short order. Already in 1975’s Moria,[1]The PLATO Moria was a completely different game from the 1983 single-player roguelike that bore the same name. players met up with their peers online to chat, brag, and sell or trade loot to one another. When they were ready to venture forth to kill monsters, they could do so in groups of up to ten, pooling their resources and sharing the rewards. A slightly later PLATO game called Oubliette implemented the same basic concept in an even more sophisticated way. The degree of persistence of these games was limited by a lack of storage capacity — the only data that was saved between sessions were the statistics and inventory of each player’s character, with the rest of the environment being generated randomly each time out — but they were miles ahead of anything available for the early personal computers that were beginning to appear at the same time. Indeed, Wizardry, the game that cemented the CRPG’s status as a staple genre on personal computers in 1981, was in many ways simply a scaled-down version of Oubliette, with the multiplayer party replaced by a party of characters that were all controlled by the same player.
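This limited form of persistence can be sketched in a few lines. The code below is purely illustrative (none of it derives from actual PLATO software, and the field names are invented); it shows the essential trade-off, where a small per-character record is all that survives a session while the world itself is rebuilt at random every time:

```python
# Illustrative sketch of storage-limited persistence: only character stats
# and inventory are saved between sessions; the dungeon is regenerated
# randomly on every login. All names here are invented for the example.
import json
import random

def save_character(char: dict) -> str:
    # This small record is the ONLY data that persists between sessions.
    return json.dumps(char)

def start_session(saved: str, rng_seed=None):
    char = json.loads(saved)          # restore the persistent stats
    rng = random.Random(rng_seed)
    # Nothing about the world was stored, so it is built from scratch.
    dungeon = [{"room": i, "monster_hp": rng.randint(1, 20)}
               for i in range(10)]
    return char, dungeon

saved = save_character({"name": "Chester", "level": 7, "gold": 250})
char, dungeon = start_session(saved)
print(char["level"], len(dungeon))  # 7 10
```

A fully persistent world like the later Island of Kesmai would instead have to write the state of every room and monster back to storage as well, which is exactly what the era’s disk capacities made impractical.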

Chester Bolingbroke, better known online as The CRPG Addict, plays Moria. Note the “Group Members” field at bottom right. Chester is alone here, but he could be adventuring with up to nine others.

A more comprehensive sort of persistence arrived with the first Multi-User Dungeon (MUD), developed by Roy Trubshaw and Richard Bartle, two students at the University of Essex in Britain, and first deployed there in a nascent form in late 1978 or 1979. A MUD borrowed the text-only interface and presentation of Will Crowther and Don Woods’s seminal game of Adventure, but the world it presented was a shared, fully persistent one between its periodic resets to a virgin state, chockablock with other real humans to interact with and perhaps fight. “The Land,” as Bartle dubbed his game’s environs, expanded to more than 600 rooms by the early 1980s, even as its ideas and a good portion of its code were used to set up other, similar environments at many more universities.

In the meanwhile, the first commercial online services were starting up in the United States. By 1984, you could, for the price of a substantial hourly fee, dial into the big mainframes of services like CompuServe using your home computer. Once logged in there, you could socialize, shop, bank, make travel reservations, read newspapers, and do much else that most people wouldn’t begin to do online until more than a decade later — including gaming. For example, CompuServe offered MegaWars, a persistent grand-strategy game of galactic conquest whose campaigns took groups of up to 100 players four to six weeks to complete. (Woe betide the ones who couldn’t log in for some reason of an evening in the midst of that marathon!) You could also find various MUDs, as well as Island of Kesmai, a multiplayer CRPG boasting most of the same features as PLATO’s Oubliette in a genuinely persistent world rather than a perpetually regenerated one. CompuServe’s competitor GEnie had Air Warrior, a multiplayer flight simulator with bitmapped 3D graphics and sound effects to rival any of the contemporaneous single-player simulators on personal computers. For the price of $11 per hour, you could participate in grand Air Warrior campaigns that lasted three weeks each and involved hundreds of other subscribers, organizing and flying bombing raids and defending against the enemy’s attacks on their own lines. In 1991, America Online put up Neverwinter Nights,[2]Not the same game as the 2002 Bioware CRPG of the same name. which did for the “Gold Box” line of licensed Dungeons & Dragons CRPGs what MUD had done for Adventure and Air Warrior had done for flight simulators, transporting the single-player game into a persistent multiplayer space.

All of this stuff was more or less incredible in the context of the times. At the same time, though, we mustn’t forget that it was strictly the purview of a privileged elite, made up of those with login credentials for institutional-computing networks or money in their pockets to pay fairly exorbitant hourly fees to feed their gaming habits. So, I’d like to back up now and tell a different story of POMG — one with more of a populist thrust, focusing on what was actually attainable by the majority of people out there, the ones who neither had access to a university’s mainframe nor could afford to spend hundreds of dollars per month on a hobby. Rest assured that the two narratives will meet before all is said and done.



POMG came to everyday digital gaming in the reverse order of the words that make up the acronym: first games were multiplayer, then they went online, and then these online games became persistent. Let’s try to unpack how that happened.

From the very start, many digital games were multiplayer, optionally if not unavoidably so. Spacewar!, the program generally considered the first fully developed graphical videogame, was exclusively multiplayer from its inception in the early 1960s. Ditto Pong, the game that launched Atari a decade later, and with it a slow-building popular craze for electronic games, first in public arcades and later in living rooms. Multiplayer here was not so much down to design intention as technological affordances. Pong was an elaborate analog state machine rather than a full-blown digital computer, relying on discrete resistors and potentiometers and the like to do its “thinking.” It was more than hard enough just to get a couple of paddles and a ball moving around on the screen of a gadget like this; a computerized opponent was a bridge too far.

Very quickly, however, programmable microprocessors entered the field, changing everyone’s cost-benefit analyses. Building dual controls into an arcade cabinet was expensive, and the end result tended to take up a lot of space. The designers of arcade classics like Asteroids and Galaxian soon realized that they could replace the complications of a human opponent with hordes of computer-controlled enemies, flying in rudimentary, partially randomized patterns. Bulky multiplayer machines thus became rarer and rarer in arcades, replaced by slimmer, more standardized single-player cabinets. After all, if you wanted to compete with your friends in such games, there was still a way to do so: you could each play a round against the computerized enemies and compare your scores afterward.

While all of this was taking shape, the Trinity of 1977 — the Radio Shack TRS-80, Apple II, and Commodore PET — had ushered in the personal-computing era. The games these early microcomputers played were sometimes ports or clones of popular arcade hits, but just as often they were more cerebral, conceptually ambitious affairs where reflexes didn’t play as big — or any — role: flight simulations, adventure games, war and other strategy games. The last were often designed to be played optimally or even exclusively against another human, largely for the same reason Pong had been made that way: artificial intelligence was a hard thing to implement under any circumstances on an 8-bit computer with as little as 16 K of memory, and it only got harder when you were asking said artificial intelligence to formulate a strategy for Operation Barbarossa rather than to move a tennis racket around in front of a bouncing ball. Many strategy-game designers in these early days saw multiplayer options almost as a necessary evil, a stopgap until the computer could fully replace the human player, thus alleviating that eternal problem of the war-gaming hobby on the tabletop: the difficulty of finding other people in one’s neighborhood who were able and willing to play such weighty, complex games.

At least one designer, however, saw multiplayer as a positive advantage rather than a kludge — in fact, as the way the games of the future by all rights ought to be. “When I was a kid, the only times my family spent together that weren’t totally dysfunctional were when we were playing games,” remembered Dani Bunten Berry. From the beginning of her design career in 1979, when she made an auction game called Wheeler Dealers for the Apple II,[3]Wheeler Dealers and all of her other games that are mentioned in this article were credited to Dan Bunten, the name under which she lived until 1992. multiplayer was her priority. In fact, she was willing to go to extreme lengths to make it possible; in addition to a cassette tape containing the software, Wheeler Dealers shipped with a custom-made hardware add-on, the only method she could come up with to let four players bid at once. Such experiments culminated in M.U.L.E., one of the first four games ever published by Electronic Arts, a deeply, determinedly social game of economics and, yes, auctions for Atari and Commodore personal computers that many people, myself included, still consider her unimpeachable masterpiece.

A M.U.L.E. auction in progress.

And yet it was Seven Cities of Gold, her second game for Electronic Arts, that became a big hit. Ironically, it was also the first she had ever made with no multiplayer option whatsoever. She was learning to her chagrin that games meant to be played together on a single personal computer were a hard sell; such machines were typically found in offices and bedrooms, places where people went to isolate themselves, not in living rooms or other spaces where they went to be together. She decided to try another tack, thereby injecting the “online” part of POMG into our discussion.

In 1988, Electronic Arts published Berry’s Modem Wars, a game that seems almost eerily prescient in retrospect, anticipating the ludic zeitgeist of more than a decade later with remarkable accuracy. It was a strategy game played in real time (although not quite a real-time strategy of the resource-gathering and army-building stripe that would later be invented by Dune II and popularized by Warcraft and Command & Conquer). And it was intended to be played online against another human sitting at another computer, connected to yours by the gossamer thread of a peer-to-peer modem hookup over an ordinary telephone line. Like most of Berry’s games, it didn’t sell all that well, being a little too far out in front of the state of her nation’s telecommunications infrastructure.

Nevertheless, she continued to push her agenda of computer games as ways of being entertained together rather than alone over the years that followed. She never did achieve the breakout hit she craved, but she inspired countless other designers with her passion. She died far too young in 1998, just as the world was on the cusp of embracing her vision on a scale that even she could scarcely have imagined. “It is no exaggeration to characterize her as the world’s foremost authority on multiplayer computer games,” said Brian Moriarty when he presented Dani Bunten Berry with the first ever Game Developers Conference Lifetime Achievement Award two months before her death. “Nobody has worked harder to demonstrate how technology can be used to realize one of the noblest of human endeavors: bringing people together. Historians of electronic gaming will find in these eleven boxes [representing her eleven published games] the prototypes of the defining art form of the 21st century.” Let this article and the ones that will follow it, written well into said century, serve as partial proof of the truth of his words.

Danielle Bunten Berry, 1949-1998.

For by the time Moriarty spoke them, other designers had been following the trails she had blazed for quite some time, often with much more commercial success. A good early example is Populous, Peter Molyneux’s strategy game in real time (although, again, not quite a real-time strategy) that was for most of its development cycle strictly a peer-to-peer online multiplayer game, its offline single-player mode being added only during the last few months. An even better, slightly later one is DOOM, John Carmack and John Romero’s game of first-person 3D mayhem, whose star attraction, even more so than its sadistic single-player levels, was the “deathmatch” over a local-area network. Granted, these testosterone-fueled, relentlessly zero-sum contests weren’t quite the same as what Berry was envisioning for gaming’s multiplayer future near the end of her life; she wished passionately for games with a “people orientation,” directed toward “the more mainstream, casual players who are currently coming into the PC market.” Still, as the saying goes, you have to start somewhere.

But there is once more a caveat to state here about access, or rather the lack thereof. Being built for local networks only — i.e., networks that lived entirely within a single building or at most a small complex of them — DOOM deathmatches were out of reach on a day-to-day basis for those who didn’t happen to be students or employees at institutions with well-developed data-processing departments and permissive or oblivious authority figures. Outside of those ivory towers, this was the era of the “LAN party,” when groups of gamers would all lug their computers over to someone’s house, wire them together, and go at it over the course of a day or a weekend. These occasions went on to become treasured memories for many of their participants, but they achieved that status precisely because they were so sporadic and therefore special.

And yet DOOM‘s rise corresponded with the transformation of the Internet from an esoteric tool for the technological elite to the most flexible medium of communication ever placed at the disposal of the great unwashed, thanks to a little invention out of Switzerland called the World Wide Web. What if there was a way to move DOOM and other games like it from a local network onto this one, the mother of all wide-area networks? Instead of deathmatching only with your buddy in the next cubicle, you would be able to play against somebody on another continent if you liked. Now wouldn’t that be cool?

The problem was that local-area networks ran over a protocol known as IPX, while the Internet ran on a completely different one called TCP/IP. Whoever could bridge that gap in a reasonably reliable, user-friendly way stood to become a hero to gamers all over the world.
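The general technique for bridging that gap is tunneling: capture each IPX frame the game emits, wrap it inside a TCP/IP-family datagram, send it across the Internet, and unwrap it on the far side so the game there sees what looks like local-network traffic. The sketch below illustrates the idea in Python using UDP over the loopback interface; the header layout and `MAGIC` marker are invented for illustration and have nothing to do with the actual wire formats of TCPSetup or Kali:

```python
import socket
import struct

MAGIC = b"TUNL"  # hypothetical marker identifying tunneled frames

def wrap_ipx(ipx_frame: bytes) -> bytes:
    """Encapsulate a raw IPX frame as a UDP-ready payload."""
    return MAGIC + struct.pack("!H", len(ipx_frame)) + ipx_frame

def unwrap_ipx(datagram: bytes) -> bytes:
    """Recover the original IPX frame from a tunneled UDP payload."""
    assert datagram[:4] == MAGIC, "not a tunneled frame"
    (length,) = struct.unpack("!H", datagram[4:6])
    return datagram[6:6 + length]

# Demo: shuttle a fake IPX frame across a UDP socket on the loopback interface.
recv_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv_sock.bind(("127.0.0.1", 0))          # stands in for the remote player
send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

fake_ipx_frame = b"\x00\x01player-position-update"
send_sock.sendto(wrap_ipx(fake_ipx_frame), recv_sock.getsockname())
received, _ = recv_sock.recvfrom(4096)
print(unwrap_ipx(received) == fake_ipx_frame)  # True: frame survived the trip intact
```

The hard parts in practice, which this sketch glosses over, were intercepting the frames from inside MS-DOS's network stack and coping with the latency and packet loss of mid-1990s dial-up connections.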



Jay Cotton discovered DOOM in the same way as many another data-processing professional: when it brought down his network. He was employed at the University of Georgia at the time, and was assigned to figure out why the university’s network kept buckling under unprecedented amounts of spurious traffic. He tracked the cause down to DOOM, the game that half the students on campus seemed to be playing more than half the time. More specifically, the problem was caused by a bug, which was patched out of existence by John Carmack as soon as he was informed. Problem solved. But Cotton stuck around to play, the warden seduced by the inmates of the asylum.

He was soon so much better at the game than anyone else on campus that he was getting a bit bored. Looking for worthier opponents, he stumbled across a program called TCPSetup, written by one Jake Page, which was designed to translate IPX packets into TCP/IP ones and vice versa on the fly, “tricking” DOOM into communicating across the vast Internet. It was cumbersome to use and extremely unreliable, but on a good day it would let you play DOOM over the Internet for brief periods of time at least, an amazing feat by any standard. Cotton would meet other players on an Internet chat channel dedicated to the game, they’d exchange IP addresses, and then they’d have at it — or try to, depending on the whims of the Technology Gods that day.

On August 22, 1994, Cotton received an email from a fellow out of the University of Illinois — yes, PLATO’s old home — whom he’d met and played in this way (and beaten, he was always careful to add). His name was Scott Coleman. “I have some ideas for hacking TCPSetup to make it a little easier. Care to do some testing later?” Coleman wrote. “I’ve already emailed Jake [Page] on this, but he hasn’t responded (might be on vacation or something). If he approves, I’m hoping some of these ideas might make it into the next release of TCPSetup. In the meantime, I want to do some experimenting to see what’s feasible.”

Jake Page never did respond to their queries, so Cotton and Coleman just kept beavering away on their own, eventually rewriting TCPSetup entirely to create iDOOM, a more reliable and far less fiddly implementation of the same concept, with support for three- or four-player deathmatches instead of just one-on-one duels. It took off like a rocket; the pair were bombarded with feature requests, most notably to make iDOOM work with other IPX-only games as well. In January of 1995, they added support for Heretic, one of the most popular of the first wave of so-called “DOOM clones.” They changed their program’s name to “iFrag” to reflect the fact that it was now about more than just DOOM.

Having come this far, Cotton and Coleman soon made the conceptual leap that would transform their software from a useful tool into, for a time, a way of life for many, many thousands of gamers. Why not add support for more games, they asked themselves, not in a bespoke way as they had been doing to date, but in a more sustainable one, by turning their program into a general-purpose IPX-to-TCP/IP bridge, suitable for use with the dozens of other multiplayer games out there that supported only local-area networks out of the box? And why not make their tool into a community while they were at it, by adding an integrated chat service? In addition to its other functions, the program could offer a list of “servers” hosting games, which you could join at the click of a button; no more trolling for opponents elsewhere on the Internet, then laboriously exchanging IP addresses and meeting times and hoping the other guy followed through. This would be instant-gratification online gaming. It would also provide a foretaste at least of persistent online multiplayer gaming; as people won matches, they would become known commodities in the community, setting up a meta-game, a sporting culture of heroes and zeroes where folks kept track of win-loss records and where everybody clamored to hear the results when two big wheels faced off against one another.

Cotton and Coleman renamed their software for the third time in less than nine months, calling it Kali, a name suggested by Coleman’s Indian-American girlfriend (later his wife). “The Kali avatar is usually depicted with swords in her hands and a necklace of skulls from those she has killed,” says Coleman, “which seemed appropriate for a deathmatch game.” Largely at the behest of Cotton, always the more commercially-minded of the pair, they decided to make Kali shareware, just like DOOM itself: multiplayer sessions would be limited to fifteen minutes at a time until you coughed up a $20 registration fee. Cotton went through the logistics of setting up and running a business in Georgia while Coleman did most of the coding in Illinois. (Rather astonishingly, Cotton and Coleman had still never met one another face to face in 2013, when gaming historian David L. Craddock conducted an interview with them that has been an invaluable source of quotes and information for this article.)

Kali certainly wasn’t the only solution in this space; a commercial service called DWANGO had existed since December of 1994, with the direct backing of John Carmack and John Romero, whose company id Software collected 20 percent of its revenue in return for the endorsement. But DWANGO ran over old-fashioned direct-dial-up connections rather than the Internet, meaning you had to pay long-distance charges to use it if you weren’t lucky enough to live close to one of its host computers. On top of that, it charged $9 for just five hours of access per month, with the fees escalating from there. Kali, by contrast, was available to you forever for as many hours per month as you liked after you plunked down your one-time fee of $20.

So, Kali was popular right from its first release on April 26, 1995. Yet it was still an awkward piece of software for the casual user despite the duo’s best efforts, being tied to MS-DOS, whose support for TCP/IP relied on a creaky edifice of third-party tools. The arrival of Windows 95 was a godsend for Kali, as it was for computer gaming in general, making the hobby accessible in a way it had never been before. The so-called “Kali95” was available by early 1996, and things exploded from there. Kali struck countless gamers with all the force of a revelation; who would have dreamed that it could be so easy to play against another human online? Lloyd Case, for example, wrote in Computer Gaming World magazine that using Kali for the first time was “one of the most profound gaming experiences I’ve had in a long time.” Reminiscing seventeen years later, David L. Craddock described how “using Kali for the first time was like magic. Jumping into a game and playing with other people. It blew my fourteen-year-old mind.” In late 1996, the number of registered Kali users ticked past 50,000, even as quite possibly just as many or more were playing with cracked versions that bypassed the simplistic serial-number-registration process. First-person-shooter deathmatches abounded, but you could also play real-time strategies like Command & Conquer and Warcraft, or even the Links golf simulation. Computer Gaming World gave Kali a special year-end award for “Online-Enabling Technology.”

Kali for Windows 95.

Competitors were rushing in at a breakneck pace by this time, some of them far more conventionally “professional” than Kali, whose origin story was, as we’ve seen, as underground and organic as that of DOOM itself. The most prominent of the venture-capital-funded startups were MPlayer (co-founded by Brian Moriarty of Infocom and LucasArts fame, and employing Dani Bunten Berry as a consultant during the last months of her life) and the Total Entertainment Network, better known as simply TEN. In contrast to Kali’s one-time fee, they, like DWANGO before them, relied on subscription billing: $20 per month for MPlayer, $15 per month for TEN. Despite slick advertising and countless other advantages that Kali lacked, neither would ever come close to overtaking its scruffy older rival, which had price as well as oodles of grass-roots goodwill on its side. Jay Cotton:

It was always my belief that Kali would continue to be successful as long as I never got greedy. I wanted everyone to be so happy with their purchase that they would never hesitate to recommend it to a friend. [I would] never charge more than someone would be readily willing to pay. It also became a selling point that Kali only charged a one-time fee, with free upgrades forever. People really liked this, and it prevented newcomers (TEN, Heat [a service launched in 1997 by Sega of America], MPlayer, etc.) from being able to charge enough to pay for their expensive overheads.

Kali was able to compete with TEN, MPlayer, and Heat because it already had a large established user base (more users equals more fun) and because it was much, much cheaper. These new services wanted to charge a subscription fee, but didn’t provide enough added benefit to justify the added expense.

It was a heady rush indeed, although it would also prove a short-lived one; Kali’s competitors would all be out of business within a year or so of the turn of the millennium. Kali itself stuck around after that, but as a shadow of what it had been, strictly a place for old-timers to reminisce and play the old hits. “I keep it running just out of habit,” said Jay Cotton in 2013. “I make just enough money on website ads to pay for the server.” It still exists today, presumably as a result of the same force of habit.

One half of what Kali and its peers offered was all too obviously ephemeral from the start: as the Internet went mainstream, developers inevitably began building TCP/IP support right into their games, eliminating the need for an external IPX-to-TCP/IP bridge. (For example, Quake, id Software’s much-anticipated follow-up to DOOM, did just this when it finally arrived in 1996.) But the other half of what they offered was community, which may have seemed a more durable sort of benefit. As it happened, though, one clever studio did an end-run around them here as well.



The folks at Blizzard Entertainment, the small studio and publisher that was fast coming to rival id Software for the title of the hottest name in gaming, were enthusiastic supporters of Kali in the beginning, to the point of hand-tweaking Warcraft II, their mega-hit real-time strategy, to run optimally over the service. They were rewarded by seeing it surpass even DOOM to become the most popular game there of all. But as they were polishing their new action-CRPG Diablo for release in 1996, Mike O’Brien, a Blizzard programmer, suggested that they launch their own service that would do everything Kali did in terms of community, albeit for Blizzard’s games alone. And then he additionally suggested that they make it free, gambling that knowledge of its existence would sell enough games for them at retail to offset its maintenance costs. Blizzard’s unofficial motto had long been “Let’s be awesome,” reflecting their determination to sell exactly the games that real hardcore gamers were craving, honed to a perfect finish, and to always give them that little bit extra. What better way to be awesome than by letting their customers effortlessly play and socialize online, and to do so for free?

The idea was given an extra dollop of urgency by the fact that Westwood Studios, the maker of Warcraft‘s chief competitor Command & Conquer, had introduced a service called Westwood Chat that could launch people directly into a licensed version of Monopoly. (Shades of Dani Bunten Berry’s cherished childhood memories…) At the moment it supported only Monopoly, a title that appealed to a very different demographic from the hardcore crowd who favored Blizzard’s games, but who knew how long that would last?[4]Westwood Chat would indeed evolve eventually into Westwood Online, with full support for Command & Conquer, but that would happen only after Blizzard had rolled out their own service.

So, when Diablo shipped in the last week of 1996, it included something called Battle.net, a one-click chat and matchmaking service and multiplayer facilitator. Battle.net made everything easier than it had ever been before. It would even automatically patch your copy of the game to the latest version when you logged on, pioneering the “software as a service” model in gaming that has become everyday life in our current age of Steam. “It was so natural,” says Blizzard executive Max Schaefer. “You didn’t think about the fact that you were playing with a dude in Korea and a guy in Israel. It’s really a remarkable thing when you think about it. How often are people casually matched up in different parts of the world?” The answer to that question, of course, was “not very often” in the context of 1997. Today, it’s as normal as computers themselves, thanks to groundbreaking initiatives like this one. Blizzard programmer Jeff Strain:

We believed that in order for it [Battle.net] to really be embraced and adopted, that accessibility had to be there. The real catch for Battle.net was that it was inside-out rather than outside-in. You jumped right into the game. You connected players from within the game experience. You did not alt-tab off into a Web browser to set up your games and have the Web browser try to pass off information or something like that. It was a service designed from Day One to be built into actual games.
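The automatic patching that Battle.net pioneered boils down to a version handshake at login: the client announces what it is running, and the server either waves it through or hands back an update. The sketch below is purely illustrative; Battle.net's real protocol was never published in this form, and the version numbers and patch table here are invented:

```python
# Hypothetical login-time version handshake (not Battle.net's actual protocol).
LATEST_VERSION = (1, 9)
PATCHES = {
    (1, 0): b"...full patch bytes...",   # placeholder patch payloads
    (1, 8): b"...delta to latest...",
}

def handshake(client_version):
    """Server side: admit up-to-date clients, send a patch to stale ones."""
    if client_version == LATEST_VERSION:
        return ("ok", None)
    patch = PATCHES.get(client_version)
    if patch is not None:
        return ("patch", patch)       # client applies this, then reconnects
    return ("unsupported", None)      # too old to patch incrementally

print(handshake((1, 9))[0])   # ok
print(handshake((1, 0))[0])   # patch
```

The crucial design decision, as Strain describes, was that this exchange happened invisibly inside the game itself rather than in a separate downloader or Web page.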

The combination of Diablo and Battle.net brought a new, more palpable sort of persistence to online gaming. Players of DOOM or Warcraft II might become known as hotshots on services like Kali, but their reputation conferred no tangible benefit once they entered a game session. A DOOM deathmatch or a Warcraft II battle was a one-and-done event, one which everyone started on an equal footing and exited again within an hour or so, with nothing but memories and perhaps bragging rights to show for what had transpired.

Diablo, however, was different. Although less narratively and systemically ambitious than many of its recent brethren, it was nevertheless a CRPG, a genre all about building up a character over many gaming sessions. Multiplayer Diablo retained this aspect: the first time you went online, you had to pick one of the three pre-made first-level characters to play, but after that you could keep bringing the same character back to session after session, with all of the skills and loot she had already collected. Suddenly the link between the real people in the chat rooms and their avatars that lived in the game proper was much more concrete. Many found it incredibly compelling. People started to assume the roles of their characters even when they were just hanging out in the chat rooms, started in some very real sense to live the game.

But it wasn’t all sunshine and roses. Battle.net became a breeding ground of the toxic behaviors that have continued to dog online gaming to this day, a social laboratory demonstrating what happens when you take a bunch of hyper-competitive, rambunctious young men and give them carte blanche to have at it any way they wish with virtual swords and spells. The service was soon awash with “griefers,” players who would join others on their adventures, ostensibly as their allies in the dungeon, then literally stab them in the back when they least expected it, killing their characters and running off with all of their hard-won loot. The experience could be downright traumatizing for the victims, who had thought they were joining up with friendly strangers simply to have fun together in a cool new game. “Going online and getting killed was so scarring,” acknowledges David Brevik, Diablo‘s original creator. “Those players are still feeling a little bit apprehensive.”

To make matters worse, many of the griefers were also cheaters. Diablo had been born and bred a single-player game; multiplayer had been a very late addition. This had major ramifications. Diablo stored all the information about the character you played online on your local hard drive rather than the Battle.net server. Learn how to modify this file, and you could create a veritable god for yourself in about ten minutes, instead of the dozens of hours it would take playing the honest way. “Trainers” — programs that could automatically do the necessary hacking for you — spread like wildfire across the Internet. Other folks learned to hack the game’s executable files themselves. Most infamously, they figured out ways to attack other players while they were still in the game’s above-ground town, supposedly a safe space reserved for shopping and healing. Battle.net as a whole took on a siege mentality, as people who wanted to play honorably and honestly learned to lock the masses out with passwords that they exchanged only with trusted friends. This worked after a fashion, but it was also a betrayal of the core premise and advantage of Battle.net, the ability to find a quick pick-up game anytime you wanted one. Yet there was nothing Blizzard could do about it without rewriting the whole game from the ground up. They would eventually do this — but they would call the end result Diablo II. In the meanwhile, it was a case of player beware.
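Why client-held state is so easy to forge can be shown in a few lines. Diablo's real save format isn't reproduced here; the record layout, field names, and bounds below are invented purely to illustrate the problem, and the "server" check at the end is only a partial defense compared to the real fix of keeping the canonical character record server-side, as Diablo II later did:

```python
import struct

# Hypothetical character record: level, gold, strength (NOT Diablo's real format).
RECORD = struct.Struct("<IHH")

def save_character(level: int, gold: int, strength: int) -> bytes:
    return RECORD.pack(level, gold, strength)

def load_character(blob: bytes):
    return RECORD.unpack(blob)

# An honest first-level character, stored on the player's own disk...
blob = save_character(1, 100, 20)

# ...which a "trainer" can simply rewrite in place, no gameplay required.
cheated = save_character(50, 65535, 65535)
print(load_character(cheated))  # (50, 65535, 65535): an instant god

# A server that owns or at least validates the record can refuse the forgery.
def server_accepts(blob: bytes, max_level: int = 50) -> bool:
    level, gold, strength = load_character(blob)
    return level <= max_level and strength <= 250  # sanity bounds, invented here

print(server_accepts(cheated))  # False: rejected by even a simple bounds check
```

The lesson, learned the hard way across the industry, is that any data the client controls must be treated as untrusted input.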

It’s important to understand that, for all that it resembled what would come later all too much from a sociological perspective, multiplayer Diablo was still no more persistent than Moria and Oubliette had been on the old PLATO network: each player’s character was retained from session to session, but nothing about the state of the world. Each world, or instance of the game, could contain a maximum of four human players, and disappeared as soon as the last player left it, leaving as its legacy only the experience points and items its inhabitants had collected from it while it existed. Players could and did kill the demon Diablo, the sole goal of the single-player game, one that usually required ten hours or more of questing to achieve, over and over again in the online version. In this sense, multiplayer Diablo was a completely different game from single-player Diablo, replacing the simple quest narrative of the latter with a social meta-game of character-building and player-versus-player combat.

For lots and lots of people, this was lots and lots of fun; Diablo was hugely popular despite all of the exploits it permitted — indeed, for some players perchance, because of them. It became one of the biggest computer games of the 1990s, bringing online gaming to the masses in a way that even Kali had never managed. Yet there was still a ways to go to reach total persistence, to bring a permanent virtual world to life. Next time, then, we’ll see how mainstream commercial games of the 1990s sought to achieve a degree of persistence that the first MUD could boast of already in 1979. These latest virtual worlds, however, would attempt to do so with all the bells and whistles and audiovisual niceties that a new generation of gamers raised on multimedia and 3D graphics demanded. An old dog in the CRPG space was about to learn a new trick, creating in the process a new gaming acronym that’s even more of a mouthful than POMG.



Did you enjoy this article? If so, please think about pitching in to help me make many more like it. You can pledge any amount you like.


Sources: the books Stay Awhile and Listen Volumes 1 and 2 by David L. Craddock, Masters of Doom by David Kushner, and The Friendly Orange Glow by Brian Dear; Retro Gamer 43, 90, and 103; Computer Gaming World of September 1996 and May 1997; Next Generation of March 1997. Online sources include “The Story of Battle.net” by Wes Fenlon at PC Gamer, Dan Griliopoulos’s collection of interviews about Command & Conquer, Brian Moriarty’s speech honoring Dani Bunten Berry from the 1998 Game Developers Conference, and Jay Cotton’s history of Kali on the DOOM II fan site. Plus some posts on The CRPG Addict, to which I’ve linked in the article proper.

Footnotes
1 The PLATO Moria was a completely different game from the 1983 single-player roguelike that bore the same name.
2 Not the same game as the 2002 Bioware CRPG of the same name.
3 Wheeler Dealers and all of her other games that are mentioned in this article were credited to Dan Bunten, the name under which she lived until 1992.
4 Westwood Chat would indeed evolve eventually into Westwood Online, with full support for Command & Conquer, but that would happen only after Blizzard had rolled out their own service.
