

ZIL and the Z-Machine

When we left off last time, Marc Blank and Joel Berez were considering how to bring Zork to the microcomputer. Really, they were trying to solve three interrelated problems. At the risk of being pedantic, let me lay them out for you:

1. How to get Zork, a massive game that consumed 1 MB of memory on the PDP-10, onto their chosen minimum microcomputer system, an Apple II or TRS-80 with 32 K of RAM and a single floppy-disk drive.

2. How to do so in a portable way that would make it as painless as possible to move Zork not only to the Apple II and TRS-80 but also, if all went well, to many more current and future mutually incompatible platforms.

3. How to use the existing MDL source code to Zork as the basis for the new microcomputer version, rather than having to start all over again and implement the game from scratch in some new environment.

If you like, you can see the above as a ranking of the problems in order of importance, from “absolutely, obviously essential” to “would be really nice.” That’s not strictly necessary, though, because, as we’re about to see, Blank and Berez, with the eventual help of the others, actually solved them all pretty brilliantly. I wish I could neatly deal with each item above one at a time, but, as anyone who’s ever tackled a complicated programming task knows, solutions tend to get tangled up with one another pretty quickly. So instead I’ll have to just ask you to keep those three goals in mind as I explain how Blank and Berez’s design worked as a whole.

When faced with a game that is just too large to fit into a new environment, the most obvious solution is simply to make the game smaller — to remove content. That’s one of the things Infocom did with Zork. Stu Galley:

Dave examined his complete map of Zork and drew a boundary around a portion that included about 100 or so locations: everything “above ground” and a large section surrounding the Round Room. The object was to create a smaller Zork that would fit within the constraints established by the design of Joel and Marc. Whatever wouldn’t fit was to be saved for another game, another day.

By cutting Zork’s world almost in half, Infocom were able to dramatically reduce the size of the game. 191 rooms became 110; 211 items became 117; 911 parseable words became 617. It wasn’t a complete solution to their problems, but it certainly helped, and still left them with a huge game, about the same size as the original Adventure in numbers of rooms but dwarfing it in terms of items and words, and easily bigger than any other microcomputer adventure game. And, as Galley notes above, it left them with plenty of raw material out of which to build a possible sequel.

There were more potential savings to be had by looking at the MDL compiler. As a language designed to perform many general-purpose computing tasks, many of MDL’s capabilities naturally went unused by an adventure game like Zork. Even unused, however, they consumed precious memory. Infocom therefore took a pair of pruning shears to MDL just as they had to Zork itself, cutting away extraneous syntax and libraries, and retaining only what was necessary for implementing an adventure game. They named the new language ZIL, for Zork Implementation Language; the compiler that enabled the language, which still ran only on the PDP-10, they called Zilch. ZIL remained similar enough to MDL in syntax and approach that porting the old MDL Zork to ZIL was fairly painless, yet the new language not only produced tighter, faster executables but was much cleaner syntactically. In fact, ZIL encouraged Infocom to not just port Zork to the new language but to improve it in some ways; the parser, in particular, became even better when implemented in the more sympathetic ZIL.

Here is Zork’s lantern in MDL:

<OBJECT ["LAMP" "LANTE" "LIGHT"]
	["BRASS"]
	"lamp"
	<+ ,OVISON ,TAKEBIT ,LIGHTBIT>
	LANTERN
	()
	(ODESCO "A battery-powered brass lantern is on the trophy case."
	 ODESC1 "There is a brass lantern (battery-powered) here."
	 OSIZE 15
	 OLINT [0 >])>

And here’s the same item in ZIL:

<OBJECT LANTERN 
           (LOC LIVING-ROOM) 
           (SYNONYM LAMP LANTERN LIGHT) 
           (ADJECTIVE BRASS) 
           (DESC "brass lantern") 
           (FLAGS TAKEBIT LIGHTBIT) 
           (ACTION LANTERN-F) 
           (FDESC "A battery-powered lantern is on the trophy case.")
           (LDESC "There is a brass lantern (battery-powered) here.")
           (SIZE 15)>


Just for the record, I’ll give a quick explanation of the ZIL code shown above for those interested. The first line simply tells us that what follows will describe an item — or, in ZIL terminology, “object” — called “lantern.” The next line tells us it is in the living room of the white house. Then we see that it can be referred to by the player as “lamp,” “lantern,” or “light,” with the optional adjective “brass” (which might come in handy to distinguish it from the broken lantern found in another part of the game). The so-called short description — more properly the name under which it shows up in inventory listings and other places where it must be plugged into the text — is “brass lantern.” The TAKEBIT flag means that it is an item the player can pick up and carry around with her; the LIGHTBIT means that it casts light, illuminating any dark room in which it is placed or carried. LANTERN-F is the special action routine for the lantern, a bit of code that allows us to write special “rules” for the lantern that apply only to it, such as routines to allow the player to turn it off and on; as I discussed earlier, this level of programmability and the associated object-oriented approach really make MDL, and by extension ZIL, stand out from other adventure-game development systems of their era. The FDESC is the description of the lantern that appears before it has been moved, as part of the room description for the living room; the LDESC appears after it has been moved and set down somewhere else. Finally, the SIZE determines the size and weight of the lantern for purposes of deciding how much the player can carry with her at one time. The rather messier MDL source I’ll leave as an exercise for you to translate…

So, at this point Infocom have largely addressed problem #3, and at least come a long way with problem #1. That left them still with problem #2. You might think it would be easy enough to design an adventure-engine / database partnership like the one Scott Adams came up with. However, this was problematic. Remember that one of the things that made Zork’s development environment, whether it be MDL or ZIL, so unique was its programmability. To go to a solution like that of Adams would force them to sacrifice that programmability, and ZIL along with it. For ZIL to work, it needed to be able to run code to handle those special interactions like turning the lamp on or off; it needed, in other words, to be a proper, Turing-complete programming language, not just a data-entry system. But how to do that while also having a system that was portable from machine to (incompatible) machine? The answer: they would design a virtual machine, an imaginary computer optimized just for playing text adventures in the same way that ZIL was for coding them, then code an interpreter to simulate that computer on each platform for which they decided to release Zork.
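The virtual-machine idea is easy to sketch in modern terms: the game is just data (a program for an imaginary computer), and each platform supplies a loop that fetches and executes its instructions. Here is a deliberately tiny illustration in Python; the instruction set is invented for this sketch and bears no resemblance to the real Z-Machine's far richer one.

```python
# A toy illustration of the virtual-machine idea: a tiny interpreter
# for an imaginary stack machine. The opcodes here are invented for
# illustration -- the real Z-Machine's instruction set is far richer.

def run(program):
    """Execute a list of (opcode, operand) pairs on a simple stack machine."""
    stack = []
    pc = 0  # program counter
    output = []
    while pc < len(program):
        op, arg = program[pc]
        pc += 1
        if op == "PUSH":
            stack.append(arg)
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "PRINT":
            output.append(str(stack.pop()))
        elif op == "HALT":
            break
    return output

# The same program runs unchanged on any host that implements this
# loop -- that is the whole portability trick.
print(run([("PUSH", 2), ("PUSH", 3), ("ADD", None), ("PRINT", None), ("HALT", None)]))  # prints: ['5']
```

Port the `run` loop to a new machine and every "story file" you ever write runs there for free; that is the economy Blank and Berez were after.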

Virtual machines are everywhere today. The apps you run on your Android smartphone actually run inside a virtual machine. You might use something like VMWare on your desktop machine to let you run Linux inside Windows, or vice versa. Big mainframe installations and, increasingly, high-end servers running operating systems like Linux often run in virtual machines abstracted from the underlying hardware, which amongst other benefits lets one carve one giant mainframe up into a number of smaller mainframes. Scenarios like that aside, virtual machines are so appealing for essentially two reasons; virtually (ha!) everyone who decides to employ one does so for one or the other, or, often, both. First, they are much more secure. If malicious code such as a virus gets introduced into a virtual machine, it is contained there rather than infecting the host system, and code that crashes the virtual machine — whether it does so intentionally or accidentally — crashes only the virtual machine, not the host system as a whole. Second, a virtual machine allows one to run the same program on otherwise incompatible devices. It is “write once, run anywhere,” as Java zealots used to say. In Java’s case, each target platform need only have a current implementation of the Java virtual machine (not necessarily the language; just the virtual machine). Virtual machines also have one big disadvantage: because the host platform is emulating another computer, they tend to be much, much slower than native code run on the same platform. (Yes, technologies like just-in-time compilation can do a lot to alleviate this, but let’s not get any further afield.) Still, computing power is cheap and ubiquitous these days, so this generally doesn’t present such a problem.
In fact, the modern situation can get kind of ridiculous; my Kindle version of The King of Shreds and Patches is actually built from one virtual machine (Glulx) running inside another virtual machine (the Java virtual machine), all running on a tiny handheld e-reader — and performance is still acceptable.

Even in 1979 the virtual machine was not a new idea. Between 1965 and 1967, a team at IBM had worked in close partnership with MIT’s Lincoln Laboratory to create an operating system called CP-40, under which up to 14 users were each able to log into their own, single-user computer — simulated entirely in software running on a big IBM mainframe. CP-40 eventually became the basis of the appropriately named VM operating system, first released by IBM in 1972 and still widely used on mainframes today. In 1978, a Pascal implementation known as UCSD Pascal introduced the P-Machine, a portable virtual machine that allowed programs written in UCSD Pascal to run on many disparate machines, including even the Apple II following the release of Apple Pascal in August of 1979. The P-Machine became a major influence on Infocom’s own virtual machine, the Z-Machine.

In opting for a virtual machine they would of course have to pay the performance penalty all virtual machines exact, but this wouldn’t be quite as big as you might expect. Just as they had optimized ZIL, Blank and Berez made the Z-Machine as light and efficient as they possibly could, including only those features really useful for running adventure games. They would implement each platform’s interpreter entirely in highly optimized assembly language, with the result that Zork would, even running inside a virtual machine, still run much, much faster than the BASIC adventures that were common at the time. Anyway, the processing powers of the micros, limited as they were, had never been their real concern in getting Zork onto them — memory was the bottleneck. Yes, they would have to sacrifice some additional memory for the interpreter, but they could save even more by building efficiencies into the Z-Machine. For instance, a special encoding scheme allowed them to store most characters in 5 rather than 8 bits, and to replace the most commonly used words with abbreviations in the code. Such text compression was very significant considering that text is, after all, most of what makes up a text adventure. With such compression techniques, along with all of the slicing and dicing of the game itself and the ZIL language, they ended up with a final game just 77 K in size, not counting of course the virtual-machine interpreter needed to run it; this latter Infocom called Zip (not to be confused with the file-compression format). The 77 K game file itself, which Infocom took to calling the “story file,” is essentially a snapshot of the virtual machine’s memory in its opening state.
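The 5-bit packing scheme is worth a quick illustration. The Python sketch below follows the general idea of the format — three 5-bit character codes per 16-bit word, the high bit marking the final word, codes 6 through 31 covering the lowercase alphabet — while omitting the shift characters and abbreviation tables that the real encoding also uses.

```python
# A sketch of the Z-Machine's text packing: three 5-bit character codes
# per 16-bit word. Shift characters and abbreviations are omitted, so
# this handles only lowercase letters.

ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def encode(text):
    """Pack lowercase text into 16-bit words, three 5-bit codes per word."""
    codes = [ALPHABET.index(c) + 6 for c in text]   # a-z -> 6..31
    while len(codes) % 3:
        codes.append(5)                             # pad with a shift code
    words = []
    for i in range(0, len(codes), 3):
        a, b, c = codes[i:i + 3]
        words.append((a << 10) | (b << 5) | c)
    words[-1] |= 0x8000                             # high bit marks the end
    return words

def decode(words):
    text = []
    for w in words:
        for code in ((w >> 10) & 31, (w >> 5) & 31, w & 31):
            if code >= 6:                           # skip the padding codes
                text.append(ALPHABET[code - 6])
    return "".join(text)

# Seven letters pack into three 16-bit words -- 6 bytes instead of 7,
# and the savings grow with longer runs of text.
assert decode(encode("zorkmid")) == "zorkmid"
```

Multiplied across the tens of thousands of words in a game that is mostly text, shaving those bits was one of the keys to squeezing Zork onto a floppy disk.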

When we talk about the storage capacity of a computer, we’re really talking about (much to the confusion of parents and grandparents everywhere) two separate figures: disk capacity and memory (RAM) capacity. An Apple II could store 140 K on a single floppy disk, while the TRS-80 actually did a bit better, managing 180 K. Thus, Infocom now had a game that could fit quite comfortably along with the necessary interpreter on a single disk. RAM was the problem: even if we forget about the necessary interpreter, 77 K just doesn’t go into 32 K, no matter how much you try to force it. Or does it?

It was not unheard of even at this time to use the disk as a sort of secondary memory, reading bits and pieces of data from there into RAM and then discarding them when no longer needed. Microsoft had used just this technique to fit Adventure into the 32 K TRS-80; each bit of text, all stored in a file external to the game itself as per Crowther and Woods’s original design, was read in from disk only when it needed to be printed. However, Infocom’s more sophisticated object-oriented system necessarily intermingled its text with its code, making such a segregated approach impractical. Blank and Berez therefore went a step further: having already designed a virtual machine, they now added an implementation of virtual memory to accompany it.
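Microsoft's segregated approach is simple to sketch: keep every message in an external file, indexed by offset, and touch the disk only at the moment a message must actually be printed. The file layout and function names below are invented for illustration, not taken from Microsoft's actual implementation.

```python
# A sketch of text-on-demand: all messages live in an external "file"
# (an in-memory stand-in here), and each is read from disk only when
# it must be printed. Layout and names are invented for illustration.

import io

def build_text_file(messages):
    """Concatenate length-prefixed messages; return (file, offset_table)."""
    buf = io.BytesIO()
    offsets = []
    for msg in messages:
        offsets.append(buf.tell())
        data = msg.encode("ascii")
        buf.write(len(data).to_bytes(2, "big"))  # 2-byte length prefix
        buf.write(data)
    return buf, offsets

def print_message(f, offsets, n):
    """Seek to message n and read it from 'disk' only now."""
    f.seek(offsets[n])
    length = int.from_bytes(f.read(2), "big")
    return f.read(length).decode("ascii")

f, offsets = build_text_file(["You are in a maze.", "It is pitch black."])
print(print_message(f, offsets, 1))  # only this one message occupies RAM
```

The catch, as the paragraph above notes, is that this only works when the text is cleanly separated from the code — which Infocom's object-oriented design made impossible.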

Virtual memory was likewise not a new concept in the wider world of computer science. In fact, it dates back even further than the virtual machine, to an early supercomputer developed at the University of Manchester called the Atlas, officially commissioned in 1962. In a virtual-memory system, each program does not have an “honest” view of the host computer’s physical memory. It is instead given a sort of idealized memory map to play with, which may have little to do with the real layout of its host computer’s physical RAM. When it reads from and writes to pieces of this map, the host automatically and transparently translates the virtual addresses into real addresses inside its physical memory. Why bother with such a thing, especially as it necessarily adds processing overhead? Once again, for two main reasons, both of which are usually taken to apply only to a multitasking operating system, something that was little more than a dream for a microcomputer of 1979 or 1980. First, by effectively sandboxing each program’s memory from every other program’s memory, as well as that being used by the operating system itself, virtual memory ensures that a program cannot, out of malice or simple bugginess, go rogue and trash other programs or even bring down the whole system. Second, it gives a computer a fallback position of sorts — an alternative to outright failure — should the program(s) running on it ask for more physical memory than it actually has to give. When that happens, the operating system looks through its memory to find pieces that aren’t being used very often. It then caches these away on disk, making room in physical RAM to allocate the new request. When cached areas are accessed again, they must of course be read back into RAM, possibly being swapped with other chunks if memory is still scarce.
All of this happens transparently to the program(s) in question, which continue to live within their idealized view of memory, blissfully unaware of the huffing and puffing the underlying system is doing to keep everything going. Virtual memory has been with us for many years now in the desktop PC world. Of course, there inevitably comes a point of diminishing returns with such a scheme; if you’ve ever opened lots and lots of windows or programs on an older PC and seen everything get really, really slow while the hard disk grinds like a saw mill, now you know what was going on (assuming you didn’t already know, of course; we assume no default level of technical knowledge here at Digital Antiquaria Central).

For the Z-Machine, Berez and Blank employed a much simpler version of virtual memory than you’ll find in the likes of Windows, Linux, or OS X. While such important dynamic information as the current position of the items in the game world must of course always be tracked and updated dynamically, most of the data that makes up a game like Zork is static, unchanging: lots and lots of text, of course, along with lots of program code. Berez and Blank were able to design the ZIL compiler in such a way that it placed all of the stuff that could conceivably change, which we’ll call the dynamic data, first in the story file. Everything else, which we’ll call the static data, came afterward. As it turned out, the 77 K Zork story file contained only 18 K of dynamic data. So, here’s what they did…

The dynamic data — memory the virtual machine will write to as well as read — is always stored in the host computer’s RAM. The static data, however, is loaded in and out of RAM by the interpreter as needed in 1 K blocks known as pages. Put another way: from the perspective of the game program, it has fully 77 K of memory to work with. The interpreter, meanwhile, is frantically swapping blocks of memory in and out of the much more limited physical RAM to maintain this illusion. Like the virtual machine itself, this virtual-memory scheme obviously brings with it a speed penalty, but needs must. On a 32 K system with 18 K reserved for dynamic data, Infocom still had 14 K left over to host the VM interpreter itself, a small stack (an area where programs store temporary information needed for moment-to-moment processing), and a page or two of virtual memory. Sure, it was a bit sluggish at times, but it worked. And, when run on a system with, say, 48 K, the interpreter could automatically detect and use this additional memory to keep more static data in physical RAM, thus speeding things along and rewarding the user for her hardware investment.
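For the curious, the paging scheme can be sketched in a few lines of Python. The cache size and the least-recently-used eviction policy here are illustrative choices for the sketch, not a claim about Infocom's exact implementation.

```python
# A sketch of the Z-Machine's virtual memory: dynamic memory stays
# resident in RAM, while static memory is served from "disk" in 1 K
# pages through a small cache. Cache size and LRU eviction are
# illustrative choices, not Infocom's exact design.

from collections import OrderedDict

PAGE_SIZE = 1024

class StoryMemory:
    def __init__(self, story_bytes, dynamic_size, cache_pages=2):
        self.dynamic = bytearray(story_bytes[:dynamic_size])  # always in RAM
        self.static = story_bytes          # stands in for the file on disk
        self.dynamic_size = dynamic_size
        self.cache = OrderedDict()         # page number -> 1 K block
        self.cache_pages = cache_pages

    def _page(self, n):
        if n not in self.cache:
            if len(self.cache) >= self.cache_pages:
                self.cache.popitem(last=False)     # evict least recently used
            start = n * PAGE_SIZE                  # "read from disk"
            self.cache[n] = self.static[start:start + PAGE_SIZE]
        self.cache.move_to_end(n)
        return self.cache[n]

    def read(self, addr):
        if addr < self.dynamic_size:
            return self.dynamic[addr]
        return self._page(addr // PAGE_SIZE)[addr % PAGE_SIZE]

    def write(self, addr, value):
        assert addr < self.dynamic_size, "static memory is read-only"
        self.dynamic[addr] = value

story = bytes(77 * 1024)                   # stands in for a 77 K story file
mem = StoryMemory(story, dynamic_size=18 * 1024)
mem.write(0, 1)                            # dynamic memory: writable, resident
_ = mem.read(40 * 1024)                    # static memory: paged in on demand
```

From the game's perspective the whole 77 K is simply there; the interpreter does the huffing and puffing behind the scenes, and on a machine with more RAM it can just raise `cache_pages` and run faster.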

With the ZIL / Z-Machine scheme as a whole, Infocom had created a robust, reusable system that could have life far beyond this one-time task of squeezing Zork onto the TRS-80 and Apple II. I trust I’m not spoiling anything if I reveal that that’s exactly what happened.

With this technical foundation, we’ll look next time at the process of actually getting Zork onto the market.

 
 


The Birth of Infocom

As the Dynamic Modeling Group put the final touches on Zork and put it to bed at last, it was beginning to feel like the end of an era at MIT. Marc Blank was about to graduate from medical school and begin his residency in Pittsburgh, which would make extensive MIT hacking impossible even given his seemingly superhuman capacities. Others were finishing their own degree programs at MIT, or just running out of justifications for forestalling “real” careers with real salaries by hanging around their alma mater. In fact, a generational exodus was beginning, not just from the DMG but from MIT’s Laboratory for Computer Science and AI Lab in general as well. Pressures from the outside world were intruding on the hacker utopia inside MIT at last, pressures which in the next few years would change it forever. Much of the change stemmed from the invention of the microcomputer.

Most in established institutional hacking environments like MIT were initially unimpressed by what’s come to be called the PC revolution. That’s not so surprising, really. Those early microcomputers were absurdly limited machines. The homebrew hackers who bought (and often built) them were just excited to have unfettered access to something that, however minimally, met the definition of “computer.” Those privileged to find a place at an institution like MIT, however, not only had unfettered or nearly unfettered access to the systems there, but said systems were powerful enough to really do something. What charms did an Altair or even TRS-80 have to compare with sophisticated operating systems like TOPS-10 or TOPS-20 or ITS, with well-structured programming languages like LISP and MDL, with research into AI and natural-language processing, even with networked games like Maze and Trivia and, yes, Zork? The microcomputer world looked like a hopelessly uncultured and untutored one, bereft of a whole hacking tradition stretching back two decades or more. How could anyone try to build complex software using BASIC? When many institutional hackers deigned to notice the new machines at all, it was with withering contempt; Stu Galley called “We hate micros!” the unofficial motto of the DMG. They regarded the micros as little more than toys — the very same reaction as most of the general population.

By the spring of 1979, though, it was becoming increasingly clear to anyone willing to look that the little machines had their uses. WordStar, the first really usable microcomputer word processor, had been out for a year, and was moving more and more CP/M-based machines into offices and even writers’ studies. At the West Coast Computer Faire that May, Dan Bricklin gave the first demonstration of VisiCalc, the world’s first spreadsheet program, which would revolutionize accounting and business-planning practice. “How did you ever do without it?” asked the first pre-release advertisement, hyperbolically but, as it turned out, presciently; a few years later millions would be asking themselves just that question. Unlike WordStar and even Scott Adams’s Adventureland, VisiCalc was not a more limited version of an institutional computing concept implemented on microcomputer hardware. It had been conceived, designed, and implemented entirely on the Apple II, the first genuinely new idea in software to be born on the microcomputer — and a sign of a burgeoning changing of the guard.

The microcomputer brought many, many more users to computers than had ever existed before. That in turn brought more private-industry investment into the field, driven by a new reality: that you could make real money at this stuff. And that knowledge brought big changes to MIT and other institutions of “pure” hacking. Most (in)famously, the AI Lab was riven that winter and spring of 1979 by a dispute between Richard Greenblatt, pretty much the dean of the traditional hacker ethic at MIT, and a more pragmatic administrator named Russell Noftsker. Along with a small team of other hackers and hardware engineers, Greenblatt had developed a small single-user computer — a sort of boutique micro, the first of what would come to be called “workstations” — optimized for running LISP. Believing the design to have real commercial potential, Noftsker approached Greenblatt with a proposal to form a company and manufacture it. Greenblatt initially agreed, but soon proved (at least in Noftsker’s view) unwilling to sacrifice even the most minute hacker principle in the face of business realities. The two split in an ugly way, with Noftsker taking much of the AI Lab with him to implement Greenblatt’s original concept as Symbolics, Inc. Feeling disillusioned and betrayed, Greenblatt eventually left as well to form his own, less successful company, Lisp Machines.

It’s not as if no one had ever founded a company out of MIT before, nor that commerce had never mixed with the idealism of the hackers there. The founders of DEC itself, Ken Olsen and Harlan Anderson, were MIT alumni who had done the basic design for what became DEC’s first machine, the PDP-1, as students there in the mid-1950s. Thereafter, MIT always maintained a cozy relationship with DEC, testing hardware and, most significantly, developing much essential software for the company’s machines — a relationship that was either, depending on how you look at it, a goldmine for the hackers in giving them perpetual access to the latest technology or a brilliant scheme by DEC for utilizing some of the best computing minds of their generation without paying them a dime. Still, what was happening at MIT in 1979 felt qualitatively different. These hackers were almost all software programmers, after all, and the microcomputer market was demonstrating that it was now possible to sell software on its own as prepackaged works, the way you might a record or a book. As a wise man once said, “Money changes everything.” Many MIT hackers were excited by the potential lucre, as evidenced by the fact that many more chose to follow Noftsker than the idealistic Greenblatt out of the university. Only a handful, such as Marvin Minsky and the ever-stubborn Richard Stallman, remained behind and continued to hew relentlessly to the old hacker ethic.

Infocom’s founders were not among the diehards. As shown by their willingness to add (gasp!) security to ITS to protect their Zork source, something that would have drawn howls of protest from Stallman on at least two different levels, their devotion to the hacker ethic of total sharing and transparency was negotiable at best. In fact, Al Vezza and the DMG had been mulling over commercial applications for the group’s creations as far back as 1976. As the 1979 spring semester wrapped up, however, it seemed clear that if this version of the DMG, about to be scattered to the proverbial winds as it was, wanted to do something commercially, the time to get started was now. And quite a lot of others at MIT were doing the same thing, weren’t they? It wouldn’t do to be left behind in an empty lab, as quite literally happened to poor old Richard Stallman. That’s how Al Vezza saw the situation, anyway, and his charges, eager to remain connected and not averse to increasing their modest university salaries, quickly agreed.

And so Infocom was officially founded on June 22, 1979, with ten stockholders. Included were three of the four hackers who had worked on Zork: Tim Anderson, Dave Lebling, and the newly minted Dr. Marc Blank (commuting from his new medical residency in Pittsburgh). There were also five other current or former DMG hackers: Mike Broos, Scott Cutler, Stu Galley, Joel Berez, and Chris Reeve. And then there was Vezza himself and even Licklider, who agreed to join in the same sort of advisory role he had filled for the DMG back at MIT. Each person kicked in whatever funding he could afford, ranging from $400 to $2000, and received an appropriate percentage of the new company’s stock in return. Total startup funds amounted to $11,500. The name was necessarily nondescript, considering that no one knew quite what (if anything) the company would eventually do. The fractured, futuristic compound was much in vogue amongst technology companies of the time — Microsoft, CompuWare, EduWare — and Infocom just followed the trend in choosing the name “least objectionable to everyone.”

As should be clear from the above, Infocom did not exactly begin under auspicious circumstances. I’d call them a garage startup, except that they didn’t even have a garage. Infocom would exist for some months as more of a theoretical company in limbo than an actual business entity. It didn’t even get its first proper mailing address — a P.O. Box — until March of 1980. Needless to say, no one was quitting their day jobs as they met from time to time over the following months to talk about what ought to come next. In August, Mike Broos had already gotten bored with the endeavor and quit, leaving just nine partners. Everyone agreed that they needed something they could put together relatively quickly to sell and really get the company off the ground. More ambitious projects could then follow. But what could they do for that first project?

The hackers trawled through their old projects from MIT, looking for ideas. They kept coming back to the games. There was that Trivia game, but it wouldn’t be practical to store enough questions on a floppy disk to make it worthwhile. More intriguing was the Maze game. Stand-up arcades were booming at the time. If Infocom could build a version of Maze for arcades, they would have something unprecedented. Unfortunately, getting there would require a huge, expensive hardware- as well as software-engineering project. The Infocom partners were clever enough, but they were all software rather than hardware hackers, and money was in short supply. And then of course there was Zork… but there was no way to squeeze a 1 MB adventure game into a 32 K or 48 K microcomputer. Anyway, Vezza wasn’t really comfortable with getting into the games business on any terms, fearing it could tarnish the company’s brand even if only used to raise some early funds and bootstrap the startup. So there was also plenty of discussion of other, more business-like ideas also drawn from the DMG’s project history: a document-tracking system, an email system, a text-processing system.

Meanwhile, Blank was living in Pittsburgh and feeling rather unhappy at being cut off from his old hacking days at MIT. Luckily, he did have at least one old MIT connection there. Joel Berez had worked with the DMG before graduating in 1977. He had spent the last two years living in Pittsburgh and working for his family’s business (which experience perhaps influenced the others to elect him as Infocom’s President in November of 1979). Blank and Berez made a habit of getting together for Chinese food (always the hacker’s staple) and talking about the old times. These conversations kept coming back to Zork. Was it really impossible to even imagine getting the game onto a microcomputer? Soon the conversations turned from nostalgic to technical. As they began to discuss technical realities, other challenges beyond even that of sheer computing capacity presented themselves.

Even if they could somehow get Zork onto a microcomputer, which microcomputer should they choose? The TRS-80 was by far the best early seller, but the Apple II, the Cadillac of the trinity of 1977, was beginning to come on strong now, aided by the new II Plus model and VisiCalc. Next year, and the year after that… who knew? And all of these machines were hopelessly incompatible with one another, meaning that reaching multiple platforms must seemingly entail re-implementing Zork — and any future adventure games they might decide to create — from scratch on each. Blank and Berez cast about for some high-level language that might be relatively portable and acceptable for implementing a new Zork, but they didn’t find much. BASIC was, well, BASIC, and not even all that consistent from microcomputer to microcomputer. There was a promising new implementation of the more palatable Pascal for the Apple II on the horizon, but no word of a similar system on other platforms.

So, if they wanted to be able to sell their game to the whole microcomputer market rather than just a slice of it, they would need to come up with some sort of portable data design that could be made to work on many different microcomputers via an interpreter custom-coded for each model. Creating each interpreter would be a task in itself, of course, but at least a more modest one, and if Infocom should decide to do more games after Zork the labor savings would begin to become very significant indeed. In reaching this conclusion, they followed a line of reasoning already well-trod by Scott Adams and Automated Simulations.

But then there was still another problem: Zork currently existed only as MDL source, a language which of course had no implementation on any microcomputer. If they didn’t want to rewrite the entire game from scratch — and wasn’t the point of this whole exercise to come up with a product relatively quickly and easily? — they would have to find a way to make that code run on microcomputers.

They had, then, quite a collection of problems. We’ll talk about how they solved every one of them — and pretty brilliantly at that — next time.

 
 


Zork on the PDP-10

One distinguishing trait of hackers is the way they never see any program as completely, definitively done; there are always additions to be made, rough edges to be smoothed. Certainly Adventure, impressive as it was, left plenty of room for improvement. On top of all that, though, one also has to consider that Adventure came to MIT from Don Woods of Stanford’s AI Lab, perhaps the only computer-science program in the country with a stature remotely comparable to that of MIT. MIT students are fiercely proud of their alma mater. If Stanford had done the adventure game first, MIT’s Dynamic Modeling Group could still do it better. And it didn’t hurt that the heritage of Project MAC and the Laboratory for Computer Science, not to mention the DMG itself, equipped them with quite an arsenal of tools to bring to bear on the problem.

Adventure had been implemented in FORTRAN, a language with no particular suitability for the creation of a text adventure. Indeed, FORTRAN wasn’t even natively designed to handle variable-length strings, leaving Crowther and Woods to kludge their way around this problem as they did plenty of others. Both were of course very talented programmers, and so they made the best of it. Still, the hackers at DMG, whose opinion of FORTRAN was elevated about half a step above their opinion of BASIC, couldn’t wait to design their own adventure game using their own pet language, MDL. Not only did MDL, as a language at least partially designed for AI research, boast comparatively robust string-handling capabilities, but it also offered the ability to define complex new data types suitable to a specific task at hand and even to tie pieces of code right into those structures. Let me try to explain what made that so important.

We’ll start with the opening room of Zork, the game the DMG eventually produced in response to Adventure. Its description reads like this to the player:

West of House
This is an open field west of a white house, with a boarded front door.
There is a small mailbox here.
A rubber mat saying 'Welcome to Zork!' lies by the door.

Here’s the original MDL source that describes this room:

<ROOM "WHOUS"
"This is an open field west of a white house, with a boarded front door."
"West of House"
<EXIT "NORTH" "NHOUS" "SOUTH" "SHOUS" "WEST" "FORE1"
"EAST" #NEXIT "The door is locked, and there is evidently no key.">
(<GET-OBJ "FDOOR"> <GET-OBJ "MAILB"> <GET-OBJ "MAT">)
<>
<+ ,RLANDBIT ,RLIGHTBIT ,RNWALLBIT ,RSACREDBIT>
(RGLOBAL ,HOUSEBIT)><

Just about everything the program needs to know about this room is nicely encapsulated here. Let’s step through it line by line. The “ROOM” tag at the beginning defines this structure as a room, with the shorthand name “WHOUS.” The following line of text is the room description the player sees when entering for the first time, or typing “LOOK.” “West of House” is the full name of the room, the one which appears as the header to the room description and in the status line at the top of the screen whenever the player is in this room. Next we have a list of exits from the room: going north will take the player to “North of House,” south to “South of House”, west to one of several rooms that make up the “Forest.” Trying to go east will give a special failure message, saying that the player doesn’t have a key for the door, rather than a generic “You can’t go that way.” Next we have the items in the room as the game begins: the front door, the mailbox, and the welcome mat. Then a series of flags define some additional properties of the room: that it is on land rather than overrun with water; that it has light even if the player does not have a lit lantern with her; that (being outdoors) it has no walls; that it is “sacred,” meaning that the thief, a character who wanders about annoying the player in a manner akin to the dwarfs and the pirate in Adventure, cannot come here. And finally the last line defines this room as being associated with the white house, or if you will a part of the house “region” of the game’s geography.
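
For readers who find the MDL syntax opaque, the same record can be sketched as a plain data structure in modern Python. This is only an illustration of the information the MDL block encodes; the class and field names here are my own inventions, not anything from Infocom's code, and the flag labels are readable stand-ins for RLANDBIT and friends:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Exit:
    to_room: Optional[str]               # shorthand name of destination, or None if blocked
    fail_message: Optional[str] = None   # shown instead of the generic "You can't go that way."

@dataclass
class Room:
    short_name: str
    full_name: str
    description: str
    exits: dict = field(default_factory=dict)
    contents: list = field(default_factory=list)
    flags: set = field(default_factory=set)
    region: Optional[str] = None

west_of_house = Room(
    short_name="WHOUS",
    full_name="West of House",
    description="This is an open field west of a white house, "
                "with a boarded front door.",
    exits={
        "NORTH": Exit("NHOUS"),
        "SOUTH": Exit("SHOUS"),
        "WEST":  Exit("FORE1"),
        "EAST":  Exit(None, "The door is locked, and there is "
                            "evidently no key."),
    },
    contents=["FDOOR", "MAILB", "MAT"],
    flags={"LAND", "LIGHT", "NO-WALLS", "SACRED"},  # stand-ins for RLANDBIT, RLIGHTBIT, etc.
    region="HOUSE",
)
```

Everything the paragraph above walks through lands in one self-describing record: the exits, the special failure message for EAST, the initial contents, the flags, the region. That self-containment is exactly the property that made adding rooms to Zork so cheap.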

Each item and character in the game has a similar definition block explaining most of what the game needs to know about it. Notably, even the special abilities or properties of these are defined as part of them, via links to special sections of code crafted just for them. Thus, once the scaffolding of utility code that enables all of this was created (no trivial task, of course), adding on to Zork largely involved simply defining more rooms, items, and characters, with no need to dive again into the engine that enables everything; only special capabilities of items and characters needed to be coded from scratch and linked into their hosts. In forming their world from a collection of integrated “objects,” the hackers at DMG were pushing almost accidentally toward a new programming paradigm that would first become a hot topic in computer science years later: object-oriented programming, in which programs are not divided rigorously into code that executes and the data it manipulates, but are rather built out of the interaction of semi-autonomous objects encapsulating their own code and data. OOP was regarded for a time as the ideal solution to pretty much everything (possibly including the attainment of world peace); today there is a (probably justified) push-back in some quarters against the one-size-fits-all imposition of OOP theory found in some languages, such as Java. Be that as it may, OOP is pretty much ideal for crafting a text adventure. To show what I mean, let’s look at the alternative, as illustrated by the very non-OOP FORTRAN Adventure.
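
To make "tying code right into the world's objects" concrete, here is a minimal sketch of the idea in Python. This is my own toy illustration of the paradigm, not anything resembling DMG's actual code: the mailbox object carries its own response to being opened, so the engine dispatches to the object and never needs a special case of its own:

```python
class Thing:
    """Generic object: the engine's default behaviors live here."""
    def __init__(self, name):
        self.name = name

    def do_open(self):
        # Default engine response, overridden per object as needed.
        return f"You can't open the {self.name}."

class Mailbox(Thing):
    """A special object: its custom behavior is part of its definition."""
    def __init__(self):
        super().__init__("mailbox")
        self.contents = ["leaflet"]
        self.is_open = False

    def do_open(self):
        self.is_open = True
        return "Opening the mailbox reveals a leaflet."

def handle_open(obj):
    # The engine just dispatches; adding a new special object means
    # writing a new class, not editing the engine itself.
    return obj.do_open()
```

The point is the direction of the dependency: the engine knows nothing about mailboxes, so the world can keep growing without anyone touching the engine.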

Each room in Adventure is given a number, from 1 (the starting location outside the small brick building, naturally) to 140 (a dead end location in the maze, less naturally). To find the long description of a room, shown when the player enters for the first time or LOOKs, the program digs through the first table in an external data file, matching the room number to the entries:

1
1	YOU ARE STANDING AT THE END OF A ROAD BEFORE A SMALL BRICK BUILDING.
1	AROUND YOU IS A FOREST.  A SMALL STREAM FLOWS OUT OF THE BUILDING AND
1	DOWN A GULLY.


Another table defines the short description shown upon entering an already visited room:

1	YOU'RE AT END OF ROAD AGAIN.


And now it gets really fun. Another table tells us what lies in what direction:

1	2	2	44	29
1	3	3	12	19	43
1	4	5	13	14	46	30
1	5	6	45	43
1	8	63


The first line above tells us that when in room 1 we can go to room 2 by typing any of three entries from yet another table, this time of keywords: “ROAD” or “HILL” (entry 2); “WEST” or “W” (entry 44); or “UPWAR” (pattern matching is done on just the first 5 characters of each word), “UP,” “ABOVE,” or “ASCEN” (entry 29). Definitions for items are similarly scattered over multiple tables within the data file. Thus, while Adventure does make some attempt to abstract its game engine from the data that makes up its world (placing the latter as much as possible within the external data file), modifying the world is a fiddly, error-prone process of editing multiple cryptic tables. Early adventuring engines created on microcomputers, such as those of Scott Adams, work in a similar fashion. Although it is of course possible to develop tools to ease the burden of hand-editing data files, the MDL Zork system is flexible and programmable in a way that these systems are not; with no ability to build code right into the world’s objects, as it were, crafting non-standard objects in Adventure or a Scott Adams game generally required hacking on the engine code itself, an ugly proposition.
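
A sketch of how a table-driven engine like Adventure's resolves movement may make the scheme clearer. The table contents below follow the listing above (entry 2 = ROAD/HILL, entry 44 = WEST/W, entry 29 = UPWAR/UP/ABOVE/ASCEN), but the lookup code itself is my own Python illustration, not Crowther and Woods's FORTRAN:

```python
from typing import Optional

# Travel table: (current room, destination, keyword entries that trigger the move).
TRAVEL = [
    (1, 2, [2, 44, 29]),
    (1, 3, [3, 12, 19, 43]),
    (1, 4, [5, 13, 14, 46, 30]),
    (1, 5, [6, 45, 43]),
    (1, 8, [63]),
]

# Keyword table: entry number -> the words that map to it.
KEYWORDS = {
    2:  ["ROAD", "HILL"],
    44: ["WEST", "W"],
    29: ["UPWAR", "UP", "ABOVE", "ASCEN"],
}

def resolve_move(room: int, word: str) -> Optional[int]:
    """Return the destination room for a movement word, or None if no match."""
    word = word.upper()[:5]  # Adventure matches only the first 5 characters
    for entry, words in KEYWORDS.items():
        if word in (w[:5] for w in words):
            for frm, dest, entries in TRAVEL:
                if frm == room and entry in entries:
                    return dest
    return None
```

Notice that changing the world means keeping three separate tables consistent by hand — precisely the fiddly, error-prone editing described above.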

So, MDL was just better for writing an adventure game, capable of cleanly representing a huge world in a readable, maintainable way. It was almost as if MDL had been designed for the purpose. Indeed, if you’ve used a more modern IF programming language like Inform 6, you might be surprised at how little their approach to defining a world has changed since the days of MDL Zork. (Inform 7, one of the latest and greatest tools for IF development, does drift away from the OOP model in favor of a more human-readable — even “literary” — rules-based approach. Suffice it to say that the merits and drawbacks of the Inform 7 approach are a subject too complex to go into here. Maybe in 20 years, when the Digital Antiquarian finally makes it to 2006…)

And the DMG hackers had still another ace up their sleeve.

MIT had a large body of research into natural-language understanding on the computer, stretching back at least as far as Joseph Weizenbaum and his 1966 ELIZA system. If that program was ultimately little more than an elaborate parlor trick, it did inspire other, more rigorous attempts at getting a program to parse plain English. Most famously, between 1968 and 1970 Terry Winograd developed a program he called SHRDLU, which simulated a model world made up of blocks. The user could ask the program to manipulate this world, shifting blocks from place to place, placing them on top of each other, and so on, all by typing in her requests as simple imperative English sentences. She could even ask the computer simple questions, about what was placed where, etc. Rather overvalued in its time (as so much AI research tended to be) as a step on the road to HAL, SHRDLU nevertheless showed that when held within a very restricted domain it is very possible for a program to parse and genuinely “understand” at least a reasonable subset of English. Working from the tradition of SHRDLU, the DMG hackers crafted an adventure-game parser that was arguably the first to be truly worthy of the term. While Adventure got by with simple pattern matching (as betrayed by the fact that “LAMP GET” works as well as “GET LAMP”), Zork would have a real understanding not only of verb and direct object, but also of preposition, indirect object, conjunction, punctuation, even article. Helping the process along was once again MDL, which as a language designed with AI research in mind had superb string-manipulation capabilities. The parser they ended up with is a remarkable creation indeed, one that would stand alone for several years — then as now an eternity in the world of computer science. But now we’re getting ahead of ourselves.
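
The gap between pattern matching and true parsing can be shown with a deliberately crude toy example. Nothing here resembles Blank's actual parser; it only illustrates the distinction the paragraph above draws. A pattern matcher hunts for any known verb and any known noun anywhere in the input (which is why "LAMP GET" works in Adventure), while even a simple parser assigns each word a grammatical role — verb, article, direct object, preposition, indirect object — and rejects input that doesn't fit a sentence shape:

```python
VERBS = {"GET", "PUT", "OPEN"}
NOUNS = {"LAMP", "MAILBOX", "LEAFLET"}
PREPOSITIONS = {"IN", "ON", "WITH"}
ARTICLES = {"THE", "A", "AN"}

def pattern_match(text):
    """Adventure-style: word order is ignored, so 'LAMP GET' works."""
    words = set(text.upper().split())
    verbs = words & VERBS
    nouns = words & NOUNS
    return (verbs.pop(), nouns.pop()) if verbs and nouns else None

def parse(text):
    """Zork-style (vastly simplified): words must fill grammatical roles.
    Accepts VERB [article] NOUN [PREPOSITION [article] NOUN]."""
    words = [w for w in text.upper().split() if w not in ARTICLES]
    if not words or words[0] not in VERBS:
        return None
    verb, rest = words[0], words[1:]
    if len(rest) >= 1 and rest[0] in NOUNS:
        if len(rest) == 1:
            return {"verb": verb, "object": rest[0]}
        if len(rest) == 3 and rest[1] in PREPOSITIONS and rest[2] in NOUNS:
            return {"verb": verb, "object": rest[0],
                    "preposition": rest[1], "indirect": rest[2]}
    return None
```

Under this scheme "put the leaflet in the mailbox" yields a full grammatical breakdown, while "lamp get" is rejected outright — a small taste of why a parser that genuinely models sentence structure can support far richer commands than keyword matching ever could.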

The road to Zork began in late May of 1977, when Dave Lebling put together a very simple parser and game engine quite similar to Adventure‘s, from which Marc Blank and Tim Anderson built their first four-room game as a sort of proof of concept. At this point Lebling went on vacation for two weeks, while Blank, Anderson, and Bruce Daniels hacked like crazy, crafting the basic structure of Zork as we know it to this day. The name itself was a nonsense word floating around MIT that one might use in place of something, shall we say, stronger in stressful situations: “Zork the bloody thing!” when a piece of code just wouldn’t work correctly, etc. The file holding the game-in-progress got named “Zork” as a sort of placeholder until someone came up with something better. Every programmer tends to have a few names like this which she uses for programs, variables, functions, etc., when she’s just experimenting and can’t be bothered to come up with something better. (My own go-to placeholder, for reasons too embarrassing and idiosyncratic to elaborate on here, has been “fuzzy” for the last 25 years.) In the case of Zork, though, a proper name was slow in coming. And so Zork the game remained for the first six months of its existence.

By the time Lebling returned from that vacation to resume working on the game, a solid foundation was in place. Everything about the design was modular, meaning not only that (as demonstrated above) it was easy to add more rooms, items, and puzzles, but also that parts of the underlying technology could be easily removed, improved, and inserted again. Most notably, the parser gradually progressed from a two-word job “almost as smart as Adventure‘s” to the state-of-the-art creation it eventually became, mostly thanks to the efforts of Blank, who obsessed over it to the tune of “40 or 50” iterations.

In later years Infocom would develop an elaborate if comedic history and mythology around Zork and its “Great Underground Empire,” but in these early days they were interested in the game’s world only as a setting for cool if ridiculously disparate scenery and, of course, puzzles to solve, very much in the tradition of Don Woods’s approach to Adventure. In fact, Zork‘s world paid homage to Adventure almost to the point of initially seeming like a remake. Like in Adventure, you start above ground next to a small house; like in Adventure, there is a small wilderness area to explore, but the real meat of the game takes place underground; like in Adventure, your goal is to collect treasures and return them to the house that serves as your base of operations; etc., etc. Only deeper in the game did Zork diverge and really take on its own character, with imaginative locations of its own and much more intricate puzzles enabled by that magnificent parser. Of course, these parts were also crafted later, when the development team was more experienced and when said parser was much better. I’ll be having a detailed look at Zork the game in its microcomputer rather than its PDP-10 incarnation, but if you’re interested in learning more about this original shaggy-dog implementation I’d encourage you to have a look at Jason Dyer’s detailed play-through.

Like Adventure, Zork ran on a DEC PDP-10. Unlike Adventure, however, it ran under the operating system which also hosted the MDL environment, the Incompatible Timesharing System (named with a bit of hacker humor as a sarcastic response to an earlier Compatible Timesharing System; once again see — sorry to keep beating this drum — Levy’s Hackers for a great account of its origins). ITS was largely unique to MIT, the institution that had developed it. There was something very odd about it: in extravagant (some would say foolhardy) tribute to the hacker tradition of total openness and transparency, it had no passwords — in fact, no security whatsoever. Absolutely anyone could log on and do what they pleased. This led to a substantial community of what the MIT hackers came to call “net randoms,” people with nothing to do with MIT but who were blessed with access to an ARPANET-connected computer somewhere who stopped by and rummaged through the systems just to see what all those crazy MIT hackers were up to. DMG’s machine had collected quite a community of randoms thanks to the earlier Trivia game. It didn’t take them long to find Zork, even though it was never officially announced anywhere, and get to work adventuring. Soon the game-in-progress was developing a reputation across the ARPANET. For the benefit of this community of players the development team started to place a copy of U.S. News and Dungeon Report in one of the first rooms, which detailed the latest changes and additions to this virtual world they were exploring. The randoms as well as other, more “legitimate” MIT-based users (John McCarthy, the father of AI, among them) served as a sort of extended beta-testing team; the implementers could see what they tried to do, not to mention what they complained about, and adjust their game to accommodate them. Many of the parser improvements in particular were undoubtedly driven by just this process; anyone who’s ever put a text adventure through beta testing knows that you just can’t predict the myriad ways people will try to say things.

Still, Zork‘s growing popularity raised obvious concerns about overloading the DMG’s PDP-10 system — which was funded by the Defense Department and theoretically needed for winning the Cold War, after all — with all of these gamers. Meanwhile, others were asking for their own copies of the game, to install on other machines. Although developed and used primarily under ITS, there was as it happened a version of the MDL environment that ran on yet another PDP-10 operating system, TOPS-20, first released by DEC in 1976 and positioned as a more advanced, user-friendly version of TOPS-10. Unlike ITS, TOPS-20 was widely used outside of MIT. The DMG hackers therefore modified Zork as necessary to run on TOPS-20 and began distributing it to any administrator who requested a copy. By that fall, machines all over the country were hosting Zork, and the maintainers had even set up an electronic mailing list to keep administrators aware of expansions and improvements.

The DMG hackers were generous, but not quite so generous as Don Woods had been with Adventure. They distributed Zork only as encrypted files that were runnable in an MDL environment but were not readable (and modifiable) as source code. They even went so far as to patch their famously insecure ITS development system, adding security to just the directory that stored the source. Hackers, however, won’t be denied, and soon one from DEC itself had penetrated the veil. From Infocom’s own official “History of Zork“:

[The security] was finally beaten by a system hacker from Digital: using some archaic ITS documentation (there’s never been any other kind), he was able to figure out how to modify the running operating system. Being clever, he was also able to figure out how our patch to protect the source directory worked. Then it was just a matter of decrypting the sources, but that was soon reduced to figuring out the key we’d used. Ted had no trouble getting machine time; he just found a new TOPS-20 machine that was undergoing final testing, and started a program that tried every key until it got something that looked like text. After less than a day of crunching, he had a readable copy of the source. We had to concede that anyone who’d go to that much trouble deserved it. This led to some other things later on.

About those “other things”:

At some point around the fall of 1977, the DMG hackers had decided that their creation really, really needed a “proper” name. Lebling suggested Dungeon, which excited no one (Lebling included), but no one could come up with anything better. And so Dungeon it was. It was shortly after this that the security breach just described took place — thus, the game that that DEC hacker recovered was not called Zork, but rather Dungeon. Shortly after that, MIT heard legal rumblings from, of all places, TSR, publishers of Dungeons and Dragons — and of a dungeon-crawling board game called simply Dungeon! TSR was always overzealous with lawsuits, and the consensus amongst the MIT lawyers that the DMG hackers consulted was that they didn’t have a legal leg to stand on. However, rather than get sucked into a lengthy squabble over a name none of them much liked in the first place, they decided to just revert to the much more memorable Zork. And so by the beginning of 1978 Dungeon became Zork once more, and retained that name forevermore.

Almost. Remember that source that “Ted” had liberated from MIT? Well, it made its way to another hacker at DEC, one Robert Supnik, who ported the whole thing to the more common and portable (if intrinsically vastly less suitable for text adventures) FORTRAN — a herculean feat that amazed even the DMG hackers. Since the game described in the MDL source he had access to was called Dungeon, Dungeon this version remained. Supnik originally did the port with an eye to getting Dungeon running on the DEC PDP-11 (not, as its name might seem to imply, a successor to the PDP-10, but rather a physically smaller, less powerful, less expensive machine). With Supnik’s FORTRAN source freely distributable, however, it was a short hop from the PDP-11 to other architectures. Indeed, during these early years Supnik’s Dungeon was probably more widely distributed and thus more commonly played than the DMG’s own Zork. When PCs appeared that could support it, Dungeon inevitably made its way there as well. Thus by the latter part of the 1980s the situation was truly baffling for those without knowledge of all this history: there was this free game called Dungeon which was strangely similar to the official commercial Zork games, which were in turn very similar to this other game, Adventure, available by then in at least a dozen free or commercial versions. To this day Supnik’s Dungeon is available alongside the free-at-last MDL source to the PDP-10 Zork.

Back at MIT, development continued on Zork proper, albeit at a gradually diminishing pace, through 1978. Apart from some minor bug fixing that would go on for another couple of years, the last bits of Zork were put into place in February of 1979. By this point the game had grown to truly enormous proportions: 191 rooms, 211 items, a vocabulary of 908 words including 71 distinct verbs (not counting synonyms). The implementers were just about out of new puzzle ideas and understandably a bit exhausted with the whole endeavor, and, as if that weren’t justification enough, they had completely filled the 1 MB or so of memory an MDL program was allowed to utilize. And so they set Zork aside and moved on to other projects.

The story could very well have finished there, with Zork passing into history as another, unusually impressive example of the text adventures that flourished on institutional machines for a few brief years after Adventure‘s release; Zork as another Mystery Mansion, Stuga (UPDATE: not quite; see Jason Dyer’s comment below), or HAUNT. It didn’t, though, thanks to the DMG’s very non-hackerish director, Al Vezza, who decided a few months later that the time was right to enter along with his charges the burgeoning new frontier of the microcomputer by starting a software company. Little did he realize where that decision would lead.

Update, July 27, 2023: The full source code of the PDP-10 Zork is now available for the dedicated and curious!


The Roots of Infocom

In November of 1980 Personal Software began running the advertisement above in computer magazines, plugging a new game available then on the TRS-80 and a few months later on the Apple II. It’s not exactly a masterpiece of marketing; its garish, amateurish artwork is defensible only in being pretty typical of the era, and the text is remarkably adept at elucidating absolutely nothing that might make Zork stand out from its text-adventure peers. A jaded adventurer might be excused for turning the page on Zork‘s “mazes [that] confound your quest” and “20 treasures” needing to be returned to the “Trophy Case.” Even Scott Adams, not exactly a champion of formal experimentation, had after all seen fit to move on at least from time to time from simplistic fantasy treasure hunts, and Zork didn’t even offer the pretty pictures of On-Line Systems’s otherwise punishing-almost-to-the-point-of-unplayability early games.

In fact, though, Zork represented a major breakthrough in the text-adventure genre — or maybe I should say a whole collection of breakthroughs, from its parser that actually displayed some inkling of English usage in lieu of simplistic pattern matching to the in-game text that for the first time felt crafted by authors who actually cared about the quality of their prose and didn’t find proper grammar and spelling a needless distraction. In one of my favorite parts of Jason Scott’s Get Lamp documentary, several interviewees muse about just how truly remarkable Zork was in the computing world of 1980-81. The consensus is that it was, for a brief window of time, the most impressive single disk you could pull out to demonstrate what your new TRS-80 or Apple II was capable of.

Zork was playing in a whole different league from any other adventure game, a fact that’s not entirely surprising given its pedigree. You’d never guess it from the advertisement above, but Zork grew out of the most storied area of the most important university in computer-science history: MIT. In fact, Zork‘s pedigree is so impressive that it’s hard to know where to begin and harder to know where to end in describing it, hard to avoid getting sucked into an unending computer-science version of “Six Degrees of Kevin Bacon.” To keep things manageable I’ll try as much as I can to restrict myself to people directly involved with Zork or Infocom, the company that developed it. So, let’s begin with Joseph Carl Robnett Licklider, a fellow who admittedly had more of a tangential than direct role in Infocom’s history but who does serve as an illustration of the kind of rarified computer-science air Infocom was breathing.

Born in 1915 in St. Louis, Licklider was a psychologist by trade, but had just the sort of restless intellect that Joseph Weizenbaum would lament the (perceived) loss of in a later generation of scholars at MIT. He received a triple BA degree in physics, mathematics, and psychology from St. Louis’s Washington University at age 22, having also flirted with chemistry and fine arts along the way. He settled down a bit to concentrate on psychology for his MA and PhD, but remained consistently interested in connecting the “soft” science of psychology with the “hard” sciences and with technology. And so, when researching the psychological component of hearing, he learned more about the physical design of the human and animal auditory nervous systems than do many medical specialists. (He once described it as “the product of a superb architect and a sloppy workman.”) During World War II, research into the effects of high altitude on bomber crews led him to get equally involved with the radio technology they used to communicate with one another and with other airplanes.

After stints at various universities, Licklider came to MIT in 1950, initially to continue his researches into acoustics and hearing. The following year, however, the military-industrial complex came calling on MIT to help create an early-warning network for the Soviet bombers they envisioned dropping down on America from over the Arctic Circle. Licklider joined the resulting affiliated institution, Lincoln Laboratory, as head of its human-engineering group, and played a role in the creation of the Semi-Automatic Ground Environment (SAGE), by far the most ambitious application of computer technology conceived up to that point and, for that matter, for many years afterward. Created by MIT’s Lincoln Lab with IBM and other partners, the heart of SAGE was a collection of IBM AN/FSQ-7 mainframes, physically the largest computers ever built (a record that they look likely to hold forever). The system compiled data from many radar stations to allow operators to track a theoretical incoming strike in real time. They could scramble and guide American aircraft to intercept the bombers, enjoying a bird’s eye view of the resulting battle. Later versions of SAGE even allowed them to temporarily take over control of friendly aircraft, guiding them to the interception point via a link to their autopilot systems. SAGE remained in operation from 1959 until 1983, cost more than the Manhattan Project that had opened this whole can of nuclear worms in the first place, and was responsible for huge advances in computer science, particularly in the areas of networking and interactive time-sharing. (On the other hand, considering that the nuclear-bomber threat SAGE had been designed to counter had been largely superseded by the ICBM threat by the time it went operational, its military usefulness is debatable at best.)

During the 1950s most people, including even many of the engineers and early programmers who worked on them, saw computers as essentially huge calculators. You fed in some numbers at one end and got some others out at the other, whether they be the correct trajectory settings for a piece of artillery to hit some target or other or the current balances of a million bank customers. As he watched early SAGE testers track simulated threats in real time, however, Licklider was inspired to a radical new vision of computing, in which human and computer would actively work together, interactively, to solve problems, generate ideas, perhaps just have fun. He took these ideas with him when he left the nascent SAGE project in 1953 to float around MIT in various roles, all the while drifting slowly away from traditional psychology and toward computer science. In 1957 he became a full-time computer scientist when he (temporarily, as it turned out) left MIT for the consulting firm Bolt Beranek and Newman, a company that would play a huge role in the development of computer networking and what we’ve come to know as the Internet. (Loyal readers of this blog may recall that BBN is also where Will Crowther was employed when he created the original version of Adventure as a footnote to writing the code run by the world’s first computerized network routers.)

Licklider, who insisted that everyone, even his undergraduate students, just call him “Lick,” was as smart as he was unpretentious. Speaking in a soft Missouri drawl that could obscure the genius of some of his ideas, he never seemed to think about personal credit or careerism, and possessed not an ounce of guile. When a more personally ambitious colleague stole one of his ideas, Lick would just shrug it off, saying, “It doesn’t matter who gets the credit; it matters that it gets done.” Everyone loved the guy. Much of his work may have been funded by the realpolitik of the military-industrial complex, but Lick was by temperament an idealist. He became convinced that computers could mold a better, more just society. In it, humans would be free to create and to explore their own potential in partnership with the computer, which would take on all the drudgery and rote work. In a surprising prefiguring of the World Wide Web, he imagined a world of “home computer consoles” connected to a larger network that would bring the world into the home — interactively, unlike the passive, corporate-controlled medium of television. He spelled out all of these ideas carefully in a 1960 paper, “Man-Computer Symbiosis,” staking his claim as one of a long line of computing utopianists that would play a big role in the development of more common-man friendly technologies like the BASIC programming language and eventually of the microcomputer itself.

In 1958, the U.S. government formed the Advanced Research Projects Agency in response to alleged Soviet scientific and technological superiority in the wake of their launch of Sputnik, the world’s first satellite, the previous year. ARPA was intended as something of a “blue-sky” endeavor, pulling together scientists and engineers to research ideas and technology that might not be immediately applicable to ongoing military programs, but that might just prove to be in the future. It became Lick’s next stop after BBN: in 1962 he took over as head of its “Information Processing Techniques Office.” He remained at ARPA for just two years, but is credited by many with shifting the agency’s thinking dramatically. Previously ARPA had focused on monolithic mainframes operating as giant batch-processing “answer machines.” From Where Wizards Stay Up Late:

The computer would be fed intelligence information from a variety of human sources, such as hearsay from cocktail parties or observations of a May Day parade, and try to develop a best-guess scenario on what the Soviets might be up to. “The idea was that you take this powerful computer and feed it all this qualitative information, such as ‘The air force chief drank two martinis,’ or ‘Khrushchev isn’t reading Pravda on Mondays,’” recalled Ruina. “And the computer would play Sherlock Holmes and conclude that the Russians must be building an MX-72 missile or something like that.”

“Asinine kinds of things” like this were the thrust of much thinking about computers in those days, including plenty in prestigious universities such as MIT. Lick, however, shifted ARPA in a more manageable and achievable direction, toward networks of computers running interactive applications in partnership with humans — leave the facts and figures to the computer, and leave the conclusions and the decision-making to the humans. This shift led to the creation of the ARPANET later in the decade. And the ARPANET, as everyone knows by now, eventually turned into the Internet. (Whatever else you can say about the Cold War, it brought about some huge advances in computing.) The humanistic vision of computing that Lick championed, meanwhile, remains viable and compelling today as we continue to wait for the strong AI proponents to produce a HAL.

Lick returned to MIT in 1968, this time as the director of the legendary Project MAC. Formed in 1963 to conduct research for ARPA, MAC stood for either (depending on whom you talked to) Multiple Access Computing or Machine Aided Cognition. Those two names also define the focus of its early research: into time-shared systems that let multiple users share resources and use interactive programs on a single machine; and into artificial intelligence, under the guidance of the two most famous AI proponents of all, John McCarthy (inventor of the term itself) and Marvin Minsky. I could write a few (dozen?) more posts on the careers and ideas of these men, fascinating, problematic, and sometimes disturbing as they are. I could say the same about many other early computing luminaries at MIT with whom Lick came into close contact, such as Ivan Sutherland, inventor of the first paint program and, well, pretty much the whole field of computer-graphics research as well as the successor to his position at ARPA. Instead, I’ll just point you (yet again) to Steven Levy’s Hackers for an accessible if necessarily incomplete description of the intellectual ferment at 1960s MIT, and to Where Wizards Stay Up Late by Matthew Lyon and Katie Hafner for more on Lick’s early career as well as BBN, MIT, and our old friend Will Crowther.

Project MAC split into two in 1970, becoming the MIT AI Laboratory and the Laboratory for Computer Science (LCS). Lick stayed with the latter as a sort of grandfather figure to a new generation of young hackers that gradually replaced the old guard described in Levy’s book as the 1970s wore on. His shrewd mind was always ready to take up their ideas, and, thanks to his network of connections in government and industry, he could always get funding for them.

LCS consisted of a number of smaller working groups, one of which was known as the Dynamic Modeling Group. It’s oddly difficult to pin any of these groups down to a single purpose. Indeed, it’s not really possible to do so even for the AI Lab and LCS themselves; plenty of research that could be considered AI work happened at LCS, and plenty that did not comfortably fit under that umbrella took place at the AI Lab. (For instance, Richard Stallman developed the ultimate hacker text editor, EMACS, at the AI Lab — a worthy project certainly but hardly one that had much to do with artificial intelligence.) Groups and the individuals within them were given tremendous freedom to hack on any justifiable projects that interested them (with the un-justifiable of course being left for after hours), a big factor in LCS and the AI Lab’s becoming such beloved homes for hackers. Indeed, many put off graduating or ultimately didn’t bother at all, so intellectually fertile was the atmosphere inside MIT in contrast to what they might find in any “proper” career track in private industry.

The director of the Dynamic Modeling Group was a fellow named Albert (Al) Vezza; he also served as an assistant director of LCS as a whole. And here we have to be a little bit careful. If you know something about Infocom’s history already, you probably recognize Vezza as the uptight corporate heavy of the story, the guy who couldn’t see the magic in the new medium of interactive fiction that the company was pursuing, who insisted on trivializing the game division’s work as a mere source of funding for a “serious” business application, and who eventually drove the company to ruin with his misplaced priorities. Certainly there’s no apparent love lost between the other Infocom alumni and Vezza. An interview with Mike Dornbrook for an MIT student project researching Infocom’s history revealed the following picture of Vezza at MIT:

Where Licklider was charismatic and affectionately called “Lick” by his students, Vezza rarely spoke to LCS members and often made a beeline from the elevator to his office in the morning, shut the door, and never saw anyone. Some people at LCS were unhappy with his managerial style, saying that he was unfriendly and “never talked to people unless he had to, even people who worked in the Lab.”

On the other hand, Lyon and Hafner have this to say:

Vezza always made a good impression. He was sociable and impeccably articulate; he had a keen scientific mind and first-rate administrative instincts.

Whatever his failings, Vezza was much more than an unimaginative empty suit. He in fact had a long and distinguished career which he largely spent furthering some of the ideas first proposed by Lick himself; he appears in Lyon and Hafner’s book, for instance, because he was instrumental in organizing the first public demonstration of the nascent ARPANET’s capabilities. Even after the Infocom years, his was an important voice on the World Wide Web Consortium that defined many of the standards that still guide the Web today. Certainly it’s a disservice to Vezza that his Wikipedia page consists entirely of his rather inglorious tenure at Infocom, a time he probably considers little more than a disagreeable career footnote. That footnote is of course the main thing we’re interested in, but perhaps we can settle for now on a picture of a man who had more of the administrator or bureaucrat than the hacker in him, who was more of a pragmatist than an idealist — and who had some trouble relating to his charges as a consequence.

Many of those charges had names that Infocom fans would come to know well: Dave Lebling, Marc Blank, Stu Galley, Joel Berez, Tim Anderson, etc., etc. Like Lick, many of these folks came to hacking from unexpected places. Lebling, for instance, obtained a degree in political science before getting sucked into LCS, while Blank commuted back and forth between Boston and New York, where he somehow managed to complete medical school even as he hacked like mad at MIT. One thing, however, most certainly held true of everyone: they were good. LCS didn’t suffer fools gladly — or at all.

One of the first projects of the DMG was to create a new programming language for their own projects, which they named with typical hacker cheekiness “Muddle.” Muddle soon became MDL (MIT Design Language) in response to someone (Vezza?) not so enamoured with the DMG’s humor. It was essentially an improved version of an older programming language developed at MIT by John McCarthy, one which was (and remains to this day) the favorite of AI researchers: LISP.
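To give a sense of the family resemblance, here’s a tiny illustrative snippet of my own devising — not actual DMG code. Where LISP wraps every function call in parentheses, MDL uses angle brackets for application, with a leading period reading a variable’s local value:

```
;"LISP would write: (DEFUN SQUARE (X) (* X X))"
<DEFINE SQUARE (X) <* .X .X>>

<SQUARE 5>    ;"evaluates to 25"
```

The same basic shape — code as nested expressions operating on symbols and lists — carried over from LISP, along with MDL’s additions like stronger data typing, which made it attractive for large interactive programs.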

With MDL on hand, the DMG took on a variety of projects, individually or cooperatively. Some of these had real military applications to satisfy the folks who were ultimately funding all of these shenanigans; Lebling, for instance, spent quite some time on computerized Morse-code recognition systems. But there were plenty of games, too, in some of which Lebling was also a participant, including the best remembered of them all, Maze. Maze ran over a network, with up to 8 Imlac PDS-1s, very simple minicomputers with primitive graphical capabilities, serving as “clients” connected to a single DEC PDP-10 “server.” Players on the PDS-1s could navigate around a shared environment and shoot at each other — the ancestor of modern games like Counter-Strike. Maze became a huge hit, and a real problem for administrative types like Vezza; not only did a full 8-player game stretch the PDP-10 server to its limit, but it had a tendency to eventually crash outright a machine that others needed for “real” work. Vezza demanded again and again that it be removed from the systems, but trying to herd the cats at DMG was pretty much a lost cause. Amongst other “fun” projects, Lebling also created a trivia game which allowed users on the ARPANET to submit new questions, leading to an eventual database of thousands.

And then, in the spring of 1977, Adventure arrived at MIT. Like computer-science departments all over the country, work there essentially came to a standstill while everyone tried to solve it; the folks at DMG finally got the “last lousy point” with the aid of a debugging tool. And with that accomplished, they began, like many other hackers in many other places, to think about how they could make a better Adventure. DMG, however, had some tools to hand that would make them almost uniquely suited to the task.
