Monthly Archives: September 2011

Jobs and Woz

As I write this the news media and the blogosphere are just tailing off from an explosion of commentary and retrospectives triggered by an obviously ill Steve Jobs stepping down at last from his post as Apple’s CEO. The event marks the end of an era. With Bill Gates having retired from day-to-day involvement with Microsoft a few years ago, the two great survivors from those primordial computing days of the late 1970s and early 1980s no longer run the iconic companies that they began to build all those years ago.

For many, Bill and Steve embodied two fundamentally opposing approaches to technology. On one side was Gates, the awkwardly buttoned-down overachiever who never even as a multi-billionaire seemed quite comfortable in his own skin, wielding spreadsheets and databases while obsessing over Microsoft’s latest financial reports. On the other was Jobs, the epitome of California cool who never met a person he couldn’t charm, wielding art packages and music production software while talking about how technology could allow us to live better, more elegant lives. These attitudes were mirrored in the products of their respective companies. In his essay In the Beginning Was the Command Line, Neal Stephenson compared the Macintosh to a sleek European sedan, while Windows was a station wagon which “had all the aesthetic appeal of a Soviet worker housing block; it leaked oil and blew gaskets, and [of course] it was an enormous success.” These contrasts — or should we say caricatures? — run deep. They were certainly not lost on Apple itself when it made its classic series of “I’m a Mac / I’m a PC” commercials to herald its big post-millennial Jobs-helmed comeback.

Even in the late 1970s, when he was a very young man, Jobs had an intuitive feeling for the way that technology ought to work and an aesthetic eye that was lacking in just about every one of the nerds and hackers that made up the rest of the early microcomputer industry. Almost uniquely among his contemporaries, Jobs had a vision of where all this stuff could go, a vision of a technological future that would appeal not just to PC guy in the commercials above but also to Mac guy. The industry desperately needed a guy like Jobs — good-looking, glib, articulate, with an innate sense of aesthetics and design — to serve as an ambassador between the hackers and ordinary people. Jobs was the kind of guy who might visit a girlfriend’s home for dinner and walk away with a check to fund his startup business from the father and a freshly baked cake from the mother. He made all these hackers with their binary code and their soldering irons seem almost normal, and almost (if only by transference) kind of cool.

There’s a trope that comes up again and again amongst the old-timers who remember those days and the histories that are written of them: that it was a fundamentally innocent time, when hackers hacked just for the joy of it and accidentally created the modern world. In Triumph of the Nerds, Jobs’s partner in founding Apple, Steve Wozniak, said:

“It was just a little hobby company, like a lot of people do, not thinking anything of it. It wasn’t like we both thought it was going to go a long ways. We thought we would both do it for fun, but back then there was a short window in time where one person who could sit down and do some neat, good designs could turn them into a huge thing like the Apple II.”

I believe Wozniak, a hacker’s hacker if ever there was one. To imagine that a spirit of blissful hacker amity united those guiding the companies of the early industry, though, is deeply mistaken. As shown by the number of companies and computer models that had already come and gone by even 1982, the PC industry was a cutthroat, hyper-competitive place.

In the same video, Jobs has this to say about those days:

“I was worth over a million dollars when I was 23, and over ten million dollars when I was 24, and over a hundred million dollars when I was 25, and it wasn’t that important, because I never did it for the money.”

In contrast to Wozniak’s comments, there’s a note of disingenuousness here. It seems suspicious that someone for whom money was so unimportant should have such a specific recollection of his net worth at exact points in time; something tells me Wozniak would be hard pressed to come up with similar figures. I mentioned once before on this blog how Jobs cheated his best friend Wozniak out of a $5000 bonus for designing Breakout on his behalf for Atari. Jobs was of course a very young man at the time, and we’d all like to have things back from our youth, but this moment always struck me as one of those significant markers of character that says something about who a person fundamentally is. Wozniak might dismiss the incident in his autobiography by saying, “We were just kids, you know,” but I can’t imagine him pulling that stunt on Jobs. In another of those markers of character, consider this: upon designing the computer that would come to be known as the Apple I and founding a company with Jobs to market it, Wozniak suddenly recalled the employment contract he had signed with Hewlett Packard, which stated that all of his engineering work belonged to HP during the term of his employment, whether created in the office or at home; honest soul that he was, he tried to give his computer design to HP. Much to Jobs’s relief, HP just looked at it bemusedly and told Wozniak to knock himself out trying to sell the thing on his own.

In the case of Jobs, when we drill down past the veneer of California cool and trendy Buddhism we find a man as obsessively competitive as Gates; both men were the most demanding of bosses in their younger days, who belittled subordinates and deliberately fomented discord in the name of keeping everyone at their competitive best. Gates, however, lacked the charm and media savvy that kept Jobs the perpetual golden boy of technology. Even when he was very young, people spoke about the “reality distortion field” around Jobs that seemed to always convince others to see things his way and do his bidding.

And if Jobs isn’t quite the enlightened New Man whose image he has so carefully crafted, there’s a similarly subtle cognitive dissonance about his company. Apple’s contemporary products are undeniably beautiful in both their engineering and their appearance, and they’re even empowering in their way, but this quality only goes so far. To turn back to Stephenson again, these sleek machines have “their innards hermetically sealed, so that how they work is something of a mystery.” Empowering they may be, but only on Apple’s terms. In another sense, they foster dependence — dependence on Apple — rather than independence. And then, of course, all of that beauty and elegance comes at a premium price, such that they become status symbols. The idea of a computing device, whatever its price, becoming a status symbol anywhere but inside the community of nerds would of course have been inconceivable in 1980 — so that’s progress of a sort, and largely down to Jobs’s influence. Still, it’s tempting sometimes to compare the sealed unknowability of Apple’s products with the commodity PCs that once allowed the “evil” Bill Gates to very nearly take over the computing world entirely. A Windows-based PC may have been a domestic station wagon or (in another popular analogy) a pickup truck, but like those vehicles it was affordable to just about everyone, and it was easy to pop the hood open and tinker. Apple’s creations required a trip to the metaphorical exotic car dealership just to have their oil changed. A Macintosh might unleash your inner artist and impress the coffee-house circuit, but a PC could be purchased dirt cheap — or assembled from cast-off parts — and set up on the savannah to control the pumps that keep a village supplied with drinking water. There’s something to be said for cheap, ubiquitous, and deeply uncool commodity hardware; something to be said for the idea of (as another microcomputer pioneer put it) “computers for the masses, not the classes.”

A mention of Linux might seem appropriate at this juncture, as might a more fine-grained distinction between hardware and software, but these metaphors are already threatening to buckle under the strain. Let’s instead try to guide this discussion back to Jobs and Woz, an odd couple if ever there was one.

Wozniak was a classic old-school hacker. Even during high school in the late 1960s, he fantasized about computers the way that normal teenagers obsessed over girls and cars. His idea of fun was to laboriously write out programs in his notebooks, programs which he had no computer to run, and to imagine them in action. While other boys hoarded girlie magazines, Woz (as everyone called him) collected manuals for each new computer to hit the market — sometimes so he could redesign them better, more efficiently, in his imagination.

In 1970, during a working sabbatical of sorts from university, the 20-year-old Woz met the 15-year-old Steve Jobs. Despite the age difference, they became fast friends, bonding over a shared love of technology, music, and practical jokes. Soon they discovered another mutual obsession: phone phreaking, hacking the phone system to let one call long distance for free. The pair’s first joint business venture — instigated, as these sorts of things always were, by Jobs — was selling homemade “blue boxes” that could generate the tones needed to mimic a long-distance carrier.

Jobs was… not a classic old-school hacker. He was, outwardly at least, a classic hippie with a passion for Eastern philosophy and Bob Dylan, a “people person” with little patience for programming or engineering. Nevertheless, the reality distortion field allowed him to talk his way into a technician’s job at rising arcade-game manufacturer Atari. He even got Atari to give him a summer off and an airline ticket to India to do “spiritual research.” In spite of it all, though, the apparently clueless Jobs just kept delivering the goods. The reason, of course, was Woz, who by then was working full-time for Hewlett Packard during the day, then doing Jobs’s job for him by night. The dynamic duo’s finest hour at Atari was the arcade game Breakout. In what, at least from the outside, has all the markings of a classic codependent relationship, poor Woz was told that they had just four days to get the design done; in reality, Jobs just wanted to get finished so he could jet off to attend the harvest at an apple orchard commune in Oregon. (You just can’t make some of this stuff up…) Woz met the deadline by going without sleep for four days straight, and did it using such an impossibly low number of chips that the design ended up being unmanufacturable. Atari engineer Al Alcorn:

“Ironically, the design was so minimized that normal mere mortals couldn’t figure it out. To go to production, we had to have technicians testing the things so they could make sense of it. If any one part failed, the whole thing would come to its knees. And since Jobs didn’t really understand it and didn’t want us to know that he hadn’t done it, we ended up having to redesign it before it could be shipped.”

But Jobs made it to the apple festival, and he had that $5000 bonus he’d neglected to tell Woz about to spend there. Even in 1984 Woz still believed that he and Jobs had earned only $700 for a design that became the big arcade hit of 1976.

We can really only speculate about what caused Woz to put up with treatment like this — but speculation is fun, so let’s have at it. Woz was one of those good-hearted sorts who want to like and be liked, but who, due to some failure of empathy or just from sheer trying too hard, are persistently just enough out of sync in social situations to make everything a bit awkward. Woz always seemed to laugh a little bit too loud or too long, couldn’t quite sense when the time was right to stop reciting from his store of Polish jokes, didn’t recognize when his endless pranks were about to cross the line from harmless fun into cruelty. For a person like this the opportunity to hang out with a gifted social animal like Jobs must have been hard to resist, no matter how unequal the relationship might seem.

And it wasn’t entirely one way — not at all, actually. When Woz was hacking on the project that would become the Apple I, he lusted after a new type of dynamic RAM chip, but couldn’t afford any. Jobs just called up the manufacturer and employed the reality distortion field to talk them into sending him some “samples.” Jobs was Woz’s enabler, in the most positive sense; he had a genius for getting things done. In fact, in the big picture it is Woz who is in Jobs’s debt. One senses that Jobs would have made his mark on the emerging microcomputer industry even if he had never met Woz — such was his drive. To be blunt, Jobs would have found another Woz. Without Jobs, though, Woz would have toiled away — happily, mind you — in some obscure engineering lab or other his entire life, quietly weaving his miniaturized magic out of silicon, and retired with perhaps a handful of obscure patents to mark his name for posterity.

Unsurprisingly given their backgrounds and interests, Woz and Jobs were members of the famous Homebrew Computer Club, Woz from the very first meeting on March 5, 1975. There, the social hierarchy was inverted: it was Woz, with his intimate knowledge of computers, who was the star, and Jobs who was the vaguely uncomfortable outsider.

Woz designed the machine that became the Apple I just for fun. It was unusual within Homebrew in that it used the new MOS 6502 CPU rather than the Intel 8080 of the original Altair, for the very good reason that Woz didn’t have a whole lot of money to throw around and the 6502 cost $25 versus $175 for the 8080. The process was almost committee-driven; Woz, who had the rare and remarkable gift of being without ego when it came to matters of design, would bring his work-in-progress to each biweekly Homebrew meeting, explaining what he’d done, describing where he was having problems, and soliciting advice and criticism. What he ended up with was pretty impressive. The machine could output to a television screen, as opposed to the flashing lights of the Altair; it used a keyboard, as opposed to toggle switches; and it could run a simple BASIC interpreter programmed by Woz himself. Woz said he “designed the Apple I because I wanted to give it away free for other people. I gave out schematics for building my computer at the next meeting I attended.”

Steve Jobs put a stop to those dangerous tendencies. He stepped in at this point to convince Woz to do what he never would have done on his own: to turn his hacking project into a real product provided by a real company. Woz sold his prized HP calculator and Jobs his Volkswagen van (didn’t someone once say that stereotypes are so much fun because they’re so often true?) to form Apple Computer on April 1, 1976. The Apple I was not a fully assembled computer like the trinity of 1977, but it was an intermediate step between the Altair and them; instead of a box of loose chips, you got a finished, fully soldered motherboard to build onto with your own case, power supply, keyboard, and monitor. The owner of an important early computer store, The Byte Shop, immediately wanted to buy 50 of them. Problem was, Jobs and Woz didn’t have the cash to buy the parts to make them. No problem; Jobs employed the reality distortion field to convince a wholesale electronics firm to give these two hippies tens of thousands of dollars in hardware in exchange for a promise to pay them in one month. Apple ended up selling 175 Apple Is over the next year, each assembled by hand in Jobs’s parents’ garage by Jobs and Woz and a friend or family member or two.

While that was going on, Woz was designing his masterpiece: the Apple II.


Posted by on September 9, 2011 in Digital Antiquaria, Interactive Fiction



Binning the Trash-80

The microcomputer landscape of 1980 looked very different than it had when the trinity of 1977 first hit the scene. The hackers and early adopters who first made the TRS-80 a success were a step closer to sane than the soldering-iron-wielding crazies who had constructed Altairs in their garages out of a set of diagrams and a loose pile of chips, but only a step. Owning and operating a computer was still expensive and difficult, and the question on the lips of wives and girlfriends across the country — “But what is it really good for?” — did not have any particularly strong answers. By 1980, though, that was changing, sufficiently so in fact that segments of the population were beginning to purchase computers not out of fascination with the technology itself, but rather because of what the technology would allow them to accomplish. That was due to the work of all those early adopters, who hacked like mad to create useful things that would justify their time in the terms that matter most in a market economy, dollars and cents, and thus in turn buy them yet more time to hack.

The most celebrated of these early killer apps today, perhaps due to its having been featured in the Triumph of the Nerds documentary, is VisiCalc, the spreadsheet program whose basic approach is still echoed in the Microsoft Excel we all know and love (?) today. Introduced in late 1979, it gave accountants, small-business owners, and even home users compelling reasons to own a microcomputer — whether to calculate taxes or accounts receivable and payable, or just to keep the checkbook balanced. But there are other examples. The first crude word processing application was called The Electric Pencil; it predated even the trinity of 1977, appearing for the early kit computers in December of 1976. It took WordStar, released in September of 1978, to refine the concept into a program flexible and powerful enough to begin to replace the expensive specialized word-processing machines found on secretaries’ desks around the country. dBase, the first programmable relational database for microcomputers, made its first appearance in 1979. And while they were seldom openly mentioned as a reason to buy these early computers, games were always present as a sort of guilty pleasure and secret motivator. They were still crude and limited in 1980, but growing by leaps and bounds in both ambition and sales as the first specialized entertainment publishers such as Adventure International got off the ground, and as new microcomputers much more suited for play began to appear in the wake of the Atari VCS game-console sensation which began sweeping the country in earnest during the holiday season of 1979.

Ah, yes, the new machines. As new applications showed how useful and/or entertaining computers could be in both businesses and homes and as their sales figures responded, plenty of new players came rushing into the market. Some, such as the Exidy Sorcerer and Texas Instruments 99/4, found little traction, becoming mere historical footnotes and modern collector’s items. Others, though, heralded major new technological and cultural developments. We’ll get to these at some point, but for this post let’s see if we can bring some sort of order — i.e., some categories — to the crazy quilt of microcomputers available by 1980. Oddities like the TI 99/4 (the world’s first 16-bit microcomputer based on a CPU of TI’s own design) aside, most computers were based on one of two 8-bit CPU architectures.

First there was the Intel 8080, the chip at the heart of the original Altair kit computer and its contemporaries, and the Z80, a mostly compatible CPU from Zilog that nevertheless offered a more flexible, efficient design; this, you may recall, was the chip Tandy chose for the TRS-80. Apart from the TRS-80, which for better and (as we shall shortly see) for worse remained largely its own thing, these machines generally ran the first widespread platform-agnostic operating system for microcomputers, CP/M (Control Program for Microcomputers). Developed by Gary Kildall at the very dawn of the microcomputer era and published by his company Digital Research, CP/M was the MS-DOS — or, if you like, the Microsoft Windows — of this early era, a de facto if not official standard that allowed machines from a huge variety of makers to share software and information. (There is also a more tangible link between CP/M and MS-DOS: depending on whom you talk to, the original MS-DOS from 1981 was either “inspired by” CP/M or an outright unauthorized reverse engineering of the earlier OS. But that subject will doubtlessly come up again in later posts…) For a computer to run CP/M, it required two things: an Intel 8080 or Zilog Z80 CPU, and a certain standard bus design for communicating with its disk drives and other peripherals, known as the S-100 — a design which had its origins as far back as the original Altair. (UPDATE: As Jonno points out in the comments, an S-100 bus was not a strict requirement for CP/M.)

CP/M and the Intel- and Zilog-based architectures on which it ran became the standard environment for “serious” microcomputing of the late 1970s and early 1980s, the kind done in corporate offices and small businesses. WordStar and dBase were both born there, and VisiCalc, although conceived on the Apple II, quickly found its way there. CP/M had, however, no graphics capabilities at all and only limited support for real-time operations, making it problematic as a platform for many types of games and even educational software. It also relied upon the existence of at least one disk drive on its host platform at a time when such devices tended to be very pricy. These factors made CP/M and the 8080 a poor fit for the less expensive, usually cassette-based computers generally chosen by home users. That market was dominated by another hardware architecture, that of the MOS Technologies 6502 CPU.

When the 6502 first appeared in 1975, MOS was a tiny independent chip-maker, but that changed when Commodore purchased the entire company in late 1976. This move, one of the smartest that Commodore head Jack Tramiel ever made, left Commodore in the enviable position of making money not only when it sold its own machines such as the PET, but also every time a rival purchased 6502s for its own products. Said rivals initially included only Apple with its Apple II line and a number of kit-based computers from various small manufacturers, but that would change soon enough.

A CP/M equivalent for 6502-based machines was never developed, meaning that they remained largely incompatible with one another. BASIC did serve as a partial lingua franca, as virtually all of these machines housed a version of Microsoft’s industry-standard BASIC in their ROMs, but there was enough variation from implementation to implementation that most programs needed at least some customizing. And of course when one progressed beyond BASIC to assembly language to take full advantage of everything a 6502-based machine had to offer — especially graphics and sound, which capabilities varied wildly from model to model — one was faced with essentially coding everything from scratch for each machine one wished to support. Crazy times — although with the ever-increasing proliferation of incompatible mobile computing devices in our own times it’s starting to look like 1980 all over again.
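To make that portability headache concrete, here is a small illustrative sketch of my own (the video-memory addresses are commonly cited figures for these machines, but treat them as assumptions of this sketch, not claims from the article). Even an operation as trivial as putting a character into the top-left corner of the screen meant a different magic number on every machine:

```python
# Hypothetical sketch: why "industry-standard" BASIC still needed
# per-machine customizing. Each machine mapped its video memory at a
# different address, so the same logical operation became a different
# POKE on every platform.

# Video-RAM base addresses (commonly cited; assumptions for this sketch):
SCREEN_BASE = {
    "Commodore PET": 32768,  # $8000
    "Apple II":      1024,   # $0400, text page 1
    "VIC-20":        7680,   # $1E00, unexpanded machine
}

def poke_top_left(machine: str, value: int) -> str:
    """Emit the machine-specific BASIC line for one abstract operation."""
    return f"10 POKE {SCREEN_BASE[machine]},{value}"

for machine in SCREEN_BASE:
    print(poke_top_left(machine, 42))
```

And that is before character-code differences enter the picture: the byte that draws an “A” on one machine’s screen is not necessarily the byte that draws it on another’s.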

What the 6502 world lost in compatibility it gained in flexibility. Freed from the need to work through a comparatively complex and inefficient OS like CP/M, programmers could code right to the metal on these machines, manipulating every element of the hardware directly for maximum efficiency. Further, the 6502-based machines, being generally aimed at the home and education markets, tended to feature the graphics and sound capabilities that were missing from the bland, textual world of CP/M; the Apple II, for instance, was the only member of the trinity of 1977 with support for proper bitmap graphics, a subject I’ll begin to discuss in more detail in my next post.

But now you might be wondering where all of this left the TRS-80, which fit neatly into neither of the two categories just described. Although the TRS-80 was built around the Z80 CPU, Radio Shack had chosen in the name of penny pinching not to implement the S-100 bus design. (UPDATE: As happens from time to time around these parts, this is not correct. Actually, the problem involved the memory map of the original TRS-80, in which ROM preceded RAM; a CP/M machine required the reverse. Thanks to Jonno for pointing this out in the comments.) This made CP/M a nonstarter. Despite being a huge success in its early years and still having the largest installed base of any microcomputer, the TRS-80’s future was, at least in retrospect, already clouded in 1980. Its incompatibility with CP/M left it cut off from the quickly growing base of serious business software found on that OS. In spite of the TRS-80’s relatively cheap price, Radio Shack’s reputation as purveyors of cheap junk for the masses did little to attract business users, and in a classic chicken-or-the-egg scenario this lack of business users discouraged developers from porting their products from CP/M to the little oddball Tandy machine. And in the other half of the microcomputer market, the 6502-dominated world of games machines and hobbyist computing, the TRS-80 was also looking like an increasingly poor fit with its almost complete lack of graphics and absolutely complete lack of sound. The arrival of the Atari 400 and 800, colorful 6502-based machines with superb graphics and sound for the time, and, a bit later in early 1981, the Commodore VIC-20, a much less capable machine in comparison but one nevertheless sporting color graphics and sound for an unprecedentedly low price, were particularly ominous signs.
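The memory-map issue can be sketched in a few lines. CP/M assumes writable RAM beginning at address 0000h, where its page of system vectors lives, while the Model 1 wired its BASIC ROM into the bottom of the address space. Here is a minimal model of my own, with region boundaries that are only approximate:

```python
# Minimal sketch (mine, not from the article): CP/M needs writable
# memory at the bottom of the address space; the TRS-80 Model 1 had
# ROM there instead. Region boundaries below are approximate.

def supports_cpm(memory_map):
    """memory_map: list of (start, end, kind) tuples, kind 'ROM' or 'RAM'.
    Returns True only if the region at address 0 is RAM."""
    for start, end, kind in memory_map:
        if start == 0x0000:
            return kind == "RAM"
    return False

trs80_model1 = [
    (0x0000, 0x2FFF, "ROM"),  # Level II BASIC ROM at the bottom
    (0x4000, 0xFFFF, "RAM"),
]
cpm_machine = [
    (0x0000, 0xFFFF, "RAM"),  # boot ROM banked out after startup
]

print(supports_cpm(trs80_model1))  # False
print(supports_cpm(cpm_machine))   # True
```

This is also why later machines that wanted CP/M compatibility resorted to bank-switching: the ROM is only needed at power-on and can be switched out of the address space afterward, leaving RAM at the bottom.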

While the wisdom of many of its moves is debatable, Tandy at least did not stand entirely still in the face of these developments. In fact, it released quite a blizzard of new machines, none of which came close to recapturing the market share the TRS-80 enjoyed in the late 1970s.

Tandy released a new machine called the TRS-80 Model 2 (the original TRS-80 being now retroactively renamed to the Model 1) in late 1979. The Model 2 was designed to capture the business computing market that was passing the Model 1 by; it sold with integrated disk drives and included bank-switchable ROM, thus allowing it to run CP/M. But it was also a much more expensive machine than the Model 1 and, most dismaying of all, completely incompatible with it. Thanks to Radio Shack’s usual lack of marketing acumen and genius for clunky, tacky-looking design, as well as its high price, it was not a big success in the business market, while its incompatibility made it of little interest to existing Model 1 owners.

The Model 3, which appeared to replace the Model 1 in the summer of 1980, was rather forced on Radio Shack. The Model 1 had put out so much radio interference that, in an example of the boundless ingenuity that marked the early microcomputer era, people began writing programs to manipulate memory so as to make music using this interference along with a nearby transistor radio to pick it up. New FCC regulations for 1981 forced Radio Shack to build in proper RF shielding, and thus spoiled that particular kind of fun. In addition to fixing this issue, the Model 3 also sported a slightly faster version of the Z80 CPU and (hallelujah!) real lower-case letter support for both input and output amongst other modest improvements. Yet it did nothing to improve the Model 1’s meager display capabilities. And, in the one-step-forward, two-steps-back dance that seemed to define Radio Shack, the Model 3 was optimistically said to be just “80%” compatible with the Model 1, while, once again, the design did not allow for CP/M. Radio Shack in their marketing genius now had three separate machines labeled the TRS-80, each partially or entirely incompatible with its siblings. Just imagine trying to figure out what software actually worked on your version…

And incredibly, there was yet another completely incompatible TRS-80 released in 1980, this one the most significant of all. Although officially called the TRS-80 Color Computer, it was a radical departure from anything seen before, being built around perhaps the most advanced 8-bit CPU ever produced, the new Motorola 6809E. Like so many Radio Shack systems, it offered intriguing potential bundled together with some dismaying weaknesses. On the plus side were the powerful 6809E itself and an advanced Microsoft BASIC that made it a favorite among hobbyist programmers; on the weak side were sound and graphics capabilities that, while a step up from the other TRS-80 models, were still not competitive with new and upcoming models from companies like Atari and Commodore. In spite of that the CoCos, as they soon became affectionately known, had a long run during which they consistently flew under the radar of the mainstream, attracting little in the way of games or applications from most publishers or even from Radio Shack itself, but survived on the back of a sort of cult industry all their own sustained by a fanatically loyal user base. The CoCo line did not finally go out of production until 1991.

There are many more interesting stories to tell about Radio Shack’s quirky little computers, but none would ever come close to dominating the industry the way that the TRS-80 Model 1 did for those first few years. In truth, even the Model 1 was popular because it was widely available at a time when distribution channels for other brands were barely extant and because its price was reasonable rather than because of any sterling technical qualities of the machine itself. The TRS-80 was really not so far removed from Radio Shack’s other products: it basically got the job done, but in about the most uncool and unsexy way imaginable. It primed the pump of the home computer industry and brought adventure games into the home for the first time, but already in 1980 its time was passing.

So, we’ll bid adieu to the old Trash-80 and move on next time to look at the machine that made the company that has come to define cool and sexy in technology. Yes, I’m talking about those two plucky kids in that California garage.


Posted by on September 6, 2011 in Digital Antiquaria, Interactive Fiction



Robert Lafore’s Interactive Fiction

Quick: Who first coined the term interactive fiction? And why?

Assuming you had an answer at all, and assuming you’re a loquacious git like me, it may have run something like this:

The term originated, many years after the birth of the genre it describes, in the early 1980s with a company called Infocom. At that time, games of this sort were commonly known as “adventure games” or “text adventures,” the latter to distinguish them from the graphical brand of story-based games which were just beginning to compete with text-based titles in the marketplace of that time. Indeed, both terms are commonly used to this day, although they generally connote a rather “old-school” form of the genre that places most of its emphasis on the more gamelike, as opposed to literary, potentials of the form. Infocom decided that interactive fiction was a term which more accurately described their goal of creating a viable new literary form, and following that company’s demise the term was appropriated by a modern community of text-based storytellers who in many ways see themselves as heirs to Infocom’s legacy.

That, anyway, is what I wrote five years ago in my history of interactive fiction. I still think it describes pretty accurately Infocom’s motivation for replacing the term text adventure with IF, but it’s inaccurate in one important sense: the term did not actually originate with Infocom. It was rather the creation of a fellow named Robert Lafore, who founded a company under that name in 1979 and published software from 1980 to 1982 through Scott Adams’s Adventure International. By the time that Lafore came to AI, he already had three titles in the line ready to go. Local Call for Death and Two Heads of the Coin are mystery stories with an obvious debt to Dorothy L. Sayers’s Lord Peter Wimsey and Arthur Conan Doyle’s Sherlock Holmes respectively, while Six Micro-Stories presents six brief vignettes in a variety of settings and genres. Over the next year or so he wrote two more: His Majesty’s Ship Impetuous, in the style of C.S. Forester’s Horatio Hornblower novels; and Dragons of Hong Kong, in the spirit of Sax Rohmer’s Fu Manchu series of oriental mysteries.

To understand what Lafore’s concept of IF is and how it works, let’s begin with some promotional copy. After asking us to “step into a new dimension in literature,” Adventure International’s advertisement for the line continues:

Traditionally, literature has been a one-way medium. The information flow was from the novel to the reader, period. Interactive fiction changes this by permitting the reader to participate in the story itself.

The computer sets the scene with a fictional situation, which you read from the terminal. Then you become a character in the story: when it’s your turn to speak, you type in your response. The dialog of the other characters, and even the plot, will depend on what you say.

Wow. No previous computer “game” had dared to compare its story to that of a novel. Just the text above, divorced from the actual program it describes, demonstrates a real vision of the future of ludic narrative.

But as anyone who’s had experience with early computer-game ad copy knows, the reality often doesn’t match the rhetoric, with the latter often seeming aspirational rather than descriptive, corresponding more with the game the authors would like to have created than with the technical constraints of 8-bit processors and minuscule memories. Here’s a complete play-through of the first of the vignettes of Six Micro-Stories, “The Fatal Admission”:

Admittedly, this is not Lafore’s finest hour, so let’s try to be gentle. Let’s leave aside the fact that an admiral could only have been in the Kriegsmarine, not the Gestapo, as well as Lafore’s obvious cluelessness about German. Let’s also leave aside the illogicality of the question on which the story turns. (If I’ve been actively impersonating Colonel Braun for so long, how could I not know what flight wing I am with?) And let’s leave aside the unfair, learning-by-death aspect of the whole experience. I just want to get down to how the program works right now.

As will probably surprise no one, the program is not parsing the player’s responses in any meaningful sense, but rather doing simple pattern matching on the player’s input, somewhat in the style of Eliza but without even that program’s sophistication. Given this, it’s inevitably very easy to trip the program up, intentionally or unintentionally. Consider the following response to the admiral’s question about Captain Eiderdown:

What’s happened here is that the program has found the “not” in the player’s input and thrown out everything else, assuming the sentence to be a negative answer. This is not an isolated incident. Let’s try yet again to answer the trick question about the 57th Air Wing correctly and stay alive.

Cool! Now we can accept our new assignment and learn even more juicy Nazi secrets.

Whoops. The program has failed to understand us entirely that time, which is at least better than a misunderstanding, I suppose.

Obviously simple answers are the best answers.
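All of the failure modes above stem from the same mechanic: the program scans the raw input for a handful of trigger words and throws away everything else. A toy Python sketch of that behavior (invented logic, offered only as an illustration; Lafore's actual BASIC routines will differ in the details):

```python
import re

def classify_answer(response: str) -> str:
    """Naive keyword matching in the spirit of Lafore's 'parser': scan the
    input for a few trigger words and ignore everything else.  A sketch of
    the observed behavior, not a reconstruction of his code."""
    words = re.findall(r"[a-z]+", response.lower())
    if "not" in words or "no" in words:
        return "negative"       # any sentence containing "not" reads as a denial
    if "yes" in words:
        return "affirmative"
    return "unrecognized"       # anything else baffles the program entirely

# The trap described above: a sentence meant as an emphatic affirmation is
# read as a denial because it happens to contain the word "not".
print(classify_answer("I could not possibly forget Captain Eiderdown"))  # negative
print(classify_answer("Yes, of course I know him"))                      # affirmative
print(classify_answer("Certainly, he is an old friend"))                 # unrecognized
```

Which is why, as the transcripts show, terse one-word answers are the only safe way to talk to the program.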

Various vignettes in Six Micro-Stories do various things with the entered text. Perhaps the most complex and computationally interesting entry in the collection is called “Encounter in the Park,” in which you must try to get a date from a young lady you meet by chance in the park. It plays like a goal-directed version of Eliza, albeit a very primitive one. In case the connection is not obvious, the love interest’s name is even, you guessed it, Eliza.

The ultimate solution is ice cream; the mere mention of it causes Eliza to shed her sophisticated Updike-reading trappings and collapse into schoolgirl submissiveness. (The implications of this behavior in a paternalistic society like ours we will leave for the gender-studies experts.)
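The goal-directed Eliza mechanic at work here can be imagined as keyword scanning with one magic trigger phrase. This sketch is pure guesswork from the behavior the vignette exhibits (the function name, the scoring, and the keyword list are all invented):

```python
def eliza_date(utterances):
    """A guess at how 'Encounter in the Park' behaves: scan each line of the
    player's conversation for trigger phrases.  One magic phrase wins
    outright; everything else merely nudges Eliza's interest."""
    interest = 0
    for line in utterances:
        lower = line.lower()
        if "ice cream" in lower:
            return "She agrees to a date!"   # the magic trigger phrase
        if any(word in lower for word in ("lovely", "charming", "updike")):
            interest += 1                    # flattery helps, but only a little
    return "She agrees to a date!" if interest >= 3 else "She walks away."

print(eliza_date(["You look lovely today", "Shall we get some ice cream?"]))
```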

Another vignette is little more than a multiple-choice quiz on the age-old question of the definition of art.

And then there’s this nihilistic little number, in which nothing you type makes any difference whatsoever:

Lafore was either in a pissy mood when he wrote that one or homing in on some existential truth the rest of us can’t bear to face — take your choice.

But these are exceptions. When we get past the parser to look at the player’s actual options (thank God for BASIC!), we find that the remainder of the vignettes, as well as all of the vastly more compelling full-length stories, are really multiple-choice narratives, in which the player can choose from (usually) two or (occasionally) as many as three, four, or five options in a series of hard-coded decision points. In other words, these are really choose-your-own-adventure stories / hypertext narratives / choice-based narratives (choose your term). They are much closer to the Choose Your Own Adventure books that were just beginning to flood bookstores in 1980 than they are to the text adventures of Scott Adams or, indeed, to the interactive fiction that Infocom would soon be publishing. It’s just that their real nature is obscured by the frustrating Eliza-esque “parser” which adds an extra layer of guesswork to each decision. Sure, there are arguments to be made for the parser here. Theoretically at least, allowing the player to make decisions “in her own words” could help to draw her into the story and the role she plays there. In practice, though, the opportunities for miscommunication are so great that they outweigh any possible positives.
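Stripped of the Eliza-style front end, then, the underlying structure is simply a branching tree of hard-coded decision points. A minimal sketch of such a structure (the nodes and prose here are invented for illustration, not taken from Lafore's data):

```python
# A choose-your-own-adventure story as a dictionary of nodes.  Each node
# pairs a passage of text with a mapping from choices to follow-on nodes;
# nodes with no choices are endings.  (Invented example content.)
STORY = {
    "start":   ("Sails on the horizon! Do you engage or withdraw?",
                {"engage": "battle", "withdraw": "retreat"}),
    "battle":  ("You close with the enemy frigate. Broadside or board her?",
                {"broadside": "victory", "board": "defeat"}),
    "retreat": ("You slip away in the fog. The Admiralty is unimpressed.", {}),
    "victory": ("The enemy strikes her colors. Glory is yours!", {}),
    "defeat":  ("The boarding party is repulsed. Your career ends here.", {}),
}

def play(choices):
    """Run a scripted play-through; return the sequence of nodes visited."""
    node, visited = "start", ["start"]
    for choice in choices:
        _text, options = STORY[node]
        node = options[choice]
        visited.append(node)
    return visited

print(play(["engage", "broadside"]))  # ['start', 'battle', 'victory']
```

Lafore's innovation, for better or worse, was to hide exactly this kind of fixed branching structure behind free-text input, so that each hard-coded choice had to be guessed at through the "parser."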

In a demonstration of just how ridiculously far I’m willing to go to prove a point, I reimplemented what I consider to be the most satisfying of the longer stories, His Majesty’s Ship Impetuous, using the ChoiceScript system. (Well, okay, I did want to try out ChoiceScript as well, and this project made a good excuse…) If you care to play it, you’ll find that it’s a much more complete and satisfying effort than those I’ve highlighted above, if not totally free of some of their design problems, in that getting an optimal outcome requires a bit of learning from death. Still, Lafore’s writing is sharp and well suited to the genre, and the story as a whole is carefully thought through; this represents easily the most competently crafted fiction yet to grace a computer screen in 1980. Its good qualities come through much better shorn of the Eliza trappings. Indeed, it’s much more interesting to consider in this light, because choice-based narratives had not yet made their way to the computer before Lafore set to work.

I don’t want to try to formulate a detailed theory of choice-based narrative design here, particularly because Sam Kabo Ashwell is gradually doing exactly that via his amazing series of analyses of various works in the form — a series to which nothing I say here can bear comparison. I do, however, want to note that parser-based and choice-based narratives are very different from one another, yet are constantly confused; Lafore was perhaps the first to make this mistake, but he was hardly the last. For example, the Usenet newsgroup that was the center of IF discussion for many years was originally founded by hypertext aficionados, only to be invaded and co-opted by the text-adventure people. And even today, the very interesting-looking Varytale project bills itself as “interactive fiction.” Choice of Games doesn’t go that far, but does actively encourage authors to submit their ChoiceScript games to IF competitions, something that doesn’t really bother me but that perhaps does bring two sets of expectations into a collision that doesn’t always serve either so well.

The primary formal difference here is in the granularity of the choices offered. Parser-based IF deals in micro-actions: picking up and dropping objects, moving from one concretely bounded space to another, fiddling with that lock in exactly the right way to get that door open. Choice-based narratives at their best deal in large, defining decisions: going to war with Eastasia or with Eurasia, trying to find your way back to the Cave of Time or giving up. Even when the choices presented are seemingly of an IF-like granularity, such as whether to take the left or the right branch in that dungeon you’re exploring, they should turn out to be of real consequence. A single choice in a choice-based narrative can encompass thousands of turns in a parser-based work of IF — or easily an entire game. When authors combine a choice-based structure with an IF-like level of granularity, the results are almost always unfortunate; see for example 2009 IF Competition entry Trap Cave, which attempted to shoehorn an Adventure-style cave crawl into a choice-based format, or Ashwell’s analysis of a Fighting Fantasy gamebook. A choice-based narrative needs to give its player real narrative agency — at the big, defining level of choosing where the plot goes — to be successful. Parser-based IF does not; it can let the player happily busy herself with the details while guiding her firmly along an essentially railroaded plot arc.

Given that they can tell such large swathes of story at a hop, we might be tempted to conclude that choice-based games allow deeper, richer stories. In a way that’s true, especially in the context of 1980; it would have been impossible to pack even 10% of the story of His Majesty’s Ship Impetuous into a Scott Adams-style game. It’s also true that choice-based narratives are generally not so much about challenging their players with puzzles or tactical dilemmas as they are about the “pure” experience of story. A contemporary reviewer of His Majesty’s Ship Impetuous writing in SoftSide magazine, Dave Albert, very perceptively picked up on these qualities and in the process summarized the joys of a choice-based narrative as well as some of the frustrations of Lafore’s early implementation of same:

Lafore has tried to write an open-ended story with several possible endings, and he has tried to structure it so that the reader/player is unaware of the import of the decisions made. Where in previous stories the player is allowed to ask any question that comes to mind (with often incongruous and confusing results), in Impetuous yes or no decisions are presented. There is no way to work around this structure, and it is greatly to the benefit of the program that such is the case. There is no puzzle to solve, only a story to develop. The end goal is to survive and the decisions that you make will dictate whether you do or not. However, you cannot decipher what is the proper course of action that will guarantee your success. There are enough critical points (decisions) in the program to make you uncertain of your actions after several games. This greatly enhances the value of the program.

I wouldn’t frame all the specific design choices in Impetuous quite so positively as Albert, but I think the larger points stand. Choice-based works encourage the player to view them from on high, like a puppet master manipulating not just her character but the strings of the plot itself; note that Impetuous is written in the third-person past tense. The player manipulates the story, but she does not always feel herself to be in the story. Some more recent choice-based works have even divorced the player entirely from any set in-world avatar. Parser-based IF, however, excels at putting you right there, immersed in the virtual reality you are exploring. Both approaches are valid for telling different kinds of stories, creating different kinds of experiences, and both can go horribly awry. Too many choice-based works leave their player feeling so removed from the action that she ceases to care at all (this, I must admit, is my typical reaction when playing even the modern ChoiceScript games, and the main reason my heart belongs firmly to the parser-based camp); too many parser-based works, especially early ones, become so fiddly that they only frustrate.

Minus the parser frustrations, Impetuous is a fairly successful piece of work, written at an appropriate level of abstraction for the choice-based form. If many of its choices are ultimately false ones, having no real effect on the plot, it disguises this well enough that the player does not really realize it, at least on the first play-through, while enough choices do matter to keep the player interested. Best of all, and most surprisingly when we consider the structure of, say, the early Choose Your Own Adventure books, there are no arbitrary, context-less choices (will you go right or left in this dungeon?) and no choices that lead out of the blue to death. Some of its ethical positions are debatable, such as the way it favors plunging headlong into battle over a more considered approach, but perhaps that’s par for the course given its genre.

One of the amusing and/or disconcerting aspects of writing this blog has been that I sometimes find myself honoring pioneers who have no idea they are pioneers. Lafore traded in his entertainment-software business after writing Dragons of Hong Kong for a long, successful, and still ongoing career as an author of technical books. I’m going to guess that he has no idea that the term he invented all those years ago remains vital while the works to which he applied it have been largely forgotten.

By way of remedying that at least a little bit, do give my implementation of His Majesty’s Ship Impetuous a shot. I think it works pretty well in this format, and is more entertaining and well-written than it has a right to be. (Credit goes to Lafore for that, of course.) And if you’d like to play the originals of either Six Micro-Stories or His Majesty’s Ship Impetuous, you can do that too.

1. Download my Robert Lafore starter pack.
2. Start the sdltrs emulator.
3. Press F7, then load “newdos.dsk” in floppy drive 0 and either “microstories.dsk” or “impetuous.dsk” into floppy drive 1.
4. Reboot the emulator by pressing F10.
5. At the DOS prompt, type BASIC.
6. Type LOAD “STORY:1” for Six Micro-Stories; LOAD “STORY1:1” for His Majesty’s Ship Impetuous.
7. Type RUN.

Next time I want to have a look at the evolving state of the computer industry in 1980 — and begin to execute a (hopefully) deft platform switch.


Posted on September 1, 2011 in Digital Antiquaria, Interactive Fiction
