The Forth Dimension


The Forth programming language reached maturity around 1970 after more than ten years of development and experimentation by its creator, Charles H. Moore. Its first practical use was to control a radio telescope at the National Radio Astronomy Observatory, where Moore was employed at the time. From there Forth spread to other telescopes and other observatories, cementing a connection with astronomy and space science that persists to this day; in addition to controlling countless telescopes and planetariums earthside, Forth has been into space many times on probes and satellites of all descriptions. Yet already by the end of its first decade Forth had spread far beyond astronomical circles. It was being used to control the motorized cameras used to film miniature-based special-effects sequences (suddenly a booming business in the wake of Star Wars); to control automotive diagnostic equipment; as the firmware in various medical devices; to control automated agricultural equipment. Closer to our usual interests, Atari had invested a lot of money into developing a version of the language suitable for programming pinball machines and stand-up arcade games, while versions of the language were available for all of the trinity of 1977 within a year or so of their appearance. The key to Forth’s burgeoning popularity was its efficiency: it not only ran faster than just about any language short of assembly, but in the right hands it was also almost unbelievably stingy with memory. Those were good qualities to have in the late 1970s, when the average PC ran at 1 MHz and had no more than 16 K.

We’ll get into why Forth is so efficient in just a bit. But first let’s take a look at the language itself. If you’ve programmed before in just about any other language, Forth will likely seem deeply, deeply weird. Still, there’s also something kind of beautiful about it. If you’d like to follow along with the few examples I’ll give in this article, you have many free implementations of the language to choose from. A good choice, and one that has the advantage of working on Windows, Macintosh, and Linux alike, is Gforth.

Forth is an interactive programming language, like the microcomputer BASICs so many of us grew up with. This means that you can enter commands directly at the Forth console and watch them run immediately.

Forth is also a stack-based programming language, and this is the key to everything else about it. Virtually every programming language uses a stack under the hood; it’s one of the most fundamental mechanisms of computer science. But most other languages try to hide the stack from us, strain to make it so that we need not trouble ourselves over it and, indeed, don’t really need to know much about it at all. The only time many programmers even hear the word “stack” is when an infinite loop or runaway recursion causes a program to crash with a “stack overflow error.” Forth, however, doesn’t hide its stack away like something shameful. No, Forth loves its stack, sets it front and center for all to see. Forth demands that if we are to love it, we must also love its stack. Given this, it would behoove me at this point to explain just what is meant by the idea of a stack in the first place.

A stack is just that: a stack of numbers stored in a special part of memory, used for storing transient data. Adding a number to the stack is called pushing to the stack. It always goes on top. Taking a number from the stack is called popping the stack. It’s always the top number — i.e., the most recently pushed — that’s popped, after which that number is erased from the stack. A stack is, in other words, a first-in-last-out system — or, if you like, a last-in-first-out system. If you haven’t quite wrapped your head around the idea, don’t sweat it. It should become clearer momentarily.

Let’s look at how we can do simple arithmetic in Forth. Let’s say we want to add 2 and 3 together and print the result. In a typical modern language like Java, we’d just do something like this:

System.out.print(2 + 3);

In Forth, we do it a bit differently. If you’ve started up a Forth environment, you can type this in and see the result immediately.

2 3 + .

If you happened to use a Hewlett-Packard scientific calculator back in the day, this notation might look familiar to you. It’s known as “postfix” or “reverse Polish” notation. Let’s unpack this piece by piece to see how exactly Forth is handling this expression.

The first thing to understand here is that Forth reads almost everything as a word — Forthese for a command. A number standing by itself is actually interpreted as a word, a command to push that number onto the stack. Therefore, assuming we started with an empty stack, the stack looks like this after the first two parts of the expression above have been processed:

3
2

Now the interpreter comes to the “+,” which is also read as a command, instructing it to pop two values off the stack, add them together, and push the result back onto the stack. After doing so, the stack looks like this:

5

Finally, the “.” just instructs the interpreter to pop the top value off the stack and print it.
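
If you’re following along in Gforth, one word worth knowing about is “.S”, which isn’t part of the original Byte examples but is found in Gforth and most other modern Forths. It prints the current contents of the stack without popping anything, showing the top of the stack on the right, which makes it easy to watch what each word does:

2 3 .S ( Gforth shows something like: <2> 2 3 )
+ .S ( <1> 5 )
. ( Pops and prints 5, leaving the stack empty)

Nothing is changed by “.S” itself; it’s purely a window onto the stack.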

Now let’s consider a more complicated algebraic expression, like “(4 + 5) * (6 + 7).” In Forth, it would be written like this:

4 5 + 6 7 + * .

Let’s walk through this. We push 4 and 5 onto the stack.

5
4

We pop them off, add them together, and push the result to the stack.

9

We push 6 and 7 onto the stack.

7
6
9

We add them together and push the result.

13
9

We pop the top two values on the stack, multiply them together, and push the result.

117

And finally we pop and print the result.
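
Real Forth code also leans heavily on a small family of words that exist purely to rearrange the stack. We’ll meet “DUP” in the next example; its standard siblings include “DROP” (discard the top value), “SWAP” (exchange the top two values), and “OVER” (copy the second value onto the top). These little console experiments aren’t taken from the Byte article, but they show the idea, again using “.S” to peek at the stack:

1 2 SWAP .S ( <2> 2 1 )
OVER .S ( <3> 2 1 2 )
DROP DROP DROP ( Discard all three values, leaving the stack empty)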

To this point we’ve been working interactively. The key to programming in Forth, however, is to define new words; this is Forth’s nearest equivalent to the function calls common to other languages. Let’s consider a function to cube a number, which would look like this in Java:

int cube (int num) {
   return (num * num * num);
}


In Forth, we might do it by entering the following lines at the console:

: CUBE ( N -> N. Cube a number)
   DUP DUP ( Now there are three copies)
   * * ( Get the cube)
;


Let’s again unpack this piece by piece. The colon is a word which tells the interpreter that what follows will be a new word definition, to be terminated by a semicolon. “CUBE” is the name of the new word we are creating. All text within parentheses is a comment, to be ignored by the interpreter. The “N -> N.” notation within the first set of parentheses is not required, but is considered good practice in Forth programming. It tells us that this word will pop and operate on the current topmost value on the stack, and will push a single result back onto the stack. Forth words do not take arguments like functions in other languages, but operate only on the current contents of the stack. Thus it’s the programmer’s responsibility to set the stack up properly before invoking a word, and to know what that word will have done to the stack when it finishes. The two lines in the middle are the meat of the word, the actual instructions it represents.

Let’s say we call this new word “CUBE” with a 5 on top of the stack — i.e., by entering “5 CUBE .” at the console. Thus the initial stack looks like this:

5

Now we’re going into the body of the word itself. The two “DUP” statements tell the interpreter to duplicate the top value on the stack twice, without destroying — i.e., without actually popping — the original value. So, we end up with:

5
5
5

Now we pop the top two values, multiply them together, and push the result.

25
5

Then we just do the same thing again.

125

And our work is done.
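
Having entered the definition, we can treat “CUBE” exactly like any built-in word and test it right at the console. These sample invocations are my own rather than anything from the original article:

5 CUBE . ( Prints 125)
2 CUBE CUBE . ( The cube of a cube: prints 512)
-3 CUBE . ( Prints -27)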

Next we’ll see how we can use this word within another word. But first let’s see how we would do that as a function in Java.

void cubes10() {
   for (int i = 0; i < 10; i ++) {
      System.out.print("\n");
      System.out.print(i + " ");
      System.out.print(cube(i));
   }
}


Here it is as a Forth word:

: CUBES10 ( ->. Print a table of cubes of 0-9.)
   10 0 ( Indices of loop)
   DO ( Start Loop)
      CR I . I CUBE . ( Print a number and its cube.)
   LOOP ( End of loop.)
;


As the first comment indicates, the “CUBES10” word expects nothing on the stack and leaves nothing there. We begin by pushing 10 and 0 onto the stack. Now Forth’s back-asswardness really comes to the fore: the “DO” word pops the top two values off the stack. It will increment a variable — always known as “I” — from the second of these until it is equal to the first of these, looping each time through the block of words contained between “DO” and “LOOP.” Within the loop, the word “CR” simply causes the cursor to move down to the next line. Keeping in mind that “I” represents the current value of the variable being incremented, which can be pushed and popped just like a constant, the rest should hopefully be comprehensible. The output looks something like this:

0 0
1 1
2 8
3 27
4 64
5 125
6 216
7 343
8 512
9 729
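
Nothing forces us to hard-wire that limit of 10, by the way. As a sketch of my own rather than anything from the original article, here’s a variant that takes the limit from the stack instead, using “?DO”, a standard cousin of “DO” that simply skips the loop body if the limit and the starting index are already equal:

: CUBES ( N ->. Print a table of cubes of 0 to N-1.)
   0 ?DO ( N, already on the stack, is the limit; 0 is the starting index)
      CR I . I CUBE . ( Print a number and its cube.)
   LOOP ( End of loop.)
;

Entering “10 CUBES” reproduces the table above, while “5 CUBES” stops after the cube of 4.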

Forth is built entirely from words like the ones we’ve just created. In fact, calling Forth a programming language may be something of a misnomer because virtually every piece of its vocabulary is redefinable. Forth comes with a dictionary of common, useful words, but the programmer is always free to replace these with others of her own devising, to make Forth into whatever she wants it to be. The most basic words are not constructed from other Forth words but rather written as in-line assembly language. The programmer adds words to this base which do ever more complicated tasks, until finally she writes a word that subsumes the entire program. To take an example from Leo Brodie’s classic book Starting Forth (one of Forth’s chief products down through the decades has been horrid puns), a Forth program to control a washing machine might have this as its top-level word:

: WASHER
   WASH SPIN RINSE SPIN
;


Each of the words referenced within “WASHER” would likely call several words of their own. “RINSE,” for instance, might look like this:

: RINSE
   FAUCETS OPEN TILL-FULL FAUCETS CLOSE
;


Each of these words would call still more words of its own, until we come to the level of fundamental assembly-language commands to control the CPU on its most basic level. Forth words can even create new words dynamically, resulting in programs that effectively rewrite themselves as they run to suit their environment.
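
To give a taste of what that dynamic word creation looks like, here is the textbook illustration from the Forth literature rather than anything specific to the programs above: the standard defining word “CONSTANT” could itself be written in Forth using “CREATE” and “DOES>”. (I’ve named it “MY-CONSTANT” here to avoid clobbering the real one.)

: MY-CONSTANT ( N ->. Define a new word that pushes N whenever it runs.)
   CREATE , ( CREATE makes a new word from the next name in the input; , stores N in its body)
   DOES> @ ( When that new word later executes, fetch the stored N and push it)
;
42 MY-CONSTANT ANSWER
ANSWER . ( Prints 42)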

Especially if you’re a programmer yourself, you may have already formed an impression by now of Forth’s strengths and weaknesses. And yes, contrary to the claims of many Forth zealots, the latter do exist in considerable numbers. Even leaving aside the strange reverse notation, which one can eventually get used to, Forth programs can be incredibly hard to actually read thanks to their reliance on pushing and popping to the stack, with the associated lack of helpful variable names. For this reason Forth has occasionally been called a “write-only” language; Forth code can be well-nigh incomprehensible even to the person who originally wrote it after just a week or so has elapsed. It’s the polar opposite of a contemporaneous language I once wrote about on this blog, Pascal, replacing the latter’s pedantic emphasis on structure and readability above all else with a love of hackerish tricks, sleight of hand, and cleverness that can sometimes come off as sort of facile. Just trying to keep a picture in your head of the current configuration of the stack, something on which absolutely everything you do in Forth depends, can be a nightmare as programs get more complicated and their possible states get more varied. If not quite the last language in the world I’d use to write a complicated modern application, it must be pretty close to it. Its “write-only” qualities make it particularly unsuitable for team projects, a problem given that most useful software long ago got too complicated for solo programmers.

Yet there’s also an uncompromising beauty about Forth that has drawn many people to it, a beauty that has occasionally been compelling enough to override people’s better judgment and make them use the language for purposes to which it really isn’t terribly suited. Whatever else you can say about it, it sticks to its philosophical guns tenaciously. There’s a fascination to building a dictionary of your own, to effectively making a programming language all your own. Grizzled Forth programmers have often replaced virtually everything that comes with the language to create something that is absolutely theirs. That’s a rare experience indeed in modern programming. People who love Forth really love it. This (in Leo Brodie’s words) “high-level language,” “assembly language,” “operating system,” “set of development tools,” and “software design philosophy” has that rare ability, like my old love the Commodore Amiga, to inspire a level of visceral, emotional commitment that smacks more of romance or religion than practicality.

If we do insist on speaking practically, within certain domains Forth excels. It’s still widely used today in extremely constrained environments where every byte and every processor cycle counts, such as, well, the firmware inside a washing machine. To understand what makes Forth so efficient, we need to first understand that those more readable Java functions I showed you above must ultimately be converted into a form pretty close to that we see in the Forth versions. By making us meet the computer halfway (or further), Forth eliminates a whole lot of shuffling about that costs precious processor time. A well-written Forth program can actually be smaller than its pure assembly-language equivalent — much less the same program written in some other high-level language — because Forth so emphasizes reusable words. And it can be surprisingly easy to port Forth programs from computer to computer; one need only re-implement that bottommost layer of words in the new machine’s assembly language, and leave the rest alone.

Of course, all of these advantages that make Forth so attractive to programmers working on embedded systems and device firmware today also made it mighty appealing to programmers of ordinary PCs of the late 1970s and 1980s, working as they were under stringent restrictions of their own. For some early PCs Forth was the only language other than the ROM-housed BASIC or assembly language that made any sense at all. Stripped down to its essentials, Forth can be tiny; for example, Cognetics Corporation, a developer we met in a recent article, worked with a version of Forth that fit into just 6 K. Thus Forth enjoyed considerable popularity, with a fair number of games and other commercial software written in the language. John Draper, the legendary “Captain Crunch” who taught Steve Wozniak and Steve Jobs how to phone phreak amidst myriad other hacking accomplishments, was a particular devotee, distributing a Forth development system for the Apple II which he also used to write the II’s first really usable word processor, EasyWriter. Many of the magazines ran columns or extended series on Forth, which was available, and generally in multiple versions, for virtually every remotely viable machine of the era. One British computer, the ill-fated but fascinating Jupiter Ace, even included Forth in ROM in lieu of BASIC. Tellingly, however, as the 1980s wore on and software got more complex Forth became less common amongst commercial application and game developers, even as it retained a dedicated cult of hobbyists who have persisted with the language to this day. According to Charles Moore, this was as it should be. Forth, he told Jerry Pournelle in Byte‘s March 1985 issue, had never been intended for closed-source commercial software.

Writing big programs to be distributed in object code is a distortion of what Forth is all about. Forth is like a set of craftsman’s tools. You use it to make still more tools that work with whatever you specialize in. Then you use it to solve problems. Forth programs should always be distributed in source code. You should have Forth online at all times. Recompile whenever you want to use a program. Forth programs are tailored, they’re living and dynamic, not static object code.

“Distortion” or not, the most important Forth game, and arguably the most ambitious project ever completed in the language, would appear more than a year after those remarks. I know I’ve been teasing you with it for a while, but, with all the pieces in place at last, we’ll get there next time… really, I promise.

(Probably the best place to look to get an idea of the excitement Forth once generated, as well as a very good picture of the language itself, is the August 1980 Byte, which had Forth as its main theme. My example code in this article has its origins there, as does the picture.)

 
 


Send in the Clones

In computer parlance, a clone is Company B’s copycat version of Company A’s computer that strains to be as software and hardware compatible with its inspiration as possible. For a platform to make an attractive target for cloning, it needs to meet a few criteria. The inspiration needs to be simple and/or well-documented enough that it’s practical for another company — and generally a smaller company at that, with far fewer resources at its disposal — to create a compatible knock-off in the first place. Then the inspiration needs to be successful enough that it’s spawned an attractive ecosystem that lots of people want to be a part of. And finally, there needs to be something preventing said people from joining said ecosystem by, you know, simply buying the machine that’s about to be cloned. Perhaps Company A, believing it has a lock on the market, keeps the price above what many otherwise interested people are willing or able to pay; perhaps Company A has simply neglected to do business in a certain part of the world filled with eager would-be buyers.

Clones have been with us almost from the moment that the trinity of 1977 kicked off the PC revolution in earnest. The TRS-80 was the big early winner of the trio thanks to its relatively low price and wide distribution through thousands of Radio Shack stores, outselling the Apple II in its first months by margins of at least twenty to one (as for the Commodore PET, it was the Bigfoot of the three, occasionally glimpsed in its natural habitat of trade-show booths but never available in a form you could actually put your hands on until well into 1978). The first vibrant, non-business-focused commercial software market in history sprang up around the little Trash 80. Cobbled together on an extreme budget out of generic parts that were literally just lying around at Radio Shack — the “monitor,” for instance, was just a cheap Radio Shack television re-purposed for the role — the TRS-80 was eminently cloneable. Doing so didn’t make a whole lot of sense in North America, where Radio Shack’s volume manufacturing and distribution system would be hard advantages to overcome. But Radio Shack had virtually no presence outside of North America, where there were nevertheless plenty of enthusiasts eager to join the revolution.

EACA shindig in Hong Kong

A shindig for EACA distributors in Hong Kong. Shortly after this photo was taken, Eric Chung, third from right in front, would abscond with $10 million and that would be that for EACA.

The most prominent of the number of TRS-80 cloners that had sprung up by 1980 was a rather shady Hong Kong-based company called EACA, who made cheap clones for any region of the world with distributors willing to buy them. Their knock-offs popped up in Europe under the name “The Video Genie”; in Australasia as the “Dick Smith System 80,” distributed under the auspices of Dick Smith Electronics, the region’s closest equivalent to Radio Shack; even in North America as the “Personal Micro Computers PMC-80.” EACA ended in dramatic fashion in 1983 when founder Eric Chung absconded to Taiwan with all of his company’s assets that he could liquefy, $10 million worth, stuffed into his briefcase. He or his descendants are presumably still living the high life there today.

By the time of those events, the TRS-80’s heyday was already well past, its position as the most active and exciting PC platform long since having been assumed by the Apple II, which had begun a surge to the fore in the wake of the II Plus model of 1979. The Apple II was if anything an even more tempting target for cloners than the TRS-80. While Steve Wozniak’s hardware design is justly still remembered as a marvel of compact elegance, it was also built entirely from readily available parts, lacking the complex and difficult-to-duplicate custom chips of competitors like Atari and Commodore. Wozniak had also insisted that every last diode on the Apple II’s circuit board be meticulously documented for the benefit of hackers just like him. And Apple, then as now, maintained some of the highest profit margins in the industry, creating a huge opportunity for a lean-and-mean cloner to undercut them.

The Franklin Ace 1000

A Franklin Ace 1000 mixed and matched with a genuine Apple floppy drive.

Assorted poorly distributed Far Eastern knock-offs aside, the first really viable Apple II clone arrived in mid-1982 in the form of the Franklin Ace line. The most popular model, the Ace 1000, offered complete hardware and software compatibility for about 25 percent less than a II Plus, while also having more memory as well as luxuries like a numeric keypad and upper- and lowercase letter input. The Ace terrified Apple. With the Apple III having turned into a disaster, Apple remained a one-platform company, completely dependent on continuing Apple II sales — and continuing high Apple II profit margins — to fund not one but two hugely ambitious, hugely innovative, and hugely expensive new platform initiatives, Lisa and Macintosh. A viable market in Apple II workalikes which cut seriously into sales, or that forced price cuts, could bring everything down around their ears. Already six months before the Ace actually hit the market, as soon as they got word of Franklin’s plans, Apple’s lawyers were therefore looking for a way to challenge Franklin in court and drive their machine from the market.

As it turned out, the basis for a legal challenge wasn’t hard to find. Yes, the Apple II’s unexceptional hardware would seem to be fair game — but the machine’s systems software was not. Apple quickly confirmed that, like most of the TRS-80 cloners, Franklin had simply copied the contents of the II’s ROM chips; even bugs and the secret messages Apple’s programmers had hidden inside them were still there in Franklin’s versions. A triumphant Apple rushed to federal court to seek a preliminary injunction to keep the Ace off the market until the matter was decided through a trial. Much to their shocked dismay, the District Court for the Eastern District of Pennsylvania found the defense offered by Franklin’s legal team compelling enough to deny the injunction. The Ace came out right on schedule that summer of 1982, to good reviews and excellent sales.

Franklin’s defense sounds almost unbelievable today. They readily admitted that they had simply copied the contents of the ROM chips. They insisted, however, that the binary code contained on the chips, being a machine-generated sequence of 1s and 0s that existed only inside the chips and that couldn’t be reasonably read by a human, was not a form of creative expression and thus not eligible for copyright protection in the first place. In Franklin’s formulation, only the human-readable source code used to create the binary code stored on the ROM chips, which Franklin had no access to and no need for given that they had the binary code, was copyrightable. It was an audacious defense to say the least, one which if accepted would tear down the legal basis for the entire software industry. After all, how long would it take someone to leap to the conclusion that some hot new game, stored only in non-human-readable form on a floppy disk, was also ineligible for copyright protection? Astonishingly, when the case got back to the District Court for a proper trial the judge again sided with Franklin, stating that “there is some doubt as to the copyrightability of the programs described in this litigation,” in spite of an earlier case, Williams Electronics, Inc. v. Arctic International, Inc., which quite clearly had established binary code as copyrightable. Only in August of 1983 was the lower court’s ruling overturned by the Federal Court of Appeals in Philadelphia. A truculent Franklin threatened to appeal to the Supreme Court, but finally agreed to a settlement that January that demanded they start using their own ROMs if they wanted to keep cloning Apple IIs.

Apple Computer, Inc., v. Franklin Computer Corp. still stands today as a landmark in technology jurisprudence. It firmly and finally established the copyrightable status of software regardless of its form of distribution. And it of course also had an immediate impact on would-be cloners, making their lives much more difficult than before. With everyone now perfectly clear on what was and wasn’t legal, attorney David Grais clarified the process cloners would need to follow to avoid lawsuits in an episode of Computer Chronicles:

You have to have one person prepare a specification of what the program [the systems software] is supposed to do, and have another person who’s never seen the [original] program write a program to do it. If you can persuade a judge that the second fellow didn’t copy from the [original] code, then I think you’ll be pretty safe.

After going through this process, Apple II cloners needed to end up with systems software that behaved absolutely identically to the original. Every system call needed to take the exact same amount of time that it did on a real Apple II; each of the original software’s various little quirks and bugs needed to be meticulously duplicated. Anything less would bring with it incompatibility, because there was absolutely nothing in those ROMs that some enterprising hacker hadn’t used in some crazy, undocumented, unexpected way. This was a tall hurdle indeed, one which neither Franklin nor any other Apple II cloner was ever able to completely clear. New Franklins duly debuted with the new, legal ROMs, and duly proved to be much less compatible and thus much less desirable than the older models. Franklin left the Apple-cloning business within a few years in favor of hand-held dictionaries and thesauri.

There is, however, still another platform to consider, one on which the cloners would be markedly more successful: the IBM PC. The open or (better said) modular architecture of the IBM PC was not, as so many popular histories have claimed, a sign of a panicked or slapdash design process. It was rather simply the way that IBM did business. Back in the 1960s the company had revolutionized the world of mainframe computing with the IBM System/360, not a single computer model but a whole extended family of hardware and software designed to plug and play together in whatever combination best suited a customer’s needs. It was this product line, the most successful in IBM’s history, that propelled them to the position of absolute dominance of big corporate computing that they still enjoyed in the 1980s, and that reduced formerly proud competitors to playing within the house IBM had built by becoming humble “Plug-Compatible Manufacturers” selling peripherals that IBM hadn’t deigned to provide — or, just as frequently, selling clones of IBM’s products for lower prices. Still, the combined profits of all the cloners remained always far less than those of IBM itself; it seemed that lots of businesses wanted the security that IBM’s stellar reputation guaranteed, and were willing to pay a bit extra for it. IBM may have thought the PC market would play out the same way. If so, they were in for a rude surprise.

The IBM PC was also envisioned as not so much a computer as the cornerstone of an ever-evolving, interoperable computing family that could live for years or decades. Within three years of the original machine’s launch, you could already choose from two CPUs, the original Intel 8088 or the new 80286; could install as little as 16 K of memory or as much as 640 K; could choose among four different display cards, from the text-only Monochrome Display Adapter to the complicated and expensive CAD-oriented Professional Graphics Controller; could choose from a huge variety of other peripherals: floppy and hard disks, tape backup units, modems, printer interfaces, etc. The unifying common denominator amongst all this was a common operating system, MS-DOS, which had quickly established itself as the only one of the four operating paradigms supported by the original IBM PC that anyone actually used. Here we do see a key difference between the System/360 and the IBM PC, one destined to cause IBM much chagrin: whereas the former ran an in-house-developed IBM operating system, the operating system of the latter belonged to Microsoft.

The IBM architecture was different from that of the Apple II in that its operating system resided on disk, to be booted into memory at system startup, rather than being housed in ROM. Still, every computer needs to have some code in ROM. On an IBM PC, this code was known as the “Basic Input/Output System,” or BIOS, a nomenclature borrowed from the CP/M-based machines that preceded it. The BIOS was responsible on startup for doing some self-checks and configuration and booting the operating system from disk. It also contained a set of very basic, very low-level routines to do things like read from and write to the disks, detect keyboard input, or display text on the screen; these would be called constantly by MS-DOS and, very commonly, by applications as well while the machine was in operation. The BIOS was the one piece of software for the IBM PC that IBM themselves had written and owned, and for obvious reasons they weren’t inclined to share it with anyone else. Two small companies, Corona Data Systems and Eagle Computer, would simply copy IBM’s BIOS a la Franklin. It took the larger company all of one day to file suit and force complete capitulation and market withdrawal when those machines came to their attention in early 1984.

Long before those events, other wiser would-be cloners recognized that creating a workalike, “clean-room” version of IBM’s BIOS would be the key to executing a legal IBM clone. The IBM PC’s emphasis on modularity and future expansion meant that it was a bit more forgiving in this area than the likes of the more tightly integrated Apple II. Yet an IBM-compatible BIOS would still be a tricky business, fraught with technical and financial risk.

As the IBM PC was beginning to ship, a trio of Texas Instruments executives named Rod Canion, James Harris, and William Murto were kicking around ideas for getting out from under what they saw as a growing culture of non-innovation inside TI. Eager to start a business of their own, they considered everything from a Mexican restaurant to household gadgets like a beeper for finding lost keys. Eventually they started to ask what the people around them at TI wanted but weren’t getting in their professional lives. They soon had their answer: a usable portable computer that executives and engineers could cart around with them on the road, and that was cheap enough that their purchasing managers wouldn’t balk. Other companies had explored this realm before, most notably the brief-lived Osborne Computer with the Osborne 1, but those products had fallen down badly in the usability sweepstakes; the Osborne 1, for example, had a 5-inch display screen the mere thought of which could prompt severe eye strain in those with any experience with the machine, disk drives that could store all of 91 K, and just 64 K of memory. Importantly, all of those older portables ran CP/M, until now the standard for business computing. Canion, Harris, and Murto guessed, correctly, that CP/M’s days were numbered in the wake of IBM’s adoption of MS-DOS. Not wanting to be tied to a dying operating system, they first considered making their own. Yet when they polled the big software publishers about their interest in developing for yet another new, incompatible machine the results were not encouraging. There was only one thing for it: they must find a way to make their portable compatible with the IBM PC. If they could bring out such a machine before IBM did, the spoils could be enormous. Prominent tech venture capitalist Ben Rosen agreed, investing $2.5 million to help found Compaq Computer Corporation in February of 1982. What with solid funding and their own connections within the industry, Canion, Harris, and Murto thought they could easily design a hardware-compatible portable that was better than anything else available at the time. That just left the software side.

Given Bill Gates’s reputation as the Machiavelli of the computer industry, we perhaps shouldn’t be surprised that some journalists have credited him with anticipating the rise of PC clones from well before the release of the first IBM PC. That, however, is not the case. All indications are that Gates negotiated a deal that let Microsoft lease MS-DOS to IBM rather than sell it to them simply in the expectation that the IBM PC would be a big success, enough so that an ongoing licensing fee would amount to far more than a lump-sum payout in the long run. Thus he was as surprised as anyone when Compaq and a few other early would-be cloners contacted him to negotiate MS-DOS license deals for their own machines. Of course, Gates being Gates, it took him all of about ten minutes to grasp the implications of what was being requested, and to start making deals that, not incidentally, actually paid considerably better than the one he’d already made with IBM.

The BIOS would be a tougher nut to crack, the beachhead on which this invasion of Big Blue’s turf would succeed or fail. Having quickly concluded that simply copying IBM’s ROMs wasn’t a wise option, Compaq hired a staff of fifteen programmers who would dedicate the months to come to creating a slavish imitation. Programmers with any familiarity at all with the IBM BIOS were known as “dirty,” and barred from working on the project. Instead of relying on IBM’s published BIOS specifications (which might very well be incorrect due to oversight or skulduggery), the team took the thirty biggest applications on the market and worked through them one at a time, analyzing each BIOS call each program made and figuring out through trial and error what response it needed to receive. The two trickiest programs, which would go on to become a sort of stress test for clone compatibility both inside and outside of Compaq, proved to be Lotus 1-2-3 and Microsoft Flight Simulator.

Before the end of the year, Compaq was previewing their new portable to press and public and working hard to set up a strong dealer network. For the latter task they indulged in a bit of headhunting: they hired away from IBM H. L. ”Sparky” Sparks, the man who had set up the IBM PC dealer network. Knowing all too well how dealers thought and what was most important to them, Sparks instituted a standard expected dealer markup of 36 percent, versus the 33 percent offered by IBM, thus giving them every reason to look hard at whether a Compaq might meet a customer’s needs just as well or better than a machine from Big Blue.

The Compaq Portable

Compaq’s first computer, the Portable

Savvy business realpolitik like that became a hallmark of Compaq. Previously clones had been the purview of small upstarts, often with a distinct air of the fly-by-night about them. The suburban-Houston-based Compaq, though, was different, not only from other cloners but also from the established companies of Silicon Valley. Compaq was older, more conservative, interested in changing the world only to the extent that that meant more Compaq computers on desks and in airplane luggage racks. ”I don’t think you could get a 20-year-old to not try to satisfy his ego by ‘improving’ on IBM,” said J. Steven Flannigan, the man who led the BIOS reverse-engineering effort. “When you’re fat, balding, and 40, and have a lot of patents already, you don’t have to try.” That attitude was something corporate purchasing managers could understand. Indeed, Compaq bore with it quite a lot of the same sense of comforting stolidity as did IBM itself. Not quite the first to hit the market with an IBM clone with a “clean” BIOS (that honor likely belongs to Columbia Data Products, a much scruffier sort of operation that would be out of business by 1985), Compaq nevertheless legitimized the notion in the eyes of corporate America.

The Compaq Portable goes flying

The worst possible 1980s airplane seatmate: a business traveler lugging along a Compaq Portable.

Yet the Compaq Portable that started shipping very early in 1983 also succeeded because it was an excellent and — Flannigan’s sentiments aside — innovative product. By coming out with their portable before IBM itself, Compaq showed that clones need not be mere slavish imitations of their inspirations distinguished only by a lower price. “Portable” in 1983 did not, mind you, mean what it does today. The Compaq Portable was bigger and heavier  — a full 28 pounds — than most desktop machines of today, something you manhandled around like a suitcase rather than slipping into a pocket or backpack. There wasn’t even a battery in the thing, meaning the businessperson on the go would likely be doing her “portable” computing only in her hotel room. Still, it was very thoughtfully designed within the technical constraints of its era; you could for instance attach it to a real monitor at your desk to enjoy color graphics in lieu of the little 9-inch monochrome screen that came built-in, a first step on the road to the ubiquitous laptop docking stations of today.

Launching fortuitously just as some manufacturing snafus and unexpected demand for the new PC/XT were making IBM’s own computers hard to secure in some places, the Compaq Portable took off like a rocket. Compaq sold 53,000 of them for $111 million in sales that first year, a record for a technology startup. IBM, suddenly in the unaccustomed position of playing catch-up, released their own portable the following year with fewer features but — and this was truly shocking — a lower price than the Compaq Portable; by forcing high-and-mighty IBM to compete on price, Compaq seemed to have somehow turned the world on its head. The IBM Portable PC was a notable commercial failure, first sign of IBM’s loosening grip on the monster they had birthed. Meanwhile Compaq launched their own head-to-head challenge that same year with the DeskPro line of desktop machines, to much greater success. Apple may have been attacking IBM in melodramatic propaganda films and declaring themselves and IBM to be locked in a battle of Good versus Evil, but IBM hardly seemed to notice the would-be Apple freedom fighters. The only company that really mattered to IBM, the only company that scared them, wasn’t sexy Apple but buttoned-down, square-jawed Compaq.

But Compaq was actually far from IBM’s only problem. Cloning just kept getting easier, for everyone. In the spring of 1984 two little companies called Award Software and Phoenix Technologies announced identical products almost simultaneously: a reverse-engineered, completely legal IBM-compatible BIOS which they would license to anyone who felt like using it to make a clone. Plenty of companies did, catapulting Award and Phoenix to the top of what was soon a booming niche industry (they would eventually resolve their rivalry the way that civilized businesspeople do it, by merging). With the one significant difficulty of cloning thus removed, making a new clone became almost a triviality, a matter of ordering up a handful of components along with MS-DOS and an off-the-shelf BIOS, slapping it all together, and shoving it out the door; the ambitious hobbyist could even do it in her home if she liked. By 1986, considerably more clones were being sold than IBMs, whose own sales were stagnant or even decreasing.

That year Intel started producing the 80386, the third generation of the line of CPUs that powered the IBM PC and its clones. IBM elected to wait a bit before making use of it, judging that the second-generation 80286, which they had incorporated into the very successful PC/AT in 1984, was still plenty powerful for the time being. It was a bad decision, predicated on a degree of dominance which IBM no longer enjoyed. Smelling opportunity, Compaq made their own 80386-based machine, the DeskPro 386, the first to sport the hot new chip. Prior to this machine, the cloners had always been content to let IBM pave the way for such fundamental advances. The DeskPro 386 marks Compaq’s — and the clone industry’s — coming of age. No longer just floating along in the wake of IBM, tinkering with form factors, prices, and feature sets, now they were driving events. Already in November of 1985, Bill Machrone of PC Magazine had seen where this was leading: “Now that it [IBM] has created the market, the market doesn’t necessarily need IBM for the machines.” We see here business computing going through its second fundamental shift (the first being the transition from CP/M to MS-DOS). What was an ecosystem of IBM and IBM clones now became a set of sometimes less-than-ideal, sometimes accidental, but nevertheless agreed-upon standards that were bigger than IBM or anyone else. IBM, Machrone wrote, “had better conform” to the standards or face the consequences just like anyone else. Tellingly, it’s at about this time that we see the phrase “IBM clone” begin to fade, to be replaced by “MS-DOS machine” or “Intel-based machine.”

The emerging Microsoft/Intel juggernaut (note the lack of an “IBM” in there) would eventually conquer the home as well. Already by the mid-1980s certain specimens of the breed were beginning to manifest features that could make them attractive for the home user. Let’s rewind just slightly to look at the most important of them, which I’ve mentioned in a couple of earlier articles but have never really given its full due.

When the folks at Radio Shack, trying to figure out what to do with their aging, fading TRS-80 line, saw the ill-fated IBM PCjr, they saw things well worth salvaging in its 16-color graphics chip and its three-voice sound synthesizer, both far superior to the versions found in its big brothers. Why not clone those pieces, package them into an otherwise fairly conventional PC clone, and sell the end result as the perfect all-around computer, one which could run all the critical business applications but could also play games in the style to which kids with Commodore 64s were accustomed? Thanks to the hype that had accompanied the PCjr’s launch, there were plenty of publishers out there with huge inventories of games and other software that supported the PCjr’s audiovisuals, inventories they’d be only too eager to unload on Radio Shack cheap. With those titles to prime the pump, who knew where things might go…

Launched in late 1984, the Tandy 1000 was the first IBM clone to be clearly pitched not so much at business as at the ordinary consumer. In addition to the audiovisual enhancements and very aggressive pricing, it included DeskMate, a sort of proto-GUI operating environment designed to insulate the user from the cryptic MS-DOS command prompt while giving access to six typical home applications that came built right in. A brilliant little idea all the way around, the Tandy 1000 rescued Radio Shack from the brink of computing irrelevance. It also proved a godsend for many software publishers who’d bet big on the PCjr; John Williams credits it with literally saving Sierra by providing a market for King’s Quest, a game Sierra had developed for the PCjr at horrendous expense and to underwhelming sales given that platform’s commercial failure. Indeed, the Tandy 1000 became so popular that it prompted lots of game publishers to have a second look at the heretofore dull beige world of the clones. As they jumped aboard the MS-DOS gravy train, many made sure to take advantage of the Tandy 1000’s audiovisual enhancements. Thousands of titles would eventually blurb what became known as “Tandy graphics support” on their boxes and advertisements. Having secured the business market, the Intel/Microsoft architecture’s longer, more twisting road to hegemony over home computing began in earnest with the Tandy 1000. And meanwhile poor IBM couldn’t even get proper credit for the graphics standard they’d actually invented. Sometimes you just can’t win for losing.

Another sign of the nascent but inexorably growing power of Intel/Microsoft in the home would come soon after the Tandy 1000, with the arrival of the first game to make many Apple, Atari, and Commodore owners wish that they had a Tandy 1000 or, indeed, even one of its less colorful relatives. We’ll get to that soon — no, really! — but first we have just one more detour to take.

(I was spoiled for choice on sources this time. A quick rundown of periodicals: Creative Computing of January 1983; Byte of January 1983, November 1984, and August 1985; PC Magazine of January 1987; New York Times of November 5 1982, October 26 1983, January 5 1984, February 1 1984, and February 22 1984; Fortune of February 18 1985. Computer Wars by Charles H. Ferguson and Charles R. Morris is a pretty good book-length study of IBM’s trials and tribulations during this period. More information on the EACA clones can be found at Terry Stewart’s site. More on Compaq’s roots in Houston can be found at the Texas State Historical Association. A few more invaluable links are included in the article proper.)

 
 


Amnesia

Thomas M. Disch, circa 1985

I feel fairly confident in stating that Thomas M. Disch trails only Robert Pinsky as the second most respected literary figure to turn his hand to the humble text adventure — speaking in terms of his literary prestige at the time of his game’s release, that is. The need for that last qualifier says much about his troubled and ultimately tragic life and career.

Disch burst to prominence alongside Roger Zelazny and the rest of science fiction’s New Wave in the mid-1960s. Yet Disch’s art was always even more uncompromising — and usually more uncompromisingly bleak — than that of his peers. His first novel bears the cheery name of The Genocides, and tells the story of the annihilation of humanity by an alien race who remake the Earth into a hyper-efficient nutrient farm, apparently without ever even recognizing humans as sentient. In its final scenes the remnants of the human race crawl naked through the innards of the aliens’ giant plants, stripped of even a veneer of civilization, reduced to feeding and fucking and waiting to be eradicated like the unwanted animal infestations they are. Camp Concentration — sensing a theme? — another early novel that is perhaps his most read and most acclaimed today, tells of another ignominious end to the human race, this time due to an intelligence-boosting super-drug that slowly drives its experimental recipients insane and then gets loose to spread through the general population as a contagion.

The protagonist of the latter novel is a pompous overweight intellectual who struggles with a self-loathing born of his homosexual and gastronomic lusts, a man who can feel uncomfortably close to Disch himself — or, more sadly, to the way Disch, a gay man who grew up in an era when that was a profoundly shameful thing to be, thought others must see him. Perhaps in compensation, he became a classic “difficult” artiste; his reputation as a notable pain in the ass for agents, editors, and even fellow writers was soon well-established throughout the world of publishing. He seemed to crave a validation from science fiction which he never quite achieved — he would never win a Hugo or Nebula for his fiction — while at the same time often dismissing and belittling the genre when not picking pointless fights within it with the likes of Ursula Le Guin, whom he accused of being a fundamentally one-dimensional political writer concerned with advancing a “feminist agenda”; one suspects her real crime was that of selling far more books and collecting far more awards than Disch. Yet just when you might be tempted to dismiss him as an angry crank, Disch could write something extraordinary, like 334, an interwoven collection of vignettes and stories set in a rundown New York tenement of the near future that owes as much to James Joyce as it does to H.G. Wells; or On Wings of Song, both a sustained character study of a failed artist and a brutal work of satire in precise opposition to the rarefied promise of its title — these “Wings of Song,” it turns out, are a euphemism for a high-tech drug high. Disch wrote and wrote and wrote: high-brow criticism of theater and opera for periodicals like The New York Times and The Nation; reams of science-fiction commentary and criticism; copious amounts of poetry (always under the name “Tom Disch”), enough to fill several books; mainstream horror novels more accessible than most of his other efforts, which in 1991 yielded at last some of the commercial rewards that had eluded his science fiction and poetry when he published The M.D., his only bestseller; introductions and commentaries to the number of science-fiction anthologies he curated; two plays and an opera libretto; and, just to prove that the soul of this noted pessimist did house at least a modicum of sweetness and light, the children’s novel The Brave Little Toaster, later adapted into the cult classic of an animated film that is still the only Disch story ever to have made it to the screen.

The dawn of the brief bookware boom found Disch at something of a crossroads. On Wings of Song, published in 1979, would turn out to be his last major science-fiction novel, its poor commercial performance the final rejection that convinced him, the occasional short story or work of criticism aside, to write in other genres for the remaining quarter century of his life. He was just finishing his first horror novel, The Businessman, when his publisher Harper & Row came to him to ask if he might be interested in making his next novel interactive, in the form of the script for a computer game. Like just about every other book publisher in the United States, Harper & Row were in equal measure intrigued by the potential for interactive literature and terrified lest they be left out of a whole new field of literary endeavor. They were also, naturally, eager to leverage their existing stable of authors. Disch, a respected and established author of “literary” genre fiction who didn’t actually sell all that well as a rule, must have seemed an ideal choice; they’d get the cachet of his name without foregoing a bunch of guaranteed sales of a next traditional novel. For his part, Disch was intrigued, and jumped aboard with enthusiasm to write Amnesia.

We know quite a lot about Disch’s plans for the game thanks to a fellow historian named Stephane Racle, who in 2008 discovered his design script, an altogether fascinating document totaling almost 450 pages, along with a mock-up of Harper & Row’s planned packaging in a rare-books shop. The script evinces by its length and detail alone a major commitment to the project on Disch’s part. He later claimed to find it something of a philosophical revelation.

When you’re working on this kind of text, you’re operating in an entirely different mode from when you’re writing other forms of literature. You’re not writing in that trance state of entering a daydream and describing what’s to the left or right, marching forward, which is how most novels get written. Rather, you have to be always conscious of the ways the text can be deconstructed. In a very literal sense, any computer-interactive text deconstructs itself as you write because it’s always stopping and starting and branching off this way and that. You are constantly and overtly manifesting those decisions usually hidden in fiction because, of course, you don’t normally show choices that are ruled out — though in every novel the choices that are not made are really half the work, an invisible presence. With Amnesia, I found myself working with a form that allowed me to display these erasures, these unfollowed paths. It’s like a Diebenkorn painting, where you can see the lines that haven’t quite been covered over by a new layer of paint. There are elements of this same kind of structural candor in a good Youdunit.

Disch came to see the player’s need to figure out what to type next as a way to force her to engage more seriously with the text, to engage in deep reading and thereby come to better appreciate the nuances of language and style that were so important to him as a writer.

Readers who ordinarily skim past such graces wouldn’t be allowed to do that because they’d have to examine the text for clues as to how to respond; they’d have to read slowly and carefully. I thought that was theoretically appealing: a text whose form allowed me a measure of control over the readerly response in a way unavailable to a novelist or short-story writer. I’ve always been frustrated that genre readers are often addictive readers who will go through a novel in one night. I can’t read at that speed — and I don’t like to be read at that speed, either.

Philosophical flights aside, Disch didn’t shirk the nitty-gritty work that goes into crafting an interactive narrative. For instance, he painstakingly worked through how the protagonist should be described in the many possible states of dress he might assume. He even went so far as to author error messages to display if the player, say, tries to take off his pants without first removing his shoes. He also thought about ways to believably keep the story on track in the face of many possible player choices. One section of the story, for example, requires that the player be wearing a certain white tuxedo. Disch ensures this is the case by making sure the pair of jeans the player might otherwise choose to wear have a broken zipper which makes them untenable (this also offers an opportunity for some sly humor, an underrated part of Disch’s arsenal of writing talents). Even Douglas Adams, a much more technically aware writer who was very familiar with Infocom’s games before collaborating with Steve Meretzky on The Hitchhiker’s Guide to the Galaxy, couldn’t be bothered with this kind of detail work; he essentially authored just the main path through his game and left all the side details up to Meretzky.

Amnesia‘s story is not, outside the presence of a drug capable of inducing sustained and ever-encroaching amnesia, science fiction. It’s rather a noirish mystery in which no character, including the amnesiac protagonist, is pure, everyone has multiple layers of secrets and motivations, and nothing is quite what it initially seems. Disch almost seems to have challenged himself to make use of every hoary old cliché he can think of from classic detective fiction, including not only the device of amnesia itself but also hayseed Texans who shoot first and ask questions later, multiple femme fatales, and even two men who look so alike they can pass as identical twins. It takes a very good writer to get away with such a rogues gallery of stereotypes. Luckily, Disch was a very good writer when he wanted to be. Amnesia is not, mind you, deserving of mention alongside Disch’s most important literary works. Nor, one senses, is it trying to be. But it is a cracker of a knotty detective story, far better constructed and written than the norm in adventure games then or now. Among its most striking features are frank and even moving depictions of physical love that are neither pornographic nor comedic, arguably the first such to appear in a major commercial game.

 

Cognetics — Pat Reilly, Kevin Bentley, Lis Romanov, and Charlie Kreitzberg — trying to be EA rock stars and, with the notable exception of Bentley, failing miserably at it.

To implement his script, Harper & Row chose a tiny New Jersey company called Cognetics who were engaged in two completely different lines of endeavor: developing the user interface for Citibank ATMs and developing edutainment software for Harper & Row, specifically a line of titles based on Jim Henson’s Fraggle Rock television series. The owner of Cognetics, Charlie Kreitzberg, already had quite a long background in computing for both business and academia, having amongst other accomplishments authored a standard programming text called The Elements of FORTRAN Style a decade before. Working with some colleagues, Kreitzberg had developed an extendible version of the Forth programming language with a kernel of just 6 K or so to facilitate game development on the Apple II. He dubbed this micro-Forth “King Edward” for reasons known only to him. The actual programming of Amnesia he turned over to a local kid named Kevin Bentley; they had met through Kreitzberg’s wife, who shopped at the grocery store owned by Bentley’s family. And so it was poor young Kevin Bentley who had Disch’s doorstop of a script dropped on his desk — no one had apparently bothered to tell the untechnical Disch about the need to limit his text to fit into the computers of the time — with instructions to turn it into a working game. He had nothing to start with but the script itself and that 6 K implementation of Forth; he lacked even the luxury of an adventure-specific programming language like ZIL, SAL, AGI, or Comprehend.

It was of course a hopeless endeavor. Not only had Disch provided far, far too much text, but he’d provided it in a format that wasn’t very easy to work with. Disch, for understandable reasons, thought like a storyteller rather than a world builder. Therefore, and in the absence of other guidance, he’d written his story from the top down as essentially a hypertext narrative, a series of branching nodes, rather than from the bottom up, as a set of objects and rooms and people with descriptions of how they acted and reacted and how they could be manipulated by the player. Each part of his script begins with some text, followed by additional text passages to display if the player types this, that, or the other. Given the scope of possibility open to the player of a parser-driven game, that way lies madness. We’ve seen this phenomenon of text adventures that want to be hypertext narratives a surprising number of times already on this blog. Amnesia is perhaps the most painful victim of this fundamental confusion, born of an era when hypertext fiction didn’t yet exist outside of Choose Your Own Adventure books and any text- and story-driven game was assumed to necessarily be a parser-driven text adventure.
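To make the mismatch concrete, here is a minimal sketch of the two shapes of data involved. Nothing below comes from Disch’s script or Bentley’s code; the node names, room names, and fields are invented purely for illustration.

    # Disch wrote top-down, as a branching hypertext: a passage of prose plus
    # a handful of anticipated player inputs, each leading to another passage.
    hypertext_node = {
        "text": "You wake in a hotel room you don't recognize...",
        "branches": {
            "look in mirror": "node_mirror",
            "answer the phone": "node_phone",
            "get dressed": "node_dressing",
        },
    }

    # A parser-driven game instead needs a bottom-up world model: rooms,
    # objects, and rules for how each thing reacts to whatever the player tries.
    world_model = {
        "rooms": {
            "hotel_room": {"exits": {"out": "hallway"},
                           "objects": ["mirror", "phone", "jeans"]},
        },
        "objects": {
            "jeans": {"wearable": True, "zipper_broken": True},
        },
    }

Translating the first structure into the second by hand, for a script the length of a novel, was the job that landed on Bentley’s desk.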


Harper & Row’s original Amnesia box art

In mid-1984, just as it was dawning on Cognetics what a mouthful of a project they’d bitten off, Harper & Row, the instigators of the whole thing, suddenly became the first of the big book publishers to realize that this software business was going to be more complicated than anticipated and, indeed, probably not worth the effort. (The depth of the blasé belief of which they were newly disabused that software publishing couldn’t be that hard is perhaps best measured by the fact that they had all of the box art for Amnesia prepared before Cognetics had really gotten started with the actual programming, evidently thinking that, what with Disch’s script delivered, it couldn’t be long now.) They abruptly pulled out, telling Kreitzberg he was welcome to do what he liked with Fraggle Rock and Amnesia. He found a home for the former with CBS, another old-media titan still making a go of software for the time being, and for the latter with Electronic Arts, eager to join many of their peers on the bookware bandwagon. EA producer Don Daglow was given the unenviable task of trying to mediate between Disch and Cognetics and come up with some sort of realizable design. He would have his hands full, to such an extent that EA must soon have started wondering why they’d signed the project at all.

In addition to being a noir mystery, Disch had conceived Amnesia as a sort of extended love letter to his adopted home of Manhattan. Telarium’s Fahrenheit 451, when released in late 1984, would also include a reasonably correct piece of Manhattan. Disch, however, wanted to go far beyond that game’s inclusion of twenty blocks or so of Fifth Avenue. He wanted to include almost all of the island, from Battery Park to the Upper West and East Side, with a functioning subway system to get around it. The resulting grid of cross-streets must add up to thousands of in-game locations. It was problematic on multiple levels; not only could Disch not possibly write enough text to properly describe this many locations, but the game couldn’t possibly contain it. Yet Disch, entranced by the idea of roaming free through a virtual Manhattan, refused to be disabused of the notion. No, EA and Cognetics had to admit, such a thing wasn’t technically impossible. It was just that this incarnation of Manhattan would have to be 99 percent empty, a grid of locations described only by their cross-streets, with only the occasional famous landmark or plot-relevant area poking out of the sea of nothingness. That’s exactly what the finished game would end up being, rivaling Level 9’s Snowball for the title of worst ratio of relevant to irrelevant locations in the history of the text adventure.

The previous paragraph underlines the most fundamental problem that dogged the various Amnesia teams. Disch never developed with Cognetics and EA the mutual respect and understanding that led to more successful bookware collaborations like Amazon, The Hitchhiker’s Guide to the Galaxy, and Mindwheel. Given the personality at the center of Amnesia, that’s perhaps not surprising. I described Disch as “difficult” earlier in this article, and, indeed, that’s exactly the word that Kevin Bentley used to describe him to me. His frustration with the collaboration was still palpable when I recently corresponded with him.

The conclusion I reached was that Tom wanted to write a book and have it turned into a game by creating a sort of screenplay adapted from a book. The trouble was that a screenplay to my mind was the wrong metaphor for an adventure game. The missing piece of the puzzle seemed to be that Tom didn’t grasp that an adventure game was a matrix of possibilities and it was up to the user to discover the route, and the point was not to cram the user toward the “conclusion.” Tom was very unhappy with the notion that the player might not experience the conclusion of the story the way that he intended in the script, so he insisted that the user be directed toward the conclusion.

Bentley and Kreitzberg met with Disch just a handful of times at his apartment near Union Square to try to iron out difficulties. The former remembers “lots of herbal tea being offered,” and being enlisted to fix problems with Disch’s computer and printer from time to time, but it’s safe to say that the sort of warm camaraderie that makes, say, the Mindwheel story such a pleasure to relate never developed. Before 1984 was out, frustrated with the endless circular feedback loop that the project had become and uninterested in the technical constraints being constantly raised as issues by his colleagues, Disch effectively washed his hands of the whole thing.

His exit did allow EA and Cognetics a freer hand, but that wouldn’t necessarily turn out for the better. Feeling that the game “was lacking in the standard sorts of gaming experience (like a score, sleep, food, etc.)” and looking for some purpose for that huge empty map of Manhattan, EA requested that Bentley shoehorn all that and more into the game; the player would now have to eat and sleep and earn money by taking odd jobs whilst trying to come to grips with the central mystery. The result was a shotgun marriage of the comparatively richly implemented plot-focused sections from Disch’s original script — albeit with more than half of the text and design excised for reasons of capacity — with a boring pseudo-CRPG that forces you to spend most of your time on logistics — earning money by begging or washing windows or doing other odd jobs, buying food and eating it, avoiding certain sections of the city after dark, finding a place to sleep and returning there regularly, dealing with the vagaries of the subway system — all implemented in little better than a Scott Adams level of detail. Daglow came up with an incomprehensible scoring system that tries to unify all this cognitive dissonance by giving you separate scores as a “detective,” a “character,” and a “survivor.” And as the cherry on top of this tedious sundae, EA added pedestrians who come up to you every handful of moves to ask you to look up numbers on a code wheel, one of the most irritating copy-protection measures ever implemented (and that, of course, is saying something).

All of this confusion fell to poor Kevin Bentley to program. He did a fine job, all things considered, even managing a parser that was, if not up to Infocom’s standards, at least no worse than those of its peers. Nevertheless, growing frustrated and impatient with the game’s progress, EA put him up in an “artist apartment” near their San Mateo, California, headquarters in February of 1985 so that he could work on-site on a game that was now being haphazardly designed by whoever happened to shout the loudest. He spent some nine months there dutifully implementing — and often de-implementing — idea after idea to somehow make the game playable and fun. Bentley turned in the final set of code in November of 1985, by which time “everyone was over it,” enthusiasm long since having given way to a desire just to get something up to some minimal standard out there and be done with it. Certainly Bentley himself was under no illusions: “as a game I thought it sorta bombed.” Impressed with his dogged persistence, EA offered him a job on staff: “But I was 20 and far from home. I knew if I left immediately and drove back to New Jersey I could be home for Thanksgiving.” Unsurprisingly given the nature of the experience, Amnesia would mark the beginning and the end of his career as a game developer. He would go on to a successful and still ongoing career in other forms of programming and computer engineering. Charlie Kreitzberg and Cognetics similarly put games behind them, but are still in business today as a consulting firm, their brief time in games just a footnote on their website.


EA’s released Amnesia package. Note that it’s simply called a “text adventure,” a sure sign that the bookware boom with its living literature and electronic novels has come and gone.

Amnesia, a deeply flawed effort released at last only during the sunset of the bookware boom, surprised absolutely no one at EA by failing to sell very well. It marks the only game EA would ever release to contain not a single graphic. Contemporary reviews were notably lukewarm, an anomaly for a trade press that usually saw very little wrong with much of anything. Computer Gaming World’s Scorpia, admittedly never a big fan of overtly literary or experimental games, issued a pithy summary that details the gist of the game’s problems.

Overall, Amnesia is an unsatisfying game. You can run around here, and run around there, and work up your triple scores as a detective, a character, and a survivor, but so what? Much of what you actually do in the game doesn’t get you very far towards the ultimate solution. Boiled down to the essentials, there are only three things you need to do here: follow up on the clue from TTTT, get and read the disk, and meet Bette. There are auxiliary actions associated with them, but those are the key points. So when you think back on the game as a whole, you don’t see yourself as having done, really, a whole lot, as having been the main character. It’s more as though you came to certain places in a book, and turned a page to get on with the story.

Bottom line: terrific prose, nice maps, too much novel, not enough adventure.

Disch, despite having walked away from the hard work of trying to make the game better over a year before its release and despite having probably never even played the version of Amnesia which arrived in stores, took such reviews predictably personally. Amnesia, he pronounced, had been “one of the quickest disillusionments of my life.” He went on to blame the audience.

The real problem is that there’s simply no audience for this material, no one who would respond enthusiastically to what I do well. Those who buy it, who are aficionados of the form, are basically those who want trivial pursuits; and to offer them something, however entertaining, that involved reading and imaginative skills they did not care to exercise while playing with their computers was foolish. I felt like de Soto, who journeyed to Tennessee looking for the Fountain of Youth — an interesting enough trip, but neither of us found what we were looking for.

People who want to play this sort of game are looking, I suppose, for something like Douglas Adams’s Hitchhiker, where they can have their familiar experiences replayed. The computer-interactive games that have done well — like the Hitchhiker’s or Star Trek series — have been tied in with copyrighted materials that have already had success with the target audience in prior literary forms. I don’t think the quality of those scripts compares to what I did in Amnesia — Adams’s scripts, for example, are actually very good of a kind, but it’s a matter of one little joke after another. The notion of trying to superimpose over this structure a dramatic conception other than a puzzle was apparently too much for the audience. In the end, I just produced another literary curiosity.

There’s more than a grain of truth in all this if we can overlook the condescension toward Douglas Adams that would be more worthily applied to one of his derivatives like Space Quest. A computer-games audience more interested in the vital statistics of dragons and trolls than the emotional lives of the socially engaged humans around them undoubtedly did prove sadly unreceptive to games that tried to be about something. And reviewers like Scorpia did carry into their columns disconcertingly hidebound notions of what an “adventure game” could and should be, and seemed to lack even the language to talk about “a dramatic concept other than a puzzle,” to the extent that Scorpia’s columns on Infocom’s two most forthrightly literary works, A Mind Forever Voyaging and Trinity, are little more than technical rundowns and catalogs of puzzle hints — not to mention her reaction to one of Infocom’s first bold literary experiments, the ending to Infidel. Poor Brian Moriarty found himself actively playing down the thematic message of Trinity in interviews in the hope of actually, you know, selling some copies of this supposedly “depressing” game. It’s just that Amnesia, being well-nigh unplayable, is an exceedingly poor choice to advance this argument, and for that Disch deserves his due share of the responsibility. At some level, having just served up — or having at least allowed his name to be attached to — a bad game, he’s not entitled to this argument.


Disch one month before his death.

Disch’s ultimate fate was an exceedingly sad one. After the millennium, his world crashed around him brick by brick. First there was the shock of witnessing the September 11 attack on his beloved New York, a shock that seemed to break a circuit somewhere deep inside him; often open to charges of nihilism, extreme pessimism, even misanthropy during his earlier career, it wasn’t until after September 11 that his hatred for the people who had done this made him begin to sound like a bigot. Then in 2005 Charles Naylor, his partner of three decades, died. In the aftermath came an effort by his landlord to evict him from the rent-controlled apartment the two had shared, an effort which appeared destined for success. With his writing career decidedly on the wane, his books dropping out of print one by one, and his income correspondingly diminishing, he did most of his writing after Naylor’s death in his LiveJournal blog. Amidst the poorly spelled and punctuated screeds against Muslim terrorists and organized religions of all stripes, depressingly similar to those of a million other angry bloggers, would come the occasional pearl of wisdom or poetry to remind everyone that somewhere inside this bitter, suffering man was the old Thomas M. Disch. And suffer he did, from sciatica, arthritis, diabetes, and ever-creeping obesity that left him all but housebound, trapped alone in his squalid apartment with only his computer for company. On July 4, 2008, he ended the suffering with a shotgun. In 1984, for Amnesia, a younger Disch had written from that same apartment that “suicide is always a dumb idea.” Obviously the pain of his later years changed his mind.

One of the writers with whom Disch seemed to feel the greatest connection was another brilliant, difficult man who always seemed to carry an aura of doom with him, and another who died in tragically pathetic circumstances: Edgar Allan Poe. Disch once wrote a lovely article about Poe’s “appalling life,” the last year of which “seems a headlong, hell-bent rush to suicide.” Like Disch, Poe also died largely forgotten and unappreciated. Perhaps someday Disch will enjoy a revival akin to that of Poe. In the meantime that Amnesia script sits there tantalizingly, ripe as ever to become a modern work of interactive fiction that need not leave out a single word, that could give us Disch’s original vision undiluted by scores and copy protection and money problems and hunger and sleep timers. Maybe he’d forgive us for trimming some of that ridiculous Manhattan. And maybe, just maybe, his estate would be willing to give its blessing. Any takers?

(First and foremost, huge thanks to Kevin Bentley for sharing with me much of the history of Cognetics and Amnesia. Disch himself talked about Amnesia at greatest length in an interview published in Larry McCaffery’s Across the Wounded Galaxies: Interviews with Contemporary American Science Fiction Writers. Disch’s writings on science fiction are best collected in the Hugo-winning The Dreams Our Stuff Is Made Of and On SF. Scorpia’s review of Amnesia appeared in the January/February 1987 Computer Gaming World.

I’ve made Amnesia available here for download in its MS-DOS incarnation with a DOSBox configuration file that works well with it. Note that you’ll need to use the file “ACODES.TXT” in place of the code wheel when the irritating pedestrians start harassing you.)

 

Posted on September 29, 2014 in Digital Antiquaria, Interactive Fiction

 


Leader Board

Leader Board

Like just about every other sport, golf made it to computer screens quite early. A textual version was passed around in BASIC circles even before the arrival of the trinity of 1977, and was included in the landmark 1978 book BASIC Computer Games. Two years later, Atari released their blandly if descriptively named Golf cartridge for the VCS. Yet neither of these crude efforts, nor the ones which followed over the next few years, did the sport much justice. Those that had graphics at all were all played from a disembodied overhead perspective that could make them feel more like pinball than golf, and no one came close to computerizing the mix of science, art, and exquisite terror that is the golf swing. Then, as these things so often happen, a whole field of golf games appeared in 1986 which showed their courses from an actual golfer’s perspective and put the player’s focus squarely where it belongs, on the swing itself.


BASIC Golf running on a Commodore PET


Atari’s Golf cartridge for the VCS

Of this suddenly crowded field the two most popular turned out to be Accolade’s Mean 18 and Access’s Leader Board. If you try them both out today, you’re likely to be more impressed, at least initially, by the former than the latter. Mean 18 is a much more complete simulation of the real game, including trees, sand traps, water hazards, varying elevations, re-creations of actual courses, even the chance to make more courses of your own with an included editor. Leader Board, on the other hand, turns you loose in a surreally minimalist environment of empty land and water and absolutely nothing else. On a bullet list of features, there’s no comparison. Yet if you play them both a bit you might just find that Leader Board, for all that it lacks, nevertheless feels better. For me anyway, it’s just somehow more fun. But even if you still prefer Mean 18, Leader Board deserves respect, as well as the chance to be graded on something of a curve. While Mean 18, you see, ran only on the bigger 16-bit machines, Leader Board was born and bred on the humble Commodore 64.


Mean 18


Leader Board

Given the technical and conceptual achievement it represents, I thought we’d do something we haven’t done in quite a while: look at Leader Board as the Carver brothers would have seen it, from the perspective of designers and programmers putting it together piece by piece. I will get just a bit technical in some of what follows, so you might want to review my earlier articles on the Commodore 64 and its capabilities, as well as the parts of my Elite history that dealt with the fraught transition from 2D to 3D graphics.

So, the Carver brothers wanted to create a golf simulation from a 3D perspective on a computer with a 1 MHz processor and 64 K of memory. Where to start? Well, the first thing to do was to simplify the bounds of the simulation brutally, out of the knowledge that anything you abstract away today represents the best kind of work, the kind that you don’t have to do at all. Any simulation is a simplification of reality. The art of the science is figuring out just how much detail is necessary. Suffice to say that the Carver brothers drew that line much farther along than anyone could get away with today. Maybe they could add some complications back in later, once they had an initial working version. In the meantime, much of what we think of when we think of the game of golf got tossed out the window, not without the occasional groan of regret: trees, sand traps, any notion of fairways as opposed to roughs, any notion of a putting green as anything other than a perfectly circular area around the hole with a radius of 64 feet, any concept of elevation when not on the green. Wind made the cut, but with the odd yet simplifying quality that it will always blow in the same direction relative to the golfer no matter which way he faces.

Despite all the editing, the Carvers still needed to map a 3D landscape, simplified though it may be, into the Commodore 64’s memory and be able to display the scenery in proper perspective from any location, facing in any direction, as the player hacked her way toward the hole. Additionally, given the success their earlier games had enjoyed in Europe it was critical to them that this one also be playable from cassette, meaning the whole program — including the four separate 18-hole courses they wanted to include — should reside in memory at once. This was hardly playing to the natural strengths of the 64, whose graphics had been designed with 2D sprite-based games in mind. The solution they arrived at was to first design and store about 30 different polygons, each of which could be used to represent an “island” on the course, which was otherwise assumed to be pure water. Each hole of each course could then be built by arranging these islands, up to seven of them per hole, in different, often overlapping configurations. Just as his tile-graphics system allowed Richard Garriott to build huge worlds by mixing and matching reusable chunks of landscape, these reusable polygons saved the Carvers gobs of precious memory. The views of the course had to be drawn using the Commodore 64’s multicolor bitmap mode; they were too irregular for character graphics. Thus every bit of memory saved was doubly precious, as a multicolor bitmap display consumes a full 10 K of the 64’s 64 K. If you look at the diagrams of the holes, you can see how they’re all built from the same pile of interchangeable parts.

Leader Board course diagrams
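A rough sketch of that idea in Python (nothing like the Carvers’ actual 6502 code or data format) might look like the following; the shapes, offsets, and hole layout are invented for illustration.

    # Each island polygon is stored exactly once in a shared library...
    ISLAND_LIBRARY = {
        0: [(0, 0), (120, 0), (120, 60), (0, 60)],      # a simple rectangle
        1: [(0, 0), (80, 20), (60, 90), (-10, 50)],     # an irregular blob
        # ... roughly 30 shared shapes in the real game
    }

    # ...and each hole is just a short list of placements (up to seven per
    # hole) plus tee and cup positions, far cheaper than storing unique
    # geometry for four courses of eighteen holes apiece.
    hole = {
        "tee": (10, 30),
        "cup": (400, 55),
        "islands": [(0, (0, 0)), (1, (150, 10)), (0, (320, 25))],  # (id, offset)
    }

    def hole_geometry(hole):
        """Expand a hole's placements into absolute polygon vertex lists."""
        expanded = []
        for poly_id, (ox, oy) in hole["islands"]:
            shape = ISLAND_LIBRARY[poly_id]
            expanded.append([(x + ox, y + oy) for (x, y) in shape])
        return expanded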

By applying the mathematics of 3D perspective, it was now possible to display views from any arbitrary location and facing for every hole — the first necessary step for a 3D golf game. As you can see from the video clip a bit further down the page, when playing the game you can actually watch each polygon/island being drawn in outline form and then filled in with color as each new perspective of the course is generated.
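For readers who want to see the shape of that math, here is a minimal perspective-projection sketch. It assumes a flat course at ground level and a 160 by 200 pixel screen; the constants and the function itself are illustrative only, not recovered from the game.

    import math

    def project(point, eye, facing, screen_w=160, screen_h=200,
                scale=120.0, eye_height=5.0):
        """Project a point on the (flat) course to screen coordinates as seen
        by a golfer standing at `eye` and turned toward the angle `facing`."""
        dx, dy = point[0] - eye[0], point[1] - eye[1]
        # Rotate into the viewer's frame: how far ahead, how far to the side.
        forward = dx * math.cos(facing) + dy * math.sin(facing)
        sideways = -dx * math.sin(facing) + dy * math.cos(facing)
        if forward <= 0:
            return None                      # behind the golfer; don't draw it
        # Perspective divide: distant points crowd toward the horizon line.
        sx = screen_w / 2 + scale * sideways / forward
        sy = screen_h / 2 + scale * eye_height / forward
        return int(sx), int(sy)

Run over every vertex of every island on the current hole, a routine like this yields the outlines that are then filled with color, which is the draw-and-fill process you can watch happening in the clip.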

Next must come the golfer himself. It was hugely important to the Carvers that he should make a correct, believable swing. Bruce therefore filmed Roger taking swings under carefully controlled conditions using a high-quality video camera. About every fourth frame of the swing was developed as a slide and projected onto a glass screen, from which Roger could trace it onto graph paper using colored pencils, to be translated from there into the grid of bits that makes up each frame of each sprite in the Commodore 64’s memory. Or rather, six different areas of the image were each individually translated: the actual golfer, club included, is built from no fewer than six of the 64’s eight available sprites, each of a single color and carefully placed in relation to its siblings; thus the golfer’s white shirt and hat are made from one sprite, his brown pants from another, his black club from yet another, etc. (Although Bruce Carver first made his reputation through his mastery of multicolor sprites, Leader Board actually makes no use of them.) As the golfer goes through his swing, each sprite steps through its own sequence of bitmaps to recreate as closely as possible the smooth swing that had been originally captured on video.
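As a sketch of the animation bookkeeping only (the layer names and frame counts below are invented; the real game traced roughly every fourth frame of video), the scheme might be expressed like this:

    # Six single-color hardware sprites, layered to form one golfer. Each layer
    # has its own little filmstrip of bitmaps traced from the video footage,
    # and all six are advanced in lockstep as the swing plays out.
    GOLFER_LAYERS = {
        "shirt_and_hat": {"color": "white", "frames": ["shirt_0", "shirt_1", "shirt_2"]},
        "pants":         {"color": "brown", "frames": ["pants_0", "pants_1", "pants_2"]},
        "club":          {"color": "black", "frames": ["club_0", "club_1", "club_2"]},
        # ...three more layers in the real game, one per hardware sprite used
    }

    def frames_at_step(step):
        """Which bitmap each sprite layer should display at this swing step."""
        return {name: layer["frames"][step % len(layer["frames"])]
                for name, layer in GOLFER_LAYERS.items()}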


Bruce Carver films Roger taking a golf swing.


Roger Carver traces his own image from a frame of video.

Now how to have the player actually control the swing? After much experimentation, the Carvers hit upon a system that didn’t try to duplicate the actual motions of a swing via complicated joystick jerks of the sort Epyx tended to favor in their Games series, but somehow just felt better than anything else. (The developers of Mean 18 came up with an almost identical system simultaneously but apparently independently.) This so-called “three-click” system has persisted with only modest variations for decades as the go-to control scheme for computerized golf; any new game that deviates from it always provokes intense debate, and those that opt for something other than this by now traditional approach often all but define themselves by their rejection of the golf-swing status quo.

In Leader Board, then, you first aim the shot horizontally with a small targeting cursor, then press and hold the joystick button to begin your back swing. You release it when you’re ready to end the back swing — more back swing will hit the ball farther — but must be careful not to wait too long. The golfer now begins his forward swing. Hit the button again just as the club strikes the ball to “snap” it straight, or slightly before or after to deliberately — or, more likely, accidentally — hook or slice it to the left or right. Timing being so critical in this, the very heart of the game of golf whether played in the real world or on a computer, the simulation here had to be absolutely smooth, consistent, and precise. As in many other places in Leader Board, the Carvers took advantage of the Commodore 64’s timer-interrupt system to be sure of this. (Timer interrupts work similarly to the raster interrupts I discussed in an earlier article, except that they are triggered not by the movements of the electron gun which paints the screen but rather can be set to occur at a precise interval of microseconds.)
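Here is a hedged sketch of that three-click logic in Python, treating the three button events as timestamps. The windows and thresholds are invented, and which side an early or late snap sends the ball is an assumption; only the overall flow follows the description above.

    BACKSWING_MAX = 1.0    # seconds of backswing before it counts as too long
    DOWNSWING_TIME = 0.6   # delay from release until the club meets the ball
    SNAP_WINDOW = 0.05     # how close the third click must be to fly straight

    def resolve_swing(press_time, release_time, snap_time):
        """Turn the player's three button timestamps into power and curve."""
        backswing = release_time - press_time
        if backswing > BACKSWING_MAX:
            return {"result": "overswing"}        # waited too long on the backswing
        power = backswing / BACKSWING_MAX         # longer backswing, longer shot
        impact_time = release_time + DOWNSWING_TIME
        error = snap_time - impact_time
        if abs(error) <= SNAP_WINDOW:
            curve = 0.0                           # snapped it straight
        else:
            curve = error                         # early or late: hook or slice
        return {"result": "hit", "power": power, "curve": curve}

On the real hardware the equivalents of these timestamps came from the timer interrupts described above, which is what kept the timing consistent from one swing to the next.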


After the ball is struck, its X, Y, and Z vectors are calculated, taking into account the swing itself, gravity and air resistance, and, if you’re playing at “Professional” level, the wind. The ball is represented by a seventh sprite, which can have a number of possible sizes depending on its distance from you. In a nice touch that adds a welcome note of verisimilitude, the eighth sprite is employed as the shadow of the ball in flight; before the ball is struck, when it’s lying on the ground before you, this sprite is used to represent the targeting cursor. The movement of the ball and its shadow are again tied to the 64’s interrupt timer to assure that they are absolutely smooth and believable. If the ball lands in the water, you have to try again; likewise, in yet another simplification, if you send it off the screen to left or right. Otherwise another view is generated from where the ball landed, and the hole continues. It is possible to hit the ball directly into the hole from the fairway, even to score a hole in one on some of the shorter holes, but it’s very, very difficult; in the couple of dozen complete games I’ve played recently (we got a bit obsessed with Leader Board around here for a while), I’ve managed it exactly once.
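A toy version of that flight model, in the same hedged spirit: the constants, units, and drag formula here are invented, and only the ingredients (gravity, air resistance, optional wind) come from the description above.

    GRAVITY = -32.0   # vertical acceleration, in arbitrary feet-per-second units
    DRAG = 0.02       # fraction of velocity lost to air resistance each step
    DT = 0.05         # integration timestep in seconds

    def fly(velocity, wind=(0.0, 0.0), start=(0.0, 0.0, 0.0)):
        """Integrate the ball's flight until it comes back down to the ground."""
        x, y, z = start
        vx, vy, vz = velocity
        while True:
            vx = (vx + wind[0] * DT) * (1.0 - DRAG)
            vy = (vy + wind[1] * DT) * (1.0 - DRAG)
            vz = (vz + GRAVITY * DT) * (1.0 - DRAG)
            x, y, z = x + vx * DT, y + vy * DT, z + vz * DT
            if z <= 0.0:
                return (x, y)    # landing spot: water, off-screen, or fair ground

From the landing spot the game either makes you retry (water, off the sides of the screen) or draws the next view, exactly as described above.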

More commonly, you’ll eventually end up on the putting green, defined in Leader Board as simply an arbitrary circle 64 feet in radius around the hole. With no need for the concept of a snap, the control scheme is here simplified: just aim with the targeting cursor, then hold down the button until the power meter reaches the desired level, keeping in mind that a ball that’s traveling too fast when it reaches the hole will bounce right over it.


In order to make putting any real challenge, the Carvers were forced to add back in the concept of elevation they had excised from the rest of the simulation. The problem became how to portray slope on the relatively small surface of the green given a screen resolution of just 160 X 200. The ideal method would have been to add color shading to visually indicate contour, but they already needed to keep available four colors — the maximum permitted by the Commodore 64’s VIC-II graphics chip in any 4 X 8-pixel region — for drawing the other elements of the landscape. The somewhat kludgy and not entirely intuitive solution became a two-line visual indicator, conveniently drawn in two of the available colors, to the left of the golfer. The vertical line represents the magnitude of the slope; the other line represents its direction. The same system is used to represent wind intensity and direction when not on the green.
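Putting can be sketched in the same hedged way; this is only a guess at the behavior described above, using the 64-foot green radius from the text but made-up numbers everywhere else.

    MAX_PUTT = 64.0          # a full-power putt rolls about the green's radius
    CAPTURE_MARGIN = 2.0     # how much "extra" roll the cup will still swallow

    def resolve_putt(power, distance_to_hole, slope_along_line=0.0):
        """Return 0.0 if the putt drops, otherwise feet short (-) or past (+)."""
        rolled = power * MAX_PUTT * (1.0 + slope_along_line)  # downhill rolls farther
        overshoot = rolled - distance_to_hole
        if 0.0 <= overshoot <= CAPTURE_MARGIN:
            return 0.0          # arrives slowly enough to fall in
        return overshoot        # left short, or so fast it bounces right over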

With that, plus a small battery of sound effects which are often cleverly reused — for instance, the splash when a ball strikes water is always the same waveform played at one of four volumes depending on the distance of the ball — the Carvers had something quite special, if also something that was, like any game, full of sacrifices and compromises. They had always seen this minimalist world of green land and blue water as a mere jumping-off point. Now, however, their planned shipping date loomed, and Access wasn’t in a financial position to miss it. Therefore Leader Board went out the door as-is very early in 1986. When it proved a hit, the Carvers happily returned to the Leader Board well again and again: via Leader Board Tournament, a bare-bones sequel featuring four new courses but the same engine; via Executive Leader Board later in 1986, which added sand traps and trees; and finally via World Class Leader Board and three course expansion disks (Famous Courses of the World) in 1987 and 1988. By this time, the Carvers had something that approached a real game of golf, with real-world golf courses like St. Andrews and Pebble Beach, fairways and roughs and a whole variety of trees and other hazards, and variably shaped and sized greens. They had also largely remade Access in the eyes of gamers, from the Beach-Head company to the Leader Board company. Having accomplished all they felt they could on the Commodore 64 and seeing which way the industry winds were blowing, the Carvers now turned to bigger MS-DOS machines and what would become the most successful golf franchise of all, Links — a story for another time.


Executive Leader Board


World Class Leader Board

Before we say goodbye to Leader Board, I want to take a moment here to say just how ridiculously entertaining it is, even in its most minimalist configuration. There’s something elegant and classic about those bifurcated, abstract landscapes of the first Leader Board — enough so that, while the later Leader Boards are certainly more impressive as golfing simulations, I’m not entirely sure they’re all that much better as computer games. Leader Board is an engaging little diversion played alone against the course, trying to come in under par (there is no computer opponent available). But get some friends together and it’s absolute magic, like so many of the best Commodore 64 games and so many of my personal favorites. Find yourself an open-minded friend or two or three who are willing to overlook 8-bit-era graphics and give it a shot; I’ve prepared a download that includes the original Leader Board, Executive Leader Board, and World Class Leader Board — which I think I can without causing a great deal of controversy call the definitive 8-bit golf game — with all the course disks also included courtesy of some ingenious hackers from the days of yore. Fire up a Commodore 64 emulator and try it even if you wouldn’t be caught dead on a real golf course. Golf just works on a computer, as millions of players with no interest whatsoever in the real game have discovered over the years. A grand tradition begins in earnest right here.

(Sources this time out are the same as for the last article.)

 
 


Apple, Carmen Sandiego, and the Rise of Edutainment

If there was any one application that was the favorite amongst early boosters of personal computing, it was education. Indeed, it could sometimes be difficult to find one of those digital utopianists who was willing to prioritize anything else — unsurprisingly, given that so much early PC culture grew out of places like The People’s Computer Company, who made “knowledge is power” their de facto mantra and talked of teaching people about computers and using computers to teach with equal countercultural fervor. Creative Computing, the first monthly magazine dedicated to personal computing, grew out of that idealistic milieu, founded by an educational consultant who filled a big chunk of its pages with plans, schemes, and dreams for computers as tools for democratizing, improving, and just making schooling more fun. A few years later, when Apple started selling the II, they pushed it hard as the learning computer, making deals with the influential likes of the Minnesota Educational Computing Consortium (MECC) of Oregon Trail fame that gave the machine a luster none of its competitors could touch. For much of the adult public, who may have had their first exposure to a PC when they visited a child’s classroom, the Apple II became synonymous with the PC, which was in turn almost synonymous with education in the days before IBM turned it into a business machine. We can still see the effect today: when journalists and advertisers look for an easy story of innovation to which to compare some new gadget, it’s always the Apple II they choose, not the TRS-80 or Commodore PET. And the iconic image of an Apple II in the public’s imagination remains a group of children gathered around it in a classroom.

For all that, though, most of the early educational software really wasn’t so compelling. The works of Edu-Ware, the first publisher to make education their main focus, were fairly typical. Most were created or co-created by Edu-Ware co-founder Sherwin Steffin, who brought with him a professional background of more than twenty years in education and education theory. He carefully outlined his philosophy of computerized instruction, backed as it was by all the latest research into the psychology of learning, in long-winded, somewhat pedantic essays for Softalk and Softline magazines, standard bearers of the burgeoning Apple II community. Steffin’s software may or may not have correctly applied the latest pedagogical research, but it mostly failed at making children want to learn with it. The programs were generally pretty boring exercises in drill and practice, lacking even proper titles. Fractions, Arithmetic Skills, or Compu-Read they said on their boxes, and fractions, arithmetic, or (compu-)reading was what you got, a series of dry drills to work through without a trace of wit, whimsy, or fun.

The other notable strand of early PC-based education was the incestuous practice of using the computer to teach kids about computers. The belief that being able to harness the power of the computer through BASIC would somehow become a force for social democratization and liberation is an old one, dating back to even before the first issues of Creative Computing — to the People’s Computer Company and, indeed, to the very researchers at Dartmouth College who created BASIC in the 1960s. As BASIC’s shortcomings became more and more evident, other instructional languages and courses based on them kept popping up in the early 1980s: PILOT, Logo, COMAL, etc. This craze for “computer literacy,” which all but insisted that every kid who didn’t learn to program was going to end up washing dishes or mowing lawns for a living, peaked along with the would-be home-computer revolution in about 1983. Advocating for programming as a universal life skill was like suggesting in 1908 that everyone needed to learn to take a car apart and put it back together to prepare for the new world that was about to arrive with the Model T — which, in an example of how some things never really change, was exactly what many people in 1908 were in fact suggesting. Joseph Weizenbaum of Eliza fame, always good for a sober corrective to the more ebullient dreams of his colleagues, offered a take on the real computerized future that was shockingly prescient by comparing the computer to the electric motor.

There are undoubtedly many more electric motors in the United States than there are people, and almost everybody owns a lot of electric motors without thinking about it. They are everywhere, in automobiles, food mixers, vacuum cleaners, even watches and pencil sharpeners. Yet, it doesn’t require any sort of electric-motor literacy to get on with the world, or, more importantly, to be able to use these gadgets.

Another important point about electric motors is that they’re invisible. If you question someone using a vacuum cleaner, of course they know that there is an electric motor inside. But nobody says, “Well, I think I’ll use an electric motor programmed to be a vacuum cleaner to vacuum the floor.”

The computer will also become largely invisible, as it already is to a large extent in the consumer market. I believe that the more pervasive the computer becomes, the more invisible it will become. We talk about it a lot now because it is new, but as we get used to the computer it will retreat into the background. How much hands-on computer experience will students need? The answer, of course, is not very much. The student and the practicing professional will operate special-purpose instruments that happen to have computers as components.

The pressure to make of every kid a programmer gradually faded as the 1980s wore on, leaving programming to those of us who found it genuinely fascinating. Today even the term “computer literacy,” always a strange linguistic choice anyway, feels more and more like a relic of history as this once-disruptive and scary new force has become as everyday as, well, the electric motor.

As for those other educational programs, they — at least some of them — got better by mid-decade. Programs like Number Munchers, Math Blaster, and Reader Rabbit added a bit more audiovisual sugar to their educational vegetables along with a more gamelike framework to their repetitive drills, and proved better able to hold children’s interest. For all the early rhetoric about computers and education, one could argue that the real golden age of the Apple II as an educational computer didn’t begin until about 1983 or 1984.

By that time a new category of educational software, partly a marketing construct but partly a genuinely new thing, was becoming more and more prominent: edutainment. Trip Hawkins, founder of Electronic Arts, has often claimed to have invented the portmanteau for EA’s 1984 title Seven Cities of Gold, but this is incorrect; a company called Milliken Publishing was already using the label for their programs for the Atari 8-bit line in late 1982, and it was already passing into common usage by the end of 1983. Edutainment dispensed with the old drill-and-practice model in favor of more open, playful forms of interaction that nevertheless promised, sometimes implicitly and sometimes explicitly, to teach. The skills they taught, meanwhile, were generally not the rigid, disembodied stuff of standardized tests but rather embedded organically into living virtual worlds. It’s all but impossible to name any particular game as the definitive first example of such a nebulous genre, but a good starting point might be Tom Snyder and Spinnaker Software.


Tom Snyder, 1984

Snyder had himself barely made it through high school. He came to blame his own failings as a student on his inability to relate to exactly the notions of arbitrary, contextless education that marked the early era of PC educational software: “Here, learn this set of facts. Write this paper. This is what you must know. This is what’s important.” When he became a fifth-grade teacher years later, he made it a point to ground his lessons always in the real world, to tell his students why it was useful to know the things he taught them and how it all related to the world around them. He often used self-designed games, first done with pencil and paper and cardboard and later done on computers, to let his students explore knowledge and its ramifications. In 1980 he founded a groundbreaking development company, Tom Snyder Productions, to commercialize some of those efforts. One of them became Snooper Troops, published as one of Spinnaker’s first titles in 1982; it had kids wandering around a small town trying to solve a mystery by compiling clues and using their powers of deduction. The next year’s In Search of the Most Amazing Thing, still a beloved memory of many of those who played it, combined clue-gathering with elements of economics and even diplomacy in a vast open world. Unlike so much other children’s software, Snyder’s games never talked down to their audience; children are after all just as capable of sensing when they’re being condescended to as anyone else. They differed most dramatically from the drill-and-practice software that preceded them in always making the educational elements an organic part of their worlds. One of Snyder’s favorite mantras applies to educational software as much as it does to any other creative endeavor and, indeed, to life: “Don’t be boring.” The many games of Tom Snyder Productions, most of which were not actually designed by Snyder himself, were often crude and slow, written as often as not in BASIC. But, at least at the conceptual level, they were seldom boring.

It’s of course true that a plain old game requiring a degree of thoughtfulness and a full-on work of edutainment can be very hard to disentangle from one another. Like so much else in life, the boundaries here could be nebulous at best, and often had as much to do with marketing, with the way a title was positioned by its owner, as with any intrinsic qualities of the title itself. When we go looking for those intrinsic qualities, we can come up with only a grab bag of traits of which any given edutainment title was likely to share a subset: being based on real history or being a simulation of some real aspect of science or technology; being relatively nonviolent; emphasizing thinking and logical problem-solving rather than fast reflexes. Like pornography, edutainment is something that many people seemed to just know when they saw it.

That said, there were plenty of titles that straddled the border between entertainment and edutainment. Spinnaker’s Telarium line of adventure games is a good example. Text-based games that were themselves based on books, published by a company that had heretofore specialized in education and edutainment… it wasn’t hard to grasp why parents might be expected to find them appealing, even if they were never explicitly marketed as anything other than games. Spinnaker’s other line of adventures, Windham Classics, blurred the lines even more by being based on acknowledged literary classics of the sort kids might be assigned to read in school rather than popular science fiction and fantasy, and by being directly pitched at adolescents of about ten to fourteen years of age. Tellingly, Tom Snyder Productions wrote one of the Windham Classics games; Dale Disharoon, previously a developer of Spinnaker educational software like Alphabet Zoo, wrote two more.

A certain amount of educational luster clung to the text adventure in general; it was implicit in much of the talk about interactive fiction as a new form of literature that was so prevalent during the brief bookware boom. One could even say it clung to the home computer itself, in the form of notions about “good screens” and “bad screens.” The family television was the bad screen, locus of those passive and mindless broadcasts that have set parents and educators fretting almost from the moment the medium was invented, and now the home of videogames, the popularity of which caused a reactionary near-hysteria in some circles; they would inure children to violence (if they thought Space Invaders was bad, imagine what they’d say about the games of today!) and almost literally rot their brains, making of them mindless slack-jawed zombies. The computer monitor, on the other hand, was the good screen, home of more thoughtful and creative forms of interaction and entertainment. What parent wouldn’t prefer to see her kid playing, say, Project: Space Station rather than Space Invaders? Home-computer makers and software publishers — at least the ones who weren’t making Space Invaders clones — caught on to this dynamic early and rode it hard.

As toy manufacturers had realized decades before, there are essentially two ways to market children’s entertainment. One way is to appeal to the children themselves, to make them want your product and nag Mom and Dad until they relent. The other is to appeal directly to Mom and Dad, to convince them that what you’re offering will be an improving experience for their child, perhaps with a few well-placed innuendoes if you can manage them about how said child will be left behind if she doesn’t have your product. With that in mind, it can be an interesting experiment to look at the box copy from software of the early home-computer era whilst asking yourself whether it’s written for the kids who were most likely to play it or the parents who were most likely to pay for it — or whether it hedges its bets by offering a little for both. Whatever else it was, emphasizing the educational qualities of your game was just good marketing; a 1984 survey found that 46 percent of computers in homes had been purchased by parents with the primary goal of improving their children’s education. It was the perfect market for the title that would come to stand alongside The Oregon Trail as one of the two classic examples of 1980s edutainment software.


Doug, Cathy, and Gary Carlston, 1983

The origins of the game that would become known as Where in the World is Carmen Sandiego? are confused, with lots of oft-contradictory memories and claims flying around. However, the most consistent story has it beginning with an idea by Gary Carlston of Brøderbund Software in 1983. He and his brother Doug had been fascinated by their family’s almanac as children: “We used to lie there and ask each other questions out of the almanac.” This evolved into impromptu quiz games in bed after the lights went out. Gary now proposed a game or, better yet, a series of games which would have players running down a series of clues about geography and history, answerable via a trusty almanac or other reference work to be included along with the game disk right there in the box.

Brøderbund didn’t actually develop much software in-house, preferring to publish the work of outside developers on a contract basis. While they did have a small staff of programmers and even artists, they were there mainly to assist outside developers by helping with difficult technical problems, porting code to other machines, and polishing in-game art rather than working up projects from scratch. But this idea just seemed to have too much potential to ignore or outsource. Gary was therefore soon installed in Brøderbund’s “rubber room” — so-called because it was the place where people went to bounce ideas off one another — along with Lauren Elliott, the company’s only salaried game designer; Gene Portwood, Elliott’s best friend, manager of Brøderbund’s programming team, and a pretty good artist; Ed Bernstein, head of Brøderbund’s art department; and programmer Dane Bigham, who would be expected to write not so much a game as a cross-platform database-driven engine that could power many ports and sequels beyond the Apple II original.

Gary’s first idea was to name the game Six Crowns of Henry VIII, and to make it a scavenger hunt for the eponymous crowns through Britain. However, the team soon turned that into something wider-scoped and more appealing to the emerging American edutainment market. You would be chasing an international criminal ring through cities located all over the world, trying to recover a series of stolen cultural artifacts, like a jade goddess from Singapore, an Inca mask from Peru, or a gargoyle from Notre Dame Cathedral (wonder how the thieves managed that one). It’s not entirely clear who came up with the idea for making the leader of the ring, whose capture would become the game’s ultimate goal, a woman named Carmen Sandiego, but Elliott believes the credit most likely belongs to Portwood. Regardless, everyone immediately liked the idea. “There were enough male bad guys,” said Elliott later, and “girls [could] be just as bad.” (Later, when the character became famous, Brøderbund would take some heat from Hispanic groups who claimed that the game associated a Hispanic surname with criminality. Gary replied with a tongue-in-cheek letter explaining that “Sandiego” was actually Carmen’s married name, that her maiden name was “Sondberg” and she was actually Swedish.) When development started in earnest, the Carmen team was pared down to a core trio of Elliott, who broadly speaking put together the game’s database of clues and cities; Portwood, who drew the graphics; and Bigham, who wrote the code. But, as Elliott later said, “A lot of what we did just happened. We didn’t think much about it.”

Where in the World is Carmen Sandiego?

To play that first Carmen Sandiego game today can be just a bit of an underwhelming experience; there’s just not that much really to it. Each of a series of crimes and the clues that lead you to the perpetrator are randomly generated from the game’s database of 10 possible suspects, 30 cities, and 1000 or so clues. Starting in the home city of the stolen treasure in question, you have about five days to track down each suspect. Assuming you’re on the right track, you’ll get clues in each city as to the suspect’s next destination among the several possibilities represented by the airline connections from that city: perhaps he “wanted to know the price of tweed” or “wanted to sail on the Severn.” (Both of these clues would point you to Britain, more specifically to London.) If you make the right deductions each step of the way you’ll apprehend the suspect in plenty of time. You’ll know you’ve made the wrong choice if you wind up at a dead-end city with no further clues on offer. Your only choice then is to backtrack, wasting precious time in the process. The tenth and final suspect to track down is always Carmen Sandiego herself, who for all of her subsequent fame is barely characterized at all in this first installment. Capture her, and you retire to the “Detective Hall of Fame.” There’s a little bit more to it, like the way that you must also compile details of the suspect’s appearance as you travel so you can eventually fill out an arrest warrant, but not a whole lot. Any modern player with Wikipedia open in an adjacent window can easily finish all ten cases and win the game in a matter of a few hours at most. By the time you do, the game’s sharply limited arsenal of clues, cities, and stolen treasures is already starting to feel repetitive.
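The structure described above is simple enough to sketch in a few lines. The sample cities, connections, and clues below are a tiny invented subset standing in for the real databases of 10 suspects, 30 cities, and roughly 1,000 clues.

    import random

    CONNECTIONS = {"Istanbul": ["London", "Cairo", "Tokyo"],
                   "London":   ["New York", "Paris", "Rome"],
                   "Cairo":    ["Paris", "Nairobi", "Rome"]}
    CLUES = {"London": ["wanted to know the price of tweed",
                        "wanted to sail on the Severn"],
             "Paris":  ["was humming La Marseillaise"],
             "Rome":   ["asked about flights to the Eternal City"]}

    def generate_case(start_city, hops=3):
        """Lay out one crime: a random trail of cities the suspect flees along."""
        trail, city = [start_city], start_city
        for _ in range(hops):
            city = random.choice(CONNECTIONS.get(city, ["London"]))
            trail.append(city)
        return trail

    def clue_in(current_city, next_city):
        """Each city on the trail offers clues pointing at the suspect's next stop."""
        return random.choice(CLUES.get(next_city, ["seems to have left no trace"]))

Dead ends fall out of the same structure: a city that isn’t on the suspect’s trail simply has no onward clues, which is the signal to backtrack.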

Which is not to say that Carmen Sandiego is entirely bereft of modern appeal. When my wife and I played it over the course of a few evenings recently, we learned a few interesting things we hadn’t known before and even discovered a new country that I at least had never realized existed: the microstate of San Marino, beloved by stamp and coin collectors and both the oldest and the smallest constitutional republic in the world. My wife is now determined that we should make a holiday there.

Still, properly appreciating Carmen Sandiego’s contemporary appeal requires of us a little more work. The logical place to start is with that huge World Almanac and Book of Facts that made the game’s box the heaviest on the shelves. It can be a bit hard even for those of us old enough to have grown up before the World Wide Web to recover the mindset of an era before we had the world in our living rooms — or, better said in this age of mobile computing, in our pockets. Back in those days when you had to go to a library to do research, when your choices of recreation of an evening were between whatever shows the dozen or so television stations were showing and whatever books you had in the house, an almanac was magic to any kid with a healthy curiosity about the world and a little imagination, what with its thousand or more pages filled with exotic lands along with records of deeds, buildings, cities, people, animals, and geography whose very lack of context only made them more alluring. The whole world — and then some; there were star charts and the like for budding astronomers — seemed to have been stuffed within its covers.

In that spirit, one could almost call the Carmen Sandiego game disk ancillary to the almanac rather than the other way around. Who knew what delights you might stumble over while you tried to figure out, say, in which country the python made its home? The World Almanac continues to come out every year, and seems to have done surprisingly well, all things considered, surviving the forces that have killed off its typical companions on the reference shelf, like the encyclopedia. But of course it’s lost much of its old magic in these days of information glut. While we can still recapture a little of the old feeling by playing Carmen Sandiego with a web browser open, our search engines have just gotten too good; it’s harder to stumble across the same sorts of crazy facts and alluring diversions.

Carmen Sandiego captured so many kids because it tempted them to discover knowledge for themselves rather than attempting to drill it into them, and all whilst never talking down to them. Gary Carlston said of Brøderbund’s edutainment philosophy, “If we would’ve enjoyed it at age 12, and if we still enjoy it now, then it’s what we want. Whether it’s pedagogically correct is not relevant.” Carmen Sandiego did indeed attract criticism from earnest educational theorists armed with studies showing how it failed to live up to the latest research on learning; this low-level drumbeat of criticism continues to this day. Some of it may very well be correct and relevant; I’m hardly qualified to judge. What I do see, though, is that Carmen Sandiego offers a remarkably progressive view of knowledge and education for its time. At a time when schools were still teaching many subjects through rote memorization of facts and dates, when math courses were largely “take this set of numbers and manipulate them to become this other set of numbers” without ever explaining why, Carmen Sandiego grasped that success in the coming world of cheap and ubiquitous data would require not a head stuffed with facts but the ability to extract relevant information from the flood of information that surrounds us, to synthesize it into conclusions, and to apply it to a problem at hand. While drill-and-practice software taught kids to perform specific tasks, Carmen Sandiego, like all the best edutainment software, taught them how to think. Just as importantly, it taught them how much fun doing so could be.

Where in the World is Carmen Sandiego

Brøderbund may not have been all that concerned about making Carmen Sandiego “pedagogically correct,” but they were hardly blind to the game’s educational value, nor to the marketing potential therein. The back cover alone of Carmen Sandiego is a classic example of edutainment marketing, emphasizing the adventure aspects for the kids while also giving parents a picture of children beaming over an almanac and telling how they will be “introduced to world geography” — and all whilst carefully avoiding the E-word; telling any kid that something is “educational” was and is all but guaranteed to turn her off it completely.

For all that, though, the game proved to be a slow burner rather than an out-of-the-gates hit upon its release in late 1985. It was hardly a flop; sales were strong enough that Brøderbund released the first of many sequels, Where in the USA is Carmen Sandiego?, the following year. Yet year by year the game just got more popular, especially when Brøderbund started to reach out more seriously to educators, releasing special editions for schools and sending lots of free swag to those who agreed to host “Carmen Days,” for which students and teachers dressed up as Carmen or her henchmen or the detectives on their trail, and could call in to the “Acme Detective Agency” at Brøderbund itself to talk with Portwood or Elliott playing the role of “the Chief.” The combination of official school approval, the game’s natural appeal to both parents and children, and lots of savvy marketing proved to be a potent symbiosis indeed. Total sales of Carmen Sandiego games passed 1 million in 1989, 2 million in 1991, by which time the series included not only Where in the World is Carmen Sandiego? and Where in the USA is Carmen Sandiego? but also Where in Europe is Carmen Sandiego?, Where in Time is Carmen Sandiego?, Where in America’s Past is Carmen Sandiego?, and the strangely specific Where in North Dakota is Carmen Sandiego?, prototype for a proposed series of state-level games that never got any further; Where in Space is Carmen Sandiego? would soon go in the opposite direction, rounding out the original series of reference-work-based titles on a cosmic scale. In 1991 Carmen also became a full-fledged media star, the first to be spawned by a computer game, when Where in the World is Carmen Sandiego? debuted as a children’s game show on PBS.

A Print Shop banner: an artifact as redolent of its era as Hula Hoops or bellbottoms are of theirs.

Through the early 1980s, Brøderbund had been a successful software publisher, but not outrageously so in comparison to their peers. At mid-decade, though, the company’s fortunes suddenly began to soar just as many of those peers were, shall we say, trending in the opposite direction. Brøderbund’s success was largely down to two breakout products, each of which succeeded in identifying a real, compelling use for home computers at a time when that was proving far more difficult than the boosters and venture capitalists had predicted. One was of course the Carmen Sandiego line. The other was a little something called The Print Shop, which let users design and print out signs and banners using a variety of fonts and clip art. How such a simple, straightforward application could become so beloved may seem hard to understand today, but beloved The Print Shop most definitely became. For the rest of the decade and beyond, its distinctive banners, enabled by the fan-fold paper used by the dot-matrix printers of the day, could be seen everywhere that people without a budget for professional signage gathered: at church socials, at amateur sporting events, inside school hallways and classrooms. Like the first desktop-publishing programs that were appearing on the Macintosh contemporaneously, The Print Shop was one more way in which computers were beginning to democratize creative production, a process, as disruptive and fraught as it is inspiring, that’s still ongoing today.

Having struck two such chords with the public in the form of The Print Shop and Carmen Sandiego, Brøderbund was far ahead of virtually all of their competitors, who failed to find even one. Brøderbund lived something of a charmed existence for years, defying most of the hard-won conventional wisdom about consumer software being a niche product at best and the real money being in business software. If the Carlstons hadn’t been so gosh-darn nice, one might be tempted to begrudge them their success. (Once, when the Carlstons briefly considered a merger with Electronic Arts, whose internal culture was much more ruthless and competitive, a writer said it would be a case of the Walton family moving in with the Manson family.) One could almost say that for Brøderbund alone the promises of the home-computer revolution really did materialize, with consumers rushing to buy from them not just games but practical software as well. Tellingly — and assuming we agree to label Carmen Sandiego as an educational product rather than a game — Brøderbund’s top-selling title was never a game during any given year between 1985 and the arrival of the company’s juggernaut of an adventure game Myst in 1993, despite their publication of hits like the Jordan Mechner games Karateka and Prince of Persia. Carmen Sandiego averaged 25 to 30 percent of Brøderbund’s sales during those years, behind only The Print Shop. The two lines together accounted for well over half of yearly revenues that were pushing past $50 million by decade’s end — still puny by the standards of business software but very impressive indeed by that of consumer software.

For the larger software market, Carmen Sandiego — and, for that matter, The Print Shop — were signs that, if the home computer hadn’t quite taken off as expected, it also wasn’t going to disappear or be relegated strictly to the role of niche game machine: there were, or at least with a bit more technological ripening could be, good reasons to own one. The same year that Brøderbund pushed into edutainment with Carmen Sandiego, MECC, who had reconstituted themselves as the for-profit (albeit still state-owned) publisher Minnesota Educational Computing Corporation in 1984, released the definitive, graphically enhanced version of that old chestnut The Oregon Trail, a title which shared with Carmen Sandiego an easygoing, progressive, experiential approach to learning. Together Oregon and Carmen became the twin icons of 1980s edutainment, still today an inescapable shared memory for virtually everyone who darkened a grade or middle school door in the United States between about 1985 and 1995.

The consequences of Carmen and Oregon and the many other programs they pulled along in their wake were particularly pronounced for the one remaining viable member of the old trinity of 1977: the Apple II. Lots of people both outside and inside Apple had been expecting the II market to finally collapse for several years already, but so far that had refused to happen. Apple, whose official corporate attitude toward the II had for some time been vacillating between benevolent condescension and enlightened disinterest, did grant II loyalists some huge final favors now. One was the late 1986 release of the Apple IIGS, a radically updated version produced on a comparative shoestring by the company’s dwindling II engineering team with assistance from Steve Wozniak himself. The IIGS used a 16-bit Western Design Center 65C816 CPU that was capable of emulating the old 8-bit 6502 when necessary but was several times as powerful. Just as significantly, the older IIs’ antiquated graphics and sound were finally given a major overhaul that now made them amongst the best in the industry, just a tier or two below those of the current gold standard, Commodore’s new 68000-based Amiga. The IIGS turned out to be a significant if fairly short-lived hit, outselling the Macintosh and all other II models by a considerable margin in its first year.

But arguably much more important for the Apple II’s long-term future was a series of special educational offers Apple made during 1986 and 1987. In January of the former year, they announced a rebate program wherein schools could send them old computers made by Apple or any of their competitors in return for substantial rebates on new Apple IIs. In April of that year, they announced major rebates for educators wishing to purchase Apple IIs for home use. Finally, in March of 1987, Apple created two somethings called the Apple Unified School System and the Apple Education Purchase Program, which together represented a major, institutionalized outreach and support effort designed to get even more Apple IIs into schools (and, not incidentally, more Macs into universities). The Apple II had been the school computer of choice virtually from the moment that schools started buying PCs at all, but these steps, along with software like Carmen Sandiego and The Oregon Trail, cemented and further extended its dominance, to such an extent that many schools and families simply refused to let go. The bread-and-butter Apple II model, the IIe, remained in production until November of 1993, by which time this sturdy old machine, thoroughly obsolete already by 1985, was selling almost exclusively to educators, and Apple regarded its continued presence in their product catalogs like that of the faintly embarrassing old uncle who just keeps showing up for every Thanksgiving dinner.

Even after the inevitable if long-delayed passing of the Apple II as a fixture in schools, Carmen and Oregon lived on. Both received the requisite CD-ROM upgrades, although it’s perhaps debatable in both instances how much the new multimedia flash really added to the experience. The television Carmen Sandiego game shows also continued to air in various incarnations through the end of the decade. Carmen Choose Your Own Adventure-style gamebooks, conventional young-adult novels, comic books, and a board game were also soon on offer, along with yet more computerized creations like Carmen Sandiego Word Detective. Only with the millennium did Carmen — always a bit milquetoast as a character and hardly the real source of the original games’ appeal — along with The Oregon Trail see their stars finally start to fade. Both retain a certain commercial viability today, but more as kitschy artifacts and nostalgia magnets than serious endeavors in either learning or entertainment. Educational software has finally moved on.

Perhaps not enough, though: it remains about 10 percent inspired, 10 percent acceptable in a workmanlike way, and 80 percent boredom stemming sometimes from well-meaning cluelessness and sometimes from a cynical desire to exploit parents, teachers, and children. Those looking to enter this notoriously underachieving field today could do worse than to hearken back to the simple charms of Carmen Sandiego, created as it was without guile and without reams of pedagogical research to back it up, out of the simple conviction that geography could actually be fun. All learning can be fun. You just have to do it right.

(See Engineering Play by Mizuko Ito for a fairly thorough survey of educational and edutainment software from an academic perspective. Gamers at Work by Morgan Ramsay has an interview with Doug and Gary Carlston which dwells on Carmen Sandiego at some length. Matt Waddell wrote a superb history of Carmen Sandiego for a class at Stanford University in 2001. A piece on Brøderbund on the eve of the first Carmen Sandiego game’s release was published in the September 1985 issue of MicroTimes. A summary of the state of Brøderbund circa mid-1991 appeared in the July 9, 1991, New York Times. Joseph Weizenbaum’s comments appeared in the July 1984 issue of Byte. The first use of the term “edutainment” that I could locate appeared in a Milliken Publishing advertisement in the January 1983 issue of Creative Computing. Articles involving Spinnaker and Tom Snyder appeared in the June 1984 Ahoy! and the October 1984 and December 1985 Compute!’s Gazette. And if you got through all that and would like to experience the original Apple II Carmen Sandiego for yourself, feel free to download the disk images and manual — but no almanac, I’m afraid — from right here.)

 
 
