The Dawn of Multimedia

Cover illustration from Byte, June 1982

Unless you’re an extremely patient and/or nostalgic sort, most of the games I’ve been writing about on this blog for over two years now are a hard sell as something to just pick up and play for fun. There have been occasional exceptions: M.U.L.E., probably my favorite of any game I’ve written about so far, remains as magical and accessible as it was the day it was made; some or most of the Infocom titles remain fresh and entertaining as both fictions and games. Still, there’s an aspirational quality to even some of the most remarkable examples of gaming in this era. Colorful boxes and grandiose claims of epic tales of adventure often far exceeded the minimalist content of the disks themselves. In another era we might levy accusations of false advertising, but that doesn’t feel quite like what’s going on here. Rather, players and developers entered into a sort of partnership, a shared recognition that there were sharp limits to what developers could do with these simple computers, but that players could fill in many of the missing pieces with determined imaginings of what they could someday actually be getting on those disks.

Which didn’t mean that developers weren’t positively salivating after technological advances that could turn more of their aspirations into realities. And progress did come. Between the Trinity of 1977 and 1983, the year we’ve reached as I write this, memory sizes on the relatively inexpensive 8-bit machines typically found in homes increased from as little as 4 K to 48 K, with 64 K set to become the accepted minimum by 1984. The Atari 400 and 800 in 1979 and the Commodore 64 in 1982 each brought major advances in audiovisual capabilities. Faster, more convenient disks replaced cassettes as the accepted standard storage medium, at least in North America. But other parts of the technological equation remained frozen, perhaps surprisingly so given the modern accepted wisdom about the pace of advancement in computing. Home machines in 1983 were still mostly based around one of the two CPUs found in the Trinity of 1977, the Zilog Z80 and the MOS 6502, and these chips were still clocked at roughly the same speeds as in 1977. Thus, Moore’s Law notwithstanding, the processing potential that programmers had to work with remained for the moment frozen in place.

To find movement in this most fundamental part of a microcomputer we have to look to the more expensive machines. The IBM PC heralded the beginning of 16-bit microcomputing in 1981. The Apple Lisa of 1983 became the first mass-produced PC to use the state-of-the-art Motorola 68000, a chip which would have a major role to play in computing for the rest of the decade and beyond. Both the Lisa and an upgraded model of the IBM PC introduced in 1983, the PC/XT, also sported hard drives, which let them store several megabytes of data in constantly accessible form and retrieve it much more quickly and reliably than could be done from floppy disks. Still, these machines carried huge disadvantages to offset their technical advancements. The IBM PC and especially the PC/XT were, as noted, expensive, and had fairly atrocious graphics and sound even by the standards of 1983. The Lisa was really, really expensive, lacked color and sound, and was consciously designed to be as inaccessible to the hackers and bedroom coders who built the games industry as the Apple II was wide open. The advancements of the IBM PC and the Lisa would eventually be packaged into forms more useful to gamers and game developers, but for now, for most gamers, it was 8 bits, floppy disks, and (at best) 64 K.

Developers and engineers — and, I should note, by no means just those in the games industry and by no means just those working with the 8-bit machines — were always on the lookout for a secret weapon that might let them leapfrog some steps in what must have sometimes seemed a plodding pace of technological change, something that might let them get to that aspirational future faster. They found one that looked like it might just have potential in a surprising place: in the world of ordinary consumer electronics. Or perhaps by 1983 it was not so surprising, for by then they had already been waiting for, speculating about, and occasionally tinkering with the technology in question for quite some years.

At the end of the 1960s, with the home-videocassette boom still years away, the American media conglomerate MCA and the Dutch electronics giant Philips each began working independently on a technology to encode video onto album-like discs using optical storage. The video would be recorded as a series of tiny pits in the plastic surface of the disc, which could be read by the exotic technology of a laser beam scanning it as the disc was rotated. The two companies decided to combine their efforts after learning of one another’s existence a few years later, and by the mid-1970s they were holding regular joint demonstrations of the new technology, to which they gave the perfect name for the era: DiscoVision.

A DiscoVision prototype in action

Yet laser discs, as they came to be more commonly called, were painfully slow to reach the market. A few pilot programs and prototypes aside, the first consumer-grade players didn’t reach stores in numbers until late 1980.

The Pioneer VP-1000, most popular of the early consumer-grade laser-disc players

By that time VCRs were selling in huge numbers. Laser discs offered video and audio far superior to what VCRs could deliver, but, at least from the standpoint of most consumers, had enough disadvantages to more than outweigh that. For starters, they were much more expensive. And they could only hold about 30 minutes of video on a side; thus the viewer had to get up and flip or change the disc, album-style, three times over the course of a typical movie. This was a hard sell indeed to a couch-loving nation who were falling in love with their new remote controls as quickly as their VCRs. Yet it was likely the very thing that the movie and television industry found most pleasing about the laser disc that really turned away consumers: the discs were read only, meaning it was impossible to use them to record from the television, or to copy and swap movies and other programs with friends. Some (admittedly anecdotal) reports claim that up to half of the laser-disc players sold in the early years of the format were returned when their purchasers realized they couldn’t use them to record.

Thus the laser-disc format settled into a long half-life in which it never quite performed up to expectations but never flopped so badly as to disappear entirely. It became the domain of the serious cineastes and home-theater buffs who were willing to put up with its disadvantages in return for the best video and audio quality you could get in the home prior to the arrival of the DVD. Criterion appeared on the scene in 1984 to serve this market with a series of elaborate special editions of classic films loaded with the sorts of extras that other publishers wouldn’t begin to offer until the DVD era: cast and crew interviews, “making of” documentaries, alternate cuts, unused footage, and of course the ubiquitous commentary track (like DVDs, laser discs had the ability to swap and mix audio streams). Even after DVDs began to replace VCRs en masse and thus to change home video forever circa 2000, a substratum of laser-disc loyalists soldiered on, some unwilling to give up on libraries they’d spent many years acquiring, others convinced, like so many vinyl-album boosters, that laser discs simply looked better than the “colder” digital images from DVDs or Blu-ray discs. (Although all of these mediums store data using the same basic optical techniques, in a laser disc the data is analog, and is processed using analog rather than digital circuitry.) Pioneer, who despite having nothing to do with the format’s development became its most consistent champion — they were responsible for more than half of all players sold — surprised those who already thought the format long dead in January of 2009 when they announced that they were discontinuing the last player still available for purchase.

The technology developed for the laser disc first impacted the lives of those of us who didn’t subscribe to Sound and Vision in a different, more tangential way. Even as DiscoVision shambled slowly toward completion during the late 1970s, a parallel product was initiated at Philips to adapt optical-storage technology to audio only. Once again Philips soon discovered another company working on the same thing, this time Sony of Japan, and once again the two elected to join forces. Debuting in early 1983, the new compact disc was first a hit mainly with the same sorts of technophiles and culture vultures who were likely to purchase laser-disc players. Unlike the laser disc, however, the CD’s trajectory didn’t stop there. By 1988, 400 million CDs were being pressed each year, by which time the format was on the verge of its real explosion in popularity; nine years later that number was 5 billion, close to one CD for every person on the planet.

But now let’s back up and relate this new optical audiovisual technology to the computer technologies we’re more accustomed to spending our time with around these parts. Many engineers and programmers have a specific sort of epiphany after working with electronics in general or computers in particular for a certain amount of time. Data, they realize, is ultimately just data, whether it represents an audio recording, video, text, or computer code. To a computer in particular, it’s all just a stream of manipulable numbers. The corollary to this fact is that a medium developed for the storage of one sort of data can be re-purposed to store something else. Microcomputers already had quite a tradition of doing just that even in 1983. The first common storage format for these machines was ordinary cassette tapes, playing on ordinary cassette players wired up to Altairs, TRS-80s, or Apple IIs. The data stored on these tapes, which when played back for human ears just sounded like a stream of discordant noise, could be interpreted by the computer as the stream of 0s and 1s which it encoded. It wasn’t the most efficient of storage methods, but it worked — and it worked with a piece of cheap equipment found lying around in virtually every household, a critical advantage in those do-it-yourself days of hobbyist hackers.
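To make the idea concrete, here is a minimal Python sketch in the general spirit of the cassette encodings of the era: each bit becomes a short burst of audio at one of two tones, and the decoder recovers the bits by measuring which tone it hears. The sample rate, frequencies, and bit rate below are illustrative assumptions rather than any particular machine’s actual format, and real schemes added framing, checksums, and much higher speeds.

```python
import math

SAMPLE_RATE = 8000                   # audio samples per second (arbitrary for illustration)
FREQ_ZERO, FREQ_ONE = 1200, 2400     # one tone per bit value -- assumed, not machine-accurate
BIT_SECONDS = 1 / 100                # duration of each bit's tone; far slower than real systems

def encode_bits(bits):
    """Turn a sequence of 0s and 1s into raw audio samples in the range -1..1."""
    samples = []
    for bit in bits:
        freq = FREQ_ONE if bit else FREQ_ZERO
        count = int(SAMPLE_RATE * BIT_SECONDS)
        samples.extend(math.sin(2 * math.pi * freq * i / SAMPLE_RATE) for i in range(count))
    return samples

def decode_bits(samples):
    """Recover the bits by counting zero crossings in each bit-sized chunk of audio."""
    count = int(SAMPLE_RATE * BIT_SECONDS)
    threshold = (FREQ_ZERO + FREQ_ONE) / 2 * BIT_SECONDS   # halfway between the two tones
    bits = []
    for start in range(0, len(samples) - count + 1, count):
        chunk = samples[start:start + count]
        crossings = sum(1 for a, b in zip(chunk, chunk[1:]) if a < 0 <= b)
        bits.append(1 if crossings > threshold else 0)
    return bits

if __name__ == "__main__":
    message = [0, 1, 1, 0, 1, 0, 0, 1]   # one arbitrary byte
    assert decode_bits(encode_bits(message)) == message
    print("round-tripped:", message)
```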

If a cassette could be used to store a program, so could a laser disc. Doing so had one big disadvantage compared to other storage methods, the very same that kept so many consumers away from the format: unless you could afford the complicated, specialized equipment needed to write to them yourself, discs had to be stamped out at a special factory, complete with their contents, which afterward could only be read, not altered. But the upside… oh, what an upside! A single laser-disc side may have been good for only about 30 minutes of analog video, but could store about 1 to 1.5 GB of digital computer code or data. The possibility of so much storage required a major adjustment of the scale of one’s thinking; even articles in hardcore magazines like Byte that published the figure had to include a footnote explaining what a gigabyte actually was.

Various companies initiated programs in the wake of the laser disc’s debut to adapt the technology to computers, resulting in a plethora of incompatible media and players. Edward Rothchild wrote in Byte in March of 1983 that “discs are being made now in 12- and 14-inch diameters, with 8-, 5 1/4-, 3-, and possibly 2-inch discs likely in the near future.”

The Toshiba DF-2000, a typically elaborate optical-storage-based institutional archiving system of the 1980s

Others moved beyond discs entirely to try cards, slides, even optical versions of old-fashioned reel-to-reel or cassette tapes. Some of the ideas that swirled around were compelling enough that you have to wonder why they never took off. A company called Drexler came out with the Drexon Laser Card, a card the size of a driver’s license or credit card with a strip at the back that was optical rather than magnetic and could store some 2 MB of data. They anticipated PCs of the future being equipped with a little slot for reading the cards. Among other possibilities, a complete operating system could be installed on a card, taking the place of ROM chips or an operating system loaded from disk. Updates would become almost trivial; the cards were cheap and easy to manufacture, and the end user would need only swap the new card for the old to “install” the new operating system. Others anticipated Laser Cards becoming personal identification cards, with everything anyone could need to know about you, from citizenship status to credit rating, right there on the optical strip, a boon indeed in the much less interconnected world of the early 1980s. (From the department of things that never change: the privacy concerns raised by such a scheme were generally glossed over or ignored.)

The Drexon Laser Card

Some of these new technologies — the Laser Card alas not among them — did end up living on for quite some years. Optical storage was ideal for large, static databases like public records, especially in institutions that could afford the technology needed to create the discs as well as read them. IBM and others who served the institutional-computing market therefore came out with various products for this purpose, some of which persist to this day. In the world of PCs, however, progress was slow. It could be a bit hard to say what all that storage might actually be good for on a machine like, say, an Apple II. Today we fill our CDs and DVDs mostly with graphics and sound resources, but if you’ve seen the Apple II screenshots that litter this blog you know that home computers just didn’t have the video (or audio) hardware to make much direct use of such assets. Nor could they manipulate more than the most minuscule chunk of the laser disc’s cavernous capacity; connecting an Apple II to optical mass storage would be like trying to fill the family cat’s water bowl with a high-pressure fire hose. Optical media as a data-storage medium therefore came to PCs only slowly. When it did, it piggybacked not on the laser disc but on the newer, more successful format of the audio CD. The so-called “High Sierra” standard for the storage of data on CDs — named after the Lake Tahoe casino where it was hashed out by a consortium of major industry players — was devised in 1985, accompanied by the first trickle of pioneering disc-housed encyclopedias and the like, and Microsoft hosted the first big conference devoted to CD-ROM in March of 1986. It took several more years to really catch on with the mass market, but by the early years of the 1990s CD-ROM was one of the key technologies at the heart of the “multimedia PC” boom. By this time processor speeds, memory sizes, and video and sound hardware had caught up and were able to make practical use of all that storage at last.

Still, even in the very early 1980s laser discs were not useless to even the most modest of 8-bit PCs. They could in fact be used with some effectiveness in a way that hewed much closer to their original intended purpose. Considered strictly as a video format, the most important property of the laser disc to understand beyond the upgrade in quality it represented over videotape is that it was a random-access medium. Videocassettes and all other, older mediums for film and video were by contrast linear formats. One could only unspool their contents sequentially; finding a given bit of content could only be accomplished via lots of tedious rewinding and fast forwarding. But with a laser disc one could jump to any scene, any frame, immediately; freeze a frame on the screen; advance forward or backward frame by frame or at any speed desired. The soundtrack could be similarly manipulated. This raised the possibility of a new generation of interactive video, which could be controlled by a computer as cheap and common as an Apple II or TRS-80. After all, all the computer had to do was issue commands to the player. All of the work of displaying the video on the screen, so far beyond the capabilities of any extant computer’s graphics hardware, was neatly sidestepped. For certain applications at least it really did feel like leapfrogging about ten years of slow technological progress. By manipulating a laser-disc player, a computer could deliver graphics and sound it wouldn’t be able to produce natively until the next decade.

The people who worked on the DiscoVision project were not blind to the potential here. Well before the laser disc became widely available to consumers in 1980, they were already making available pre-release and industrial-grade models to various technology companies and research institutions. These were used for the occasional showcase, such as the exhibition at Chicago’s Museum of Science and Industry in 1979 which let visitors pull up an image of the front page of any issue of the Chicago Tribune ever published. Throughout the 1980s various companies continued to offer pricey professional-level laser-disc setups that came equipped with a CPU and a modest amount of RAM. These could take instructions from their controlling computers and talk back to them as well: reporting what frame was currently playing, notifying the host when a particular snippet was finished, etc. The host computer could even load a simple program into the player’s memory and let it run unattended. Consumer-grade devices were more limited, but virtually all did come equipped with one key feature: a remote-control sensor, which could be re-purposed to let a computer control the player. Such control was more limited than was possible with the more expensive players — no talking back on the part of the player was possible. Still, it was intriguing stuff. Magazines like Byte and Creative Computing started publishing schematics and software to let home users take control of their shiny new laser-disc player just months after the devices started becoming available to purchase in the first place. But, given all of the complications and the need to shoot video as well as write code and hack hardware to really create something, much of the most interesting work with interactive video was done by larger institutions. Consider, for example, the work done by the American Heart Association’s Advanced Technology Development Division.

The AHA was eager to find a way to increase the quantity and quality of CPR training in the United States. They had a very good reason for doing so: in 1980 it was estimated that an American stricken with a sudden heart attack faced odds of 18 to 1 against there being someone on hand who could use CPR to save her life. Yet CPR training from a human instructor is logistically complicated and expensive. Many small-town governments and/or hospitals simply couldn’t manage to provide it. David Hon of the AHA believed that interactive video could provide the answer. The system his research group developed consisted of an Apple II interfaced to a laser-disc player as well as a mannequin equipped with a variety of sensors. An onscreen instructor taught the techniques of CPR step by step. After each step the system quizzed the student on what she had just learned; she could select her answers by touching the screen with a light pen. It then let her try out her technique on the mannequin until she had it down. The climax of the program came with a simulation of an actual cardiac emergency, complete with video and audio, explicitly designed to be exciting and dramatic. Hon:

We had learned something from games like Space Invaders: if you design a computer-based system in such a way that people know the difference between winning and losing, virtually anyone will jump in and try to win. Saving a life is a big victory and a big incentive. We were sure that if we could build a system that was easy to use and engaging, trainees would use it and learn from it willingly.

The American Heart Association’s CPR training system

The trainee’s “coach” provides instruction and encouragement on the left monitor; the right shows the subject’s vital signs as the simulation runs

At a cost of about $15,000 per portable system, the scheme turned out to be a big success, and probably saved more than a few lives.

One limitation of many early implementations of interactive video like this was the fact that the computer controller and the laser disc itself each had its own video display, with no way to mix the two on one screen, as you can clearly see in the photos above. In time, however, engineers developed the genlock, a piece of hardware which synchronizes a computer’s video output with an external video signal so that the two can be mixed, letting the computer overlay its own text and graphics onto the picture coming off the disc. How might this be useful? Well, consider the very simple case of an educational game which quizzes children on geography. The computer could play some looping video associated with a given country from the laser disc, while asking the player what country is being shown in text generated by the computer. Once the player answers, more text could be generated telling whether she got it right or not. Yet many saw even this scenario as representing the merest shadow of interactive video’s real potential. A group at the University of Nebraska developed a flight-training system which drilled prospective pilots by combining video and audio from actual flights with textual quizzes asking, “What’s happening here?” or “What should be done next?” or “What do these instruments seem to indicate?”

Flight Training
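To picture how the geography-quiz scenario described above might hang together in software, here is a hypothetical sketch of its control logic. The LaserDiscPlayer class, its play_segment method, and the frame numbers are all invented for illustration; they stand in for whatever commands a real player of the era actually accepted over its serial port or remote-control input.

```python
class LaserDiscPlayer:
    """Hypothetical stand-in for a computer-controlled laser-disc player."""

    def play_segment(self, start_frame, end_frame, loop=True):
        # A real setup would send these as commands over a serial link, or through
        # a wire spliced into the player's remote-control sensor.
        mode = "looping" if loop else "playing"
        print(f"[player] {mode} frames {start_frame}-{end_frame}")

# Each quiz entry pairs a country with the (made-up) disc frames that show it.
QUIZ = [
    {"answer": "japan",  "frames": (1200, 1650)},
    {"answer": "brazil", "frames": (1651, 2100)},
]

def run_quiz(player):
    score = 0
    for item in QUIZ:
        # The disc supplies the pictures; the computer overlays only its own text.
        player.play_segment(*item["frames"], loop=True)
        guess = input("What country is being shown? ").strip().lower()
        if guess == item["answer"]:
            print("Right!")
            score += 1
        else:
            print(f"Not quite -- that was {item['answer'].title()}.")
    print(f"You got {score} of {len(QUIZ)}.")

if __name__ == "__main__":
    run_quiz(LaserDiscPlayer())
```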

Another University of Nebraska group developed a series of educational games meant to teach problem-solving to hearing-impaired children. They apparently played much like the full-motion-video adventure games of a decade later, combining video footage of real actors with puzzles and conversation menus to let the child find her own way through the story and solve the case.

Think It Through

The Minnesota Educational Computing Consortium (the same organization that distributed The Oregon Trail) developed a high-school economics course:

Three types of media are used in each session. A booklet introduces the lesson and directs the student to use the other pieces of equipment. At the same time, it provides space for note taking and record keeping. A microcomputer [an Apple II] contributes tutorial, drill, and practice dimensions to the lesson. And a videodisc player presents information, shows examples, and develops concepts which involve graphics or motion.

Apple built commands for controlling laser-disc players into their SuperPILOT programming language, a rival to BASIC designed specifically for use in schools.

There was a widespread sense among these experimenters that they were pioneering a new paradigm of education and of computing, even if they themselves couldn’t quite put their fingers on what it was, what it meant, or what it should be called. In March of 1976, an amazingly early date when laser discs existed merely as crude prototypes, Alfred M. Bork envisioned what laser discs could someday mean to educational computing in an article that reads like a dispatch from the future:

I envision that each disc will contain a complete multimedia teaching package. Thus, a particular disc might be an elaborate teaching sequence for physics, having on the disc the computer code for that sequence (including possible microcode to make the stand-alone system emulate the particular machine that material was originally developed for), slides, audio messages, and video sequences of arbitrary length, all of these in many different segments. Thus, a teaching dialog stored on a videodisc would have full capability of handling very complex computer logic, and making sizable calculations, but it also could, at an appropriate point, show video sequences of arbitrary length or slides, or present audio messages. Another videodisc might have on it a complete language, such as APL, including a full multimedia course for learning APL interactively. Another might have relatively little logic, but very large numbers of slides in connection with an art-history or anatomy course. For the first time control of all the important audiovisual media would be with the student. The inflexibility of current film and video systems could be overcome too, because some videodiscs might have on them simply nothing but a series of film clips, with the logic for students to pick which ones they wanted to see at a particular time.

Bork uses a critical word in his first sentence above, possibly for the first time in relation to computing: “multimedia.” Certainly it’s a word that wouldn’t become commonplace until many years after Bork wrote this passage. Tony Feldman provided perhaps the most workable and elegant definition in 1994: “[Multimedia is] a method of designing and integrating computer technologies on a single platform that enables the end user to input, create, manipulate, and output text, graphics, audio, and video utilizing a single user interface.” This new paradigm of multimedia computing is key to almost all of the transformations that computers have made in people’s everyday lives in the thirty years that have passed since the pioneering experiments I just described. The ability to play back, catalog, combine, and transform various types of media, many or most of them sourced from the external world rather than being generated within the computer itself, is the bedrock of the World Wide Web, of your iPod and iPhone and iPad (or equivalent). Computers today can manipulate all of that media internally, with no need for the kludgy plumbing together of disparate devices that marked these early experiments, but the transformative nature of the concept itself remains. With these experiments with laser-disc-enabled interactive video we see the beginning of the end of the old analog world of consumer electronics, to be superseded by a digital world of smart, programmable media devices. That, much more than gigabytes of storage, is the real legacy of DiscoVision.

But of course these early experiments were just that, institutional initiatives seen by very few. There simply weren’t enough people with laser-disc players wired to their PCs for a real commercial market to develop. The process of getting a multimedia-computing setup working in the home was just too expensive and convoluted. It would be six or seven more years before “multimedia” would become the buzzword of the age — only to be quickly replaced in the public’s imagination by the World Wide Web, that further advance that multimedia enabled.

In the meantime, most people of the early 1980s had their first experience with this new paradigm of computing outside the home, in the form of — what else? — a game. We’ll talk about it next time.

(The most important sources for this article were: Byte magazines from June 1982, March 1983, and October 1984; Creative Computing from March 1976 and January 1982; Multimedia, a book by Tony Feldman; Interactive Video, a volume from 1989 in The Educational Technology Anthology Series; and various laser-disc-enthusiast sites on the Internet. I also lifted some of these ideas from my own book about the Amiga, The Future Was Here. The lovely picture that begins this article was on the cover of the June 1982 Byte. All of the other images were also taken from the various magazines listed above.)


Suspended

Mike Berlyn

As earlier posts have hopefully made clear, conventions played a pivotal role for many years in the PC industry. In the early years that meant places like the West Coast Computer Faire and the AppleFests, where hackers and hobbyists would gather to talk about their machines and trade tips along with manufacturers, publishers, and developers; indeed, in this early period the groups could be all but indistinguishable. But 1982 is generally remembered by old-timers as the last year when the likes of Applefest could attract the movers and shakers. Afterward, as the moneyed interests entered en masse and the community of computer users (or even Apple users) grew too large to retain that clubby feeling, such gatherings faded in importance in comparison with the glitzier Consumer Electronics Show and its rivals, where you needed a press badge just to get in. Whatever form the shows took, they were as important for what took place behind the scenes, in back rooms, bars, and hotels, as for what was shown on their floors. In gathering people from all over the industry together in one location, they provided essential opportunities for negotiations, deal making, maybe even a bit of intrigue.

Thus it was at the Boston Applefest in May of 1982 that Marc Blank of Infocom had a long talk with Mike Berlyn of Sentient Software, to whom he had been introduced by a mutual acquaintance. As it turned out, each was looking for something the other could offer him. It didn’t take long to make a deal.

Berlyn was by a wide margin the more frustrated of the pair. As you may recall, he had embraced the idea of adventure games as a new form of literary expression very early, and put it into practice as well as his resources allowed in two games he released through Sentient, Oo-Topos and Cyborg. Yet despite an absolutely rapturous review of the latter in the influential Softalk, the two games made nary a dent commercially. Berlyn, a demanding personality who throughout his career would change business relationships almost as often as he churned out games, felt muzzled by partners who, as he saw it, weren’t as committed as he was, and by the accompanying lack of promotion and investment. Still, he also realized that in a real sense his best just wasn’t good enough. Both games were written in BASIC, with the two-word parser, simplistic world model, and all the other limitations that implied. Berlyn was a clever self-taught Apple II hacker, but lacked the experience or technical vision to create something more advanced — like, say, Infocom’s state-of-the-art ZIL system.

Blank, meanwhile, had ZIL but wasn’t sure he could take full advantage of it. Since starting to work on the landmark Deadline the previous year, he had started to see Infocom’s games in much the same light as Berlyn — as dynamic, playable stories. Blank, who was rather insecure about his own writerly chops (albeit largely unnecessarily), now viewed Deadline almost as a tech demo, a chance to get tools worked out and to demonstrate some shadow of what might be possible in the hands of a real writer. Berlyn, it must be admitted, was not exactly Norman Mailer or even Arthur C. Clarke. He had just three straight-to-the-dimestore-paperback-rack science fiction novels to his credit, none of which had sold all that well. Still, that was enough to qualify him for the title of “published author,” and was also three more novels than anyone else currently writing adventure games had published. Signing Berlyn would mark a big step toward Blank’s crystallizing vision of Infocom as publishers of interactive fiction rather than mere text adventures, even if it would still be a couple of years before the company would stumble upon that term to describe what they were really about.

The first plan had Berlyn working on a game for Infocom under contract from his home in Colorado. However, what with the complexities of the ZIL system and the state of telecommunications in 1982, that quickly proved impractical. So, within weeks of the Applefest meeting, Berlyn and his wife packed up and moved to Boston, where he became one of the first full-time employees to be hired by Infocom, as well as the first Implementor to be drawn from outside the immediate orbit of MIT’s Laboratory for Computer Science. What Infocom got for a first project was perhaps not quite what they had expected. Berlyn, Infocom’s supposed literary star, always combined a headstrong creativity with a certain flair for the perverse. He now started in earnest on Suspended, arguably the least literary parser-driven game Infocom would ever release, more a strategy game implemented in text than an interactive fiction.

The premise of Suspended reflects Berlyn’s longstanding obsession with disembodied consciousness; this had already been at the heart of his novel The Integrated Man and his earlier adventure Cyborg. In Suspended, you take the role of, yes, another disembodied consciousness, whose body has been placed in “cryogenic suspension” while her mind takes a 500-year shift as the emergency backup to an automated system which makes life possible on a planet of the future, controlling the weather, food production, and the transportation network. Normally your mind sleeps alongside your body, but you’re to be woken in the case of an emergency which the automated systems are not equipped to handle. As you’ve probably guessed, just such an emergency occurs as the game begins.

With no body of your own, you have six robots to whom you can issue orders and through whose senses you can experience the game’s available geography, which is restricted to a planetary control complex located far underground. Each robot is somewhat, um, specialized in its capabilities. Iris is the only one who can see. Auda can hear. Sensa can detect “vibrational activity, photon emission sources, and ionic discharges.” Poet seems to have no clear purpose, other than to spout bits of poetry that must be deciphered like a code to figure out what is really going on with him. (“All life’s a stage, so just consider me a player,” he says when asked to go somewhere; “It hops and skips and leaves a bit, and can’t decide if it should quit,” when asked to describe his surroundings inside a power station.) The most obviously practical robots are Whiz, who can interface with various computer systems, and Waldo, a general-purpose repair robot.

Over the course of the game a series of escalating crises strike the planet, to which you must respond by making use of all of your robots. There are fairly conventional object-based puzzles to solve, but even once you figure out how to do everything you still face a daunting challenge in scheduling and logistics to juggle all of your robots efficiently and minimize the casualties on the surface. If you succeed in saving the planet at all — no easy task in itself; it will likely take dozens of plays just to get that far — you can next concentrate on doing it without leaving half the population dead. (It’s rather deflating when you “win” for the first time, only to be told that the survivors want to burn you in effigy.) Winning “a home in the country and an unlimited bank account” will likely take at least a few dozen more attempts.

Played today, Suspended feels oddly like the genre of cooperative board games that has become fairly common in recent years. In games like Pandemic, Red November, and Flash Point, players struggle together to maintain a system against a series of shocks, whether they come in the form of waves of global disease, leaks and explosions aboard a very unseaworthy submarine, or a hungry house fire. Further cementing the board-game connection in my mind are the uniquely practical feelies that came with Suspended: a map of the complex in the form of a game board, with a set of counters representing the robots. As you get deeper into the game and begin playing to win, you’ll soon have multiple robots moving simultaneously about the complex doing various things. Thus the board quickly becomes an essential tool for keeping track of the whole situation, along with some careful notes.

In one sense, Suspended feels visionary, or at least unique in the Infocom canon. The standard text-adventure paradigm of play has been thrown overboard almost entirely. Gone, for example, is the need to map, along with the connection to a single in-game protagonist and any semblance of conventional storytelling. Further emphasizing the strategy-game feeling, Suspended is explicitly designed to be replayable. It has an “advanced” difficulty level you can attempt if you finally manage a good score on the standard, or you can choose the custom starting option, where you can set the starting location of each robot and control when the various disasters are triggered. The manual suggests that you and your friends could use this to “challenge each other” with new scenarios.

Unfortunately, Suspended’s flexibility can make us expect more from it than it can deliver. It would be nice if, like those board games I mentioned, Suspended could truly become a different experience every time it’s played by parceling out fortune and misfortune from a randomized deck of virtual cards. But alas, the same events will always occur even in custom mode; the only question is when, and even that is predetermined by the person entering the new parameters. Suspended upends the traditional Infocom approach enough that you wish it could have gone even further, dispensing with fixed puzzles and events entirely in favor of something completely dynamic and replayable. Maybe there’s a project in there somewhere for some modern author…

Visionary as it can feel, Suspended can also paradoxically feel like a bit of a throwback even in the context of its day. When we think of games in text today, we generally leap immediately to Adventure, Infocom, and all of their peers and antecedents. However, it’s important to remember that through the 1970s lots and lots of other sorts of games were implemented in text, simply because that was the only possibility. This included card games, strategy games, simulations, even action games. By the time of Suspended, the two text-only members of the trinity of 1977 (the TRS-80 and the Commodore PET) were fading away, and games other than adventures were expected to have graphics. One is almost tempted to look at Suspended as a text game that really wants to be in pictures, to imagine how cool it might be if the map board was included in the game itself as a graphical playing field. But then you realize that the very premise of having only one robot who can actually, you know, see is dependent on the proverbial magic of text, and a new appreciation for Berlyn’s creativity asserts itself. At any rate, it’s perhaps worth remembering again in light of Suspended‘s unusual mode of play that Infocom were not at this stage calling themselves makers of interactive fiction or even adventure games. They were just making games in text which were (they claimed) smarter and more sophisticated than those of anybody working in graphics.

Being such a departure from anything Infocom had done before (or, for that matter, would do later), Suspended pushed and stretched the ZIL system in unexpected new directions, turning development into quite a challenge. To make things harder, Berlyn, while he knew his way pretty well around an Apple II, had none of the grounding in programming and theory of the Infocom founders. Just getting him up to speed on ZIL took some time, and getting this extremely ambitious first project going took more. Yes, some of what was needed had been done already: Dave Lebling had first put together a system for passing orders to other characters for his own robot in Zork II, and Blank had made great strides toward a more dynamic model of adventuring in Deadline. Still, Blank had to work quite extensively with Berlyn to give him the tools he needed. A game of Suspended can have many, many balls in the air, with six robots all moving about following orders, disasters and events happening (or being averted) on the surface, and the player hopping about amidst all the chaos, taking in the scene through this robot’s senses, then issuing orders to that one. Further, the parser had to be substantially reworked to support it all; it’s now possible to issue orders to multiple robots at once, or even to tell two or more robots to work on something together, such as moving something neither one is strong enough to budge on its own. Taken just as a functioning virtual world, Suspended is damn impressive — amongst the most technically impressive worlds that Infocom would ever create.
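Purely to illustrate the kind of order dispatching that reworked parser had to handle, here is a toy sketch of how a command like “IRIS AND WALDO, GO EAST” might be decomposed. It has nothing to do with ZIL or Infocom’s actual implementation; the robot names are the only thing taken from the game.

```python
ROBOTS = {"iris", "waldo", "sensa", "auda", "whiz", "poet"}

def parse_order(command):
    """Split a command like 'IRIS AND WALDO, GO EAST' into (robot names, order).

    A toy sketch only: the real parser handled vocabulary, abbreviations, and far
    more grammar than this.
    """
    if "," not in command:
        return None                      # not an order addressed to anyone
    addressees, order = command.split(",", 1)
    names = [name.strip() for name in addressees.lower().split(" and ")]
    if not all(name in ROBOTS for name in names):
        return None                      # at least one addressee isn't a known robot
    return names, order.strip()

if __name__ == "__main__":
    print(parse_order("IRIS AND WALDO, GO EAST"))        # (['iris', 'waldo'], 'GO EAST')
    print(parse_order("SENSA, DESCRIBE SURROUNDINGS"))   # (['sensa'], 'DESCRIBE SURROUNDINGS')
```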

It’s also damn difficult to penetrate. With its tersely sterile robotic diction, its ironclad adherence to the sensory limitations of each robot, and the time pressures of its cavalcade of disasters, there isn’t an ounce of compromise or compassion in the game. We can only take comfort in knowing that even in its cruelty it’s eminently fair, as uninterested in playing guess the verb or foisting illogical puzzles on us as it is in coddling us. There’s none of the sense here of a design that got away from its designer that plagues, say, the work of Scott Adams or the early work of Roberta Williams. Suspended is hard because it wants to be hard, and it’s hard in exactly the way it wants to be. Which isn’t to say that most players, myself included, are exactly disappointed that Infocom never ventured further down the trail it blazed. I suspect that Suspended is the Infocom game farthest away from the ideal of interactive fiction as it’s perceived and (in Infocom’s case) remembered today.

Suspended

Suspended was released in March of 1983 in a huge and elaborate box (the better to house that big laminated game board) that featured a recessed three-dimensional face mask for a lid. Surprisingly in light of the game’s difficulty and unabashedly experimental mode of play, it was yet another solid hit, selling some 55,000 copies in 1983 alone and eventually flirting with sales of 100,000 over its commercial lifetime. It really did seem that, at least for now, people were willing to follow Infocom wherever they led them. And Suspended was only the first release of 1983, the happiest, most financially successful year in the company’s history. I’ll have much more to tell about that year and the games it produced in the next posts.

(I’m thrilled to be able to say that since my last post on Infocom, Activision has rereleased many of their games, including Suspended, for iPhone and iPad. If you don’t have an iDevice, you can certainly find the story file elsewhere on the Internet, but as usual I won’t be hosting it here. Just in case it’s helpful to anyone, here’s a very rough module for the VASSAL board-gaming engine with the Suspended map and counters. Load the save to position the robots as they are at the start of the standard game. If someone more familiar with VASSAL wants to clean it up and upload it to the official module repository, by all means feel free.

I should also note here that Marc Blank’s attitude toward the eternal game vs. story question that always hangs about Infocom and interactive fiction in general seems to have changed over the years. In an interview for Jason Scott’s Get Lamp documentary, he states that he always viewed Infocom’s works as fundamentally games rather than fiction or literature. In interviews from Infocom’s heyday, however, he often expressed the belief that Infocom was creating works that were different from — or, if you like, transcended — games. I believe his current thinking may be somewhat colored by the pain and frustration of Infocom’s later years, and his inability to really move the genre forward in a way that felt right to him.)

The Pinball Wizard

Bill Budge in Electronic Arts software artist pose

The name of Bill Budge has already come up from time to time on this blog. Mentioning him has been almost unavoidable, for he was one of the titans amongst early Apple II programmers, worshiped for his graphical wizardry by virtually everyone who played games. As you may remember, his name carried such weight that when Richard Garriott was first contacted by Al Remmers of California Pacific with a proposal that he allow CP to publish Akalabeth, Garriott’s first reaction was a sort of “I’m not worthy” sense of shock at the prospect of sharing a publisher with the great Budge. Now that we’ve arrived at the birth of Electronic Arts and of Budge’s masterpiece, Pinball Construction Set, it seems a good moment to take a step back and look at what made Budge such a star.

Budge was always a tinkerer, always fascinated by the idea of construction sets. As a young kid, he played with blocks, tinker toys, erector sets. As an older kid, he moved on to fiddling with telescopes and model rockets. (“It’s amazing we didn’t kill ourselves.”) After moving about the country constantly when Budge was younger, his family finally ended up in the San Francisco Bay area by the time Budge began high school in the late 1960s. It was a fortuitous move. With the heart of the burgeoning Silicon Valley easily accessible, Budge’s family had found the perfect spot for a boy who liked to tinker with technology. A teacher at his school named Harriet Hungate started a class in “computer math” soon after Budge arrived. The students wrote their programs out by hand, then sent them off to a local company that had agreed to donate some time on their IBM 1401. They then got to learn whether their programs had worked from a printout sent back to the school. It was a primitive way of working, but Budge was immediately smitten. He calls the moment he discovered what a loop is one of the “transcendent moments” in his life. He “just programmed all the time” during his last two years of high school. Hungate was eventually able to finagle a deal with another local business to get a terminal installed at the school with a connection to an HP 2100 machine hosting HP Time-Shared BASIC. Budge spent hours writing computer versions of tic-tac-toe, checkers, and Go.

But then high school was over. Without the ready access to computers that his high school had afforded him, Budge tried to put his programming behind him. He entered the University of California Santa Cruz as an English major, with vague aspirations toward becoming a novelist. Yet in the end the pull of programming proved too strong. After two years he transferred to Berkeley as a computer-science major. He got his Bachelor’s there, then stayed on to study for a PhD. He was still working on that in late 1978 when the Apple II first entered his life.

As you might expect, the arrival of the trinity of 1977 had prompted considerable discussion within Berkeley’s computer-science department. Budge dithered for a time about whether to buy one, and if so which one. At last his friend and fellow graduate student Andy Hertzfeld convinced him to go with the local product of nearby Apple Computer. It wasn’t an easy decision to make; the Commodore PET and the TRS-80 were both much cheaper (a major consideration for a starving student), and the TRS-80 had a vastly larger installed base of users and much more software available. Still, Budge decided that the Apple II was worth the extra money when he saw the Disk II system and the feature that would make his career, the bitmapped hi-res graphics mode. He spent half of his annual income on an Apple II of his own. It was so precious that he would carefully stow the machine back in its box, securely swaddled in its original protective plastic, whenever he finished using it.

As he explored the possibilities of his treasure, Budge kept coming back again and again to hi-res mode. He set out to divine everything about how it worked and what he might do with it. His first big programming project was to rewrite much of Steve Wozniak’s original Breakout game, which shipped with every early Apple II. He replaced Woz’s graphics code with his own routines to make the game play faster and smoother, more like its arcade inspiration. When he had taken that as far as he could, he started thinking about writing a game of his own. He was well-acquainted with Pong from a machine installed at the local pizza parlor. Now he recreated the experience on the Apple II. He names “getting my first Pong ball bouncing around on the screen” as another of his life’s transcendent moments: “When I finished my version of Pong, it was kind of a magical moment for me. It was night, and I turned the lights off in my apartment and watched the trailing of the ball on the phosphors of my eighty-dollar black and white TV.” He added a number of optional obstacle layouts to the basic template for variety, then submitted the game, which he named Penny Arcade, to Apple themselves. They agreed to trade him a printer for it, and earmarked it for The Apple Tapes, a cassette full of “introductory programs” to be included with every specimen of the new Apple II Plus model they were about to release. In the manual for the collection they misattributed the game to “Bob Budge,” but it mattered little. Soon enough everyone would know his name.

Penny Arcade

With his very first game shipping with every Apple II Plus, Budge was naturally happy to continue with his new hobby. He started hanging around the local arcades, observing and taking careful notes on the latest games. Then he would go home and clone them. Budge had little interest in playing the games, and even less in playing the role of game designer. For him, the thrill — the real game, if you will — was in finding ways to make his little Apple II produce the same visuals and gameplay as the arcade machines, or at least as close as he could possibly get. In a few years Atari would be suing people for doing what Budge was doing, but right now the software industry was small and obscure enough that he could get away with it.

Budge’s big breakthrough came when a friend of his introduced him to a traveling sales rep named Al Remmers, who went from store to store selling 8″ floppy disk drives. He and Budge made a deal: Remmers would package the games up in Ziploc baggies and sell them to the stores he visited on his treks, and they would split the profits fifty-fifty. Budge was shocked to earn $7000 in the first month, more than his previous annual income. From this relationship was born Remmers’s short-lived but significant software-publishing company, California Pacific, as well as Budge’s reputation as the dean of Apple II graphics programmers. His games may not have been original, but they looked and played better than just about anything else out there. To help others who dreamed of doing what he did, he packaged some of his routines together as Bill Budge’s 3-D Graphics System. His reputation was such that this package sold almost as well as his games. This was how easily fame and fortune could come to a really hot programmer for a brief window of a few years, when word traveled quickly in a small community aching for more and better software for their machines.

In fact, his reputation soared so quickly that Apple themselves came calling. Budge, who had been putting less and less effort into his studies as his income from his games increased, dropped out of Berkeley to join his old buddy Andy Hertzfeld in Cupertino. He was made — what else? — a graphics specialist working in the ill-fated Apple III division. He ended up spending only about a year at Apple during 1980 and 1981, but two experiences there would have a huge impact on his future work, and by extension on the field of computer gaming.

While Budge was working at Apple, much of the engineering team, including Hertzfeld and even Woz himself, was going through a hardcore pinball phase: “They were students of the game, talking about catches, and how to pass the ball from flipper to flipper, and they really got into it.” Flush with cash as they were after the IPO, many at Apple started filling their houses with pinball tables.

Budge’s first pinball game, from Trilogy of Games

Budge didn’t find pinball intrinsically all that much more interesting than he did purely electronic arcade games. Still, one of the first games Budge sold through Remmers had been a simple pinball game, which was later included in his very successful Trilogy of Games package published by California Pacific. Pinball was after all a fairly natural expansion of the simple Pong variants he started with. Now, witnessing the engineers’ enthusiasm led him to consider whether he could do the game better justice, create something on the Apple II that played and felt like real pinball, with the realistic physics that are so key to the game. It was a daunting proposition in some ways, but unusually susceptible to computer simulation in others. A game of pinball is all about physics, with no need to implement an opponent AI. And the action is all centered around that single moving ball while everything else remains relatively static, meaning it should be possible to do on the Apple II despite that machine’s lack of hardware sprites. (This lack made the Apple II less suited for many action games than the likes of the Atari 8-bit computers or even the Atari VCS.) After some months of work at home and after hours, Budge had finished Raster Blaster.
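To see why the problem was so tractable, consider that a bare-bones pinball simulation needs little more than a ball position, a velocity, gravity, and a bounce test against each static part, all in service of that one moving object. The sketch below is purely illustrative and makes no claim to resemble Budge’s actual code; the numbers are arbitrary.

```python
import math

GRAVITY = 60.0        # downward acceleration, in pixels per second squared (arbitrary)
DT = 1 / 60           # one simulation step per displayed frame

class Ball:
    def __init__(self, x, y):
        self.x, self.y = x, y
        self.vx, self.vy = 30.0, 0.0

    def step(self, bumpers):
        # Integrate gravity, then move the ball.
        self.vy += GRAVITY * DT
        self.x += self.vx * DT
        self.y += self.vy * DT
        # Bounce off any circular bumper the ball has penetrated.
        for (bx, by, radius) in bumpers:
            dx, dy = self.x - bx, self.y - by
            dist = math.hypot(dx, dy)
            if 0 < dist < radius:
                nx, ny = dx / dist, dy / dist            # outward normal
                dot = self.vx * nx + self.vy * ny
                if dot < 0:                              # only if moving inward
                    self.vx -= 2 * dot * nx              # reflect the velocity
                    self.vy -= 2 * dot * ny
                self.x = bx + nx * radius                # push back to the surface
                self.y = by + ny * radius

if __name__ == "__main__":
    ball = Ball(100.0, 20.0)
    bumpers = [(100.0, 120.0, 12.0)]     # one bumper below the ball's starting point
    for _ in range(240):                 # four seconds of simulated play
        ball.step(bumpers)
    print(round(ball.x, 1), round(ball.y, 1))
```

Everything else on the playfield just sits there waiting to be hit, which is exactly what made the job manageable on a machine without hardware sprites.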

Raster Blaster

Raster Blaster was the best thing Budge had yet done — so good that he decided he didn’t want to give it to California Pacific. Budge felt that Remmers wasn’t really doing much for him by this point, just shoveling disks into his homemade-looking packaging, shipping them off to the distributor SoftSel, and collecting 50% of the money that came back. The games practically sold themselves on the basis of Budge’s name, not California Pacific’s. Budge was a deeply conflict-averse personality, but his father pushed him to cut his ties with California Pacific, to go out on his own and thereby double his potential earnings. And anyway, he was getting bored in his job at Apple. So he quit, and along with his sister formed BudgeCo. He would write the games, just as he always had, and she would handle the business side of things. Raster Blaster got BudgeCo off the ground in fine form. It garnered rave reviews, and became a huge hit in the rapidly growing Apple II community, Budge’s biggest game yet by far. Small wonder — it was the first computer pinball game that actually felt like pinball, and also one of the most graphically impressive games yet seen on the Apple II.

But next came the question of what to do for a follow-up. It was now 1982, and it was no longer legally advisable to blatantly clone established arcade games. Budge struggled for weeks to come up with an idea for an original game, but he got nowhere. Not only did he have no innate talent for game design, he had no real interest in it either. Out of this frustration came the brilliant conceptual leap that would make his legacy.

Above I mentioned that two aspects of Budge’s brief time at Apple would be important. The other was the Lisa project. Budge did not directly work on or with the Lisa team, but he was fascinated by their work, and observed their progress closely. Like any good computer-science graduate student, he had been aware of the work going on at Xerox PARC. Yet he had known the Alto’s interface only as a set of ideas and presentations. When he could actually play with a real GUI on the Lisa prototypes, it made a strong impression. Now it provided a way out of his creative dilemma. He was uninterested in games and game design; what interested him was the technology used to make games. Therefore, why not give people who actually did want to become designers a set of tools to let them do that? Since these people might be no more interested in programming than he was in design, he would not just give them a library of code like the old 3-D Graphics System he had published through California Pacific. No, he would give them a visual design tool to make their own pinball tables, with a GUI inspired by the work of the Lisa team.

The original Pinball Construction Set box art, featuring pieces of the pinball machine that Budge disassembled to plan the program

Budge had resisted buying a pinball table of his own while at Apple, but now he bought a used model from a local thrift shop. He took it apart carefully, cataloging the pieces that made up the playfield. Just as the Lisa’s interface used a desktop as its metaphor, his program would let the user build a pinball machine from a bin of iconographic spare parts. The project was hugely more ambitious than anything he had tackled before, even if he had already written some of the components, such as a simple paint program that let the user customize the look of her table, for his personal use in developing Raster Blaster. Budge was determined to give his would-be creator as much scope as he possibly could. That meant fifteen different components that she could drag and drop anywhere on the surface of her table. It meant letting her alter gravity or the other laws of physics if she liked. It meant letting her make custom scoring combinations, so that bumping this followed by that gave double points. And, because every creator wants to share her work, it meant letting the user save her custom table as a separate program that her friends could load and play just like they did Budge’s own Raster Blaster. That Budge accomplished all of this, and in just 48 K of memory, makes Pinball Construction Set one of the great feats of Apple II programming. None other than Steve Wozniak has called it “the greatest program ever written for an 8-bit machine.”
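As a way of picturing what a construction set like this has to keep track of (parts with positions, tweakable physics, scoring rules, and a table that can be saved and shared), here is a hypothetical sketch of a table description in Python. None of it is drawn from the program’s real data format; the part names and fields are assumptions for illustration only.

```python
from dataclasses import dataclass, field

@dataclass
class Part:
    kind: str         # e.g. "flipper", "bumper", "spinner" -- names are illustrative
    x: int
    y: int
    points: int = 0   # score awarded when the ball hits this part

@dataclass
class Table:
    name: str
    gravity: float = 1.0                          # the user could tweak physics like this
    elasticity: float = 0.8
    parts: list = field(default_factory=list)
    combos: dict = field(default_factory=dict)    # e.g. ("bumper", "spinner") -> bonus factor

    def place(self, kind, x, y, points=0):
        """Drag a part out of the bin and drop it somewhere on the playfield."""
        self.parts.append(Part(kind, x, y, points))

if __name__ == "__main__":
    table = Table("My First Table", gravity=1.2)
    table.place("bumper", x=80, y=60, points=100)
    table.place("flipper", x=60, y=180)
    table.combos[("bumper", "spinner")] = 2       # hit both in sequence for double points
    print(table)
```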

Pinball Construction Set

Amazing as it was, when BudgeCo released Pinball Construction Set at the end of 1982 its sales were disappointing. It garnered nowhere near the attention of Raster Blaster. The software industry had changed dramatically over the previous year. A tiny operation like BudgeCo could no longer just put a game out — even a great, groundbreaking game like PCS — and wait for sales. It was getting more expensive to advertise, harder to get reviews and get noticed in general. Yet when Trip Hawkins came to him a few months later asking to re-release PCS through his new publisher Electronic Arts, Budge was reluctant, nervous of the slick young Hawkins and his slick young company. But Hawkins just wouldn’t take no for an answer; he said he would make Budge and his program stars, said that only he could find PCS the audience its brilliance deserved — and he offered one hell of a nice advance and royalty rate to boot. And EA did have Woz himself on the board of directors, and Woz said he thought signing up would be a smart move. Budge agreed at last; thus BudgeCo passed into history less than two years after its formation.

As good as PCS was, it’s very possible that Hawkins had another, ulterior motive in pursuing Budge with such vigor. To understand how that might have been, we need to understand something about what Budge was like personally. Given the resume I’ve been outlining — spent his best years of high school poring over computer code; regarded his Apple II as his most precious possession; had his most transcendent moments programming it; etc. — you’ve probably already formulated a shorthand picture. If the Budge of that picture is, shall we say, a little bit on the nerdy, introverted side, you can be forgiven. The thing was, however, the real Budge was nothing like what you might expect; as he himself put it, he “didn’t quite fit the mold.” He had a tall, rangy build and handsome features beneath a luxurious head of hair, with striking eyes that a teenage girl might call dreamy. At 29 (although he looked perhaps 22), he was comfortable in his own skin in a way that some people never manage, with an easy grace about him that made others as glad to talk to him as they were to listen. His overall persona smacked more of enlightened California beach bum than hardcore programmer. And he took a great picture. If there was one person amongst Hawkins’s initial crew of developers who could actually pull off the rock star/software artist role, it was Budge; he might even attract a groupie or two. He was a dream come true for the PR firm Hawkins had inherited from his time at Apple, Regis McKenna, Inc. Thus the EA version of PCS was designed to look even more like a contemporary rock album than any of the other games. The name of Bill Budge, the man EA planned to make their very own rock star, was far larger on it than the name of his game.

EA's version of Pinball Construction Set

The down-to-earth Budge himself was rather bemused by EA’s approach, but he shrugged his shoulders and went along with it in his usual easygoing manner. When EA arranged for rock photographer Norman Seeff to do the famous “software artists” photo shoot, they asked that the subjects all wear appropriately bohemian dark clothing to the set. Budge went one better: he showed up with a single studded leather glove he’d bought for dressing up as a punk rocker for a party thrown by the Apple Macintosh team. He brought it simply as a joke, a bit of fun poked at all this rock-star noise. Imagine, then, how shocked he was when Seeff and the others demanded that he actually wear it. Thus Budge in his leather glove became the standout figure from that iconic image. As he later sheepishly admitted, “That’s not really me.” Soon after he got a software-artist photo shoot and advertisement all to himself, filled with vague profundities that may or may not have actually passed his lips beforehand. (“Programming for a microcomputer is like writing a poem using a 600-word vocabulary.”)

EA booked Budge into every gig they could find for him. He did a lengthy one-on-one interview with Charlie Rose for CBS News Nightwatch (“He knew absolutely nothing. He seemed like your typical blow-dried guy without a lot of substance. But I guess I was wrong about him.”); he demonstrated PCS alongside Hawkins on the influential show Computer Chronicles; he featured in a big segment on Japanese television, at a time when that country’s own designers were toiling in obscurity for their parent corporations; he had his photo on the cover of The Wall Street Journal; he was featured alongside the likes of Steve Jobs in an Esquire article on visionaries under the age of forty.

With his album out and the photo shoots done and the promotional spots lined up, it still remained for EA’s rock star to hit the road — to tour. If the highs just described were pretty intoxicating for a computer-game programmer, this part of the process kept him most assuredly grounded. Budge and EA’s Bing Gordon went on a series of what were billed as “Software Artists Tours,” sometimes accompanied by other designers, sometimes alone. The idea was something like a book tour, a chance to sign autographs and meet the adoring fans. Determined to break beyond the ghetto of traditional computer culture, EA booked them not only into computer stores but also into places like Macy’s in New York City, where they were greeted with confusion and bemusement. Even the computer stores sometimes seemed surprised to see them. Whether because of communications problems or flat disinterest, actual fans were often rare or nonexistent at the events. Hawkins’s dream of hordes of fans clutching their EA albums, fighting for an autograph… well, it didn’t happen, even though PCS became a major hit in its new EA duds (it would eventually sell over 300,000 copies across all platforms, a huge figure in those days). Often there seemed to be more press people eager to score an interview than actual fans at the appearances, and often the stores themselves didn’t quite know what to do with their software artists. One manager first demanded that Budge buy himself a new outfit (he was under-dressed in the manager’s opinion to be “working” in his store), then asked him if he could make himself useful by going behind the register and ringing up some customers. “That’s when I realized maybe I wouldn’t be a rock star,” a laconic Budge later said.

Budge wasn’t idle in the down-times between PR junkets. Privileged with one of the few Macintosh prototypes allowed outside of Apple, he used its bundled MacPaint application as the model for MousePaint, a paint program that Apple bundled with the first mouse for the Apple II. He also ported PCS to the Mac. Still, the fans and press were expecting something big, something as revolutionary as PCS itself had been — and small wonder, given the way that EA had hyped him as a visionary.

One of the most gratifying aspects of PCS had been the unexpected things people found to do with it, things that often had little obvious relationship to the game of pinball. Children loved to fill the entire space with bumpers, then watch the ball bouncing about among them like a piece of multimedia art. Others liked to just use the program’s painting subsections to make pictures, scattering the ostensible pinball components here and there not for their practical functions but for aesthetic purposes. If people could make such creative use of a pinball kit, what might they do with something more generalized? As uninterested as ever in designing a game in the traditional sense, Budge began to think about how he could take the concept of the construction set to the next step. He imagined a Construction Set Construction Set, a completely visual programming environment that would let the user build anything she liked — something like ThingLab, an older and admittedly somewhat obtuse stab at the idea that existed at Xerox PARC. His ideas about Construction Set Construction Set were, to say the least, ambitious:

“I could build anything from Pac-Man to Missile Command to a very, very powerful programming language. It’s the kind of a program that has a very wide application. A physics teacher, for example, could build all kinds of simulations, of little micro-worlds, set up different labs and provide dynamic little worlds that aren’t really videogames.”

It turned out to be a bridge too far. Budge tinkered with the idea for a couple of years, but never could figure out how to begin to really implement it. (Nor has anyone else in the years since.) In fact, he never made a proper follow-up to PCS at all. Ironically, Budge, EA’s software artist who best looked the part, was one of the least able to play the role in the long term. As becomes clear upon reading any interview with Budge, old or new, his passion is not for games; it’s for code. In the early days of computer gaming the very different disciplines of programming and game design had been conflated into one due to the fact that most of the people who owned computers and were interested in making games for them were programmers. During the mid-1980s, however, the two roles began to pull apart as the people who used computers and the way games were developed changed. Budge fell smack into the chasm that opened up in the middle. Lauded as a brilliant designer, he was in reality a brilliant programmer. People expected from him something he didn’t quite know how to give them, although he tried mightily with his Construction Set Construction Set idea.

Budge at home in early 1985, the beginning of his “years in the wilderness”

So, he finally just gave up. After 1984 the interviews and appearances and celebrity petered out. His continuing royalties from PCS made work unnecessary for some years, so he all but gave up programming, spending most of his time wind-surfing instead (a sport that Bing Gordon, perhaps to his regret, had taught him). Most people would have a problem going back to obscurity after appearing on television and in newspaper features and even having their own magazine column (in Softalk), but it seemed to affect Budge not at all: “I’m kind of glad when I don’t have anything new out and people forget about me.” Eventually EA quietly accepted that they weren’t going to get another game from him and quit calling. Budge refers to this period as his “years in the wilderness.” By 1990 the name of Bill Budge, such a superstar in his day, came up only when the old-timers started asking each other, “Whatever happened to….?”

In the early 1990s, Budge, now married and more settled, decided to return to the games industry, first to work on yet another pinball game, Virtual Pinball for the Sega Genesis console. Without the pressure of star billing to live up to and with a more mature industry to work in that had a place for his talents as a pure programmer’s programmer, he decided to continue his career at last. He’s remained in the industry ever since, unknown to the public but respected immensely by his peers within the companies for which he’s worked. For Budge, one of those people who has a sort of innate genius for taking life as it comes, that seems more than enough. Appropriately enough, he’s spent most of his revived career as what’s known as a tools programmer, making utilities that others then use to make actual games. In that sense his career, bizarre as its trajectory has been, does have a certain consistency.

PCS, his one towering achievement as a software artist, deserves to be remembered for at least a couple of reasons. First of all there is of course its status as the first really elegant tool to let anyone make a game she could be proud of. It spawned a whole swathe of other “construction set” software, from EA and others, all aimed at fostering this most creative sort of play. That’s a beautiful legacy to have. Yet its historical importance is greater than even that would imply. PCS represents to my knowledge the first application of the ideas that began at Xerox PARC to an ordinary piece of software which ordinary people could buy at an ordinary price and run on ordinary computers. It proved that you didn’t need an expensive workstation-class machine like the Apple Lisa to make friendlier, more intuitive software; you could do it on a 48 K Apple II. No mouse available? Don’t let that stop you; use a joystick or a set of paddles or just the arrow keys. Thousands and thousands of people first saw a GUI interface in the form of Pinball Construction Set. Just as significantly, thousands of other designers saw its elegance and started implementing similar interfaces in their own games. The floating, disembodied hand of PCS, so odd when the game first appeared, would be seen everywhere in games within a couple of years of its release. And game manuals soon wouldn’t need to carefully define “icon,” as the PCS manual did. PCS is a surprising legacy for the Lisa project to have; certainly the likes of it weren’t anything anyone involved with Lisa was expecting or aiming for. But sometimes legacies are like that.

Next time we’ll look at another of those seminal early EA games. If you’d like to try to make something of your own in the meantime, here’s the Apple II disk image and manual for Pinball Construction Set.

(What with his celebrity in Apple II circles between 1979 and 1985, there’s a lot of good information on Budge available in primary-source documents from the period. In particular, see the November 1982 Softline, the December 1985 Compute!’s Gazette, the March 1985 Electronic Games, the March 1985 Enter, the September 1984 Creative Computing, and Budge’s own column in later issues of Softalk. Budge is also featured in Halcyon Days, and Wired magazine published a retrospective on his career when he was given a Pioneer Award by the Academy of Interactive Arts and Sciences in 2011. Budge’s interview at the awards ceremony was videotaped and is available for viewing online.)

 

Xerox PARC

One day in 1962 J.C.R. Licklider, head of the Defense Department’s Information Processing Techniques Office and future Infocom co-founder, ran into a young man named Robert Taylor at an intra-government conference on emerging computer technologies. Lick was seventeen years older than Taylor, but the two found they had much in common. Both had studied psychology at university, with a special research interest in the psychology of human hearing. Both had moved on to computers, to become what we might today call user-interface specialists, studying the ways that the human mind receives and processes information and how computers might be designed to work in more intuitive accord with their masters. Both were humanists, more concerned with that amorphous thing we’ve come to call the user experience than with the bits and bytes that obsessed the technicians and engineers around them. And both were also from the South — Lick from Missouri, Taylor from Texas — and spoke in a corresponding slow drawl that often caused new acquaintances to underestimate them. A friendship was formed.

Robert Taylor, as photographed by Annie Leibovitz in 1972 for a Rolling Stone feature article

Taylor was working at the time at NASA, having been hired there in the big build-up that followed President Kennedy’s Moon-before-the-decade-is-out speech to work on simulators. Rather astonishingly considering the excitement building for the drive to the Moon, Taylor found himself increasingly dissatisfied there. He wasn’t content working on the margins of even as magnificent an endeavor as this one. Fueled by his conversations with Lick about the potential of computers, he wanted to be at the heart of the action. In 1964 he got his wish. Stepping down as head of IPTO, Lick recommended that Ivan Sutherland be made his replacement, and that Taylor be made Sutherland’s immediate deputy. Barely two years later Sutherland himself stepped down, making the 34-year-old Taylor head of the most far-reaching, well-funded computer-research grant program in the country.

By this time Taylor had long ago come to share Lick’s dream of computers as more than just calculating and tabulating machines. They had the potential to become personal, interactive companions that would not replace human brainpower (as many of the strong AI proponents dreamed) but rather complement, magnify, and transmit it. Taylor put his finger on the distinction in a later interview: “I was never interested in the computer as a mathematical device, but as a communications device.” He and Lick together published a manifesto of sorts in 1968 that still stands as a landmark in the field of computer science, the appropriately named “The Computer as a Communication Device.” They meant that literally as well as figuratively: it was Taylor who initiated the program that would lead to the ARPANET, the predecessor to the modern Internet.

The first mouse, created at SRI circa 1964

One of Taylor’s favorites amongst his stable of researchers became Doug Engelbart, who seemed almost uniquely capable of realizing his and Lick’s concept of a new computing paradigm in actual hardware. While developing an early full-screen text editor at the Stanford Research Institute, Engelbart found that users complained of how laborious it was to slowly move the cursor around the screen using arrow keys. To make it easier, he and his team hollowed out a small block of wood, mounting two mechanical wheels attached to potentiometers in the bottom and a button on top. They named it a “mouse,” because with the cord trailing out of the back to connect it with a terminal that’s sort of what it looked like. The strange, homemade-looking gadget was crude and awkward compared to what we use today. Nevertheless, his users found it a great improvement over the keyboard alone. The mouse was just one of many innovations of Engelbart and his team. Their work climaxed in a bravura public demonstration in December of 1968 which first exposed the public to not only the mouse but also the concepts of networked communication, multimedia, and even the core ideas behind what would become known as hypertext. Engelbart pulled out all the stops to put on a great show, and was rewarded with a standing ovation.

But to some extent by this time, and certainly by the time the ARPANET first went live the following year, the atmosphere at ARPA itself was changing. Whereas earlier Taylor was largely left to invest his resources in whatever seemed to him useful and important, the ever-escalating Vietnam War was bringing with it both tightening research budgets and demands that all research be “mission-focused” — i.e., tailored not only to a specific objective, but to a specific military objective at that. Further, Taylor found himself more and more ambivalent about both the war itself and the idea of working for the vast engine that was waging it. After being required to visit Vietnam personally several times to sort out IT logistics there, he decided he’d had enough. He resigned from ARPA at the end of 1969, accepting a position with the University of Utah, which was conducting pioneering (and blessedly civilian) research in computer graphics.

He was still there a year later when an old colleague whose research had been partially funded through ARPA, George Pake, called him. Pake now worked for Xerox Corporation, who were in the process of opening a new blue-sky research facility that he would head. One half of its staff and resources would be dedicated to Xerox’s traditional forte, filled with chemists and physicists doing materials research into photocopying technology. The other half, however, would be dedicated to computer technology in a bid to make Xerox not just the copy-machine guys but the holistic architects of “the office of the future.” Eager to exploit Taylor’s old ARPA connections, which placed him on a first-name basis with virtually every prominent computer scientist in the country, Pake offered Taylor a job as an “associate manager” — more specifically, as a sort of recruiter-in-chief and visionary-in-residence — in the new facility in Palo Alto, California, just outside Stanford University. Bored already by Mormon-dominated Salt Lake City, Taylor quickly accepted.

The very idea of a facility like Xerox’s Palo Alto Research Center feels anachronistic today, what with its open-ended goals and dedication to “pure” research. When hired to run the place, Pake frankly told Xerox that they shouldn’t expect any benefits from the research that would go on there for five to ten years. In that he wasn’t entirely correct, for PARC did do one thing for Xerox immediately: it gave them bragging rights.

Xerox was a hugely profitable company circa 1970, but relatively new to the big stage. Founded back in 1906 as The Haloid Photographic Company, they had really hit the big time only in 1960, when they started shipping the first copy machine practical for the everyday office, the Xerox 914. Now they were eager to expand their empire beyond copy machines, to rival older giants like IBM and AT&T. One part of doing so had to be a cutting-edge research facility of their own, like IBM’s Thomas J. Watson Research Center and the fabled Bell Labs. Palo Alto was chosen as the location not so much because it was in the heart of Silicon Valley as because it was a long way from the majority of Xerox’s facilities on the other coast. Like its inspirations, PARC was to be kept separate from any whiff of corporate group-think or practical business concerns.

Once installed at PARC, Taylor started going through his address book to staff the place. In a sense it was the perfect moment to be opening such a facility. The American economy was slowing, leaving fewer private companies with the spare change to fund the sort of expensive and uncertain pure research that PARC was planning. Meanwhile government funding for basic computer-science research was also drying up, due to budget squeezes and Congressional demands that every project funded by ARPA must have a specific, targeted military objective. The salad days of Taylor’s ARPA reign, in other words, were well and truly over. It all added up to a buyer’s market for PARC. Taylor had his pick of a large litter of well-credentialed thinkers and engineers who were suddenly having a much harder time finding interesting gigs. Somewhat under the radar of Pake, he started putting together a group specifically tailored to advance the dream he shared with Lick and Engelbart for a new, more humanistic approach to computing.

One of his early recruits was William English, who had served as Engelbart’s right-hand man through much of the previous decade; it was English who had actually constructed the mouse that Engelbart had conceived. Many inside SRI, English not least among them, had grown frustrated with Engelbart, who managed with an air of patrician entitlement and seemed perpetually uninterested in building upon the likes of that showstopping 1968 demonstration by packaging his innovations into practical forms that might eventually reach outside the laboratory and the exhibit hall. English’s recruitment was the prelude to a full-blown defection of Engelbart’s team; a dozen more eventually followed. One of their first projects was to redesign the mouse, replacing the perpendicularly mounted wheels with a single ball that allowed easier, more precise movement. That work would be key to much of what would follow at PARC. It would also remain the standard mouse design for some thirty years, until the optical mouse began to phase out its older mechanical ancestor at last.

Alan Kay

Taylor was filling PARC with practical skill from SRI and elsewhere, but he still felt he lacked someone to join him in the role of conceptual thinker and philosopher. He wanted someone who could be an ally against the conventional wisdom — held still even by many he had hired — of computers as big, institutional systems rather than tools for the individual. He therefore recruited Alan Kay, a colleague and intellectual soul mate from his brief tenure at the University of Utah. Kay dreamed of a personal computer with “enough power to outrace your senses of sight and hearing, enough capacity to store thousands of pages, poems, letters, recipes, records, drawings, animations, musical scores, and anything else you would like to remember and change.” It was all pretty vague stuff, enough so that many in the computer-science community — including some of those working at PARC — regarded him as a crackpot, a fuzzy-headed dreamer slumming it in a field built on hard logic. Of course, they also said the same about Taylor. Taylor decided that Kay was just what he needed to make sure that PARC didn’t just become another exercise in incremental engineering. Sure enough, Kay arrived dreaming of something that wouldn’t materialize in anything like the form Kay imagined it until some two decades later. He called it the Dynabook. It was a small, flat rectangular box, about 9″ X 12.5″, which flipped open to reveal a screen and keyboard on which one could read, write, play, watch, and listen using media of one’s own choice. Kay was already describing a multimedia laptop computer — and he wasn’t that far away from the spirit of the iPad.

Combining the idealism of Taylor and Kay with the practical knowledge of their engineering staff and at least a strong nod toward the strategic needs of their parent corporation, PARC gradually refined its goal to be the creation of an office of the future that could hopefully also be a stepping stone on the path to a new paradigm for computing. Said office was constructed during the 1970s around four technologies developed right there at PARC: the laser printer; a new computer small enough to fit under a desk and possessed of almost unprecedented graphical capabilities; practical local-area networking in the form of Ethernet; and the graphical user interface (GUI). Together they all but encapsulated the face that computing would assume twenty years later.

Gary Starkweather’s laser printer

Of the aforementioned technologies, the laser printer was the most immediately, obviously applicable to Xerox’s core business. It’s thus not so surprising that its creator, Gary Starkweather, was one of the few at PARC to have been employed at Xerox before the opening of the new research facility. Previous computer printers had been clanking, chattering affairs that smashed a limited set of blocky characters onto a continuous feed of yellow-tinged fan-fold paper. They were okay for program listings and data dumps but hardly acceptable for creating business correspondence. In its original implementation Starkweather’s laser printer was also ugly, an unwieldy contraption sprouting wires out of every orifice perched like a huge parasite atop a Xerox copy machine whose mechanisms it controlled. It was, however, revolutionary in that it treated documents not as a series of discrete characters but as a series of intricate pictures to be reproduced by the machinery of the copier it controlled. The advantages of the new approach were huge. Not only was the print quality vastly better, but it appeared on crisp white sheets of normal office paper. Best of all, it was now possible to use a variety of pleasing proportional fonts to replace the ugly old fixed-width characters of the line printers, to include non-English characters like umlauts and accents, to add charts, graphs, decorative touches like borders, even pictures.

Xerox Alto

The new computer was called the Alto. It was designed to be a personal computer, semi-portable and supporting just one user, although since it was not built around a microprocessor it was not technically a microcomputer like those that would soon be arriving on the hobbyist market. The Alto’s small size made it somewhat unusual, but what most set it apart was its display.

Most computers of this period — those that were interactive and thus used a display at all, that is — had no real concept of managing a display. They rather simply dumped their plain-text output, fire-and-forget fashion, to a teletype printer or terminal. (For example, it was on the former devices that the earliest text adventures were played, with the response to each command unspooling onto fan-folded paper.) Even more advanced systems, like the full-screen text editors with which Engelbart’s team had worked, tracked the contents of the screen only as a set of cells, each of which could contain a single fixed-width ASCII character; no greater granularity was possible, nor shapes that were not contained in the terminal’s single character set. Early experiments with computer graphics, such as the legendary Spacewar! game developed at MIT in the early 1960s, used a technique known as vector graphics, in which the computer manually controlled the electron gun which fired to form the images on the screen. A picture would be stored not as a grid of pixels but as a series of instructions — the sequence of strokes used to draw it on the display. (This is essentially the same technique as that developed by Ken Williams to store the graphics for On-Line’s Hi-Res Adventure line years later.) Because the early vector displays had no concept of display memory at all, a picture would have to be traced out many times per second, else the phosphors on the display would fade back to black. Such systems were not only difficult to program but much too coarse to allow the intricacies of text.
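
To make the contrast with what follows concrete, here's a minimal sketch (in Python, purely for illustration) of how a vector display keeps a picture: not as pixels but as a display list of strokes that must be retraced every refresh or the image fades away. The drawing routine here is just a stand-in for the circuitry that steered the electron gun.

```python
# A picture on a vector display lives as a list of strokes, not pixels.
display_list = [
    (10, 10, 100, 10),   # each entry is one line segment: x1, y1, x2, y2
    (100, 10, 55, 80),
    (55, 80, 10, 10),
]

def refresh(draw_line):
    """Called many times per second; draw_line stands in for the electron gun."""
    for x1, y1, x2, y2 in display_list:
        draw_line(x1, y1, x2, y2)
```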

The Alto formed its display in a different way — the way the device you’re reading this on almost certainly does it. It stored the contents of its screen in memory as a grid of individual pixels, known as a bitmap. One bit represented the on/off status of one pixel; a total of 489,648 of them had to be used to represent the Alto’s 606 X 808 pixel black-and-white screen. (The Alto’s monitor had an unusual portrait orientation that duplicated the dimensions of a standard 8 1/2″ X 11″ sheet of office paper, in keeping with its intended place as the centerpiece of the office of the future.) This area of memory, often called a frame buffer in these days when it was still a fairly esoteric design choice, was then simply duplicated onto the monitor screen by the Alto’s video hardware. Just as the laser printer saw textual documents as pictures to be reproduced dot by dot, the Alto saw even its textual displays in the same way. This approach was far more taxing on memory and computing power than traditional approaches, but it had huge advantages. Now the user need no longer be restricted to a single font; she could have a choice of type styles, or even design her own. And each letter need no longer fit into a single fixed-size cell on the screen, meaning that more elegant and readable proportional fonts were now possible.
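
Here's a conceptual Python sketch of that bitmapped approach: one bit per pixel, packed into a block of memory that the video hardware simply scans out. The real Alto packed its pixels into 16-bit words with its own addressing scheme, so treat the layout below as an illustration of the idea rather than the machine's actual memory format.

```python
WIDTH, HEIGHT = 606, 808                  # the Alto's portrait-mode screen
# One bit per pixel: 606 * 808 = 489,648 bits, a little under 60 KB.
framebuffer = bytearray((WIDTH * HEIGHT + 7) // 8)

def set_pixel(x, y, on=True):
    """Flip a single pixel by setting or clearing its bit in the buffer."""
    index = y * WIDTH + x                 # count bits row by row
    byte, bit = divmod(index, 8)
    if on:
        framebuffer[byte] |= 1 << bit
    else:
        framebuffer[byte] &= ~(1 << bit)

# Drawing a character in any font is now just setting the right pattern of
# pixels -- text is pictures, all the way down.
```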

Amongst many other applications, the what-you-see-is-what-you-get word processor was born on the Alto as a direct result of its bitmapped display. A word processor called Gypsy became the machine’s first and most important killer app. Using Gypsy, the user could mix fonts and styles and even images in a document, viewing it all onscreen exactly as it would later look on paper, thanks to the laser printer. The combination was so powerful, went so far beyond what people had heretofore been able to expect of computers or typewriters, that a new term, “desktop publishing,” was eventually coined to describe it. Suddenly an individual with an Alto and a laser printer could produce work that could rival — in appearance, anyway — that of a major publishing house. (As PARC’s own David Liddle wryly said, “Before that, you had to have an article accepted for publication to see your words rendered so beautifully. Now it could be complete rubbish, and still look beautiful.”) Soon even the professionals would be abandoning their old paste boards and mechanical typesetters. Ginn & Co., a textbook-publishing subsidiary of Xerox, became the first publishers in the world to go digital, thanks to a network of laser printers and Altos running Gypsy.

Ethernet

Speaking of which: the Ethernet network was largely the creation of PARC researcher Robert Metcalfe. Various networking schemes had been proposed and sometimes implemented in the years before Ethernet, but they all carried two big disadvantages: they were proprietary, limited to the products of a single company or even to a single type of machine; and they were fragile, prone to immediate failure if the smallest of their far-flung elements should break down. Ethernet overcame both problems. It was a well-documented standard that was also almost absurdly simple to implement, containing the bare minimum needed to accomplish the task effectively and absolutely nothing else. This quality also made it extremely reliable, as did its decentralized design that made it dependent on no single computer on the network to continue to function. The name itself reflected Ethernet’s simplicity. Unlike most earlier networking systems, which relied upon a charged cable like that used by the telephone system, Ethernet passed its messages through a simple passive medium, named in homage to the “luminiferous aether” once thought to fill the universe’s empty space. The result was simpler, cheaper, safer, and more reliable than anything that had come before.

Like so much else at PARC, Ethernet represented both a practical step toward the office of the future and a component of Taylor’s idealistic crusade for computers as communications devices. In immediate, practical terms, it let dozens of Altos at PARC or Ginn & Co. share just a few of Starkweather’s pricy laser printers. In the long run, it provided the standard by which millions of disparate devices could talk to one another — the “computer as a communications device” in its purest terms. Ethernet remains today one of the bedrock technologies of the hyper-connected world in which we live, a basic design so effective at what it does that it still hasn’t been improved upon.
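
As a small illustration of what a shared standard for "disparate devices talking to one another" looks like in practice, here's a Python sketch that packs the header of an Ethernet frame: destination address, source address, a type field, then the payload. Note that this is the later standardized (DIX-style) format; the original experimental Ethernet at PARC ran at roughly 3 Mbps and used its own simpler framing, so the details below are illustrative only.

```python
import struct

def ethernet_frame(dst_mac, src_mac, ethertype, payload):
    """Prefix a payload with a (modern-style) Ethernet header."""
    header = struct.pack(">6s6sH", dst_mac, src_mac, ethertype)
    return header + payload   # the hardware appends a CRC checksum on the wire

frame = ethernet_frame(
    bytes.fromhex("ffffffffffff"),   # broadcast: every station on the cable listens
    bytes.fromhex("080027123456"),   # an arbitrary example source address
    0x0800,                          # type 0x0800 means an IP packet is inside
    b"print job for the laser printer down the hall",
)
```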

A GUI file manager running on an Alto

The GUI was the slowest and most gradual of the innovations to come to PARC. When the Alto was designed, Engelbart and English’s mouse was included. However, it was pictured as being used only for the specialized function for which they had originally designed it: positioning the cursor within a text document, a natural convenience for the centerpiece of the office of the future. But then Alan Kay and his small team, known as the “lunatic fringe” even amongst the others at PARC, got their hands on some Altos and started to play. Unlike the hardcore programmers and engineers elsewhere at PARC, Kay’s team had not been selected on the basis of credentials or hacking talent. Kay rather looked for people “with special stars in their eyes,” dreamers and grand conceptual thinkers like him. Any technical skills they might lack, he reasoned, they could learn, or rely on other PARC hackers to provide; one of his brightest stars was Diana Merry, a former secretary for a PARC manager who just got what Kay was on about and took to coming to his meetings. Provided the Alto, the closest they could come to Kay’s cherished Dynabook, they went to work to make the technology sing. They developed a programming language called Smalltalk that was not only the first rigorously object-oriented language in history, the forerunner to C++, Java, and many others, but also simple enough for a grade-school kid to use. With Smalltalk they wrote a twelve-voice music synthesizer and composer (“Twang”), sketching and drawing programs galore, and of course the inevitable games (a networked, multiplayer version of the old standard Star Trek was a particular hit). Throughout, they re-purposed the mouse in unexpected new ways.

Kay and his team realized that many of the functions they were developing were complementary; it was likely that users would want to do them simultaneously. One might, for example, want to write an instructional document in a text editor at the same time as one edited a diagram meant for it in a drawing program. They developed tools to let users do this, but ran into a problem: the Alto’s screen, just the size of a single sheet of paper, simply couldn’t contain it all. Returning yet again to the idea of the office of the future, Kay asked what people in real offices did when they ran out of space on their desk. The answer, of course, was that they simply piled the document they weren’t using at that instant on top of the one they were, then proceeded to flip between the documents as needed. From there it all just seemed to come gushing out of Kay and his team.

The Alto’s Smalltalk windowing system in mature form

In February of 1975 Kay called together much of PARC, saying he had “a few things to show them.” What they saw was nothing less than the future of the user interface: discrete, draggable, overlapping windows; mouse-driven navigation; pop-up menus. In a very real way it was the fruition of everything they had been working on for almost five years, and everything Taylor and Kay had been dreaming of for many more. At last, at least in this privileged research institution, the technology was catching up to their dreams. Now, not quite two years after the Alto itself had been finished, they knew what it needed to be. Kay and the others at PARC would spend several more years refining the vision into a workable, practical interface for everyday users, but the blueprint for the future was in place already in 1975.
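
The core trick behind those overlapping windows is easy to sketch. In the toy Python model below (an illustration, not PARC's actual Smalltalk code), windows live in a list kept in back-to-front order, so whatever draws them paints the last one on top; a mouse click goes to the topmost window under the pointer; and clicking a buried window simply lifts it to the top of the pile, exactly like shuffling papers on a desk.

```python
class Window:
    def __init__(self, title, x, y, width, height):
        self.title, self.x, self.y = title, x, y
        self.width, self.height = width, height

    def contains(self, px, py):
        return (self.x <= px < self.x + self.width and
                self.y <= py < self.y + self.height)

windows = []                      # back-to-front; the last entry is "on top"

def window_under(px, py):
    """Hit-test a click: the topmost window containing the point wins."""
    for w in reversed(windows):
        if w.contains(px, py):
            return w
    return None

def bring_to_front(w):
    """Raising a window is just moving it to the end of the draw order."""
    windows.remove(w)
    windows.append(w)
```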

Xerox ultimately received little of the benefit they might have from all this visionary technology. A rather hidebound, intensely bureaucratic management structure never really understood the work that was going on at PARC, whose personnel they thought of as vaguely dangerous, undisciplined and semi-disreputable. Unsurprisingly, they capitalized most effectively on the PARC invention closest to the copier technology they already understood: the laser printer. Even here they lost years to confusion and bureaucratic infighting, allowing IBM to beat them to the market with the world’s first commercial laser printer. However, Starkweather’s work finally resulted in the smaller, more refined Xerox 9700 of 1977, which remained for many years a major moneymaker. Indeed, all of the expense of PARC was likely financially justified by the 9700 alone.

Still, the bulk of PARC’s innovations went comparatively unexploited. During the late 1970s Xerox did sell Alto workstations to a small number of customers, among them Sweden’s national telephone service, Boeing, and Jimmy Carter’s White House. Yet the commercialization of the Alto, despite pleading from many inside PARC who were growing tired of seeing their innovations used only in their laboratories, was never regarded by Xerox’s management as more than a cautious experiment. With a bit of corporate urgency, Altos could easily have been offered for sale well before the trinity of 1977 made its debut. Though it would have been a more expensive machine aimed at a very different market, a computer equipped with a full GUI on sale before the likes of the Apple II, TRS-80, and PET would likely have dramatically altered the evolution of the PC and made Xerox a major player in the PC revolution. Very possibly they might have ended up playing a role similar to that of IBM in our own timeline — only years earlier, and with better, more visionary technology.

The Xerox Star

Xerox’s most concerted attempt to exploit the innovations of PARC as a whole came only in 1981, in the form of the Xerox Star “office information system.” The product of six long years of troubled development, shepherded to release against the odds at last by ex-PARCer David Liddle, the Star did it all, and often better than it had been done inside PARC itself. The innovations of Kay and his researchers — icons, windows, scroll bars, sliders, pop-up menus — were refined into the full desktop metaphor that remains with us today, the perfect paradigm for the office of the future. Also included in each Star was a built-in Ethernet port to link it with its peers as well as the office laser printer. The new machine represented the commercial fruition of everything PARC had been working toward for the last decade.

The desktop metaphor in full flight on the Star

Alas, the Star was a commercial failure. Its price of almost $17,000 per workstation meant that assembling a full office of the future could easily send the price tag north of $100,000. It also had the misfortune to arrive just a few months before the IBM PC, a vastly simpler, utilitarian design that lacked the Star’s elegance but was much cheaper and open to third-party hardware and software. Marketed as a very unconventional piece of conventional office equipment rather than a full-fledged PC, the Star was by contrast locked into the hardware and software Xerox was willing to provide. In the end Xerox managed to sell only about 30,000 of them — a sad, anticlimactic ending to the glory days of innovation at PARC. (The same year that the Star was released Robert Taylor left PARC, taking the last remnants of his original team of innovators with him. By this time Alan Kay was already long gone, driven away by management’s increased demands for practical, shorter-term projects rather than leaps of faith.)

Like the Alto, the Star would have an influence far out of proportion to the number produced. It is after all to this machine that we owe the ubiquitous desktop metaphor. If anything, the innovations of the Star tend to go somewhat under-credited today in the understandable rush to lionize the achievements inside PARC proper. Perhaps this is somewhat down to Xerox’s dour advertising rhetoric that presented the Star as “just” an “office administration assistant”; those words don’t exactly smack of a machine to change the world.

Oddly, the Star’s fate was just the first of a whole string of similar disappointments from many companies. The GUI and the desktop metaphor were concepts that seemed obviously, commonsensically better than the way computers currently worked to just about everyone who saw them, but it would take another full decade for them to remake the face of the general-purpose PC. Those years are littered with failures and disappointments. Everyone knew what the future must be like, but no one could quite manage to get there. We’ll look at one of the earliest and most famous of these bumps on the road next time.

(Despite some disconcerting errors of fact about the computing world outside the laboratory, Michael A. Hiltzik’s Dealers of Lightning is the essential history of Xerox PARC, and immensely readable to boot. If you’re interested in delving into what went on there in far more detail than I can provide in a single blog post, it’s your obvious next stop.)

 
 


The Commodore 64

As I described in my last article, many people were beginning to feel that change was in the air as they observed the field of videogame consoles and the emerging market for home computers during the middle part of 1982. If a full-fledged computer was to take the place of the Atari VCS in the hearts of America’s youth, which of the plethora of available machines would it be? IBM had confidently expected theirs to become the one platform to rule them all, but the IBM PC was not gaining the same traction in the home that it was enjoying in business, thanks to an extremely high price and lackluster graphics. Apple was still the media darling, but the only logical contender they could offer for the segment, the Apple II Plus, was looking increasingly aged. Its graphics capabilities, so remarkable for existing at all back in 1977, had barely been upgraded since, and weren’t really up to the sort of colorful action games the kids demanded. Nor was its relatively high price doing it any favors. Another contender was the Atari 400/800 line. Although introduced back in late 1979, these machines still had amongst the best graphics and sound capabilities on the market. On the other hand, the 400 model, with its horrid membrane keyboard, was cost-reduced almost to the point of unusability, while the 800 was, once again, just a tad on the expensive side. And Atari itself, still riding the tidal wave that was the VCS, showed little obvious interest in improving or promoting this tiny chunk of its business. Then of course there was Radio Shack, but no one — including them — seemed to know just what they were trying to accomplish with a pile of incompatible machines of wildly different specifications and prices all labeled “TRS-80.” And there was the Commodore VIC-20 which had validated for many people the whole category of home computer in the first place. Its price was certainly right, but it was just too limited to have long legs.

The TI-99/4A. Note the prominent port for “Solid State Software” to the right of the keyboard.

The most obvious contender came from an unexpected quarter. Back in early 1980, the electronics giant Texas Instruments had released a microcomputer called the TI-99/4. Built around a CPU of TI’s own design, it was actually the first 16-bit machine to hit the market. It had a lot of potential, but also a lot of flaws and oddities to go with its high price, and went nowhere. Over a year later, in June of 1981, TI tried again with an updated version, the TI-99/4A. The new model had just 16 K of RAM, but TI claimed more was not necessary. Instead of using cassettes or floppy disks, they sold software on cartridges, a technique they called “Solid State Software.” Since the programs would reside in the ROM of the cartridge, they didn’t need to be loaded into RAM; that needed to be used only for the data the programs manipulated. The idea had some real advantages. Programs loaded instantly and reliably, something that couldn’t be said for many other storage techniques, and left the user to fiddle with fragile tapes or disks only to load and save her data files. This just felt more like the way a consumer-electronics device ought to work to many people — no typing arcane commands and then waiting and hoping, just pop a cartridge in and turn the thing on. The TI-99/4A also had spectacularly good graphics, featuring sprites, little objects that were independent of the rest of the screen and could be moved about with very little effort on the part of the computer or its programmer. They were ideal for implementing action games; in a game of Pac-Man, for instance, the title character and each of the ghosts would be implemented as a sprite. Of the other contenders, only the Atari 400 and 800 offered sprites — as well as, tellingly, all of the game consoles. Indeed, they were considered something of a necessity for a really first-rate gaming system. With these virtues plus a list price of just $525, the TI-99/4A was a major hit right out of the gate, selling in numbers to rival the even cheaper but much less capable VIC-20. It would peak at the end of 1982 with a rather extraordinary (if short-lived) 35 percent market share, and would eventually sell in the neighborhood of 2.5 million units.
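
To picture what sprites buy the programmer, here's a rough Python sketch — purely illustrative, and done in software, whereas video chips like the TI-99/4A's (and the VIC-II discussed below) do the equivalent compositing in hardware as the screen is scanned out. The key point is that the background is never disturbed, so moving a character means nothing more than changing a pair of coordinates.

```python
BLANK = "."
background = [[BLANK] * 40 for _ in range(25)]    # a character-style backdrop

# Each sprite is a small pattern plus a position, independent of the backdrop.
sprites = [
    {"x": 5,  "y": 3,  "rows": ["*#*", "###", "* *"]},   # a "player"
    {"x": 20, "y": 10, "rows": ["@@", "@@"]},            # a "ghost"
]

def compose_frame():
    """Overlay every sprite on a fresh copy of the untouched background."""
    frame = [row[:] for row in background]
    for s in sprites:
        for dy, row in enumerate(s["rows"]):
            for dx, ch in enumerate(row):
                if ch != " ":
                    frame[s["y"] + dy][s["x"] + dx] = ch
    return frame

sprites[0]["x"] += 1      # moving a sprite is just updating its coordinates
```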

With the TI-99/4A so hot that summer of 1982, the one wildcard — the one obstacle to anointing it the king of home computers — was a new machine just about to ship from Commodore. It was called the Commodore 64, and it would change everything. Its story had begun the previous year with a pair of chips.

In January of 1981 some of the engineers at Commodore’s chipmaking subsidiary, MOS Technologies, found themselves without a whole lot to do. The PET line had no major advancements in the immediate offing, and the VIC-20’s design was complete (and already released in Japan, for that matter). Ideally they would have been working on a 16-bit replacement for the 6502, but Jack Tramiel was uninterested in funding such an expensive and complicated project, a choice that stands as amongst the stupidest of a veritable encyclopedia of stupidity written by Commodore management over the company’s chaotic life. With that idea a nonstarter, the engineers hit upon a more modest project: to design a new set of graphics and sound chips that would dramatically exceed the capabilities of the VIC-20 and (ideally) anything else on the market. Al Charpentier would make a graphics chip to be called the VIC-II, the successor to the VIC chip that gave the VIC-20 its name. Bob Yannes would make a sound synthesizer on a chip, the Sound Interface Device (SID). They took the idea to Tramiel, who gave them permission to go ahead, as long as they didn’t spend too much.

In deciding what the VIC-II should be, Charpentier looked at the graphics capabilities of all of the computers and game machines currently available, settling on three as the most impressive, and thus the ones critical to meet or exceed: the Atari 400 and 800, the Mattel Intellivision console, and the soon-to-be-released TI-99/4A. Like all of these machines, the VIC-II chip would have to have sprites. In fact, Charpentier spent the bulk of his time on them, coming up with a very impressive design that allowed up to eight onscreen sprites in multiple colors. (Actually, as with so many features of the VIC-II and the SID, this was only the beginning. Clever programmers would quickly come up with ways to reuse the same sprite objects, thus getting even more moving objects on the screen.) For the display behind the sprites, Charpentier created a variety of character-based and bitmapped modes, with palettes of up to 16 colors at resolutions of up to 320 X 200. On balance, the final design did indeed exceed or at least match the aggregate capabilities of anything else on the market. It offered fewer colors than the Atari’s 128, for example, but a much better sprite system; fewer total sprites (without trickery) than the TI-99/4A’s 32, but bigger and more colorful ones, and with about the same background display capabilities.

If the VIC-II was an evolutionary step for Commodore, the SID was a revolution in PC and videogame sound. Bob Yannes, just 24 years old, had been fascinated by electronic sound for much of his life, devouring early electronica records like those by Kraftwerk and building simple analog synthesizers from kits in his garage. Hired by MOS right out of university in 1978, he felt like he had spent his entire employment waiting for just this project. An amateur musician himself, he was appalled by the sound chips that other engineers thought exceptional, like that in the Atari 400 and 800. From a 1985 IEEE Spectrum article on the making of the Commodore 64:

The major differences between his chip and the typical videogame sound chips, Yannes explained, were its more precise frequency control and its independent envelope generators for shaping the intensity of a sound. “With most of the sound effects in games, there is either full volume or no volume at all. That really makes music impossible. There’s no way to simulate the sound of any instrument even vaguely with that kind of envelope, except maybe an organ.”

Although it is theoretically possible to use the volume controls on other sound chips to shape the envelope of a sound, very few programmers had ever tackled such a complex task. To make sound shaping easy, Yannes put the envelope controls in hardware: one register for each voice to determine how quickly a sound builds up; two to determine the level at which the note is sustained and how fast it reaches that level; and one to determine how fast the note dies away. “It took a long time for people to understand this,” he conceded.

But programmers would come to understand it in the end, and the result would be a whole new dimension to games and computer art. The SID was indeed nothing short of a full-fledged synthesizer on a chip. With three independent voices at its disposal, its capabilities in skilled hands are amazing; the best SID compositions still sound great today. Games had beeped and exploded and occasionally even talked for years. Now, however, the emotional palette game designers had to paint on would expand dramatically. The SID would let them express deep emotions through sound and (especially) music, from stately glory to the pangs of romantic love, from joy to grief.
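
To make the envelope idea concrete, here's a small Python sketch of the classic attack/decay/sustain/release shape the SID generates in hardware, one envelope per voice. The function and its parameters are illustrative, not the SID's actual register layout, but the shape is the point: volume ramps up, falls back to a sustain level, holds while the note is down, then dies away.

```python
def adsr_amplitude(t, attack, decay, sustain_level, release, note_off):
    """Volume (0.0 to 1.0) of a note at time t, in seconds.

    Assumes note_off >= attack + decay and nonzero attack/decay/release times.
    """
    if t < 0:
        return 0.0
    if t < attack:                              # ramp up to full volume
        return t / attack
    if t < attack + decay:                      # fall back to the sustain level
        return 1.0 - (t - attack) / decay * (1.0 - sustain_level)
    if t < note_off:                            # hold while the "key" is down
        return sustain_level
    if t < note_off + release:                  # die away after release
        return sustain_level * (1.0 - (t - note_off) / release)
    return 0.0

# A plucked-string sort of shape: fast attack, quick decay, quiet sustain.
samples = [round(adsr_amplitude(t / 100, 0.02, 0.10, 0.3, 0.25, 0.5), 2)
           for t in range(0, 80, 10)]
```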

In November of 1981 the MOS engineers brought their two chips, completed at last, to Tramiel to find out what he’d like to do with them. He decided that they should put them into a successor to the VIC-20, to be tentatively titled the VIC-40. In the midst of this discussion, it emerged that the MOS engineers had one more trick up their sleeves: a new variant of the 6502 called the 6510 which offered an easy way to build an 8-bit computer with more than 48 K of RAM by using a technique called bank switching.

Let’s stop here for just a moment to consider why this should have been an issue at all. Both the Zilog Z80 and the MOS 6502 CPUs that predominated among early PCs are 8-bit chips with 16-bit address buses. The latter number is the one that concerns us right now; it means that the CPU is capable of addressing up to 64 K of memory. So why the 48 K restriction? you might be asking. Well, you have to remember that a computer does not only address RAM; there is also the need for ROM. In the 8-bit machines, the ROM usually contains a BASIC-based operating environment along with a few other essentials like the glyphs used to form characters on the screen. All of this usually consumes about 16 K, leaving 48 K of the CPU’s address space to be mapped to RAM. With the arrival of the 48 K Apple II Plus in 1979, the industry largely settled on this as both the practical limit for a Z80- or 6502-based machine and the configuration that marked a really serious, capable PC. There were some outliers, such as Apple’s Language Card, which let a II Plus be expanded to 64 K of RAM by dumping BASIC entirely in favor of a Pascal environment loaded from disk, but the 48 K limit was largely accepted as just a fact of life for most applications.

With the 6510, however, the MOS engineers added some circuitry to the 6502 to make it easy to swap pieces of the address space between two (or more) alternatives. Below is an illustration of the memory of the eventual Commodore 64.

Commodore 64 memory map

Ignoring the I/O block as out of scope for this little exercise, let’s walk through this. First we have 1 K of RAM used as a working space to hold temporary values and the like (including the program stack). Then 1 K is devoted to storing the current contents of the screen. Next comes the biggest chunk, 38 K for actual BASIC programs. Then 8 K of ROM, which stores the BASIC language itself. Then comes another 4 K of “high RAM” that’s gotten trapped behind the BASIC ROM; this is normally inaccessible to the BASIC programmer unless she knows some advanced techniques to get at it. Then 4 K of ROM to hold the glyphs for the standard onscreen character set. Finally, 8 K of kernel, storing routines for essential functions like reading the keyboard or interacting with cassette or disk drives. All of this would seem to add up to a 44 K RAM system, with only 40 K of it easily accessible. But notice that each piece of ROM has RAM “underneath” it. Thanks to the special circuitry on the 6510, a programmer can swap RAM for ROM if she likes. Programming in assembly language rather than BASIC? Swap out the BASIC ROM, and get another 8 K of RAM, plus easy, contiguous access to that high block of another 4 K. Working with graphics instead of words, or prefer to define your own font? Swap out the character ROM. Taking over the machine entirely, and thus not making so much use of the built-in kernel routines? Swap the kernel for another 8 K of RAM, and maybe just swap it back in from time to time when you want to actually use something there.
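
A toy Python model of that swapping may help. This is an illustration of the idea rather than an emulator-accurate memory map; on the real machine the banking is selected by a few bits in the 6510’s on-chip port, and a write to a ROM-covered address always lands in the RAM underneath it, which is exactly what makes banking the ROMs out worthwhile.

```python
RAM = bytearray(64 * 1024)      # the full 64 K of RAM
BASIC_ROM = bytes(8 * 1024)     # stand-in for the 8 K BASIC ROM image
KERNAL_ROM = bytes(8 * 1024)    # stand-in for the 8 K kernel ROM image

basic_in = True                 # is the BASIC ROM mapped in at $A000-$BFFF?
kernal_in = True                # is the kernel ROM mapped in at $E000-$FFFF?

def read(addr):
    """What the CPU sees at an address depends on the current banking."""
    if basic_in and 0xA000 <= addr <= 0xBFFF:
        return BASIC_ROM[addr - 0xA000]
    if kernal_in and 0xE000 <= addr <= 0xFFFF:
        return KERNAL_ROM[addr - 0xE000]
    return RAM[addr]

def write(addr, value):
    # Writes always land in the RAM "underneath" any ROM.
    RAM[addr] = value

basic_in = False                # bank out BASIC: 8 K more RAM appears at $A000
```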

Commodore 64 startup screen

The above will hopefully answer the most common first question of a new Commodore 64 user, past or present: Why does my “64 K RAM system” say it has only 38 K free for BASIC? The rest of the memory is there, but only for those who know how to get at it and who are willing to forgo the conveniences of BASIC. I should emphasize here that the concept of bank switching was hardly an invention of the MOS engineers; it’s a fairly obvious approach, after all. Apple had already used the technique to pack a full 128 K of RAM into a 6502-based computer of their own, the failed Apple III (about which more in the very near future). The Apple III, however, was an expensive machine targeted at businesses and professionals. The Commodore 64 was the first to bring the technique to the ordinary consumer market. Soon it would be everywhere, giving the venerable 6502 and Z80 new leases on life.

Jack Tramiel wasn’t a terribly technical fellow, and likely didn’t entirely understand what an extra 16 K of memory would be good for in the first place. But he knew a marketing coup when he saw one. Thus the specifications of the new machine were set: a 64 K system built around MOS’s three recent innovations — the 6510, the VIC-II, and the SID. The result should be cheap enough to produce that Commodore could sell it for less than $600. Oh, and please have a prototype ready for the January 1982 Winter CES show, less than two months away.

With so little time and such harsh restrictions on production costs, Charpentier, Yannes, and the rest of their team put together the most minimalist design they could to bind those essential components together. They even managed to get enough of it done to have something to show at Winter CES, where the “VIC-40” was greeted with excitement on the show floor but polite skepticism in the press. Commodore, you see, had a well-earned reputation, dating from the days when the PET was the first of the Trinity of 1977 to be announced and shown but the last to actually ship, for over-promising at events like these and delivering late or not at all. Yet when Commodore showed the machine again in June at the Summer CES — much more polished, renamed the Commodore 64 to emphasize what Tramiel and Commodore’s marketing department saw as its trump card, and still promised for less than $600 — the press had to start paying serious attention. Soon afterward it started shipping. The new machine was virtually indistinguishable from the VIC-20 in external appearance because Commodore hadn’t been willing to spend the time or money to design a new case.

The Commodore 64


Inside it was one hell of a machine for the money, although not without its share of flaws that a little more time, money, and attention to detail during the design process could have easily corrected.

The BASIC housed in its ROM (“BASIC 2.0”) was painfully antiquated. It was actually the same BASIC that Tramiel had bought from Microsoft for the original PET back in 1977. Bill Gates, in a rare display of naivete, sold him the software outright for a flat fee of $10,000, figuring Commodore would have to come back soon for another, better version. He obviously didn’t know Jack Tramiel very well. Ironically, Commodore did have on hand a better BASIC 4.0 they had used in some of the later PET models, but Tramiel nixed using it in the Commodore 64 because it would require 16 K of ROM to house rather than 8 K, adding to the machine’s cost. People were already getting a lot for their money, he reasoned. Why should they expect a decent BASIC as well? The Commodore 64’s BASIC was not only primitive, but completely lacked commands to actually harness the machine’s groundbreaking audiovisual capabilities. If the memory restrictions on BASIC weren’t enough to push would-be game programmers toward assembly language, this certainly did the trick. The Commodore 64’s horrendous BASIC likely accelerated an already ongoing flight from the language amongst commercial game developers. For the rest of the 1980s, game development and assembly language would go hand in hand.
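
To get a sense of what that meant in practice, consider that even changing the screen colors or setting the SID’s volume required writing to hardware registers directly, whether via POKEs from BASIC or, for anything ambitious, from machine code. A small illustrative fragment in 6502 assembly (the addresses are the VIC-II’s border and background color registers and the SID’s volume register):

    LDA #$00         ; color 0: black
    STA $D020        ; VIC-II border color register
    STA $D021        ; VIC-II background color register
    LDA #$0F
    STA $D418        ; SID volume register: low four bits set volume to maximum

From BASIC, the same trick looks like POKE 53280,0 and POKE 53281,0, a far cry from the friendly graphics and sound statements some competing home computers were beginning to offer.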

Due to a whole combination of factors — including miscommunication among marketing, engineering, and manufacturing, an ultimately pointless desire to be hardware compatible with the VIC-20, component problems, cost-cutting, and the sheer rush of putting a product together in such a limited time frame — the Commodore 64 ended up saddled with a disk system that would become, even more than the primitive BASIC, the albatross around the platform’s neck. It’s easily the slowest floppy-disk system ever sold commercially, on the order of thirty times slower than Steve Wozniak’s masterpiece, the Apple II’s Disk II system. Interacting with disks from BASIC 2.0, which was written before disk drives existed on PCs, requires almost as much patience as does waiting for a program to load. For instance, you have to type LOAD “$”,8 followed by LIST just to get a directory listing. As an added bonus, doing so wipes out any BASIC program you happen to have in memory.

The disk system’s flaws frustrate because they dissipate a lot of potential strengths. Commodore had had a unique approach to disk drives ever since producing their first for the PET line circa 1979. A Commodore disk drive is a smart device, containing its own 6502 CPU as well as ROM and 2 K of RAM. The DOS used on other computers like the Apple II to tell the computer how to control the drive, manage the filesystem, etc., is unnecessary on a Commodore machine. The drive can control itself very well, thank you very much; it already knows all about that stuff. This brings some notable advantages. No separate DOS has to be loaded into the computer’s RAM, eating precious memory. DOS 3.3, for example, the standard on the Apple II Plus at the time of the Commodore 64’s introduction, eats up more than 10 K of that machine’s 48 K of RAM. Thus the Commodore 64’s memory edge was in practical terms even more significant than it appeared on paper. Because it’s possible to write small programs, load them into the drive’s RAM, and have the drive’s own CPU execute them, the whole system was a delight for hackers. One favorite trick was to load a disk-copying program into a pair of drives, then physically disconnect them from the computer. They would continue happily copying disks on their own, as long as the user kept putting more disks in. More practically for average users, it was often possible for games to play music or display animated graphics while simultaneously loading from the drive. Other computers’ CPUs were usually too busy controlling the drive to manage this. This was, of course, an especially welcome feature on this particular computer, because Commodore 64 users would be spending a whole lot more time than users of other computers waiting for their disk drives to load their programs.
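
To illustrate how lean the computer’s side of the conversation could be, here is a hedged sketch (the filename, label, and assembler directive are illustrative only) of loading a program from device 8 using nothing but the kernel’s built-in serial-bus routines; the filesystem, the drive mechanics, and everything else are handled by the drive’s own processor:

    LDA #$01         ; logical file number
    LDX #$08         ; device 8: the disk drive
    LDY #$01         ; secondary address 1: honor the load address stored in the file
    JSR $FFBA        ; kernel SETLFS
    LDA #$07         ; length of the filename
    LDX #<NAME       ; address of the filename, low byte
    LDY #>NAME       ; address of the filename, high byte
    JSR $FFBD        ; kernel SETNAM
    LDA #$00         ; 0 = load (1 would mean verify)
    JSR $FFD5        ; kernel LOAD; returns with the carry flag set on error
    RTS
    NAME .BYTE "PROGRAM"  ; illustrative seven-character filename

No resident DOS, and no memory lost to one; those three kernel calls are essentially the whole story as far as the Commodore 64 itself is concerned.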

Quality-control issues plagued the entire Commodore 64 line, especially in the first couple of years. One early reviewer had to return two machines before Commodore shipped him one that worked; some early shipments to stores were allegedly 80 percent dead on arrival. To go with all of their other problems, the disk drives were particularly unreliable. In one early issue, Compute!’s Gazette magazine stated that four of the seven drives in their offices were currently dead. The poor BASIC and unfriendly operating environment, the atrocious disk system, and the quality-control issues, combined with no option for getting the 80-column display considered essential for word processing and much other business software, kept the Commodore 64 from being considered seriously by most businesses as an alternative to the Apple II or IBM PC. Third-party solutions did address many of the problems. Various improved BASICs were released as plug-in cartridges, and various companies rewrote the systems software to improve transfer speeds by a factor of six or more. But businesses wanted machines that just worked for them out of the box, which Apple and IBM largely gave them while Commodore did not.

None of that mattered much to Commodore, at least for now, because they were soon selling all of the Commodore 64s they could make for use in homes. No, it wasn’t a perfect machine, not even with its low price (which dropped further virtually by the month), its luxurious 64 K of memory, its versatile graphics, and its marvelous SID chip. But, like the Sinclair Spectrum that was debuting almost simultaneously in Britain, it was the perfect machine for this historical moment. Also like the Spectrum, it heralded a new era in its home country, where people would play — and make — games in numbers that dwarfed what had come before. For a few brief years, the premier mainstream gaming platform in the United States would be a full-fledged computer rather than a console — the only time, before or since, that that has happened. We’ll talk more about the process that led there next time.

(As you might expect, much of this article is drawn from Brian Bagnall’s essential history of Commodore. The IEEE Spectrum article referenced above was also a gold mine.)

 

Posted on December 17, 2012 in Digital Antiquaria, Interactive Fiction

 
