Jim Levy and Activision

I’ve already spent a lot of time on the history of one of the two great corporate survivors from the early years of videogames, Electronic Arts. But I’ve neglected the older of the pair, Activision, because that company originally made games only for consoles, a field of gaming history I’ve largely left to others who are far better qualified than I to write about it. Still, as we get into the middle years of the 1980s Activision suddenly becomes very relevant indeed to computer-gaming history. Given that, it’s worth taking the time now to look back on Activision’s founding and earliest days, to trace the threads that led to some important titles that are very much in my wheelhouse. In my view the single most important figure in Activision’s early history, and one who too often goes undercredited in favor of an admittedly brilliant team of programmers and designers, is Jim Levy, Activision’s president through the first seven-and-a-half years of its existence. So, as the title of this article would imply, one of my agendas today will be to do a little something to correct that.

The penultimate year of the 1970s was a trying time at Atari. The Atari VCS console, eventually to become an indelible totem of an era not just for gamers but for everyone, had a difficult first year after its October 1977 introduction, with sales below expectations and boxes piling up in warehouses. Internally the company was in chaos, as a full-on war took place between Nolan Bushnell, Atari’s engineering-minded founder, and its current CEO, Ray Kassar, a slick East Coast plutocrat whom new owners Warner Communications had installed with orders to clean up Bushnell’s hippie playground and turn it into an operation properly focused on marketing and the bottom line. Kassar won the war in December of 1978: Bushnell was fired. From that point on Atari became a very different sort of place. Bushnell had loved his engineers and programmers, but Kassar had no intrinsic interest in Atari’s products and little regard for the engineers and programmers who made them. The reign of Kassar as sole authority at Atari began auspiciously. Even before Bushnell’s departure, Kassar’s promotional efforts, with a strong assist from a fortuitous craze for a Japanese stand-up-arcade import called Space Invaders, had begun to revive the VCS; the Christmas of 1978, although nothing compared to Christmases to come, was strong enough to clear out some of those warehouses.

David Crane, Bob Whitehead, Larry Kaplan, and Alan Miller were known as the “Fantastic Four” at Atari during those days because the games they had (individually) programmed accounted for as much as 60 percent of the company’s total cartridge sales. Like most of Atari’s technical staff, they were none too happy with this new, Kassar-led version of the company. Trouble started in earnest when Kassar distributed a memo to his programmers listing Atari’s top-selling titles along with their sales, with an implied message of “Give us more like these, please!” The message the Fantastic Four took away, however, was that they were each generating millions for Atari whilst toiling in complete anonymity for salaries of around $30,000 per year. Miller, whose history with Atari went back to playing the original Pong machine at Andy Capp’s Tavern and who was always the most vocal and business-oriented of the group, drafted a contract modeled after those used in the book and music industries that would award him royalties and credit, in the form of a blurb on the game boxes and manuals, for his work; he then sent it to Kassar. Kassar’s reply was vehemently in the negative, allegedly comparing programmers to “towel designers” and saying their contribution to the games’ success was about as great as that of the person on the assembly line who put the packages together. Miller talked to Crane, Whitehead, and Kaplan, convincing them to join him in seeking to start their own company to write their own games to compete with those of Atari. Such a venture would be the first of its kind.

The Fantastic Four, 1980: from left, Bob Whitehead, David Crane, Larry Kaplan, and Alan Miller (standing)

Through the lawyer they consulted, they met Jim Levy, a 35-year-old businessman who had just been laid off after six years at a sinking ship of a company called GRT Corporation, located right there in Sunnyvale, California, also home to Atari. GRT had mainly manufactured prerecorded tapes for music labels, but had also run a few independent labels of their own on the side. Eager to break into that other, creative part of the industry, Levy had come up with a scheme to take the labels off their erstwhile parent’s hands, and had secured $350,000 in venture capital for the purpose. But when his lawyer introduced him to the Fantastic Four that plan changed immediately. Here was a chance to get in on the ground floor of a whole new creative industry — and that really was, as we shall see, how Levy regarded game making, as a primarily creative endeavor. He convinced his venture capitalists to double their initial investment, and on October 1, 1979, after the Fantastic Four had one by one tendered their resignations to Atari as an almost unremarked part of a general brain drain that was going on under Kassar’s new regime, VSYNC, Inc., was born. But that name, an homage to the “vertical sync” signal that Atari VCS programmers lived and died by (see Nick Montfort and Ian Bogost’s Racing the Beam), obviously wouldn’t do. And so, after finding that “Computervision” was already taken, Levy came up with “Activision,” a combination of “action” and “vision” — or, if you like, “action” and “television.” It didn’t hurt that the name would come before that of the company they knew was doomed to become their arch-nemesis, Atari, in sales brochures, phone books, and corporate listings. By January of 1980, when they quietly announced their existence to select industry insiders at the Consumer Electronics Show — among them a very unhappy and immediately threatening Kassar — Activision included 8 people. By that year’s end, it would include 15. And by early 1983, when Activision 1.0 peaked, it would include more than 400.

Jim Levy surrounded by his fast-growing Activision “family,” 1981

Aware from the beginning of the potential for legal action on Atari’s part, Activision’s lawyer had made sure that the Fantastic Four exited Atari with nothing but the clothes on their backs. The first task of the new company thus became to engineer their own VCS development system. Much of this work was accomplished in the spare bedroom of Crane’s apartment, even before Levy got the financing locked down and found them an office. Activision was thus able to release their first four games in relatively short order, in July of 1980. Following Atari’s own approach to naming games in the early days, Dragster, Boxing, Checkers, and Fishing Derby had names that were rather distressingly literal. But the boxes were brighter and more exciting than Atari’s, the manuals all included head shots of their designers along with their signatures and personal thoughts on their creations, and, most importantly, most gamers agreed that the quality was much higher than what they’d come to expect from Atari’s own recent releases. Activision would refine their approach — not to mention their game naming — over the next few years, but these things would remain constants.

Steve Cartwright

In spite of the example of a thriving software industry on early PCs like the TRS-80 and Apple II, it seems to have literally never occurred to Atari that anyone could or would do what Activision had done and develop software for “their” VCS. That lack of expectation had undoubtedly been buttressed by the fact that the VCS, a notoriously difficult machine to program for even those in the know, was a closed box, its secrets and development tools secured behind the new hi-tech electric door locks Kassar had had installed at Atari almost as soon as he arrived. The Fantastic Four, however, carried all that precious knowledge around with them in their heads. Atari sued Activision right away for alleged theft of “trade secrets,” but had a hard time coming up with anything Activision had actually done wrong. There simply was no law against figuring out — or remembering — how the VCS worked and writing games for it. And so Atari employed the time-honored technique of trying to bury their smaller competitor under lawsuits that would be very expensive to defend regardless of their merits. That might have worked — except that Activision made an astonishingly successful debut. 1980 was the year that the VCS really took off, and Activision was there to reap the rewards right along with Atari, selling more than $60 million worth of games during their first year. The levelheaded Levy, who had anticipated a legal storm from the beginning, simply treated the litigation as just another business expense, budgeting a certain amount every quarter to keeping Atari at bay.

Carol Shaw

Under Levy’s guidance, Activision now proceeded to write a playbook that many of the publishers we’ve already met would later draw from liberally.  Activision’s designer/programmers were always shown as cool people doing cool things. Steve Cartwright, designer of Barnstorming, was photographed, with scarf blowing rakishly in the wind, about to take to the skies in a real biplane; Carol Shaw, designer of River Raid and one of the vanishingly small number of female programmers writing videogames, appeared on her racing bike; Larry (no relation to Alan) Miller, designer of Enduro, could be seen perched on the hood of a classic car. Certainly Trip Hawkins would take note of Activision’s publicity techniques when he came up with his own ideas for promoting his “electronic artists” like rock stars. Almost from the beginning Activision fostered a sense of community with their fans through a glossy newsletter, Activisions, full of puzzles and contests and pictures and news of the latest goings-on around the offices in addition to plugs for the newest games — a practice Infocom and EA among others would also take to heart. Activision, however, did it all on a whole different scale. By 1983 they were sending out 400,000 copies of every newsletter issue, and receiving more than 10,000 pieces of fan mail every week.

Larry Miller

In those early years Activision practically defined themselves as the anti-Atari. If Atari was closed and faceless, they would be open and welcoming, throwing grand shindigs for press and fans to promote their games, like the “Barnstorming Parade,” featuring, once again, Cartwright in a real airplane; the “Decathlon Party,” featuring 1976 Olympic Decathlon Gold Medal winner Bruce Jenner, to promote The Activision Decathlon; or the “Rumble in the Jungle” to promote their biggest hit of all, Crane’s Pitfall!. While, as Jenner’s presence attests, they weren’t above a spot of celebrity endorsing now and again, they also maintained a certain artistic integrity in sticking to original game concepts and refusing any sort of licensing deals, whether of current arcade hits or media properties. This again placed them in marked contrast to Atari, who, in the wake of their licensed version of Taito’s arcade hit Space Invaders that had almost singlehandedly transformed the VCS from a modest success to a full-fledged cultural phenomenon in the pivotal year of 1980, never saw a license they didn’t want. Our games, Levy never tired of saying, are original works that can stand on their own merits. As for the licensed stuff: “People will take one look because they know the movie title. But if an exciting game isn’t there, forget it. Our audiences are too sophisticated. You can’t fool them.” Such respect for his audience, whether real or feigned or a bit of both, was another thing that endeared Activision to them.

Pitfall!

Released in April of 1982 just as the videogame craze hit its peak — revenues that year reached fully half those of the music industry — Pitfall! was Activision 1.0’s commercial high-water mark, selling more than 4 million copies, more than any of Atari’s own games except Pac-Man. Pitfall! would go on to become the urtext of an entire genre of side-scrolling platform games. In more immediate terms, it made Crane something of a minor celebrity as well as a very wealthy young man indeed; one magazine even dared to compare his earnings from Pitfall! with those of Michael Jackson from Thriller, although that was probably laying it on a bit thick. Meanwhile four other Activision games — Laser Blast, Kaboom!, Freeway, and River Raid — had passed the 1 million mark in sales, and dozens of other new publishers had followed Activision’s example by rolling up their sleeves, figuring out how the VCS worked and how they could develop for it, and jumping into the market. The resulting flood of cartridges, many of them sold at a fraction of Activision’s price point and still more of them substandard even by Atari’s less than exacting standards, would be blamed by Levy for much of what happened next.

In April of 1983, Levy confidently predicted in his keynote speech for The First — and, as it would turn out, last — Video Games Conference that the industry would triple in size within five years. In June, Activision went public, creating a number of new millionaires. It was, depending on how you look at it, the best or the worst possible timing. Just weeks later began in earnest the Great Videogame Crash of 1983, and everything went to hell for Activision, just as for the rest of the industry. Activision had always been a happy place, the sort of company whose president could suddenly announce that he was taking everyone along with their significant others to Hawaii for four days to celebrate Pitfall!‘s success; where the employees could move their managers’ offices en masse into the bathrooms for April Fool’s without fear of reprisal; whose break rooms were always bursting with doughnuts and candy. Thus November 10, 1983, was particularly hard to take. That was the day Levy laid off a quarter of Activision’s workforce. It was his birthday. It was also only the first of many painful downsizings.

Levy had bought into the contemporary conventional wisdom that home computers were destined to replace game consoles in the hearts and minds of consumers, that the home-computer market was going to blow up so big as to dwarf the VCS craze at its height. His plan was thus to turn Activision into a publisher of home-computer software rather than game cartridges. His biggest problem looked to be bridging the chasm that lay between the recently expired fad of the consoles and the projected sustained domination of the home computer. The painful fact was that, even on the heels of a hugely successful 1983, all of the home-computer models combined still had nowhere near the market penetration of the Atari VCS alone at its peak. There simply weren’t enough buyers out there to sustain a company of the size to which Activision had so quickly grown. The only way to bridge the chasm was to glide over on the millions they had socked away in the bank during the boom years whilst brutally downsizing to stretch those millions farther. Activision 2.0 would have to be, at least for the time being, a bare shadow of Activision 1.0’s size. Yet “the time being” soon began to look like perpetuity, especially as the cash reserves began to dry up. In 1983, Activision 1.0 had revenues of $158 million; in 1986, three years into Levy’s remaking/remodeling, Activision 2.0 had revenues of $17 million. The fundamental problem, which grew all too clear as Activision 2.0’s life went on, was that the home-computer boom had fizzled about a decade early and about 90 percent short of its expected size.

Ghostbusters

With Activision projected to lose $18 million in 1984 despite the frantic downsizing, Levy quietly forgot all his old prejudices against licensed products when Columbia Pictures made it known that they would be interested in letting David Crane do a game based on their new movie Ghostbusters. Ghostbusters the game was literally an afterthought; the movie had already been in theaters a week or two when Activision and Columbia started discussing the idea. The deal was closed within days, and Crane was told he had exactly six weeks to come up with the game before Ghostbusters mania died down — which was just as well, as he was planning to get married in six weeks. Exactly these sorts of external pressures had undone Atari licensed games like Pac-Man and E.T., and were a big part of the reason that Levy had heretofore avoided licenses. Luckily, Crane had already been working on a game for the Commodore 64 he called Car Wars (no apparent relation to the Steve Jackson Games board game, the license for which was held by Origin Systems), which had the player undertaking a series of missions whilst racing around a city map battling other vehicles. Each successful mission earned money she could use to upgrade her car for the next, more difficult level. Crane realized it should be possible to retrofit ghosts and lots of other paraphernalia from the movie onto the idea. Realizing that he couldn’t possibly do it all on his own, he recruited a team of four others to help him. Ghostbusters thus became the first Activision game to abandon the single-auteur model of development that had been the standard until then. In its wake almost every other project also became a team project, a concession to the technical realities of developing for the more advanced Commodore 64 and other home computers versus the old VCS. With the help of his assistants, Crane was able to add many charming little touches to Ghostbusters, like sampled taglines from the movie (“He slimed me!”) and a chiptunes version of Ray Parker, Jr.’s monster hit of a theme song, complete with onscreen words and a bouncing ball to help you sing along.

Ghostbusters

David Crane was Activision’s King Midas. Despite its rushed development, Ghostbusters turned out to be a very playable game, even a surprisingly sophisticated one, what with its CRPG-like in-game economy and the thread of story that linked all of the ghost-busting missions together. It even has a real ending, with the slamming of the dimension door that’s been setting all of these ghosts loose in our universe. Released in plenty of time for Christmas 1984 and doubtless buoyed by the fact that Ghostbusters the movie just kept going and going — it would eventually become the most successful comedy of the 1980s — Ghostbusters became Activision 2.0’s biggest hit by a light year and one of the bestselling Commodore 64 games of all time, selling well into the hundreds of thousands. Like relatively few Commodore 64 games but almost all of the real blockbusters, it became hugely popular in both North America and Europe, where Activision, unlike most of their peers, who published there, if at all, through the likes of U.S. Gold, had set up a real semi-autonomous operation — Activision U.K. — during the boom years. And it was of course widely ported to other platforms popular on both sides of the pond.

It’s at this point that Levy’s story and Activision’s get really interesting. Having proved through Ghostbusters that his company could make the magic happen on the Commodore 64 as well as the Atari VCS, however much more modest the commercial rewards for even a huge hit on the former platform were destined to be, Levy now began to push through a series of aggressively innovative, high-concept titles, often over the considerable misgivings of the board of directors with which Activision’s IPO had saddled him. I don’t want to overstate the case; it’s not as if Levy transformed Activision overnight into an art-house publisher. The next few years would bring plenty of solid action games alongside the occasional adventure as well as, what with Activision having popped the lid off this particular can of worms to so much success, more licensed titles: Aliens, Labyrinth, Transformers, and, just to show that not all licenses are winners, a computerized adaptation of Lucasfilm’s infamous flop Howard the Duck that managed to be almost as bad as its inspiration. Yet betwixt and between all this expected product Levy found room for the weird and the wacky and occasionally the visionary. He made his agenda clear in press interviews. Rhetorically drawing on his music-industry experience despite the fact that he had never actually worked on the creative side of that industry, he cast Activision’s already storied history as that of a plucky artist-driven indie label that went “head to head with the majors” and thereby proved that “the artist can be trusted,” whilst chastising competitors for “a certain stagnation in creative style, concept, and content.” The game industry was — or should be — driven by its greatest assets, its creators. His job and that of the other business-oriented people was simply to facilitate their art and get it before the public.

Writing a game is close to the whole concept of songwriting and composing. Then you get involved later on with the ink-and-paper people for packaging. There are a lot of similarities between the record business and what we do.

There was a certain amount of calculation in such statements, just as there was in Trip Hawkins’s campaigns on behalf of his own electronic artists. Yet, also as in Hawkins’s case, I believe the core sentiment was very sincere. Levy genuinely did believe he was witnessing the birth of a new form (or forms) of art, and genuinely did feel a responsibility to nurture it. Garry Kitchen, a veteran programmer who had joined Activision during the boom years, tells of how Levy during the period of Activision 2.0 kept rejecting his ideas for yet more simple action games: “Do something different, innovate!” How many other game-industry CEOs can you imagine saying such a thing then or now?

At this point, then, I’d like to very briefly tell you about a handful of Activision’s titles from 1985 and 1986. In some cases they’re more interesting as ideas than as playable works, but no one could accuse any of what follows of failing to heed Levy’s command to “innovate!” Some aren’t actually games at all, fulfilling another admonition of Levy to his programmers: to stop always thinking in terms of rules and scores and winners and losers.

The first fruit of the creative pressure Levy put on Kitchen was The Designer’s Pencil. Yet another impressive implementation of a Macintosh-inspired interface on an 8-bit computer, The Designer’s Pencil is, depending on how you look at it, either a thoroughly unique programming environment or an equally unique paint program. Rather than painting directly to the screen, you construct a script to control the actions of the eponymous pencil. You can also play sounds and music while the pencil is about its business. A drawing and animation program for people who can’t draw and a visual introduction to programming, The Designer’s Pencil is most of all just a neat little toy.
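For those who never saw it, the flavor of the thing is close to turtle graphics: the drawing is the output of a little program rather than something painted by hand. Here’s a minimal sketch of the concept in Python’s standard turtle module — an analogy only, not The Designer’s Pencil’s actual command set, which was its own Commodore 64 creation:

```python
import turtle

# A "script" of pencil commands, interpreted one step at a time --
# the picture is the program's output, not something painted directly.
# (An analogy only; The Designer's Pencil had its own command set.)
script = [
    ("pen_down", None),
    ("forward", 100),
    ("turn", 120),
    ("forward", 100),
    ("turn", 120),
    ("forward", 100),   # three sides and two turns: a triangle
]

pen = turtle.Turtle()
for command, argument in script:
    if command == "pen_down":
        pen.pendown()
    elif command == "forward":
        pen.forward(argument)
    elif command == "turn":
        pen.left(argument)

turtle.done()  # keep the window open to admire the result
```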

Garry Kitchen's GameMaker

Responding to users of The Designer’s Pencil who begged for ways to make their creations interactive, Kitchen later provided Garry Kitchen’s GameMaker: The Computer Game Design Kit. Four separate modules let you make the component pieces of your game: Scenes, Sprites, Music, and Sound. Then you can wrap it all together in a blanket of game logic using the Editor. Inevitably more complicated to work with than The Designer’s Pencil, GameMaker is still entirely joystick-driven if you want it to be and remarkably elegant given the complexity of its task. It was by far the most powerful software of its kind for the Commodore 64, the action-game equivalent of EA’s Adventure Construction Set. In a sign of just how far the industry had come in a few years, GameMaker included a reimplementation of Pitfall!, Activision’s erstwhile state-of-the-art blockbuster, as a freebie, just one of several examples of what can be done and how.

Described by its creator Russell Lieblich as a “Zen” game, Web Dimension has an infinite number of lives, no score, and no time limit. The ostensible theme is evolution: levels allegedly progress through atoms, planets, amoebae, jellyfish, germs, eggs, embryos, and finally astronauts. But good luck actually making those associations. The game is best described as, as a contemporary reviewer put it, “a musical fantasy of color, sight, and sound,” on the same wavelength as if less ambitious than Automata’s Deus Ex Machina. As with that game, the soundtrack is the most important part of Web Dimension. Lieblich considered himself a musician first, programmer second, and not actually much of a gamer: “I’m not really into games, but I love music, so I designed a musical game that doesn’t keep score.” Like Deus Ex Machina, Web Dimension is not much of a game in conventional terms, but it is interesting as a piece of interactive art.

Hacker

Hacker begins with a screen that’s blank but for a single blinking login prompt. You’re trying to break into a remote computer system with absolutely nothing to go on. Literally: in contrast to the likes of the Ultima or Infocom games, Hacker‘s box contained only a card telling how to boot the game and an envelope of hints for the weak and/or frustrated. Discovering the rules that govern the game is the game. Designed and programmed by Steve Cartwright, creator of Barnstorming amongst others, it was another achievement of an Activision old guard who continued to prove they had plenty of new tricks up their sleeves. This weird experiment of a game actually turned into a surprising commercial success, Activision 2.0’s second biggest seller after Ghostbusters. Playing it is a disorienting, sinister, oddly evocative experience until you figure out what’s going on and what you’re doing, whereupon it suddenly all becomes anticlimactic. “Anticlimactic” also best describes the sequel, Hacker II: The Doomsday Papers; Hacker is the kind of thing that can only really work once.

Little Computer People

Little Computer People is today the most remembered creation of Activision’s experimental years, having been an important influence on a little something called The Sims. It’s yet another work by the indefatigable David Crane, his final major achievement at Activision. The fiction, which Crane hewed to relentlessly even in interviews, has you helping out as a member of the Activision Little Computer People Research Group, looking into the activities of the LCPs who have recently been discovered living inside computers. When you start the program you’re greeted by a newly built house perfect for the man and his dog who soon move in. Every single copy of Little Computer People contained a unique LCP with his own personality, a logistical nightmare for Activision’s manufacturing process. He lives his life on a realistic albeit compressed daily schedule, with 24 hours inside the computer passing in 6 outside it. Depending on the time of day and his personality and mood as well as your handling, the little fellow might prefer to relax with a book in his favorite armchair, play music on his piano, exercise, chat on the phone, play with his dog, watch television, listen to the record collection you’ve (hopefully) provided, or play a game of cards with you — when he isn’t sleeping, brushing his teeth, or showering, that is.  You can try to get him to do your bidding via a parser interface, but if you aren’t polite enough about it or if he’s just feeling cranky the answer is likely to be a big fat “No!”
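One plausible way to pull off the every-copy-unique trick — and this is speculation on my part, not Crane’s documented method — would be to stamp each disk with a serial number and derive the personality, along with the compressed house clock, deterministically from it:

```python
import hashlib

def personality_from_serial(serial: str) -> dict:
    # Hash a per-copy serial number and read stable traits out of the
    # digest; the same disk always produces the same little person.
    # (Speculative sketch only; not Activision's documented scheme.)
    digest = hashlib.sha256(serial.encode("ascii")).digest()
    return {
        "tidiness": digest[0] % 10,      # 0 = never showers, 9 = neat freak
        "musicality": digest[1] % 10,    # how often he sits at the piano
        "agreeability": digest[2] % 10,  # low values answer requests "No!"
    }

def house_clock(real_seconds_elapsed: float) -> float:
    # 24 hours inside the computer pass in 6 outside: a 4x compression.
    return (real_seconds_elapsed * 4.0 / 3600.0) % 24.0

print(personality_from_serial("LCP-048923"))        # hypothetical serial format
print(f"{house_clock(5400):.1f}h on the house clock")  # 1.5 real hours in
```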

While Little Computer People was frequently dismissed as pointless by the more goal-oriented among us, other players developed strong if sometimes dysfunctional attachments to their LCPs. For example, in an article for Retro Gamer, Kim Wild told of her fruitless year-long struggle to get her hygienically challenged LCP just to take a shower already. In its way Little Computer People offered as many tempting mysteries as any adventure game. Contemporary online services teemed with controversy over the contents of a certain upstairs closet into which the LCP would periodically disappear, only to reappear with a huge smile on his face. The creepy majority view was that he had a woman stashed in there, although a vocal minority opted for the closet being a private liquor cabinet.

While it didn’t sell in anything like the quantities of some of Crane’s other games, Little Computer People was a moderate success. Further cementing its connection to The Sims, Crane and Activision planned a series of expansions that would have added new houses and other environments for the LCPs and maybe even the possibility of having more than one of them, and thus of watching them interact with one another instead of only with their dogs and their players. For reasons that will have to wait for a future article, however, that would never happen.

Shanghai

Activision’s first notable release for the new 68000-based machines was Shanghai. A simple solitaire tile-matching exercise that uses mahjong tiles — and not, it should be emphasized, an implementation of the much more complex actual game of mahjong — Shanghai was created by Brodie Lockard, a former gymnast who’d become paralyzed following a bad fall and used his “solitaire mahjong” almost as a form of therapy in the years that followed. Particularly in its original Macintosh incarnation, it’s lovely to look at and dangerously addictive. Like many of Activision’s experimental titles of the period, Shanghai cut against most of the trends in gaming of the mid-1980s. Its gameplay was almost absurdly simple while other games were reveling ever more in complexity; it was playable in short bursts of a few minutes rather than demanding a commitment of hours; its simple but atmospheric calligraphic visuals and feel of leisurely contemplation made a marked contrast to the flash and action of other games; its pure abstraction was the polar opposite to other games’ ever-growing focus on the experiential. Moderately successful in its day, Shanghai was perhaps the most prescient of all Activision’s games from this period, forerunner to our current era of undemanding, bite-sized mobile gaming. Indeed, it would eventually spawn what seems like a million imitators that remain staples on our smartphones and tablets today. And it would also spawn, the modern world being what it is, lots and lots of legal battles over who actually invented solitaire mahjong; there’s still considerable debate about whether Lockard merely adapted an existing Chinese game to the computer or invented a new one from whole cloth.

There are still other fascinating titles whose existence we owe to Jim Levy’s Activision 2.0. In fact, I’m going to use my next two articles to tell you about two more of them — inevitably, given our usual predilections around here, the most narrative-focused of the bunch.

(Lots of print sources for this one, including: Commodore Magazine of June 1987, February 1988, and July 1989; Billboard of June 16 1979, July 14 1979, September 15 1979, June 19 1982, and November 3 1984; InfoWorld of August 4 1980 and November 5 1984; Compute!’s Gazette of March 1985; Creative Computing of September 1983 and November 1984; Antic of June 1984; Retro Gamer 18, 25, 79, and 123; Commodore Horizons of May 1985; Commodore User of April 1985; Zzap! of December 1985; San Jose Mercury of February 18 1988; New York Times of January 13 1983; Commodore Power Play of October/November 1985; Lodi News Sentinel of April 4 1981; the book Zap!: The Rise and Fall of Atari by Scott Cohen; and the entire 7-issue run of the Activisions newsletter. Online sources include Gamasutra’s histories of Activision and Atari and Brad Fregger’s memories of Shanghai‘s development.)

 
 


Starflight

Starflight

Fair warning: this article spoils the ending to Starflight, although it doesn’t spoil the things you need to know to get there.

Starflight, one of the grandest and most expansive games of the 1980s, was born in the cramped confines of a racquetball court. Rod McConnell, a businessman who had been kicking around Silicon Valley for some years, happened to have as his regular playing partner Joe Ybarra, a former Apple executive who in late 1982 had decamped to join Trip Hawkins’s fledgling new Electronic Arts as a game “producer.” Intrigued by Ybarra’s stories of “electronic artists” and an upcoming revolution in entertainment based on interactivity, McConnell wondered if he might somehow join the fun. He thus started discussing ideas with a programmer named Dave Boulton.

Boulton, who died in 2009, is the unsung hero of Starflight. His involvement with the project wouldn’t last very long, but his fingerprints are all over the finished game. He was, first and foremost, a zealot for the Forth programming language. He was one of the founding members of the Forth Interest Group, which was established just at the beginning of the PC era in 1977 and did stellar work to standardize the language and bring it to virtually every one of the bewildering variety of computers available by the early 1980s. More recently his hacking had led him to begin exploring the strange universe of Benoit Mandelbrot’s fractal sets fully eighteen months before Rescue on Fractalus! would make fractals a household name for gamers and programmers everywhere. Boulton enticed McConnell with an idea much bigger than Lucasfilm’s simple action game: an almost infinitely vast planet which, thanks to the miracle of fractals, the player could roam at will.
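The appeal of fractals for a project like this was economy: a tiny seed of data plus a recursive rule expands into endless plausible detail, with nothing stored on disk. Midpoint displacement, a classic member of the family, shows the principle in a few lines of Python — offered as an illustration of the general technique only, not as Boulton’s actual code:

```python
import random

def midpoint_displacement(left, right, roughness, depth):
    # Recursively subdivide a line segment, nudging each new midpoint
    # by a random offset that halves at every level; the result is a
    # self-similar terrain profile grown from almost no data.
    if depth == 0:
        return [left, right]
    mid = (left + right) / 2 + random.uniform(-roughness, roughness)
    left_half = midpoint_displacement(left, mid, roughness / 2, depth - 1)
    right_half = midpoint_displacement(mid, right, roughness / 2, depth - 1)
    return left_half + right_half[1:]  # don't duplicate the shared midpoint

random.seed(811)  # the same seed always regrows the same landscape
heights = midpoint_displacement(0.0, 0.0, 16.0, 6)
print(" ".join(f"{h:6.1f}" for h in heights))
```

Because the same seed always regrows the same landscape, a world never needs to be stored, only regenerated on demand — presumably a large part of what made hundreds of explorable planets thinkable on mid-1980s hardware.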

McConnell founded a company named Ambient Design and hired a couple of young programmers to help Boulton. One, Alec Kercso, was just finishing a degree in Linguistics in San Diego, but was more interested in his hobby of hacking. The other, Bob Gonsalves, was another dedicated Forther who wrote a monthly column on the language for Antic magazine. He was hired on the basis of this as well as his intimate familiarity with the Atari 8-bit platform, which thanks to its audiovisual capabilities was the great favorite around EA circles during that first year or so, until the Commodore 64 came online in earnest. On the strength of McConnell’s friendship with Ybarra and little else — the whole group of them had between them neither experience with game development nor any real plan for what their game would be beyond a vast environment created with fractals — EA signed them as one of the first of their second wave of contracts, following the premiere of the initial six, reputation-establishing EA games. Ybarra would be their producer, their main point of contact with and advocate within EA. He would have his work cut out for him in the years to come.

The idea soon evolved to encompass not just a single planet but many. The game, to be called Starquest, would let you fly in your starship across an entire galaxy of star systems, each with planets of its own, each of which would in turn be its own unique world, with unique terrain, weather, life forms, and natural resources. For Boulton, the man who had gotten this ball rolling in earnest in the first place, it was suddenly getting to be too much. You just couldn’t do all that on an 8-bit computer, he said, not even with the magic combination of Forth and fractals. He walked away. He would go on to develop early software for the Commodore Amiga and to join another unheralded pioneer, Jef Raskin, the originator of the Apple Macintosh project, in working on Raskin’s innovative but unsuccessful Canon Cat.

Left on their own with only Boulton’s prototype fractal code to guide them, Kercso and Gonsalves felt in over their heads. They needed to be able to show each planet as a rotating globe from space, complete with the fractal terrain that the player would be able to explore more intimately if she elected to land, but didn’t know how to map the terrain onto a sphere. McConnell soon found another programmer, Tim Lee, who did. Lee had already written firmware for Texas Instruments calculators and very complex policy-analysis applications for life-insurance companies. Yet another Forth fan, he’d just finished writing an actual game in the language, an IBM PC port of the Datasoft action game Genesis, which Datasoft would never ship due to its incompatibility with the PCjr. With the graphics code he’d developed for that project, plus his own explorations of fractal programming, Lee was more than up to rendering those spinning terrain-mapped globes.

One of Tim Lee’s spinning terrain-mapped planets. He was also responsible for most of the fundamental low-level architecture of the game.

Lee also brought with him his programming expertise on the IBM PC. This prompted the team to take a big step: to abandon their little 8-bitters and move to the bigger 16-bit MS-DOS machines. They had recognized that Boulton had been right: their ideas were just too big to fit into 8 bits. MS-DOS was just finishing up its trouncing of CP/M to become undisputed king of the business-computing world, but had managed little penetration into homes, which were still dominated by the likes of the Apple II and Commodore 64. On the one hand, the IBM was a terrible gaming platform: its CGA graphics could show only four colors at a time in palettes that seemed deliberately chosen to clash as horribly with one another as possible and give artists nightmares; its single speaker was capable of little more than equally unpleasant bleeps and farts; even standard gaming equipment like joysticks were effectively non-existent due to a perceived lack of demand. But on the other hand, the IBM was an amazing gaming platform, with several times the raw processing power of the 8-bitters and at least twice the memory. Like so much in life, it all depended on how you looked at it. Ambient Design decided they needed the platform’s advantages to contain a galaxy that would eventually encompass 270 star systems with 811 planets between them, and they’d just have to take the bitter with the sweet. Still, it’s unlikely that EA would have gone along with the idea had it not been for the imminent release of the PCjr, which was widely expected to do in home computing what its big brother had in business computing.

Starflight

About this point the last and arguably biggest piece of the development puzzle arrived in the form of Greg Johnson, Kercso’s roommate. Not much of a programmer himself, Johnson had, like his roommate, also just finished a degree and wasn’t quite sure what to do next. He had listened avidly to Kercso’s reports on the game’s progress, and eventually started drawing pictures of imagined scenes on his Atari 800 just for fun. He was soon coming up with so many pictures and, more importantly, ideas that Kercso got him an interview and McConnell hired him. Just like that, Johnson became the much-needed lead designer. Until now the team had been focused entirely on the environment they were trying to create, giving little thought to what the player would be expected to actually do there. As Kercso would later put it, what had been an “open-ended game of exploration” now slowly began to evolve into “a complex story with interwoven plots and twists.” Johnson himself later said his job became to come up with what should happen, the others to come up with how it could happen. Or, as Lee put it, Johnson designed the scenario while the others designed “the game system that you could write the scenario for.” And indeed, he proved to be a boundless fount of creativity, coming up with seven unique and occasionally hilarious alien races for the player to fight, trade, and converse with during her travels.

Critical to those conversations became a designer we’ve met before on this blog, Paul Reiche III, who spent two important weeks helping Johnson and his colleagues to hash out a workable conversation engine which made use of the system of conversation “postures” from a game he had co-designed with Jon Freeman, Murder on the Zinderneuf. Reiche, an experienced designer of tabletop RPG rules and adventures as well as computer games, continued to offer Johnson, who had heretofore thought of himself as a better artist than writer or designer, advice and ideas throughout the game’s protracted development.

The system of conversation “postures” from Murder on the Zinderneuf.

Starflight’s implementation of conversation postures.
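Mechanically, the posture idea is simple: instead of composing sentences, the player picks a stance — in Starflight, obsequious, friendly, or hostile — and each race’s reaction is keyed to the pairing of that stance with its own temperament. A toy sketch of such an engine follows; the dispositions and canned lines are invented for illustration, and the real thing was of course written in Forth:

```python
from enum import Enum

class Posture(Enum):
    OBSEQUIOUS = "obsequious"
    FRIENDLY = "friendly"
    HOSTILE = "hostile"

# Invented dispositions and lines, purely for illustration.
RESPONSES = {
    ("proud", Posture.OBSEQUIOUS): "Flattered, the elders share a rumor.",
    ("proud", Posture.FRIENDLY): "A curt acknowledgment, nothing more.",
    ("proud", Posture.HOSTILE): "Shields up; weapons lock detected!",
    ("skittish", Posture.OBSEQUIOUS): "Nervous, noncommittal static.",
    ("skittish", Posture.FRIENDLY): "A torrent of gossip about nearby stars.",
    ("skittish", Posture.HOSTILE): "The ship turns tail and flees.",
}

def respond(disposition: str, posture: Posture) -> str:
    # The player chooses a stance, not a sentence; the pairing of the
    # alien's temperament and that stance selects the reply.
    return RESPONSES.get((disposition, posture), "...silence...")

# A proud race rewards flattery -- compare the Velox example later on.
print(respond("proud", Posture.OBSEQUIOUS))
```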

“Protracted” is perhaps putting it too mildly. The process just seemed to go on forever, so much so that it became something of a sick running joke inside EA. The project appeared on more than three years’ worth of weekly status reports, from the time that EA was mature enough to have weekly status reports until the game’s belated release in August of 1986. Over that time the arcades and home game consoles crashed and burned; the home-computer industry went through its own dramatic boom and bust and stabilization; countless platforms came and went, not least among them the PCjr; EA gave up on the dream of revolutionizing mainstream home entertainment and accepted the status of big fish in the relatively small pond of computer gaming; IBM achieved business-computing domination and then ceded it almost as quickly thanks to the cloners; the bookware craze came and went; Infocom rose to dizzying heights and crashed to earth just as quickly; the Soviet Union went from an Evil Empire to a partner in nuclear-arms control. And still, ever and anon, there was Starflight, the much more elegant name chosen for Starquest after the release of Sierra’s King’s Quest. McConnell’s company name changed as well before all was said and done, from Ambient Design to Binary Systems (get it?).

EA very nearly lost patience several times; McConnell credits his old friend Joe Ybarra with personally rescuing the project from cancellation on a number of occasions. With the contract structured to provide payments only after milestones that were few and increasingly far between, McConnell himself took personal loans and worked other jobs so as to be able to pay his team a pittance. Throughout it all he never lost faith, despite ample evidence that they didn’t, to be painfully blunt, entirely know what they were doing. The team members lived on savings or loans when their meager salaries ran out. Many months were consumed by fruitless wheel-spinning. As Lee later admitted, they were so entranced with this model universe that they “spent a lot of time trying to model things that didn’t add to the play of the game.” Forth was never the most readable language nor an ideal choice for a large group project, and as the project wandered off in this or that direction and back again the code got nightmarishly gnarly. This just made trying to modify or add to it take still longer. With McConnell only able to afford a tiny office and most of the team thus working remotely most of the time, just keeping everyone on the same page was difficult. Given the situation and the personalities involved, a certain amount of freelancing was inevitable. “There was no master plan detailing each and every task to be done,” said Kercso later. “We had an idea of what the major modules had to be and we added a lot of final design as we got into programming each of the modules”; then they did their best to mash it all together.

Starflight was a prototypical runaway, mismanaged, overambitious project, the likes of which the industry has seen many times since. The difference was, instead of being ignominiously cancelled or shoved out the door incomplete, Starflight somehow ended up amazing. Call it serendipity, or credit it to a team that just wouldn’t give up. Once the core group was assembled, nobody thought of quitting; everyone was determined to finish the game — and on its own original, insanely ambitious terms at that — or die trying. “I remember saying that I didn’t care if I died after it came out,” said Johnson later, but “please, God, let me live until then.”

The hopeless muddle of a combat engine.

Some of the confusion and occasional lack of direction is visible in the final game. Even the biggest Starflight fan would have trouble praising the arcade-style in-space combat engine, for instance, which manages to be both far too simplistic and far too baffling to actually control. There’s a disconnected feeling to certain elements, as of clever ideas that were never fully woven into the holistic design. You can gather flora and fauna from the planets you visit and return them to your base to study, for example, but you make so little money from doing so as opposed to mining minerals — and the controls for stunning and gathering your specimens are once again so awkward — that you’re left wondering what the point is. Ditto most of the intriguing alien artifacts you find, which you cart excitedly back to base only to find that they “reveal very little of interest” and are “totally useless to us.” And the game has what should be a fatal case of split personality, being half stately space opera and half silly romp filled with sci-fi alien caricatures.

And yet it really doesn’t matter. Starflight is that rare piece of work that actually justifies the critic’s cliché of being more than the sum of its parts. It’s not a tight design; appropriately given its theme, it sprawls everywhere, sometimes seemingly uselessly so. But even its blind alleys are fascinating to wander down once or twice. It’s the opposite of a minimalist masterpiece like M.U.L.E., whose every last note is carefully considered and exhaustively tested and blended carefully into the whole. And you know what? It’s every bit as awesome.

But for the benefit of those of you who haven’t played it, it’s really high time that I tell you what the game’s all about, isn’t it?

Your home starbase, where you outfit your ship, select and train your crew, buy and sell equipment and resources, etc. It was largely the work of Alec Kercso.

Starflight starts you off at your base on your home planet of Arth — no, that’s not a typo — with a rather shabbily equipped ship and a little bit of seed capital. If you’re smart, you’ll spend most of the latter training a crew, which will include, in the tradition of a certain classic television series that went where no man has gone before, a Captain, a Science Officer, a Navigator, an Engineer, a Communications Officer, and a Doctor. You’ll also need to save enough to add a cargo pod or three to your ship, so you can begin to earn money by landing on nearby planets and scooping up minerals for sale back at Arth. You need money to upgrade your ship with better engines, weapons, and defenses, to train your crew, and to buy something called endurium, if you can’t find or mine enough of it. Endurium is Starflight‘s equivalent to dilithium crystals, the semi-magical fuel that enables faster-than-light travel.

As you build up your ship and your bank account, you can travel ever farther from Arth, exploring an algorithmically generated galaxy so vast that, like the Fibonacci galaxies of Elite, even Starflight‘s creators hadn’t seen all of it before the game’s release. And so you fly and land where you will, searching for mineral-rich planets you can mine and, even better, habitable planets you can recommend for colonization; you receive a substantial finder’s fee in return for each recommendation. Alien races inhabit various sectors of the galaxy. Some you may be able to befriend, or at least achieve a level of mutual toleration with; others you’ll have to fight. Thus the need to fit out your ship with the best possible weapons and defenses.

Exploring the surface of a planet. This module was largely the work of Bob Gonsalves.

This, then, is Starflight the sandbox game. While it’s in no way derivative of Elite — Starflight’s creators couldn’t have even been aware of the older game until quite late in their own development cycle, since Elite didn’t reach American shores until late 1985 — Starflight does generate a similar compulsion to explore, an addictive need to see what all is out there. But everything about Starflight is richer and more complex, with the exception only of the combat system that was the heart of Elite but a mere afterthought in Starflight (if you had to spend much time in Starflight actually fighting, it would be a very, very bad game). With so much more computing horsepower at their disposal, Binary Systems was able to add layer after intriguing layer: the ability to land on planets, and once there to engage in an exploring and mining mini-game that is as absurdly addictive as it is superficially simplistic; the chance to converse with the aliens you meet instead of just shooting at them; the whole CRPG angle of training a crew and keeping them healthy; sensor- and Navigator-confounding nebulae and wormholes to negotiate. Whereas Elite sessions soon settle down into a comfortable routine of trade-jump-fight-dock, rinse and repeat forever, Starflight always seems to have something new to throw at you.

But the most important difference is the plot that Starflight layers over its sandbox. I realize everyone is different on this point, but personally I always have a little bit of trouble with purely open-ended games (see my review of Seven Cities of Gold for another example). When I play Elite I eventually start to get bored for lack of any real narrative or goal to shoot for beyond the almost impossible one of actually becoming Elite. Ian Bell and David Braben originally wanted to include a real plot, but there just wasn’t room to contain it inside a 32 K BBC Micro. Starflight, however, has the sort of plot-driven direction that Elite so painfully lacks.

So, having told you what you can do in Starflight, let me now tell you why you do it. Evidence has recently turned up on Arth that the planet’s inhabitants did not evolve there; that it was colonized at some point in the distant past, that the colonists regressed into barbarism due to war or other pressures, and that only now has civilization recovered. A cache of old documents has also revealed the secrets of endurium and faster-than-light travel. All of which is great, except that Arth has even bigger fish to fry. A strange wave is spreading across the galaxy, causing stars to flare — with disastrous results for any orbiting planets — as it strikes them. Thus your mission is not just to explore and get rich, but to discover the source of the wave and to stop it before it reaches Arth.

Starflight has an unusually elaborate plot for its day, but unlike in so many more recent games it never straitjackets you to it. The plot is more backstory than story. The game is essentially a big scavenger hunt, sending you off to reconstruct quite a complicated galactic history. Follow the trail long enough and you should turn up the clues and objects you need to end the threat to Arth and the galaxy by blowing up a certain Crystal Planet that’s the source of all the trouble. There’s not all that much that you actually need to do to beat the game when you know how. In fact, you can do it in less than two game days. It’s the clue- and object-scavenging that’s all the fun, the process of putting the pieces of the backstory together. When you discover Earth, for example — yes, those original colonizers of Arth came, inevitably, from Earth — it gives you a thrill when you first look down on those familiar continents from orbit. Other pieces of the puzzle are almost equally thrilling when they come to light. If you’re playing cold, sans walkthrough — which is honestly the only way to play; you’ll otherwise just be left wondering what all the fuss is about — you’ll need to look everywhere for clues: to the occasional emails you receive from your overseers on Arth; to messages and artifacts you find on the planets; to the map and other materials included in the game package. And, most importantly, you need to talk at length to all those aliens, a goofy and amusing rogues’ gallery of sci-fi clichés. They’re the silly part of this odd mixture of stately epic and silly romp — but they’re so much fun we’ll take them just as they are, cognitive dissonance be damned.

The Elowan, a race of plant-like hippies who evince peace and love along with passive aggression.

The Thrynn, who have such weird issues with the Elowan that they’ll attack if you have one in your crew.

The unforgettably loathsome Spemin, who lack backbone — literally.

The Mechans, a group of robots who think you’re just what they’ve been waiting for all these millennia.

Now, this plot-as-scavenger-hunt approach to gameplay is hardly an innovation of Starflight. The Ultima games in particular had long been trolling these waters by the time it appeared. The breadcrumb-following approach to game design always gives rise to the possibility of getting yourself stuck because you’ve missed that one little thing in this absurdly vast virtual world on which all further progress depends. Yet there is a difference between Starflight and Ultima in this respect, a difference not so much in kind as in quality. Starflight is a much more friendly, generous game. Whereas Ultima seems to relish making you fail by hiding vital clues in the most outlandish places or behind the most unlikely parser keywords, there’s a sense that Starflight really wants you to succeed, wants you to solve the mystery and save the galaxy. There are multiple ways to learn much of what you need to know, multiple copies of some vital artifacts hidden in completely different places, multiple solutions to most of the logistic and diplomatic puzzles it sets before you. Yes, there’s a time limit, what with Arth’s sun destined eventually to flare, but even that is very generous, operating more as a way to lend the game excitement and narrative urgency than as a way to crush you for failing some hardcore gamer test. Its generosity is not absolute: in my own recent playthrough I had to turn to a walkthrough to learn that you need to be obsequious when you talk to the Velox or they’ll never share an absolutely vital piece of information that I don’t think you can glean anywhere else (remember that, would-be future players!). Still, even these few choke points feel more like accidents than deliberate cruelties strewn in your path by cackling designers. Starflight really does feel like a welcome step toward a more forgiving, inclusive sort of gaming.

Spoilers begin!

No discussion of Starflight‘s plot can be complete without the shocker of an ending. When you finally arrive at the Crystal Planet and are preparing to destroy it, everything suddenly gets deeply weird via a message from an earlier visitor:

I can hardly believe it! Those weird lumps are actually intelligent life. The Ancients are endurium! And we have spent hundreds of years hunting them to burn for fuel in our ships. Their metabolism is so much slower than ours that they live in an entirely different time framework. I don’t think they even know we are sentient. I believe it was only because of a link thru the Crystal Planet that contact was made at all. This Crystal Planet was their last defense. I can hardly blame them. Carbon-based life must have seemed something like a virus to them.

Despite this discovery, the only option — other than to simply stop playing — is to blow up the Crystal Planet anyway, thus annihilating the home planet of a race much more ancient and presumably more sophisticated than your own. It’s a strange, discordant sort of ending to say the least. Some have made much of it indeed; see for instance an earnest article in Envirogamer that would make of Starflight an elaborate allegory for our environmental problems of today, with endurium representing fossil fuels and the stellar wave standing in for global warming. I’m not really buying that. Not only does the article strike me as anachronistic, an argument born of 2009 rather than the mid-1980s, but I somehow have trouble seeing Starflight as a platform for such deliberate messaging; it strikes me as a grand escapist space opera, full stop, without any rhetorical axe to grind.

But is Starflight’s ending a deliberate subversion of genre convention, like the controversial ending to Infidel? Maybe. The game is hardly without a certain melancholia in other places; for instance, you’ll occasionally meet a race of interstellar minstrels who roam the spacelanes singing sad songs about glories that used to be. Yet after you do the bloody deed on the Crystal Planet the game immediately leaps back to unabashed triumphalism, gifting you with medals and music and glory and a chance to roam the galaxy at your leisure with an extra 500,000 credits in your account, burning genocidal quantities of endurium as you do so with nary a moral qualm to dog your steps. What to make of this seeming obliviousness to the ramifications of what you’ve just done? You’ll have to decide for yourself whether it represents subtlety or just a muddle of mixed messages that got away from their creators. It’s just one more of the layers within layers that make Starflight so memorable for just about everyone who plays it.

Spoilers end!

In addition to its innovations in the softer arts of design and narrative, Starflight has one final, more concrete quality that sets it apart from what came before, one that can be very easy to overlook but is nevertheless of huge importance. It’s the first of these big open-world games that offers a truly persistent virtual world to explore. Due to the limitations of 8-bit floppy-disk drives, earlier games all fudge persistence in some way. The Wizardry and Bard’s Tale games, for instance, save only the state of your characters between sessions. Everything else resets to its original state as soon as you leave the game, or, indeed, just travel between dungeon levels or between the dungeon and the city. Amongst numerous other oddities, this means that you can actually “win” these games over and over with the same group of characters; the games literally forget you’ve done so almost immediately after showing you the victory screen. The 8-bit Ultimas do only a little better: the outdoor world map does persist along with the details of your characters, but cities and dungeons, again, reset themselves ad infinitum. You can go into a town, murder a dozen guards and rob every shop in town, then exit and return to find all restored to peace and tranquility again. Indeed, solving the early Ultimas is virtually dependent on you doing exactly this.

Starflight, however, is different. Its whole vast galaxy remembers what you have done, on both a macro- and micro-scale. If you discover a juicy new planet, name it, and recommend it for colonization, it goes by that name for the rest of the game. If you befriend or piss off a given alien race, they don’t forget you or what you’ve done. If you strip-mine a planet of all its minerals, they don’t reappear the next time you land on it. If you make notes in your “Captain’s Log” (a first in itself), they remain there until you delete them. If you blow up an alien race’s home planet, thereby destroying their entire civilization, it stays blown up. This is a huge step forward for verisimilitude, one enabled by Binary Systems’s choice to throw caution to the wind and target the bigger, more capable MS-DOS machines.[1]

As Starflight neared release at last, it was very much an open question whether it would find an audience. By now the PCjr had come and gone — just as well given that the game’s memory requirements had ballooned past that machine’s standard 128 K to a full 256 K. No one had ever released such a major title exclusively on MS-DOS. Normally if that platform got games at all they were ports of titles that had already proved themselves on the more popular gaming machines, delivered months or years after their debuts elsewhere. Binary Systems and EA could only make sure Starflight supported the popular Tandy 1000’s enhanced graphics and hope for the best.

The best was far better than they had bargained for: initial sales far exceeded the most optimistic expectations, leaving EA scrambling to produce more copies to fill empty store shelves. It would eventually sell well over 100,000 copies on MS-DOS alone, a major hit by the standards of any platform. Starflight placed owners of other computers in the unaccustomed position of lusting after a game on MS-DOS of all places, a platform most had heretofore viewed with contempt. Appearing as it did even as owners of the new generation of 68000-based machines were reveling in their Macs, Amigas, and Atari STs, Starflight was an early sign of a sea change that would all but sweep those platforms into oblivion within five years or so. With it now clear that a market of eager MS-DOS gamers existed, the platform suddenly became a viable first-release choice for publishers and developers. Only years later would Starflight belatedly, and not without much pain given the unique Forthian nature of its underpinnings, be ported to the Amiga, ST, Macintosh, Sega Genesis, and even the little Commodore 64 — the latter of which would probably have been better bypassed. It would sell at least 200,000 more copies on those platforms, a nice instance of creativity and sheer hard work being amply rewarded for Rod McConnell’s idealistic little team of five.

Most of Binary Systems stayed together long enough to craft a fairly workmanlike sequel, Starflight 2: Trade Routes of the Cloud Nebula, released in 1989 for MS-DOS just as the first ports of the original game were reaching those other platforms. It was playable enough, but somehow lacked some of the magic of the original. Most then drifted away from the games industry, with only Greg Johnson continuing to work intermittently as a designer, most notably of the ToeJam & Earl games for the Sega Genesis. Starflight had been such an all-consuming, exhausting labor of love that perhaps it was hard for the others to imagine starting all over again on another, inevitably less special project. Making Starflight had been the sort of experience that can come only once in a lifetime; anything else they did in games afterward would have been anticlimactic.

If we’re looking for something to which to attribute Starflight’s success, both commercially and, more importantly, artistically, we’re fairly spoiled for choice. Alec Kercso credits the way that he and his colleagues were allowed to work “organically,” experimenting to see what worked and what didn’t. Credit also the odd idealism that clung to the team as a whole, which prompted them to never back away from their determination to make something bigger and qualitatively different than anything that had come before, no matter how long it took. Credit Joe Ybarra and the management of EA who, skeptical as they may sometimes have been, ultimately gave Binary Systems the time and space they needed to create a masterpiece. Credit Rod McConnell for giving his stable of talented youngsters the freedom to create whilst finding a way to keep the lights on through all those long years. And credit, almost paradoxically, the limited technology of the era. With their graphics capabilities sharply limited, the team was free to concentrate on building an interesting galaxy, full of interesting things to do, and to tweak it endlessly, without needing to employ dozens of artists and 3D modellers to represent every little idea; tellingly, the only artist on the team was Johnson, who was also the lead designer. And of course credit Johnson for giving the game a plot and an unforgettable, quirky personality all its own, without which all of its technical innovations would have added up to little.

There’s a stately dignity to Starflight even amidst all the goofy gags, a sense of something grand and fresh and new attempted and, even more remarkably, actually realized. Few games have ever captured that science-fictional sense of wonder quite this well. If you start playing it — and that’s very easy to do now; Starflight 1 and 2 are both available in one-click-installable form from GOG.com — you might just find yourself lost in its depths for a long, long time. This, my friends, is one of the great ones.

(Useful vintage articles on Starflight include an interview with Rod McConnell in the March 1987 Computer Gaming World and especially one with Tim Lee in the July/August 1987 Forth Dimensions. Alec Kercso wrote about the game in, of all places, Jonathan S. Harbour’s Microsoft Visual BASIC Programming with DirectX. Good recent articles include ones in The Escapist, Gamasutra, and Sega-16. Tim Lee gave part of the source code and many design documents to Ryan Lindeman. He once made even more source and documents available online, some of which can still be recovered via the Internet Archive’s Wayback Machine. Similarly available only via the Wayback Machine is Miguel Etto’s technical analysis of the game’s Forthian underpinnings. See The CRPG Addict for an experiential play-through of the game.)

Footnotes
1 This is not to say that all is smooth sailing. Starflight constantly saves updated versions of its data files to disk as you play. It then relies on you to “commit” all of these changes by cleanly exiting the game from the menu. If you ever exit without properly saving, or get killed, your game becomes unplayable until you reset it back to its original data — whereupon you have the joy of starting all over. My advice is to make backups of the files “STARA.COM” and “STARB.COM” after every successful session; then if you get killed or have some other problem you can just copy these back into the game’s directory to get back to a good state. Or, if you like, here’s a DOSBox startup script you can use to automatically keep a few generations of states. Just copy it into the “[autoexec]” section of the game’s “.conf” file, editing as needed to suit your directory names.
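A minimal sketch of such a script, assuming the game is installed at C:\GAMES\STARFLT on the host machine and launched via a STARFLT executable (adjust both to match your own setup), might look like this:

@echo off
rem Mount the host directory containing the game. (Hypothetical path -- edit to suit.)
mount c c:\games
c:
cd \starflt
rem Rotate two generations of backups: .BK1 holds the state as of the last launch, .BK2 the launch before.
copy stara.bk1 stara.bk2
copy starb.bk1 starb.bk2
copy stara.com stara.bk1
copy starb.com starb.bk1
rem Launch the game. (The executable name may differ on your copy.)
starflt
exit

Restoring after a disaster is then just a matter of copying the “.BK1” (or “.BK2”) files back over “STARA.COM” and “STARB.COM”. (The first two “copy” commands will complain about missing files on the very first launch; that’s harmless.)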


Patreon

It’s hard for me to believe that it’s now been about three-and-a-half years since I started this blog. Over that time it’s come a long way. After beginning with no real direction, I found my forte within the first six months or so, and the blog evolved into the grand history you all know and (hopefully) love. Along the way I like to believe I’ve become a better writer, and I know I’ve become a much better and more thorough researcher, with many more sources at my disposal. I must admit that some of those early articles are a bit painful for me to read now (I really need to do something about that someday). But best of all, I’ve found you folks, a core group of loyal readers who seem to grow by just a little bit every month. It hasn’t been a meteoric rise, but it has been a steady one and a fun one. You’re the best readers anywhere, you know, almost unfailingly polite and witty and perceptive and helpful, and I appreciate each and every one of you enormously. Every writer wants, more than anything else, to know some people out there are reading. Thanks for that!

So, having buttered you up, let’s move on to the real subject of today’s post. After some months of dithering over the question, I’ve decided it’s time to take the next step in my blogging career. As you can probably imagine based on the length and depth of the articles I post, the writing and research for the blog absorb many hours of my time per week. If I can start to bring in a little bit more, and on a more consistent basis, I’ll be able to devote more time to my work here, which will translate directly into more and better articles for you to enjoy. Imagine if you will a sliding scale of hours devoted to computer-gaming history that terminates in my being able to make it my full-time job. I’m afraid I’m a long way from there, may indeed never reach it, but every little bit of income the blog does manage to generate shifts that scale just slightly in a positive direction, resulting in more articles published, more games and other topics covered, and more depth to that coverage.

I’ve therefore decided to add Patreon to the existing PayPal donation system. As many of you probably already know, Patreon is a way for readers like you to support the work of creators like me through something like the old patronage model that used to fund art and literature back in the day. It has the advantage for me of representing a steady income stream I can count on, article by article, whereas one-off donations tend to move through cycles of feast and famine that are impossible to plan for. You need only go to my fresh new Patreon page to sign up. If you do so, you’ll be automatically billed for each substantial article that I write for the blog (i.e., articles like this one are not included). You can decide how much that amount will be. I’m certainly not asking you to break the bank here; a dollar or two per article (or the equivalent in your local currency) is fine, although if any of you love the blog and are flush with cash I certainly wouldn’t balk at more. On the other hand, some of you may want to pay a bit less, maybe just a dollar or two per month. I unfortunately can’t offer monthly and per-article payments simultaneously, but there is a way around it: just set a per-article level of $1 and also set a monthly limit of $1, $2, or whatever you like. This will have the same effect, with the added advantage that you don’t pay anything if I stop blogging for a month for some reason.

Patreon supporters will gain access to a special members area of my Patreon page, where we can interact a bit more informally and where you can have a bit more of a say on certain things that happen around here. I’ll give sneak previews from time to time of upcoming articles, ask for your input on games and topics worthy of coverage, and if there’s interest host occasional meet-ups via Google Hangouts or the like.

The PayPal donation button to the right will not be going away, so if you do still prefer to make a single lump-sum donation by all means feel free. And whether you can contribute financially or not, I could also use your help in one other way. As just about everyone must realize by now, I’m terrible at self-promotion, and worse at social media. So, anything you could do to help me get the word out to potential supporters would be hugely appreciated.

And that’s that, except to say, as Bartles & Jaymes did back in the year of which I’m writing these days, “Thank you for your support.” Next up in the on-deck circle: a certain spacefaring epic.


The Forth Dimension

The Forth programming language reached maturity around 1970 after more than ten years of development and experimentation by its creator, Charles H. Moore. Its first practical use was to control a radio telescope at the National Radio Astronomy Observatory, where Moore was employed at the time. From there Forth spread to other telescopes and other observatories, cementing a connection with astronomy and space science that persists to this day; in addition to controlling countless telescopes and planetariums earthside, Forth has been into space many times on probes and satellites of all descriptions. Yet already by the end of its first decade Forth had spread far beyond astronomical circles. It was being used to control the motorized cameras used to film miniature-based special-effects sequences (suddenly a booming business in the wake of Star Wars); to control automotive diagnostic equipment; as the firmware in various medical devices; to control automated agricultural equipment. Closer to our usual interests, Atari had invested a lot of money into developing a version of the language suitable for programming pinball machines and stand-up arcade games, while versions of the language were available for all of the trinity of 1977 within a year or so of their appearance. The key to Forth’s burgeoning popularity was its efficiency: it not only ran faster than just about any language short of assembly, but in the right hands it was also almost unbelievably stingy with memory. Those were good qualities to have in the late 1970s, when the average PC ran at 1 MHz and had no more than 16 K.

We’ll get into why Forth is so efficient in just a bit. But first let’s take a look at the language itself. If you’ve programmed before in just about any other language, Forth will likely seem deeply, deeply weird. Still, there’s also something kind of beautiful about it. If you’d like to follow along with the few examples I’ll give in this article, you have many free implementations of the language to choose from. A good choice, and one that has the advantage of working on Windows, Macintosh, and Linux alike, is Gforth.

Forth is an interactive programming language, like the microcomputer BASICs so many of us grew up with. This means that you can enter commands directly at the Forth console and watch them run immediately.

Forth is also a stack-based programming language, and this is the key to everything else about it. Virtually every programming language uses a stack under the hood; it’s one of the most fundamental mechanisms of computer science. But most other languages try to hide the stack from us, strain to make it so that we need not trouble ourselves over it and, indeed, don’t really need to know much about it at all. The only time many programmers even hear the word “stack” is when an infinite loop or runaway recursion causes a program to crash with a “stack overflow error.” Forth, however, doesn’t hide its stack away like something shameful. No, Forth loves its stack, sets it front and center for all to see. Forth demands that if we are to love it, we must also love its stack. Given this, it would behoove me at this point to explain just what is meant by the idea of a stack in the first place.

A stack is just that: a stack of numbers stored in a special part of memory, used for storing transient data. Adding a number to the stack is called pushing to the stack. It always goes on top. Taking a number from the stack is called popping the stack. It’s always the top number — i.e., the most recently pushed — that’s popped, after which that number is erased from the stack. A stack is, in other words, a first-in-last-out system — or, if you like, a last-in-first-out system. If you haven’t quite wrapped your head around the idea, don’t sweat it. It should become clearer momentarily.

Let’s look at how we can do simple arithmetic in Forth. Let’s say we want to add 2 and 3 together and print the result. In a typical modern language like Java, we’d just do something like this:

System.out.print(2 + 3);

In Forth, we do it a bit differently. If you’ve started up a Forth environment, you can type this in and see the result immediately.

2 3 + .

If you happened to use a Hewlett-Packard scientific calculator back in the day, this notation might look familiar to you. It’s known as “postfix” or “reverse Polish” notation. Let’s unpack this piece by piece to see how exactly Forth is handling this expression.

The first thing to understand here is that Forth reads almost everything as a word — Forthese for a command. A number standing by itself is actually interpreted as a word, a command to push that number onto the stack. Therefore, assuming we started with an empty stack, the stack looks like this after the first two parts of the expression above have been processed:

3
2

Now the interpreter comes to the “+,” which is also read as a command, instructing it to pop two values off the stack, add them together, and push the result back onto the stack. After doing so, the stack looks like this:

5

Finally, the “.” just instructs the interpreter to pop the top value off the stack and print it.

Now let’s consider a more complicated algebraic expression, like “(4 + 5) * (6 + 7).” In Forth, it would be written like this:

4 5 + 6 7 + * .

Let’s walk through this. We push 4 and 5 onto the stack.

5
4

We pop them off, add them together, and push the result to the stack.

9

We push 6 and 7 onto the stack.

7
6
9

We add them together and push the result.

13
9

We pop the top two values on the stack, multiply them together, and push the result.

117

And finally we pop and print the result.
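Incidentally, if you’re following along in Gforth you don’t have to keep all of this in your head: most modern Forths include the word “.s”, which prints the current contents of the stack without disturbing them. Running the first part of our expression and then taking a peek, for instance:

4 5 + 6 7 + .s

In Gforth this prints something like “<2> 9 13” — the “<2>” being the stack depth, with the values reading from the bottom of the stack to the top. A subsequent “* .” then finishes the job and prints 117 as before.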

To this point we’ve been working interactively. The key to programming in Forth, however, is to define new words; this is Forth’s nearest equivalent to the function calls common to other languages. Let’s consider a function to cube a number, which would look like this in Java:

int cube (int num) {
   return (num * num * num);
}


In Forth, we might do it by entering the following lines at the console:

: CUBE ( N -> N. Cube a number)
   DUP DUP ( Now there are three copies)
   * * ( Get the cube)
;


Let’s again unpack this piece by piece. The colon is a word which tells the interpreter that what follows will be a new word definition, to be terminated by a semicolon. “CUBE” is the name of the new word we are creating. All text within parentheses is a comment, ignored by the interpreter. The “N -> N.” notation within the first set of parentheses is not required, but is considered good practice in Forth programming. It tells us that this word will pop and operate on the current topmost value on the stack, and will push a single result onto the stack. Forth words do not take arguments like functions in other languages, but operate only on the current contents of the stack. Thus it’s the programmer’s responsibility to set the stack up properly before invoking a word, and to know what that word will have done to the stack when it finishes. The two lines in the middle are the meat of the word, the actual instructions it represents.

Let’s say we call this new word “CUBE” with a 5 on top of the stack — i.e., by entering “5 CUBE .” at the console. Thus the initial stack looks like this:

5

Now we’re going into the body of the word itself. The two “DUP” statements tell the interpreter to duplicate the top value on the stack twice, without destroying — i.e., without actually popping — the original value. So, we end up with:

5
5
5

Now we pop the top two values, multiply them together, and push the result.

25
5

Then we just do the same thing again.

125

And our work is done.
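One brief aside before we move on: DUP is just one member of a small standard family of stack-juggling words — SWAP, DROP, OVER, ROT, and friends — on which Forth programmers lean constantly in lieu of named variables. A quick interactive taste, using “.s” again to watch the stack (the results in the comments are roughly what Gforth shows):

1 2 SWAP .s ( Exchange the top two values: <2> 2 1 )
OVER .s ( Copy the second value onto the top: <3> 2 1 2 )
DROP DROP DROP .s ( Discard them one by one: <0> )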

Next we’ll see how we can use this word within another word. But first let’s see how we would do that as a function in Java.

void cubes10() {
   for (int i = 0; i < 10; i++) {
      System.out.print("\n");
      System.out.print(i + " ");
      System.out.print(cube(i));
   }
}


Here it is as a Forth word:

: CUBES10 ( ->. Print a table of cubes of 0-9.)
   10 0 ( Indices of loop)
   DO ( Start Loop)
      CR I . I CUBE . ( Print a number and its cube.)
   LOOP ( End of loop.)
;


As the first comment indicates, the “CUBES10” word expects nothing on the stack and leaves nothing there. We begin by pushing 10 and 0 onto the stack. Now Forth’s back-asswordness really comes to the fore: the “DO” word pops the top two values off the stack. It will increment a variable — always known as “I” — from the second of these (the 0) until it equals the first (the 10), looping each time through the block of words contained between “DO” and “LOOP.” Within the loop, the word “CR” simply causes the cursor to move down to the next line. Keeping in mind that “I” represents the current value of the variable being incremented, which can be pushed and popped just like a constant, the rest should hopefully be comprehensible. The output looks something like this:

0 0
1 1
2 8
3 27
4 64
5 125
6 216
7 343
8 512
9 729

Forth is built entirely from words like the ones we’ve just created. In fact, calling Forth a programming language may be something of a misnomer because virtually every piece of its vocabulary is redefinable. Forth comes with a dictionary of common, useful words, but the programmer is always free to replace these with others of her own devising, to make Forth into whatever she wants it to be. The most basic words are not constructed from other Forth words but rather written as in-line assembly language. The programmer adds words to this base which do ever more complicated tasks, until finally she writes a word that subsumes the entire program. To take an example from Leo Brodie’s classic book Starting Forth (one of Forth’s chief products down through the decades has been horrid puns), a Forth program to control a washing machine might have this as its top-level word:

: WASHER
   WASH SPIN RINSE SPIN
;


Each of the words referenced within “WASHER” would likely call several words of their own. “RINSE,” for instance, might look like this:

: RINSE
   FAUCETS OPEN TILL-FULL FAUCETS CLOSE
;


Each of these words would call still more words of its own, until we come to the level of fundamental assembly-language commands to control the CPU on its most basic level. Forth words can even create new words dynamically, resulting in programs that effectively rewrite themselves as they run to suit their environment.
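To give at least a minimal sketch of that last trick: the standard words “CREATE” and “DOES>” (the modern ANS spelling; some older Forths wrote “<BUILDS … DOES>”) let a word manufacture new words. The defining word below — I’ve called it “NAMED-VALUE” purely for illustration — creates words that push a canned number, in effect a hand-rolled version of Forth’s built-in “CONSTANT”:

: NAMED-VALUE ( N -> . Define a new word that pushes N)
   CREATE , ( Parse a name from the input; store N in the new word's body)
   DOES> @ ( When the new word later runs, fetch that N and push it)
;
42 NAMED-VALUE ANSWER
ANSWER . ( Prints 42)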

Especially if you’re a programmer yourself, you may have already formed an impression by now of Forth’s strengths and weaknesses. And yes, contrary to the claims of many Forth zealots, the latter do exist in considerable numbers. Even leaving aside the strange reverse notation, which one can eventually get used to, Forth programs can be incredibly hard to actually read thanks to their reliance on pushing and popping to the stack, with the associated lack of helpful variable names. For this reason Forth has occasionally been called a “write-only” language; Forth code can be well-nigh incomprehensible even to the person who originally wrote it after just a week or so has elapsed. It’s the polar opposite of a contemporaneous language I once wrote about on this blog, Pascal, replacing the latter’s pedantic emphasis on structure and readability above all else with a love of hackerish tricks, sleight of hand, and cleverness that can sometimes come off as sort of facile. Just trying to keep a picture in your head of the current configuration of the stack, something on which absolutely everything you do in Forth depends, can be a nightmare as programs get more complicated and their possible states get more varied. If not quite the last language in the world I’d use to write a complicated modern application, it must be pretty close to it. Its “write-only” qualities make it particularly unsuitable for team projects, a problem given that most useful software long ago got too complicated for solo programmers.

Yet there’s also an uncompromising beauty about Forth that has drawn many people to it, a beauty that has occasionally been compelling enough to override people’s better judgment and make them use the language for purposes to which it really isn’t terribly suited. Whatever else you can say about it, it sticks to its philosophical guns tenaciously. There’s a fascination to building a dictionary of your own, to effectively making a programming language all your own. Grizzled Forth programmers have often replaced virtually everything that comes with the language to create something that is absolutely theirs. That’s a rare experience indeed in modern programming. People who love Forth really love it. This (in Leo Brodie’s words) “high-level language,” “assembly language,” “operating system,” “set of development tools,” and “software design philosophy” has that rare ability, like my old love the Commodore Amiga, to inspire a level of visceral, emotional commitment that smacks more of romance or religion than practicality.

If we do insist on speaking practically, within certain domains Forth excels. It’s still widely used today in extremely constrained environments where every byte and every processor cycle counts, such as, well, the firmware inside a washing machine. To understand what makes Forth so efficient, we first need to understand that those more readable Java functions I showed you above must ultimately be converted into a form pretty close to the one we see in the Forth versions. By making us meet the computer halfway (or further), Forth eliminates a whole lot of shuffling about that costs precious processor time. A well-written Forth program can actually be smaller than its pure assembly-language equivalent — much less the same program written in some other high-level language — because Forth so emphasizes reusable words. And it can be surprisingly easy to port Forth programs from computer to computer; one need only re-implement that bottommost layer of words in the new machine’s assembly language, and leave the rest alone.

Of course, all of these advantages that make Forth so attractive to programmers working on embedded systems and device firmware today also made it mighty appealing to programmers of ordinary PCs of the late 1970s and 1980s, working as they were under stringent restrictions of their own. For some early PCs Forth was the only language other than the ROM-housed BASIC or assembly language that made any sense at all. Stripped down to its essentials, Forth can be tiny; for example, Cognetics Corporation, a developer we met in a recent article, worked with a version of Forth that fit into just 6 K. Thus Forth enjoyed considerable popularity, with a fair number of games and other commercial software written in the language. John Draper, the legendary “Captain Crunch” who taught Steve Wozniak and Steve Jobs how to phone phreak amidst myriad other hacking accomplishments, was a particular devotee, distributing a Forth development system for the Apple II which he also used to write the II’s first really usable word processor, EasyWriter. Many of the magazines ran columns or extended series on Forth, which was available, and generally in multiple versions, for virtually every remotely viable machine of the era. One British computer, the ill-fated but fascinating Jupiter Ace, even included Forth in ROM in lieu of BASIC. Tellingly, however, as the 1980s wore on and software got more complex Forth became less common amongst commercial application and game developers, even as it retained a dedicated cult of hobbyists who have persisted with the language to this day. According to Charles Moore, this was as it should be. Forth, he told Jerry Pournelle in Byte’s March 1985 issue, had never been intended for closed-source commercial software.

Writing big programs to be distributed in object code is a distortion of what Forth is all about. Forth is like a set of craftsman’s tools. You use it to make still more tools that work with whatever you specialize in. Then you use it to solve problems. Forth programs should always be distributed in source code. You should have Forth online at all times. Recompile whenever you want to use a program. Forth programs are tailored, they’re living and dynamic, not static object code.

“Distortion” or not, the most important Forth game, and arguably the most ambitious project ever completed in the language, would appear more than a year after those remarks. I know I’ve been teasing you with it for a while, but, with all the pieces in place at last, we’ll get there next time… really, I promise.

(Probably the best place to look to get an idea of the excitement Forth once generated, as well as a very good picture of the language itself, is the August 1980 Byte, which had Forth as its main theme. My example code in this article has its origins there, as does the picture.)



Send in the Clones

In computer parlance, a clone is Company B’s copycat version of Company A’s computer that strains to be as software and hardware compatible with its inspiration as possible. For a platform to make an attractive target for cloning, it needs to meet a few criteria. The inspiration needs to be simple and/or well-documented enough that it’s practical for another company — and generally a smaller company at that, with far fewer resources at its disposal — to create a compatible knock-off in the first place. Then the inspiration needs to be successful enough that it’s spawned an attractive ecosystem that lots of people want to be a part of. And finally, there needs to be something preventing said people from joining said ecosystem by, you know, simply buying the machine that’s about to be cloned. Perhaps Company A, believing it has a lock on the market, keeps the price above what many otherwise interested people are willing or able to pay; perhaps Company A has simply neglected to do business in a certain part of the world filled with eager would-be buyers.

Clones have been with us almost from the moment that the trinity of 1977 kicked off the PC revolution in earnest. The TRS-80 was the big early winner of the trio thanks to its relatively low price and wide distribution through thousands of Radio Shack stores, outselling the Apple II in its first months by margins of at least twenty to one (as for the Commodore PET, it was the Bigfoot of the three, occasionally glimpsed in its natural habitat of trade-show booths but never available in a form you could actually put your hands on until well into 1978). The first vibrant, non-business-focused commercial software market in history sprang up around the little Trash 80. Cobbled together on an extreme budget out of generic parts that were literally just lying around at Radio Shack — the “monitor,” for instance, was just a cheap Radio Shack television re-purposed for the role — the TRS-80 was eminently cloneable. Doing so didn’t make a whole lot of sense in North America, where Radio Shack’s volume manufacturing and distribution system would be hard advantages to overcome. But Radio Shack had virtually no presence outside of North America, where there were nevertheless plenty of enthusiasts eager to join the revolution.

A shindig for EACA distributors in Hong Kong. Shortly after this photo was taken, Eric Chung, third from right in front, would abscond with $10 million and that would be that for EACA.

The most prominent of the TRS-80 cloners that had sprung up by 1980 was a rather shady Hong Kong-based company called EACA, who made cheap clones for any region of the world with distributors willing to buy them. Their knock-offs popped up in Europe under the name “The Video Genie”; in Australasia as the “Dick Smith System 80,” distributed under the auspices of Dick Smith Electronics, the region’s closest equivalent to Radio Shack; even in North America as the “Personal Micro Computers PMC-80.” EACA ended in dramatic fashion in 1983 when founder Eric Chung absconded to Taiwan with all of his company’s assets that he could liquefy, $10 million worth, stuffed into his briefcase. He or his descendants are presumably still living the high life there today.

By the time of those events, the TRS-80’s heyday was already well past, its position as the most active and exciting PC platform long since having been assumed by the Apple II, which had begun a surge to the fore in the wake of the II Plus model of 1979. The Apple II was if anything an even more tempting target for cloners than the TRS-80. While Steve Wozniak’s hardware design is justly still remembered as a marvel of compact elegance, it was also built entirely from readily available parts, lacking the complex and difficult-to-duplicate custom chips of competitors like Atari and Commodore. Wozniak had also insisted that every last diode on the Apple II’s circuit board be meticulously documented for the benefit of hackers just like him. And Apple, then as now, maintained some of the highest profit margins in the industry, creating a huge opportunity for a lean-and-mean cloner to undercut them.

A Franklin Ace 1000 mixed and matched with a genuine Apple floppy drive.

Assorted poorly distributed Far Eastern knock-offs aside, the first really viable Apple II clone arrived in mid-1982 in the form of the Franklin Ace line. The most popular model, the Ace 1000, offered complete hardware and software compatibility for about 25 percent less than a II Plus, while also having more memory as well as luxuries like a numeric keypad and upper- and lowercase letter input. The Ace terrified Apple. With the Apple III having turned into a disaster, Apple remained a one-platform company, completely dependent on continuing Apple II sales — and continuing high Apple II profit margins — to fund not one but two hugely ambitious, hugely innovative, and hugely expensive new platform initiatives, Lisa and Macintosh. A viable market in Apple II workalikes that cut seriously into sales or forced price cuts could bring everything down around their ears. Already six months before the Ace actually hit the market, as soon as they got word of Franklin’s plans, Apple’s lawyers were therefore looking for a way to challenge Franklin in court and drive their machine from the market.

As it turned out, the basis for a legal challenge wasn’t hard to find. Yes, the Apple II’s unexceptional hardware would seem to be fair game — but the machine’s systems software was not. Apple quickly confirmed that, like most of the TRS-80 cloners, Franklin had simply copied the contents of the II’s ROM chips; even bugs and the secret messages Apple’s programmers had hidden inside them were still there in Franklin’s versions. A triumphant Apple rushed to federal court to seek a preliminary injunction to keep the Ace off the market until the matter was decided through a trial. Much to their shocked dismay, the District Court for the Eastern District of Pennsylvania found the defense offered by Franklin’s legal team compelling enough to deny the injunction. The Ace came out right on schedule that summer of 1982, to good reviews and excellent sales.

Franklin’s defense sounds almost unbelievable today. They readily admitted that they had simply copied the contents of the ROM chips. They insisted, however, that the binary code contained on the chips, being a machine-generated sequence of 1s and 0s that existed only inside the chips and that couldn’t be reasonably read by a human, was not a form of creative expression and thus not eligible for copyright protection in the first place. In Franklin’s formulation, only the human-readable source code used to create the binary code stored on the ROM chips, which Franklin had no access to and no need for given that they had the binary code, was copyrightable. It was an audacious defense to say the least, one which if accepted would tear down the legal basis for the entire software industry. After all, how long would it take someone to leap to the conclusion that some hot new game, stored only in non-human-readable form on a floppy disk, was also ineligible for copyright protection? Astonishingly, when the case got back to the District Court for a proper trial the judge again sided with Franklin, stating that “there is some doubt as to the copyrightability of the programs described in this litigation,” in spite of an earlier case, Williams Electronics, Inc. v. Arctic International, Inc., which quite clearly had established binary code as copyrightable. Only in August of 1983 was the lower court’s ruling overturned by the Federal Court of Appeals in Philadelphia. A truculent Franklin threatened to appeal to the Supreme Court, but finally agreed to a settlement the following January that demanded they start using their own ROMs if they wanted to keep cloning Apple IIs.

Apple Computer, Inc., v. Franklin Computer Corp. still stands today as a landmark in technology jurisprudence. It firmly and finally established the copyrightable status of software regardless of its form of distribution. And it of course also had an immediate impact on would-be cloners, making their lives much more difficult than before. With everyone now perfectly clear on what was and wasn’t legal, attorney David Grais clarified the process cloners would need to follow to avoid lawsuits in an episode of Computer Chronicles:

You have to have one person prepare a specification of what the program [the systems software] is supposed to do, and have another person who’s never seen the [original] program write a program to do it. If you can persuade a judge that the second fellow didn’t copy from the [original] code, then I think you’ll be pretty safe.

After going through this process, Apple II cloners needed to end up with systems software that behaved absolutely identically to the original. Every system call needed to take the exact same amount of time that it did on a real Apple II; each of the original software’s various little quirks and bugs needed to be meticulously duplicated. Anything less would bring with it incompatibility, because there was absolutely nothing in those ROMs that some enterprising hacker hadn’t used in some crazy, undocumented, unexpected way. This was a high hurdle indeed, one which neither Franklin nor any other Apple II cloner was ever able to completely clear. New Franklins duly debuted with the new, legal ROMs, and duly proved to be much less compatible and thus much less desirable than the older models. Franklin left the Apple-cloning business within a few years in favor of hand-held dictionaries and thesauri.

There is, however, still another platform to consider, one on which the cloners would be markedly more successful: the IBM PC. The open or (better said) modular architecture of the IBM PC was not, as so many popular histories have claimed, a sign of a panicked or slapdash design process. It was rather simply the way that IBM did business. Back in the 1960s the company had revolutionized the world of mainframe computing with the IBM System/360, not a single computer model but a whole extended family of hardware and software designed to plug and play together in whatever combination best suited a customer’s needs. It was this product line, the most successful in IBM’s history, that propelled them to the position of absolute dominance of big corporate computing that they still enjoyed in the 1980s, and that reduced formerly proud competitors to playing within the house IBM had built by becoming humble “Plug-Compatible Manufacturers” selling peripherals that IBM hadn’t deigned to provide — or, just as frequently, selling clones of IBM’s products for lower prices. Still, the combined profits of all the cloners remained always far less than those of IBM itself; it seemed that lots of businesses wanted the security that IBM’s stellar reputation guaranteed, and were willing to pay a bit extra for it. IBM may have thought the PC market would play out the same way. If so, they were in for a rude surprise.

The IBM PC was also envisioned as not so much a computer as the cornerstone of an ever-evolving, interoperable computing family that could live for years or decades. Within three years of the original machine’s launch, you could already choose from two CPUs, the original Intel 8088 or the new 80286; could install as little as 16 K of memory or as much as 640 K; could choose among four different display cards, from the text-only Monochrome Display Adapter to the complicated and expensive CAD-oriented Professional Graphics Controller; could choose from a huge variety of other peripherals: floppy and hard disks, tape backup units, modems, printer interfaces, etc. The unifying common denominator amongst all this was a common operating system, MS-DOS, which had quickly established itself as the only one of the four operating paradigms supported by the original IBM PC that anyone actually used. Here we do see a key difference between the System/360 and the IBM PC, one destined to cause IBM much chagrin: whereas the former ran an in-house-developed IBM operating system, the operating system of the latter belonged to Microsoft.

The IBM architecture was different from that of the Apple II in that its operating system resided on disk, to be booted into memory at system startup, rather than being housed in ROM. Still, every computer needs to have some code in ROM. On an IBM PC, this code was known as the “Basic Input/Output System,” or BIOS, a nomenclature borrowed from the CP/M-based machines that preceded it. The BIOS was responsible on startup for doing some self-checks and configuration and booting the operating system from disk. It also contained a set of very basic, very low-level routines to do things like read from and write to the disks, detect keyboard input, or display text on the screen; these would be called constantly by MS-DOS and, very commonly, by applications as well while the machine was in operation. The BIOS was the one piece of software for the IBM PC that IBM themselves had written and owned, and for obvious reasons they weren’t inclined to share it with anyone else. Two small companies, Corona Labs and Eagle Computer, would simply copy IBM’s BIOS à la Franklin. It took the larger company all of one day to file suit and force complete capitulation and market withdrawal when those machines came to their attention in early 1984.

Long before those events, other wiser would-be cloners recognized that creating a workalike, “clean-room” version of IBM’s BIOS would be the key to executing a legal IBM clone. The IBM PC’s emphasis on modularity and future expansion meant that it was a bit more forgiving in this area than the likes of the more tightly integrated Apple II. Yet an IBM-compatible BIOS would still be a tricky business, fraught with technical and financial risk.

As the IBM PC was beginning to ship, a trio of Texas Instruments executives named Rod Canion, James Harris, and William Murto were kicking around ideas for getting out from under what they saw as a growing culture of non-innovation inside TI. Eager to start a business of their own, they considered everything from a Mexican restaurant to household gadgets like a beeper for finding lost keys. Eventually they started to ask what the people around them at TI wanted but weren’t getting in their professional lives. They soon had their answer: a usable portable computer that executives and engineers could cart around with them on the road, and that was cheap enough that their purchasing managers wouldn’t balk. Other companies had explored this realm before, most notably the short-lived Osborne Computer with the Osborne 1, but those products had fallen down badly in the usability sweepstakes; the Osborne 1, for example, had a 5-inch display screen the mere thought of which could prompt severe eye strain in those with any experience with the machine, disk drives that could store all of 91 K, and just 64 K of memory. Importantly, all of those older portables ran CP/M, until now the standard for business computing. Canion, Harris, and Murto guessed, correctly, that CP/M’s days were numbered in the wake of IBM’s adoption of MS-DOS. Not wanting to be tied to a dying operating system, they first considered making their own. Yet when they polled the big software publishers about their interest in developing for yet another new, incompatible machine the results were not encouraging. There was only one thing for it: they must find a way to make their portable compatible with the IBM PC. If they could bring out such a machine before IBM did, the spoils could be enormous. Prominent tech venture capitalist Ben Rosen agreed, investing $2.5 million to help found Compaq Computer Corporation in February of 1982. What with solid funding and their own connections within the industry, Canion, Harris, and Murto thought they could easily design a hardware-compatible portable that was better than anything else available at the time. That just left the software side.

Given Bill Gates’s reputation as the Machiavelli of the computer industry, we perhaps shouldn’t be surprised that some journalists have credited him with anticipating the rise of PC clones from well before the release of the first IBM PC. That, however, is not the case. All indications are that Gates negotiated a deal that let Microsoft license MS-DOS to IBM rather than sell it to them outright simply in the expectation that the IBM PC would be a big success, enough so that an ongoing licensing fee would amount to far more than a lump-sum payout in the long run. Thus he was as surprised as anyone when Compaq and a few other early would-be cloners contacted him to negotiate MS-DOS license deals for their own machines. Of course, Gates being Gates, it took him all of about ten minutes to grasp the implications of what was being requested, and to start making deals that, not incidentally, actually paid considerably better than the one he’d already made with IBM.

The BIOS would be a tougher nut to crack, the beachhead on which this invasion of Big Blue’s turf would succeed or fail. Having quickly concluded that simply copying IBM’s ROMs wasn’t a wise option, Compaq hired a staff of fifteen programmers who would dedicate the months to come to creating a slavish imitation. Programmers with any familiarity at all with the IBM BIOS were known as “dirty,” and barred from working on the project. Instead of relying on IBM’s published BIOS specifications (which might very well be incorrect due to oversight or skulduggery), the team took the thirty biggest applications on the market and worked through them one at a time, analyzing each BIOS call each program made and figuring out through trial and error what response it needed to receive. The two trickiest programs, which would go on to become a sort of stress test for clone compatibility both inside and outside of Compaq, proved to be Lotus 1-2-3 and Microsoft Flight Simulator.

Before the end of the year, Compaq was previewing their new portable to press and public and working hard to set up a strong dealer network. For the latter task they indulged in a bit of headhunting: they hired away from IBM H. L. “Sparky” Sparks, the man who had set up the IBM PC dealer network. Knowing all too well how dealers thought and what was most important to them, Sparks instituted a standard expected dealer markup of 36 percent, versus the 33 percent offered by IBM, thus giving them every reason to look hard at whether a Compaq might meet a customer’s needs just as well or better than a machine from Big Blue.

Compaq’s first computer, the Portable.

Savvy business realpolitik like that became a hallmark of Compaq. Previously clones had been the purview of small upstarts, often with a distinct air of the fly-by-night about them. The suburban-Houston-based Compaq, though, was different, not only from other cloners but also from the established companies of Silicon Valley. Compaq was older, more conservative, interested in changing the world only to the extent that that meant more Compaq computers on desks and in airplane luggage racks. “I don’t think you could get a 20-year-old to not try to satisfy his ego by ‘improving’ on IBM,” said J. Steven Flannigan, the man who led the BIOS reverse-engineering effort. “When you’re fat, balding, and 40, and have a lot of patents already, you don’t have to try.” That attitude was something corporate purchasing managers could understand. Indeed, Compaq bore with it quite a lot of the same sense of comforting stolidity as did IBM itself. Not quite the first to hit the market with an IBM clone with a “clean” BIOS (that honor likely belongs to Columbia Data Products, a much scruffier sort of operation that would be out of business by 1985), Compaq nevertheless legitimized the notion in the eyes of corporate America.

The worst possible 1980s airplane seatmate: a business traveler lugging along a Compaq Portable.

Yet the Compaq Portable that started shipping very early in 1983 also succeeded because it was an excellent and — Flannigan’s sentiments aside — innovative product. By coming out with their portable before IBM itself, Compaq showed that clones need not be mere slavish imitations of their inspirations distinguished only by a lower price. “Portable” in 1983 did not, mind you, mean what it does today. The Compaq Portable was bigger and heavier — a full 28 pounds — than most desktop machines of today, something you manhandled around like a suitcase rather than slipping into a pocket or backpack. There wasn’t even a battery in the thing, meaning the businessperson on the go would likely be doing her “portable” computing only in her hotel room. Still, it was very thoughtfully designed within the technical constraints of its era; you could for instance attach it to a real monitor at your desk to enjoy color graphics in lieu of the little 9-inch monochrome screen that came built-in, a first step on the road to the ubiquitous laptop docking stations of today.

Launching fortuitously just as some manufacturing snafus and unexpected demand for the new PC/XT were making IBM’s own computers hard to secure in some places, the Compaq Portable took off like a rocket. Compaq sold 53,000 of them for $111 million in sales that first year, a record for a technology startup. IBM, suddenly in the unaccustomed position of playing catch-up, released their own portable the following year with fewer features but — and this was truly shocking — a lower price than the Compaq Portable; by forcing high-and-mighty IBM to compete on price, Compaq seemed to have somehow turned the world on its head. The IBM Portable PC was a notable commercial failure, first sign of IBM’s loosening grip on the monster they had birthed. Meanwhile Compaq launched their own head-to-head challenge that same year with the DeskPro line of desktop machines, to much greater success. Apple may have been attacking IBM in melodramatic propaganda films and declaring themselves and IBM to be locked in a battle of Good versus Evil, but IBM hardly seemed to notice the would-be Apple freedom fighters. The only company that really mattered to IBM, the only company that scared them, wasn’t sexy Apple but buttoned-down, square-jawed Compaq.

But Compaq was actually far from IBM’s only problem. Cloning just kept getting easier, for everyone. In the spring of 1984 two little companies called Award Software and Phoenix Technologies announced identical products almost simultaneously: a reverse-engineered, completely legal IBM-compatible BIOS which they would license to anyone who felt like using it to make a clone. Plenty of companies did, catapulting Award and Phoenix to the top of what was soon a booming niche industry (they would eventually resolve their rivalry the way that civilized businesspeople do it, by merging). With the one significant difficulty of cloning thus removed, making a new clone became almost a triviality, a matter of ordering up a handful of components along with MS-DOS and an off-the-shelf BIOS, slapping it all together, and shoving it out the door; the ambitious hobbyist could even do it in her home if she liked. By 1986, considerably more clones were being sold than IBMs, whose own sales were stagnant or even decreasing.

That year Intel started producing the 80386, the third generation of the line of CPUs that powered the IBM PC and its clones. IBM elected to wait a bit before making use of it, judging that the second-generation 80286, which they had incorporated into the very successful PC/AT in 1984, was still plenty powerful for the time being. It was a bad decision, predicated on a degree of dominance which IBM no longer enjoyed. Smelling opportunity, Compaq made their own 80386-based machine, the DeskPro 386, the first to sport the hot new chip. Prior to this machine, the cloners had always been content to let IBM pave the way for such fundamental advances. The DeskPro 386 marks Compaq’s — and the clone industry’s — coming of age. No longer just floating along in the wake of IBM, tinkering with form factors, prices, and feature sets, now they were driving events. Already in November of 1985, Bill Machrone of PC Magazine had seen where this was leading: “Now that it [IBM] has created the market, the market doesn’t necessarily need IBM for the machines.” We see here business computing going through its second fundamental shift (the first being the transition from CP/M to MS-DOS). What was an ecosystem of IBM and IBM clones now became a set of sometimes less-than-ideal, sometimes accidental, but nevertheless agreed-upon standards that were bigger than IBM or anyone else. IBM, Machrone wrote, “had better conform” to the standards or face the consequences just like anyone else. Tellingly, it’s at about this time that we see the phrase “IBM clone” begin to fade, to be replaced by “MS-DOS machine” or “Intel-based machine.”

The emerging Microsoft/Intel juggernaut (note the lack of an “IBM” in there) would eventually conquer the home as well. Already by the mid-1980s certain specimens of the breed were beginning to manifest features that could make them attractive for the home user. Let’s rewind just slightly to look at the most important of these machines, one that I’ve mentioned in a couple of earlier articles but have never really given its full due.

When the folks at Radio Shack, trying to figure out what to do with their aging, fading TRS-80 line, looked at the ill-fated IBM PCjr, they saw things well worth salvaging in its 16-color graphics chip and its three-voice sound synthesizer, both far superior to the versions found in its big brothers. Why not clone those pieces, package them into an otherwise fairly conventional PC clone, and sell the end result as the perfect all-around computer, one which could run all the critical business applications but could also play games in the style to which kids with Commodore 64s were accustomed? Thanks to the hype that had accompanied the PCjr’s launch, there were plenty of publishers out there with huge inventories of games and other software that supported the PCjr’s audiovisuals, inventories they’d be only too eager to unload on Radio Shack cheap. With those titles to prime the pump, who knew where things might go…

Launched in late 1984, the Tandy 1000 was the first IBM clone to be clearly pitched not so much at business as at the ordinary consumer. In addition to the audiovisual enhancements and very aggressive pricing, it included DeskMate, a sort of proto-GUI operating environment designed to insulate the user from the cryptic MS-DOS command prompt while giving access to six typical home applications that came built right in. A brilliant little idea all the way around, the Tandy 1000 rescued Radio Shack from the brink of computing irrelevance. It also proved a godsend for many software publishers who’d bet big on the PCjr; John Williams credits it with literally saving Sierra by providing a market for King’s Quest, a game Sierra had developed for the PCjr at horrendous expense, only to watch it sell underwhelmingly amid that platform’s commercial failure. Indeed, the Tandy 1000 became so popular that it prompted lots of game publishers to take a second look at the heretofore dull beige world of the clones. As they jumped aboard the MS-DOS gravy train, many made sure to take advantage of the Tandy 1000’s audiovisual enhancements. Thousands of titles would eventually blurb what became known as “Tandy graphics support” on their boxes and advertisements. The business market secured, the Intel/Microsoft architecture’s longer, more twisting road to hegemony over home computing began in earnest with the Tandy 1000. And meanwhile poor IBM couldn’t even get proper credit for the graphics standard they’d actually invented. Sometimes you just can’t win for losing.

Another sign of the nascent but inexorably growing power of Intel/Microsoft in the home would come soon after the Tandy 1000, with the arrival of the first game to make many Apple, Atari, and Commodore owners wish that they had a Tandy 1000 or, indeed, even one of its less colorful relatives. We’ll get to that soon — no, really! — but first we have just one more detour to take.

(I was spoiled for choice on sources this time. A quick rundown of periodicals: Creative Computing of January 1983; Byte of January 1983, November 1984, and August 1985; PC Magazine of January 1987; New York Times of November 5 1982, October 26 1983, January 5 1984, February 1 1984, and February 22 1984; Fortune of February 18 1985. Computer Wars by Charles H. Ferguson and Charles R. Morris is a pretty complete book-length study of IBM’s trials and tribulations during this period. More information on the EACA clones can be found at Terry Stewart’s site. More on Compaq’s roots in Houston can be found at the Texas State Historical Association. A few more invaluable links are included in the article proper.)
