I think [the] Macintosh accomplished everything we set out to do and more, even though it reaches most people these days as Windows.
— Andy Hertzfeld (original Apple Macintosh systems programmer), 1994
When rumors first began to circulate early in 1991 that IBM and Apple were involved in high-level talks about a major joint initiative, most people dismissed them outright. It was, after all, hard to imagine two companies in the same industry with more diametrically opposed corporate cultures. IBM was Big Blue, a bedrock of American business since the 1920s. Conservative and pragmatic to a fault, it was a Brylcreemed bastion of tradition where casual days meant that employees might remove their jackets to reveal the starched white shirts they wore underneath. Apple, on the other hand, had been founded just fifteen years before by two long-haired children of the counterculture, and its campus still looked more like Woodstock than Wall Street. IBM placed great stock in the character of its workforce; Apple, as journalist Michael S. Malone would later put it in his delightfully arch book Infinite Loop, “seemed to have no character, but only an attitude, a style, a collection of mannerisms.” IBM talked about enterprise integration and system interoperability; Apple prattled on endlessly about changing the world. IBM played Lawrence Welk at corporate get-togethers; Apple preferred the Beatles. (It was an open secret that the name the company shared with the Beatles’ old record label wasn’t coincidental.)
Unsurprisingly, the two companies didn’t like each other very much. Apple in particular had been self-consciously defining itself for years as the sworn enemy of IBM and everything it represented. When Apple had greeted the belated arrival of the IBM PC in 1981 with a full-page magazine advertisement bidding Big Blue “welcome, seriously,” it had been hard to read as anything other than snarky sarcasm. And then, and most famously, had come the “1984” television advertisement to mark the debut of the Macintosh, in which Apple was personified as a hammer-throwing freedom fighter toppling a totalitarian corporate titan — Big Blue recast as Big Brother. What would the rumor-mongers be saying next? That cats would lie down with dogs? That the Russians would tell the Americans they’d given up on the whole communism thing and would like to be friends… oh, wait. It was a strange moment in history. Why not this too, then?
Indeed, when one looked a little harder, a partnership began to make at least a certain degree of sense. Apple’s rhetoric had actually softened considerably since those heady early days of the Macintosh and the acrimonious departure of Steve Jobs which had marked their ending. In the time since, more sober minds at the company had come to realize that insulting conservative corporate customers with money to spend on Apple’s pricey hardware might be counter-productive. Most of all, though, both companies found themselves in strikingly similar binds as the 1990s got underway. After soaring to rarefied heights during the early and middle years of the previous decade, they were now being judged by an increasing number of pundits as the two biggest losers of the last few years of computing history. In the face of the juggernaut that was Microsoft Windows, that irresistible force which nothing in the world of computing could seem to defy for long, it didn’t seem totally out of line to ask whether there even was a future for IBM or Apple. Seen in this light, the pithy clichés practically wrote themselves: “the enemy of my enemy is my friend”; “any port in a storm”; etc. Other, somewhat less generous commentators just talked about an alliance of losers.
Each of the two losers had gotten to this juncture by a uniquely circuitous route.
When IBM released the IBM PC, their first mass-market microcomputer, in August of 1981, they were as surprised as anyone by the way it took off. Even as hackers dismissed it as boring and unimaginative, corporate America couldn’t get enough of the thing; a boring and unimaginative personal computer — i.e., a safe one — was exactly what they had been waiting for. IBM’s profits skyrocketed during the next several years, and the pundits lined up to praise the management of this old, enormous company for having the flexibility and wherewithal to capitalize on an emerging new market; a tap-dancing elephant became the metaphor of choice.
And yet, like so many great successes, the IBM PC bore the seeds of its downfall within it from the start. It was a simple, robust machine, easy to duplicate by plugging together readily available commodity components — a process made even easier by IBM’s commitment to scrupulously documenting every last detail of its design for all and sundry. Further, IBM had made the mistake of licensing its operating system from a small company known as Microsoft rather than buying it outright or writing one of their own, and Bill Gates, Microsoft’s Machiavellian CEO, proved more than happy to license MS-DOS to anyone else who wanted it as well. The danger signs could already be seen in 1982, when an upstart company called Compaq released a “portable” version of IBM’s computer — in those days, this meant a computer which could be packed into a single suitcase — before IBM themselves could get around to it. A more dramatic tipping point arrived in 1986, when the same company made a PC clone built around Intel’s hot new 80386 CPU before IBM managed to do so.
In 1987, IBM responded to the multiplying ranks of the clone makers by introducing the PS/2 line, which came complete with a new, proprietary bus architecture, locked up tight this time inside a cage of patents and legalese. A cynical move on the face of it, it backfired spectacularly in practice. Smelling the overweening corporate arrogance positively billowing out of the PS/2 lineup, many began to ask themselves for the first time whether the industry still needed IBM at all. And the answer they often came to was not the one IBM would have preferred. IBM’s new bus architecture slowly died on the vine, while the erstwhile clone makers put together committees to define new standards of their own which evolved the design IBM had originated in more open, commonsense ways. In short, IBM lost control of the very platform they had created. By 1990, the words “PC clone” were falling out of common usage, to be replaced by talk of the “Wintel Standard.” The new standard bearer, the closest equivalent to IBM in this new world order, was Microsoft, who continued to license MS-DOS and Windows, the software that allowed all of these machines from all of these diverse manufacturers to run the same applications, to anyone willing to pay for it. Meanwhile OS/2, IBM’s mostly-compatible alternative operating system, was struggling mightily; it would never manage to cross the hump into true mass-market acceptance.
Apple’s fall from grace had been less dizzying in some ways, but the position it had left them in was almost as frustrating.
After Steve Jobs walked away from Apple in September of 1985, leaving behind the Macintosh, his twenty-month-old dream machine, the more sober-minded caretakers who succeeded him did many of the reasonable, sober-minded things which their dogmatic predecessor had refused to allow: opening the Mac up for expansion, adding much-requested arrow keys to its keyboard, toning down the revolutionary rhetoric that spooked corporate America so badly. These things, combined with the Apple LaserWriter laser printer, Aldus PageMaker software, and the desktop-publishing niche they spawned between them, saved the odd little machine from oblivion. Yet something did seem to get lost in the process. Although the Mac remained a paragon of vision in computing in many ways — HyperCard alone proved that! — Apple’s management could sometimes seem more interested in competing head-to-head with PC clones for space on the desks of secretaries than nurturing the original dream of the Macintosh as the creative, friendly, fun personal computer for the rest of us.
In fact, this period of Apple’s history must strike anyone familiar with the company of today — or, for that matter, with the company that existed before Steve Jobs’s departure — as just plain weird. Quibbles about character versus attitude aside, Apple’s most notable strength down through the years has been a peerless sense of self, which they have used to carve out their own uniquely stylish image in the ofttimes bland world of computing. How odd, then, to see the Apple of this period almost willfully trying to become the one thing neither the zealots nor the detractors have ever seen them as: just another maker of computer hardware. They flooded the market with more models than even the most dutiful fans could keep up with, none of them evincing the flair for design that marks the Macs of earlier or later eras. Their computers’ bland cases were matched with bland names like “Performa” or “Quadra” — names which all too easily could have come out of Compaq or (gasp!) IBM rather than Apple. Even the tight coupling of hardware and software into a single integrated user experience, another staple of Apple computing before and after, threatened to disappear, as CEO John Sculley took to calling Apple a “software company” and intimated that he might be willing to license MacOS to other manufacturers in the way that Microsoft did MS-DOS and Windows. At the same time, in a bid to protect the software crown jewels, he launched a prohibitively expensive and ethically and practically ill-advised lawsuit against Microsoft for copying MacOS’s “look and feel” in Windows.
Apple’s attempts to woo corporate America by acting just as bland and conventional as everyone else bore little fruit; the Macintosh itself remained too incompatible, too expensive, and too indelibly strange to lure cautious purchasing managers into the fold. Meanwhile Apple’s prices remained too high for any but the most well-heeled private users. And so the Mac soldiered on with a 5 to 10 percent market share, buoyed by a fanatically loyal user base who still saw revolutionary potential in it, even as they complained about how many of its ideas Microsoft and others had stolen. Admittedly, their numbers were not insignificant: there were about three and a half million members of the Macintosh family by 1990. They were enough to keep Apple afloat and basically profitable, at least for now, but already by the early 1990s most new Macs were being sold “within the family,” as it were. The Mac became known as the platform where the visionaries tried things out; if said things proved promising, they then reached the masses in the form of Windows implementations. CD-ROM, the most exciting new technology of the early 1990s, was typical. The Mac pioneered this space; Mediagenic’s The Manhole, the very first CD-ROM entertainment product, shipped first on that platform. Yet most of the people who heard the hype and went out to buy a “multimedia PC” in the years that followed brought home a Wintel machine. The Mac was a sort of aspirational showpiece platform; in defiance of the Mac’s old “computer for the rest of us” tagline, Windows was the place where the majority of ordinary people did ordinary things.
The state of MacOS added weight to these showhorse-versus-workhorse stereotypes. Its latest incarnation, known as System 6, had fallen alarmingly behind the state of the art in computing by 1990. Once one looked beyond its famously intuitive and elegant user interface, one found that it lacked robust support for multitasking; lacked any way to address more than 8 MB of memory; lacked the virtual memory that would allow users to open more and larger applications than the physical memory allowed; lacked the memory protection that could prevent errant applications from taking down the whole system. Having been baked into many of the operating system’s core assumptions from the start — MacOS had originally been designed to run on a machine with no hard drive and just 128 K of memory — these limitations were infuriatingly difficult to remedy after the fact. Thus Apple struggled mightily with the creation of a System 7, their attempt to do just that. When System 7 finally shipped in May of 1991, two years after Apple had initially promised it would, it still lagged behind Windows under the hood in some ways: for example, it still lacked comprehensive memory protection.
The problems which dogged the Macintosh were typical of any computing platform that attempts to survive beyond the technological era which spawned it. Keeping up with the times means hacking and kludging the original vision, as efficiency and technical elegance give way to the need just to make it work, by hook or by crook. The original Mac design team had been given the rare privilege of forgetting about backward compatibility — given permission to build something truly new and “insanely great,” as Steve Jobs had so memorably put it. That, needless to say, was no longer an option. Every decision at Apple must now be made with an eye toward all of the software that had been written for the Mac in the past seven years or so. People depended on it now, which sharply limited the ways in which it could be changed; any new idea that wasn’t compatible with what had come before was an ipso-facto nonstarter. Apple’s clever programmers doubtless could have made a faster, more stable, all-around better operating system than System 7 if they had only had free rein to do so. But that was pie-in-the-sky talk.
Yet the most pressing of all the technical problems confronting the Macintosh as it aged involved its hardware rather than its software. Back in 1984, the design team had hitched their wagon to the slickest, sexiest new CPU in the industry at the time: the Motorola 68000. And for several years, they had no cause to regret that decision. The 68000 and its successor models in the same family were wonderful little chips — elegant enough to live up to even the Macintosh ideal of elegance, an absolute joy to program. Even today, many an old-timer will happily wax rhapsodic about them if given half a chance. (Few, for the record, have similarly fond memories of Intel’s chips.)
But Motorola was both a smaller and a more diversified company than Intel, the international titan of chip-making. As time went on, they found it more and more difficult to keep up with the pace set by their rival. Lacking the same cutting-edge fabrication facilities, it was hard for them to pack as many circuits into the same amount of space. Matters began to come to a head in 1989, when Intel released the 80486, a chip for which Motorola had nothing remotely comparable. Motorola’s response finally arrived in the form of the roughly-equivalent-in-horsepower 68040 — but not until more than a year later, and even then their chip was plagued by poor heat dissipation and heavy power consumption in many scenarios. Worse, word had it that Motorola was getting ready to give up on the whole 68000 line; they simply didn’t believe they could continue to compete head-to-head with Intel in this arena. One can hardly overstate how terrifying this prospect was for Apple. An end to the 68000 line must seemingly mean the end of the Macintosh, at least as everyone knew it; MacOS, along with every application ever written for the platform, were inextricably bound to the 68000. Small wonder that John Sculley started talking about Apple as a “software company.” It looked like their hardware might be going away, whether they liked it or not.
Motorola was, however, peddling an alternative to the 68000 line, embodying one of the biggest buzzwords in computer-science circles at the time: “RISC,” short for “Reduced Instruction Set Computer.” Both the Intel x86 line and the Motorola 68000 line were what had been retroactively named “CISC,” or “Complex Instruction Set Computers”: CPUs whose set of core opcodes — i.e., the set of low-level commands by which they could be directly programmed — grew constantly bigger and more baroque over time. RISC chips, on the other hand, pared their opcodes down to the bone, to only those commands which they absolutely, positively could not exist without. This made them less pleasant for a human programmer to code for — but then, the vast majority of programmers were working by now in high-level languages rather than directly controlling the CPU in assembly language anyway. And it generally made programs written for them bigger, however those programs were produced — but then, most people by 1990 were willing to trade a bit more memory usage for extra speed. To compensate for these disadvantages, RISC chips could be simpler in terms of circuitry than CISC chips of equivalent power, making them cheaper and easier to manufacture. They also demanded less energy and produced less heat — the computer engineer’s greatest enemy — at equivalent clock speeds. As yet, only one RISC chip was serving as the CPU in mass-market personal computers: the ARM chip, used in the machines of the British PC maker Acorn, which weren’t even sold in the United States. Nevertheless, Motorola believed RISC’s time had come. By switching to RISC, they wouldn’t need to match Intel in terms of transistors per square millimeter to produce chips of equal or greater speed. Indeed, they’d already made a RISC CPU of their own, called the 88000, in which they were eager to interest Apple.
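To make the trade-off concrete, here is a sketch in C of how the two philosophies might handle a single high-level statement. The instruction sequences in the comments are schematic inventions for the sake of illustration, not real 68000 or PowerPC assembly:

    /* The CISC-versus-RISC trade-off in miniature. The "assembly" in the
     * comments below is invented for clarity; it is not real 68000 or
     * PowerPC code. */
    #include <stdio.h>

    int main(void)
    {
        int a, b = 2, c = 3;

        /* A CISC machine might encode the statement below as one rich
         * instruction, perhaps operating directly on memory:
         *
         *     ADD   b, c, a        ; one complex opcode does it all
         *
         * A RISC machine breaks the same work into several simple,
         * fixed-format instructions. The program gets longer, but each
         * step is trivial to decode and cheap to execute:
         *
         *     LOAD  r1, b
         *     LOAD  r2, c
         *     ADD   r3, r1, r2
         *     STORE r3, a
         */
        a = b + c;

        printf("%d\n", a);   /* prints 5 under either philosophy */
        return 0;
    }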
They found a receptive audience among Apple’s programmers and engineers, who loved Motorola’s general design aesthetic. Already by the spring of 1990, Apple had launched two separate internal projects to study the possibilities for RISC in general and the 88000 in particular. One, known as Project Jaguar, envisioned a clean break with the past, in the form of a brand new computer that would be so amazing that people would be willing to accept that none of their existing software would run on it. The other, known as Project Cognac, studied whether it might be possible to port the existing MacOS to the new architecture, and then — and this was the really tricky part — find a way to make existing applications which had been compiled for a 68000-based Mac run unchanged on the new machine.
At first, the only viable option for doing so seemed to be a sort of Frankenstein’s monster of a computer, containing both an 88000- and a 68000-series CPU. The operating system would boot and run on the 88000, but when the user started an application written for an older, 68000-based Mac, it would be automatically kicked over to the secondary CPU. Within a few years, so the thinking went, all existing users would upgrade to the newer models, all current software would get recompiled to run natively on the RISC chip, and the 68000 could go away. Still, no one was all that excited by this approach; it seemed the worst Macintosh kludge yet, the very antithesis of what the machine was supposed to be.
A eureka moment came in late 1990, with the discovery of what Cognac project leader Jack McHenry came to call the “90/10 Rule.” Running profilers on typical applications, his team found that in the case of many or most of them it was the operating system, not the application itself, that consumed 90 percent or more of the CPU cycles. This was an artifact — for once, a positive one! — of the original MacOS design, which offered programmers an unprecedentedly rich interface toolbox meant to make coding as quick and easy as possible and, just as importantly, to give all applications a uniform look and feel. Thus an application simply asked for a menu containing a list of entries; it was then the operating system that did all the work of setting it up, monitoring it, and reporting back to the application when the user chose something from it. Ditto buttons, dialog boxes, etc. Even something as CPU-intensive as video playback generally happened through the operating system’s QuickTime library rather than inside the application that invoked it.
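A fragment of code can show just how lopsided this division of labor was. The following sketch uses the classic Macintosh Toolbox’s Menu Manager, written as it might have been in a C compiler of the era such as THINK C; it is a simplified illustration rather than a complete program, which would also need to initialize the various Toolbox managers and run a proper event loop. Note how little the application itself actually does:

    /* A sketch of the 90/10 division of labor, using the classic Mac
     * Toolbox's Menu Manager. */
    #include <Menus.h>
    #include <Events.h>

    void SetUpMenus(void)
    {
        /* The application asks the OS for a menu and hands it the
         * entries (the metacharacters set up command-key shortcuts)... */
        MenuHandle fileMenu = NewMenu(128, "\pFile");
        AppendMenu(fileMenu, "\pNew/N;Open/O;Quit/Q");
        InsertMenu(fileMenu, 0);
        DrawMenuBar();   /* ...and the OS draws the whole menu bar. */
    }

    /* When the user clicks in the menu bar, MenuSelect() runs the entire
     * interaction (highlighting, tracking, dismissal) inside the
     * operating system, returning only the menu ID and item number of
     * the user's final choice. */
    long HandleMenuClick(Point where)
    {
        return MenuSelect(where);
    }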
All of this meant that it ought to be feasible to emulate the 68000 entirely in software. The 68000 code would necessarily run slowly and inefficiently through emulation, wiping out all of the speed advantages of the new chip and then some. Yet for many or most applications the emulator would only need to be used about 10 percent of the time. The other 90 percent of the time, when the operating system itself was doing things at native speed, would more than make up for it. In due course, applications would get recompiled and the need for 68000 emulation would largely go away. But in the meanwhile, it could provide a vital bridge between the past and the future — a next-generation Mac that wouldn’t break continuity with the old one, all with a minimum of complication, for Apple’s users and for their hardware engineers alike. By mid-1991, Project Cognac had an 88000-powered prototype that could run a RISC-based MacOS and legacy Mac applications together.
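The core mechanism of such an emulator is simple enough to sketch. The toy below, written in modern C purely for illustration, runs an invented three-instruction “machine” through the same basic fetch-decode-execute loop that any software CPU emulator is built around; Apple’s production 68000 emulator was, needless to say, vastly more elaborate and painstakingly hand-optimized:

    /* A drastically simplified software CPU emulator: fetch each foreign
     * opcode, decode it, and perform the equivalent work on the host.
     * The opcodes here are invented for illustration, not real 68000 ones. */
    #include <stdio.h>
    #include <stdint.h>

    enum { OP_LOADI, OP_ADD, OP_HALT };   /* the toy machine's instruction set */

    static int32_t regs[8];               /* stand-ins for the 68000's data registers */

    static void emulate(const uint8_t *code)
    {
        const uint8_t *pc = code;         /* the emulated program counter */
        for (;;) {
            switch (*pc++) {              /* fetch and decode one opcode */
            case OP_LOADI: {              /* LOADI reg, value: reg = value */
                uint8_t r = *pc++;
                regs[r] = *pc++;
                break;
            }
            case OP_ADD: {                /* ADD dest, src: dest += src */
                uint8_t d = *pc++;
                uint8_t s = *pc++;
                regs[d] += regs[s];
                break;
            }
            case OP_HALT:
                return;
            }
        }
    }

    int main(void)
    {
        const uint8_t program[] = {
            OP_LOADI, 0, 2,               /* r0 = 2 */
            OP_LOADI, 1, 3,               /* r1 = 3 */
            OP_ADD,   0, 1,               /* r0 += r1 */
            OP_HALT
        };
        emulate(program);
        printf("r0 = %d\n", (int)regs[0]);  /* prints "r0 = 5" */
        return 0;
    }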
And yet this wasn’t to be the final form of the RISC-based Macintosh. For, just a few months later, Apple and IBM made an announcement that the technology press billed — sometimes sarcastically, sometimes earnestly — as the “Deal of the Century.”
Apple had first begun to talk with IBM in early 1990, when Michael Spindler, the former’s president, had first reached out to Jack Kuehler, his opposite number at IBM. It seemed that, while Apple’s technical rank and file were still greatly enamored with Motorola, upper management was less sanguine. Having been burned once with the 68000, they were uncertain about Motorola’s commitment and ability to keep evolving the 88000 over the long term.
It made a lot of sense in the abstract for any company interested in RISC technology, as Apple certainly was, to contact IBM; it was actually IBM who had invented the RISC concept back in the mid-1970s. Not all that atypically for such a huge company with so many ongoing research projects, they had employed the idea for years only in limited, mostly subsidiary usage scenarios, such as mainframe channel controllers. Now, though, they were just introducing a new line of “workstation computers” — meaning extremely high-powered desktop computers, too expensive for the consumer market — which used a RISC chip called the POWER CPU that was the heir to their many years of research in the field. Like the workstations it lay at the heart of, the chip was much too expensive and complex to become the brain of Apple’s next generation of consumer computers, but it might, thought Spindler, be something to build upon. And he knew that, with IBM’s old partnership with Microsoft slowly collapsing into bickering acrimony, Big Blue might just be looking for a new partner.
The back-channel talks were intermittent and hyper-cautious at first, but, as the year wore on and the problems both of the companies faced became more and more obvious, the discussions heated up. The first formal meeting took place in February of 1991 or shortly thereafter, at an IBM facility in Austin, Texas. The Apple people, knowing IBM’s ultra-conservative reputation and wishing to make a good impression, arrived neatly groomed and dressed in three-piece suits, only to find their opposite numbers, having acted on the same motivation, sitting there in jeans and denim shirts.
That anecdote illustrates how very much both sides wanted to make this work. And indeed, the two parties found it much easier to work together than anyone might have imagined. John Sculley, the man who really called the shots at Apple, found that he got along smashingly with Jack Kuehler, to the extent that the two were soon talking almost every day. After beginning as a fairly straightforward discussion of whether IBM might be able and willing to make a RISC chip suitable for the Macintosh, the negotiations just kept growing in scale and ambition, spurred on by both companies’ deep-seated desire to stick it to Microsoft and the Wintel hegemony in any and all possible ways. They agreed to found a joint subsidiary called Taligent, staffed initially with the people from Apple’s Project Jaguar, which would continue to develop a brand new operating system that could be licensed by any hardware maker, just like MS-DOS and Windows (and for that matter IBM’s already extant OS/2). And they would found another subsidiary called Kaleida Labs, to make a cross-platform multimedia scripting engine called ScriptX.
Still, the core of the discussions remained IBM’s POWER architecture — or rather the PowerPC, as the partners agreed to call the cost-reduced, consumer-friendly version of the chip. Apple soon pulled Motorola into these parts of the talks, thus turning a bilateral into a trilateral negotiation, and providing the name for their so-called “AIM alliance” — “AIM” for Apple, IBM, and Motorola. IBM had never made a mass-market microprocessor of their own before, noted Apple, and Motorola’s experience could serve them well, as could their chip-fabrication facilities once actual production began. The two non-Apple parties were perhaps less excited at the prospect of working together — Motorola in particular must have been smarting at the rejection of their own 88000 processor which this new plan would entail — but made nice and got along.
On October 2, 1991 — just six weeks after the first 68040-based Macintosh models had shipped — Apple and IBM made official the rumors that had been swirling around for months. At a joint press briefing held inside the Fairmont Hotel in downtown San Francisco, they trumpeted all of the initiatives I’ve just described. The Deal of the Century, they said, would usher in the next phase of personal computing. Wintel must soon give way to the superiority of a PowerPC-based computer running a Taligent operating system with ScriptX onboard. New Apple Macintosh models would also use the PowerPC, but the relationship between them and these other, Taligent-powered machines remained vague.
Indeed, it was all horribly confusing. “What Taligent is doing is not designed to replace the Macintosh,” said Sculley. “Instead we think it complements and enhances its usefulness.” But what on earth did that empty corporate speak even mean? When Apple said out of the blue that they were “not going to do to the Macintosh what we did to the Apple II” — i.e., orphan it — it rather made you suspect that that was exactly what they meant to do. And what did it all mean for IBM’s OS/2, which Big Blue had been telling a decidedly unconvinced public was also the future of personal computing for several years now? “I think the message in those agreements for the future of OS/2 is that it no longer has a future,” said one analyst. And then, what was Kaleida and this ScriptX thing supposed to actually do?
So much of the agreement seemed so hopelessly vague. Compaq’s vice president declared that Apple and IBM must be “smoking dope. There’s no way it’s going to work.” One pundit called the whole thing “a con job. There’s no software, there’s no operating system. It’s just a last gasp of extinction by the giants that can’t keep up with Intel.” Apple’s own users were baffled and consternated by this sudden alliance with the company which they had been schooled to believe was technological evil incarnate. A grim joke made the rounds: what do you get when you cross Apple and IBM? The answer: IBM.
While the journalists reported and the pundits pontificated, it was up to the technical staff at Apple, IBM, and Motorola to make PowerPC computers a reality. Like their colleagues who had negotiated the deal, they all got along surprisingly well; once one pushed past the surface stereotypes, they were all just engineers trying to do the best work possible. Apple’s management wanted the first PowerPC-based Macintosh models to ship in January of 1994, to commemorate the platform’s tenth anniversary by heralding a new technological era. The old Project Cognac team, now with the new code name of “Piltdown Man” after the famous (albeit fraudulent) “missing link” in the evolution of humanity, was responsible for making this happen. For almost a year, they worked on porting MacOS to the PowerPC, as they’d previously done to the 88000. This time, though, they had no real hardware with which to work, only specifications and software emulators. The first prototype chips finally arrived on September 3, 1992, and they redoubled their efforts, pulling many an all-nighter. Thus MacOS booted up to the desktop for the first time on a real PowerPC-based machine just in time to greet the rising sun on the morning of October 3, 1992. A new era had indeed dawned.
Their goal now was to make a PowerPC-based Macintosh work exactly like any other, only faster. MacOS wouldn’t even get a new primary version number for the first PowerPC release; this major milestone in Mac history would go under the name of System 7.1.2, a name more appropriate to a minor maintenance release. It looked so identical to what had come before that its own creators couldn’t spot the difference; they wound up lighting up a single extra pixel in the PowerPC version just so they could know which was which.
Their guiding rule of an absolutely seamless transition applied in spades to the 68000 emulation layer, duly ported from the 88000 to the PowerPC. An ordinary user should never have to think about — should not even have to know about — the emulation that was happening beneath the surface. Another watershed moment came in June of 1993, when the team brought a PowerPC prototype machine to MacHack, a coding conference and competition. Without telling any of the attendees what was inside the machine, the team let them use it to demonstrate their boundary-pushing programs. The emulation layer performed beyond their most hopeful prognostications. It looked like the Mac’s new lease on life was all but a done deal from the engineering side of things.
But alas, the bonhomie exhibited by the partner companies’ engineers and programmers down in the trenches wasn’t so marked in their executive suites after the deal was signed. The very vagueness of so many aspects of the agreement had papered over what were in reality hugely different visions of the future. IBM, a company not usually given to revolutionary rhetoric, had taken at face value the high-flown words spoken at the announcement. They truly believed that the agreement would mark a new era for personal computing in general, with a new, better hardware architecture in the form of PowerPC and an ultra-modern operating system to run on it in the form of Taligent’s work. Meanwhile it was becoming increasingly clear that Apple’s management, who claimed to be changing the world five times before breakfast on most days, had in reality seen Taligent largely as a hedge in case their people should prove unable to create a PowerPC Macintosh that looked like a Mac, felt like a Mac, and ran vintage Mac software. As Project Piltdown Man’s work proceeded apace, Apple grew less and less enamored with those other, open-architecture ideas IBM was pushing. The Taligent people didn’t help their cause by falling headfirst into a pit of airy computer-science abstractions and staying mired there for years, all while Project Piltdown Man just kept plugging away, getting things done.
The first two and a half years of the 1990s were marred by a mild but stubborn recession in the United States, during which the PC industry had a particularly hard time of it. After the summer of 1992, however, the economy picked up steam and consumer computing eased into what would prove its longest and most sustained boom of all time, borne along on a wave of hype about CD-ROM and multimedia, along with the simple fact that personal computers in general had finally evolved to a place where they could do useful things for ordinary people in a reasonably painless way. (A bit later in the boom, of course, the World Wide Web would come along to provide the greatest impetus of all.)
And yet the position of both Apple and IBM in the PC marketplace continued to get steadily worse while the rest of their industry soared. At least 90 percent of the computers that were now being sold in such impressive numbers ran Microsoft Windows, leaving OS/2, MacOS, and a few other oddballs to divide the iconoclasts, the hackers, and the non-conformists of the world among themselves. While IBM continued to flog OS/2, more out of stubbornness than hope, Apple tried a little bit of everything to stop the slide in market share and remain relevant. Still not entirely certain whether their future lay with open architectures or their own closed, proprietary one, they started porting selected software to Windows, including most notably QuickTime, their much-admired tool for encoding and playing video. They even shipped a Mac model that could also run MS-DOS and Windows, thanks to an 80486 housed in its case alongside its 68040. And they entered into a partnership with the networking giant Novell to port MacOS itself to Intel hardware — a partnership that, like many Apple initiatives of these years, petered out without ultimately producing much of anything. Perhaps most tellingly of all, this became the only period in Apple’s history when the company felt compelled to compete solely on price. They started selling Macs in department stores for the first time, where a stream of very un-Apple-like discounts and rebates greeted prospective buyers.
While Apple thus toddled along without making much headway, IBM began to annihilate all previous conceptions of how much money a single company could possibly lose, posting oceans of red that looked more like the numbers found in macroeconomic research papers than entries in an accountant’s books. The PC marketplace was in a way one of their smaller problems. Their mainframe business, their real bread and butter since the 1950s, was cratering as customers fled to the smaller, cheaper computers that could often now do the jobs of those hulking giants just as well. In 1991, when IBM first turned the corner into loss, they did so in disconcertingly convincing fashion: they lost $2.82 billion that year. And that was only the beginning. Losses totaled $4.96 billion in 1992, followed by $8.1 billion in 1993. IBM lost more money during those three years alone than any other company in the history of the world to that point; their losses exceeded the gross domestic product of Ecuador.
The employees at both Apple and IBM paid the toll for the confusions and prevarications of these years: both companies endured rounds of major layoffs. Those at IBM marked the very first such in the long history of the company. Big Blue had for decades fostered a culture of employment for life; their motto had always been, “If you do your job, you will always have your job.” This, it was now patently obvious, was no longer the case.
The bloodletting reached the executive suites of the two companies as well, within a few months of one another. On April 1, 1993, John Akers, the CEO of IBM, was ousted after a seven-year tenure which one business writer called “the worst record of any chief executive in the history of IBM.” Three months later, following a terrible quarterly earnings report and a drop in share price of 58 percent in the span of six months, Michael Spindler replaced John Sculley as the CEO of Apple.
These, then, were the storm clouds under which the PowerPC architecture became a physical reality.
The first PowerPC computers to be given a public display bore an IBM rather than an Apple logo on their cases. They arrived at the Comdex trade show in November of 1993, running a port of OS/2. IBM also promised a port of AIX — their version of the Unix operating system — while Sun Microsystems announced plans to port their Unix-based Solaris operating system and, most surprisingly of all, Microsoft talked about porting over Windows NT, the more advanced, server-oriented version of their world-conquering operating environment. But, noted the journalists present, “it remains unclear whether users will be able to run Macintosh applications on IBM’s PowerPC” — a fine example of the confusing messaging the two alleged allies constantly trailed in their wake. Further, there was no word at all about the status of the Taligent operating system that was supposed to become the real PowerPC standard.
Meanwhile over at Apple, Project Piltdown Man was becoming that rarest of unicorns in tech circles: a major software-engineering project that is actually completed on schedule. The release of the first PowerPC Macs was pushed back a bit, but only to allow the factories time to build up enough inventory to meet what everyone hoped would be serious consumer demand. Thus the “Power Macs” made their public bow on March 14, 1994, at New York City’s Lincoln Center, in three different configurations clocked at speeds between 60 and 80 MHz. Unlike IBM’s machines, which were shown six months before they shipped, the Power Macs were available for anyone to buy the very next day.
They were greeted with enormous excitement and enthusiasm by the Mac faithful, who had been waiting anxiously for a machine that could go head-to-head with computers built around Intel’s new Pentium chip, the successor to the 80486. This the Power Macs could certainly do; by some benchmarks at least, the PowerPC doubled the overall throughput of a Pentium. World domination must surely be just around the corner, right?
Predictably enough, the non-Mac-centric technology press greeted the machines’ arrival more skeptically than the hardcore Mac-heads. “I think Apple will sell [a] million units, but it’s all going to be to existing Mac users,” said one market researcher. “DOS and Windows running on Intel platforms is still going to be 85 percent of the market. [The Power Mac] doesn’t give users enough of a reason to change.” Another noted that “the Mac users that I know are not interested in using Windows, and the Windows users are not interested in using the Mac. There has to be a compelling reason [to switch].”
In the end, these more guarded predictions proved the most accurate. Apple did indeed sell an impressive spurt of Power Macs in the months that followed, but almost entirely to the faithful. One might almost say that they became a victim of Project Piltdown Man’s success: the Power Mac really did seem exactly like any other Macintosh, except that it ran faster. And even this fact could be obscured when running legacy applications under emulation, as most people were doing in the early months: despite Project Piltdown Man’s heroic efforts, applications like Excel, Word, and Photoshop actually ran slightly slower on a Power Mac than on a top-of-the-line 68040-based machine. So, while the transition to PowerPC allowed the Macintosh to persist as a viable computing platform, it ultimately did nothing to improve upon its small market share. And because the PowerPC MacOS was such a direct and literal port, it still retained all of the shortcomings of MacOS in general. It remained a pretty interface stretched over some almost laughably archaic plumbing. The new generation of Mac hardware wouldn’t receive an operating system truly, comprehensively worthy of it until OS X arrived seven long years later.
Still, these harsh realities shouldn’t be allowed to detract from how deftly Apple — and particularly the unsung coders of Project Piltdown Man — executed the transition. No one before had ever picked up a consumer-computing platform bodily and moved it to an entirely new hardware architecture, much less done it so transparently that many or most users never really had to think about what was happening at all. (There would be only one comparable example in computing’s future. And, incredibly, the Mac would once again be the platform in question: in 2006, Apple would move from the fading PowerPC line to Intel’s chips — if you can’t beat ’em, join ’em, right? — relying once again on a cleverly coded software emulator to see them through the period of transition. The Macintosh, it seems, has more lives than Lazarus.)
Although the briefly vaunted AIM alliance did manage to give the Macintosh a new lease on life, it succeeded in very little else. The PowerPC architecture, which had cost the alliance more than $1 billion to develop, went nowhere in its non-Mac incarnations. IBM’s own machines sold in such tiny numbers that the question of whether Apple would ever allow them to run MacOS was all but rendered moot. (For the record, though: they never did.) Sun Solaris and Microsoft Windows NT did come out in PowerPC versions, but their sales couldn’t justify their existence, and within a year or two they went away again. The bold dream of creating a new reference platform for general-purpose computing to rival Wintel never got off the ground, as it became painfully clear that said dream had been taken more to heart by IBM than by Apple. Only after the millennium would the PowerPC architecture find a measure of mass-market success outside the Mac, when it was adopted by Nintendo, Microsoft, and Sony for use in videogame consoles. In this form, then, it finally paid off for IBM; far more PowerPC-powered consoles than Macs were sold over the lifetime of the architecture. PowerPC also eventually saw use in other specialized applications, such as satellites and planetary rovers employed by NASA.
Success, then, is always relative. But not so the complete lack thereof, as Kaleida and Taligent proved. Kaleida burned through $200 million before finally shipping its ScriptX multimedia-presentation engine years after other products, most notably Macromedia’s Director, had already sewn up that space; it was disbanded and harvested for scraps by Apple in November of 1995. Taligent burned through a staggering $400 million over the same period of time, producing only some tepid programming frameworks in lieu of the revolutionary operating system that had been promised, before being absorbed back into IBM.
There is one final fascinating footnote to this story of a Deal of the Century that turned out to be little more than a strange anecdote in computing history. In the summer of 1994, IBM, having by now stopped the worst of the bleeding and settled into their new life as a smaller, far less dominant company, offered to buy Apple outright for a premium of $5 over their current share price. In IBM’s view, the synergies made sense: the Power Macs were selling extremely well, which was more than could be said for IBM’s PowerPC models. Why not go all in?
Ironically, it was those same healthy sales numbers that scuppered the deal in the end. If the offer had come a year earlier, when a money-losing Apple was just firing John Sculley, they surely would have jumped at it. But now Apple was feeling their oats again, and by no means entirely without reason; sales were up more than 20 percent over the previous year, and the company was once more comfortably in the black. So, they told IBM thanks, but no thanks. The same renewed taste of success also caused them to reject serious inquiries from Philips, Sun Microsystems, and Oracle. Word had it that new CEO Michael Spindler was convinced not only that the Power Mac had saved Apple, but that it had fundamentally altered their position in the marketplace.
The following year revealed how misguided that thinking really was; the Power Mac had fixed none of Apple’s fundamental problems. That year it was Microsoft who cemented their world domination instead, with the release of Windows 95, while Apple grappled with the reality that almost all of those Power Mac sales of the previous year had been to existing members of the Macintosh family, not to the new customers they so desperately needed to attract. What happened now that everyone in the family had dutifully upgraded? The answer to that question wasn’t pretty: Apple plunged off a financial cliff as precipitous in its own way as the one which had nearly destroyed IBM a few years earlier. Now, nobody was interested in acquiring them anymore. The pundits smelled the stink of death; it’s difficult to find an article on Apple written between 1995 and 1998 which doesn’t include the adjective “beleaguered.” Why buy now when you can sift through the scraps at the bankruptcy auction in just a little while?
Apple didn’t wind up dying, of course. Instead a series of improbable events, beginning with the return of prodigal-son Steve Jobs in 1997, turned them into the richest single company in the world — yes, richer even than Microsoft. These are stories for other articles. But for now, it’s perhaps worth pausing for a moment to think about an alternate timeline where the Macintosh became an IBM product, and the Deal of the Century that got that ball rolling thus came much closer to living up to its name. Bizarre, you say? Perhaps. But no more bizarre than what really happened.
(Sources: the books Insanely Great: The Life and Times of Macintosh by Steven Levy, Apple Confidential 2.0: The Definitive History of the World’s Most Colorful Company by Owen W. Linzmayer, Infinite Loop: How the World’s Most Insanely Great Computer Company Went Insane by Michael S. Malone, Big Blues: The Unmaking of IBM by Paul Carroll, and The PowerPC Macintosh Book by Stephan Somogyi; InfoWorld of September 24 1990, October 15 1990, December 3 1990, April 8 1991, May 13 1991, May 27 1991, July 1 1991, July 8 1991, July 15 1991, July 22 1991, August 5 1991, August 19 1991, September 23 1991, September 30 1991, October 7 1991, October 21 1991, November 4 1991, December 30 1991, January 13 1992, January 20 1992, February 3 1992, March 9 1992, March 16 1992, March 23 1992, April 27 1992, May 11 1992, May 18 1992, June 15 1992, June 29 1992, July 27 1992, August 3 1992, August 10 1992, August 17 1992, September 7 1992, September 21 1992, October 5 1992, October 12 1992, October 19 1992, December 14 1992, December 21 1992, December 28 1992, January 11 1993, February 1 1993, February 22 1993, March 8 1993, March 15 1993, April 5 1993, April 12 1993, May 17 1993, May 24 1993, May 31 1993, June 21 1993, June 28 1993, July 5 1993, July 12 1993, July 19 1993, August 2 1993, August 9 1993, August 30 1993, September 6 1993, September 27 1993, October 4 1993, October 11 1993, October 18 1993, November 1 1993, November 15 1993, November 22 1993, December 6 1993, December 13 1993, December 20 1993, January 10 1994, January 31 1994, March 7 1994, March 14 1994, March 28 1994, April 25 1994, May 2 1994, May 16 1994, June 6 1994, June 27 1994; MacWorld of September 1992, February 1993, July 1993, September 1993, October 1993, November 1993, February 1994, and May 1994; Byte of November 1984. Online sources include IBM’s own corporate-history timeline and a vintage IBM lecture on the PowerPC architecture.)
Given the shadow which the original Master of Orion still casts over the gaming landscape of today, one might be forgiven for assuming, as many younger gamers doubtless do, that it was the very first conquer-the-galaxy grand-strategy game ever made. The reality, however, is quite different. Richly deserved though its position of influence is on other counts, Master of Orion was already the heir to a long tradition of such games at the time of its release in 1993. In fact, the tradition dates back to well before computer games as we know them today even existed.
The roots of the strategic space opera can be traced back to the tabletop game known as Diplomacy, designed by Allan B. Calhamer and first self-published by him in 1959. (Avalon Hill would later pick up the rights.) Taking place in the years just prior to World War I, it put seven players in the roles of leaders of the various “great powers” of Europe. Although it included a playing board, tokens, and most of the other accoutrements of a typical board game, the real action, at least if you were playing it properly, was entirely social, in the alliances that were forged and broken and the shady deals that were struck. In this respect, it presaged many of the ideas that would later go into Dungeons & Dragons and other role-playing games. It thus represents an instant in gaming history as seminal in its own way as the 1954 publication of Charles S. Roberts’s Tactics, the canonical first tabletop wargame and the one which touched off the hobby of experiential gaming in general. (Roberts would go on to found Avalon Hill on the back of its success.) But just as importantly for our purposes, Diplomacy’s shifting alliances and the back-stabbings they led to would become an essential part of countless strategic space operas, including Master of Orion 34 years later.
Because getting seven friends together in the same room for the all-day affair that was a complete game of Diplomacy was almost as hard in the 1960s as it is today, inventive gamers developed systems for playing it via post; the first example of this breed would seem to date from 1963. And once players had started modifying the rules of Diplomacy to make it work under this new paradigm, it was a relatively short leap to begin making entirely new play-by-post games with new themes which shared some commonalities of approach with Calhamer’s magnum opus.
Thus in December of 1966, Dan Brannon announced a play-by-post game called Xeno, whose concept sounds very familiar indeed in the broad strokes. Each player started with a cluster of five planets — a tiny toehold in a sprawling, unknown galaxy waiting to be colonized. “The vastness of the playing space, the secrecy of the identity of the other players, the secrecy of the locations of ships and planets, the total lack of information without efforts of investigation, all these factors are meant to create the real problems of a race trying to expand to other planets,” wrote Brannon. Although the new game would be like Diplomacy in that it would presumably still culminate in negotiations, betrayals, and the inevitable final war to determine the ultimate victor, these stages would now be preceded by those of exploration and colonization, until a galaxy that had seemed so unfathomably big at the start proved not to be big enough to accommodate all of its would-be space empires. Certainly all of this too will be familiar to any player of Master of Orion or one of its heirs. Brannon’s game even included a tech tree of sorts, with players able to acquire better engines, weapons, and shields for their ships every eight turns they managed to survive.
In practice, Xeno played out at a pace to which the word “glacial” hardly does justice. The game didn’t really get started until September of 1967, and by a year after that just three turns had been completed. I don’t know whether a single full game of it was ever finished. Nevertheless, it proved hugely influential within the small community of experiential-gaming fanzines and play-by-post enthusiasts. The first similar game, called Galaxy and run by H. David Montgomery, had already appeared before Xeno had processed its third turn.
But the idea was, literally and figuratively speaking, too big for the medium for which it had been devised; it was just too compelling to remain confined to those few stalwart souls with the patience for play-by-post gaming. It soon branched out into two new mediums, each of which offered a more immediate sort of satisfaction.
In 1975, following rejections from Avalon Hill and others, one Howard Thompson formed his own company to publish the face-to-face board game Stellar Conquest, the first strategic space opera to appear in an actual box on store shelves. When Stellar Conquest became a success, it spawned a string of similar board games with titles like Godsfire, Outreach, Second Empire, and Starfall during this, the heyday of experiential gaming on the tabletop. But the big problem with such games was their sheer scope and math-heavy nature, which were enough to test the limits of many a salty old grognard who usually reveled in complexity. They all took at least three or four hours to play in their simplest variants, and a single game of at least one of them — SPI’s Outreach — could absorb weeks of gaming Saturdays. Meanwhile they were all dependent on pages and pages of fiddly manual calculations, in the time before spreadsheet macros or even handheld calculators were commonplace. (One hates to contemplate the plight of the Outreach group who have just spent the last two months resolving who shall become master of the galaxy, only to discover that the victor made a mistake on her production worksheet back on the second turn which invalidated all of the numbers that followed…) These games were, in other words, crying out for computerization.
Luckily, then, that too had already started to happen by the end of the 1970s. One of the reasons that play-by-post games of this type tended to run so sluggishly — beyond, that is, the inherent sluggishness of the medium itself — came down to the same problem as that faced by their tabletop progeny: the burden their size and complexity placed on their administrators. Therefore in 1976, Rick Loomis, the founder of a little company called Flying Buffalo, started running the commercial play-by-post game Starweb on what gaming historian Shannon Appelcline has called “probably the first computer ever purchased exclusively to play games” (or, at least, to administrate them): a $14,000 Raytheon 704 minicomputer. He would continue to run Starweb for more than thirty years — albeit presumably not on the same computer throughout that time.
But the first full-fledged incarnation of the computerized strategic space opera — in the sense of a self-contained game meant to be played locally on a single computer — arrived only in 1983. Called Reach for the Stars, it was the first fruit of what would turn into a long-running and prolific partnership between the Aussies Roger Keating and Ian Trout, who in that rather grandiose fashion that was so typical of grognard culture had named themselves the Strategic Studies Group. Reach for the Stars was based so heavily upon Stellar Conquest that it’s been called an outright unlicensed clone. Nevertheless, it’s a remarkable achievement for the way that it manages to capture, on 8-bit Apple IIs and Commodore 64s with just 64 K of memory, that sense of size and scope which is such a huge part of these games’ appeal. Although the whole is necessarily rather bare-bones compared to what would come later, the computer players’ artificial intelligence, always a point of pride with Keating and Trout, is surprisingly effective; on the harder difficulty level, the computer can truly give you a run for your money, and seems to do so without relying solely on egregious cheating.
Reach for the Stars did very well, prompting updated ports to more powerful machines like the Apple Macintosh and IIGS and the Commodore Amiga as the decade wore on. A modest trickle of other boxed computer games of a similar stripe also appeared, albeit none which did much to comprehensively improve on SSG’s effort: Imperium Galactum, Spaceward Ho!, Armada 2525, Pax Imperia. Meanwhile the commercial online service CompuServe offered up MegaWars III, in which up to 100 players vied for control of the galaxy; it played a bit like one of those years-long play-by-post campaigns of yore compressed into four to six weeks of constant — and expensive, given CompuServe’s hourly dial-up rates — action and intrigue. Even the shareware scene got in on the act, via titles like Anacreon: Reconstruction 4021 and the earliest versions of the cult classic VGA Planets, a game which is still actively maintained and played to this day. And then, finally, along came Master of Orion in 1993 to truly take this style of game to the next level.
Had things gone just a little bit differently, Master of Orion too might have been a shareware release. It was designed in the spare time of Steve Barcia, an electrical engineer living in Austin, Texas, and programmed by Steve himself, his wife Marcia Barcia, and their friend Ken Burd. Steve claims never to have played any of the computer games I’ve just mentioned, but, as an avid and longtime tabletop gamer, he was very familiar with Stellar Conquest and a number of its successors. (No surprise there: Howard Thompson and his game were in fact also products of Austin’s vibrant board-gaming scene.)
After working on their computer game, which they called Star Lords, on and off for years, the little band of hobbyist programmers submitted it to MicroProse, whose grand-strategy game Civilization, the creation of their leading in-house designer Sid Meier, had just taken the world by storm. A MicroProse producer named Jeff Johannigman — himself another member of the Austin gaming fraternity, as it happened, one who had just left Origin Systems to join MicroProse up in Baltimore — took a shine to the unpolished gem and signed its creators to develop it further. Seeing their hobby about to become a real business, the trio quit their jobs, took the name of SimTex, and leased a cramped office above a gyro joint to finish their game under Johannigman’s remote supervision, with a little additional help from MicroProse’s art department.
A fellow named Alan Emrich was one of the most prominent voices in strategy-game criticism at the time; he was the foremost scribe on the subject at Computer Gaming World magazine, the industry’s accepted journal of record, and had just published a book-length strategy guide on Civilization in tandem with Johnny Wilson, the same magazine’s senior editor. Thanks to that project, Emrich was well-connected with MicroProse, and was happy to serve as a sounding board for them. And so, one fateful day very early in 1993, Johannigman asked if he’d like to have a look at a new submission called Star Lords.
As Emrich himself puts it, his initial impressions “were not that great.” He remembers thinking the game looked like “something from the late 1980s” — an eternity in the fast-changing computing scene of the early 1990s. Yet there was just something about it; the more he played, the more he wanted to keep playing. So, he shared Star Lords with his friend Tom Hughes, with whom he’d been playing tabletop and computerized strategy games for twenty years. Hughes had the same experience. Emrich:
After intense, repeated playing of the game, Tom and I were soon making numerous suggestions to [Johannigman], who, in turn, got tired of passing them on to the designer and lead programmer, Steve Barcia. Soon, we were talking to Steve directly. The telephone lines were burning regularly and a lot of ideas went back and forth. All the while, Steve was cooking up a better and better game. It was during this time that the title changed to Master of Orion and the game’s theme and focus crystallized.
I wrote a sneak preview for Computer Gaming World magazine where I indicated that Master of Orion was shaping up to be a good game. It had a lot of promise, but I didn’t think it was up there with Sid Meier’s Civilization, the hobby’s hallmark of strategy gaming at that time. But by the time that story hit the newsstands, I had changed my mind. I found myself still playing the game constantly and was reflecting on that fact when Tom called me. We talked about Master of Orion, of course, and Tom said, “You know, I think this game might become more addicting even than Civilization.” I replied, “You know, I think it already is.”
I was hard on Emrich in earlier articles for his silly assertion that Civilization’s inclusion of global warming as a threat to progress and women’s suffrage as a Wonder of the World constituted some form of surrender to left-wing political correctness, as I was for his even sillier assertion that the game’s simplistic and highly artificial economic model could somehow be held up as proof for the pseudo-scientific theory of trickle-down economics. Therefore let me be very clear in praising him here: Emrich and Hughes played an absolutely massive role in making Master of Orion one of the greatest strategy games of all time. Their contribution was such that SimTex took the unusual step of adding to the credits listing a “Special Thanks to Alan Emrich and Tom Hughes for their invaluable design critiquing and suggestions.” If anything, that credit errs on the side of understatement; by all indications, a pair of full-fledged co-designer credits wouldn’t have been out of proportion to the reality of their contribution. The two would go on to write the exhaustive official strategy guide for the game, a tome numbering more than 400 pages. No one could have been more qualified to tackle that project.
As if all that wasn’t enough, Emrich did one more great service for Master of Orion and, one might even say, for gaming in general. In a “revealing sneak preview” of the game, published in the September 1993 issue of Computer Gaming World, he pronounced it to be “rated XXXX.” After the requisite measure of back-patting for such edgy turns of phrase as these, Emrich settled down to explain what he really meant by the label: “XXXX” in this context stood for “EXplore, EXpand, EXploit, and EXterminate.” And thus was a new sub-genre label born. The formulation from the article was quickly shortened to “4X” by enterprising gamers uninterested in making strained allusions to pornographic films. In that form, it would be applied to countless titles going forward, right up to the present day, and retroactively applied to countless titles of the past, including all of the earlier space operas I’ve just described as well as the original Civilization — a game to which the “EXterminate” part of the label fits perhaps less well, but such is life.
Emrich’s article also creates an amusing distinction for the more pedantic ludic taxonomists and linguists among us. Although Master of Orion definitely was not, as we’ve now seen at some length, the first 4X game in the abstract, it was the very first 4X game to be called a 4X game. Maybe this accounts for some of the pride of place it holds in modern gaming culture?
However that may be, though, the lion’s share of the credit for Master of Orion’s enduring influence must surely be ascribed to what a superb game it is in its own right. If it didn’t invent the 4X space opera, it did in some sense perfect it, at least in its digital form. It doesn’t do anything conceptually new on the face of it — you’re still leading an alien race as it expands through a randomly created galaxy, competing with other races in the fields of economics, technology, diplomacy, and warfare to become the dominant civilization — but it just does it all so well.
A new game of Master of Orion begins with you choosing a galaxy size (from small to huge), a difficulty level (from simple to impossible), and a quantity of opposing aliens to compete against (from one to five). Then you choose which specific race you would like to play; you have ten possibilities in all, drawing from a well-worn book of science-fiction tropes, from angry cats in space to hive-mind-powered insects, from living rocks to pacifistic brainiacs, alongside the inevitable humans. Once you’ve made your choice, you’re cast into the deep end — or rather into deep space — with a single half-developed planet, a colony ship for settling a second planet as soon as you find a likely candidate, two unarmed scout ships for exploring for just such a candidate, and a minimal set of starting technologies.
You must parlay these underwhelming tools into galactic domination hundreds of turns later. You can take the last part of the 4X tag literally and win out by utterly exterminating all of your rivals, but a slightly less genocidal path to victory runs through the “Galactic Council,” which meets every quarter-century (i.e., every 25 turns). Here everyone can vote on which of the leaders of the two currently most populous empires they prefer to appoint as ruler of the galaxy, with “everyone” in this context including the two leading emperors themselves. Each empire gets a number of votes determined by its population, and the first to collect two-thirds of the total vote wins outright. (Well, almost… it is possible for you to refuse to respect the outcome of a vote that goes against you, but doing so will cause all of your rivals to declare immediate and perpetual war against you, whilst effectively pooling all of their own resources and technology. Good luck with that!)
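For the programmatically inclined, the council’s arithmetic reduces to a few lines of code. What follows is a minimal sketch in Python; the names and numbers are invented, and the real game parcels votes out in discrete blocks rather than raw population figures, so treat this as an illustration of the principle rather than a transcription of the rule.

```python
def council_winner(populations, ballots):
    """populations: empire -> population; ballots: empire -> candidate or None.

    The two most populous empires stand as candidates; collecting two-thirds
    of all possible votes wins the galaxy outright.
    """
    candidates = sorted(populations, key=populations.get, reverse=True)[:2]
    total = sum(populations.values())
    tally = {c: 0 for c in candidates}
    for empire, choice in ballots.items():
        if choice in tally:
            tally[choice] += populations[empire]  # each empire votes as a single bloc
    for candidate, votes in tally.items():
        if votes * 3 >= total * 2:  # the two-thirds threshold
            return candidate
    return None  # no supermajority; the game goes on for another 25 turns

# The candidates may vote for themselves, but self-regard alone rarely clears the bar.
pops = {"Psilon": 300, "Klackon": 250, "Human": 200, "Mrrshan": 150}
votes = {"Psilon": "Psilon", "Klackon": "Psilon", "Human": "Psilon", "Mrrshan": None}
print(council_winner(pops, votes))  # -> Psilon
```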
A typical game of Master of Orion plays out over three broad stages. The first stage is the land grab, the wide-open exploration and colonization phase that happens before you meet your rival aliens. Here your challenge is to balance the economic development of your existing planets against your need to settle as many new ones as possible to put yourself in a good position for the mid-game. (When exactly do I stop spending my home planet’s resources on improving its own infrastructure and start using them to build more colony ships?) The mid-game begins when you start to bump into your rivals, and comes to entail much jockeying for influence, as the various races begin to sort themselves into rival factions. (The Alkaris, bird-like creatures, loathe the Mrrshans, the aforementioned race of frenzied pussycats, and their loathing is returned in kind. I don’t have strong feelings about either one — but whose side would it most behoove me to choose from a purely strategic perspective?) The end-game is nigh when there is no more room for anyone to expand, apart from taking planets from a rival by force, and the once-expansive galaxy suddenly seems claustrophobic. It is often, although by no means always, marked by a massive war that finally secures somebody that elusive two-thirds majority in the Galactic Council. (I’m so close now! Do I attack those stubbornly intractable Bulrathi to try to knock down their population and get myself over the two-thirds threshold that way, or do I keep trying to sweet-talk and bribe them into voting for me?) The length and character of all of these stages will of course greatly depend on the initial setup you chose; the first stage might be all but nonexistent in a small galaxy with five rivals, while it will go on for a long, long time indeed in a huge galaxy with just one or two opponents. (The former scenario is, for the record, far more challenging.)
And that’s how it goes, generally speaking. Yet the core genius of Master of Orion actually lies in how resistant it is to generalization. It’s no exaggeration to say that there really is no “typical” game; I’ve enjoyed plenty which played out in nothing like the pattern I’ve just described for you. I’ve played games in which I never fired a single shot in anger, even ones where I’ve never built a single armed ship of war, just as I’ve played others where I was in a constant war for survival from beginning to end. Master of Orion is gaming’s best box of chocolates; you never know what you’re going to get when you jump into a new galaxy. Everything about the design is engineered to keep you from falling back on patterns universally applicable to the “typical” game. It’s this quality, more so than any other, that makes Master of Orion so consistently rewarding. If I were to be stranded on the proverbial desert island, I have a pretty good idea of at least one of the games I’d choose to take with me.
I’ll return momentarily to the question of just how Master of Orion manages to build so much variation into a fairly simple set of core rules. I think it might be instructive to do so, however, in comparison with another game, one I’ve already had occasion to mention several times in this article: Civilization.
As I’m so often at pains to point out, game design is, like any creative pursuit, a form of public dialog. Certainly Civilization itself comes with a long list of antecedents, including most notably Walter Bright’s mainframe game Empire, Dani Bunten Berry’s PC game Seven Cities of Gold, and the Avalon Hill board game with which Civilization shares its name. Likewise, Civilization has its progeny, among them Master of Orion. By no means was it the sole influence on the latter; as we’ve seen, Master of Orion was also greatly influenced by the 4X space-opera tradition in board games, especially during its early phases of development.
Still, the mark of Civilization as well can be seen all over its finished design. (After all, Alan Emrich had just literally written the book on Civilization when he started bombarding Barcia with design suggestions…) For example, Master of Orion, unlike all of its space-opera predecessors, on the computer or otherwise, doesn’t bother at all with multiplayer options, preferring to optimize the single-player experience in their stead. One can’t help but feel that it was Civilization, which was likewise bereft of the multiplayer options that earlier grand-strategy games had always included as a matter of course, that empowered Steve Barcia and company to go this way.
At the same time, though, we cannot say that Jeff Johannigman was being particularly accurate when he took to calling Master of Orion “Civilization in space” for the benefit of journalists. For all that it’s easy enough to understand what made such shorthand so tempting — this new project too was a grand-strategy game played on a huge scale, incorporating technology, economics, diplomacy, and military conflict — it wasn’t ultimately fair to either game. Master of Orion is very much its own thing. Its interface, for example, is completely different. (Ironically, Barcia’s follow-up to Master of Orion, the fantasy 4X Master of Magic, hews much closer to Civilization in that respect.) In Master of Orion, Civilization’s influence often runs as much in a negative as a positive direction; that is to say, there are places where the later design is lifting ideas from the earlier one, but also taking it upon itself to correct perceived weaknesses in their implementation.
I have to use the qualifier “perceived” there because the two games have such different personalities. Simply put, Civilization prioritizes its fictional context over its actual mechanics, while Master of Orion does just the opposite. Together they illustrate the flexibility of the interactive digital medium, showing how great games can be great in such markedly different ways, even when they’re as closely linked in terms of genre as these two are.
Civilization explicitly bills itself as a grand journey through human history, from the time in our distant past when the first hunter-gatherers settled down in villages to an optimistic near-future in space. The rules underpinning the journey are loosey-goosey, full of potential exploits. The most infamous of these is undoubtedly the barbarian-horde strategy, in which you research only a few minimal technologies necessary for war-making and never attempt to evolve your society or participate in any meaningful diplomacy thereafter, but merely flood the world with miserable hardscrabble cities supporting primitive armies, attacking everything that moves until every other civilization is extinct. At the lower and moderate difficulty levels at least, this strategy works every single time, albeit whilst bypassing most of what the game was meant to be about. As put by Ralph Betza, a contributor to an early Civilization strategy guide posted to Usenet: “You can always play Despotic Conquest, regardless of the world you find yourself starting with, and you can always win without using any of the many ways to cheat. When you choose any other strategy, you are deliberately risking a loss in order to make the game more interesting.”
So very much in Civilization is of limited utility at best in purely mechanical terms. Many or most of the much-vaunted Wonders of the World, for example, really aren’t worth the cost you have to pay for them. But that’s okay; you pay for them anyway because you like the idea of having built the Pyramids of Giza or the Globe Theatre or Project Apollo, just as you choose not to go all Genghis Khan on the world because you’d rather build a civilization you can feel proud of. Perhaps the clearest statement of Civilization’s guiding design philosophy can be found in the manual. It says that, even if you make it all the way to the end of the game only to see one of your rivals achieve the ultimate goal of mounting an expedition to Alpha Centauri before you do, “the successful direction of your civilization through the centuries is an achievement. You have survived countless wars, the pollution of the industrial age, and the risks of nuclear weapons.” Or, as Sid Meier himself puts it, “a game of Civilization is an epic story.”
Such sentiments are deeply foreign to Master of Orion; this is a zero-sum game if ever there was one. If you lose the final Galactic Council vote, there’s no attaboy for getting this far, much less any consolation that the galaxy has entered a new era of peaceful cooperation with some other race in the leadership role. Instead the closing cinematic tells you that you’ve left the known galaxy and “set forth to conquer new worlds, vowing to return and claim the renowned title of Master of Orion.” (Better to rule in Hell, right?) There are no Wonders of the World in Master of Orion, and, while there is a tech tree to work through, you won’t find on it any of Civilization’s more humanistic advances, such as Chivalry or Mysticism, or even Communism or The Corporation. What you get instead are technologies — it’s telling that Master of Orion talks about a “tech tree,” while Civilization prefers the word “advances” — with a direct practical application to settling worlds and making war, divided into the STEM-centric categories of Computers, Construction, Force Fields, Planetology, Propulsion, and Weapons.
So, Civilization is the more idealistic, more educational, perhaps even the nobler of the two games. And yet it often plays a little awkwardly — which awkwardness we forgive because of its aspirational qualities. Master of Orion’s fictional context is a much thinner veneer to stretch over its mechanics, while words like “idealistic” simply don’t exist in its vocabulary. And yet, being without any high-flown themes to fall back on, it makes sure that its mechanics are absolutely tight. These dichotomies can create a dilemma for a critic like yours truly. If you asked me which game presents a better argument for gaming writ large as a potentially uplifting, ennobling pursuit, I know which of the two I’d have to point to. But then, when I’m just looking for a fun, challenging, intriguing game to play… well, let’s just say that I’ve played a lot more Master of Orion than Civilization over the last quarter-century. Indeed, Master of Orion can easily be read as the work of a designer who looked at Civilization and was unimpressed with its touchy-feely side, then set out to make a game that fixed all the other failings which that side obscured.
By way of a first example, let’s consider the two games’ implementation of an advances chart — or a tech tree, whichever you prefer. Arguably the most transformative single advance in Civilization is Railroads; they let you move your military units between your cities almost instantaneously, which makes attacks much easier and quicker to mount for warlike players and enables the more peaceful types to protect their holdings with a much smaller (and thus less expensive) standing army. The Railroads advance is so pivotal that some players build their entire strategy around acquiring it as soon as possible, by finding it on the advances chart as soon as the game begins in 4000 BC and working their way backward to find the absolute shortest path for reaching it. This is obviously problematic from a storytelling standpoint; it’s not as if the earliest villagers set about learning the craft of Pottery with an eye toward getting their hands on Railroads 6000 years later. More importantly, though, it’s damaging to the longevity of the game itself, in that it means that players can and will always employ that same Railroads strategy just as soon as they figure out what a winner it is. Here we stumble over one of the subtler but nonetheless significant axioms of game design: if you give players a hammer that works on every nail, many or most of them will use it — and only it — over and over again, even if it winds up decreasing their overall enjoyment. It’s for this reason that some players continue to use even the barbarian-horde strategy in Civilization, boring though it is. Or, to take an outside example: how many designers of CRPGs have lovingly crafted dozens of spells with their own unique advantages and disadvantages, only to watch players burn up everything they encounter with a trusty Fireball?
Master of Orion, on the other hand, works hard at every turn to make such one-size-fits-all strategies impossible — and nowhere more so than in its tech tree. When a new game begins, each race is given a randomized selection of technologies that are possible for it to research, constituting only about half of the total number of technologies in the game. Thus, while a technology roughly equivalent to Civilization’s Railroads does exist in Master of Orion — Star Gates — you don’t know if this or any other technology is actually available to you until you advance far enough up the tree to reach the spot where it ought to be. You can’t base your entire strategy around a predictable technology progression. While you can acquire technologies that didn’t make it into your tree by trading with other empires, bullying them into giving them to you, or attacking their planets and taking them, that’s a much more fraught, uncertain path to go down than doing the research yourself, one that requires a fair amount of seat-of-your-pants strategy in its own right. Any way you slice it, in other words, you have to improvise.
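To make the trick concrete, here’s a toy sketch of the dealing process as I understand it. The miniature tech field and the exact fraction are my own inventions, standing in for the game’s much longer lists.

```python
import random

# A miniature stand-in for one field of the master tech tree.
WEAPONS_FIELD = ["Lasers", "Ion Cannon", "Fusion Beam", "Phasors", "Death Ray"]

def deal_tree(master_list, fraction=0.5, rng=random):
    """Deal a race the random subset of one tech field it may research this game."""
    kept = max(1, round(len(master_list) * fraction))
    dealt = set(rng.sample(master_list, kept))
    # The tree keeps its ordering: you only discover that a technology is
    # missing when you research far enough to reach where it ought to be.
    return [tech for tech in master_list if tech in dealt]

print(deal_tree(WEAPONS_FIELD))  # e.g. ['Ion Cannon', 'Death Ray']
```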
This one clever design choice has repercussions for every other aspect of the game. Take, for instance, the endlessly fascinating game-within-a-game of designing your fleet of starships. If the tech tree were static, players would inevitably settle upon a small set of go-to designs that worked for their style of play. As it is, though, every new ship is a fresh balancing act, its equipment calibrated to maximize your side’s technological strengths and mitigate its weaknesses, while also taking into careful account the strengths and weaknesses of the foe you expect to use it against, about which you’ve hopefully been compiling information through your espionage network. Do you build a huge number of tiny, fast, maneuverable fighters, or do you build just a few lumbering galactic dreadnoughts? Or do you build something in between? There are no universally correct answers, just sets of changing circumstances.
Another source of dynamism is the alien races you play and those you play against. The cultures in Civilization have no intrinsic strengths and weaknesses, just sets of leader tendencies when played by the computer; for your part, you’re free to play the Mongols as pacifists, or for that matter the Russians as paragons of liberal democracy and global cooperation. But in Master of Orion, each race’s unique affordances force you to play it differently. Likewise, each opposing race’s affordances in combination with those of your own force you to respond differently to that race when you encounter it, whether on the other side of a diplomats’ table or on a battlefield in space. Further, most races have one technology they’re unusually good at researching and one they’re unusually bad at. Throw in varying degrees of affinity and prejudice toward the other races, and, again, you’ve got an enormous amount of variation which defies cookie-cutter strategizing. (It’s worth noting that there’s a great deal of asymmetry here; Steve Barcia and his helpers didn’t share so many modern designers’ obsession with symmetrical play balance above all else. Some races are clearly more powerful than others: the brainiac Psilons get a huge research bonus, the insectoid Klackons get a huge bonus in worker productivity, and the Humans get huge bonuses in trade and diplomacy. Meanwhile the avian Alkaris, the feline Mrrshans, and the ursine Bulrathi have bonuses which only apply during combat, and can be overcome fairly easily by races with other, more all-encompassing advantages.)
There are yet more touches to bring yet more dynamism. Random events occur from time to time in the galaxy, some of which can change everything at a stroke: a gigantic space amoeba might show up and start eating stars, forcing everyone to forget their petty squabbles for a while and band together against this apocalyptic threat. And then there’s the mysterious star Orion, from which the game takes its name, which houses the wonders of a long-dead alien culture from the mythical past. Taking possession of it might just win the game for you — but first you’ll have to defeat its almost inconceivably powerful Guardian.
One of the perennial problems of 4X games, Civilization among them, is the long anticlimax, which begins at that point when you know you’re going to conquer the world or be the first to blast off for Alpha Centauri, but well before you actually do so. (What Civilization player isn’t familiar with the delights of scouring the map for that one remaining rival city tucked away on some forgotten island in some forgotten corner?) Here too Master of Orion comes with a mitigating idea, in the form of the Galactic Council whose workings I’ve already described. It means that, as soon as you can collect two-thirds of the vote — whether through wily diplomacy or the simpler expedient of conquering until two-thirds of the galaxy’s population is your own — the game ends and you get your victory screen.
Indeed, one of the overarching design themes of Master of Orion is its determination to minimize the boring stuff. It must be admitted, of course, that boredom is in the eye of the beholder. Non-fans have occasionally dismissed the whole 4X space-opera sub-genre as “Microsoft Excel in space,” and Master of Orion too requires a level of comfort with — or, better yet, a degree of fascination with — numbers and ratios; you’ll spend at least as much time tinkering with your economy as you will engaging in space battles. Yet the game does everything it can to minimize the pain here as well. While hardly a simple game in absolute terms, it is quite a streamlined example of its type; certainly it’s much less fiddly than Civilization. Planet management is abstracted into a set of five sliding ratio bars, allowing you to decide what percentage of that planet’s total output should be devoted to building ships, building defensive installations, building industrial infrastructure, cleaning up pollution, and researching new technologies. Unlike in Civilization, there is no list of specialized structures to build one at a time, much less a need to laboriously develop the land square by square with a specialized unit. Some degree of micro-management is always going to be in the nature of this type of game, but managing dozens of planets in Master of Orion is far less painful than managing dozens of cities in Civilization.
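Under the hood, then, each planet amounts to little more than a record holding five percentages that must always sum to 100. The sketch below shows one plausible way for such sliders to behave; the rebalancing rule is my own assumption about how sliders of this sort typically work, not a disassembly of the game.

```python
# The game's five spending categories; every planet splits its output among them.
CATEGORIES = ["ships", "defense", "industry", "ecology", "technology"]

def set_slider(allocation, category, new_value):
    """Set one category's percentage, scaling the others to keep the sum at 100."""
    new_value = max(0, min(100, new_value))
    others = [c for c in CATEGORIES if c != category]
    remainder = 100 - new_value
    old_total = sum(allocation[c] for c in others) or 1  # avoid dividing by zero
    for c in others:
        allocation[c] = round(allocation[c] * remainder / old_total)
    # Absorb any rounding drift into the last category.
    allocation[others[-1]] += remainder - sum(allocation[c] for c in others)
    allocation[category] = new_value
    return allocation

planet = dict.fromkeys(CATEGORIES, 20)       # an even twenty-percent split
print(set_slider(planet, "technology", 60))  # the other four scale down to 10 each
```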
In short, Master of Orion tries really, really hard to work with you rather than against you, and succeeds to such a degree that it can sometimes feel like the game is reading your mind. A reductionist critic of the sort I can be on occasion might say that there are just two types of games: those that actually got played before their release and those that didn’t. With only rare exceptions, this distinction, more so than the intrinsic brilliance of the design team or any other factor, is the best predictor of the quality of the end result. Master of Orion is clearly a game that got played, and played extensively, with all of the feedback thus gathered being incorporated into the final design. The interface is about as perfect as the technical limitations of 1993 allow it to be; nothing you can possibly want to do is more than two clicks away. And the game is replete with subtle little conveniences that you only come to appreciate with time — like, just to take one example, the way it asks if you want to automatically adjust the ecology spending on every one of your planets when you acquire a more efficient environmental-cleanup technology. This lived-in quality can only be acquired the honest, old-fashioned way: by giving your game to actual players and then listening to what they tell you about it, whether the points they bring up are big or small, game-breaking or trivial.
This thoroughgoing commitment to quality is made all the more remarkable by our knowledge of circumstances inside MicroProse while Master of Orion was going through these critical final phases of its development. When the contract to publish the game was signed, MicroProse was in desperate financial straits, having lost bundles on an ill-advised standup-arcade game along with expensive forays into adventure games and CRPGs, genres far from their traditional bread and butter of military simulations and grand-strategy games. Although other projects suffered badly from the chaos, Master of Orion, perhaps because it was a rather low-priority project entrusted largely to an outside team located over a thousand miles away, was given the time and space to become its best self. It was still a work in progress on June 21, 1993, when MicroProse’s mercurial, ofttimes erratic founder and CEO “Wild Bill” Stealey sold the company to Spectrum HoloByte, a publisher with a relatively small portfolio of extant games but a big roll of venture capital behind them.
Master of Orion thus became one of the first releases from the newly conjoined entity on October 1, 1993. Helped along by the evangelism of Alan Emrich and his pals at Computer Gaming World, it did about as well as such a cerebral title, almost completely bereft of audiovisual bells and whistles, could possibly do in the new age of multimedia computing; it became the biggest strategy hit since Civilization, and the biggest 4X space opera to that point, in any medium. Later computerized iterations on the concept, including its own sequels, doubtless sold more copies in absolute numbers, but the original Master of Orion has gone on to become one of the truly seminal titles in gaming history, almost as much so as the original Civilization. It remains the game to which every new 4X space opera — and there have been many of them, far more than have tried to capture the more elusively idealistic appeal of Civilization — must be compared.
Sometimes a status such as that enjoyed by Master of Orion arrives thanks to an historical accident or a mere flashy technical innovation, but that is definitively not the case here. Master of Orion remains as rewarding as ever in all its near-infinite variation. Personally, I like to embrace its dynamic spirit for everything it’s worth by throwing a (virtual) die to set up a new game, letting the Universe decide what size galaxy I play in, how many rivals I play with, and which race I play myself. The end result never fails to be enjoyable, whether it winds up a desperate free-for-all between six alien civilizations compressed into a tiny galaxy with just 24 stars, or a wide-open, stately game of peaceful exploration in a galaxy with over 100 of them. In short, Master of Orion is the most inexhaustible well of entertainment I’ve ever found in the form of a single computer game — a timeless classic that never fails to punish you for playing lazy, but never fails to reward you for playing well. I’ve been pulling it out to try to conquer another random galaxy at least once every year or two for half my life already. I suspect I’ll still be doing so until the day I die.
(Sources: the books Gamers at Work: Stories Behind the Games People Play by Morgan Ramsay, Designers & Dragons, Volume 1: The 1970s by Shannon Appelcline, and Master of Orion: The Official Strategy Guide by Alan Emrich and Tom E. Hughes, Jr.; Computer Gaming World of December 1983, June/July 1985, October 1991, June 1993, August 1993, September 1993, December 1993, and October 1995; Commodore Disk User of May 1988; Softline of March 1983. Online sources include “Per Aspera Ad Astra” by Jon Peterson from ROMchip, Alan Emrich’s historical notes from the old Master of Orion III site, a Steve Barcia video interview which originally appeared in the CD-ROM magazine Interactive Entertainment, and the Civilization Usenet FAQ, last updated by “Dave” in 1994.
Master of Orion I and II are available for purchase together from GOG.com. I highly recommend a tutorial, compiled many years ago by Sirian and now available only via archive.org, as an excellent way for new players to learn the ropes.)
“Demography is destiny,” the French sociologist Auguste Comte is said, perhaps apocryphally, to have declared in the nineteenth century. That truism has been taken to heart by many in the time since — not least by our political classes. Yet it applies equally in the world of the arts and entertainment. For in any free market, the nature of production is dictated as much by the consumers as by the producers.
Certainly this is true of computer games. Throughout the 1980s and 1990s, they were largely the province of a rather specific demographic indeed: single white males between the ages of ten and thirty from relatively privileged socioeconomic circumstances, with a bent toward intellectual rather than active pursuits — i.e., the stereotypical “nerds” of pop culture. Computer games reflected the tastes of these boys and young men in other kinds of entertainment and leisure-time hobbies: Dungeons & Dragons, Star Wars, jet fighters, World War II, action movies, heavy-metal music, fast cars, and, when they could get a glimpse of them, fast women. Although I too have liked all of these things to a greater or lesser degree at some point in my life — I did, after all, grow up as a member of exactly the demographic in question — their extreme prevalence in the cultural ghetto about which I write has often left me searching, sometimes in vain, for games with a different set of values and antecedents.
But this article is not about one or more of those interesting cultural outliers. It’s rather about an interestingly scanty subgenre of games which seems like it ought to have been perfect for the demographic I’ve just described, but that for some reason just never quite took off. Specifically, I speak of games based on the realities of space exploration in a contemporary context, as opposed to the outer-space fantasias of Star Wars and the like. After all, just about every nerdy teenage boy goes through a race-for-the-Moon phase at some point. (And why not? Has humanity ever embarked on a grander collective adventure?) Further, games on this subject would seemingly have fit in well with the broader craze for realistic simulation, as manifested by everything from F-15 Strike Eagle to SimCity, which had taken a firm grip on the industry by the end of the 1980s.
And yet there just weren’t many simulations of this particular type, and even fewer of them that did very well. It strikes me that it’s worth asking why this is so. Was there something about this subject that just didn’t work as a game, or are we dealing with a mere historical accident here? Let’s begin with a brief survey of the field of earlier games that did venture out into this territory before we turn to the one that will be our main focus for today. To help in doing so, we’ll further divide the field into two categories: vehicular simulations of spaceflight and games of space-program management.
The earliest game of the former type actually predates the personal computer. Created on a DEC PDP-8 minicomputer in 1969 by a Massachusetts high-school student named Jim Storer, who was inspired by the real Neil Armstrong’s nerve-wracking manual landing on the Moon earlier that same year, Lunar demanded that you set your own landing craft down gently before your fuel ran out. Implemented entirely in text — you simply entered the number of fuel units you wished to burn each turn in response to a changing textual status display — it inspired dozens of clones and variants, most going under the more accurately descriptive name of Lunar Lander. By the dawn of the personal-computing age in 1978, David Ahl was able to write in his landmark book BASIC Computer Games that Lunar Lander in all its incarnations was “far and away the most popular computer game” of them all. It was even converted into a graphical standup-arcade game by Atari in 1979, in which form its quiet, cerebral tension made it an incongruous outlier indeed in an arcade full of shoot-em-ups.
Other programmers inevitably grew more expansive in their ambitions for spaceflight simulation after Lunar Lander. By 1986, with the release of Spectrum HoloByte’s Orbiter, they had graduated to offering up a complete Space Shuttle flight simulator, covering all the stages of a mission from liftoff to landing. (Sadly, it arrived just in time for the Challenger disaster…) In 1992, Virgin Software published an even more complex and complete iteration on the concept, entitled simply Shuttle.
Yet neither of these later simulations came close to matching their simplistic predecessor in popularity. Their subject matter, it seemed, just didn’t quite work as a hardcore simulation. A simulation of a jet fighter flying into a war zone — such as the popular and long-lived Falcon series which Spectrum HoloByte produced after Orbiter — offered an intriguing range of tactical possibilities which a simulation of a Space Shuttle did not. A fighter pilot flying into combat is lord of his domain, in complete control of his airplane; the outcomes of his battles are entirely up to him. An astronaut flying into space, on the other hand, is merely the tip of a long spear of cooperative hierarchy; situations like those last few minutes before the Eagle landed, when Neil Armstrong was making all of the decisions and executing them all alone, have been vanishingly rare in the history of space flight. If, as Sid Meier likes to say, a good game is “a series of interesting decisions,” this fact makes spaceflight as it has existed so far in our historical reality problematic as the subject of a compelling simulation. Too often, Orbiter and Shuttle felt like exercises in rote button-mashing — button-mashing which you were expected to do exactly when and how ground control told you. Perhaps you weren’t quite the spam in a can the test-pilot peers of the earliest astronauts had so mocked them for being, but it sure felt that way at times. “As strange as it may seem,” wrote Computer Gaming World magazine of Orbiter, “a lot of flying the Shuttle is boring — a lot of pushing buttons, running computer programs, and the like — and it shows.”
In light of this, it’s telling that arguably the most entertaining of these spaceflight simulators opted for a less hardcore, more impressionistic approach. Apollo 18, developed by the Canadian studio Artech and published by Accolade in 1987, posited an alternative history where at least one of NASA’s final trio of cancelled Moon missions actually did take place. In keeping with Artech designer and theoretician-in-chief Michael Bate’s concept of “aesthetic simulation,” Apollo 18 portrayed a mission to the Moon not as a holistic vehicular simulation but as a series of mini-games, jumping from the perspective of ground control to that of the astronauts in space whenever it felt the need. This more free-wheeling, almost cinematic approach, combined perhaps with the fact that going to the Moon is inherently more exciting than releasing yet another whatsit from the Shuttle’s cargo bay in low Earth orbit, made the game a more riveting experience than its Shuttle-centric peers. Still, even it ran out of legs fairly quickly; once you’d worked through the steps of getting to the Moon and back once or twice, there just wasn’t much motivation to do so again.
So much for simulation. In the category of strategic space-program managers, we have an equally mixed bag.
Just as with the venerable Lunar Lander, one of the very first attempts to portray the contemporary conquest of space in this way was also the most successful of its era, in both financial and artistic terms. I wrote at some length long ago about 1984’s Project: Space Station, an earnest effort, masterminded by a fellow named Lawrence Holland who would go on to become LucasArts’s flight-simulator guru, to portray the construction and operation of a commercial space station in Earth orbit. Both space stations and private enterprise in space were much in vogue at the time, thanks respectively to President Ronald Reagan’s announcement of plans to build a station called Freedom in his 1984 State of the Union address and the realities of a terminally underfunded NASA whose priorities shifted with the political winds — realities which would ensure that Freedom itself never got off the drawing board, although it would gradually morph into the joint project known as the International Space Station. As I wrote in that older article, Project: Space Station, which blended an overarching strategy game with light vehicular simulation, came heartbreakingly close to greatness. But in the end, it was somewhat undone by a lack of feedback mechanisms and poor command and control — weaknesses which, it should be said, feel more like a result of the limited 8-bit hardware on which it ran than a failure of design in the abstract. But whatever its failings, it was by all indications reasonably successful in its day, enough so that, when its original publisher HESware went bankrupt within a year of its release, it was picked up at auction by Accolade and re-released by them in the same year they published Apollo 18.
Alas, Project: Space Station’s immediate successors would prove markedly less rewarding as games to play or products to sell. Space MAX, created and self-published by a former Jet Propulsion Laboratory engineer named Tom Keller in 1986, poured on the detail at the expense of playability, until it came to resemble one of NASA’s long-range planning tools more than a computer game. And Karl Buiter’s Earth Orbit Stations of 1987 buried a very appealing premise, focusing more on the mechanical details of building a modular space station than had either of the earlier games of its type, under an atrocious presentation layer which Computer Gaming World described as “a textbook case of how not to design a [GUI] interface.” And after those two less-than-compelling efforts, the strategic space-program-management subgenre pretty much dried up.
This, then, was the underwhelming state of contemporary-spaceflight games in general in 1993, when Interplay published a new take on the subject matter bearing the name of one of the most famous astronauts of all — in fact, the one who had actually been sitting there beside Neil Armstrong when he was making that hair-raising landing on the Moon. Like Apollo 18, Buzz Aldrin’s Race into Space chose to turn back the clock to those glory days of the Moon race rather than focusing on present-day space stations engaged in the comparatively plebeian labor of developing new industrial-chemical compounds and new medical treatments, important though such things undoubtedly are. The managerial perspective it adopted, however, had more to do with Project: Space Station than Apollo 18. A noble effort in its way, as indeed were all those games I’ve just written about, its own points of failure have perhaps even more to tell us about game design than theirs do.
The driving force behind Buzz Aldrin’s Race into Space wasn’t its astronaut mascot — no surprise there, right? — but rather one Fritz Bronner, a less famous American whose name would have fit perfectly among the German rocket scientists who helped Wernher von Braun build the Saturn V rocket that sent men to the Moon. In the early 1980s, as a young man with dreams of becoming an actor, Bronner spent many an evening playing a variety of tabletop wargames and RPGs with his buddies in his home state of Florida. On one of those evenings, he had just finished an RPG session when he turned on the television to see a rocket launch on the news — an event he always watched with interest, being a self-described “space fanatic.” The thought process he went through then, with his mind still addled by game systems and dice rolls, will awaken immediate recognition in anyone who has ever played Race into Space. For the most fundamental mechanic in that game has its origin right here:
The game player in me suddenly wondered what the odds were for a successful launch. The next thought I had was the chance of failure. I formulated in my mind a guess on the total number of [successful] launches versus failures. I quickly concluded that out of ten previous launches, nine of them were successful. Just before liftoff, I rolled the percentile dice and rolled below the range, which indicated to me that the launch would be successful. A few minutes later, another satellite reached orbit. I was elated that I had come up with a pseudo-model for launch success.
Immediately I wondered how a manned launch would work. I started to play with some rough mathematical figures. I selected a one-stage rocket and a two-stage rocket and then realized that I would have to devise a safety factor for a capsule. I think I came up with around 85 percent for the capsule. Then I plunged into what mission steps would occur in spaceflight. I rolled the dice on a three-step suborbital flight and to my excitement it worked! Suddenly each step of the mission was monumentally important. I became tense as I rolled the dice. It reminded me of the flavor of the early spaceflights.
I called [my friend] Steve [Stipp] over and told him of my successful suborbital flight. After his own successful flight, we both gleefully started scribbling notes on possible payload weights and additional mission steps. Soon we had scraps of paper filled with my horribly drawn stick figures of capsules that were lofting astronauts into space.
At this point, it was success or total failure on a mission step. We both realized that it was too crude and unrealistic for a rocket to always blow up on the pad. There were cancelled launches and aborts that should be considered. We laughed and played and scribbled more notes and sketched drawings for several hours, and then folded it up and forgot about it for several years.
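The procedure Bronner describes translates almost line for line into code. Here is a reconstruction in Python, borrowing his numbers where he gives them and inventing the rest; the finished game adds aborts, partial failures, and much else on top of this core.

```python
import random

def fly_mission(steps, rng=random):
    """steps: a list of (name, success_chance) pairs, with chances in percent.

    Roll the percentile dice for each mission step in order; the first
    failed roll ends the mission."""
    log = []
    for name, chance in steps:
        roll = rng.randint(1, 100)  # the percentile dice
        ok = roll <= chance         # rolling at or under the rating is a success
        log.append(f"{name}: rolled {roll} against {chance} -> {'OK' if ok else 'FAILURE'}")
        if not ok:
            return False, log
    return True, log

# Bronner's three-step suborbital flight, with his 85-percent capsule figure.
suborbital = [("Launch", 90), ("Capsule systems", 85), ("Reentry and recovery", 90)]
success, log = fly_mission(suborbital)
print("\n".join(log))
print("Mission success!" if success else "Mission failure.")
```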
In 1985, Bronner’s acting dream took him from Florida to New York City. His wife was working as a long-haul flight attendant, leaving him with plenty of solitude for contemplation in between auditions there in the big city. A television documentary called Spaceflight refreshed his memories of playing that improvised dice-throwing game of space launches. Just as importantly, it shifted his thinking toward an historical perspective. What if he made a game about the space race of the 1950s and 1960s, with one player in the role of the Americans and the other of the Soviets, each trying to be the first to reach the Moon? Each player would have to research the technology necessary for each stage of the endeavor, then test it with a live launch. The tension that would make for interesting choices was clear: that between researching everything exhaustively to achieve the best possible safety rating and pushing the timetable to beat out your opponent. At bottom, then, it would be a “press your luck” game — an evergreen in tabletop game design, but implemented here in the service of a thoroughly unique theme. For the next couple of years, Bronner continued to develop and refine the concept, even sending samples to many board-game publishers, albeit without managing to stir up much interest.
In 1987, Bronner’s acting dream took him from New York City to Hollywood. While he would never become the movie star he might have imagined back in Florida, he would carve out a solid career for himself as one of the film industry’s unglamorous but indispensable utility players; he would take bit parts in dozens of movies and television shows alongside starring roles in hundreds of commercials, and eventually also take on small-time writing, directing, and producing gigs. A year after arriving in Hollywood, he wrangled a meeting with the Los Angeles-based Task Force Games, best known for their Star Fleet Battles tactical space-combat games which took place in the Star Trek universe. He finally got a positive response from this publisher, and soon signed a contract with them to publish the board game Liftoff!.
Liftoff! made its public bow in the summer of 1989 at the Origins International Game Expo, one of the tabletop hobby’s two biggest American events, which happened to be held that year right there in Los Angeles. The reaction to Bronner’s game at Origins was cautiously favorable, but it never translated into much in the way of sales in the months that followed. Task Force Games had been bought by the computer-game publisher New World Computing the year before they signed the contract with Bronner; it was for this reason that they were in the Los Angeles area at all, having been moved there from Amarillo, Texas, to join their new parent. Yet the relationship wasn’t living up to either partner’s expectations. Profits, which tended to be scant at the best of times in the tabletop industry, had become nonexistent, as the expected synergies between the computer and the tabletop business failed to materialize. In 1990, Task Force’s founder John Olsen scraped together enough funding to buy his company back out from under New World and moved with it back to Amarillo. Necessity forced the downsized entity to focus its resources on Star Fleet Battles, its most well-known and marketable franchise. Liftoff! died on the vine.
But Fritz Bronner wasn’t willing to let his game go so gently into that good night. Although he had never owned a computer in the past, his arrival in Hollywood had coincided with the beginnings of a buzz from the more forward-thinking members of the media elite about the future of interactive video and multimedia computing. It certainly hadn’t been lost on Bronner when signing the contract with Task Force Games that the company’s parent was a publisher of computer games. In fact, he had tried to interest New World in a digital version of Liftoff! repeatedly, but could never really get their attention. Fortunately, his attorney had ensured that the contract he signed with Task Force/New World gave them just one year to develop a computerized version, if they wished to do so; afterward, those rights reverted to Bronner himself. He soon bought his first computer, a used Commodore Amiga 500, to consider the possibilities. In the summer of 1990, he started talking with a young programmer named Michael K. McCarty. At year’s end, the two of them formed a company which they named Strategic Visions, and began working on a demo to show to publishers.
It perhaps says something about the zeitgeist of gaming on the cusp of the multimedia age that Bronner and McCarty elected to make their demo a non-interactive video rather than an interactive game. From the start, Bronner’s vision for the project had been to move the mechanics of the board game onto the computer essentially intact, then spice them up with lots of video footage from the archives of NASA and the Soviet space program. His timing in this respect was perfect: the fall of the Iron Curtain helped immensely in getting access to the latter’s videos. Meanwhile the fact that all of the footage was the product of government agencies, and thus released into the public domain, helped in another way. Less positively, this overweening focus on the multimedia aspects of the project, which would continue throughout its duration, would rather distract from some worrisome flaws in the foundation of the actual rules set — an issue we’ll return to a bit later.
In the short term, though, the non-interactive demo served its purpose. In contrast to the relative lack of interest the tabletop design had garnered, the proposed digital version attracted lots of publishers when Bronner and McCarty brought their demo to the Summer Consumer Electronics Show for private screenings in June of 1991. The videos Bronner showed of rockets soaring and exploding were well-nigh irresistible to an industry all abuzz with talk of interactive movies incorporating just this type of real-world footage. Over thirty potential partners viewed the demo reel in the course of the show, and several of them came forward with serious offers.
Bronner settled on Interplay Productions for several reasons: they were also Los Angeles-based, always a nice advantage; he got on well with Interplay’s head Brian Fargo; and Fargo had immediately run with an idea Bronner had mentioned in passing, that of signing up Edwin “Buzz” Aldrin — by far the most gregarious and ambitious of the Apollo 11 astronauts in terms of media and marketing — to lend his endorsement to the game. Indeed, Fargo already had Aldrin on board when the contract was signed in August of 1991. Thus did Liftoff! become Buzz Aldrin’s Race into Space.
Aldrin’s direct participation encompassed nothing more than marketing — he regaled a long string of trade-show attendees and magazine editors with his well-worn tales of landing on the Moon, while saying next to nothing about the game itself — but it did lead to the computer game’s most significant substantive addition to the board game. Bronner added a roster of astronauts to be recruited and trained, who manifested differing strengths and weaknesses and even differing personalities which could cause them to be more or less effective when combined into crews. The idea and approach are so similar to the astronaut management found in Project: Space Station that one suspects they must have been inspired by that earlier game. That said, I have no proof that this was so.
Otherwise, though, Race into Space is a fairly straightforward re-implementation of Liftoff! rather than a major expansion upon it. In fact, some parts of the board game are actually trimmed away, such as the ability to play as the head of a fictional European or Asian space agency, which Bronner had included in order to allow up to four players to gather around the tabletop. Race into Space, on the other hand, is limited to two players, each of them controlled either by a human or the computer.
Pitched to Interplay with an absurdly optimistic six-month development timeline, Race into Space ran over that estimate by a factor of three. Indeed, it became the first game in history to get two feature-length previews in Computer Gaming World, one in January of 1992 and one in December of the same year. An early decision to switch development from the fading Amiga to MS-DOS didn’t help matters; nor did Strategic Visions’s need to rely on Interplay’s art team for most of the non-digitized graphics, work that got done only as time allowed betwixt and between other in-house projects. Most of all, though, the project began just a little bit too early, before the typical consumer computer was quite able to live up to Bronner’s multimedia ambitions. Even the version of the game that finally did ship on floppy disk in March of 1993 was heavily compromised by the limitations of its storage medium, with digitized still photographs standing in for most of the videos the original demo had promised. Players would have to wait for the CD-ROM version, which didn’t arrive until fourteen months later, to truly see the game as its designer had imagined it.
Race into Space is played in turns lasting six months each, beginning in 1957 and stretching until either 1977 arrives or someone manages to land on the Moon. Economics will play a big role in your success or lack thereof; you’re provided with a semiannual budget which increases only gradually, with the completion of major milestones according it a more substantial boost — especially if you manage them before your opponent — and catastrophic failures having the opposite effect. This approach is rather ahistorical on the face of it — in a classic example of throwing money at a problem until it bears fruit, the budget of NASA in particular was dictated more by the achievements of the Soviets than by the agency’s own accomplishments — but is probably necessary for Race into Space to work as a game.
Still, the core of the experience remains what it was when a young Fritz Bronner first started experimenting with the idea of a space-program-management game in the early 1980s: watching with bated breath from mission control as your rockets go up, hoping each successive step will go off without a hitch to get you your next mission milestone. Said milestones encompass everything from launching the first unmanned satellite — the game begins in the year of Sputnik — to the Moon landing itself. Yet, beyond the first few milestones at any rate, they don’t break down into a mere linear progression of steps to be mindlessly walked through. You can combine milestones into one mission; for example, you might make your first flight of eight days or more duration the same one where your astronauts first execute a space walk. And you can also skip some of them entirely, if you’re pressed for time and are willing to forgo the budget boosts with which they tempt you; the aforementioned space walk, for example, isn’t even strictly necessary for a Moon landing.
Most importantly, Race into Space lets you implement not only the historical method of getting to the Moon — that of employing a space capsule which orbits the Moon and a separate landing craft to take part of the crew down to the surface — but also a number of other approaches that were discussed at the time, such as an all-in-one-spacecraft approach (this requires developing a monster rocket that makes a Saturn V look like a kid’s toy) or even a reusable space shuttle (this requires both an enormous investment of time and money and a really slow opponent). The variety of alternate histories the game allows is not infinite — more on that momentarily — but is enough to provide for at least a few interesting and even educational playthroughs. If nothing else, you’ll walk away from your failed attempts to rewrite history with a better understanding of why NASA chose the approach they did.
But alas, Race into Space soon begins to show those cracks in its foundation which I alluded to earlier, which are partly born from the lack of a clear sense of its own goals as a game. One can imagine at least three abstract approaches fitting into the general framework of “a managerial game about the race to the Moon.” One would be a heavily experiential game, in the spirit of Michael Bate’s aesthetic simulations, de-emphasizing the competitive aspects in favor of taking the player on a journey through those heady early days of the space age. Another would be a replayable game of hardcore strategy, in which the fiction of the Moon race functions as a mere thematic skin for the mechanical underpinnings which quickly become the player’s real focus. And still another would be an open-ended sandbox, a learning tool that lets the player experiment with many different approaches to landing on the Moon and to spaceflight in general.
Race into Space never firmly commits to any one of these approaches, but rather feints toward all of them in various places. The end result is a confusing mishmash of elements that are constantly cutting against one another. The heavy reliance on photographs, video, and sound clips from the period in question seems to push it into the experiential camp, but its board-game-derived mechanics and relatively short play time — a full game usually takes no more than two or three hours to play — pull it in the second direction I outlined. And so the cognitive dissonances start to add up. The video clips lose their appeal when you’re forced to click through the same ones over and over, every time you play, even as it remains debatable whether the mechanics are really compelling enough to make it a game you want to return to again and again under any circumstances; there are really only one or two best paths to follow to get to the Moon, and once you’ve found them there’s little reason to keep playing. Meanwhile the game’s educational sandbox potential, while by no means nonexistent, is also sharply limited. True to its board-game roots, Race into Space doesn’t simulate spaceflight at all beyond rolling dice against an arbitrary set of success-or-failure percentiles. In terms of spaceflight hardware, it lets you mix and match a set of pieces it provides for you, and pour money into each piece’s research to push its reliability percentage up, but it’s nowhere near sophisticated enough to let you develop your own components from scratch. Here too, then, it feints in a promising direction without going far enough to truly satisfy over the long term.
Yet this sense of confusion about what Race into Space actually wants to be constitutes only its second biggest problem. Its biggest problem of all doesn’t require as much design philosophy to explain: the darn thing is just too darn hard. Something is badly off with the math behind this game — something you sense more than you can know. Playing it quickly begins to feel like that memorable montage of exploding and misguided rockets from the film The Right Stuff. You can recover in fairly short order from failed launches in the early phases, when you’re mostly launching unmanned craft, but they turn devastating when they start chewing through your astronaut corps like a wolf in a chicken coop. Failed missions not only destroy the morale of your surviving astronauts, causing them to perform worse, but knock the reliability of the failed component almost all the way back to zero, forcing you to research it up again from scratch. This of course makes no sense in strictly logical terms; in the absence of any new inputs, a defective component should be defective to exactly the same extent on the next flight. Rather than conveying the rounds of investigation and soul-searching that always accompanied a real loss of life in the space program, as it was doubtless intended to do, this mechanic just furthers the impression that the game is out to get you at any cost. The fact that the computer player mysteriously seems to be able to cut more corners than you without killing astronauts by the dozens contributes strongly to the same impression.
Unkind though it may sound to say, I can’t help but suspect that Race into Space’s issues in this area reflect a fundamental misunderstanding of statistics on the part of a younger Fritz Bronner — a misunderstanding that somehow never got corrected through all his years of working on his game. A mission does not, as one might initially imagine, have a chance of success equal to the reliability percentage of its dodgiest hardware component. On the contrary: the various components actually undergo reliability checks at various times — often multiple times — during a mission. Therefore even a stack of components which have all been researched up to a reliability of 95 percent still has a substantial chance of failing in some more or less disastrous way on a more complex mission. And yet you simply don’t have time to laboriously research every component up to its maximum reliability, which for many of them is substantially below 95 percent anyway. You’re in a Moon race, after all. You have to roll the dice. Small wonder that so many players over the years have advocated save-scumming — that dastardly practice of saving and reloading until the dice roll your way — as the only practical way to play. That, or play a two-human-player game, but just click through your “opponent’s” turns without doing anything. Playing that way, you might just be able to get to the Moon before 1977.
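To put some rough numbers on it: if the reliability checks are independent, their probabilities multiply, and they shrink alarmingly fast. Here’s a minimal sketch of that arithmetic, assuming hypothetical check counts of my own choosing rather than whatever the game really uses.

```python
# A back-of-the-envelope sketch of the compounding-reliability problem. The
# check counts and percentages here are illustrative assumptions, not values
# drawn from Race into Space's actual data tables.

def mission_success_chance(reliabilities_percent, checks_each=1):
    """Probability that every independent reliability check passes."""
    chance = 1.0
    for r in reliabilities_percent:
        chance *= (r / 100.0) ** checks_each
    return chance

# Five components, all researched up to 95 percent, each checked twice in flight:
print(mission_success_chance([95] * 5, checks_each=2))  # ~0.599
# "95 percent reliable" hardware still dooms roughly two missions in five.
```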
So, despite the historical verisimilitude it works so hard to inculcate via its video clips and all the other period-specific touches, Race into Space’s mechanics lead to a simple game of luck at bottom, and one where the odds are stacked against you at that. There is no opportunity to jump in and make decisions when a mission starts to go wrong — no chance, in other words, to improvise your way through a drama like the Apollo 13 mission. You’re a mere helpless bystander from the moment a mission begins until it ends.
The game’s delight in making its players’ rockets go boom provoked such howls of protest from early purchasers of the original floppy-based release that Interplay soon released a patch to tweak the numbers somewhat — although still nowhere near enough in the opinion of most. The very fact that Bronner felt able to manipulate the numbers in this way, of course, demolished any remaining belief players might have harbored that the numbers had any real historical basis at all. Clearly they were strictly arbitrary. Bronner never did achieve a balance that felt both playable and true to history. And that failure makes it difficult to consider Race into Space as a whole as anything but another interestingly failed attempt at making a game out of real-world space exploration.
Race into Space sold in reasonable numbers for Interplay, but never huge ones, especially after word of just how frustrating it could be got around on the street. Thus none of Bronner’s plans for sequels, which he had publicly discussed at some length in the run-up to release, ever got off the metaphorical launching pad. Strategic Visions soon folded up shop, and Bronner continued his career in Hollywood. He’s never designed another game.
Ironically, the sequels Bronner discussed might actually have made for better games than this one. One idea, for example, would have focused on a manned mission to Mars. Removed from the context of real history, not being surrounded by all those grainy old video clips reminding players of what once was, such a game would have been able to exist entirely on its own terms, and might have wound up feeling more satisfying because of it even if its mechanics had been left largely unchanged.
As it is, though, Race into Space displays that most telling sign of an ingenious game idea with questionable execution: players lining up with ways to fix it. Their efforts were confined to the realms of speculation and hex editors until 2005, when, the rights having reverted to Fritz Bronner, he generously released the game and all of its source code under the General Public License. In the time since, a small community of enthusiasts has continued to port and refine the game on a sporadic basis, but it’s never managed to garner a critical mass of developers or players. Ditto an attempt at a full-fledged commercial revival of the concept by the wargame publisher Slitherine, which arrived complete with the original game’s astronaut mascot in 2014 under the name Buzz Aldrin’s Space Program Manager.
While Race into Space’s most specific, practical design mistakes aren’t too hard to identify, the more generalized failings of it and its peers in the scanty tradition of contemporary-space-program games do rather prompt one to ask another question: is there something about the subject matter itself that causes it not to work as a satisfying game? I believe I’ve actually done a reasonable job of answering that question already for the case of spaceborne vehicular simulations: as I noted near the beginning of this article, an astronaut in space just doesn’t have enough independent agency in most situations to make for a reasonably realistic simulation that’s also engaging as a game. But what of the other broad category of games I’ve addressed today, the one to which Race into Space belongs: that of space-program managerial games?
For a long, long time after Race into Space, one might have been forgiven for assuming that space-program managers as well were indeed nonstarters as satisfying games. But then, in 2015, a game called Kerbal Space Program came along to prove such naysayers wrong. I don’t usually write about modern games here, but I will briefly outline the premise of this one.
The titular Kerbals are a species of little green aliens who run a space program of their own on their planet of Kerbin. Despite the Kerbals’ cartoony cuteness, the space program itself is simulated with meticulous attention to detail, including all of the particulars of physics and aeronautics which Race into Space so conspicuously lacks. Players with an interest in rocketry or aeronautical engineering can and do lose years of leisure time to it. It may or may not be a game for you, but it is, by any objective definition, an impressive piece of work, far more intrinsically fascinating than any other that I’ve written about today.
And how does it accomplish this feat? One obvious answer is that it knows what it wants to be first and foremost: a sandbox for exploring the practical possibilities and limitations of space travel using the technology of our own recent past, present, and near future. A dedicated modding community has helped the designers to graft on additional layers of competitive strategy and economics for those who want them. Nevertheless, the game’s central delight remains that of creation and discovery. Kerbal Space Program is, in other words, one of the preeminent sandbox games of our time. And it’s completely comfortable with itself in that role, being free of the cognitive dissonances of Race into Space.
This stronger sense of itself is certainly one of the secrets to Kerbal Space Program’s success. And here’s another: having noted earlier that the proposed non-historical sequels to Race into Space might have led to more compelling games, I’ll now submit Kerbal Space Program as Exhibit One in evidence for that argument. Freed from the weight of all that real human history, existing as it does in a world of cartoon aliens, it can just be a game.
Games can be great tools for exploring other lives and other times, but sometimes you just want to play. History, after all, doesn’t occur for our ludic amusement. Every wargamer knows that the number of unaltered historical battles that lead to good games is very small indeed; most real battles have their outcomes foreordained before they even begin. Perhaps the Apollo program and the Space Shuttle and the International Space Station and all the rest just don’t have the right stuff to make a worthy game. But that’s okay — because it means that, instead of recreating the storied past, we can imagine an exciting future. That goal is at least equally worthy — and, as Kerbal Space Program so thoroughly illustrates, it’s something that a game about space exploration can most definitely do, and do well at that.
(Sources: the books The Buzz Aldrin’s Race into Space Companion by Fritz Bronner, Designers & Dragons, Volume 2 by Shannon Appelcline, and BASIC Computer Games by David Ahl; Computer Gaming World of August 1986, March 1987, October 1987, February 1988, January 1992, May 1992, December 1992, and August 1993; Strategy & Tactics 212. Online sources include Leon Baradat’s comprehensive Race into Space site, the article “The Buzz is Gone” at The Escapist, and Steve Stipp’s homepage.)
I first read Piers Anthony’s thick 1969 novel Macroscope when I was in my early teens, and haven’t returned to it since. Nevertheless, I still remember the back-of-the-jacket text on my dog-eared old first paperback edition: “Existence is full of a number of things, many of them wondrous indeed — and these are the things of this soaring novel.” This high-flown blurb has remained so memorable to me because it’s so unlike anything anybody would ever write about Anthony’s work today.
Piers Anthony was born in 1934, and first made a name for himself in literary circles as one of the slightly lesser lights among science fiction’s New Wave of the 1960s. He was no Roger Zelazny, Ursula Le Guin, or Harlan Ellison, but he was regarded as a modestly promising young writer in his own right; he even contributed a story to the second of Ellison’s landmark Dangerous Visions anthologies in 1972.
But that honor, along with Macroscope, which became his second and last novel to be nominated for a Hugo award in 1970, actually marks the high point of Anthony’s respectable literary career. It had always been difficult for him to pay the bills as a second-string writer of serious speculative fiction, and it only grew more difficult as the luster faded from the New Wave in the 1970s and his books attracted even less attention. He was saved from a perhaps not-undeserved obscurity by Lester del Rey, one of genre fiction’s most legendary editors and curators. As the first to nurture and publish such writers as Stephen R. Donaldson, Terry Brooks, and David Eddings, del Rey became largely responsible for the post-Tolkien, post-New Wave boom in epic fantasy fiction. But, apparently seeing a different set of strengths and weaknesses in Anthony than he did in those other charges, del Rey guided him down a rather less epic path. Thus in 1977 Anthony came to write A Spell for Chameleon, the first novel in an endless series of them set in the pun-infested light-fantasy world of Xanth.
A Spell for Chameleon certainly wasn’t the worst fantasy novel to be published that year. While it had nothing of any substance on its mind whatsoever, its very lightness made it a welcome alternative to the likes of the three other writers I’ve just mentioned, whose books came complete with all the labored self-seriousness of an Emerson, Lake, and Palmer album. The fact is, there really wasn’t much else like A Spell for Chameleon on bookstore shelves in 1977; it felt like a genuine breath of fresh air.
Unfortunately, that book was as good as Xanth ever got. When it became the best-selling novel he had ever written by far, Anthony recognized it for what it was: a formula for maximum sales with minimum labor investment. And from that point on, he never looked back.
Still, even the first few Xanth novels after A Spell for Chameleon weren’t horribly written by the standards of their kind. Eventually, though, Anthony decided that such niceties as editing were incompatible with his desire to publish one of them every year, along with two or more other books from his other series. In time, he admitted to writing his novels using a “template” in his word processor — ah, the wonders of technology! — that he needed merely to fill in, Mad Libs-style. He was actually able to outsource much of the writing to his readers, by inviting them to submit their own jokes and plots and character outlines. But where the rubber meets the road, in the form of sentences on the page, none of these assistants could make up for his refusal to take the time to be any good at his craft. There are sentences in latter-day Anthony in particular which are simply appalling from a writer with decades of experience. Consider, for example, this extract: “So why would I break with him? Because I came to the conclusion that he was a loose cannon. The problem with such a cannon is that it is more dangerous to its friends than to its enemies. I had suffered such looseness before…” If ever a court is established for crimes against the English language, Piers Anthony ought to be one of the first writers it indicts.
Between 1977 and today, Anthony has churned out no fewer than 42 Xanth novels, with another four reportedly complete and merely awaiting release as of this writing. And in between all those Xanth novels, he’s written dozens of other books. His guiding principle appears to be that not one word he writes should ever be put to waste; he wants somebody to pay for every last stroke of the keyboard. Thus he’s written two rambling, unfocused “autobiographies” which seem to be composed of journal extracts and “how to be a successful writer” advice columns he wasn’t able to place anywhere else. And thus when he wrote a series of letters to a twelve-year-old Xanth fan who had been paralyzed in a car crash, he irretrievably tainted the kindness he had evinced in doing so by compiling all of them into a book and publishing that too.
Anthony’s great stroke of genius for promoting all of these books came right out of the modern social-media playbook: he built his brand out of himself, fostering a cult of personality that superseded pesky details like the quality of his prose or the originality of his plots. For most people, Xanth fandom has a definite expiration date; it generally begins in one’s preteen years, and is over around the time one learns to drive a car. Within that window of time, however, many youngsters are all in for Xanth, and this is due not least to the connection they feel to its mastermind. Early on, Anthony took to appending an “author’s note” to each of his novels, in which he mused about the circumstances of its creation. That anyone, much less impatient youngsters, should have found these interesting was rather bizarre on the face of it. Anthony didn’t travel much or have adventures in the real world or build or do unusual things. He mostly just sat in front of his computer in his suburban home — not exactly a memorably unusual lifestyle in this modern world of ours. In the context of his author’s notes at least, the purchase of some extra memory for his computer or the switch to a new word processor counted as major life events for him.
And yet his fans absolutely ate it up. Most of them were still at an age when books and other creative works seemed to fall out of the sky fully-formed from a realm completely isolated from their own experience. Their glimpses of a real person behind the curtain of the Xanth novels marked for many of them their first exposure to the idea of artistic creation as a human labor — perhaps one they could even engage in themselves. And so, far from being a disadvantage, this sweeping away of the creative mystique was a big part of Xanth’s appeal, inculcating enormous loyalty in Anthony’s young readers. A memorable 2012 episode of the radio show This American Life illustrates the real bond that existed (and presumably still exists) between Anthony and his fans by telling the story of a picked-on teenage boy who ran away to the house of his favorite author — and was, it must be said, treated by said author with great kindness and compassion when he arrived there.
Yet even as he was nurturing such a warm relationship with his fans, Anthony was cementing his reputation among his peers as one of the biggest jerks in genre publishing. His career has been a long string of feuds and shattered friendships, which he describes at length in his autobiographies. His most longstanding battle has been with the Science Fiction Writers of America, an organization that he claims “blacklisted” him during his lean years; no one actually involved with the SFWA is quite sure what he’s talking about. The real core of Anthony’s anger would seem to be his frustration at not being taken seriously by such establishment organs as this one. He’s long since been dismissed — admittedly, on pretty good evidence — as a hack; there will be no more Hugo talk in his future. Anthony complains endlessly about how all of his more “adult” fiction has been overshadowed by the Xanth novels which have made him a rich man, but has never taken the obvious step of simply not writing any more of the latter. The tension between artistic and commercial demands has tortured the psyche of many a writer, but in Anthony’s case it feels more comical than tragic, given that his adult books all tend to read like Xanth novels with more explicit violence — and, most especially, with much more explicit sex. And so we arrive at the really disturbing side of Piers Anthony.
I want to be especially careful in what I say next because I’ve always tried to separate the creator from his work when writing criticism of any stripe. Certainly there’s no shame in writing disposable children’s entertainment. And certainly there have been plenty of other writers who have also been jerks, including some whose talents far exceeded those of Anthony. And certainly writers need to be able to address difficult, uncomfortable subject matter without being accused of promoting or glorifying the things they describe; Vladimir Nabokov should not be deemed a pedophile because he wrote Lolita. But, even having taken all of that to heart, it remains hard for me to avoid the feeling when reading Piers Anthony on the subject of sex that something is simply wrong with this guy.
Anthony’s wrongness about sex, I should emphasize, isn’t the usual science-fiction author’s clunky mawkishness. It’s more extreme even than Robert A. Heinlein during his Dirty Old Man phase, when he wrote about sex like an alien with no understanding of human psychology might, describing it like any other mechanical process might be described by any of the dozens of stock Competent Men who populated his novels: “Now, you see, Friday, it’s just a matter of inserting Tab A here into Slot B, then moving it in and out like so.” No, Anthony’s obsession with girls just past the age of puberty — or in some eye-opening cases with girls who have not yet reached puberty — is more pernicious than this sort of rank cluelessness. It’s the reason that, if I saw a youngster I was fond of reading an Anthony novel, I wouldn’t just shrug my shoulders, but would actively try to steer her toward something I consider more healthy. For there really is, I think, a sickness — moral if not psychological in the clinical sense — running through this man’s body of work.
This side of Anthony isn’t new, although it has grown more pronounced over time as he’s become less beholden to editors. A Spell for Chameleon’s gender politics weren’t particularly progressive even by the standards of the late 1970s. Its hero is a young man named Bink who wants something which his author considers to be impossible under normal circumstances: a girl with whom he can enjoy a warm friendship-of-intellectual-equals and whom he also finds sexually attractive — for it’s taken as a given by Anthony that a smart girl can never be a sexy one. The solution to Bink’s problem arrives in a girl with the unsubtle name of Chameleon, who cycles over the course of a month between a hideous but brilliant hag and a beautiful but moronic nymphomaniac. (Yes, Anthony’s idea of allegory really is that banal.) And so Bink’s problem is solved. The solution comes complete with a bit of teenage philosophizing, which Bink delivers to Chameleon’s nympho-bimbo incarnation just before they go at it again.
“I like beautiful girls,” he said. “And I like smart girls. But I don’t trust the combination. I’d settle for an ordinary girl, except she’d get dull after a while. Sometimes I want to talk with someone intelligent, and sometimes I want to –” He broke off. Her mind was like that of a child; it wasn’t really right to impose such concepts on her.
“That’s the point,” he said. “I like variety. I would have trouble living with a stupid girl all the time — but you aren’t stupid all the time. Ugliness is no good for all the time — but you aren’t ugly all the time either. You are — variety. And that is what I crave for the long-term relationship — and what no other girl can provide.”
Cringe-inducingly adolescent though this take on guys and chicks might be — especially when one considers that it was written without any apparent irony by a 43-year-old man — it’s pretty harmless compared to where the Xanth novels went later on. Uncomfortably young girls get put in sexually charged situations, often with much older men, over and over. There’s little to no explicit sex — note where Bink “breaks off” in the extract above — but the subtext keeps getting more and more creepy. By 1992, Anthony felt free to entitle one of his Xanth novels The Color of Her Panties. At this point, it was hard to avoid the feeling that he was deliberately trolling the critics who had by now been calling him out for his books’ pervy subtexts for quite some time.
Still, Anthony’s allegedly prurient interest in his young female subjects would be much more speculative — and I would probably not be writing this article — were it not for those other, “adult” books of his. Many of these ooze the same disturbing fixations as the Xanth books, but are able to carry them through to, shall we say, consummation. Exhibit Number One in this category must be Firefly, a 1990 attempt at horror dealing primarily with what Anthony himself describes as “inflamed and perverted sexual desire.” It includes a lengthy sex scene between an adult man and a five-year-old girl, described in minute detail. In fact, the scene is another, rather horrifying example of Anthony’s habit of outsourcing the writing of his books: it came from an imprisoned pedophile with whom he corresponded. Anthony, in other words, literally published child porn. It’s quite simply the most disturbing thing I’ve ever read in a lifetime of prolific reading. Not even Mein Kampf bothers me like this. Needless to say, I won’t be quoting it here.
But, you counter, this was a horror novel, a genre meant to shock and transgress norms. Don’t confuse the author with the work, etc. And I might reluctantly agree with you, even if I didn’t have any personal desire to ever read anything by this writer again. But then comes the author’s note, in which Anthony justifies the rape of this five-year-old girl because… she wanted it. She was asking for it, tempting the man who had sex with her into the deed. (Did I mention that she is five years old?) Her name is Nymph. (Did I mention that Anthony isn’t subtle?)
There seems to be a broad spectrum of human desire, and what we call normal is only the central component. It may be that the problem is not with what is deviant, but with our definitions. I suggest in the novel that little Nymph was abused not by the man with whom she had sex, but by members of her family who warped her taste, and by the society that preferred to condemn her lover rather than address the source of the problem in her family.
Those who feel that [the imprisoned pedophile’s] stories represent abnormal taste should read My Secret Garden by Nancy Friday, which details some of the sexual fantasies of women. Neither is Nymph an invention; similar cases are all too frequent. These aspects were from my research rather than my imagination. I don’t know what is right and what is wrong; I merely hope to raise some social questions along with the entertainment provided in the novel. I suspect our priorities are confused. We have problems enough with world hunger and injustice, without making more by punishing people for deviant but perhaps harmless behavior.
Here we have it from the horse’s mouth. The rape of a five-year-old girl is “perhaps harmless.”
We often see this pattern of argument — the “hey, I’m just asking questions!” pattern — among those who wish to say something much of the society around them will consider reprehensible but who lack the courage to stand right up and do so. (You see it constantly, for example, in the toxic arena that is present-day American politics.) Added to all of the other circumstantial evidence swirling around Piers Anthony — his many almost-as-provocative statements made in interviews; his correspondences with multiple imprisoned pedophiles, not just this one; the unending fascination with pubescent and prepubescent girls running through most of his novels — it raises a strong feeling that something is indeed wrong inside this fellow’s head. I should emphasize that I have no reason to believe that Anthony has ever acted on the urges in question, if they do in fact exist. Has he found a way to satisfy them through his writing instead? That would be a good thing, if so; the crime exists not in the unfortunate psychological kink of being a pedophile, but in acting upon it. Or, that is, it would be a good thing — if only his books weren’t being read.
Once you’ve seen these things, you can never unsee them. Anthony’s cherished relationships with his young fans — and again, I have no reason to believe he has ever abused their trust in any physical sense — take on a new, creepy flavor. Suddenly all those long letters to the paralyzed girl, as collected in the book Letters to Jenny, begin to read disturbingly like… well, like he’s flirting with her. And suddenly we breathe a sigh of relief that the teenage runaway whose story was chronicled on This American Life was a boy rather than a girl. How much of this is real and how much is projection? It’s impossible to say. (Hey, I’m just asking questions…) I will say only this: please, read someone else’s books, and try to get your children to do so as well. I smell something rotten at the core of this writer’s output, and I know I’m not alone.
All of the foregoing ruminations were prompted by my ostensible “real” subject for today, the 1993 Legend Entertainment game Companions of Xanth. Ironically, I find myself with somewhat less to say about that subject than I do about Piers Anthony’s odd and disturbing career arc as a writer. The game is… reasonably good, actually, if hardly one of the most memorable works in the history of adventure gaming. The creepiness factor is kept surprisingly low under the circumstances, the humor is hit-and-miss but always good-natured, and the design, with one glaring exception which we’ll get to momentarily, is up to Legend’s usual high standard. Further, in one sense at least, the game represents a real landmark in Legend’s history: it marks the point where they finally dumped their parser and embraced the point-and-click paradigm, thus ushering in the second of the three broad phases of the company’s history and ushering out the age of the commercial text adventure writ large.
Companions of Xanth came to exist at all entirely thanks to Legend’s everyday composer and music programmer Michael Lindner, who also happened to be one of those rare readers who defy the usual age-circumscribed window of Xanth fandom; he had retained his affection for the series right into his adult years. He had first supplemented his usual duties at Legend with those of a writer and designer on 1992’s Gateway, a project consciously engineered by the company’s co-founder Bob Bates to serve as a sort of boot camp for training up new designers. Having duly completed that apprenticeship, Lindner begged for permission to make a Xanth game as his first project as a head designer. His managers obligingly made inquiries, and soon brought home a contract to make a game version of Piers Anthony’s latest Xanth novel-in-progress, which was to be titled Demons Don’t Dream. As was more usual than not for licensed projects like this, Lindner had very little direct contact with Anthony in the course of making the game. He largely had to content himself with pre-release proofs of the novel in question, whose plot the game he made follows fairly closely but not slavishly.
We can probably feel pleased about Anthony’s lack of involvement, in that it means that most of the pervier elements of Xanth are missing. While Anthony in his novel dwells at length on the “luscious young women” in the story, Lindner lays it on considerably less thickly.
Still, the plot is rife with other Xanthian staples — not least the meta-fictional elements that had become such a hallmark of the series by this point, sixteen books in. Many of the jokes, situations, and characters in both the book and the game come courtesy of Anthony’s army of fans, who are scrupulously credited by name in the book’s author’s note. The most notable example of fan service is the character of Jenny Elf, based on the author’s young friend Jenny, the car-crash victim he wrote to at such length. (By this point, Anthony tells us in his author’s note, she had recovered from her paralysis sufficiently to sit and even stand briefly without support. She would make further strides in the years to come, although she would never regain her full range of motion.) Jenny Elf, who is blessedly not overly sexualized even in the book, appears alongside Sammy Cat, the real girl’s favorite pet.
You yourself play as a teenage boy named Dug who lives in Mundania, the non-magical alternative to Xanth; Mundania, that is to say, is our world. As a hater of computer games, Dug has made a bet with his friend Ed that he won’t like one called Companions of Xanth. If his faith in the pointlessness of the gaming hobby holds true, he wins Ed’s motorcycle; if this game proves an exception to the rule, Ed gets a date with Dug’s estranged girlfriend. (“But what if she doesn’t want to go out with you?” asks Dug. “That’s a technicality we’ll deal with at the appropriate time,” answers Ed. Okay, the game isn’t totally without creepy elements…)
So, the earliest stages of the real Companions of Xanth require you to open this virtual Companions of Xanth and boot it up on your in-game computer. (Confused yet?) After some preliminaries, you get sucked through the monitor screen into Xanth. (That is to say, your character in the game you’re playing gets sucked through the monitor of the computer running the game he’s playing.)
Companions of Xanth resoundingly fails to put its best foot forward. Just as you’re about to enter Xanth and get started properly, it lives up to its name by asking you to choose a companion for your adventures from four possibilities. A nice little addition, this, you think to yourself, as you choose the companion that looks most interesting and entertaining. This must be a way to make the game replayable, a la Maniac Mansion. But nope! Think again! There’s just one “correct” companion to be chosen. Naturally, this being a Piers Anthony creation, that companion is the nubile serpent chick named Nada Naga. If you make the supremely non-Xanthian move of choosing any of the others, the game lets you play for a few minutes longer, then dead-ends you; it’s time to restart or restore, my friend.
Such a colossal design fail is downright bizarre to see in a Legend game of this vintage. It struck me immediately that it must be an artifact of an earlier, more ambitious plan to offer four genuinely divergent experiences — a plan which got chopped down to size once the realities of time, labor, and money came home to roost. Unfortunately, neither Bob Bates nor Mike Verdu can recall what might have gone down here, and I haven’t been able to locate Michael Lindner. So, all we can do is speculate.
After a beginning like that, whatever the reason for its existence, one goes into the rest of Companions of Xanth decidedly nervous, wondering if it’s going to be one of those sorts of games. Thankfully, it isn’t; the aforementioned is its only real design pratfall. After it gets going properly, it evinces the meticulous commitment to fair play which the Legend brand was coming to stand for by 1993.
Much of the humor, and with it many of the puzzles, revolve around puns and wordplay, long a Xanth staple. Mind you, Companions of Xanth isn’t as clever as something like Infocom’s Nord and Bert Couldn’t Make Head or Tail of It in this respect. It is, after all, implicitly written for a less sophisticated audience, yet it can still be good fun in its own right. You’ll spend time here battling a censor ship, finding a way to get beyond the pail, and visiting the Fairy Nuff. Sometimes the puns go a little too far out on a limb — the “com-pewter,” a computer made out of pewter, is one example — but the puzzles themselves are always comprehensible, which is the most important thing. Only those who struggle a bit with idiomatic English in general, such as non-native speakers, are likely to have any major problems solving the game.
Companions of Xanth as a whole is as lightweight as the novels which inspired it. If it never quite dazzles, it never annoys overmuch either, at least once you get past that first hump, and it might even prompt a chuckle or two. It’s a sort of baseline standard game for Legend, never really managing to distinguish itself in either a positive or a negative way. Yet its interface did mark it as something truly new for the four-year-old company at the time of its release, and as such is perhaps worthy of more attention than the game it supports.
As I noted in my last article, in reality the parser disappeared more gradually than suddenly from Legend games; the full run of titles the company released between 1990 and 1993 shows a slow marginalization of the parser, until finally, beginning with Companions of Xanth, it just wasn’t there at all anymore. In fact, this same evolutionary process could be said not to have really ended even here. Although the move to point-and-click has forced the loss of that sense of infinite possibility that so delights people like me and Bob Bates, what remains here is about as text-adventure-like an interface as can be imagined under the new paradigm. Indeed, it smacks of the old ICOM Simulations interface from the mid-1980s, the industry’s earliest serious attempt to recast the classic adventure game in this mold, more so than it does the contemporary interfaces of Sierra and LucasArts. In a sense, one might even say, the parser still exists in this game. It’s just that you now build your imperative sentences with the mouse instead of the keyboard. Such an approach had always been an option in the earlier Legend games; now, it merely becomes a requirement.
Given that the screenshots of the interface included with this article are all but self-explanatory, I won’t dwell too long on its mechanics. Clicking a hotspot in the onscreen picture will highlight a default verb in the list on the left of the screen. Simply clicking on the hotspot again at this point will take that action, but you can also choose another verb from the list, if you wish. Many objects also have unique verbs which show up below the standard list when they’re highlighted; a rock, for example, might have an additional “throw” verb. And indirect objects are connected to certain actions; throwing the rock will require a third click, specifying what to throw it at. As you’re doing all of this, you see your command being built right there on the screen, just as if you were typing it in via a parser. It’s even possible to specify a verb first, then choose the object it acts upon, although this approach is of limited utility in that it doesn’t give you access to the special verbs connected to some objects.
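For the programmers in the audience, here’s a rough idea of how such a click-built sentence might be assembled behind the scenes. This is strictly a hypothetical sketch of mine; every name in it is invented, and it makes no claim to reflect Legend’s actual engine code.

```python
# A hypothetical sketch of assembling a parser-style command from mouse clicks,
# in the spirit of the interface described above. All names are my own.

STANDARD_VERBS = ["look at", "get", "open", "move", "talk to"]

class CommandBuilder:
    def __init__(self):
        self.verb = None
        self.noun = None
        self.indirect = None

    def click_hotspot(self, obj, special_verbs=()):
        # If the pending verb needs a target ("throw rock at..."), this third
        # click supplies it; a repeat click on the selected object executes.
        if self.verb == "throw" and self.noun and self.indirect is None:
            self.indirect = obj
            return self.execute()
        if self.noun == obj:
            return self.execute()
        self.noun = obj
        self.verb = self.verb or "look at"   # the highlighted default verb
        self.menu = STANDARD_VERBS + list(special_verbs)
        return self.preview()

    def click_verb(self, verb):
        self.verb = verb                     # override the default from the list
        return self.preview()

    def preview(self):
        # The sentence builds on screen just as if it were typed into a parser.
        parts = [self.verb, self.noun]
        if self.indirect:
            parts += ["at", self.indirect]
        return " ".join(p for p in parts if p)

    def execute(self):
        sentence = "> " + self.preview()
        self.__init__()                      # clear the line for the next command
        return sentence

# Click the rock, pick its special "throw" verb, then click the window:
builder = CommandBuilder()
builder.click_hotspot("rock", special_verbs=("throw",))  # "look at rock"
builder.click_verb("throw")                              # "throw rock"
print(builder.click_hotspot("window"))                   # "> throw rock at window"
```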
All of which is to say that the new interface truly does represent another evolutionary rather than revolutionary technological step for Legend. What we have here is not a whole new game engine, but rather the old one with a different front end. Once it gets past the stage of interpreting the player’s command, there’s less difference than you might expect between this Legend game and those that came before it.
This fact is most clearly illustrated in the screenshots by that little “Undo” button in the corner, something you would never — could never — see in a Sierra or LucasArts game. For those games run in real time, while Companions of Xanth, like a text adventure or an ICOM game, is still turn-based. This distinction has an enormous impact on the character of the game, reaching far beyond the welcome ability to instantly undo your last action when you get yourself killed or otherwise try something unfortunate. Legend games even after the parser went away have a more relaxed, contemplative, literary sensibility than the works of Legend’s peers. There’s still quite a lot of text here, and that text is still treated with unusual care and respect. It isn’t hard to divine, after playing around with one of their point-and-click games for just a few minutes, why Legend became the go-to studio for literary adaptations during the 1990s. While it had proved possible to take the type-in parser out of Legend’s engine, it was more difficult to take the literary spirit of the text adventure out of the company’s collective design aesthetic.
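To return to that “Undo” button for a moment: the turn-based structure is exactly what makes it cheap to offer. Because the world changes only at discrete turn boundaries, an engine can snapshot its entire state once per turn and roll back on demand, as in this purely illustrative sketch, which of course makes no claim to be Legend’s actual implementation.

```python
# Why turn-based play makes "Undo" nearly free: snapshot the whole world state
# at each discrete turn boundary, then restore on demand.

import copy

class TurnEngine:
    def __init__(self, world):
        self.world = world        # any plain game state: dicts, lists, etc.
        self.history = []

    def take_turn(self, action):
        self.history.append(copy.deepcopy(self.world))   # snapshot first
        action(self.world)                               # then mutate the state

    def undo(self):
        if self.history:
            self.world = self.history.pop()              # roll back one turn

# A real-time engine has no such clean boundaries: its state mutates
# continuously, so there is no single moment that represents "one move ago."
```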
That spirit held true even when Legend was otherwise embracing the multimedia era with gusto. Although Eric the Unready and Gateway 2: Homeworld had both been released in CD-ROM versions prior to Companions of Xanth, those were mere repackagings of the floppy-disk-based versions into a more convenient format. But when the subject of this article appeared on CD-ROM about six months after its original floppy-based release, it sported voice acting for the first time in a Legend title. And yet even here the voice acting only covered words said by the characters you met; there was no global narrator. Such an approach felt very much in keeping with that overarching literary sensibility that so marked Legend’s work. In this game, and in the next several Legend games to come, you were still expected to do a lot of reading for yourself.
For the record, the voice acting that is to be found in the CD-ROM Companions of Xanth is excellent — an impressive feat considering that this was Legend’s first foray into such a thing. Even here, their first time out, they were wise enough to employ professional actors recruited from the local union for same and recorded at a professional sound studio. It’s obvious that the actors had fun with their roles; my favorite part of the whole game might just be the blooper reel of outtakes which plays over the closing credits.
In the end, though, I find myself torn on the subject of Companions of Xanth in a way I can’t recall being for any other game I’ve written about here. If it existed in a vacuum, shorn of its association with Piers Anthony, I would call it a fun, frothy little fantasy romp, a solid debut for a new interface which retains more of the spirit of the old than we might have dared to hope for. And I would be happy enough to leave it at that. But, even as I believe it’s wrong to judge art on external factors in the vast majority of cases, there are exceptions, and I’m not sure this isn’t one of them.
I don’t blame Legend in any sense for making this game. Many of the more worrisome aspects of Anthony’s oeuvre become obvious only in the aggregate; most or all of those who worked on this game at Legend doubtless believed that they were merely capitalizing on a popular, harmless series of lightweight fantasy books. And yet I do find myself wishing that they had chosen some other series, just as I wish any current readers of Xanth, young or old, would do likewise. In my role of critic, I can tell you that Companions of Xanth is a (mostly) well-constructed game that’s relatively inoffensive in itself. But should you play it? That is, as always — but perhaps here even more so than usual — something you’ll have to decide for yourself.
(Sources: the Piers Anthony books Bio of an Ogre, How Precious was that While, Letters to Jenny, Macroscope, A Spell for Chameleon, The Color of Her Panties, Firefly, and Demons Don’t Dream; Computer Gaming World of July 1993 and March 1994; Questbusters 108. My thanks go to Bob Bates and Mike Verdu for talking with me about this period of Legend’s history — but I must emphatically state that all of the opinions expressed herein, especially of Piers Anthony and his work, are mine alone.
Companions of Xanth has not been re-released as a digital edition, doubtless owing to the complications involved with licensed titles. I’d prefer not to host it here due to my distaste for Piers Anthony, but you can find it elsewhere without too much trouble.)
The Compiled Life Wisdom of Piers Anthony, as Found in His Autobiographies
Writers like Roger Zelazny and Samuel Delany got awards because of their sophistication as writers, which sophistication I do not question, but I was regarded from the outset as an entertainment writer. What I was doing was too complex and subtle, not only for others to understand, but for them even to realize that it existed.
The best guide for a book to avoid is an award winner.
I worried that I would not be able to write fantasy well without Lester del Rey’s editing. But instead it was like a burden lifting from my shoulders. Suddenly I was free of oppressive editing.
Then Lester tried to cut the entire Author’s Note from the fourth Incarnations novel, Wielding a Red Sword. He said it was too long, and anyway, they were in the business of publishing fiction, not nonfiction. This was the Note in which I described my computerization — I had until then written my novels in pencil and then typed them with a manual machine, so it was a significant step for me.
When I read Isaac Asimov’s massive two-volume autobiography I found it interesting, but concluded that the minutia of daily existence are seldom worth recording for posterity.
I dumped SFWA, and have remained hostile to it since. There is evidence that some of its members are still spreading falsehoods about me. If ever push comes to shove, I will put it out of business. Because today I have the resources to sue. All I need is the pretext.
I, like most boys, would have been capable of orgasm at any time in childhood, had I known how to masturbate.
A formula I invented for explaining the ways of publishers: TPB = SOD. What does it mean? Typical Publisher Behavior is Shitting On Dreams.
So are publishers really as rapacious and idiotic as they seem? Yes and no. Just as the intelligence and conscience of a lynch mob may be less than that of any individual person within it, so may the net savvy of a publisher be below that of any of its components.
I feel like a beautiful woman. That is, a lovely woman is pursued by many men — but when she mentions commitment, most of them vanish. Some vanish when they find they can’t get her into bed on the first date. Others vanish after they do get her into bed. So she becomes cynical; it is evident that most of those ardent suitors are insincere; all they want is her body for a night, rather than an enduring relationship, unless she happens to be rich. All the publishers really wanted from me was my best-selling series, Xanth — and those who lost it and those who got it tended to vanish as far as my other novels went.
I pondered, and my agent pondered, and it was my wife, who evidently understands the situation of beautiful women, who came up with an effective notion: link the one to the other. Make a package deal. So when the time for a new multi-novel Xanth contract came up, we put it to TOR: double or nothing. If this man wanted to get this woman in bed again, there would have to be marriage — though TOR’s chief editor is female, and I’m male.
[My wife and I] have what I call a conventional marriage: I earn the money, she spends it. In fact she keeps accounts and does the taxes, which are complicated. I decide on the big things, like the significance of world events, and she decides the small things, like everything else. I’m glad I married her, and believe that I would not be where I am today without her. But if I should find myself alone, I would then consider more carefully what else offers, with strong cautions from my life experience. Meanwhile I have a small category of correspondents I treat politely: those who profess or imply love for me.
Women of any age are interesting, and as a general rule, the younger a woman is, the more interesting she is, because natural selection dictates that the man who controls the greatest part of a woman’s fertile years will have the most children. A girl of twelve may have breasts and be a young woman in appearance; she is sexually desirable, regardless of law or custom. A girl of eleven may lack the breasts but be of similar general appearance, and her clothing masks her lack of maturity. So it is evident that some men aren’t concerned about the distinction, and go for the vagina regardless.
I have an insatiable curiosity about the nature of the universe and mankind’s place in it, and my profession of writing allows me to explore it all, seeking answers. I have fathomed a number of things to my satisfaction before they were clarified by the scientists.
Sometimes I’m stupid. This is annoying when I’m taking an IQ test.