
Doing Windows, Part 1: MS-DOS and Its Discontents

Has any successful piece of software ever deserved its success less than the benighted, unloved exercise in minimalism that was MS-DOS? The program that started its life as a stopgap under the name of “The Quick and Dirty Operating System” at a tiny, long-forgotten hardware maker called Seattle Computer Products remained a stopgap when it was purchased by Bill Gates of Microsoft and hastily licensed to IBM for their new personal computer. Archaic even when the IBM PC shipped in October of 1981, MS-DOS immediately sent half the software industry scurrying to come up with something better. Yet actually arriving at a viable replacement would absorb a decade’s worth of disappointment and disillusion, conflict and compromise — and even then the “replacement” would still have to be built on top of the quick-and-dirty operating system that just wouldn’t die.

This, then, is the story of that decade, and of how Microsoft at the end of it finally broke Windows into the mainstream.


When IBM belatedly turned their attention to the emerging microcomputer market in 1980, it was both a case of bold new approaches and business-as-usual. In the willingness they showed to work together with outside partners on the hardware and especially the software front, the IBM PC was a departure for them. In other ways, though, it was a continuation of a longstanding design philosophy.

With the introduction of the System/360 line of mainframes back in 1964, IBM had in many ways invented the notion of a computing platform: a nexus of computer models that could share hardware peripherals and that could all run the same software. To buy an IBM system thereafter wasn’t so much to buy a single computer as it was to buy into a rich computing ecosystem. Long before the saying went around corporate America that “no one ever got fired for buying Microsoft,” the same was said of IBM. When you contacted them, they sent a salesman or two out to discuss your needs, desires, and budget. Then, they tailored an installation to suit and set it up for you. You paid a bit more for an IBM, but you knew it was safe. System/360 models were available at prices ranging from $2500 per month to $115,000 per month, with the latter machine a thousand times more powerful than the former. Their systems were thus designed, as all their sales literature emphasized, to grow with you. When you needed more computer, you just contacted the mother ship again, and another dark-suited fellow came out to help you decide what your latest needs really were. With IBM, no sharp breaks ever came in the form of new models which were incompatible with the old, requiring you to remake from scratch all of the processes on which your business depended. Progress in terms of IBM computing was a gradual evolution, not a series of major, disruptive revolutions. Many a corporate purchasing manager loved them for the warm blanket of safety, security, and compatibility they provided. “Once a customer entered the circle of 360 users,” noted IBM’s president Thomas Watson Jr., “we knew we could keep him there a very long time.”

The same philosophy could be seen all over the IBM PC. Indeed, it would, as much as the IBM name itself, make the first general-purpose IBM microcomputer the accepted standard for business computing on the desktop, just as were their mainframe lines in the big corporate data centers. You could tell right away that the IBM PC was both built to last and built to grow along with you. Opening its big metal case revealed a long row of slots just waiting to be filled, thereby transforming it into exactly the computer you needed. You could buy an IBM PC with one or two floppy drives, or more, or none; with a color or a monochrome display card; with anywhere from 16 K to 256 K of RAM.

But the machine you configured at time of purchase was only the beginning. Both IBM and a thriving aftermarket industry would come to offer heaps more possibilities in the months and years that followed the release of the first IBM PC: hard drives, optical drives, better display cards, sound cards, ever larger RAM cards. And even when you finally did bite the bullet and buy a whole new machine with a faster processor, such as 1984’s PC/AT, said machine would still be able to run the same software as the old, just as its slots would still be able to accommodate hardware peripherals scavenged from the old.

Evolution rather than revolution. It worked out so well that the computer you have on your desk or in your carry-on bag today, whether you prefer Windows, OS X, or Linux, is a direct, lineal descendant of the microcomputer IBM released more than 35 years ago. Long after IBM themselves got out of the PC game, and long after sexier competitors like the Commodore Amiga and the first and second generation Apple Macintosh have fallen by the wayside, the beast they created shambles on. Its long life is not, as zealots of those other models don’t hesitate to point out, down to any intrinsic technical brilliance. It’s rather all down to the slow, steady virtues of openness, expandability, and continuity. The timeline of what’s become known as the “Wintel” architecture in personal computing contains not a single sharp break with the past, only incremental change that’s been carefully managed — sometimes even technologically compromised in comparison to what it might have been — so as not to break compatibility from one generation to the next.

That, anyway, is the story of the IBM PC on the hardware side, and a remarkable story it is. On the software side, however, the tale is more complicated, thanks to the failure of IBM to remember the full lesson of their own System/360.

At first glance, the story of the IBM PC on the software side seems to be just another example of IBM straining to offer a machine that can be made to suit every potential customer, from the casual home user dabbling in games and BASIC to the most rarefied corporate purchaser using it to run mission-critical applications. Thus when IBM announced the computer, four official software operating paradigms were also announced. One could use the erstwhile quick-and-dirty operating system that was now known as MS-DOS;[1] one could use CP/M, the standard for much of pre-IBM business microcomputing, from which MS-DOS had borrowed rather, shall we say, extensively (remember the latter’s original name?); one could use an innovative cross-platform environment, developed by the University of California San Diego’s computer-science department, that was based around the programming language Pascal; or one could choose not to purchase any additional operating software at all, instead relying on the machine’s built-in ROM-hosted Microsoft BASIC environment, which wasn’t at all dissimilar from those the same company had already provided for many or most of the other microcomputers on the market.

In practice, though, this smorgasbord of possibilities only offered one remotely appetizing entrée in the eyes of most users. The BASIC environment was really suited only to home users wanting to tinker with simple programs and save them on cassettes, a market IBM had imagined themselves entering with their first microcomputer but had in reality priced themselves out of. The UCSD Pascal system was ahead of its time with its focus on cross-platform interoperability, accomplished using a form of byte code that would later inspire the Java virtual machine, but it was also rather slow, resource-hungry, and, well, just kind of weird — and it was quite expensive as well. CP/M ought to have been poised for success on the new machine given its earlier dominance, but its parent company Digital Research was unconscionably late making it available for the IBM PC, taking until well after the machine’s October 1981 launch to get it ported from the Zilog Z-80 microprocessor to the Intel architecture of the IBM PC and its successor models — and when CP/M finally did appear it was, once again, expensive.

That left MS-DOS, which worked, was available, and was fairly cheap. As corporations rushed out to purchase the first safe business microcomputer at a pace even IBM had never anticipated, MS-DOS relegated the other three solutions to a footnote in computing history. Nobody’s favorite operating system, it was about to become the most popular one in the world.

The System/360 line that had made IBM the 800-pound gorilla of large-scale corporate data-processing had used an operating system developed in-house by them with an eye toward the future every bit as pronounced as that evinced by the same line’s hardware. The emerging IBM PC platform, on the other hand, had gotten only half of that equation down. MS-DOS was locked into the 1 MB address space of the Intel 8088, allowing any computer on which it ran just 640 K of RAM at the most; the remaining 384 K of the address space was reserved for video memory, the ROM BIOS, and expansion cards. When newer Intel processors with larger address spaces began to appear in new IBM models as early as 1984, software and hardware makers and ordinary users alike would be forced to expend huge amounts of time and effort on ugly, inefficient hacks to get around the problem.

Infamous though the 640 K barrier would become, memory was just one of the problems that would dog MS-DOS programmers throughout the operating system’s lifetime. True to its post-quick-and-dirty moniker of the Microsoft Disk Operating System, most of its 27 function calls involved reading and writing to disks. Otherwise, it allowed programmers to read the keyboard and put text on the screen — and not much of anything else. If you wanted to show graphics or play sounds, or even just send something to the printer, the only way to do it was to manually manipulate the underlying hardware. Here the huge amount of flexibility and expandability that had been designed into the IBM PC’s hardware architecture became a programmer’s nightmare. Let’s say you wanted to put some graphics on the screen. Well, a given machine might have an MDA monochrome video card or a CGA color card, or, soon enough, a monochrome Hercules card or a color EGA card. You the programmer had to build into your program a way of figuring out which one of these your host had, and then had to write code for dealing with each possibility on its own terms.
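To make the above concrete: the sketch below, written in the dialect of C supported by DOS-era compilers such as Turbo C, shows just the first step of that dance, asking the BIOS whether the machine has a monochrome or a color adapter. It’s my own illustration rather than code from any period program, and a real application would still need further vendor-specific probing to tell a Hercules from a plain MDA or to recognize an EGA’s extra capabilities at all:

    #include <dos.h>
    #include <stdio.h>

    /* Ask the BIOS's equipment-list service (INT 11h) what kind of
       display is attached. Bits 4-5 of the returned word are 11 for
       an 80-column monochrome adapter; EGA-class cards with their own
       BIOS report 00 here and need further INT 10h probing, as does
       telling a Hercules apart from a plain MDA. */
    int monochrome_adapter(void)
    {
        union REGS r;
        int86(0x11, &r, &r);
        return ((r.x.ax >> 4) & 0x03) == 0x03;
    }

    int main(void)
    {
        if (monochrome_adapter())
            printf("Monochrome adapter: use the MDA/Hercules code path\n");
        else
            printf("Color adapter: use the CGA/EGA code path\n");
        return 0;
    }

And that was only detection; the program then had to carry a separate drawing routine for every adapter it might have found.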

An example of how truly ridiculous things could get is provided by WordPerfect, the most popular business word processor from the mid-1980s on. WordPerfect Corporation maintained an entire staff of programmers whose sole job function was to devour the technical specifications and command protocols of each new printer that hit the market and write drivers for it. Their output took the form of an ever-growing pile of disks that had to be stuffed into every WordPerfect box, even though only one of them would be of any use to any given buyer. Meanwhile another department had to deal with the constant calls from customers who had purchased a printer for which they couldn’t find a driver on their extant mountain of disks, situations that could be remedied in the era before widespread telecommunications only by shipping off yet more disks. It made for one hell of a way to run a software business; at times the word processor itself could almost feel like an afterthought for WordPerfect Printer Drivers, Inc.

But the most glaringly obvious drawback to MS-DOS stared you in the face every time you turned on the computer and were greeted with that blinking, cryptic “C:\>” prompt. Hackers might have loved the command line, but it was a nightmare for a secretary or an executive who saw the computer only as an appliance. MS-DOS contrived to make everything more difficult through its sheer primitive minimalism. Think of the way you work with your computer today. You’re used to having several applications open at once, used to being able to move between them and cut and paste bits and pieces from one to the other as needed. With MS-DOS, you couldn’t do any of this. You could run just one application at a time, which would completely fill the screen. To do something else, you had to shut down the application you were currently using and start another. And if what you were hoping to do was to use something you had made in the first application inside the second, you could almost always forget about it; every application had its own proprietary data formats, and MS-DOS didn’t provide any method of its own of moving data from one to another.

Of course, the drawbacks of MS-DOS spelled opportunity for those able to offer ways to get around them. Thus Lotus Corporation became one of the biggest software success stories of the 1980s by making Lotus 1-2-3, an unwieldy colossus that integrated a spreadsheet, a database manager, and a graph- and chart-maker into a single application. People loved the thing, bloated though it was, because all of its parts could at least talk to one another.

Other solutions to the countless shortcomings of MS-DOS, equally inelegant and partial, were rampant by the time Lotus 1-2-3 hit it big. Various companies published various types of hacks to let users keep multiple applications resident in memory, switching between them using special arcane key sequences. Various companies discussed pacts to make interoperable file formats for data transfer between applications, although few of them got very far. Various companies made a cottage industry out of selling pre-packaged printer drivers to other developers for use in their applications. People wrote MS-DOS startup scripts that brought up easy-to-choose-from menus of common applications on bootup, thereby insulating timid secretaries and executives alike from the terrifying vagueness of the command line. And everybody seemed to be working a different angle when it came to getting around the 640 K barrier.

All of these bespoke solutions constituted a patchwork quilt which the individual user or IT manager would have to stitch together for herself in order to arrive at anything like a comprehensive remedy for MS-DOS’s failings. But other developers had grander plans, and much of their work quickly coalesced around various forms of the graphical user interface. Initially, this fixation may sound surprising if not inexplicable. A GUI built using a mouse, menus, icons, and windows would seem to fix only one of MS-DOS’s problems, that being its legendary user-unfriendliness. What about all the rest of its issues?

As it happens, when we look closer at what a GUI-based operating environment does and how it does it, we find that it must or at least ought to carry with it solutions to MS-DOS’s other issues as well. A windowed environment ideally allows multiple applications to be open at one time, if not actually running simultaneously. Being able to copy and paste pieces from one of those open applications to another requires interoperable data formats. Running or loading multiple applications also means that one of them can’t be allowed to root around in the machine’s innards indiscriminately, lest it damage the work of the others; this, then, must mark the end of the line for bare-metal programming, shifting the onus onto the system software to provide a proper layer of high-level function calls insulating applications from a machine’s actual or potential hardware. And GUIs, given that they need to do all of the above and more, are notoriously memory-hungry, which obligated many of those who made such products in the 1980s to find some way around MS-DOS’s memory constraints. So, a GUI environment proves to be much, much more than just a cutesy way of issuing commands to the computer. Implementing one on an IBM PC or one of its descendants meant that the quick-and-dirty minimalism of MS-DOS had to be chucked forever.

Some casual histories of computing would have you believe that the entire software industry was rigidly fixated on the command line until Steve Jobs came along to show them a better way with the Apple Macintosh, whereupon they were dragged kicking and screaming into computing’s necessary future. Such histories generally do acknowledge that Jobs himself got the GUI religion after a visit to the Xerox Palo Alto Research Center in December of 1979, but what tends to get lost is the fact that he was hardly alone in viewing PARC’s user-interface innovations as the natural direction for computing to go in the more personal, friendlier era of high technology being ushered in by the microcomputer. Indeed, by 1981, two years before a GUI made its debut on an Apple product in the form of the Lisa, seemingly everyone was already talking about them, even if the acronym itself had yet to be invented. This is not meant to minimize the hugely important role Apple really would play in the evolution of the GUI; as we’ll see to a large extent in the course of this very series of articles, they did much original formative work that has made its way into the computer you’re probably using to read these words right now. It’s rather just to say that the complete picture of how the GUI made its way to the personal computer is, as tends to happen when you dig below the surface of any history, more variegated than a tidy narrative of “A caused B which caused C” allows for.

In that spirit, we can note that the project destined to create the MS-DOS world’s first GUI was begun at roughly the same time that a bored and disgruntled Steve Jobs over at Apple, having been booted off the Lisa project, seized control of something called the Macintosh, planned at the time as an inexpensive and user-friendly computer for the home. This other pioneering project, also started during the first quarter of 1981, was the work of a short-lived titan of business software called VisiCorp.

VisiCorp had been founded by one Dan Fylstra under the name of Personal Software in 1978, at the very dawn of the microcomputer age, as one of the first full-service software publishers, trafficking mostly in games which were submitted to him by hobbyists. His company became known for their comparatively slick presentation in a milieu that was generally anything but; MicroChess, one of their first releases, was quite probably the first computer game ever to be packaged in a full-color box rather than a Ziploc baggie. But their course was changed dramatically the following year when a Harvard MBA student named Dan Bricklin contacted Fylstra with a proposal for a software tool that would let accountants and other businesspeople automate most of the laborious financial calculations they were accustomed to doing by hand. Fylstra was intrigued enough to lend the microcomputer-less Bricklin one of his own Apple IIs — whereupon, according to legend at least, the latter proceeded to invent the electronic spreadsheet over the course of a single weekend. He hired a more skilled programmer named Bob Frankston and formed a company called Software Arts to develop that rough prototype into a finished application, which Fylstra’s Personal Software published in October of 1979.

Up to that point, early microcomputers like the Apple II, Radio Shack TRS-80, and Commodore PET had been a hard sell as practical tools for business — even for their most seemingly obvious business application of all, that of word processing. Their screens could often only display 40 columns of big, blocky characters, often only in upper case — about as far away from the later GUI ideal of “what you see is what you get” as it was possible to go — while their user interfaces were arcane at best and their minuscule memories could only accommodate documents of a few pages in length. Most potential business users took one look at the situation, added on the steep price tag for it all, and turned back to their typewriters with a shrug.

VisiCalc, however, was different. It was so clearly, manifestly a better way to do accounting that every accountant Fylstra showed it to lit up like a child on Christmas morning, giggling with delight as she changed a number here or there and watched all of the other rows and columns update automagically. VisiCalc took off like nothing the young microcomputer industry had ever seen, landing tens of thousands of the strange little machines in corporate accounting departments. As the first tangible proof of what personal computing could mean to business, it prompted people to begin asking why IBM wasn’t a part of this new party, doing much to convince the latter to remedy that absence by making a microcomputer of their own. It’s thus no exaggeration to say that the entire industry of business-oriented personal computing was built on the proof of concept that was VisiCalc. It would sell 500,000 copies by January of 1983, an absolutely staggering figure for that time. Fylstra, seeing what was buttering his bread, eventually dropped all of the games and other hobbyist-oriented software from his catalog and reinvented Personal Software as VisiCorp, the first major publisher of personal-computer business applications.

But all was not quite as rosy as it seemed at the new VisiCorp. Almost from the moment of the name change, Dan Fylstra found his relationship with Dan Bricklin growing strained. The latter was suspicious of his publisher’s rebranding themselves in the image of his intellectual property, feeling they had been little more than the passive beneficiaries of his brilliant stroke. This point of view was by no means an entirely fair one. While it may have been true that Fylstra had been immensely lucky to get his hands on Bricklin’s once-in-a-lifetime innovation, he’d also made it possible by loaning Bricklin an Apple II in the first place, then done much to make VisiCalc palatable for corporate America through slick, professional packaging and marketing that projected exactly the right conservative, businesslike image, consciously eschewing the hippie ethos of the Homebrew Computer Club. Nevertheless, Bricklin, perhaps a bit drunk on all the praise of his genius, credited VisiCorp’s contribution to VisiCalc’s success but little. And so Fylstra, nervous about continuing to stake his entire company on Bricklin, set up an internal development team to create more products for the business market.

By the beginning of 1981, the IBM PC project which VisiCalc had done so much to prompt was in full swing, with the finished machine due to be released before the end of the year. Thanks to their status as publisher of the hottest application in business software, VisiCorp had been taken into IBM’s confidence, one of a select number of software developers and publishers given access to prototype hardware in order to have products ready to go on the day the new machine shipped. It seems that VisiCorp realized even at this early point how underwhelming the new machine’s various operating paradigms were likely to be, for even before they had actual IBM hardware to hand, they started mocking up the GUI environment that would become known as Visi On using Apple II and III machines. Already at this early date, it reflected a real, honest, fundamental attempt to craft a more workable model for personal computing than the nightmare that MS-DOS alone could be. William Coleman, the head of the development team, later stated in reference to the project’s founding goals that “we wanted users to be able to have multiple programs on the screen at one time, ease of learning and use, and simple transfer of data from one program to another.”

Visi On seemed to have huge potential. When VisiCorp demonstrated an early version, albeit far later than they had expected to be able to, at a trade show in December of 1982, Dan Fylstra remembers a rapturous reception, “competitors standing in front of [the] booth at the show, shaking their heads and wondering how the company had pulled the product off.” It was indeed an impressive coup; well before the Apple Macintosh or even Lisa had debuted, VisiCorp was showing off a full-fledged GUI environment running on hardware that had heretofore been considered suitable only for ugly old MS-DOS.

Still, actually bringing a GUI environment to market and making a success out of it was a much taller order than it might have first appeared. As even Apple would soon be learning to their chagrin, any such product trying to make a go of it within the increasingly MS-DOS-dominated culture of mainstream business computing ran headlong into a whole pile of problems which lacked clearly best solutions. Visi On, like almost all of the GUI products that would follow for the IBM hardware architecture, was built on top of MS-DOS, using the latter’s low-level function calls to manage disks and files. This meant that users could install it on their hard drive and pop between Visi On and vanilla MS-DOS as the need arose. But a much thornier question was that of running existing MS-DOS applications within the Visi On environment. Those which assumed they had full control of the system — which was practically all of them, because why wouldn’t they? — would flame out as soon as they tried to directly access some piece of hardware that was now controlled by Visi On, or tried to put something in some specific place inside what was now a shared pool of memory, or tried to do any number of other now-forbidden things. VisiCorp thus made the hard decision to not even try to get existing MS-DOS applications to run under Visi On. Software developers would have to make new, native applications for the system; Visi On would effectively be a new computing platform unto itself.

This decision was questionable in commercial if not technical terms, given how hard it must be to get a new platform accepted in an MS-DOS-dominated marketplace. But VisiCorp then proceeded to make the problem even worse. It would only be possible to program Visi On, they announced, after purchasing an expensive development kit and installing it on a $20,000 DEC PDP-11 minicomputer. They thus opted for an approach similar to one Apple was opting for with the Lisa: to allow that machine to be programmed only by yoking it up to a second Lisa. In thus betraying the original promise of the personal computer as an anything machine which ordinary users could program to do their will, both Visi On and the Lisa operating system arguably removed their hosting hardware from that category entirely, converting it into a closed electronic appliance more akin to a game console. Taxonomical debates aside, the barriers to entry even for one who wished merely to use Visi On to run store-bought applications were almost as steep: when this first MS-DOS-based GUI finally shipped on December 16, 1983, after a long series of postponements, it required a machine with 512 K of memory and a hard drive to run and cost more than $1000 to buy.

Visi On was, as the technology pundits like to say, “ahead of the hardware market.” In quite a number of ways it was actually far more ambitious than what would emerge a month or so after it as the Apple Macintosh. Multiple Visi On applications could be open at the same time (although they didn’t actually run concurrently), and a surprisingly sophisticated virtual-memory system was capable of swapping out pages to hard disk if software tried to allocate more memory than was physically available on the computer. Similar features wouldn’t reach MacOS until 1987’s System 5 and 1991’s System 7 respectively.

In the realm of usability, however, Visi On unquestionably fell down in comparison to Apple’s work. The user interfaces for the Lisa and the Macintosh made almost all the right choices right from the beginning, expanding upon the work done at Xerox PARC in all the right ways. Many of the choices made by VisiCorp, on the other hand, feel far more dubious today — and, one has to believe, not just out of the contempt bred by all those intervening decades of user interfaces modeled on Apple’s. Consider the task of moving and sizing windows on the screen, which was implemented so elegantly on the original Lisa and Macintosh that it’s been changed not at all in all the decades since. While Visi On too allows windows to be sized and placed where you will, and allows them to overlay one another — something by no means true of all of the MS-DOS GUI systems that would follow — doing so is a clumsy process involving picking options out of menus rather than simply dragging title bars or sizing widgets. In fact, Visi On uses no icons whatsoever. For anyone still enamored with the old saw that Apple just ripped off the Xerox PARC interface in its entirety and stuck it on the Lisa and Mac, Visi On, being much more slavishly based on the PARC model, provides an instructive demonstration of how far the likes of the Xerox Alto still were from the intuitive ease of Apple’s interface.

A Quick Tour of Visi On


With mice still exotic creatures, VisiCorp provided their own to work with Visi On. Many other early GUI-makers, Microsoft among them, would follow their lead.

Visi On looks like this upon booting up on an original IBM PC with 640 K of memory and a CGA video card, running in high-resolution monochrome mode at 640×200. “Services” is Visi On’s terminology for installed applications. The applications you see listed here, all provided by VisiCorp themselves, are the only ones that would ever exist, thanks to Visi On’s complete commercial failure.

We’ve started up a spreadsheet, a graphing application, and a word processor at the same time. These don’t actually run concurrently, as they would under a true multitasking operating system, but are visible onscreen in their separate windows, becoming active when we click them. (Something similar would not have been possible under MacOS prior to 1987.)

Although Visi On does sport windows that can be sized and placed anywhere and can overlap one another, arranging them is made extremely tedious by its lack of any concept of mouse-dragging; the mouse can only be used for single clicks. So, you have to click the “Frame” menu option and see its instructions through step by step. Note also the lack of pull-down menus, another of Apple’s expansions upon the work done at Xerox PARC. Menus here are just one-shot commands, akin to what a modern GUI user would call a button.

Fortunately, you can make a window full-screen with just a couple of clicks. Unfortunately, you then have to laboriously re-“Frame” it when you want to shrink it again; it doesn’t remember where it used to be.

The lack of a mouse-drag affordance makes the “Transfer” function — Visi On’s version of copy-and-paste — extremely tedious.

And, as with most things in Visi On, transferring data is also slow. Moving that little snippet of text from the word processor to the spreadsheet took about ten seconds.

On the plus side, Visi On sports a help system that’s crazily comprehensive for its time — much more so than the one that would ship with MacOS or, for that matter, Microsoft Windows for quite some years.

As if it didn’t have enough intrinsic problems working against it, extrinsic ones also contrived to undo Visi On in the marketplace. By the time it shipped, VisiCorp was a shadow of what they had so recently been. VisiCalc sales had collapsed over the past year, going from nearly 40,000 units in December of 1982 alone to fewer than 6000 units in December of 1983 in the face of competing products — most notably the burgeoning juggernaut Lotus 1-2-3 — and what VisiCorp described as Software Arts’s failure to provide “timely upgrades” amidst a relationship that was growing steadily more tense. With VisiCorp’s marketplace clout thus dissipating like air out of a balloon, it was hardly the ideal moment for them to ask for the sorts of commitments from users and developers required by Visi On.

The very first MS-DOS-based GUI struggled along with no uptake whatsoever for nine months or so; the only applications made for it were the word processor, spreadsheet, and graphing program VisiCorp made themselves. In September of 1984, with VisiCorp and Software Arts now embroiled in a court battle that would benefit only their competitors, the Visi On technology was sold to a veteran manufacturer of mainframes and supercomputers called Control Data Corporation, who proceeded to do very little if anything with it. VisiCorp went bankrupt soon after, while Lotus bought out Software Arts for a paltry $800,000, thus ending the most dramatic boom-and-bust tale of the early business-software industry. “VisiCorp’s auspicious climb and subsequent backslide,” wrote InfoWorld magazine, “will no doubt become a ‘how-not-to’ primer for software companies of the future.”

Visi On’s struggles may have been exacerbated by the sorry state of its parent company, but time would prove them to be by no means atypical of MS-DOS-based GUI systems in general. Already in February of 1984, PC Magazine could point to at least four other GUIs of one sort or another in the works from other third-party developers: Concurrent CP/M with Windows by Digital Research, VisuALL by Trillian Computer Corporation, DesqView by Quarterdeck Office Systems, and WindowMaster by Structured Systems. All of these would make different choices in trying to balance the seemingly hopelessly competing priorities of reasonable speed and reasonable hardware requirements, compatibility with MS-DOS applications and compatibility with post-MS-DOS philosophies of computing. None would find the sweet spot. Neither they nor the still more numerous GUI environments that followed them would be able to offer a combination of features, ease of use, and price that the market found compelling, so much so that by 1985 the whole field of MS-DOS GUIs was coming to be viewed with disdain by computer users who had been disappointed again and again. If you wanted a GUI, went the conventional wisdom, buy a Macintosh and live with the paltry software selection and the higher price. The mainstream of business computing, meanwhile, continued to truck along with creaky old MS-DOS, a shaky edifice made still more unstable by all of the hacks being grafted onto it to expand its memory model or to force it to load more than one application at a time. “Windowing and desktop environments are a solution looking for a problem,” said Robert Lefkowits, director of software services for Infocorp, in the fall of 1985. “Users aren’t really looking for any kind of windowing environment to solve problems. Users are not expressing a need or desire for it.”

The reason they weren’t, of course, was because they hadn’t yet seen a GUI in which the pleasure outweighed the pain. Entrenched as users were in the old way of doing things, accepting as they had become of all of MS-DOS’s discontents as simply the way computing was, it was up to software developers to show them why a GUI was something they had never known they couldn’t live without. Microsoft at least, the very people who had saddled their industry with the MS-DOS albatross, were smart enough to realize that mainstream business computing must be remade in the image of the much-scoffed-at Macintosh at some point. Further, they understood that it behooved them to do the remaking if they didn’t want to go the way of VisiCorp. By the time Lefkowits said his words, the long, winding tale of dogged perseverance in the face of failure and frustration that would become the story of Microsoft Windows had already been playing out for several years. One of these days, the GUI was going to make its breakthrough in one way or another, and it was going to do so with a Microsoft logo on its box — even if Bill Gates had to personally ram it down his customers’ throats.

(Sources: the books The Making of Microsoft: How Bill Gates and His Team Created the World’s Most Successful Software Company by Daniel Ichbiah and Susan L. Knepper and Computer Wars: The Fall of IBM and the Future of Global Technology by Charles H. Ferguson and Charles R. Morris; InfoWorld of October 31 1983, November 14 1983, April 2 1984, July 2 1984, and October 7 1985; Byte of June 1983, July 1983; PC Magazine of February 7 1984, and October 2 1984; the episode of the Computer Chronicles television program called “Integrated Software.” Finally, I owe a lot to Nathan Lineback for the histories, insights, comparisons, and images found at his wonderful online “GUI Gallery.”)

Footnotes
1 MS-DOS was known as PC-DOS when sold directly under license by IBM. Its functionality, however, was almost or entirely identical to the Microsoft-branded version. For simplicity’s sake, I will just refer to “MS-DOS” whenever speaking about either product — or, more commonly, both — in the course of this series of articles.
 


Another World

The French creative aesthetic has always been a bit different from that of English-speaking nations. In their paintings, films, even furniture, the French often discard the stodgy literalism that is so characteristic of Anglo art in favor of something more attenuated, where impression becomes more important than objective reality. A French art film doesn’t come off as a complete non sequitur to Anglo eyes in the way that, say, a Bollywood or Egyptian production can. Yet the effect it creates is in its way much more disorienting: it seems on the surface to be something recognizable and predictable, but suddenly zigs where we expect it to zag. In particular, it may show disconcertingly little interest in the logic of plot, that central concern of Anglo film. What affects what and why is of far less interest to a filmmaker like, say, François Truffaut than the emotional affect of the whole.

Crude though such stereotypes may be, when the French discovered computer games they did nothing to disprove them. For a long time, saying a game was French was a shorthand way for an Anglo to say that it was, well, kind of weird, off-kilter in a way that made it hard to judge whether the game or the player was at fault. Vintage French games weren’t always the most polished or balanced of designs, yet they must still be lauded today for their willingness to paint in emotional colors more variegated than the trite primary ones of fight or flight, laugh or cry. Such was certainly the case with Éric Chahi’s Another World.


France blazed its own trail through the earliest years of the digital revolution. Most people there caught their first glimpse of the digital future not through a home computer but through a remarkable online service called Minitel, a network of dumb terminals that was operated by the French postal and telephone service. Millions of people installed one of the free terminals in their home, making Minitel the most widely used online service in the world during the 1980s, dwarfing even the likes of CompuServe in the United States. Those in France who craved the capabilities of a full-fledged computer, meanwhile, largely rejected the Sinclair Spectrums and Commodore 64s that were sweeping the rest of Europe in favor of less universal lines like the Amstrad CPC and the Oric-1. Apple as well, all but unheard of across most of Europe, established an early beachhead in France, thanks to the efforts of a hard-charging and very Gallic general manager named Jean-Louis Gassée, who would later play a major role in shepherding the Macintosh to popularity in the United States.

In the second half of the 1980s, French hardware did begin to converge, albeit slowly, with that in use in the rest of Europe. The Commodore Amiga and Atari ST, the leading gaming computers in Europe as a whole, were embraced to at least some extent in France as well. By 1992, 250,000 Amigas were in French homes. This figure might not have compared very well to the 2.5 million of them in Britain and Germany by that point, but it was more than enough to fuel a thriving little Amiga game-development community that was already several years old. “Our games didn’t have the excellent gameplay of original English-language games,” remembers French game designer Philippe Ulrich, “but their aesthetics were superior, which spawned the term ‘The French Touch’ — later reused by musicians such as Daft Punk and Air.”

Many Amiga and ST owners had been introduced to the indelibly French perspective on games as early as 1988. That was the year of Captain Blood, which cast the player in the role of a clone doomed to die unless he could pool his vital essences with those of five other clones scattered across the galaxy — an existential quest for identity to replace the conquer-the-galaxy themes of most science-fiction games. If that alone wasn’t weird enough, the gameplay consisted mostly of talking to aliens using a strange constructed language of hieroglyphs devised by the game’s developers.

Such avoidance of in-game text, whether done as a practical method of easing the problems of localization or just out of the long-established French ambivalence toward translation from their mother tongue, would become a hallmark of the games that followed, as would a willingness to tackle subject matter that no one else would touch. The French didn’t so much reject traditional videogame themes and genres as filter them through their own sensibilities. Often, this meant reflecting American culture back upon itself in ways that could be both unsettling and illuminating. North & South, for instance, turned the Civil War, that greatest tragedy of American history, into a manic slapstick satire. For any American kid raised on a diet of exceptionalism and solemn patriotism, this was deeply, deeply strange stuff.

The creator of Another World, perhaps the ultimate example of the French Touch in games, was, as all of us must be, a product of his environment. Éric Chahi had turned ten the year that Star Wars dropped, marking the emergence of a transnational culture of blockbuster media, and he was no more immune to its charms than were other little boys all over the world. Yet he viewed that very American film through a very French lens. He liked the rhythm and the look of the thing — the way the camera panned across an endless vista of peaceful space down into a scene of battle at the beginning; the riff on Triumph of the Will that is the medal ceremony at the end — much more than he cared about the plot. His most famous work would evince this same rather non-Anglo sense of aesthetic priorities, playing with the trappings of American sci-fi pop culture but skewing them in a distinctly French way.

But first, there would be other games. From the moment Chahi discovered computers several years after Star Wars, he was smitten. “During school holidays, I didn’t see much of the sun,” he says. “Programming quickly became an obsession, and I spent around seventeen hours a day in front of a computer screen.” The nascent French games industry may have been rather insular, but that just made it if anything even more wide-open for a young man like himself than were those of other countries. Chahi was soon seeing the games he wrote — from platformers to text adventures — published on France’s oddball collection of viable 8-bit platforms. His trump card as a developer was a second talent that set him apart from the other hotshot bedroom coders: he was also a superb artist, whether working in pixels or in more traditional materials. Although none of his quickie 8-bit games became big hits, his industry connections did bring him to the attention of a new company called Delphine Software in 1988.

Delphine Software was about as stereotypically French a development house as can be imagined. It was a spinoff of Delphine Records, whose cash cow was the bizarrely popular easy-listening pianist Richard Clayderman, a sort of modern-day European Liberace who would come to sell 150 million records by 2006. Paul de Senneville, the owner of Delphine Records, was himself a composer and musician. Artist that he was, he gave his new software arm virtually complete freedom to make whatever games they felt like making. Their Paris offices looked like a hip recording studio; Chahi remembers “red carpet at the entrance, gold discs everywhere, and many eccentric contemporary art pieces.”

Future Wars

He had been hired by Delphine on the basis of his artistic rather than his programming talent, to illustrate a point-and-click adventure game with the grandiose title of Les Voyageurs du Temps: La Menace (“The Time Travelers: The Menace”), later to be released in English under the punchier name of Future Wars. Inspired by the Sierra graphic adventures of the time, it was nevertheless all French: absolutely beautiful to look at — Chahi’s illustrations were nothing short of mouth-watering — but more problematic to play, with a weird interface, weirder plot, and puzzles that were weirdest of all. As such, it stands today as a template for another decade and change of similarly baffling French graphic adventures to come, from companies like Coktel Vision as well as Delphine themselves.

But the important thing from Chahi’s perspective was that the game became a hit all across Europe upon its release in mid-1989, entirely on the basis of his stunning work as its illustrator. He had finally broken through. Yet anyone who expected him to capitalize on that breakthrough in the usual way, by settling into a nice, steady career as Delphine’s illustrator in residence, didn’t understand his artist’s temperament. He decided he wanted to make a big, ambitious game of his own all by himself — a true auteur’s statement. “I felt that I had something very personal to communicate,” he says, “and in order to bring my vision to others I had to develop the title on my own.” Like Marcel Proust holed up in his famous cork-lined Paris apartment, scribbling frantically away on In Search of Lost Time, Chahi would spend the next two years in his parents’ basement, working sixteen, seventeen, eighteen hours per day on Another World. He began with just two fixed ideas: he wanted to make a “cinematic” science-fiction game, and he wanted to do it using polygonal graphics.

Articles like this one throw around terms like “polygonal graphics” an awful lot, and their meanings may not always be clear to everyday readers. So, let’s begin by asking what separated the type of graphics Chahi now proposed to make from those he had been making before.

The pictures that Chahi had created for Future Wars were what is often referred to as pixel graphics. To make them, the artist loads a paint program, such as the Amiga’s beloved Deluxe Paint, and manipulates the actual onscreen pixels to create a background scene. Animation is accomplished using sprites: additional, smaller pictures that are overlaid onto the background scene and moved around as needed. On many computers of the 1980s, including the Amiga on which Chahi was working, sprites were implemented in hardware for efficiency’s sake. On other computers, such as the IBM PC and the Atari ST, they had to be conjured up, rather less efficiently, in software. Either way, though, the basic concept is the same.

The artist who works with polygonal graphics, on the other hand, doesn’t directly manipulate onscreen pixels. Instead she defines her “pictures” mathematically. She builds scenes out of geometric polygons of three sides or more, defined as three or more connected points, or sets of X, Y, and Z coordinates in abstract space. At run time, the computer renders all this data into an image on the monitor screen, mapping it onto physical pixels from the perspective of a “camera” that’s anchored at some point in space and pointed in a defined direction. Give a system like this one enough polygons to render, and it can create scenes of amazing complexity.

Still, it does seem like a roundabout way of approaching things, doesn’t it? Why, you may be wondering, would anyone choose to use polygonal graphics instead of just painting scenes with a conventional paint program? Well, the potential benefits are actually enormous. Polygonal graphics are a far more flexible, dynamic form of computer graphics. Whereas in the case of a pixel-art background you’re stuck with the perspective and distance the artist chose to illustrate, you can view a polygonal scene in all sorts of different ways simply by telling the computer where in space the “camera” is hanging. A polygonal scene, in other words, is more like a virtual space than a conventional illustration — a space you can move through, and that can in turn move around you, just by changing a few numbers. And it has the additional advantage that, being defined only as a collection of anchoring points for the polygons that make it up rather than needing to explicitly describe the color of every single pixel, it usually takes up much less disk space as well.
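A minimal sketch may help make this concrete. The little C program below is purely my own illustration, far simpler than any real renderer of the era: it defines a triangle as three points in space and projects each of them onto a 320×200 screen through a pinhole-camera model. Note that the entire “picture” amounts to just nine numbers, and that moving the triangle, or the implied camera, is merely a matter of changing them:

    #include <stdio.h>

    /* A vertex in the artist's abstract 3-D space. */
    typedef struct { float x, y, z; } Vec3;

    /* Map a vertex onto a 320x200 screen with a pinhole-camera model:
       dividing by z makes distant points crowd toward the center of
       the screen, producing perspective foreshortening. */
    void project(Vec3 v, float focal, int *sx, int *sy)
    {
        *sx = 160 + (int)(focal * v.x / v.z);   /* screen center is (160,100) */
        *sy = 100 - (int)(focal * v.y / v.z);   /* screen y grows downward */
    }

    int main(void)
    {
        /* One triangle, "defined as three connected points": the whole
           picture is just nine numbers, however large it looks onscreen. */
        Vec3 tri[3] = { {-1.0f, 0.0f, 5.0f}, {1.0f, 0.0f, 5.0f}, {0.0f, 2.0f, 5.0f} };
        int i, sx, sy;
        for (i = 0; i < 3; i++) {
            project(tri[i], 120.0f, &sx, &sy);
            printf("vertex %d -> pixel (%d, %d)\n", i, sx, sy);
        }
        return 0;
    }

Halve every z and the triangle looms twice as large; the same data yields endless views, which is exactly the flexibility described above.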

With that knowledge to hand, you might be tempted to reverse the question asked above, and ask why anyone wouldn’t want to use polygonal graphics. In fact, polygonal graphics of one form or another had been in use on computers since the 1960s, and were hardly unheard of in the games industry of the 1980s. They were most commonly found in vehicular simulators like subLOGIC’s Flight Simulator, which needed to provide a constantly changing out-the-cockpit view of their worlds. More famously in Europe, Elite, one of the biggest games of the decade, also built its intense space battles out of polygons.

The fact is, though, that polygonal graphics have some significant disadvantages to go along with their advantages, and these were magnified by the limited hardware of the era. Rendering a scene out of polygons was mathematically intensive in comparison to the pixel-graphic-backgrounds-and-sprites approach, pushing an 8-bit or even 16-bit CPU (like the Motorola 68000 in the Amiga) hard. It was for this reason that early versions of Flight Simulator and Elite and many other polygonal games rendered their worlds only as wire-frame graphics; there just wasn’t enough horsepower to draw in solid surfaces and still maintain a decent frame rate.

And there were other drawbacks. The individual polygons from which scenes were formed were all flat surfaces; there was no concept of smooth curvature in the mathematics that underlay them. (More modern polygonal-graphics implementations do make use of something called splines to allow for curvature, but these weren’t practical to implement using 1980s and early 1990s computers.) But the natural world, of course, is made up of almost nothing but curves. The only way to compensate for this disparity was to use many small polygons, packed so closely together that their flat surfaces took on the appearance of curvature to the eye. Yet increasing the polygon count in this way increased the burden of rendering it all on the poor overtaxed CPUs of the day — a burden that quickly became untenable. In practice, then, polygonal graphics took on a distinctive angular, artificial appearance, whose sense of artificiality was only enhanced by the uniform blotches of color in which they were drawn. (Again, the state of the art in modern polygonal graphics is much different in this area than it was in Another World’s time: today textures are mapped onto polygonal surfaces to create a more realistic appearance, and scenes are illuminated by light sources that produce realistic shadings and shadows across the whole. But all of this was hopelessly far beyond what Chahi or anyone else of Another World’s era could hope to implement in a game which needed to be interactive and to run at a reasonable speed.)

These illustrations show how an object can be made to appear rounded by making it out of a sufficient number of flat polygons. The problem is that each additional polygon which must be rendered taxes the processor that much more.
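Some rough arithmetic shows how punishing the tradeoff was. For a regular polygon standing in for a circle of radius r, the worst-case gap between a flat edge and the true curve works out to r(1 − cos(π/n)). The back-of-the-envelope C program below (my own illustration, nothing from any period source) tabulates that error, and shows that each doubling of the polygon count, and thus roughly of the rendering work, only cuts the visible error to about a quarter:

    #include <math.h>
    #include <stdio.h>

    #define PI 3.14159265358979

    /* Approximate a circle of radius r with a regular n-gon. The
       worst-case gap between a flat edge and the true curve is
       r * (1 - cos(PI / n)); each doubling of n cuts the error to
       roughly a quarter while doubling the rendering work. */
    int main(void)
    {
        double r = 100.0;   /* radius in pixels */
        int n;
        for (n = 6; n <= 96; n *= 2)
            printf("%2d segments: worst-case error %.2f pixels\n",
                   n, r * (1.0 - cos(PI / n)));
        return 0;
    }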

For all these reasons, polygonal graphics were mostly confined to the sort of first-person-perspective games, like those aforementioned vehicular simulators and some British action-adventures, which couldn’t avoid using them. But Chahi would buck the trend by using them for his own third-person-perspective game. Their unique affordances and limitations would stamp Another World just as much as its creator’s own personality, giving the game’s environments the haunting, angular vagueness of a dream landscape. The effect is further enhanced by Chahi’s use of a muted, almost pastel palette of just 16 colors and an evocative, minimalist score by Jean-François Freitas — the only part of the game that wasn’t created by Chahi himself. Although you’re constantly threatened with death — and, indeed, will die over and over in the course of puzzling your way through the game — it all operates on the level of impression rather than reality.

According to some theories of visual art, the line between merely duplicating reality and conveying impressions of reality is the one that separates the draftsman from the artist. If so, Another World‘s visuals betray an aesthetic sophistication rarely seen in computer games of its era. While other games strained to portray violence with ever more realism, Another World went another way entirely, creating an affect that’s difficult to put into words — a quality which is itself another telltale sign of Art. Chahi:

Polygon techniques are great for animation, but the price you pay is the lack of detail. Because I couldn’t include much detail, I decided to work with the player’s imagination, creating suggestive content instead of being highly descriptive. That’s why, for example, the beast in the first scene is impressive even if it is only a big black shape. The visual style of Another World is really descended from the black-and-white comic-book style, where shape and volume are suggested in a very subtle way. By doing Another World, I learned a lot about suggestion. I learned that the medium is the player’s own imagination.

To make his suggestive rather than realistic graphics, Chahi spent much time first making tools, beginning with an editor written in a variant of BASIC. The editor’s output was then rendered in the game itself by routines written in assembly language for the sake of speed, with the logic of it all controlled using a custom script language of Chahi’s own devising. This approach would prove a godsend when it came time to port the game to platforms other than the Amiga; a would-be porter merely had to recreate the rendering engine on a new platform, making it capable of interpreting Chahi’s original polygonal-graphics data and scripts. Thus Another World was, in addition to being a game, actually a new cross-platform game engine as well, albeit one that would only be used for a single title.
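Chahi’s real instruction set isn’t reproduced here, but the general architecture of such an engine is easy to sketch: a loop that fetches opcodes from platform-independent script data and dispatches them to a small set of platform-specific primitives. Everything in the C fragment below, the opcode names very much included, is hypothetical and meant only to illustrate the shape of the design:

    #include <stdio.h>

    /* Entirely hypothetical opcodes, for illustration only; Another
       World's real instruction set was different. */
    enum { OP_SETCOLOR, OP_DRAWPOLY, OP_WAIT, OP_END };

    /* The portable core: scripts and polygon data stay identical on
       every machine, and only the primitives invoked here (in a real
       engine, the polygon rasterizer) need rewriting per platform. */
    void run_script(const unsigned char *code)
    {
        for (;;) {
            switch (*code++) {
            case OP_SETCOLOR: printf("set color %d\n", *code++);     break;
            case OP_DRAWPOLY: printf("draw polygon #%d\n", *code++); break;
            case OP_WAIT:     printf("wait %d frames\n", *code++);   break;
            case OP_END:      return;
            }
        }
    }

    int main(void)
    {
        static const unsigned char demo[] =
            { OP_SETCOLOR, 3, OP_DRAWPOLY, 7, OP_WAIT, 2, OP_END };
        run_script(demo);
        return 0;
    }

Port the handful of primitives and the dispatch loop, and every script and polygon in the game comes along for free; that is precisely the property that made Another World so portable.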

Some of the graphics had their point of origin in the real world, having been captured using a long-established animation technique known as rotoscoping: tracing the outlines, frame by frame, of real people or objects filmed in motion, to form the basis of their animated equivalents. Regular readers of this blog may recall that Jordan Mechner used the same technique as far back as 1983 to create the characters in his cinematic karate game Karateka. Yet the differences between the two young developers’ approaches to the technique says much about the march of technology between 1983 and 1989.

Mechner shot his source footage on real film, then used a mechanical Moviola editing machine, a staple of conventional filmmakers for decades, to isolate and make prints of every third frame of the footage. He then traced these prints into his Apple II using an early drawing pad called a VersaWriter.

Chahi’s Amiga allowed a different approach. It had been developed during the brief heyday of laser-disc games in arcades. These often worked by overlaying interactive computer-generated graphics onto static video footage unspooling from the laser disc itself. Wishing to give their new computer the potential to play similar games in the home with the addition of an optional laser-disc player, the designers of the Amiga built into the machine’s graphics chips a way of overlaying the display onto other video; one color of the onscreen palette could be defined as transparent, allowing whatever video lay “below” it to peek through. The imagined laser-disc accessory would never appear due to issues of cost and practicality, but, in a classic example of an unanticipated technological side-effect, this capability combined with the Amiga’s excellent graphics in general made it a wonderful video-production workstation, able to blend digital titles and all sorts of special effects with the analog video sources that still dominated during the era. Indeed, the emerging field of “desktop video” became by far the Amiga’s most sustained and successful niche outside of games.
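In software terms, the keying the Amiga performed in its display hardware is equivalent to the following compositing loop, a hedged sketch of the principle rather than of anything the machine actually executed:

    #include <stdio.h>

    #define TRANSPARENT 0   /* palette index designated as "transparent" */

    /* Composite one scanline of computer graphics over external video:
       wherever the graphics layer holds the transparent color, the
       video signal shows through instead. The Amiga's display hardware
       performed this keying on the fly, in hardware rather than in a
       loop like this one. */
    void composite(const unsigned char *gfx, const unsigned char *video,
                   unsigned char *out, int width)
    {
        int i;
        for (i = 0; i < width; i++)
            out[i] = (gfx[i] == TRANSPARENT) ? video[i] : gfx[i];
    }

    int main(void)
    {
        unsigned char gfx[8]   = { 0, 0, 5, 5, 0, 7, 0, 0 };  /* titles, effects */
        unsigned char video[8] = { 9, 9, 9, 9, 9, 9, 9, 9 };  /* analog video feed */
        unsigned char out[8];
        int i;
        composite(gfx, video, out, 8);
        for (i = 0; i < 8; i++)
            printf("%d ", out[i]);   /* prints: 9 9 5 5 9 7 9 9 */
        return 0;
    }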

The same capability now simplified the process of rotoscoping dramatically for Chahi in comparison to what Mechner had been forced to do. He shot video footage of himself on an ordinary camcorder, then played it back on a VCR with single-frame stop capability. His Amiga was attached to the same television as the VCR. Chahi could thus trace the images directly from video into his Amiga, without having to fuss with prints at all.

It wasn’t until months into the development of Another World that a real game, and with it a story of sorts, began to emerge from this primordial soup of graphics technology. Chahi made a lengthy cut scene, rendered, like all of the ones that would follow, using the same graphics engine as the game’s interactive portions for the sake of aesthetic consistency. The entire scene, lasting some two and a half minutes, used just 70 K of disk space thanks to the magic of polygonal graphics. In it, the player’s avatar, a physicist named Lester Cheykin, shows up at his laboratory for a night of research, only to be sucked into his own experiment and literally plunged into another world; he emerges underwater, just a few meters above some vicious plant life eager to make a meal out of him. The player’s first task, then, is to hastily swim to the surface, and the game proper gets underway. The story that follows, such as it is, is one of more desperate escapes from the flora and fauna of this new world, including an intelligent race that doesn’t like Lester any more than its less intelligent counterparts do. Importantly, neither the player nor Lester ever learns precisely where he is — another planet? another dimension? — or why the people that live there — we’ll just call them the “aliens” from now on for simplicity’s sake — want to kill him.
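
It’s worth pausing over just how dramatic that 70 K figure is. The quick back-of-the-envelope calculation below assumes a typical Amiga display of 320 × 200 pixels at 16 colors and a modest 12 frames per second — plausible figures for illustration, not documented specifics of the scene — and shows what storing the same footage as raw bitmap frames would have cost.

    # Raw bitmap frames versus 70 K of polygon data. The display and
    # frame-rate figures are assumptions, not measurements.
    width, height, bits_per_pixel = 320, 200, 4              # 16-color screen
    bytes_per_frame = width * height * bits_per_pixel // 8   # 32,000 bytes
    seconds, fps = 150, 12                                   # 2.5 minutes
    total_kb = bytes_per_frame * seconds * fps / 1024
    print(f"{total_kb:,.0f} K as raw frames, versus roughly 70 K as polygons")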

True to the spirit of the kid who found the look of Star Wars more interesting than the plot, the game is constructed with a filmmaker’s eye toward aesthetic composition rather than conventional narrative. After the opening cut scene, the whole game contains not one word devoted to dialog, exposition, or anything else until “The End” appears, excepting only grunts and muffled exclamations made in an alien language you can’t understand. All of Chahi’s efforts were poured into the visual set-pieces, which are consistently striking and surprising, often with multiple layers of action.

Chahi:

I wanted to create a truly immersive game in a very consistent, living universe with a movie feel. I never wanted to create an interactive movie itself. Instead I wanted to extract the essence of a movie — the rhythm and the drama — and place it into game form. To do this I decided to leave the screen free of the usual information aids like an energy bar, score counter, and other icons. Everything had to be in the universe, with no interruptions getting in the way.

Midway through the game, you encounter a friend, an alien who’s been imprisoned — for reasons that, needless to say, are never explained — by the same group who are out to get you. The two of you join forces, helping one another through the rest of the story. Your bond of friendship is masterfully conveyed without using words, relying on the same impressionistic visuals as everything else. The final scene, where the fellow Chahi came to call “Buddy” gently lifts an exhausted Lester onto the back of a strange winged creature and they fly away together, is one of the more transcendent in videogame history, a beautiful closing grace note that leaves you with a lump in your throat. The agonizingly slow pace of this final scene, contrasted with the frenetic pace of the action that precedes it, is exactly what Chahi means when he speaks about trying to capture the rhythm of a great movie.

For its creator, the ending had another special resonance. When implementing the final scene, two years after retiring into his parents’ basement, Chahi himself felt much like poor exhausted Lester, crawling toward the finish line.

But, you might ask, what has the player spent all of the time between the ominous opening cut scene and the transcendent final one actually doing? In some ways, that’s the least interesting aspect of Another World. The game is at bottom a platforming action-adventure, with a heavy emphasis on the action. Each scene is a challenge to be tackled in two phases: first, you have to figure out what Chahi wants you to do in order to get through its monsters, tricks, and traps; then, you have to execute it all with split-second precision. It’s not particularly easy. The idealized perfect player can make a perfect run through Another World, including watching all of the cut scenes, in half an hour. Imperfect real-world players, on the other hand, can expect to watch Lester die over and over as they slowly blunder their way through the game. At least you’re usually allowed to pick up pretty close to where you left off when Lester dies — because, trust me, he will die, and often.

When we begin to talk of influences and points of comparison for Another World inside the realm of games, one name inevitably leaps to mind first. I already mentioned Jordan Mechner in the context of his own work with rotoscoping, but that’s only the tip of an iceberg of similarities between Another World and his two famous games, Karateka and Prince of Persia. He was another young man with a cinematic eye, more interested in translating the “rhythm and drama” of film to an interactive medium than he was in making “interactive movies” in the sense that his industry at large tended to understand that term. Indeed, Chahi has named Karateka as perhaps the most important ludic influence on Another World, and if anything the parallels between the latter and Prince of Persia are even stronger: both were the virtually single-handed creations of their young auteurs; both largely eschew text in favor of visual storytelling; both clear their screen of score markers and other status indicators in the name of focusing on what’s really important; both are brutally difficult platformers; both can be, because of that brutal difficulty, almost more fun to watch someone else play than they are to play yourself, at least for those of us who aren’t connoisseurs of their try-and-try-again approach to game design.

Still, for all the similarities, nobody is ever likely to mistake Prince of Persia for Another World. Much of the difference must come down to — to engage in yet more crude national stereotyping — the fact that one game is indisputably American, the other very, very French. Mechner, who has vacillated between a career as a game-maker and a filmmaker throughout his life, wrote his movie scripts in the accessible, family-friendly tradition of Steven Spielberg, his favorite director, and brought the same sensibility to his games. But Chahi’s Another World has, as we’ve seen, the sensibility of an art film more so than a blockbuster. The two works together stand as a stark testimony to the way that things which are so superficially similar in art can actually be so dramatically different.

A mentally and physically drained Éric Chahi crawled the final few feet into Delphine’s offices to deliver the finished Another World in late 1991. His final task was to paint the cover art for the box, a last step in the cementing of the game as a deeply personal expression in what was already becoming known as a rather impersonal medium. It was released in Europe before the end of the year, whereupon it became a major, immediate hit for reasons that, truth be told, probably had little to do with its more emotionally resonant qualities: in a market that thrived on novelty, it looked like absolutely nothing else. That alone was enough to drive sales, but in time at least some of the young videogame freaks who purchased it found in it something they’d never bargained for: the ineffable magic of a close encounter with real Art. Memories of those feelings continue to make it a perennial today whenever people of a certain age draw up lists of their favorite games.

Delphine had an established relationship with Interplay as their American publisher. The latter were certainly intrigued by Chahi’s creation, but seemed a little nonplussed by its odd texture. They thus lobbied him for permission to replace its evocative silences, which were only occasionally broken up by Jean-François Freitas’s haunting score, with a more conventional thumping videogame soundtrack. Chahi was decidedly opposed, to the extent of sending Interplay’s offices an “infinite fax” repeating the same sentence again and again: “Keep the original music!” Thankfully, they finally agreed to do so, although conflicts with a long-running daytime soap opera which was also known as Another World did force them to change the name of the game in the United States to the more gung-ho-sounding Out of This World. But on the positive side, they put the game through the rigorous testing process the airy-fairy artistes at Delphine couldn’t be bothered with, forcing Chahi to fix hundreds of major and minor bugs and unquestionably turning it into a far tighter, more polished experience.

I remember Out of This World‘s 1992 arrival in the United States with unusual vividness. I was still an Amiga loyalist at the time, even as the platform’s star was all too obviously fading in my country. It will always remain imprinted on my memory as the last “showpiece” Amiga game I encountered, the last time I wanted to call others into the room and tell them to “look at this!” — the last of a long line of such showpieces that had begun with Defender of the Crown back in 1986. For me, then, it marked the end of an era in my life. Shortly thereafter, my once-beloved old Amiga got unceremoniously dumped into the closet, and I didn’t have much to do with computers at all for the next two or three years.

But Interplay, of course, wasn’t thinking of endings when the Amiga version of Out of This World was greeted with warm reviews in the few American magazines still covering Amiga games. Computer Gaming World called the now-iconic introductory cut scene “one of the most imaginative pieces of non-interactive storytelling ever associated with a computer game” — a description which might almost, come to think of it, be applied to the game as a whole, depending on how broad your definition of “interactive storytelling” is willing to be. Reviewers did note that the game was awfully short, however, prompting Interplay to cajole the exhausted Chahi into making one more scene for the much-anticipated MS-DOS port. This he duly did, diluting the concentrated experience that was the original version only moderately in the process.

The game was ported to many more platforms in the years that followed, including to consoles like the Super Nintendo and Sega Genesis, eventually even to iOS and Android in the form of a “20th Anniversary Edition.” Chahi estimates that it sold some 1 million copies in all during the 1990s alone. He made the mistake of authorizing Interplay to make a sequel called Heart of the Alien for the Sega CD game console in 1994, albeit with the typically artsy stipulation that it must be told from the point of view of Buddy. The results were so underwhelming that he regrets the decision to this day, and has resisted all further calls to make or authorize sequels. Instead he’s worked on other games over the years, but only intermittently, mixing his work in games with a range of other pursuits such as volcanology, photography, and painting. His ludography remains tiny — another trait, come to think of it, that he shares with Jordan Mechner — and he is still best known by far for Another World, which is perhaps just as well; it’s still his own personal favorite of his games. It remains today a touchstone for a certain school of indie game developers in particular, who continue to find inspiration in its artsy, affective simplicity.

In fact, Another World raises some interesting questions about the very nature of games. Is it possible for a game that’s actually not all that great at all in terms of mechanics and interactivity to nevertheless be a proverbial great game in some more holistic sense? The brilliant strategy-game designer Sid Meier has famously called a good game “a series of interesting decisions.” Another World resoundingly fails to meet this standard of ludic goodness. In it, you the player have virtually no real decisions to make at all; your task is rather to figure out the decisions which Éric Chahi has already made for Lester, and thereby to advance him to the next scene. Of course, the Sid Meier definition of gaming goodness can be used to criticize plenty of other games — even other entire game genres. Certainly most adventure games as well are largely exercises in figuring out the puzzle solutions the author has already set in place. Yet even they generally offer a modicum of flexibility, a certain scope for exploration in, if nothing else, the order in which you approach the puzzles. Another World, on the other hand, allows little more scope for exploration or improvisation than the famously straitjacketed Dragon’s Lair — which is, as it happens, another game Chahi has listed as an inspiration. Winning Dragon’s Lair entails nothing more nor less than making just the right pre-determined motions with the controller at just the right points in the course of watching a static video clip. In Another World, Lester is at least visibly responsive to your commands, but, again, anything but the exactly right commands, executed with perfect precision, just gets him killed and sends you back to the last checkpoint to try again.

So, for all that it’s lovely and moving to look at, does Another World really have any right to be a game at all? Might it not work better as an animated short? Or, to frame the question more positively, what is it about the interactivity of Another World that actually adds to the audiovisual experience? Éric Chahi, for his part, makes a case for his game using a very different criterion from that of Meier’s “interesting decisions”:

It’s true that Another World is difficult. When I played it a year ago, I discovered how frustrating it can be sometimes — and breathtaking at the same time. The trial-and-error doesn’t disturb me, though. Another World is a game of survival on a hostile world, and it really is about life and death. Death doesn’t mean the end of the game, but it is a part of the exploration, a part of the experience. That’s why the death sequences are so diversified. To solve many puzzles, I recognize that you have to die at least once, and this certainly isn’t the philosophy of today’s game design. It is a controversial point in Another World’s design because it truly serves the emotional side of things and the player’s attachment to the characters, but it sometimes has a detrimental effect on the gameplay. Because of this, Another World must be considered first as an intense emotional experience.

Personally, I’m skeptical of whether deliberately frustrating the player, even in the name of artistic affect, is ever a good design strategy, and I must confess that I remain in the camp of players who would rather watch Another World than try to struggle through it on their own. Yet there’s no question that Éric Chahi’s best-remembered game does indeed deserve to be remembered for its rare aesthetic sophistication, and for stimulating emotional responses that go way beyond the typical action-game palette of anger and fear. While there is certainly room for “interesting decisions” in games — and perhaps a few of them might not have gone amiss in Another World itself — games ought to be able to make us feel as well. This lesson of Another World is one every game designer can stand to profit from.

(Sources: the book Principles of Three-Dimensional Animation: Modeling, Rendering, and Animating with 3D Computer Graphics by Michael O’Rourke; Computer Gaming World of August 1992; Game Developer of November 2011; Questbusters of June/July 1992; The One of October 1991 and October 1992; Zero of November 1991; Retro Gamer 24 and 158; Amiga Format 1992 annual; bonus materials included with the 20th Anniversary edition of Another World; an interview with Éric Chahi conducted for the film From Bedrooms to Billions: The Amiga Years; Chahi’s postmortem talk about the game at the 2011 Game Developers Conference; “How ‘French Touch’ Gave Early Videogames Art, Brains” from Wired; “The Eccentricities of Eric Chahi” from Eurogamer. The cut-scene and gameplay footage in the article is taken from a World of Longplays YouTube video.

Another World is available for purchase on GOG.com in a 20th Anniversary Edition with lots of bonus content.)

Footnotes
1 More modern polygonal-graphics implementations do make use of something called splines to allow for curvature, but these weren’t practical to implement using 1980s and early 1990s computers.
2 Again, the state of the art in polygonal graphics is much different in this area today than it was in Another World‘s time. Textures are now mapped onto polygonal surfaces to create a more realistic appearance, and scenes are illuminated by light sources that produce realistic shadings and shadows across the whole. But all of this was hopelessly far beyond what Chahi or anyone else of Another World’s era could hope to implement in a game which needed to be interactive and to run at a reasonable speed.
 


The Incredible Machine

As we saw in my previous article, Jeff Tunnell walked away from Dynamix’s experiments with “interactive movies” feeling rather disillusioned by the whole concept. How ironic, then, that in at least one sense comparisons with Hollywood continued to ring true even after he thought he’d consigned such things to his past. When he stepped down from his post at the head of Dynamix in order to found Jeff Tunnell Productions and make smaller but more innovative games, he was making the sort of bargain with commercial realities that many a film director had made before him. In the world of movies, and now increasingly in that of games as well, smaller, cheaper projects were usually the only ones allowed to take major thematic, formal, and aesthetic risks. If Tunnell hoped to innovate, he had come to believe, he would have to return to the guerrilla model of game development that had held sway during the 1980s, deliberately rejecting the studio-production culture that was coming to dominate the industry of the 1990s. So, he recruited Kevin Ryan, a programmer who had worked at Dynamix almost from the beginning, and set up shop in the office next door with just a few other support personnel.

Tunnell knew exactly what small but innovative game he wanted to make first. It was, appropriately enough, an idea that dated back to those wild-and-free 1980s. In fact, he and Damon Slye had batted it around when first forming Dynamix all the way back in 1983. At that time, Electronic Arts’s Pinball Construction Set, which gave you a box of (virtual) interchangeable parts to use in making playable pinball tables of your own, was taking the industry by storm, ushering in a brief heyday of similar computerized erector sets; Electronic Arts alone would soon be offering the likes of an Adventure Construction Set, a Music Construction Set, and a Racing Destruction Set. Tunnell and Slye’s idea was for a sort of machine construction set: a system for cobbling together functioning virtual mechanisms of many types out of interchangeable parts. But they never could sell the vaguely defined idea to a publisher, thus going to show that even the games industry of the 1980s maybe wasn’t quite so wild and free as nostalgia might suggest.[1]

Still, the machine-construction-set idea never left Tunnell, and, after founding Jeff Tunnell Productions in early 1992, he was convinced that now was finally the right time to see it through. At its heart, the game, which he would name The Incredible Machine, must be a physics simulator. Luckily, all those years Kevin Ryan had spent building all those vehicular simulators for Dynamix provided him with much of the coding expertise and even actual code that he would need to make it. Ryan had the basic engine working within a handful of months, whereupon Tunnell and anyone else who was interested could start pitching in to make the many puzzles that would be needed to turn a game engine into a game.

The look of the Mouse Trap board game…

…is echoed by the Incredible Machine computer game.

If Pinball Construction Set and those other early “creativity games” were one part of the influences that would result in The Incredible Machine, the others are equally easy to spot. One need only glance at a screenshot to be reminded of the old children’s board game cum toy Mouse Trap, a simplistic exercise in roll-and-move whose real appeal is the elaborate, Rube Goldberg-style mechanism that the players slowly assemble out of plastic parts in order to trap one another’s pieces — if, that is, the dodgy contraption, made out of plastic and rubber bands, doesn’t collapse on itself instead. But sadly, there’s only one way to put the mousetrap’s pieces together, making the board game’s appeal for any but the youngest children short-lived. The Incredible Machine, on the other hand, would offer the opportunity to build a nearly infinite number of virtual mousetraps.

In contrast to such venerable inspirations, the other game that clearly left its mark on The Incredible Machine was one of the hottest current hits in the industry at the time the latter was being made. Lemmings, the work of a small team out of Scotland called DMA Design, was huge in every corner of the world where computer games were played — a rarity during what was still a fairly fragmented era of gaming culture. A level-oriented puzzle game of ridiculous charm, Lemmings made almost anyone who saw it want to pick up the mouse and start playing it, and yet managed to combine this casual accessibility with surprising depth and variety over the course of 120 levels that started out trivial and escalated to infuriating and beyond. Its strong influence can be seen in The Incredible Machine‘s similar structure, consisting of 87 machines to build, beginning with some tutorial puzzles to gently introduce the concepts and parts and ending with some fiendishly complex problems indeed. For that matter, Lemmings‘s commercial success, which proved that there was a real market for accessible games with a different aesthetic sensibility than the hardcore norm, did much to make Sierra, Dynamix’s new owner and publisher, enthusiastic about the project.

Like Lemmings, the heart of The Incredible Machine is its robust, hugely flexible engine. Yet that potential would have been for naught had not Tunnell, Ryan, and their other associates delivered a progression of intriguing puzzles that build upon one another in logical ways as you learn more and more about the engine’s possibilities. One might say that, if the wonderful engine is the heart of the game, the superb puzzle design is the soul of the experience — just as is the case, yet again, with Lemmings. In training you how to play interactively and then slowly ramping up the challenge, Lemmings and The Incredible Machine both embraced the accepted best practices of modern game design well before they had become such. They provide you the wonderful rush of feeling smart, over and over again as you master the ever more complex dilemmas they present to you.

To understand how The Incredible Machine actually works in practice, let’s have a look at a couple of its individual puzzles. We’ll begin with the very first of them, an admittedly trivial exercise for anyone with any experience in the game.

Each puzzle begins with three things: with a goal; with an incomplete machine already on the main board, consisting of some selection of immovable parts; and with some additional parts waiting off on the right side of the screen, to be dragged onto the board where we will. In this case, we need to send the basketball through the “hoop” — which is, given that there is no “net” graphic in the game’s minimalist visual toolkit, the vaguely hole-shaped arrangement of pieces below and to the right of where the basketball stands right now. Looking to the parts area at the far right, we see that we have three belts, three hamster wheels, and three ramp pieces to help us accomplish our goal. The score tallies at the bottom of the screen have something or other to do with time and number of puzzles already completed, but feel free to do like most players and ignore them; the joy of this game is in making machines that work, not in chalking up high scores. Let’s start our fragment of a machine in its initial state and see what happens.

Not much, right? The bowling ball that begins suspended in mid-air simply falls into the ether. Let’s begin to make something more interesting happen by putting a hamster cage below the falling ball. When the ball drops on top of it, the little fellow will get spooked and start to run.


His scurrying doesn’t accomplish anything as long as his wheel isn’t connected to any other parts. So, let’s stretch a belt from the hamster wheel to the conveyor belt just above and to its right.


Now we’re getting somewhere! If we put a second hamster wheel in the path of the second bowling ball, and connect it to the second conveyor belt, we can get the third bowling ball rolling.


And then, as you’ve probably surmised, the same trick can be used to send the basketball through the hoop.

Note that we never made use of the three ramp pieces at our disposal. This is not unusual. Because each puzzle really is a dynamic physics simulation rather than a problem with a hard-coded solution, many of them have multiple solutions, some of which may never have been thought of by the designers. In this quality as well The Incredible Machine is, yet once more, similar to Lemmings.
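
This distinction between simulating an outcome and matching a stored answer is worth making concrete. The toy sketch below, whose “physics” is wholly invented for illustration, shows the key idea: the goal is a predicate tested against the simulated world state, so any arrangement of parts that happens to work is accepted, including arrangements the designers never imagined.

    # Simulation-driven puzzle checking: the puzzle is solved by whatever
    # the simulated world actually does, not by comparing the player's
    # layout against a hard-coded solution.

    def simulate(ball_x, ramps, ticks=100):
        """Drop a ball; each placed ramp shunts it one column rightward."""
        x, y = ball_x, 0
        for _ in range(ticks):
            y += 1                  # gravity pulls the ball down one row
            if (x, y) in ramps:     # a ramp at this cell deflects the ball
                x += 1
        return x

    def solved(ramps, goal_x=5):
        """Goal predicate: does the ball end up in the basket's column?"""
        return simulate(ball_x=3, ramps=ramps) == goal_x

    # Two quite different part placements both satisfy the same goal.
    print(solved({(3, 10), (4, 20)}))   # True
    print(solved({(3, 50), (4, 51)}))   # True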

The game includes many more parts than we had available to us in the first puzzle; there are some 45 of them in all, far more than any single puzzle could ever use. Even the physical environment itself eventually becomes a variable, as the later puzzles begin to mess with gravity and atmospheric pressure.

We won’t look at anything that daunting today, but we should have a look at a somewhat more complicated puzzle from a little later in the game, one that will give us more of a hint of the engine’s real potential.

In tribute to Mouse Trap (and because your humble correspondent here just really likes cats), this one will be a literal game of cat and mouse, as shown above. We need to move Mort the Mouse from the top right corner of the screen to the vaguely basket-like enclosure at bottom left, and we’ll have to use Pokey the Cat to accomplish part of that goal. We have more parts to work with this time than will fit in the parts window to the right. (We can scroll through the pages of parts by clicking on the arrows just above.) So, in addition to the two belts, one gear, one electric motor, two electric fans, and one generator shown in the screenshot below, know that we also have three ramp pieces at our disposal.

Already with the starting setup, a baseball flips on a household power outlet, albeit one to which nothing is initially connected.

We can connect one of the fans to the power outlet to blow Mort toward the left. Unfortunately, he gets stuck on the scenery rather than falling all the way down to the next level.


So, we need to alter the mouse’s trajectory by using one of our ramp pieces; note that these, like many parts, can be flipped horizontally and stretched to suit our needs. Our first attempt at placing the ramp does cause Mort to fall down to the next level, and he then starts running away from Pokey toward the right, as we want. But he’s not fast enough to get to the end of the pipe on which he’s running before Pokey catches him. This is good for Pokey, but not so good for us — and, needless to say, least good of all for Mort. (At least the game politely spares us the carnage that ensues after he’s caught by making him simply disappear.)


A little more experimentation and we find a placement of the ramp that works better.


Now we just have to move the mouse back to the left and into the basket. The most logical approach would seem to be to use the second fan to blow him there. Simple enough, right? Getting it running, however, will be a more complicated affair, considering that we don’t have a handy mains-power outlet already provided down here and that our fan’s cord won’t stretch anywhere near as far as we need it to in order to utilize the outlet above. So, we begin by instead plugging our electric motor into the second socket of the outlet we do have, and belting it up to the gear that’s already fixed in place.


So far, so good. Now we mesh the gear from our box of parts with the one that’s already on the board, and belt it up to our generator, which provides us with another handy power outlet right where we need it.


Now we place our second fan just right, and… voila! We’ve solved the puzzle with two ramp pieces to spare.


The experience of working through the stages of a solution, getting a little closer each time, is almost indescribably satisfying for anyone with the slightest hint of a tinkering spirit. The Incredible Machine wasn’t explicitly pitched as an educational product, but, like a lot of Sierra’s releases during this period, it nevertheless had something of an educational — or at least edutational — aura, what with its bright, friendly visual style and nonviolent premise (the occasional devoured mouse excepted!). There’s much to be learned from it — not least that even the most gnarly problems, in a computer game or in real life, can usually be tackled by breaking them down into a series of less daunting sub-problems. Later on, when the puzzles get really complex, one may question where to even start. The answer, of course, is just to put some parts on the board and connect some things together, to start seeing what’s possible and how things react with one another. Rolling up the old sleeves and trying things is better than sitting around paralyzed by a puzzle’s — or by life’s — complexity. For the pure tinkerers among us, meanwhile, the game offers a free-form mode where you can see what sort of outlandish contraption you can come up with, just for the heck of it. It thus manages to succeed as both a goal-oriented game in the mode of Lemmings and as a software toy in the mode of its 1980s inspirations.

As we’ve already seen, Jeff Tunnell Productions had been formed with the intention of making smaller, more formally innovative games than those typically created inside the main offices of Dynamix. It was tacitly understood that games of this stripe carried with them more risk and perhaps less top-end sales potential than the likes of Damon Slye’s big military flight simulators; these drawbacks would be compensated for only by their vastly lower production costs. It’s thus a little ironic to note that The Incredible Machine, upon its release on December 1, 1992, became a major, immediate hit by the standard of any budget. Were it not for another of those aforementioned Damon Slye simulations, a big World War II-themed extravaganza called Aces of the Pacific that had been released just days before it, it would actually have become Dynamix’s single best-selling game to date. As it was, Aces of the Pacific sold a few more absolute units, but in terms of profitability there was no comparison; The Incredible Machine had cost peanuts to make by the standards of an industry obsessed with big, multimedia-rich games.

The size comparisons are indeed telling. Aces of the Pacific had shipped on three disks, while Tunnell’s previous project, the interactive cartoon The Adventures of Willy Beamish, had required six. The Incredible Machine, by contrast, fit comfortably on a single humble floppy, a rarity among games from Dynamix’s parent company Sierra especially, from whose boxes sometimes burst forth as many as a dozen disks, and who looked forward with desperate urgency to the arrival of CD-ROMs and their 650 MB of storage. The Incredible Machine needed less than 1 MB of space in all, and its cost of production had been almost as out of proportion with the Sierra norm as its byte count. It thus didn’t take Dynamix long to ask Jeff Tunnell Productions to merge back into their main fold. With the profits The Incredible Machine was generating, it would be best to make sure its developers remained in the Dynamix/Sierra club.

There was much to learn from The Incredible Machine‘s success for any student of the evolving games industry who bothered to pay attention. Along with Tetris and Lemmings before it, it provided the perfect template for “casual” gaming, a category the industry hadn’t yet bothered to label. It could be used as a five-minute palate-cleanser between tasks on the office computer as easily as it could become a weekend-filling obsession on the home computer. It was a low-investment game, quick and easy to get into and get out of, its premise and controls obvious from the merest glance at the screen, yet managed to conceal beneath its shallow surface oceans of depth. At the same time, though, that depth was of such a nature that you could set it aside for weeks or months when life got in the way, then pick it up and continue with the next puzzle as if nothing had happened. This sort of thing, much more so than elaborate interactive movies filmed with real actors on real sound stages —  or, for that matter, hardcore flight simulators that demanded hours and hours of practice just to rise to the level of competent — would prove to be the real future of digital games as mass-market entertainments. The founding ethos of the short-lived entity known as Jeff Tunnell Productions — to focus on small games that did one thing really, really well — could stand in for that of countless independent game studios working in the mobile and casual spaces today.

Still, it would be a long time before The Incredible Machine and games like it became more than occasional anomalies in an industry obsessed with cutting-edge technology and size, both in megabytes and in player time commitment. In the meantime, developers who did realize that not every gamer was thirsting to spend dozens of hours immersed in an interactive Star Wars movie or Lord of the Rings novel could do very well for themselves. The Incredible Machine was the sort of game that lent itself to almost infinite sequels once the core engine had been created. With the latter to hand, all that remained for Tunnell and company was to churn out more puzzles. Thus the next several years brought The Even More! Incredible Machine, a re-packaging of the original game with an additional 73 puzzles; Sid & Al’s Incredible Toons, which moved the gameplay into more forthrightly cartoon territory via its titular Tom & Jerry ripoffs; and The Incredible Machine 2 and The Incredible Toon Machine, which were just what they sounded like they would be. Being the very definition of “more of the same,” these aren’t the sort of games that lend themselves to extended criticism, but certainly players who had enjoyed the original game found plenty more to enjoy in the sequels. Along the way, the series proved quietly but significantly influential as more than just one of the pioneers of casual games in the abstract: it became the urtext of the entire genre of so-called “physics simulators.” There’s much of The Incredible Machine‘s influence to be found in more than one facet of such a modern casual mega-hit as the Angry Birds franchise.

For his part, Jeff Tunnell took away from The Incredible Machine‘s success the lesson that his beloved small games were more than commercially viable. He spent most of the balance of the 1990s working similar territory. In the process, he delivered two games that sold even better than The Incredible Machine franchise — in fact, they became the two best-selling games Dynamix would ever release. Trophy Bass and 3-D Ultra Pinball are far from the best-remembered or best-loved Dynamix-related titles among hardcore gamers today, but they sold and sold and sold to an audience that doesn’t tend to read blogs like this one. While neither is a brilliantly innovative design like The Incredible Machine, their huge success hammers home the valuable lesson, still too often forgotten, that many different kinds of people play many different kinds of games for many different reasons, and that none of these people, games, or reasons is a wrong one.

(Sources: Sierra’s InterAction news magazine of Fall 1992 and Winter 1992; Computer Gaming World of March 1992 and April 1993; Commodore Microcomputers of November/December 1986; Matt Barton’s interviews with Jeff Tunnell in Matt Chat 200 and 201; press releases, annual reports, and other internal and external documents from the Sierra archive at the Strong Museum of Play.

All of the Incredible Machine games are available for purchase in one “mega pack” from GOG.com.)

Footnotes
1 That, anyway, is the story which both Jeff Tunnell and Kevin Ryan tell in interviews today, which also happened to be the only one told in an earlier version of this article. But this blog’s friend Jim Leonard has since pointed out the existence of a rather obscure children’s game from the heyday of computerized erector sets called Creative Contraptions, published by the brief-lived software division of Bantam Books and created by a team of developers who called themselves Looking Glass Software (no relation to the later, much more famous Looking Glass Studios). It’s a machine construction set in its own right, one which is strikingly similar to the game which is the main subject of this article, even including some of the very same component parts, although it is more limited in many ways than Tunnell and Ryan’s creation, with simpler mechanisms to build out of fewer parts and less flexible controls that are forced to rely on keystrokes rather than the much more intuitive affordances of the mouse. One must assume that Tunnell and Ryan either reinvented much of Creative Contraptions or expanded on a brilliant concept beautifully in the course of taking full advantage of the additional hardware at their disposal. If the latter, there’s certainly no shame in that.
 


What’s in a Subtitle?

Sharp-eyed readers may have already noticed that I’ve changed the subtitle of this blog from “a history of computer entertainment” to “a history of computer entertainment and digital culture.” This is not so much indicative of any change in focus as it is a better description of what this blog has always been. I’ve always made space for aspects of what we might call “creative computing” that aren’t games, from electronic literature to the home-computer wars, from the birth of hypertext to early online culture, from influential science fiction to important developments in programming, and that will of course continue.

That is all. Carry on.

 

Ebooks and Future Plans

I’m afraid I don’t have a standard article for you this week. I occasionally need to skip a Friday to store up an independent writer’s version of vacation time, and the beginning of a five-Friday month like this one is a good time to do that. That said, this does make a good chance to give you some updates on the latest goings-on here at Digital Antiquarian World Headquarters, and to solicit some feedback on a couple of things that have been on my mind of late. So, let me do that today, and I’ll be back with the usual fare next Friday. (Patreon supporters: don’t worry, this meta-article’s a freebie!)

First and foremost, I’m pleased to be able to release the latest volume of the growing ebook collection compiling the articles on this site, this one centering roughly — even more roughly than usual, in fact — on 1991. Volume 13 has been a long time coming because the last year has brought with it a lot of longer, somewhat digressive series on topics like Soviet computing and the battle over Tetris, the metamorphosis of Imagine Software into Psygnosis, the world of pre-World Wide Web commercial online services, and of course my recently concluded close reading of Civilization, along with the usual singletons on individual games and related topics. This ebook is by far the fattest one yet, and I think it contains some of the best work I’ve ever done; these are certainly, at any rate, some of the articles I’ve poured the most effort into. As usual, it exists only thanks to the efforts of Richard Lindner. He’s outdone himself this time, even providing fresh cover art to suit what he described to me as the newly “glamorous, visual” era of the 1990s. If you appreciate being able to read the blog in this way, feel free to send him a thank-you note at the email address listed on the title page of the ebook proper.

Next, I want to take this opportunity to clear up the current situation around Patreon, something I’ve neglected to do for an unconscionably long time. Many of you doubtless remember the chaos of last December, when Patreon suddenly announced changes to their financial model that would make a blog like this one, which relies mostly on small donations, much less tenable. I scrambled to find alternatives to Patreon for those who felt (justifiably) betrayed by the changes, and had just about settled on a service called Memberful when Patreon reversed course and went back to the old model after a couple of weeks of huge public outcry.

Despite sending some mixed messages in the weeks that followed that reversal, I haven’t ever implemented Memberful as an alternative funding model due to various nagging concerns: I’m worried about tech-support issues that must come with a bespoke solution, not happy about being forced to sell monthly rather than per-article subscriptions (meaning I have to feel guilty if due to some emergency I can’t publish four articles in any given month), and concerned about the complication and confusion of offering two separate subscription models — plus PayPal! — as funding solutions (just writing a FAQ to explain it all would take a full day or two!). In addition, a hard look at the numbers reveals that a slightly higher percentage of most pledges would go to third parties when using Memberful than happens with Patreon. It’s for all these reasons that, after much agonized back-and-forthing, I’ve elected to stay the course with Patreon alone as my main funding mechanism, taking them at their word that they’ll never again do anything like what they did last December.

I do understand that some of you are less inclined to be forgiving, which is of course your right. For my part, even the shenanigans of last December weren’t quite enough to destroy the good will I have toward Patreon for literally changing my life by allowing me to justify devoting so much time and energy to this blog. (They were of course only the medium; I’m even more grateful to you readers!) At any rate, know that except for that one blip Patreon has always treated me very well, and that their processing fees are lower than I would pay using any other subscription service. And yeah, okay… maybe also keep your fingers crossed that I’ve made the right decision in giving them a second chance before I hit the panic button. Fool me once…

So, that’s where we stand with the Patreon situation, which can be summed up as sticking with the status quo for now.  But it’s not the only thing I’ve been a bit wishy-washy about lately…

As a certain recent ten-article series will testify, I fell hard down the Civilization rabbit hole when I first began to look at that game a year or so ago. I’ve spent quite some time staring at that Advances Chart, trying to decide what might be there for me as a writer. I’m very attracted to the idea of writing some wider-scale macro-history in addition to this ongoing micro-history of the games industry, as I am by the idea of writing said history in terms of achievement and (largely) peaceful progress as opposed to chronicles of wars and battles won and lost.  Still, I’ve struggled to figure out what form it all should take.

My first notion was to start a second blog. It would be called — again, no surprise here for readers of my Civilization articles! — The Narrative of Progress, and would be structured around an Advances Chart similar but not identical to the one in the Civilization box. (Intriguing as it is, the latter also has some notable oddities, such as its decision to make “Alphabet” and “Writing” into separate advances; how could you possibly have one without the other?) I even found a web developer who did some work on prototyping an interactive, dynamically growing Advances Chart with links to individual articles. But we couldn’t ever come up with anything that felt more intuitive and usable than a traditional table of contents, so I gave up on that idea. I was also concerned about whether I could possibly handle the research burden of so many disparate topics in science, technology, and sociology — a concern which the Civilization close reading, over the course of which I made a few embarrassing gaffes which you readers were kind enough to point out to me, has proved were justified.

But still I remain attracted to the idea of doing a different kind of history in addition to this gaming history. Lately, I’ve gravitated to the Wonders of the World. In fact, Civilization prompted my wife Dorte and me to take a trip to Cairo just a month ago — a crazy place, let me tell you! — to see the Pyramids, the Egyptian Museum, and other ancient sites. I think I could do a great job with these topics, as they’re right in my writerly wheelhouse of readable narrative history, and it would be hard to go wrong with stories as fascinating as these. Up until just a couple of weeks ago I had schemed about doing these kinds of stories on this site, but finally had to give that idea up as the wrong approach as well. I would have to set up a second Patreon anyway, as I couldn’t possibly expect people who signed up to support a “history of interactive entertainment” to support this other stuff as well, and running two Patreons and two parallel tracks out of a single WordPress blog would just be silly.

All of which is to say that I’m as undecided as ever about this stuff. I know I’d like to do some wider-frame historical writing at some point, almost certainly hosted at a different site, but I don’t know exactly when that will be or what form it will take. Would you be interested in reading such a thing? I’d be interested to hear your opinions and suggestions, whether in the comments below or via email.

Whatever happens, rest assured that I remain committed to this ongoing history as well; the worst that might result from a second writing project would be a somewhat slower pace here. I’m occasionally asked how far I intend to go with this history, and I’ve never had a perfect answer. A few years ago, I thought 1993’s Doom might be a good stopping place, as it marked the beginning of a dramatic shift in the culture of computer games. But the problem with that, I’ve come to realize, is that it did indeed only mark the beginning of a shift, and to stop there would be to leave countless threads dangling. These days, the end of the 1990s strikes me as a potential candidate, but we’ll see. At any rate, I don’t have plans for stopping anytime soon — not as long as you’re still willing to read and support this work. Who knows, maybe we’ll make it all the way to 2018 someday.

In the meantime, a quick rundown of coming attractions for the historical year of 1992. (If you want to be completely surprised every week, skip this list!)

  • Jeff Tunnell’s hugely influential physics puzzler The Incredible Machine
  • the seminal platformer Another World, among other things a beautiful example of lyrical nonverbal storytelling
  • a series on the evolution of Microsoft Windows, encompassing the tangled story of OS/2, the legal battle with Apple over look-and-feel issues, and those Windows time-wasters, like Solitaire, Minesweeper, and Hearts, that became some of the most-played computer games in history
  • William Gibson’s experimental poem-that-destroys-itself Agrippa
  • Shades of Gray, an underappreciated literary statement in early amateur interactive fiction which came up already in my conversation with Judith Pintar, but deserves an article of its own
  • Legend’s two Gateway games
  • Indiana Jones and the Fate of Atlantis
  • Electronic Arts in the post “rock-star” years, Trip Hawkins’s departure, and the formation of 3DO
  • The Lost Files of Sherlock Holmes, which might just be my all-time favorite Holmes game
  • Interplay’s two Star Trek graphic adventures
  • the adventures in Sierra’s Discovery line of games for children, which were better than most of their adult adventure games during this period
  • Quest for Glory III and IV
  • the strange story behind the two Dune games which were released back-to-back in 1992
  • Star Control II
  • Ultima Underworld and Ultima VII
  • Darklands

Along with all that, I’ve had a great suggestion from Casey Muratori — who, incidentally, was also responsible for my last article by first suggesting I take a closer look at Dynamix’s legacy in narrative games — to write something about good puzzles in adventure games. I’ve long been conscious of spending a lot more time describing bad puzzles in detail than I do good ones. The reason for this is simply that I hesitate to spoil the magic of the good puzzles for you, but feel far less reluctance with regard to the bad ones. Still, it does rather throw things out of balance, and perhaps I should do something about that. Following Casey’s suggestion, I’ve been thinking of an article describing ten or so good puzzles from classic games, analyzing how they work in detail and, most importantly, why they work.

That’s something on which I could use your feedback as well. When you think of the games I’ve written about so far on this blog, whether textual or graphical, is there a puzzle that immediately springs to mind as one that you just really, really loved for one reason or another? (For me, just for the record, that puzzle is the T-removing machine from Leather Goddesses of Phobos.) If so, feel free to send it my way along with a sentence or two telling me why, once again either in the comments below or via private email. I can’t promise I can get to all of them, but I’d like to assemble a reasonable selection of puzzles that delight for as many different reasons as possible.

Finally, please do remember that I depend on you for support in order to continue doing this work. If you enjoy and/or find something of value in what I do here, if you’re lucky enough to have disposable income, and if you haven’t yet taken the plunge, please do think about signing up as a Patreon supporter at whatever level strikes you as practical and warranted. I run what seems to be one of the last “clean” sites on the Internet — no advertisements, no SEO, no personal-data-mining, no “sponsored articles,” just the best content I can provide — but that means that I have to depend entirely upon you to keep it going. With your support, we can continue this journey together for years to come.

And with that, I’ll say thanks to all of you for being the best readers in the world and wish you a great weekend. See you next week with a proper article!