
Doing Windows, Part 1: MS-DOS and Its Discontents

Has any successful piece of software ever deserved its success less than the benighted, unloved exercise in minimalism that was MS-DOS? The program that started its life as a stopgap under the name of “The Quick and Dirty Operating System” at a tiny, long-forgotten hardware maker called Seattle Computer Products remained a stopgap when it was purchased by Bill Gates of Microsoft and hastily licensed to IBM for their new personal computer. Archaic even when the IBM PC shipped in October of 1981, MS-DOS immediately sent half the software industry scurrying to come up with something better. Yet actually arriving at a viable replacement would absorb a decade’s worth of disappointment and disillusion, conflict and compromise — and even then the “replacement” would still have to be built on top of the quick-and-dirty operating system that just wouldn’t die.

This, then, is the story of that decade, and of how Microsoft at the end of it finally broke Windows into the mainstream.


When IBM belatedly turned their attention to the emerging microcomputer market in 1980, it was both a case of bold new approaches and business-as-usual. In the willingness they showed to work together with outside partners on the hardware and especially the software front, the IBM PC was a departure for them. In other ways, though, it was a continuation of a longstanding design philosophy.

With the introduction of the System/360 line of mainframes back in 1964, IBM had in many ways invented the notion of a computing platform: a nexus of computer models that could share hardware peripherals and that could all run the same software. To buy an IBM system thereafter wasn’t so much to buy a single computer as it was to buy into a rich computing ecosystem. Long before the saying went around corporate America that “no one ever got fired for buying Microsoft,” the same was said of IBM. When you contacted them, they sent a salesman or two out to discuss your needs, desires, and budget. Then, they tailored an installation to suit and set it up for you. You paid a bit more for an IBM, but you knew it was safe. System/360 models were available at prices ranging from $2500 per month to $115,000 per month, with the latter machine a thousand times more powerful than the former. Their systems were thus designed, as all their sales literature emphasized, to grow with you. When you needed more computer, you just contacted the mother ship again, and another dark-suited fellow came out to help you decide what your latest needs really were. With IBM, no sharp breaks ever came in the form of new models which were incompatible with the old, requiring you to remake from scratch all of the processes on which your business depended. Progress in terms of IBM computing was a gradual evolution, not a series of major, disruptive revolutions. Many a corporate purchasing manager loved them for the warm blanket of safety, security, and compatibility they provided. “Once a customer entered the circle of 360 users,” noted IBM’s president Thomas Watson Jr., “we knew we could keep him there a very long time.”

The same philosophy could be seen all over the IBM PC. Indeed, it would, as much as the IBM name itself, make the first general-purpose IBM microcomputer the accepted standard for business computing on the desktop, just as were their mainframe lines in the big corporate data centers. You could tell right away that the IBM PC was both built to last and built to grow along with you. Opening its big metal case revealed a long row of slots just waiting to be filled, thereby transforming it into exactly the computer you needed. You could buy an IBM PC with one or two floppy drives, or more, or none; with a color or a monochrome display card; with anywhere from 16 K to 256 K of RAM.

But the machine you configured at time of purchase was only the beginning. Both IBM and a thriving aftermarket industry would come to offer heaps more possibilities in the months and years that followed the release of the first IBM PC: hard drives, optical drives, better display cards, sound cards, ever larger RAM cards. And even when you finally did bite the bullet and buy a whole new machine with a faster processor, such as 1984’s PC/AT, said machine would still be able to run the same software as the old, just as its slots would still be able to accommodate hardware peripherals scavenged from the old.

Evolution rather than revolution. It worked out so well that the computer you have on your desk or in your carry-on bag today, whether you prefer Windows, OS X, or Linux, is a direct, lineal descendant of the microcomputer IBM released more than 35 years ago. Long after IBM themselves got out of the PC game, and long after sexier competitors like the Commodore Amiga and the first and second generation Apple Macintosh have fallen by the wayside, the beast they created shambles on. Its long life is not, as zealots of those other models don’t hesitate to point out, down to any intrinsic technical brilliance. It’s rather all down to the slow, steady virtues of openness, expandability, and continuity. The timeline of what’s become known as the “Wintel” architecture in personal computing contains not a single sharp break with the past, only incremental change that’s been carefully managed — sometimes even technologically compromised in comparison to what it might have been — so as not to break compatibility from one generation to the next.

That, anyway, is the story of the IBM PC on the hardware side, and a remarkable story it is. On the software side, however, the tale is more complicated, thanks to the failure of IBM to remember the full lesson of their own System/360.

At first glance, the story of the IBM PC on the software side seems to be just another example of IBM straining to offer a machine that can be made to suit every potential customer, from the casual home user dabbling in games and BASIC to the most rarefied corporate purchaser using it to run mission-critical applications. Thus when IBM announced the computer, four official software operating paradigms were also announced. One could use the erstwhile quick-and-dirty operating system that was now known as MS-DOS;[1] one could use CP/M, the standard for much of pre-IBM business microcomputing, from which MS-DOS had borrowed rather, shall we say, extensively (remember the latter’s original name?); one could use an innovative cross-platform environment, developed by the University of California San Diego’s computer-science department, that was based around the programming language Pascal; or one could choose not to purchase any additional operating software at all, instead relying on the machine’s built-in ROM-hosted Microsoft BASIC environment, which wasn’t at all dissimilar from those the same company had already provided for many or most of the other microcomputers on the market.

In practice, though, this smorgasbord of possibilities only offered one remotely appetizing entree in the eyes of most users. The BASIC environment was really suited only to home users wanting to tinker with simple programs and save them on cassettes, a market IBM had imagined themselves entering with their first microcomputer but had in reality priced themselves out of. The UCSD Pascal system was ahead of its time with its focus on cross-platform interoperability, accomplished using a form of byte code that would later inspire the Java virtual machine, but it was also rather slow, resource-hungry, and, well, just kind of weird — and it was quite expensive as well. CP/M ought to have been poised for success on the new machine given its earlier dominance, but its parent company Digital Research was unconscionably late making it available for the IBM PC, taking until well after the machine’s October 1981 launch to get it ported from the Zilog Z-80 microprocessor to the Intel architecture of the IBM PC and its successor models — and when CP/M finally did appear it was, once again, expensive.

That left MS-DOS, which worked, was available, and was fairly cheap. As corporations rushed out to purchase the first safe business microcomputer at a pace even IBM had never anticipated, MS-DOS relegated the other three solutions to a footnote in computing history. Nobody’s favorite operating system, it was about to become the most popular one in the world.

The System/360 line that had made IBM the 800-pound gorilla of large-scale corporate data-processing had used an operating system developed in-house by them with an eye toward the future every bit as pronounced as that evinced by the same line’s hardware. The emerging IBM PC platform, on the other hand, had gotten only half of that equation down. MS-DOS was locked into the 1 MB address space of the Intel 8088, allowing any computer on which it ran just 640 K of RAM at the most. When newer Intel processors with larger address spaces began to appear in new IBM models as early as 1984, software and hardware makers and ordinary users alike would be forced to expend huge amounts of time and effort on ugly, inefficient hacks to get around the problem.

Infamous though the 640 K barrier would become, memory was just one of the problems that would dog MS-DOS programmers throughout the operating system’s lifetime. True to its post-quick-and-dirty moniker of the Microsoft Disk Operating System, most of its 27 function calls involved reading and writing to disks. Otherwise, it allowed programmers to read the keyboard and put text on the screen — and not much of anything else. If you wanted to show graphics or play sounds, or even just send something to the printer, the only way to do it was to manually manipulate the underlying hardware. Here the huge amount of flexibility and expandability that had been designed into the IBM PC’s hardware architecture became a programmer’s nightmare. Let’s say you wanted to put some graphics on the screen. Well, a given machine might have an MDA monochrome video card or a CGA color card, or, soon enough, a monochrome Hercules card or a color EGA card. You the programmer had to build into your program a way of figuring out which one of these your host had, and then had to write code for dealing with each possibility on its own terms.
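To give a concrete taste of what that bare-metal juggling looked like, here is a minimal sketch in DOS-era C. It assumes a compiler with the old far-pointer extension and the MK_FP() macro, as Turbo C provided; the buffer addresses are the standard ones for MDA and CGA text modes.

#include <dos.h>   /* for the MK_FP() far-pointer macro (Turbo C and friends) */

/* Poke a character straight into text-mode video memory. The MDA's
   buffer lives at segment 0xB000, the CGA's at 0xB800, which is
   exactly why a program had to know which card was installed.       */
void put_char_bare_metal(int is_cga, int row, int col, char ch)
{
    unsigned seg = is_cga ? 0xB800 : 0xB000;
    /* Each cell of the 80-column text screen is two bytes: the
       character itself followed by a color/attribute byte.          */
    char far *cell = (char far *) MK_FP(seg, (row * 80 + col) * 2);
    cell[0] = ch;
    cell[1] = 0x07;   /* light gray on black */
}

Multiply this sort of special-casing by every video card, sound card, and printer on the market, and the scale of the problem becomes apparent.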

An example of how truly ridiculous things could get is provided by WordPerfect, the most popular business word processor from the mid-1980s on. WordPerfect Corporation maintained an entire staff of programmers whose sole job function was to devour the technical specifications and command protocols of each new printer that hit the market and write drivers for it. Their output took the form of an ever-growing pile of disks that had to be stuffed into every WordPerfect box, even though only one of them would be of any use to any given buyer. Meanwhile another department had to deal with the constant calls from customers who had purchased a printer for which they couldn’t find a driver on their extant mountain of disks, situations that could be remedied in the era before widespread telecommunications only by shipping off yet more disks. It made for one hell of a way to run a software business; at times the word processor itself could almost feel like an afterthought for WordPerfect Printer Drivers, Inc.

But the most glaringly obvious drawback to MS-DOS stared you in the face every time you turned on the computer and were greeted with that blinking, cryptic “C:\>” prompt. Hackers might have loved the command line, but it was a nightmare for a secretary or an executive who saw the computer only as an appliance. MS-DOS contrived to make everything more difficult through its sheer primitive minimalism. Think of the way you work with your computer today. You’re used to having several applications open at once, used to being able to move between them and cut and paste bits and pieces from one to the other as needed. With MS-DOS, you couldn’t do any of this. You could run just one application at a time, which would completely fill the screen. To do something else, you had to shut down the application you were currently using and start another. And if what you were hoping to do was to use something you had made in the first application inside the second, you could almost always forget about it; every application had its own proprietary data formats, and MS-DOS didn’t provide any method of its own of moving data from one to another.

Of course, the drawbacks of MS-DOS spelled opportunity for those able to offer ways to get around them. Thus Lotus Corporation became one of the biggest software success stories of the 1980s by making Lotus 1-2-3, an unwieldy colossus that integrated a spreadsheet, a database manager, and a graph- and chart-maker into a single application. People loved the thing, bloated though it was, because all of its parts could at least talk to one another.

Other solutions to the countless shortcomings of MS-DOS, equally inelegant and partial, were rampant by the time Lotus 1-2-3 hit it big. Various companies published various types of hacks to let users keep multiple applications resident in memory, switching between them using special arcane key sequences. Various companies discussed pacts to make interoperable file formats for data transfer between applications, although few of them got very far. Various companies made a cottage industry out of selling pre-packaged printer drivers to other developers for use in their applications. People wrote MS-DOS startup scripts that brought up easy-to-choose-from menus of common applications on bootup, thereby insulating timid secretaries and executives alike from the terrifying vagueness of the command line. And everybody seemed to be working a different angle when it came to getting around the 640 K barrier.

All of these bespoke solutions constituted a patchwork quilt which the individual user or IT manager would have to stitch together for herself in order to arrive at anything like a comprehensive remedy for MS-DOS’s failings. But other developers had grander plans, and much of their work quickly coalesced around various forms of the graphical user interface. Initially, this fixation may sound surprising if not inexplicable. A GUI built using a mouse, menus, icons, and windows would seem to fix only one of MS-DOS’s problems, that being its legendary user-unfriendliness. What about all the rest of its issues?

As it happens, when we look closer at what a GUI-based operating environment does and how it does it, we find that it must or at least ought to carry with it solutions to MS-DOS’s other issues as well. A windowed environment ideally allows multiple applications to be open at one time, if not actually running simultaneously. Being able to copy and paste pieces from one of those open applications to another requires interoperable data formats. Running or loading multiple applications also means that one of them can’t be allowed to root around in the machine’s innards indiscriminately, lest it damage the work of the others; this, then, must mark the end of the line for bare-metal programming, shifting the onus onto the system software to provide a proper layer of high-level function calls insulating applications from a machine’s actual or potential hardware. And GUIs, given that they need to do all of the above and more, are notoriously memory-hungry, which obligated many of those who made such products in the 1980s to find some way around MS-DOS’s memory constraints. So, a GUI environment proves to be much, much more than just a cutesy way of issuing commands to the computer. Implementing one on an IBM PC or one of its descendants meant that the quick-and-dirty minimalism of MS-DOS had to be chucked forever.

Some casual histories of computing would have you believe that the entire software industry was rigidly fixated on the command line until Steve Jobs came along to show them a better way with the Apple Macintosh, whereupon they were dragged kicking and screaming into computing’s necessary future. Such histories generally do acknowledge that Jobs himself got the GUI religion after a visit to the Xerox Palo Alto Research Center in December of 1979, but what tends to get lost is the fact that he was hardly alone in viewing PARC’s user-interface innovations as the natural direction for computing to go in the more personal, friendlier era of high technology being ushered in by the microcomputer. Indeed, by 1981, two years before a GUI made its debut on an Apple product in the form of the Lisa, seemingly everyone was already talking about them, even if the acronym itself had yet to be invented. This is not meant to minimize the hugely important role Apple really would play in the evolution of the GUI; as we’ll see to a large extent in the course of this very series of articles, they did much original formative work that has made its way into the computer you’re probably using to read these words right now. It’s rather just to say that the complete picture of how the GUI made its way to the personal computer is, as tends to happen when you dig below the surface of any history, more variegated than a tidy narrative of “A caused B which caused C” allows for.

In that spirit, we can note that the project destined to create the MS-DOS world’s first GUI was begun at roughly the same time that a bored and disgruntled Steve Jobs over at Apple, having been booted off the Lisa project, seized control of something called the Macintosh, planned at the time as an inexpensive and user-friendly computer for the home. This other pioneering project in question, also started during the first quarter of 1981, was the work of a brief-lived titan of business software called VisiCorp.

VisiCorp had been founded by one Dan Fylstra under the name of Personal Software in 1978, at the very dawn of the microcomputer age, as one of the first full-service software publishers, trafficking mostly in games which were submitted to him by hobbyists. His company became known for their comparatively slick presentation in a milieu that was generally anything but; MicroChess, one of their first releases, was quite probably the first computer game ever to be packaged in a full-color box rather than a Ziploc baggie. But their course was changed dramatically the following year when a Harvard MBA student named Dan Bricklin contacted Fylstra with a proposal for a software tool that would let accountants and other businesspeople automate most of the laborious financial calculations they were accustomed to doing by hand. Fylstra was intrigued enough to lend the microcomputer-less Bricklin one of his own Apple IIs — whereupon, according to legend at least, the latter proceeded to invent the electronic spreadsheet over the course of a single weekend. He hired a more skilled programmer named Bob Frankston and formed a company called Software Arts to develop that rough prototype into a finished application, which Fylstra’s Personal Software published in October of 1979.

Up to that point, early microcomputers like the Apple II, Radio Shack TRS-80, and Commodore PET had been a hard sell as practical tools for business — even for their most seemingly obvious business application of all, that of word processing. Their screens could often only display 40 columns of big, blocky characters, often only in upper case — about as far away from the later GUI ideal of “what you see is what you get” as it was possible to go — while their user interfaces were arcane at best and their minuscule memories could only accommodate documents of a few pages in length. Most potential business users took one look at the situation, added on the steep price tag for it all, and turned back to their typewriters with a shrug.

VisiCalc, however, was different. It was so clearly, manifestly a better way to do accounting that every accountant Fylstra showed it to lit up like a child on Christmas morning, giggling with delight as she changed a number here or there and watched all of the other rows and columns update automagically. VisiCalc took off like nothing the young microcomputer industry had ever seen, landing tens of thousands of the strange little machines in corporate accounting departments. As the first tangible proof of what personal computing could mean to business, it prompted people to begin asking why IBM wasn’t a part of this new party, doing much to convince the latter to remedy that absence by making a microcomputer of their own. It’s thus no exaggeration to say that the entire industry of business-oriented personal computing was built on the proof of concept that was VisiCalc. It would sell 500,000 copies by January of 1983, an absolutely staggering figure for that time. Fylstra, seeing what was buttering his bread, eventually dropped all of the games and other hobbyist-oriented software from his catalog and reinvented Personal Software as VisiCorp, the first major publisher of personal-computer business applications.

But all was not quite as rosy as it seemed at the new VisiCorp. Almost from the moment of the name change, Dan Fylstra found his relationship with Dan Bricklin growing strained. The latter was suspicious of his publisher’s rebranding themselves in the image of his intellectual property, feeling they had been little more than the passive beneficiaries of his brilliant stroke. This point of view was by no means an entirely fair one. While it may have been true that Fylstra had been immensely lucky to get his hands on Bricklin’s once-in-a-lifetime innovation, he’d also made it possible by loaning Bricklin an Apple II in the first place, then done much to make VisiCalc palatable for corporate America through slick, professional packaging and marketing that projected exactly the right conservative, businesslike image, consciously eschewing the hippie ethos of the Homebrew Computer Club. Nevertheless, Bricklin, perhaps a bit drunk on all the praise of his genius, credited VisiCorp’s contribution to VisiCalc’s success but little. And so Fylstra, nervous about continuing to stake his entire company on Bricklin, set up an internal development team to create more products for the business market.

By the beginning of 1981, the IBM PC project which VisiCalc had done so much to prompt was in full swing, with the finished machine due to be released before the end of the year. Thanks to their status as publisher of the hottest application in business software, VisiCorp had been taken into IBM’s confidence, one of a select number of software developers and publishers given access to prototype hardware in order to have products ready to go on the day the new machine shipped. It seems that VisiCorp realized even at this early point how underwhelming the new machine’s various operating paradigms were likely to be, for even before they had actual IBM hardware to hand, they started mocking up the GUI environment that would become known as Visi On using Apple II and III machines. Already at this early date, it reflected a real, honest, fundamental attempt to craft a more workable model for personal computing than the nightmare that MS-DOS alone could be. William Coleman, the head of the development team, later stated in reference to the project’s founding goals that “we wanted users to be able to have multiple programs on the screen at one time, ease of learning and use, and simple transfer of data from one program to another.”

Visi On seemed to have huge potential. When VisiCorp demonstrated an early version, albeit far later than they had expected to be able to, at a trade show in December of 1982, Dan Fylstra remembers a rapturous reception, “competitors standing in front of [the] booth at the show, shaking their heads and wondering how the company had pulled the product off.” It was indeed an impressive coup; well before the Apple Macintosh or even Lisa had debuted, VisiCorp was showing off a full-fledged GUI environment running on hardware that had heretofore been considered suitable only for ugly old MS-DOS.

Still, actually bringing a GUI environment to market and making a success out of it was a much taller order than it might have first appeared. As even Apple would soon be learning to their chagrin, any such product trying to make a go of it within the increasingly MS-DOS-dominated culture of mainstream business computing ran headlong into a whole pile of problems which lacked clearly best solutions. Visi On, like almost all of the GUI products that would follow for the IBM hardware architecture, was built on top of MS-DOS, using the latter’s low-level function calls to manage disks and files. This meant that users could install it on their hard drive and pop between Visi On and vanilla MS-DOS as the need arose. But a much thornier question was that of running existing MS-DOS applications within the Visi On environment. Those which assumed they had full control of the system — which was practically all of them, because why wouldn’t they? — would flame out as soon as they tried to directly access some piece of hardware that was now controlled by Visi On, or tried to put something in some specific place inside what was now a shared pool of memory, or tried to do any number of other now-forbidden things. VisiCorp thus made the hard decision to not even try to get existing MS-DOS applications to run under Visi On. Software developers would have to make new, native applications for the system; Visi On would effectively be a new computing platform unto itself.

This decision was questionable in commercial if not technical terms, given how hard it must be to get a new platform accepted in an MS-DOS-dominated marketplace. But VisiCorp then proceeded to make the problem even worse. It would only be possible to program Visi On, they announced, after purchasing an expensive development kit and installing it on a $20,000 DEC PDP-11 minicomputer. They thus opted for an approach similar to one Apple was opting for with the Lisa: to allow that machine to be programmed only by yoking it up to a second Lisa. In thus betraying the original promise of the personal computer as an anything machine which ordinary users could program to do their will, both Visi On and the Lisa operating system arguably removed their hosting hardware from that category entirely, converting it into a closed electronic appliance more akin to a game console. Taxonomical debates aside, the barriers to entry even for one who wished merely to use Visi On to run store-bought applications were almost as steep: when this first MS-DOS-based GUI finally shipped on December 16, 1983, after a long series of postponements, it required a machine with 512 K of memory and a hard drive to run and cost more than $1000 to buy.

Visi On was, as the technology pundits like to say, “ahead of the hardware market.” In quite a number of ways it was actually far more ambitious than what would emerge a month or so after it as the Apple Macintosh. Multiple Visi On applications could be open at the same time (although they didn’t actually run concurrently), and a surprisingly sophisticated virtual-memory system was capable of swapping out pages to hard disk if software tried to allocate more memory than was physically available on the computer. Similar features wouldn’t reach MacOS until 1987’s System 5 and 1991’s System 7 respectively.

In the realm of usability, however, Visi On unquestionably fell down in comparison to Apple’s work. The user interfaces for the Lisa and the Macintosh made almost all the right choices right from the beginning, expanding upon the work done at Xerox PARC in all the right ways. Many of the choices made by VisiCorp, on the other hand, feel far more dubious today — and, one has to believe, not just out of the contempt bred by all those intervening decades of user interfaces modeled on Apple’s. Consider the task of moving and sizing windows on the screen, which was implemented so elegantly on the original Lisa and Macintosh that it’s been changed not at all in all the decades since. While Visi On too allows windows to be sized and placed where you will, and allows them to overlay one another — something by no means true of all of the MS-DOS GUI systems that would follow — doing so is a clumsy process involving picking options out of menus rather than simply dragging title bars or sizing widgets. In fact, Visi On uses no icons whatsoever. For anyone still enamored with the old saw that Apple just ripped off the Xerox PARC interface in its entirety and stuck it on the Lisa and Mac, Visi On, being much more slavishly based on the PARC model, provides an instructive demonstration of how far the likes of the Xerox Alto still was from the intuitive ease of Apple’s interface.

A Quick Tour of Visi On


With mice still exotic creatures, VisiCorp provided their own to work with Visi On. Many other early GUI-makers, Microsoft among them, would follow their lead.

Visi On looks like this upon booting up on an original IBM PC with 640 K of memory and a CGA video card, running in high-resolution monochrome mode at 640 × 200. “Services” is Visi On’s terminology for installed applications. The ones you see listed here, all provided by VisiCorp themselves, are the only ones that would ever exist, thanks to Visi On’s complete commercial failure.

We’ve started up a spreadsheet, a graphing application, and a word processor at the same time. These don’t actually run concurrently, as they would under a true multitasking operating system, but are visible onscreen in their separate windows, becoming active when we click them. (Something similar would not have been possible under MacOS prior to 1987.)

Although Visi On does sport windows that can be sized and placed anywhere and can overlap one another, arranging them is made extremely tedious by its lack of any concept of mouse-dragging; the mouse can only be used for single clicks. So, you have to click the “Frame” menu option and see its instructions through step by step. Note also the lack of pull-down menus, another of Apple’s expansions upon the work done at Xerox PARC. Menus here are just one-shot commands, akin to what a modern GUI user would call a button.

Fortunately, you can make a window full-screen with just a couple of clicks. Unfortunately, you then have to laboriously re-“Frame” it when you want to shrink it again; it doesn’t remember where it used to be.

The lack of a mouse-drag affordance makes the “Transfer” function — Visi On’s version of copy-and-paste — extremely tedious.

And, as with most things in Visi On, transferring data is also slow. Moving that little snippet of text from the word processor to the spreadsheet took about ten seconds.

On the plus side, Visi On sports a help system that’s crazily comprehensive for its time — much more so than the one that would ship with MacOS or, for that matter, Microsoft Windows for quite some years.

As if it didn’t have enough intrinsic problems working against it, extrinsic ones also contrived to undo Visi On in the marketplace. By the time it shipped, VisiCorp was a shadow of what they had so recently been. VisiCalc sales had collapsed over the past year, going from nearly 40,000 units in December of 1982 alone to fewer than 6000 units in December of 1983 in the face of competing products — most notably the burgeoning juggernaut Lotus 1-2-3 — and what VisiCorp described as Software Arts’s failure to provide “timely upgrades” amidst a relationship that was growing steadily more tense. With VisiCorp’s marketplace clout thus dissipating like air out of a balloon, it was hardly the ideal moment for them to ask for the sorts of commitments from users and developers required by Visi On.

The very first MS-DOS-based GUI struggled along with no uptake whatsoever for nine months or so; the only applications made for it were the word processor, spreadsheet, and graphing program VisiCorp made themselves. In September of 1984, with VisiCorp and Software Arts now embroiled in a court battle that would benefit only their competitors, the Visi On technology was sold to a veteran manufacturer of mainframes and supercomputers called Control Data Corporation, who proceeded to do very little if anything with it. VisiCorp went bankrupt soon after, while Lotus bought out Software Arts for a paltry $800,000, thus ending the most dramatic boom-and-bust tale of the early business-software industry. “VisiCorp’s auspicious climb and subsequent backslide,” wrote InfoWorld magazine, “will no doubt become a ‘how-not-to’ primer for software companies of the future.”

Visi On’s struggles may have been exacerbated by the sorry state of its parent company, but time would prove them to be by no means atypical of MS-DOS-based GUI systems in general.  Already in February of 1984, PC Magazine could point to at least four other GUIs of one sort or another in the works from other third-party developers: Concurrent CP/M with Windows by Digital Research, VisuALL by Trillian Computer Corporation, DesqView by Quarterdeck Office Systems, and WindowMaster by Structured Systems. All of these would make different choices in trying to balance the seemingly hopelessly competing priorities of reasonable speed and reasonable hardware requirements, compatibility with MS-DOS applications and compatibility with post-MS-DOS philosophies of computing. None would find the sweet spot. Neither they nor the still more GUI environments that followed them would be able to offer a combination of features, ease of use, and price that the market found compelling, so much so that by 1985 the whole field of MS-DOS GUIs was coming to be viewed with disdain by computer users who had been disappointed again and again. If you wanted a GUI, went the conventional wisdom, buy a Macintosh and live with the paltry software selection and the higher price. The mainstream of business computing, meanwhile, continued to truck along with creaky old MS-DOS, a shaky edifice made still more unstable by all of the hacks being grafted onto it to expand its memory model or to force it to load more than one application at a time. “Windowing and desktop environments are a solution looking for a problem,” said Robert Lefkowits, director of software services for Infocorp, in the fall of 1985. “Users aren’t really looking for any kind of windowing environment to solve problems. Users are not expressing a need or desire for it.”

The reason they weren’t, of course, was because they hadn’t yet seen a GUI in which the pleasure outweighed the pain. Entrenched as users were in the old way of doing things, accepting as they had become of all of MS-DOS’s discontents as simply the way computing was, it was up to software developers to show them why a GUI was something they had never known they couldn’t live without. Microsoft at least, the very people who had saddled their industry with the MS-DOS albatross, were smart enough to realize that mainstream business computing must be remade in the image of the much-scoffed-at Macintosh at some point. Further, they understood that it behooved them to do the remaking if they didn’t want to go the way of VisiCorp. By the time Lefkowits said his words, the long, winding tale of dogged perseverance in the face of failure and frustration that would become the story of Microsoft Windows had already been playing out for several years. One of these days, the GUI was going to make its breakthrough in one way or another, and it was going to do so with a Microsoft logo on its box — even if Bill Gates had to personally ram it down his customers’ throats.

(Sources: the books The Making of Microsoft: How Bill Gates and His Team Created the World’s Most Successful Software Company by Daniel Ichbiah and Susan L. Knepper and Computer Wars: The Fall of IBM and the Future of Global Technology by Charles H. Ferguson and Charles R. Morris; InfoWorld of October 31 1983, November 14 1983, April 2 1984, July 2 1984, and October 7 1985; Byte of June 1983, July 1983; PC Magazine of February 7 1984, and October 2 1984; the episode of the Computer Chronicles television program called “Integrated Software.” Finally, I owe a lot to Nathan Lineback for the histories, insights, comparisons, and images found at his wonderful online “GUI Gallery.”)

Footnotes

1 MS-DOS was known as PC-DOS when sold directly under license by IBM. Its functionality, however, was almost or entirely identical to the Microsoft-branded version. For simplicity’s sake, I will just refer to “MS-DOS” whenever speaking about either product — or, more commonly, both — in the course of this series of articles.


The 640 K Barrier

There was a demon in memory. They said whoever challenged him would lose. Their programs would lock up, their machines would crash, and all their data would disintegrate.

The demon lived at the hexadecimal memory address A0000, 655,360 in decimal, beyond which no more memory could be allocated. He lived behind a barrier beyond which they said no program could ever pass. They called it the 640 K barrier.

— with my apologies to The Right Stuff. (Yes, that is quite possibly the nerdiest thing I’ve ever written.)

The idea that the original IBM PC, the machine that made personal computing safe for corporate America, was a hastily slapped-together stopgap has been vastly overstated by popular technology pundits over the decades since its debut back in August of 1981. Whatever the realities of budgets and scheduling with which its makers had to contend, there was a coherent philosophy behind most of the choices they made that went well beyond “throw this thing together as quickly as possible and get it out there before all these smaller companies corner the market for themselves.” As a design, the IBM PC favored robustness, longevity, and expandability, all qualities IBM had learned the value of through their many years of experience providing businesses and governments with big-iron solutions to their most important data-processing needs. To appreciate the wisdom of IBM’s approach, we need only consider that today, long after the likes of the Commodore Amiga and the original Apple Macintosh architecture, whose owners so loved to mock IBM’s unimaginative beige boxes, have passed into history, most of our laptop and desktop computers — including modern Macs — can trace the origins of their hardware back to what that little team of unlikely business-suited visionaries accomplished in an IBM branch office in Boca Raton, Florida.

But of course no visionary has 20-20 vision. For all the strengths of the IBM PC, there was one area where all the jeering by owners of sexier machines felt particularly well-earned. Here lay a crippling weakness, born not so much of the hardware found in that first IBM PC as the operating system the marketplace chose to run on it, that would continue to vex programmers and ordinary users for two decades, not finally fading away until Microsoft’s release of Windows XP in 2001 put to bed the last legacies of MS-DOS in mainstream computing. MS-DOS, dubbed the “quick and dirty” operating system during the early days of its development, is likely the piece of software in computing history with the most lopsided contrast between the total number of hours put into its development and the total number of hours it spent in use, on millions and millions of computers all over the world. The 640 K barrier, the demon all those users spent so much time and energy battling for so many years, was just one of the more prominent consequences of corporate America’s adoption of such a blunt instrument as MS-DOS as its standard. Today we’ll unpack the problem that was memory management under MS-DOS, and we’ll also examine the problem’s multifarious solutions, all of them to one degree or another ugly and imperfect.


 

The original IBM PC was built around an Intel 8088 microprocessor, a cost-reduced and somewhat crippled version of an earlier chip called the 8086. (IBM’s decision to use the 8088 instead of the 8086 would have huge importance for the expansion buses of this and future machines, but the differences between the two chips aren’t important for our purposes today.) Despite functioning as a 16-bit chip in most ways, the 8088 had a 20-bit address space, meaning it could address a maximum of 1 MB of memory. Let’s consider why this limitation should exist.

Memory, whether in your brain or in your computer, is of no use to you if you can’t keep track of where you’ve put things so that you can retrieve them again later. A computer’s memory is therefore indexed by bytes, with every single byte having its own unique address. These addresses, numbered from 0 to the upper limit of the processor’s address space, allow the computer to keep track of what is stored where. Twenty bits can express 1,048,576 distinct addresses, numbered 0 through 1,048,575, which works out to exactly 1 MB. Thus 1 MB is the maximum amount of memory which the 8088, with its 20-bit address bus, can handle. Such a limitation hardly felt like a deal breaker to the engineers who created the IBM PC. Indeed, it’s difficult to overemphasize what a huge figure 1 MB really was when they released the machine in 1981, in which year the top-of-the-line Apple II had just 48 K of memory and plenty of other competing machines shipped with no more than 16 K.
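A trivial bit of C makes the arithmetic concrete (nothing period-specific here; any C compiler will do):

#include <stdio.h>

int main(void)
{
    unsigned long address_space = 1UL << 20;   /* 2^20 = 1,048,576 bytes: 1 MB        */
    unsigned long barrier       = 0xA0000UL;   /* the demon's address from the intro  */

    printf("20-bit address space: %lu bytes\n", address_space);
    printf("0xA0000 in decimal:   %lu bytes, or %lu K\n",
           barrier, barrier / 1024);           /* 655,360 bytes, or 640 K             */
    return 0;
}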

A processor needs to address other sorts of memory besides the pool of general-purpose RAM which is available for running applications. There’s also ROM memory — read-only memory, burned inviolably into chips — that contains essential low-level code needed for the computer to boot itself up, along with, in the case of the original IBM PC, an always-available implementation of the BASIC programming language. (The rarely used BASIC in ROM would be phased out of subsequent models.) And some areas of RAM as well are set aside from the general pool for special purposes, like the fully 128 K of addresses given to video cards to keep track of the onscreen display in the original IBM PC. All of these special types of memory must be accessed by the CPU, must be given their own unique addresses to facilitate that, and must thus be subtracted from the address space available to the general pool.

IBM’s engineers were quite generous in drawing the boundary between their general memory pool and the area of addresses allocated to special purposes. Focused on expandability and longevity as they were, they reserved big chunks of “special” memory for purposes that hadn’t even been imagined yet. In all, they reserved the upper three-eighths of the available addresses for specialized purposes actual or potential, leaving the lower five-eighths — 640 K — to the general pool. In time, this first 640 K of memory would become known as “conventional memory,” the remaining 384 K — some of which would be ROM rather than RAM — as “high memory.” The official memory map which IBM published upon the debut of the IBM PC looked like this:
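In rough outline, with everything above the 640 K line lumped into a few broad regions, the split looks like the following. (This is a simplified sketch for orientation rather than IBM’s actual published diagram; addresses are hexadecimal.)

/* Simplified address map of the original IBM PC (20-bit addresses) */
#define CONVENTIONAL_START  0x00000UL   /* general-purpose RAM: the 640 K pool          */
#define CONVENTIONAL_END    0x9FFFFUL
#define VIDEO_START         0xA0000UL   /* 128 K of addresses reserved for video cards  */
#define VIDEO_END           0xBFFFFUL
#define EXPANSION_START     0xC0000UL   /* expansion ROMs and other reserved purposes   */
#define EXPANSION_END       0xEFFFFUL
#define SYSTEM_ROM_START    0xF0000UL   /* the BIOS boot code and the ROM BASIC         */
#define SYSTEM_ROM_END      0xFFFFFUL   /* top of the 1 MB address space                */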

It’s important to understand when looking at a memory map like this one that the existence of a logical address therein doesn’t necessarily mean that any physical memory is connected to that address in any given real machine. The first IBM PC, for instance, could be purchased with as little as 16 K of conventional memory installed, and even a top-of-the-line machine had just 256 K, leaving most of the conventional-memory space vacant. Similarly, early video cards used just 32 K or 64 K of the 128 K of address space offered to them in high memory. The 640 K barrier was thus only a theoretical limitation early on, one few early users or programmers ever even noticed.

That blissful state of affairs, however, wouldn’t last very long. As IBM’s creations — joined, soon enough, by lots of clones — became the standard for American business, more and more advanced applications appeared, craving more and more memory alongside more and more processing power. Already by 1984 the 640 K barrier had gone from a theoretical to a very real limitation, and customers were beginning to demand that IBM do something about it. In response, IBM that year released the PC/AT, built around Intel’s new 80286 microprocessor, which boasted a 24-bit address space good for 16 MB of memory. To unlock all that potential extra memory, IBM made the commonsense decision to extend the memory map above the specialized high-memory area that ended at 1 MB, making all addresses beyond 1 MB a single pool of “extended memory” available for general use.

Problem solved, right? Well, no, not really — else this would be a much shorter article. Due more to software than hardware, all of this potential extended memory proved not to be of much use for the vast majority of people who bought PC/ATs. To understand why this should be, we need to examine the deadly embrace between the new processor and the old operating system people were still running on it.

The 80286 was designed to be much more than just a faster version of the old 8086/8088. Developing the chip before IBM PCs running MS-DOS had come to dominate business computing, Intel hadn’t allowed the need to stay compatible with that configuration to keep them from designing a next-generation chip that would help to take computing to where they saw it as wanting to go. Intel believed that microcomputers were at the stage at which the big institutional machines had been a couple of decades earlier, just about ready to break free of what computer scientist Brian L. Stuart calls the “Triangle of Ones”: one user running one program at a time on one machine. At the very least, Intel believed, the second leg of the Triangle must soon fall; everyone recognized that multitasking — running several programs at a time and switching freely between them — was a much more efficient way to do complex work than laboriously shutting down and starting up application after application. But unfortunately for MS-DOS, the addition of multitasking complicates the life of an operating system to an absolutely staggering degree.

Operating systems are of course complex subjects worthy of years or a lifetime of study. We might, however, collapse their complexities down to a few fundamental functions: to provide an interface for the user to work with the computer and manage her programs and files; to manage the various tasks running on the computer and allocate resources among them; and to act as a buffer or interface between applications and the underlying hardware of the computer. That, anyway, is what we expect at a minimum of our operating systems today. But for a computer ensconced within the Triangle of Ones, the second and third functions were largely moot: with only one program allowed to run at a time, resource-management concerns were nonexistent, and, without the need for a program to be concerned about clashing with other programs running at the same time, bare-metal programming — manipulating the hardware directly, without passing requests through any intervening layer of operating-system calls — was often considered not only acceptable but the expected approach. In this spirit, MS-DOS provided just 27 function calls to programmers, the vast majority of them dealing only with disk and file management. (Compare that, my fellow programmers, with the modern Windows or OS X APIs!) For everything else, banging on the bare metal was fine.
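For comparison with the bare-metal snippet earlier, here is what going through one of those sanctioned calls looked like. The sketch again assumes a DOS-era compiler such as Turbo C, whose dos.h supplies the intdosx() helper for issuing INT 21h requests; function 09h, used here, prints a string terminated by a dollar sign.

#include <dos.h>   /* union REGS, struct SREGS, intdosx(), FP_SEG(), FP_OFF() */

/* INT 21h, function 09h: ask DOS to print a '$'-terminated string,
   rather than poking the video memory directly.                     */
void dos_print(const char far *msg)
{
    union REGS  r;
    struct SREGS s;

    r.h.ah = 0x09;            /* AH selects the DOS function      */
    s.ds   = FP_SEG(msg);     /* DS:DX must point at the string   */
    r.x.dx = FP_OFF(msg);
    intdosx(&r, &r, &s);      /* issue the INT 21h call           */
}

int main(void)
{
    dos_print("Hello from MS-DOS$");
    return 0;
}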

We can’t even begin here to address all of the complications that are introduced when we add multitasking into the equation, asking the operating system in the process to fully embrace all three of the core functions listed above. Memory management alone, the one aspect we will look deeper into today, becomes complicated enough. A program which is sharing a machine with other programs can no longer have free run of the memory map, placing whatever it wants to wherever it wants to; to do so risks overwriting the code or data of another program running on the system. Instead the operating system must demand that individual programs formally request the memory they’d like to use, and then must come up with a way to keep a program, whether due to bugs or malice, from running roughshod over areas of memory that it hasn’t been granted.

Or perhaps not. The Commodore Amiga, the platform which pioneered multitasking on personal computers in 1985, didn’t so much solve the latter part of this problem as punt it away. An application program is expected to request from the Amiga’s operating system any memory that it requires. The operating system then returns a pointer to a block of memory of the requested size, and trusts the application not to write to memory outside of these bounds. Yet nothing besides the programmer’s skill and good nature absolutely prevents such unauthorized memory access from happening. Every application on the Amiga, in other words, can write to any address in the machine’s memory, whether that address be properly allocated to it or not. Screen memory, free memory, another program’s data, another program’s code — all are fair game to the errant program. Such unauthorized memory access will almost always eventually result in a total system crash. A non-malicious programmer who wants her program to be a good citizen would of course never intentionally write to memory she hasn’t properly requested, but bugs of this nature are notoriously easy to create and notoriously hard to track down, and on the Amiga a single instance of one can bring down not only the offending program but the entire operating system. With all due respect to the Amiga’s importance as the first multitasking personal computer, this is obviously not the ideal way to implement it.
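Here is roughly what that contract looks like from the programmer’s side, sketched with Exec’s AllocMem() and FreeMem() calls. (Header names follow the standard Amiga includes; the exact setup varies a bit from compiler to compiler.)

#include <exec/types.h>
#include <exec/memory.h>
#include <proto/exec.h>

void demo(void)
{
    /* Politely ask Exec for a 4 K block, zeroed out. */
    UBYTE *buf = (UBYTE *) AllocMem(4096, MEMF_CLEAR);
    if (buf == NULL) return;          /* the request can be refused          */

    buf[0] = 42;                      /* writing inside the block: fine      */

    /* buf[100000] = 42;  ...would compile and run just as happily, landing
       on whatever code or data happens to live at that spot. Nothing in
       the hardware or the operating system stands in its way.               */

    FreeMem(buf, 4096);               /* the caller must remember the size   */
}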

A far more sustainable approach is to take the extra step of tracking and protecting the memory that has been allocated to each program. Memory protection is usually accomplished using  what’s known as virtual memory: when a program requests memory, it’s returned not a true address within the system’s memory pool but rather a virtual address that’s translated back into the real address to which it corresponds every time the program accesses its data. Each program is thus effectively sandboxed from everything else, allowed to read from and write to only its own data. Only the lowest levels of the operating system have global access to the memory pool as a whole.
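Reduced to its essence, the bookkeeping looks something like the toy routine below, which checks and translates a single access. (The sketch uses fixed-size pages for simplicity; the 80286 discussed next actually built its protection around variable-sized segments rather than pages, but the principle is the same.)

#include <stdint.h>
#include <stdbool.h>

#define PAGE_SIZE 4096u
#define NUM_PAGES 256u                 /* a 1 MB virtual space, toy-sized       */

typedef struct {
    bool     present;                  /* is any real memory mapped here?       */
    bool     writable;                 /* may this program write to it?         */
    uint32_t frame;                    /* which physical frame it maps to       */
} PageEntry;

static PageEntry page_table[NUM_PAGES];

/* Translate a virtual address into a physical one, or return -1 to
   signal a protection fault. A real MMU does this in hardware on
   every single memory access.                                        */
long translate(uint32_t vaddr, bool is_write)
{
    uint32_t page   = vaddr / PAGE_SIZE;
    uint32_t offset = vaddr % PAGE_SIZE;

    if (page >= NUM_PAGES || !page_table[page].present) return -1;
    if (is_write && !page_table[page].writable)         return -1;

    return (long)(page_table[page].frame * PAGE_SIZE + offset);
}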

Implementing such memory protection in software alone, however, would have been an untenable drain on the limited processing resources of 1980s hardware — a fact which does much to explain its absence from the Amiga. Intel therefore decided to give software a leg up via hardware. They built into the 80286 a memory-management unit that could automatically translate from virtual to real memory addresses and vice versa, making this constantly ongoing process fairly transparent even to the operating system.

Nevertheless, the operating system must know about this capability, must in fact be written very differently if it’s to run on a CPU with memory protection built into its circuitry. Intel recognized that it would take time for such operating systems to be created for the new chip, and recognized that compatibility with the earlier 8086/8088 chips would be a very good thing to have in the meantime. They therefore built two possible operating modes into the 80286. In “protected mode” — the mode they hoped would eventually come to be used almost universally — the chip’s full potential would be realized, including memory protection and the ability to address up to 16 MB of memory. In “real mode,” the 80286 would function essentially like a turbocharged 8086/8088, with no memory-protection capabilities and with the old limitation on addressable memory of 1 MB still in place. Assuming that in the early days at least the new chip would need to run on operating systems with no knowledge of its full capabilities, Intel made the 80286 default to real mode on startup. An operating system which did know about the 80286 and wanted to bring out its full potential could switch it to protected mode at boot-up and be off to the races.

It’s at the intersection between the 80286 and the operating system that Intel’s grand plans for the future of their new chip went awry. An overwhelming percentage of the early 80286s were used in IBM PC/ATs and clones, and an overwhelming percentage of those machines were running MS-DOS. Microsoft’s erstwhile “quick and dirty” operating system knew nothing of the 80286’s full capabilities. Worse, trying to give it knowledge of those capabilities would have to entail a complete rewrite which would break compatibility with all existing MS-DOS software. Yet the whole reason MS-DOS was popular in the first place — it certainly wasn’t because of a generous feature set, a friendly interface, or any aesthetic appeal — was that very same huge base of business software. Getting users to make the leap to some hypothetical new operating system in the absence of software to run on it would be as difficult as getting developers to write programs for an operating system with no users. It was a chicken-or-the-egg situation, and neither chicken nor egg was about to stick its neck out anytime soon.

IBM was soon shipping thousands upon thousands of PC/ATs every month, and the clone makers were soon shipping even more 80286-based machines of their own. Yet at least 95 percent of those machines were idling along at only a fraction of their potential, thanks to the already creakily archaic MS-DOS. For all these users, the old 640 K barrier remained as high as ever. They could stuff their machines full of extended memory if they liked, but they still couldn’t access it. And of course the multitasking that the 80286 was supposed to have enabled remained as foreign a concept to MS-DOS as a GPS unit to a Model T. The only solution IBM offered those who complained about the situation was to run another operating system. And indeed, there were a number of alternatives to MS-DOS available for the PC/AT and other 80286-based machines, including several variants of the old institutional-computing favorite Unix — one of them even from Microsoft — and new creations like Digital Research’s Concurrent DOS, which struggled with mixed results to wedge in some degree of MS-DOS compatibility. Still, the only surefire way to take full advantage of MS-DOS’s huge software base was to run the real — in more ways than one now! — MS-DOS, and this is what the vast majority of people with 80286-equipped machines wound up doing.

Meanwhile the very people making the software which kept MS-DOS the only viable choice for most users were feeling the pinch of being confined to 640 K more painfully almost by the month. Finally Lotus Corporation —  makers of the Lotus 1-2-3 spreadsheet package that ruled corporate America, the greatest single business-software success story of their era — decided to use their clout to do something about it. They convinced Intel to join them in devising a scheme for breaking the 640 K barrier without abandoning MS-DOS. What they came up with was one mother of an ugly kludge — a description the scheme has in common with virtually all efforts to break through the 640 K barrier.

Looking through the sparsely populated high-memory area which the designers of the original IBM PC had so generously carved out, Lotus and Intel realized it should be possible on almost any extant machine to identify a contiguous 64 K chunk of those addresses which wasn’t being used for anything. This chunk, they decided, would be the gateway to potentially many more megabytes installed elsewhere in the machine. Using a combination of software and hardware, they implemented what’s known as a bank-switching scheme. The 64 K chunk of high-memory addresses was divided into four segments of 16 K, each of which could serve as a lens focused on a 16 K segment of additional memory above and beyond 1 MB. When the processor accessed the addresses in high memory, the data it would actually access would be the data at whatever sections of the additional memory their lenses were currently pointing to. The four lenses could be moved around at will, giving access, albeit in a roundabout way, to however much extra memory the user had installed. The additional memory unlocked by the scheme was dubbed “expanded memory.”  The name’s unfortunate similarity to “extended memory” would cause much confusion over the years to come; from here on, we’ll call it by its common acronym of “EMS.”

All those gobs of extra memory wouldn’t quite come for free: applications would have to be altered to check for the existence of EMS memory and make use of it, and there would remain a distinct difference between conventional memory and EMS memory with which programmers would always have to reckon. Likewise, the overhead of constantly moving those little lenses around made EMS memory considerably slower to access than conventional memory. On the brighter side, though, EMS worked under MS-DOS with only the addition of a single device driver during startup. And, since the hardware mechanism for moving the lenses around was completely external to the CPU, it would even work on machines that weren’t equipped with the new 80286.
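
To make the mechanics a little more concrete, here is a minimal sketch of what driving the EMS manager looked like from a real-mode C program of the period. It assumes a DOS-era compiler such as Turbo C or Microsoft C, which supplied int86() in <dos.h>; the function numbers come from the LIM EMS specification, and a real program would first verify that the EMM driver (the “EMMXXXX0” device) was actually loaded before issuing any INT 67h calls.

    /* A minimal sketch of talking to a LIM EMS driver from real-mode C. */
    #include <dos.h>
    #include <stdio.h>

    #define EMS_INT 0x67

    int main(void)
    {
        union REGS r;
        unsigned page_frame, handle;

        r.h.ah = 0x41;                 /* Get Page Frame Address        */
        int86(EMS_INT, &r, &r);
        if (r.h.ah != 0) { puts("No EMS page frame"); return 1; }
        page_frame = r.x.bx;           /* segment of the 64 K window    */

        r.h.ah = 0x43;                 /* Allocate Pages                */
        r.x.bx = 4;                    /* four 16 K logical pages       */
        int86(EMS_INT, &r, &r);
        if (r.h.ah != 0) { puts("EMS allocation failed"); return 1; }
        handle = r.x.dx;               /* handle identifying our pages  */

        r.h.ah = 0x44;                 /* Map/Unmap Handle Page         */
        r.h.al = 0;                    /* physical page 0 of the window */
        r.x.bx = 0;                    /* ...now shows logical page 0   */
        r.x.dx = handle;
        int86(EMS_INT, &r, &r);

        /* Expanded memory is now visible at segment page_frame:0000;
           moving the "lens" is just another call to function 44h.      */
        printf("EMS page frame at segment %04X\n", page_frame);

        r.h.ah = 0x45;                 /* Deallocate Pages when done    */
        r.x.dx = handle;
        int86(EMS_INT, &r, &r);
        return 0;
    }

Every subsequent read or write to the page frame is an ordinary real-mode memory access; only the remapping calls go through the driver, which is where the overhead described above comes from.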

This diagram shows the different types of memory available on PCs of the mid-1980s. In blue, we see the original 1 MB memory map of the IBM PC. In green, we see a machine equipped with additional extended memory. And in orange we see a machine equipped with additional expanded memory.

Shortly before the scheme made its official debut at a COMDEX trade show in May of 1985, Lotus and Intel convinced a crucial third partner to come aboard: Microsoft. “It’s garbage! It’s a kludge!” said Bill Gates. “But we’re going to do it.” With the combined weight of Lotus, Intel, and Microsoft behind it, EMS took hold as the most practical way of breaking the 640 K barrier. Imperfect and kludgy though it was, software developers hurried to add support for EMS memory to whatever programs of theirs could practically make use of it, while hardware manufacturers rushed EMS memory boards onto the market. EMS may have been ugly, but it was here today and it worked.

At the same time that EMS was taking off, however, extended memory wasn’t going away. Some hardware makers — most notably IBM themselves — didn’t want any part of EMS’s ugliness. Software makers therefore continued to probe at the limits of machines equipped with extended memory, still looking for a way to get at it from within the confines of MS-DOS. What if they momentarily switched the 80286 into protected mode, just for as long as they needed to manipulate data in extended memory, then went back into real mode? It seemed like a reasonable idea — except that Intel, never anticipating that anyone would want to switch modes on the fly like this, had neglected to provide a way to switch an 80286 in protected mode back into real mode. So, proponents of extended memory had to come up with a kludge even uglier than the one that allowed EMS memory to function. They could force the 80286 back into real mode, they realized, by resetting it entirely, just as if the user had rebooted her computer. The 80286 would go through its self-check again — a process that admittedly absorbed precious milliseconds — and then pick back up where it left off. It was, as Microsoft’s Gordon Letwin memorably put it, like “turning off the car to change gears.” It was staggeringly kludgy, it was horribly inefficient, but it worked in its fashion. Given the inefficiencies involved, the scheme was mostly used to implement virtual disks stored in the extended memory, which wouldn’t be subject to the constant access of an application’s data space.

In 1986, the 32-bit 80386, Intel’s latest and greatest chip, made its public bow at the heart of the Compaq Deskpro 386 rather than an IBM machine, a landmark moment signaling the slow but steady shift of business computing’s power center from IBM to Microsoft and the clone makers using their operating system. While working on the new chip, Intel had had time to see how the 80286 was actually being used in the wild, and had faced the reality that MS-DOS was likely destined to be cobbled onto for years to come rather than replaced in its entirety with something better. They therefore made a simple but vitally important change to the 80386 amidst its more obvious improvements. In addition to being able to address an inconceivable total of 4 GB of memory in protected mode thanks to its 32-bit address space, the 80386 could be switched between protected mode and real mode on the fly if one desired, without needing to be constantly reset.

In freeing programmers from that massive inefficiency, the 80386 cracked open the door that much further to making practical use of extended memory in MS-DOS. In 1988, the old EMS consortium of Lotus, Intel, and Microsoft came together once again, this time with the addition to their ranks of the clone manufacturer AST; the absence of IBM is, once again, telling. Together they codified a standard approach to extended memory on 80386 and later processors, which corresponded essentially to the scheme I’ve already described in the context of the 80286, but with a simple command to the 80386 to switch back to real mode replacing the resets. They called it the eXtended Memory Specification; memory accessed in this way soon became known universally as “XMS” memory. Under XMS as under EMS, a new device driver would be loaded into MS-DOS. Ordinary real-mode programs could then call this driver to access extended memory; the driver would do the needful switching to protected mode, copy blocks of data from extended memory into conventional memory or vice versa, then switch the processor back to real mode when it was time to return control to the program. It was still inelegant, still a little inefficient, and still didn’t use the capabilities of Intel’s latest processors in anything like the way Intel’s engineers had intended them to be used; true multitasking still remained a pipe dream somewhere off in a shadowy future. Owners of sexier machines like the Macintosh and Amiga, in other words, still had plenty of reason to mock and scoff. In most circumstances, working with XMS memory was actually slower than working with EMS memory. The primary advantage of XMS was that it let programs work with much bigger chunks of non-conventional memory at one time than the four 16 K chunks that EMS allowed. Whether any given program chose EMS or XMS came to depend on which set of advantages and disadvantages best suited its purpose.
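
By way of illustration, here is a minimal sketch of how a real-mode program located the XMS driver. It assumes a DOS-era compiler such as Turbo C, which supplied int86x(), segread(), and MK_FP(); the function numbers are those of the XMS specification. Everything beyond detection happens through a far call to the returned entry point, with AH selecting the function — 08h to query free extended memory, 09h to allocate a block, 0Bh to move data between conventional and extended memory.

    /* A minimal sketch of finding the XMS driver (HIMEM.SYS) from real-mode C. */
    #include <dos.h>
    #include <stdio.h>

    int main(void)
    {
        union REGS r;
        struct SREGS s;
        void (far *xms_entry)(void);

        r.x.ax = 0x4300;               /* "Is an XMS driver installed?" */
        int86(0x2F, &r, &r);
        if (r.h.al != 0x80) {
            puts("No XMS driver present");
            return 1;
        }

        r.x.ax = 0x4310;               /* Get the driver's entry point  */
        segread(&s);
        int86x(0x2F, &r, &r, &s);
        xms_entry = (void (far *)(void)) MK_FP(s.es, r.x.bx);

        printf("XMS entry point at %04X:%04X\n", s.es, r.x.bx);

        /* From here, a real program would set AH (and DX, etc.) and
           far-call xms_entry, typically via a few lines of inline
           assembly, to allocate and move extended-memory blocks.       */
        return 0;
    }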

The arrival of XMS along with the ongoing use of EMS memory meant that MS-DOS now had two competing memory-management solutions. Buyers now had to figure out not only whether they had enough extra memory to run a program but whether they had the right kind of extra memory. Ever accommodating, hardware manufacturers began shipping memory boards that could be configured as either EMS or XMS memory — whatever the application you were running at the moment happened to require.

The next stage in the slow crawl toward parity with other computing platforms in the realm of memory management would be the development of so-called “DOS extenders,” software to allow applications themselves to run in protected mode, thus giving them direct access to extended memory without having to pass their requests through an inefficient device driver. An application built using a DOS extender would only need to switch the processor to real mode when it needed to communicate with the operating system. The development of DOS extenders was driven by Microsoft’s efforts to turn Windows, which like seemingly everything else in business computing ran on top of MS-DOS, into a viable alternative to the command line and a viable challenger to the Macintosh. That story is thus best reserved for a future article, when we look more closely at Windows itself. As it is, the story that I’ve told so far today moves us nicely into the era of computer-gaming history we’ve reached on the blog in general.

In said era, the MS-DOS machines that had heretofore been reserved for business applications were coming into homes, where they were often used to play a new generation of games taking advantage of the VGA graphics, sound cards, and mice sported by the latest systems. Less positively, all of the people wanting to play these new games had to deal with the ramifications of a 640 K barrier that could still be skirted only imperfectly. As we’ve seen, both EMS and XMS imposed to one degree or another a performance penalty when accessing non-conventional memory. What with games being the most performance-sensitive applications of all, that made that first 640 K of lightning-fast conventional memory most precious of all for them.

In the first couple of years of MS-DOS’s gaming dominance, developers dealt with all of the issues that came attached to using memory beyond 640 K by the simple expedient of not using any memory beyond 640 K. But that solution was compatible neither with developers’ growing ambitions for their games nor with the gaming public’s growing expectations of them.

The first harbinger of what was to come was Origin Systems’s September 1990 release Wing Commander, which in its day was renowned — and more than a little feared — for pushing the contemporary state of the art in hardware to its limits. Even Wing Commander didn’t go so far as to absolutely require memory beyond 640 K, but it did use it to make the player’s audiovisual experience snazzier if it was present. Setting a precedent future games would largely follow, it was quite inflexible in its approach, demanding EMS — as opposed to XMS — memory. In the future, gamers would have to become all too familiar with the differences between the two standards, and how to configure their machines to use one or the other. Setting another precedent, Wing Commander‘s “installation guide” included a section on “memory usage” that was required reading in order to get things working properly. In the future, such sections would only grow in length and complexity, and would need to be pored over by long-suffering gamers with far more concentrated attention than anything in the manual having anything to do with how to actually play the games they purchased.

In Accolade’s embarrassing Leisure Suit Larry knockoff Les Manley in: Lost in LA, the title character explains EMS and XMS memory to some nubile companions. The ironic thing was that anyone who wished to play the latest games on an MS-DOS machine really did need to know this stuff, or at least have a friend who did.

Thus began the period of almost a decade, remembered with chagrin but also often with an odd sort of nostalgia by old-timers today, in which gamers spent hours monkeying about with MS-DOS’s “config.sys” and “autoexec.bat” files and swapping in and out various third-party utilities in the hope of squeezing out that last few kilobytes of conventional memory that Game X needed to run. The techniques they came to employ were legion.

In the process of developing Windows, Microsoft had discovered that the kernel of MS-DOS itself, a fairly tiny program thanks to its sheer age, could be stashed into the first 64 K of memory beyond 1 MB and still accessed like conventional memory on an 80286 or later processor in real mode thanks to what was essentially an undocumented technical glitch in the design of those processors. Gamers thus learned to include the line “DOS=HIGH” in their configuration files, freeing up a precious block of conventional memory. Likewise, there was enough unused space scattered around in the 384 K of high memory on most machines to stash many or all of MS-DOS’s device drivers there instead of in conventional memory. Thus “DOS=HIGH” soon became “DOS=HIGH,UMB,” the second parameter telling the computer to make use of these so-called “upper-memory blocks” and thereby save that many kilobytes more.
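
As an illustrative sketch rather than a recipe — the driver paths and parameters here are hypothetical, and the right incantation varied from machine to machine — a memory-tuned CONFIG.SYS on a 386 running MS-DOS 5 or later might have looked something like this:

    DEVICE=C:\DOS\HIMEM.SYS
    DEVICE=C:\DOS\EMM386.EXE RAM
    DOS=HIGH,UMB
    DEVICEHIGH=C:\DOS\SETVER.EXE
    DEVICEHIGH=C:\DRIVERS\MOUSE.SYS
    FILES=30
    BUFFERS=20

HIMEM.SYS provides the XMS services and the block of memory just above 1 MB that DOS=HIGH relies on, while EMM386.EXE uses the 386’s hardware to simulate EMS out of extended memory and to open up the upper-memory blocks that the DEVICEHIGH lines then fill.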

These were the most basic techniques, the starting points. Suffice to say that things got a lot more complicated from there, turning into a baffling tangle of tweaks, some saving mere bytes rather than kilobytes of conventional memory, but all of them important if one was to hope to run games that by 1993 would be demanding 604 K of 640 K for their own use. That owners of machines which by that point typically contained memories in the multi-megabytes should have to squabble with the operating system over mere handfuls of bytes was made no less vexing by being so comically absurd. And every new game seemed to up the ante, seemed to demand that much more conventional memory. Those with a sunnier disposition or a more technical bent of mind took the struggle to get each successive purchase running as the game before the game got started, as it were. Everyone else gnashed their teeth and wondered for the umpteenth time if they might not have been better off buying a console where games Just Worked. The only thing that made it all worthwhile was the mixture of relief, pride, and satisfaction that ensued when you finally got it all put together just right and the title screen came up and the intro music sprang to life — if, that is, you’d managed to configure your sound card properly in the midst of all your other travails. Such was the life of the MS-DOS gamer.

Before leaving the issue of the 640 K barrier behind in exactly the way that all those afflicted by it for so many years were so conspicuously unable to do, we have to address Bill Gates’s famous claim, allegedly made at a trade show in 1981, that “640 K ought to be enough for anybody.” The quote has been bandied about for years as computer-industry legend, seeming to confirm as it does the stereotype of Bill Gates as the unimaginative dirty trickster of his industry, as opposed to Steve Jobs the guileless visionary (the truth is, needless to say, far more complicated). Sadly for the stereotypers, however, the story of the quote is similar to all too many legends in the sense that it almost certainly never happened. Gates himself, for one, vehemently denies ever having said any such thing. Fred Shapiro, for another, editor of The Yale Book of Quotations, conducted an exhaustive search for a reputable source for the quote in 2008, going so far as to issue a public plea in The New York Times for anyone possessing knowledge of such a source to contact him. More than a hundred people did so, but none of them could offer up the smoking gun Shapiro sought, and he was left more certain than ever that the comment was “apocryphal.” So, there you have it. Blame Bill Gates all you want for the creaky operating system that was the real root cause of all of the difficulties I’ve spent this article detailing, but don’t ever imagine he was stupid enough to say that. “No one involved in computers would ever say that a certain amount of memory is enough for all time,” said Gates in 2008. Anyone doubting the wisdom of that assertion need only glance at the history of the IBM PC.

(Sources: the books Upgrading and Repairing PCs, 3rd edition by Scott Mueller and Principles of Operating Systems by Brian L. Stuart; Computer Gaming World of June 1993; Byte of January 1982, November 1984, and March 1992; Byte‘s IBM PC special issues of Fall 1985 and Fall 1986; PC Magazine of May 14 1985, January 14 1986, May 30 1989, June 13 1989, and June 27 1989; the episode of the Computer Chronicles television show entitled “High Memory Management”; the online article “The ‘640K’ quote won’t go away — but did Gates really say it?” on Computerworld.)

Footnotes

1 Yes, that is quite possibly the nerdiest thing I’ve ever written.

A Slow-Motion Revolution

CD-ROM

A quick note on terminology before we get started: “CD-ROM” can be used to refer either to the use of CDs as a data-storage format for computers in general or to the Microsoft-sponsored specification for same. I’ll be using the term largely in the former sense in the introduction to this article, in the latter after something called “CD-I” enters the picture. I hope the point of transition won’t be too hard to identify, but my apologies if this leads to any confusion. Sometimes this language of ours is a very inexact thing.



In the first week of March 1986, much of the computer industry converged on Seattle for the first annual Microsoft CD-ROM Conference. Microsoft had anticipated about 500 to 600 attendees to the four-day event. Instead more than 1000 showed up, forcing the organizers to turn many of them away at the door of a conference center that by law could only accommodate 800 people. Between the presentations on CD-ROM’s bright future, the attendees wandered through an exhibit hall showcasing the format’s capabilities. The hit of the hall was what was about to become the first CD-ROM product ever to be made available for sale to the public, consisting of the text of all 21 volumes of Grolier’s Academic American Encyclopedia, some 200 MB in all, on a single disc. It was to be published by KnowledgeSet, a spinoff of Digital Research. Digital’s founder Gary Kildall, apparently forgiving Bill Gates his earlier trespasses in snookering a vital IBM contract out from under his nose, gave the conference’s keynote address.

Kildall’s willingness to forgive and forget in light of the bright optical-storage future that stood before the computer industry seemed very much in harmony with the mood of the conference as a whole. Sentiments often verged on the utopian, with talk of a new “paperless society” abounding, a revolution to rival that of Gutenberg. “The compact disc represents a major discontinuity in the cost of producing and distributing information,” said one Ed Schmid of DEC. “You have to go back to the invention of movable type and the printing press to find something equivalent.” The enthusiasm was so intense and the good vibes among the participants — many of them, like Gates and Kildall, normally the bitterest of enemies — so marked that some came to call the conference “the computer industry’s Woodstock.” If the attendees couldn’t quite smell peace and love in the air, they certainly could smell potential and profit.

All the excitement came down to a single almost unbelievable number: the 650 MB of storage offered by every tiny, inexpensive-to-manufacture compact disc. It’s very, very difficult to fully convey in our current world of gigabytes and terabytes just how inconceivably huge a figure 650 MB actually was in 1986, a time when a 40 MB hard drive was a cavernous, how-can-I-ever-possibly-fill-this-thing luxury found on only the most high-end computers. For developers who had been used to making their projects fit onto floppy disks boasting less than 1 MB of space, the idea of CD-ROM sounded like winning the lottery several times over. You could put an entire 21-volume encyclopedia on one of the things, for Pete’s sake, and still have more than two-thirds of the space left over! Suddenly one of the most nail-biting constraints against which they had always labored would be… well, not so much eased as simply erased. After all, how could anything possibly fill 650 MB?

And just in case that wasn’t enough great news, there was also the fact that the CD was a read-only format. If the industry as a whole moved to CD-ROM as its format of choice, the whole piracy problem, which organizations like the Software Publishers Association ardently believed was costing the industry billions every year, would dry up and blow away like a dandelion in the fall. Small wonder that the mood at the conference sometimes approached evangelistic fervor. Microsoft, as swept away with it all as anyone, published a collection of the papers that were presented there under the very non-businesslike, non-Microsoft-like title of CD-ROM: The New Papyrus. The format just seemed to demand a touch of rhapsodic poetry.

But the rhapsody wasn’t destined to last very long. The promised land of a software industry built around the effectively unlimited storage capacity of the compact disc would prove infuriatingly difficult to reach; the process of doing so would stretch over the better part of a decade, by the end of which time the promised land wouldn’t seem quite so promising anymore. Throughout that stretch, CD-ROM was always coming in a year or two, always the next big thing right there on the horizon that never quite arrived. This situation, so antithetical to the usual propulsive pace of computer technology, was brought about partly by limitations of the format itself which were all too easy to overlook amid the optimism of that first conference, and partly by a unique combination of external factors that sometimes almost seemed to conspire, perfect-storm-like, to keep CD-ROM out of the hands of consumers.



The compact disc was developed as a format for music by a partnership of the Dutch electronics giant Philips and the Japanese Sony during the late 1970s. Unlike the earlier analog laser-disc format for the storage of video, itself a joint project of Philips and the American media conglomerate MCA, the CD stored information digitally, as long strings of ones and zeros to be passed through digital-to-analog converters and thus turned into rich stereo sound. Philips and Sony published the final specifications for the music CD in 1980, opening up to others who wished to license the technology what would become known as the “Red Book” standard after the color of the binder in which it was described. The first consumer-oriented CD players began to appear in Japan in 1982, in the rest of the world the following year. Confined at first to the high-end audiophile market, by the time of that first Microsoft CD-ROM Conference in 1986 the CD was already well on its way to overtaking the record album and, eventually, the cassette tape to become the most common format for music consumption all over the world.

There were good reasons for the CD’s soaring popularity. Not only did CDs sound better than all but perhaps the most expensive audiophile turntables, with a complete absence of hiss or surface noise, but, given that nothing actually touched the surface of a disc when it was being played, they could effectively last forever, no matter how many times you listened to them; “Perfect sound forever!” ran the tagline of an early CD advertising campaign. Then there was the way you could find any song you liked on a CD just by tapping a few buttons, as opposed to trying to drop a stylus on a record at just the right point or rewind and fast-forward a cassette to just the right spot. And then there was the way that CDs could be carried around and stored so much more easily than a record album, plus the way they could hold up to 75 minutes’ worth of music, enough to pack many double vinyl albums onto a single CD. Throw in the lack of a need to change sides to listen to a full album, and seldom has a new media format appeared that was so clearly better than the existing formats in almost all respects.

It didn’t take long for the computer industry to come to see the CD format, envisioned originally strictly as a music medium, as a natural one to extend to other types of data storage. Where the rubber met the road — or the laser met the platter — a CD player was just a mechanism for reading bits off the surface of the disc and sending them on to some other circuitry that knew what to do with them. This circuitry could just as easily be part of a computer as a stereo system.

Such a sanguine view was perhaps a bit overly reductionist. When one started really delving into the practicalities of the CD as a format for data storage, one found a number of limitations, almost all of them drawn directly from the technology’s original purpose as a music-delivery solution. For one thing, CD drives were only capable of reading data off a disc at a rate of 153.6 K per second, this figure corresponding not coincidentally to the speed required to stream standard CD sound for real-time playback.[1] Such a throughput was considered pretty good but hardly breathtaking by mid-1980s hard-disk standards; an average 10 MB hard drive of the period might have a transfer rate of about 96 K per second, although high-performance drives could triple or even quadruple that figure.
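
For those who like to see the arithmetic, both figures fall straight out of the Red Book’s own parameters: 44,100 samples per second of 16-bit stereo audio, delivered as 75 sectors per second, of which a Mode 1 data sector keeps 2,048 bytes for user data once the extra error correction is accounted for. A trivial C snippet makes the point:

    /* Back-of-the-envelope check of the CD's native data rates. */
    #include <stdio.h>

    int main(void)
    {
        long audio = 44100L * 2 * 2;   /* samples/s x channels x bytes/sample = 176,400 bytes/s */
        long mode1 = 75L * 2048;       /* sectors/s x user bytes per sector   = 153,600 bytes/s */
        printf("Audio stream: %ld bytes/s (the footnote's ~172.3 K)\n", audio);
        printf("Mode 1 data:  %ld bytes/s (the 153.6 K quoted above)\n", mode1);
        return 0;
    }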

More problematic was a CD drive’s atrocious seek speed — i.e., the speed at which files could be located for reading on a disc. An average 10 MB hard disk of 1986 had a typical seek time of about 100 milliseconds, a worst-case-scenario maximum of about 200 — although, again, high-performance models could improve on those figures by a factor of four. A CD drive, by contrast, had a typical seek time of 500 milliseconds, a maximum of 1000  — one full second. The designers of the music CD hadn’t been particularly concerned by the issue, for a music-CD player would spend the vast majority of its time reading linear streams of sound data. On those occasions when the user did request a certain track found deeper on the disc, even a full second spent by the drive in seeking her favorite song would hardly be noticed unduly, especially in comparison to the pain of trying to find something on a cassette or a record album. For storage of computer data, however, the slow seek speed gave far more cause for concern.

The Laser Magnetic Storage LaserDrive is typical of the oddball formats that proliferated during the early years of optical data storage. It could hold 1 GB on each side of a double-sided disc. Unfortunately, each disc cost hundreds of dollars, the unit itself thousands.

Given these issues of performance, which promised only to get more marked in comparison to hard drives as the latter continued to get faster, one might well ask why the industry was so determined to adapt the music CD specifically to data storage rather than using Philips and Sony’s work as a springboard to another optical format with affordances more suitable to the role. In fact, any number of companies did choose the latter course, developing optical formats in various configurations and capacities, many even offering the ability to write to as well as read from the disc. (Such units were called “WORM” drives, for “Write Once Read Many”; data, in other words, could be written to their discs, but not erased or rewritten thereafter.) But, being manufactured in minuscule quantities as essentially bespoke items, all such efforts were doomed to be extremely expensive.

The CD, on the other hand, had the advantage of an existing infrastructure dedicated to stamping out the little silver discs and filling them with data. At the moment, that data consisted almost exclusively of encoded music, but the process of making the discs didn’t care a whit what the ones and zeros being burned into them actually represented. CD-ROM would allow the computer industry to piggy-back on an extant, mature technology that was already nearing ubiquity. That was a huge advantage when set against the cost of developing a new format from scratch and setting up a similar infrastructure to turn it out in bulk — not to mention the challenge of getting the chaotic, hyper-competitive computer industry to agree on another format in the first place. For all these reasons, there was surprisingly little debate on whether adapting the music CD to the purpose of data storage was really the best way to go. For better or for worse, the industry hitched its wagon to the CD; its infelicities as a general-purpose data-storage solution would just have to be worked around.

One of the first problems to be confronted was the issue of a logical file format for CD-ROM. The physical layout of the bits on a data CD was largely dictated by the design of the platters themselves and the machinery used to burn data into them. Yet none of that existing infrastructure had anything to say about how a filesystem appropriate for use with a computer should work within that physical layout. Microsoft, understanding that a certain degree of inter-operability was a valuable thing to have even among the otherwise rival platforms that might wind up embracing CD-ROM, pushed early for a standardized logical format. As a preliminary step on the road to that landmark first CD-ROM Conference, they brought together a more intimate group of eleven other industry leaders at the High Sierra Resort and Casino in Lake Tahoe in November of 1985 to hash out a specification. Among those present were Philips, Sony, Apple, and DEC; notably absent was IBM, a clear sign of Microsoft’s growing determination to step out of the shadow of Big Blue and start dictating the direction of the industry in their own right. The so-called “High Sierra” format would be officially published in finalized form in May of 1986.

In the run-up to the first Microsoft CD-ROM Conference, then, everything seemed to be coming together nicely. CD-ROM had its problems, but virtually everyone agreed that it was a tremendously exciting development. For their part, Microsoft, driven by a Bill Gates who was personally passionate about the format and keenly aware that his company, the purveyor of clunky old MS-DOS, needed for reasons of public relations if nothing else a cutting-edge project to rival any of Apple’s, had established themselves as the driving force behind the nascent optical revolution. And then, just five days before the conference was scheduled to convene — timing that struck very few as accidental — Philips injected a seething ball of chaos into the system via something called CD-I.

CD-I was a different, competing file format for CD data storage. But CD-I was also much, much more. Excited by the success the music CD had enjoyed, Philips, with the tacit support of Sony, had decided to adapt the format into the all-singing, all-dancing, all-around future of home entertainment in the abstract. Philips would be making a CD-I box for the home, based on a minimalist operating system called OS-9 running on a Motorola 68000 processor. But this would be no typical home computer; the user would be able to control CD-I entirely using a VCR-style remote control. CD-I was envisioned as the interactive television of the future, a platform for not only conventional videogames but also lifestyle products of every description, from interactive astronomy lessons to the ultimate in exercise tapes. Philips certainly wasn’t short of ideas:

Think of owning an encyclopedia which presents chosen topics in several different ways. Watching a short audio/video sequence to gain a general background to the topic. Then choosing a word or subject for more in-depth study. Jumping to another topic without losing your place — and returning again after studying the related topic to proceed further. Or watching a cartoon film, concert, or opera with the interactive capabilities of CD-I added. Displaying the score, libretto, or text onscreen in a choice of languages. Or removing one singer or instrument to be able to sing along with the music.

Just as they had with the music CD, Philips would license the specifications to whoever else wanted to make gadgets of their own capable of playing the CD-I discs. They declared confidently that there would be as many CD-I players in the world as phonographs within a few years of the format’s debut, that “in the long run” CD-I “could be every bit as big as the CD-audio market.”

Already at the Microsoft CD-ROM Conference, Philips began aggressively courting developers in the existing computer-games industry to embrace CD-I. Plenty of them were more than happy to do so. Despite the optimism that dominated at the conference, it wasn’t clear how much priority Microsoft, who earned the vast majority of their money from business computing, would really give to more consumer-focused applications of CD-ROM like gaming. Philips, on the other hand, was a giant of consumer electronics. While they paid due lip service to applications of CD-I in areas like corporate training, it was always clear that it would be first and foremost a technology for the living room, one that comprehensively addressed what most believed was the biggest factor limiting the market for conventional computer games: that the machines that ran them were just too fiddly to operate. At the time that CD-I was first announced, the videogame console was almost universally regarded as a dead fad; the machine that would so dramatically reverse that conventional wisdom, the Nintendo Entertainment System, was still an oddball upstart being sold in selected markets only. Thus many game makers saw CD-I as their only viable route out of the back bedroom and into the living room — into the mainstream of home entertainment.

So, when Philips spoke, the game developers listened. Many publishers, including big powerhouses like Activision as well as smaller boutique houses like the 68000 specialists Aegis Development, committed to CD-I projects during 1986, receiving in return a copy of the closely guarded “Green Book” that detailed the inner workings of the system. There was no small pressure to get in on the action quickly, for Philips was promising to ship the first finished CD-I units in time for the Christmas of 1987. Trip Hawkins of Electronic Arts made CD-I a particular priority, forming a whole new in-house development division for the platform. He’d been waiting for a true next-generation mainstream game machine for years. At first, he’d thought the Commodore Amiga would be that machine, but Commodore’s clueless marketing and the Amiga’s high price were making such an outcome look less and less likely. So now he was looking to CD-I, which promised graphics and sound as good as those of the Amiga, along with the all but infinite storage of the unpirateable CD format, and all in a tidy, inexpensive package designed for the living room. What wasn’t to like? He imagined Silicon Valley becoming “the New Hollywood,” imagined a game like Electronic Arts’s hit Starflight remade as a CD-I experience.

You could actually do it just like a real movie. You could hire a costume designer from the movie business, and create special-effects costumes for the aliens. Then you’d videotape scenes with the aliens, and have somebody do a soundtrack for the voices and for the text that they speak in the game.

Then you’d digitize all of that. You could fill up all the space on the disc with animated aliens and interesting sounds. You would also have a universe that’s a lot more interesting to look at. You might have an out-of-the-cockpit view, like Star Trek, with planets that look like planets — rotating, with detailed zooms and that sort of thing.

Such a futuristic vision seemed thoroughly justifiable based on Philips’s CD-I hype, which promised a rich multimedia environment combining CD-quality stereo sound with full-motion video, all at a time when just displaying a photo-realistic still image captured from life on a computer screen was considered an amazing feat. (Among extant personal computers, only the Amiga could manage it.) When developers began to dive into the Green Book, however, they found the reality of CD-I often sharply at odds with the hype. For instance, if you decided to take advantage of the CD-quality audio, you had to tie up the CD drive entirely to stream it, meaning you couldn’t use it to fetch pictures or video or anything else for this supposed rich multimedia environment.

Video playback became an even bigger sore spot that echoed back to those fundamental limitations that had been baked into the CD when it was regarded only as a medium for music delivery. A transfer rate of barely 150 K per second just wasn’t much to work with in terms of streaming video. Developers found themselves stymied by an infuriating Catch-22. If you tried to work with an uncompressed or only modestly compressed video format, you simply couldn’t read it off the disk fast enough to display it in real-time. Yet if you tried to use more advanced compression techniques, it became so expensive in terms of computation to decompress the data that the CD-I unit’s 68000 CPU couldn’t keep up. The best you could manage was to play video snippets that only filled a quarter of the screen — not a limitation that felt overly compatible with the idea of CD-I as the future of home entertainment in the abstract. It meant that a game like the old laser-disc-driven arcade favorite Dragon’s Lair, the very sort of thing people tended to think of first when you mentioned optical storage in the context of entertainment, would be impossible with CD-I. The developers who had signed contracts with Philips and committed major resources to CD-I could only soldier on and hope the technology would continue to evolve.

By 1987, then, the CD as a computer format had been split into two camps. While the games industry had embraced CD-I, the powers that were in business computing had jumped aboard the less ambitious, Microsoft-sponsored standard of CD-ROM, which solved issues like the problematic video playback of CD-I by the simple expedient of not having anything at all to say about them. Perhaps the most impressive of the very early CD-ROM products was the Microsoft Bookshelf, which combined Roget’s Thesaurus, The American Heritage Dictionary, The Chicago Manual of Style, The World Almanac and Book of Facts, and Bartlett’s Familiar Quotations alongside spelling and grammar checkers, a ZIP Code directory, and a collection of forms and form letters, all on a single disc — as fine a demonstration of the potential of the new format as could be imagined short of all that rich multimedia that Philips had promised. Microsoft proudly noted that Bookshelf was their largest single product ever in terms of the number of bits it contained and their smallest ever in physical size. Nevertheless, with most drives costing north of $1000 and products to use with them like Microsoft Bookshelf hundreds more, CD-ROM remained a pricey proposition found in vanishingly few homes — and for that matter not in all that many businesses either.

But at least actual products were available in CD-ROM format, which was more than could be said for CD-I. As 1986 turned into 1987, developers still hadn’t received any CD-I hardware at all, being forced to content themselves with printed specifications and examples of the system in action distributed on videotape by Philips. Particularly for a small company like Aegis, which had committed heavily to a game based on Jules Verne’s 20,000 Leagues Under the Sea, for which they had recruited Jim Sachs of Defender of the Crown fame as illustrator, it was turning into a potentially dangerous situation.

The computer industry — even those parts of it now more committed to CD-I than CD-ROM — dutifully came together once again for the second Microsoft CD-ROM Conference in March of 1987. In contrast to the unusual Pacific Northwest sunshine of the previous conference, the weather this year seemed to match the more unsettled mood: three days of torrential downpour. It was a more skeptical and decidedly less Woodstock-like audience who filed into the auditorium one day for a presentation by as unlikely a party as the venerable old American conglomerate General Electric. But in the course of that presentation, the old rapture came back in a hurry, culminating in a spontaneous standing ovation. What had so shocked and amazed the audience was the impossible made real: full-screen video running in real-time off a CD drive connected to what to all appearances was an ordinary IBM PC/AT computer. Digital Video Interactive, or DVI, had just made its dramatic debut.

DVI’s origins dated back to 1983, when engineer Larry Ryan of another old-school American company, RCA, had been working on ways to make the old analog laser-disc technology more interactive. Growing frustrated with the limitations he kept bumping against, he proposed to his bosses that RCA dump the laser disc from the equation entirely and embrace digital optical storage. They agreed, and a new project on those lines was begun in 1984. It was still ongoing two years later — just reaching the prototype stage, in fact — when General Electric acquired RCA.

DVI worked by throwing specialized hardware at the problem which Philips had been fruitlessly trying to solve via software alone. By using ultra-intensive compression techniques, it was possible to crunch video playing at a resolution of 256 X 240 — not an overwhelming resolution even by the standards of the day, but not that far below the practical resolution of a typical television set either — down to a size below 153.6 K per second of footage without losing too much quality. This fact was fairly well-known, not least to Philips. The bottleneck had always been the cost of decompressing the footage fast enough to get it onto the screen in real time. DVI attacked this problem via a hardware add-on that consisted principally of a pair of semi-autonomous custom chips designed just for the task of decompressing the video stream as quickly as possible. DVI effectively transformed the potential 75 minutes of sound that could be stored on a CD into 75 minutes of video.
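
Some rough numbers show why brute-force hardware was called for. Assuming, purely for illustration, 8 bits per pixel and 30 frames per second (the actual DVI pixel formats varied), uncompressed video at that resolution amounts to well over a megabyte and a half every second, all of which had to be squeezed through a CD drive’s 153.6 K per second and decoded without dropping frames:

    /* A rough illustration of the bottleneck DVI's custom chips existed
       to solve. The pixel depth and frame rate are illustrative
       assumptions, not the actual DVI format. */
    #include <stdio.h>

    int main(void)
    {
        long raw = 256L * 240 * 1 * 30;   /* pixels x bytes/pixel x frames/s = 1,843,200 bytes/s */
        long cd  = 153600L;               /* what a 1X CD drive can deliver */
        printf("Uncompressed video: %ld bytes/s\n", raw);
        printf("CD transfer rate:   %ld bytes/s\n", cd);
        printf("Compression needed: about %ld:1, decoded in real time\n", raw / cd);
        return 0;
    }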

Philosophically, the design bore similarities to the Amiga’s custom chips — similarities which became even more striking when you considered some of the other capabilities that came almost as accidental byproducts of the design. You could, for instance, overlay conventional graphics onto the streaming video by using the computer’s normal display circuitry in conjunction with DVI, just as you could use an Amiga to overlay titles and other graphics onto a “genlocked” feed from a VCR or other video source. But the difference with DVI was that it required no complicated external video source at all, just a CD in the computer’s CD drive. The potential for games was obvious.

In this demonstration of DVI’s potential, the user can explore an ancient Mayan archeological site that’s depicted using real-world video footage, while the icons used as controls are traditional computer graphics.

Still, DVI’s dramatic debut barely ended before the industry’s doubts began. It seemed clear enough that DVI was technically better than CD-I, at least in the hugely important area of video playback, but General Electric — hardly anyone’s idea of a nimble innovator — offered as yet no clear road map for the technology, no hint of what they really planned to do with it. Should game developers place their CD-I projects on hold to see if something better really was coming in the form of DVI, or should they charge full speed ahead and damn the torpedoes? Some did one, some did the other; some made halfhearted commitments to both technologies, some vacillated between them.

But worst of all was the effect that DVI had on Philips. They were thrown into a spin by that presentation from which they never really recovered. Fearful of getting their clock cleaned in the marketplace by a General Electric product based on DVI, Philips stopped CD-I in its tracks, demanding that a way be found to make it do full-screen video as well. From an original plan to ship the first finished CD-I units in time for Christmas 1987, the timetable slipped to promise the first prototypes for developers by January of 1988. Then that deadline also came and went, and all that developers had received were software emulators. Now the development prototypes were promised by summer 1988, finished units expected to ship in 1989. The delay notwithstanding, Philips still confidently predicted sales in “the tens of millions.” But then world domination was delayed again until 1990, then 1991.

Prototype CD-I units finally began reaching developers in early 1989, years behind schedule.

Wanting CD-I to offer the best of everything, Philips let the project chase its own tail for years, trying to address every actual or potential innovation from every actual or potential rival. The game publishers who had jumped aboard with such enthusiasm in the early days were wracked with doubt upon the announcement of each successive delay. Should they jump off the merry-go-round now and cut their losses, or should they stay the course in the hope that CD-I finally would turn into the revolutionary product Philips had been promising for so long? To this day, you merely have to mention CD-I to even the most mild-mannered old games-industry insider to be greeted with a torrent of invective. Philips’s merry-go-round cost the industry dearly. Some smaller developers who had trusted Philips enough to bet their very survival on CD-I paid the ultimate price. Aegis, for example, went out of business in 1990 with CD-I still vaporware.

While CD-I chased its tail, General Electric, the unwitting instigators of all this chaos, tried to decide in their slow, bureaucratic way what to do with this DVI thing they’d inherited. Thus things were as unsettled as ever on the CD-I and DVI fronts when the third Microsoft CD-ROM Conference convened in March of 1988. The old plain-Jane CD-ROM format, however, seemed still to be advancing slowly but steadily. Certainly Microsoft appeared to be in fine fettle; harking back to the downpour that had greeted the previous year’s conference, they passed out oversized gold umbrellas to everyone — emblazoned, naturally, with the Microsoft logo in huge type. They could announce at their conference that the High Sierra logical format for CD-ROM had been accepted, with some modest modifications to support languages other than English, by the International Standards Organization as something that would henceforward be known as “ISO 9660.” (It remains the standard logical format for CD-ROM to this day.) Meanwhile Philips and Sony were about to begrudgingly codify the physical format for CD-ROM, extant already as a de facto standard for several years now, as the Yellow Book, latest addition to a library of binders that was turning into quite the rainbow. Apple, who had previously been resistant to CD-ROM, driven as it was by their arch-rival Microsoft, showed up with an official CD-ROM drive for a Macintosh or even an Apple II, albeit at a typically luxurious Apple price of $1200. Even IBM showed up for the conference this time, albeit with a single computer attached to a non-IBM CD-ROM drive and a carefully noncommittal official stance on all this optical evangelism.

As CD-ROM gathered momentum, the stories of DVI and CD-I alike were already beginning to peter out in anticlimax. After doing little with DVI for eighteen long months, General Electric finally sold it to Intel at the end of 1988, explaining that DVI just “didn’t mesh with [their] strategic plans.” Intel began shipping DVI setups to early adopters in 1989, but they cost a staggering $20,000 — a long, long way from a reasonable consumer price point. DVI continued to lurch along into the 1990s, but the price remained too high. Intel, possessed of no corporate tradition of marketing directly to consumers, often seemed little more motivated to turn DVI into a practical product than had been General Electric. Thus did the technology that had caused such a sensation and such disruption in 1987 gradually become yesterday’s news.

Ironically, we can lay the blame for the creeping irrelevancy of DVI directly at the feet of the work for which Intel was best known. As Gordon Moore — himself an Intel man — had predicted decades before, the overall throughput of Intel’s most powerful microprocessors continued to double every two years or so. This situation meant that the problem DVI addressed through all that specialized hardware — that of conventional general-purpose CPUs not having enough horsepower to decompress an ultra-compressed video stream fast enough — wasn’t long for this world. And meanwhile other engineers were attacking the problem from the other side, addressing the standard CD’s reading speed of just 153.6 K per second. They realized that by applying an integral multiplier to the timing of a CD drive’s circuitry, its reading (and seeking) speed could be increased correspondingly. Soon so-called “2X” drives began to appear, capable of reading data at well over 300 K per second, followed in time by “4X” drives, “8X” drives, and whatever unholy figure they’ve reached by today. These developments rendered all of the baroque circuitry of DVI pointless, a solution in search of a problem. Who needed all that complicated stuff?

CD-I’s end was even more protracted and ignominious. The absurd wait eventually got to be too much for even the most loyal CD-I developers. One by one, they dropped their projects. It marked a major tipping point when in 1989 Electronic Arts, the most enthusiastic of all the software publishers in the early days of CD-I, closed down the department they had formed to develop for the platform, writing off millions of dollars on the aborted venture. In another telling sign of the times, Greg Riker, the manager of that department, left Electronic Arts to work for Microsoft on CD-ROM.

When CD-I finally trickled onto store shelves just a few weeks shy of Christmas 1991, it was able to display full-screen video of a sort but only in 128 colors, and was accompanied by an underwhelming selection of slapdash games and lifestyle products, most funded by Philips themselves, that were a far cry from those halcyon expectations of 1986. CD-I sales disappointed — immediately, consistently, and comprehensively. Philips, nothing if not persistent, beat the dead horse for some seven years before giving up at last, having sold only 1 million units in total, many of them at fire-sale discounts.

In the end, the big beneficiary of the endless CD-I/DVI standoff was CD-ROM, the simple, commonsense format that had made its public debut well before either of them. By 1993 or so, you didn’t need anything special to play video off a CD at equivalent or better quality to that which had been so amazing in 1987; an up-to-date CPU combined with a 2X CD-ROM drive would do the job just fine. The Microsoft standard had won out. Funny how often that happened in the 1980s and 1990s, isn’t it?

Bill Gates’s reputation as a master Machiavellian being what it is, I’ve heard it suggested that the chaos and indecision which followed the public debut of DVI had been consciously engineered by him — that he had convinced a clueless General Electric to give that 1987 demonstration and later convinced Intel to keep DVI at least ostensibly alive and thus paralyzing Philips long enough for everyday PC hardware and vanilla CD-ROM to win the day, all the while knowing full well that DVI would never amount to anything. That sounds a little far-fetched to this writer, but who knows? Philips’s decision to announce CD-I five days before Microsoft’s CD-ROM Conference had clearly been a direct shot across Bill Gates’s bow, and such challenges did tend not to end well for the challenger. Anything else is, and must likely always remain, mere speculation.

(Sources: Amazing Computing of May 1986; Byte of May 1986, October 1986, April 1987, January 1989, May 1989, and December 1990; Commodore Magazine of November 1988; 68 Micro Journal of August/September 1989; Compute! of February 1987 and June 1988; Macworld of April 1988; ACE of September 1989, March 1990, and April 1990; The One of October 1988 and November 1988; Sierra On-Line’s newsletter of Autumn 1989; PC Magazine of April 29 1986; the premiere issue of AmigaWorld; episodes of the Computer Chronicles television series entitled “Optical Storage Devices,” “CD-ROMs,” and “Optical Storage”; the book CD-ROM: The New Papyrus from the Microsoft Press. Finally, my huge thanks to William Volk, late of Aegis and Mediagenic, for sharing his memories and impressions of the CD wars with me in an interview.)

Footnotes

1 The data on a music CD is actually read at a speed of approximately 172.3 K per second. The first CD-ROM drives had an effective reading speed that was slightly slower due to the need for additional error-correcting checksums in the raw data.

IBM’s New Flavor

The PS/2 lineup

IBM’s greatest triumph was inextricably linked with what by 1986 was turning into their biggest problem. Following its introduction five years before, the IBM PC had remade the face of corporate computing in its image, legitimizing personal computing in the eyes of the Fortune 500 and all those smaller companies who dreamed of someday joining their ranks. The ecosystem that surrounded the IBM PC and its successors was now worth countless billions, the greatest story of American business success of them all to play out during Ronald Reagan’s storied Morning in America.

The problem, at least as IBM and many of their worried stockholders perceived it, was that they now seemed on the verge of losing control of the very standard they had created. A combination of the decisions that had allowed the original IBM PC to become a standard in the first place — its simple, workmanlike design that utilized only off-the-shelf components; the scrupulously thorough documentation of said design; the decision to outsource the machine’s operating system to Microsoft, a third party all too willing to license the same operating system to other parties as well — had led to a thriving market in so-called “clone” machines whose combined revenues now far exceeded IBM’s personal-computer sales. IBM believed that the clonesters were lifting billions out of their pockets every year, even as they saw their own sales, which had broken record after record in the first few years following the IBM PC’s launch, beginning to show signs of stagnation.

Compaq of Houston, Texas, the most aggressive and innovative of the clonesters, had first begun to collect for themselves a reputation to rival IBM’s own with their very first product back in 1983, a portable — or, perhaps better said, “luggable” — all-in-one IBM-compatible. The Compaq Portable had forced IBM for the first time to play catch-up with a personal-computing rival, rushing to market a luggable of their own. To make matters worse, the IBM version of portable computing had proved far less practical than the Compaq, as many a reviewer wasn’t shy about pointing out.

Now, in 1986, Compaq threatened to wrest away from IBM the mantle of technological leadership via a machine that represented a more fundamental advance than a new form factor. After hearing that IBM didn’t have any immediate plans to release a machine built around the Intel 80386, a new 32-bit processor that was sending waves of excitement rippling through the industry, Compaq decided to push ahead with a 386-based machine of their own — right now, this very year. The public launch of the Compaq Deskpro 386 on September 9, 1986 — almost exactly five years after the debut of the original IBM PC — was another watershed moment, the first time one of the clonesters had released a machine more powerful than anything in IBM’s stable. Compaq’s CEO Rod Canion, never a shrinking violet under any circumstances, outdid himself at the launch, declaring the Deskpro 386 “the third generation of the personal-computer revolution” after the Apple II and the IBM PC, thus implicitly placing his own Compaq on a par with those two storied companies.

The clone market was getting so big that there seemed a danger that the clones wouldn’t be dismissed under that selfsame moniker much longer. People in the business world were beginning to replace the phrase “IBM clone” with phrases like “the MS-DOS standard” or “the Intel standard,” giving no credit to the company that had really created that standard. As was well attested by their checkered history of antitrust investigations and allegations of unfair competitive practices, IBM had never been known as a bastion of corporate generosity. It may not be exaggerating the case to say that they felt themselves to have a moral right to the PC standard they’d created, a right that encompassed not just an acknowledgement that said standard was still the IBM standard but also the ability to continue to steer every aspect of the further development of that standard. And by all rights the right should also encompass — and this was the sticking point that really irked — their fair share of all those billions that all those other companies were making from IBM’s standard.

In addition to furnishing what they saw as ample evidence of a need for them to reassert control of their industry, this period found IBM at another, more purely technical crossroads. The imminent move from 16-bit to 32-bit computing represented by the new 80386 would have to bring with it some elaborations on IBM’s tried-and-true architecture — elaborations that would undoubtedly define the face of mainstream business computing into the 1990s. IBM saw in those elaborations a way to remedy the ongoing problem of the clonesters as well. Unknown to everyone outside the company, they were about to initiate the so-called “bus wars,” a premeditated strike aimed directly at what they saw as parasites like Compaq.

The bus in this context referred not to a mode of public transportation but rather to the system of expansion slots that allowed the innermost core of an IBM-compatible computer — little more than the processor and memory — to communicate with just about everything else that made up a full-fledged PC: floppy and hard disk drives, monitors, modems, printers, ad infinitum, from the most generalized components found in just about every office to the most specialized for the most esoteric of tasks. The original IBM PC, built around a hybrid 8-bit and 16-bit chip called the Intel 8088, had used an 8-bit bus, meaning the electronic “channel” it used to talk to all these myriad devices was just 8 bits wide. In 1984, IBM had released the PC/AT, built around the newer fully 16-bit Intel 80286, and in that machine had expanded the original bus to support 16-bit devices while remaining backward compatible with the older 8-bit standard. The result retroactively came to be known as the Industry Standard Architecture, or ISA.

Now, with the 32-bit 80386 a reality, it was time to think about revisiting the bus again, to make it support 32-bit communications. To fail to do so would be to cripple the 386, forcing it to act like a 16-bit chip every time it wanted to communicate with a peripheral; impressive as they were in many ways, the Compaq Deskpro 386 and other early 386 clones saw their performance limited by exactly this problem. Most people expected IBM to do for the 386 what they had previously done for the 286, delivering a new bus which would support 32-bit peripherals but remain compatible with older 16-bit and even 8-bit devices. Instead they delivered something they called the Micro Channel Architecture, or MCA, a complete break with the past which supported only 32-bit peripherals.

So much controversy over something barely noticeable. The four Micro Channel slots sit at the left rear of this PS/2 Model 50. Many of the components that would have been housed in expansion cards in earlier IBM systems, such as the video card and hard-drive controller, were moved onto the motherboard with the PS/2 line.

MCA debuted as a key component in a new line of personal computers in April of 1987, the most ambitious such IBM had ever or would ever introduce. The Personal System/2 lineup — better known as the PS/2 — was envisioned as exactly the next generation in personal computing that an ebullient Rod Canion had perhaps overenthusiastically declared the Compaq Deskpro 386 to represent barely six months before. IBM was determined to once again remake the computer industry in their image — and to get it right this time, avoiding the perceived mistakes that had led to the rise of the clonesters. The PS/2 lineup did encompass lower-end machines using the old 16-bit PC/AT bus, but the real point of the effort lay with the higher-end models, IBM’s first to use the 80386 and their first to use the new MCA bus architecture to take advantage of all of the 32 bits of throughput offered by that chip. IBM offered various technical justifications for the failure of MCA to support their older bus standards, but they always rang false. As the more astute industry observers quickly realized, MCA had more to do with business and marketing than it did with technology in the abstract.

IBM was attempting a delicate trick with MCA. They wanted to be able to continue to reap the enormous benefits of the business-computing standard they had birthed, with its huge constellation of compatible software that by now even more so than IBM’s reputation made an MS-DOS machine the only kind to be seriously considered by the vast majority of corporate purchasing departments. At the same time, though, they wanted to cut off the oxygen to the clonesters who were also benefiting so conspicuously from that same universal acceptance, and to reassert their role as the ultimate authorities on the direction business computing would take in the future. They believed they could accomplish all of that, in the long term at least, by threading the needle of compatibility — keeping the 386-based PS/2 lineup software-compatible with the older machines while deliberately breaking the hardware compatibility so relied on by the clonesters. In doing so, they would take the hardware to a place the clonesters couldn’t follow, thus securing for themselves all those billions the clonesters had heretofore been stealing out of their pockets.

Unlike the original IBM bus architecture, MCA was locked up inside an ironclad cage of patents, making it legally uncloneable unless one could somehow negotiate a license to do so through IBM. The patents even extended to add-on cards and other peripherals that might be compatible with MCA, meaning that absolutely anyone who wanted to make a hardware add-on for an MCA machine would have to negotiate a license and pay for the privilege. The result should be not only a lucrative new revenue stream but also complete control of business computing’s further evolution. Yes, the clonesters would be able to survive for a few more years making machines using the older 16-bit bus architecture. In the longer term, however, as personal computing inevitably transitioned into a realm of 32 bits, they would survive purely at IBM’s whim, their fate predicated on IBM’s willingness to grant them a patent license for MCA and their own willingness to pay dearly for it.

The clonesters rightly and immediately saw MCA as nothing less than an existential threat, and were thrown into a tizzy trying to figure out how to respond to it. It was the ever-quotable Rod Canion who came up with the best line of attack, drawing an analogy between MCA and the recent soft-drink marketing disaster of New Coke. (What with Pepsi alumnus John Sculley in charge over at Apple, computers and soft drinks seemed to be running oddly in parallel during this era.) Clever, pithy, and blessedly non-technical, Canion’s comparison spread like wildfire through the business press, regurgitated ad nauseam by journalists who often had little to no idea what this MCA thing that it referenced actually was. IBM never quite managed to formulate a response that didn’t sound nefariously evasive.

With the “New Coke” meme setting the tone, just about everything about the PS/2 line turned into an unexpected uphill struggle for IBM. While plenty of early reviewers dutifully toed the line, doubtless mindful that if no one ever got fired for buying IBM no one was likely to get fired for giving them a positive review either, a surprising number of the reviews were distinctly lukewarm. The complaints started and often ended with the prices. Even the low-end 16-bit PS/2 models started at a suggested list price of $2295 without monitor, while the high-end models topped out at almost $7000. Insider reports had it that IBM was enjoying profit margins of 40 percent or more, leading to rampant speculation on what the cost of entry into business-friendly personal computing might become if they really should manage to stamp out the clonesters.

The high-end models in particular struck many as a pointless waste of money given that IBM didn’t have an operating system ready to take advantage of their capabilities. The machines were all still saddled with MS-DOS, clunky and archaic and barely worthy of the name “operating system” even by the standards of 1987. In one of the more striking examples of hardware running away from software in computing history, the higher-end models shipped with 1 MB of memory, but couldn’t actually use more than 640 K of it thanks to MS-DOS’s built-in limitations. IBM promised a new, next-generation operating system called OS/2 to unlock the real potential of these next-generation machines. But OS/2, a project they had once again chosen to turn over to Microsoft, was still an unknown number of months away, with the so-called “Presentation Manager” that would add to it a Macintosh-style GUI due yet further months after that. [1] And, as a final little bit of buyer discouragement, IBM planned to charge the people who had already spent many thousands on their PS/2 hardware another $800 or so for the privilege of using the eventual OS/2 to take advantage of it.

The PS/2 launch prompted constant comparisons with the original IBM PC launch of five and a half years before, and it invariably came up wanting. IBM’s publicity campaign was lavish — as it ought to have been, given those profit margins — but unfocused and uninspired. Its centerpiece was a series of commercials involving much of the cast from M*A*S*H, playing their old sitcom characters inexplicably transported from the Korean War to a modern office. With M*A*S*H still a beloved cultural touchstone only a few years removed from its record-shattering final episode, the spots had plenty of sheer star power, but lacked even a modicum of the charm or creativity that had characterized the award-winning “Charlie Chaplin” advertisements for the original IBM PC.

Likewise, it was hard not to compare the unexpected spirit of openness that had suffused the 1981 IBM PC with the domination and control IBM so plainly intended to assert with the 1987 PS/2 launch. Apple’s iconic old “Big Brother” Macintosh advertisement, a soaring triumph of rhetoric over substance back in its day, would have fit the PS/2 line much better than it did the state of business computing back in 1984. Many chose to blame the change in tone on the loss of Don Estridge, the leader of the small team that had built the original IBM PC. An unusually charismatic personality and independent thinker for the famously conservative and bureaucratic IBM — enough so that he had been courted by Steve Jobs to fill the CEO role John Sculley ended up taking at Apple — Estridge had been killed in a plane crash in 1985. He had been succeeded at the head of IBM’s microcomputer division by William Lowe, a much more traditional rank-and-file, buttoned-down IBM man. Whether due to this reason or some other, the shift in tone and direction from 1981 to 1987 was striking.

In the months following the PS/2 line’s release, the media narrative drifted from one of uncertain excitement to reports of the new machines’ disappointing reception in many quarters. IBM sold around 200,000 MCA-equipped PS/2s in the first six months, mostly to the biggest of big business; United Airlines alone, for example, bought 40,000 of them as part of a complete revamping of their reservations system. But far too many even within the Fortune 500 proved stubbornly, unexpectedly resistant to IBM’s unsubtle prodding to jump onto the PS/2 train. Many chose to invest in the clonesters’ cheaper 80386 offerings instead; the 16-bit bus used by those machines, while far from ideal from a purely technical standpoint, did at least have the advantage of compatibility with existing peripherals. Seventeen months after MCA’s debut, 66 percent of all business computers being sold each month were still using the old bus architecture, versus just 20 percent that used MCA. (The remainder was largely accounted for by the Macintosh.) Survey after survey reported IBM to be losing market share rather than gaining it since the arrival of the PS/2. By this point OS/2 and its “Presentation Manager” GUI were finally available, but, hampered by that high price tag, the new operating system’s uptake had also been limited at best.

And then, just when it seemed the news couldn’t get much worse for IBM, much of the industry went into unthinkable open revolt against their ongoing hegemony. On September 13, 1988, a group of the clonesters, driven as usual by Compaq and with the tacit support of Intel and Microsoft, announced the creation of a new 32-bit bus standard, to be called the Extended Industry Standard Architecture, or EISA. Unlike MCA, EISA would be compatible with older 16-bit and 8-bit peripherals. And it would manage to be so without performing notably worse than MCA, thus giving the lie to IBM’s claims that their decision to abandon bus compatibility had been motivated by technical rather than business concerns. The press promptly dubbed the budding consortium, which included virtually every manufacturer of IBM-compatible computers not named IBM, the “Gang of Nine” after the allegedly traitorous Gang of Four of the Chinese Cultural Revolution. Machines using the new EISA bus entered production within a year.

This shot of an EISA card illustrates the unique two-layer connection devised by the Gang of Nine to extend the old ISA standard without requiring ridiculously long, unwieldy cards and sockets. The shorter pins correspond to the older 16-bit standard; the longer extend it to 32 bits.

In the end, EISA would prove of limited technical importance in the evolution of the Intel architecture. The new standard didn’t have much more luck than had MCA in establishing itself as the market’s default. Instead, by the time a 32-bit bus became a truly commonplace need among ordinary computer users, EISA and MCA alike had been replaced by a still newer standard, better than either of them: the Peripheral Component Interconnect, or PCI. The bus wars of the late 1980s and very early 1990s can thus all too easily be seen as just another of the industry’s tempests in a teapot, an obscure squabble over technical esoterica of interest only to hardcore hackers.

Look a little harder at EISA, however, and we see a watershed moment in the history of the personal computer that dwarfs even the arrival of the Compaq Portable or the Deskpro 386. The Gang of Nine’s announcement brought with it a torrent of press coverage that for the first time openly questioned IBM’s continuing dominance of business-oriented computing. CNN’s Moneyline, the most-watched business report on cable television, dredged up Canion’s evergreen New Coke analogy yet again, going so far as to open its reports on the Gang of Nine’s announcement with a shot of soda bottles moving down a production line. IBM was “faced with overwhelming resistance to the flavor of ‘New Compute,'” declared the breathless report that followed; September 13, 1988, “was a day that left Big Blue looking black and blue.” An only slightly more sober Wall Street Journal article had it that the Gang of Nine “was joining forces in an audacious attempt to wrest away from IBM the power of setting the standard for how personal computers are designed, and they seem to have a chance of succeeding.” The article threw all its metaphors in a blender for the big conclusion: “For IBM, the Gang’s announcement yesterday is at best a dust storm of confusion, and, at worst, a dagger to the heart of its PC strategy.” When the Wall Street Journal threatens to turn against your big business, you know you have problems.

And, indeed, September 13, 1988, wound up representing everything the pundits and journalists said it might and more. Simply put, this was the instant that IBM finally and definitively lost control of the business-computing industry, the moment when the architecture they had created back in 1981 left the nest to go its own way. After this instant, no one would ever defer to IBM again. In January of 1989, Arlan Levitan, a columnist for the big consumer-computing magazine Compute! — like most such magazines, not particularly known for the boldness of its editorial stances — signaled the shifting conventional wisdom. His editors empowered him to launch a satirical broadside at IBM, the PS/2, MCA, and even all those who had bought into the hype, a group that very much included their own magazine.

During a Monday morning press breakfast hosted by IBM, over a thousand representatives of the computing press were shocked to hear newly hired Entry Systems Division president P.W. Herman declare that the firm’s PS/2 computer systems and its associated products were part of an elaborate psychological study undertaken at the behest of the National Institute of Mental Health. “I sure am glad the American people haven’t lost their sense of humor. It’s good to know that in these times everybody still appreciates a good joke.” According to Herman, the study was intended to quantify the limits of the operational parameters associated with Abraham Lincoln’s most famous aphorism. Said Herman, “I guess you really can’t fool all of the people all of the time. I’ll tell ya, though — the Micro Channel Architecture even had me going for a while.” All PS/2 owners will receive a letter signed by Herman and thanking them for their personal contribution toward furthering the present-day understanding of aberrant behavior. Corporate executives who committed their firms to IBM’s $800 OS/2 operating system will receive free remedial therapy in DOS reeducation centers. Those who took advantage of IBM’s trade-in policy, whereby users gave up their XTs or ATs for a PS/2, will receive their weight in PCjr computers. According to internal IBM sources, all costs associated with manufacturing and promoting PS/2s will cumulatively qualify as a tax-deductible research grant.

In terms of hardware if not software — Microsoft’s long, often damaging domination was just beginning in the latter realm — the industry was now a meritocracy, bound together only by a set of mutually if often only tacitly agreed-upon standards. That could only mean hard times for IBM, who were hardly used to competing on such a level playing field. In 1993, they posted a loss of a staggering $8 billion, the largest to that point in American business history, prompting a long, painful process of reinvention as a smaller, nimbler, dare I say it even humbler company. In 2004, in another watershed moment symbolic of many things, IBM stopped making PCs altogether, selling what was left of their personal-computer division to the Chinese computer manufacturer Lenovo in order to focus on consulting services.

The PS/2 story has rightfully gone down in business history as a classic tale of overweening arrogance that received its justified comeuppance. In attempting so aggressively to seize complete control of business computing — all of it — IBM pissed away the enviable dominance they already enjoyed. In attempting to build an empire that stood utterly alone and unchallenged, they burned the one they already had.

Yet there is another side to the PS/2 story that also deserves its due. Existing in those seemingly misbegotten machines alongside MCA and the cynicism it represented was a more positive, one might even say technically idealistic determination to advance the state of the art for this architecture that had long since become the mainstream face of computing, dwarfing in terms of the sheer money it generated any other platform.

And make no mistake: the world of the IBM compatibles was in sore need of advancement on multiple fronts. While machines like the Apple Macintosh and Commodore Amiga had opened whole new paradigms of computing — the former with its friendly GUI interface and crisp almost print-quality display, the latter with its multitasking operating system and implementation of the ideal of multimedia computing long before “multimedia” became a buzzword — the world of the clones had remained as bland as ever, a land of green or amber text-only displays, unpleasant beeps and squawks, and inscrutable command lines. For all the apparently proud users and sellers who took all this ugliness as a sign of serious businesslike intent, there were others who recognized that IBM and the clonesters had long since ceded the high ground of real, fundamental innovation in computing to rival platforms. Thankfully, some inside IBM were included in the latter group, and the results could be seen in the PS/2 machines.

Given how far the IBM-compatible world had fallen behind, it’s not surprising that many or most of the alleged innovations of the PS/2 were really a case of playing catch-up. For example, IBM finally produced their first-ever mouse for the line. They also switched over from the old, fragile 5.25-inch floppy-disk format to the newer, more robust and higher-capacity 3.5-inch format already being used by machines like the Macintosh and Amiga.

But undoubtedly the most welcome and significant of all the PS/2’s new technical developments were some desperately needed display improvements. The Video Graphics Array, or VGA, was included with the higher-end PS/2 models; lower-end models shipped with something called the Multi-Color Graphics Array (MCGA), with many but not quite all of the capabilities of VGA. After allowing their machines’ graphics capabilities to languish for years, IBM through VGA and to some extent MCGA finally brought them up to a level that compared very favorably with the Amiga. VGA and MCGA defined a palette of fully 262,144 colors, a huge leap over the 64 offered by the Enhanced Graphics Adapter (EGA), IBM’s previous best display option for their mainstream machines. The Amiga, by contrast, offered just 4096 colors, although its blitter and other custom hardware still gave it some notable advantages in the realm of fast animation.
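Just to make the arithmetic concrete: these palette sizes follow directly from how many intensity levels each adapter allows per color channel. The little sketch below assumes the usual figures of 2 bits per channel for EGA, 4 for the original Amiga chipset, and 6 for VGA’s digital-to-analog converter; it is an illustration of the math, nothing more.

```python
# Palette size = (intensity levels per channel) cubed, one factor each for
# red, green, and blue. The bits-per-channel figures are the commonly cited ones.
def palette_size(bits_per_channel: int) -> int:
    levels = 2 ** bits_per_channel
    return levels ** 3

print(f"EGA:      {palette_size(2):>7,} colors")   # 64
print(f"Amiga:    {palette_size(4):>7,} colors")   # 4,096
print(f"VGA/MCGA: {palette_size(6):>7,} colors")   # 262,144
```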

All of these new developments marked IBM’s last great gifts to the standard they had birthed — gifts destined to long outlive the PS/2 line itself. The mouse connection IBM developed, for instance, remained a standard well beyond the millennium, with so-called “PS/2” connectors still common jargon among younger tech-heads and system builders who likely had only the vaguest idea from whence the usage derived. The VGA standard proved even longer-lived. It still survives today as the lowest-common-denominator baseline for computer displays, while ports matching the specification defined by IBM all those years ago remain on the back of many a monitor and television set.

Ironically given IBM’s laser focus on using the PS/2 line to secure their dominance of business computing, its technical innovations ultimately proved most important in making the architecture viable as a proposition for the home, paving the way for the Microsoft-dominated second home-computer revolution of the 1990s. With good graphics falling into place at last thanks to VGA and the raw power of the 32-bit 80386, only two barriers remained to making PC-compatible machines realistic rivals to the likes of the Amiga as compelling home computers: decent sound to replace those atrocious beeps and squawks, and a decent price.

The first problem wouldn’t be a problem at all for very much longer. The first gaming-focused sound cards began to reach the market within a year of the PS/2 line’s debut, and by 1989 Creative Music Systems and Ad Lib both offered popular cards at street prices of $200 or less.

But the prices of home-oriented systems incorporating all of the PS/2 line’s innovations — MCA excepted — would, alas, take a little longer to fall. As late as July of 1989, when the VGA standard was already more than two years old, Computer Gaming World ran an article titled “Is VGA Worth It?” that seriously questioned whether it was indeed worth the still very considerable expense — VGA boards still cost $500 or more — to so equip a machine, especially given how few games supported VGA at that point. Nor did the 80386 find an immediate place in homes. As the 1980s turned into the 1990s, the newer chip was still a piece of pricey exotica in terms of the average consumer’s budget; the vast majority of the Intel-based PCs that were in consumers’ homes were still built around the 80286 or even the venerable old 8088.

Still, in the long run prices could only fall in such a hyper-competitive market. Given Commodore’s lackadaisical attitude toward improving the Amiga and Apple’s almost complete neglect of the consumer market in their eagerness to force the Macintosh into the offices of corporate America, the emerging standard of a 32-bit Intel-based PC with VGA graphics and a sound card came to the fore effectively unopposed. With the Internet having yet to emerge as home computing’s killer app to end all killer apps, it was games that drove this shift. In 1989, an Amiga was still the ultimate gaming computer. By 1991, it was an afterthought for American game publishers, the market being absolutely dominated by what was now starting to be called the “Wintel” standard. While game consoles and mobile devices have come and gone by the handful over the years since, in the realm of desktop- and laptop-based personal computing the heirs of the original IBM PC remain the overwhelming standard to this day. How ironic that this decades-long dominance was ensured by the PS/2, simultaneously the downfall of IBM and the savior of the inadvertently standard architecture IBM created.

(Sources: the books Big Blues: The Unmaking of IBM by Paul Carroll, Open: How Compaq Ended IBM’s PC Domination and Helped Invent Modern Computing by Rod Canion, and Hard Drive: Bill Gates and the Making of the Microsoft Empire by James Wallace and Jim Erickson; Byte of June 1987, July 1987, August 1987, and December 1987; Compute! of June 1988, January 1989, and March 1989; Computer Gaming World of July 1989 and September 1989; Wall Street Journal of September 14, 1988; the episodes of The Computer Chronicles titled “Intel 386 — The Fast Lane,” “IBM Personal System/2,” and “Bus Wars.”)

Footnotes

1 The full story of OS/2 and the Presentation Manager and their relationship to Microsoft Windows and even Apple’s MacOS is a complex yet fascinating one, but also one best reserved for a future article where I can give it its proper due.
 
 


A Pirate’s Life for Me, Part 3: Case Studies in Copy Protection

Copy-protection schemes, whether effected through software, a combination of software and hardware, or hardware alone, can and do provide a modicum of software protection. But such schemes alone are no better forms of security than locks. One with the appropriate tools can pick any lock. Locks only project the illusion of protection, to both the owner and the prospective thief.

Our focus on copy protection is the primary reason why our industry’s software-protection effort has come under skeptical scrutiny and intense attack. Many users now consider the copy-protection scheme to be just an obstacle to be overcome en route to their Congressionally- and self-granted right to the backup copy.

Dale A. Hillman
President, XOR Software
1985

An impregnable copy-protection scheme is a fantasy. With sufficient time and effort, any form of copy protection can be broken. If game publishers didn’t understand this reality at the dawn of their industry, they were given plenty of proof of its veracity almost as soon as they began applying copy protection to their products and legions of mostly teenage crackers began to build their lives around breaking it.

Given the unattainability of the dream of absolute protection, the next best thing must be protection that is so tough that the end result of a cracked, copyable disk simply isn’t worth the tremendous effort required to get there. When even this level of security proved difficult if not impossible to achieve, some publishers — arguably the wisest — scaled back their expectations yet further, settling for fairly rudimentary schemes that would be sufficient to deter casual would-be pirates but that would hardly be noticed by the real pros. Their games, so the reasoning went, were bound to get cracked anyway, so why compound the loss by pouring money into ever more elaborate protection schemes? Couldn’t that money be better used to make the games themselves better?

Others, however, doubled down on the quixotic dream of the game that would never be cracked, escalating a war between the copy-protection designers who developed ever more devious schemes and the intrepid crackers of the scene, the elite of the elite who staked their reputations on their ability to crack any game ever made. In the long term, the crackers won every single battle of this war, as even many of the publishers who waged it realized was all but inevitable. The best the publishers could point to was a handful of successful delaying actions that bought their games a few weeks or months before they were spread all over the world for free. And even those relative successes, it must be emphasized, were extremely rare. Few schemes stood up much more than a day or two under the onslaught of the scene’s brigade of talented and highly motivated crackers.

Just as so many crackers found the copy-protection wars to be the greatest game of all, far more intriguing and addictive than the actual contents of the disks being cracked, the art of copy protection — or, as it’s more euphemistically called today, digital-rights management or DRM — remains an almost endlessly fascinating study for those of a certain turn of mind. Back in the day, as now, cracking was a black art. Both sides in the war had strong motivations to keep it so: the publishers because information on how their schemes worked meant the power to crack them, and the crackers because their individual reputations hinged on being the first and preferably the only ones to crack and spread that latest hot game. Thus information in print on copy protection, while not entirely unheard of, was often hard to find. Only long after that wild and woolly first decade of the games industry did much detailed information on how the most elaborate schemes worked become widely available, thanks to initiatives like The Floppy Disk Preservation Project.

This article will offer just a glimpse of how copy protection began and how it evolved over its first decade, as seen through the schemes that were applied to four historically significant games that we’ve already met in other articles: Microsoft Adventure for the TRS-80, Ultima III for the Apple II, Pirates! for the Commodore 64, and Dungeon Master for the Atari ST. Sit back, then, and join me on a little tour through the dawn of DRM.

Microsoft Adventure box art

The release of Microsoft Adventure in late 1979 for the Radio Shack TRS-80 marks quite a number of interrelated firsts for the games industry. It was the first faithful port of Will Crowther and Don Woods’s perennial Adventure, itself one of the most important computer games ever written, to a home computer. It accomplished this feat by taking advantage of the capabilities of the floppy disk, becoming in the process the first major game to be released on disk only, as opposed to the cassettes that still dominated the industry. And to keep those disks from being copied, normally a trivially easy thing to do in comparison to copying a cassette, Microsoft applied one of the earliest notable instances of physical copy protection to the disk, a development novel enough to attract considerable attention in its own right in the trade press. Byte magazine, for instance, declared the game “a gold mine for the enthusiast and a nightmare for the software pirate.”

Floppy Disk

The core of a 5¼-inch floppy disk, the type used by the TRS-80 and most other early microcomputers, is a platter made of a flexible material such as Mylar — thus the “floppy” — with a magnetic coating made of ferric oxide or a similar material, capable of recording the long sequences of ones and zeroes (or ons and offs) that are used to store all computer code and data. The platter is housed within a plastic casing that exposes just enough of it to give the read/write head of the disk drive access as the platter is spun.

The floppy disk is what’s known as a random-access storage medium. Unlike a cassette drive, a floppy drive can access any of its contents at any time at a simple request from the computer to which it’s attached. To allow this random access, there needs to be an organizing scheme to the disk, a way for the drive to know what lies where and, conversely, what spaces are still free for writing new files. A program known as a “formatter,” which must be run on every new disk before it can be used, writes an initially empty framework to the disk to keep track of what it contains and where it all lives on the disk’s surface.

In the case of the TRS-80, said surface is divided into 35 concentric rings, known as “tracks,” numbered from 0 to 34, with track 0 lying at the outer margin of the disk and track 34 closest to the inner ring. Each track is subdivided along its length into 10 equal-sized sectors, each capable of storing 256 bytes of data. Thus the theoretical maximum capacity of an entire disk is about 87 K (256 bytes × 10 sectors × 35 tracks, divided by 1024 bytes per kilobyte).
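Since the same back-of-the-envelope capacity calculation will come up again for the Apple II and Atari ST disks later in this article, here it is spelled out once as a trivial Python helper. It is only the arithmetic from the paragraph above, nothing platform-specific.

```python
# Capacity in kilobytes = bytes per sector * sectors per track * tracks / 1024.
def disk_capacity_kb(bytes_per_sector: int, sectors_per_track: int, tracks: int) -> float:
    return bytes_per_sector * sectors_per_track * tracks / 1024

print(disk_capacity_kb(256, 10, 35))   # TRS-80 disk: 87.5 K
```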

Figure 1

Figure 1 shows the general organization of the tracks on a TRS-80 disk. Much of this is specific to the TRS-80’s operating system and thus further down in the weeds than we really need to go, but a couple of details are very relevant to our purposes. Notice track 18, the “system directory.” It’s just what its name would imply. The entire track is reserved to be the disk’s directory service, a list of all the files it contains along with the track and sector numbers where each begins. The directory is placed in the middle of the disk for efficiency’s sake. Because it must be read from every time a file is requested, having it here minimizes the distance the head must travel both to read from the directory and, later, to access the file in question. For the same reason, most floppy-disk systems try to fill disks outward from the directory track, using the farthest-flung regions only if the disk is otherwise full.

The one exception to this rule in the case of the TRS-80 as well as many other computers is the “boot sector”: track 0, sector 0. It contains code, stored outside the filesystem described in the directory, which the computer will always try to access and execute on boot-up. This “bootstrap” code tells the computer how to get started loading the operating system and generally getting on with things. There isn’t much space here — only a single sector’s worth, 256 bytes — but it’s enough to set the larger process in motion.

Figure 2

Figure 2 shows the layout of an individual disk sector. This diagram presumes a newly formatted disk, so the “dummy data” represents the sector’s 256 bytes of available storage, waiting to be filled. Note the considerable amount of organizing and housekeeping information surrounding the actual data, used to keep the drive on track and aware at all times of just where it is. Again, there’s much more here than we need to dig into today. Relevant for our purposes are the track and sector numbers stored near the beginning of each sector. These amount to the sector’s home address, its index in the directory listing.

Microsoft Adventure introduces a seeming corruption into the disk’s scheme. Beginning with track 1 — track 0 must be left alone so the system can find the boot sector and get started — the tracks are numbered not from 1 to 34 but from 127 to 61, in downward increments of 2. The game’s bootstrap inserts a patch into the normal disk-access routines that tells them how to deal with these weirdly numbered tracks. But, absent the patch, the normal TRS-80 operating system has no idea what to make of it. Even a so-called “deep” copier, which tries to copy a disk sector by sector rather than file by file to create a truly exact mirror image of the original, fails because it can’t figure out where the sectors begin and end.
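To make the renumbering concrete, here is a small sketch of the mapping as I understand it from the description above: track 1 is written as 127, track 2 as 125, and so on down to track 34 as 61, with track 0 left alone. The exact formula is my own reconstruction, not Microsoft’s code.

```python
# My reconstruction of Microsoft Adventure's track relabeling, not the
# original TRS-80 routines: tracks 1-34 become 127, 125, ... 61.
def stored_track_number(logical_track: int) -> int:
    if logical_track == 0:
        return 0                          # track 0 must stay valid for the boot sector
    return 127 - 2 * (logical_track - 1)

def logical_track_number(stored_track: int) -> int:
    """The inverse, which the game's patched disk routines would apply on reads."""
    if stored_track == 0:
        return 0
    return (127 - stored_track) // 2 + 1

assert stored_track_number(1) == 127
assert stored_track_number(34) == 61
assert all(logical_track_number(stored_track_number(t)) == t for t in range(35))
```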

If one wants to make a copy of a protected program, whether for the legal purpose of backing it up or the illegal one of software piracy, one can take either of two approaches. The first is to find a way to exactly duplicate the disk, copy protection and all, so that there’s no way for the program it contains to know it isn’t running on an original. The other is to crack it, to strip out or ignore the protection and modify the program itself to run correctly without it.

One of the first if not the first to find a way to duplicate Microsoft Adventure and then to crack it to boot was an Australian teenager named Nick Andrew (right from the beginning, before the scene even existed, cracking already seemed an avocation for the young). After analyzing the disk to work out how it was “corrupted,” he rewrote the TRS-80’s usual disk formatter to format disks with the alternate track-numbering system. Then he rewrote the standard copier to read and write to the same system. After “about two days,” he had a working duplicate of the original disk.

But he wasn’t quite done yet. After going through all the work of duplicating the disk, the realization dawned that he could easily go one step further and crack it, turn it into just another everyday disk copyable with everyday tools. To do so, he wouldn’t need his modified disk formatter at all. He needed only make a modification to his customized copier, to read from a disk with the alternate track-numbering system but write to a normal one. Remove the custom bootstrap to make Adventure boot like any other disk, and he was done. This first “nightmare for the software pirate” was defanged.

Ultima III

Released in 1983, Ultima III was already the fourth commercial CRPG to be written by the 22-year-old Richard Garriott, but the first of them to be published by his own new company, Origin Systems. With the company’s future riding on its sales, he and his youthful colleagues put considerable effort into devising as tough a copy-protection scheme as possible. It provides a good illustration of the increasing sophistication of copy protection in general by this point, four years after Microsoft Adventure.

Apple II floppy-disk drives function much like their TRS-80 equivalents, with largely only practical variations brought on by specific engineering choices. The most obvious of the differences is the fact that the Apple II writes its data more densely to the disk, giving it 16 256-byte sectors on each of its 35 tracks rather than the 10 of the TRS-80. This change increases each disk’s capacity to 140 K (256 bytes × 16 sectors × 35 tracks, divided by 1024).

Ultima III shipped on two disks, one used to boot the game and the other to load in data and to save state as needed during play. The latter is a completely normal Apple II disk, allowing the player to make copies as she will in the name of being able to start a fresh game with a new character at any time. The former, however, is a different story.

The game’s first nasty trick is to make the boot disk less than half a disk. Only tracks 0 through 16 are formatted at all. Like the TRS-80, the Apple II expects the disk’s directory to reside in the middle of the disk, albeit on track 17 rather than 18. In this case, though, track 17 literally doesn’t exist.

But how, you might be wondering, can even a copy-protected disk function at all without a directory? Well, it really can’t, or at least it doesn’t in this case. Again like the TRS-80, the beginning of an Apple II disk is reserved for a boot block. The Ultima III bootstrap substitutes alternative code for a standard operating-system routine called the “Read Write Track Sector” routine, or, more commonly, the “RWTS.” It’s this routine that programs call when they need to access a disk file or to do just about any other operation to a disk. Ultima III provides an RWTS that knows to look for the directory listing not on track 17 but rather on track 7, right in the middle of its half-a-disk. Thus it knows how to find its files, but no one else does.

Ultima III‘s other trick is similar to the approach taken by Microsoft Adventure in theory, but far more gnarly in execution. To understand it, we need to have a look at the structure of an Apple II sector. As on the TRS-80, each sector is divided into an “address field,” whose purpose is to keep the drive on track and help it to locate what it’s looking for, and a “data field” containing the actual data written there. Figures 3 and 4 show the structure of each respectively.

Figure 3

Figure 4

Don’t worry too much about the fact that our supposed 256 bytes of data have suddenly grown to 342. This transformation is down to some nitty-gritty details of the hardware that mean that 256 logical bytes can’t actually be packed into 256 bytes of physical space, that the drive needs some extra breathing room. A special encoding process, known as Group Code Recording (GCR) on the Apple II, converts the 256 bytes into 342 that are easily manipulable by the drive and back again. If we were really serious about learning to create copy protection or how to crack it, we’d need to know a lot more about this. But it’s not necessary to understand if you’re just dipping your toes into that world, as we’re doing today.

Of more immediate interest are the “prologues” and “epilogues” that precede and trail both the address and data fields. On a normal disk these are fixed runs of numbers, which you see shown in hexadecimal notation in Figures 3 and 4. (If you don’t know what that means, again, don’t worry too much about it. Just trust me that they’re fixed numbers.) Like so much else here, they serve to keep the drive on track and to reassure it that everything is kosher.

Ultima III, however, chooses other numbers to place in these spaces. Further, it doesn’t just choose a new set of fixed numbers — that would be far too easy — but rather varies the expected numbers from track to track and even sector to sector according to a table only it has access to, housed in its custom RWTS. Thus what looks like random garbage to the computer normally suddenly becomes madness with a method behind it when the computer has been booted from the Ultima III disk. If any of these fields don’t match with what they should be — i.e., if someone is trying to use an imperfect copy —  the game loads no further.
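To give a feel for what that custom RWTS has to verify on every read, here is a toy version of a table-driven marker check. The byte values and the shape of the table are invented for illustration; Ultima III’s real table lives inside its RWTS and its contents aren’t reproduced here.

```python
# Invented example values: a per-(track, sector) table of expected address-field
# prologue bytes. Only the (0, 0) entry shows the standard Apple II prologue,
# for comparison; the others are made up to illustrate the idea.
ADDRESS_PROLOGUES = {
    (0, 0): bytes([0xD5, 0xAA, 0x96]),   # the standard prologue
    (7, 0): bytes([0xD4, 0xAB, 0x97]),   # made-up values of the sort the
    (7, 1): bytes([0xD7, 0xAD, 0x93]),   # protected tracks might substitute
}

def address_field_ok(track: int, sector: int, raw_field: bytes) -> bool:
    """Accept a sector only if its prologue matches the table entry for it."""
    expected = ADDRESS_PROLOGUES.get((track, sector))
    return expected is not None and raw_field.startswith(expected)

# A copy rewritten with the standard prologues would fail everywhere but (0, 0):
assert address_field_ok(0, 0, bytes([0xD5, 0xAA, 0x96, 0x00]))
assert not address_field_ok(7, 0, bytes([0xD5, 0xAA, 0x96, 0x00]))
```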

It’s a tough scheme, particularly for its relatively early date, but far from an unbreakable one. There are a couple of significant points of vulnerability. The first is the fact that Ultima III doesn’t need to read and write only protected disks. There is, you’ll remember, also that second disk in a standard format. The modified RWTS needs to be able to fall back to the standard routine when using that second disk, which is no more readable by the modified routine than the protected disk is by the standard. It relies on the disk’s volume number to decide which routine to use: volume 1 is the first, protected disk; volume 2 the second, unprotected (if the volume number is anything else, it knows somebody must be up to some sort of funny business and just stops entirely). Thus if we can just get a copy of the first disk in an everyday disk format and set its volume number to 2, Ultima III will happily accept it and read from it.

But that “just” is, of course, a tricky proposition. We would seemingly need to write a program of our own to read from a disk — or rather from half of a disk — with all those ever-changing prologue and epilogue fields. That, anyway, is one approach. But, if we’re really clever, we won’t have to. Instead of working harder, we can work smarter, using Ultima III‘s own code to crack it.

One thing that legions of hackers and crackers came to love about the Apple II was its integrated machine-language monitor, which can be used to pause and break into a running program at almost any point. We can use it now to pause Ultima III during its own boot process and look up the address of its customized RWTS in memory; because all disk operations use the RWTS, it is easily locatable via a global system pointer. Once we know where the new RWTS lives, we can save that block of memory to disk for later use.

Next we need only boot back into the normal system, load up the customized RWTS we saved to disk, and redirect the system pointer to it rather than the standard RWTS. Remember that the custom RWTS is already written to assume that disks with a volume number of 1 are in the protected format, those with a volume number of 2 in the normal format. So, if we now use an everyday copy program to copy from the original, which has a volume number of 1, to a blank disk which we’ve formatted with a volume number of 2, Ultima III essentially cracks itself. The copy operation, like all disk operations, simply follows the modified system pointer to the new RWTS, and is never any the wiser that it’s been modified. Pretty neat, no? Elegant tricks like this warm any hacker’s heart, and are much of the reason that vintage cracking remains to this day such an intriguing hobby.
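Stripped of all the 6502 and DOS-specific detail, the control flow of that self-crack looks something like the sketch below. Apart from the volume-number convention described above, everything here (the class, the function names) is an invented stand-in for the real machine-language plumbing; it is meant only to show why redirecting one pointer is enough.

```python
# An abstract restatement of the trick, not Apple II code.
class Disk:
    def __init__(self, volume: int, sectors: dict[tuple[int, int], bytes]):
        self.volume = volume
        self.sectors = sectors              # {(track, sector): 256 bytes of data}

def standard_rwts(disk: Disk, track: int, sector: int) -> bytes:
    """Read a sector laid out in the ordinary format."""
    return disk.sectors[(track, sector)]

def ultima_rwts(disk: Disk, track: int, sector: int) -> bytes:
    """Ultima III's routine: protected layout for volume 1, normal for volume 2."""
    if disk.volume == 1:
        return read_protected_sector(disk, track, sector)
    return standard_rwts(disk, track, sector)

def read_protected_sector(disk: Disk, track: int, sector: int) -> bytes:
    # Stand-in for the odd prologues, epilogues, and relocated directory.
    return disk.sectors[(track, sector)]

# The crack: every disk operation goes through one pointer. Point it at the
# game's own RWTS, then run an everyday copier from the volume-1 original to
# a blank disk formatted as volume 2.
rwts = ultima_rwts

def copy_disk(source: Disk, destination: Disk) -> None:
    for track, sector in source.sectors:
        destination.sectors[(track, sector)] = rwts(source, track, sector)
```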

Pirates!

Ultima III‘s copy protection was clever enough in its day, but trivial compared to what would start to appear just a year or so later as the art reached a certain level of maturity. As the industry itself got more and more cutthroat, many of the protection schemes also got just plain nasty. The shadowy war between publisher and pirate was getting ever more personal.

A landmark moment in the piracy wars was the 1984 founding of the Software Publishers Association. It was the brainchild of a well-connected Washington, D.C., lawyer named Ken Wasch who decided that what the industry really needed was a D.C.-based advocacy group and that he, having no previous entanglements within it, was just the neutral party to start it. The SPA had a broad agenda, from gathering data on sales trends from and for its members to presenting awards for “software excellence,” but, from the perspective of the outsider at any rate, seemed to concern itself with the piracy problem above all else. Its rhetoric was often strident to the point of shrillness, while some of its proposed remedies smacked of using a hydrogen bomb to dig a posthole. For instance, the SPA at one point protested to Commodore that multitasking shouldn’t be a feature of the revolutionary new Amiga because it would make it too easy for crackers to break into programs. And Wasch lobbied Congress to abolish the user’s right to make backup copies of their software for personal archival purposes, a key part of the 1980 Software Copyright Act that he deemed a “legal loophole” because it permitted the existence of programs capable of copying many forms of copy-protected software — a small semi-underground corner of the software industry that the SPA was absolutely desperate to eliminate rather than advocate for. The SPA also did its best to convince the FBI and other legal authorities to investigate the bulletin-board systems of the cracking scene, with mixed success at best.

Meanwhile copy protection was becoming a business in its own right, the flip side to the business of making copying programs. In place of the home-grown protection schemes of our first two case studies, which amounted to whatever the developers themselves could devise in whatever time they had available, third-party turnkey protection systems, the products of an emerging cottage industry, became increasingly common as the 1980s wore on. The tiny companies that created the systems weren’t terribly far removed demographically from the crackers that tried to break them; they were typically made up of one to three young men with an encyclopedic knowledge of their chosen platforms and no small store of swagger of their own. Their systems, sporting names like RapidLok and PirateBusters, were multifaceted and complex, full of multiple failsafes, misdirections, encryptions, and honey pots. Copy-protection authors took to sneaking taunting messages into their code, evincing a braggadocio that wouldn’t have felt out of place in the scene: “Nine out of ten pirates go blind trying to copy our software. The other gets committed!”

Protection schemes of this later era are far too complex for me to describe in any real detail in an accessible article like this one, much less explain how people went about cracking them. I would, however, like to very briefly introduce RapidLok, the most popular of the turnkey systems on the Commodore 64. It was the product of a small company called the Dane Final Agency, and was used in its various versions by quite a number of prominent publishers from early 1986 on, including MicroProse. You’ll find it on that first bona fide Sid Meier classic, the ironically-titled-for-our-purposes Pirates!, along with all of their other later Commodore 64 games.

The protection schemes we’ve already seen have modified their platforms’ standard disk formats to confuse copy programs. RapidLok goes to the next level by implementing its own custom format from scratch. A standard Commodore 64 disk has 17 to 21 sectors per track, depending on where the track is located; a RapidLok disk has 11 or 12 much larger sectors, with the details of how those sectors organize their data likewise re-imagined. RapidLok also adds a track to the standard 35, shoved off past the part of the disk that is normally read from or written to. This 36th track serves as an encrypted checksum store for all of the other tracks. If any track fails the checksum check — indicating it’s been modified from the original — the system immediately halts.
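The checksum half of that scheme is simple enough to sketch. The checksum function RapidLok actually used and the way it encrypted the results on its extra track aren’t something I can reproduce, so the version below is purely illustrative of the shape of the idea.

```python
# Illustrative only: a per-track checksum store of the kind RapidLok keeps on
# its extra 36th track. The real checksum and its encryption differ.
import hashlib

def track_checksum(track_data: bytes) -> bytes:
    return hashlib.sha256(track_data).digest()   # stand-in for RapidLok's own sum

def disk_passes_check(tracks: list[bytes], checksum_store: list[bytes]) -> bool:
    """Refuse to continue the moment any track fails its stored checksum."""
    return all(track_checksum(data) == stored
               for data, stored in zip(tracks, checksum_store))
```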

Like any protection scheme, RapidLok must provide a gate to its walled garden, an area of the disk formatted normally so that the computer can boot the game in the first place. Further, writing to RapidLok-formatted tracks isn’t practical. The computer would need to recalculate the checksum for the track as a whole, encrypt it, and rewrite that portion to the checksum store out past the normally accessible part of the disk — a far too demanding task for a little Commodore 64. For these reasons, RapidLok disks are hybrids, partially formatted as standard disks and partially in the protected format. Figure 5 below shows the first disk of Pirates! viewed with a contemporary copying utility.

Figure 5

As the existence of such a tool will attest, techniques did exist to analyze and copy RapidLok disks in their heyday. Among the crackers, Mitch of Eagle Soft was known as the RapidLok master; it’s his vintage crack of Pirates! and many other RapidLok-protected games that you’ll find floating around the disk-image archives today. Yet even those cracks, masterful as they were, were forced to strip out a real advantage that RapidLok gave to the ordinary player, that was in fact the source of the first part of its name: its custom disk format was much faster to read from than the standard, by a factor of five or six. Pirates who chose to do their plundering via Mitch’s cracked version of Pirates! would have to be very patient criminals.

But balanced against the one great advantage of RapidLok for the legitimate user was at least one major disadvantage beyond even the obvious one of not being able to make a backup copy. In manipulating the Commodore 64 disk drive in ways its designers had never intended, RapidLok put a lot of stress on the hardware. Drives that were presumably just slightly out of adjustment, but that nevertheless did everything else with aplomb, proved unable to load RapidLok disks, or, almost worse, failed intermittently in the middle of game sessions (seemingly always just after you’d scored that big Silver Train robbery in the case of Pirates!, of course). And, still worse from the standpoint of MicroProse’s customer relations, a persistent if unproven belief arose that RapidLok was actually damaging disk drives, throwing them out of alignment through its radical operations. It certainly didn’t sound good in action, producing a chattering and general caterwauling and shaking the drive so badly one wondered if it was going to walk right off the desktop one day.

The belief, quite probably unfounded though it was, that MicroProse and other publishers were casually destroying their customers’ expensive hardware in the name of protecting their own interests only fueled the flames of mistrust between publisher and consumer that the SPA’s rhetoric had already done so much to ignite. RapidLok undoubtedly did its job in preventing a good number of people from copying MicroProse games. A fair number of them probably even went out and bought the games for themselves as an alternative. Whether those sales were worth the damage it did to MicroProse’s relations with their loyal customers is a question with a less certain answer.

Dungeon Master

No discussion of copy protection in the 1980s could be complete without mentioning Dungeon Master. Like everything else about FTL’s landmark real-time CRPG, its copy protection was innovative and technically masterful, so much so that it became a veritable legend in its time. FTL wasn’t the sort of company to be content with any turnkey copy-protection solution, no matter how comprehensive. What they came up with instead is easily as devious as any dungeon level in the game proper. As Atari ST and Amiga crackers spent much of 1988 learning, every time you think you have it beat it turns the tables on you again. Let’s have a closer look at the protection used on the very first release of Dungeon Master, the one that shipped for the ST on December 15, 1987.

A 3½-inch floppy disk

With the ST and its 68000-based companions, we’ve moved into the era of the 3½-inch disk, a format that can pack more data onto a smaller disk and also do so more reliably; the fragile magnetic platter is now protected beneath a rigid plastic case and a metal shield that only pulls away to expose it when the disk is actually inserted into a drive. The principles of the 3½-inch disk’s operation are, however, the same as those of the 5¼-inch, so we need not belabor the subject here.

Although most 3½-inch drives wrote to both sides of the disk, early STs used just one, in a format that consisted of 80 tracks, each with 9 512-byte sectors, for a total of 360 K of storage capacity (512 bytes × 9 sectors × 80 tracks, divided by 1024). The ST uses a more flexible filesystem than was the norm on the 8-bit machines we’ve discussed so far, one known as FAT, for File Allocation Table. The FAT filesystem dates back to the late 1970s, was adopted by Microsoft for MS-DOS in 1981, and is still in common use today in a form known as FAT32; the ST uses FAT12. The numerical suffix refers to the number of bits allocated to each file’s home address on the disk, which in turn dictates the maximum possible capacity of the disk itself. FAT is designed to accommodate a wide range of floppy and hard disks, and thus allows the number of tracks and sectors to be specified at the beginning of the disk itself. Thanks to FAT’s flexibility, Dungeon Master can easily bump the number of sectors per track from 9 to 10, a number still well within the capabilities of the ST’s drive. That change increases the disk’s storage capacity to 400 K (512 bytes × 10 sectors × 80 tracks, divided by 1024). It was only this modification, more a response to a need for just a bit more disk space than an earnest attempt at copy protection, that allowed FTL to pack the entirety of Dungeon Master onto a single disk.

Dungeon Master‘s real protection is a very subtle affair, which is one of the keys to its success. At first glance one doesn’t realize that the disk is protected at all — a far cry from the radical filesystem overhaul of RapidLok. The disk’s contents can be listed like those of any other, its individual files even read in and examined. The disk really is a completely normal one — except for track 0, sectors 7 and 8.

Let’s recall again the two basic methods of overcoming copy protection: by duplicating the protection on the copy or by cracking the original, making it so that you don’t need to duplicate the protection. Even with a scheme as advanced as RapidLok, duplication often remained an option. Increasingly by the era of Dungeon Master, though, we see the advent of schemes that are physically impossible for the disk drives on the target machines to duplicate under any circumstances, that rely on capabilities unique to industrial-scale disk duplicators. Nate Lawson, a reader of this blog who was hugely helpful to me in preparing this article, describes good copy protection as taking advantage of “asymmetry”: “the difference between the environment where the code is executed versus where it was produced.” The ultimate form of asymmetry must be a machine on the production side that can write data in a format that the machine on the execution side physically cannot.

Because FTL duplicated their own disks in-house rather than using an outside service like most publishers, they had a great deal of control over the process used to create them. They used their in-house disk duplicator to write an invalid sector number to a single sector: track 0, sector 8 is labeled sector 247. At first blush, this hardly seems special; Microsoft Adventure, that granddaddy of copy-protected games, had after all used the same technique eight years earlier. But there’s something special about this sector 247: due to limitations of the ST’s drive hardware that we won’t get into here, the machine physically can’t write that particular sector number. Any disk with a sector labeled 247 has to have come from something other than an ST disk drive.

Track 0, sector 7, relies on the same idea of hardware asymmetry, but adds another huge wrinkle sufficient to warm the heart of any quantum physicist. Remember that the data stored on a disk boils down to a series of 1s and 0s, areas magnetized in one direction or the other that are definitively in one state at any given spot. But what if it were possible to create a “fuzzy” bit, one that capriciously varies between states on each successive read? Well, it wasn’t possible to do anything like that on an ST disk drive or even most industrial disk duplicators. But FTL, technology-driven company that they were, modified their own disk duplicator to be able to do just that. By cramming a lot of “flux reversals,” or transitions between the two magnetic polarities, into a space far smaller than the read resolution of the ST disk drive, they could create bits that lived in a perpetually in-between state — bits that the drive would randomly read sometimes as on and sometimes as off.

Dungeon Master has one of these fuzzy bits on track 0, sector 7. When the disk is copied, the copy will contain not a fuzzy bit but a normal bit, on or off according to the quantum vagaries of the read process that created it.

Figure 6

As illustrated in Figure 6, Dungeon Master’s copy-protection routines read the ostensible fuzzy bit over and over, waiting for a discordant result. When one comes, the game can assume that it’s running from an original disk and continue. If the routine tries many times and always gets the same result, the game assumes it’s running from a copy and behaves accordingly.
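In code, that logic might look something like the following, again a hypothetical sketch rather than FTL’s routine. Comparing whole sectors is a simplification of whatever FTL actually examined, but the principle is the same: an original disk will sooner or later return two different answers to the same question.

    /* Sketch of the fuzzy-bit test from Figure 6; hypothetical, not FTL's code. */
    #include <string.h>
    #include <stdint.h>
    #include <osbind.h>

    static int fuzzy_bit_present(void)
    {
        static uint8_t first[512], again[512];
        int have_reference = 0;
        int tries;

        for (tries = 0; tries < 100; tries++) {            /* retry count is arbitrary */
            uint8_t *dst = have_reference ? again : first;
            if (Floprd(dst, 0L, 0, 7, 0, 0, 1) < 0)
                continue;                                   /* a fuzzy bit can also upset the CRC; just try again */
            if (!have_reference) {
                have_reference = 1;                         /* keep the first successful read as a reference */
                continue;
            }
            if (memcmp(first, again, sizeof first) != 0)
                return 1;                                   /* discordant read: original disk */
        }
        return 0;                                           /* every read agreed: almost certainly a copy */
    }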

FTL’s scheme was so original that they applied for and were granted a patent on it, one that’s been cited many times in subsequent filings. It represents a milestone in the emerging art and science of DRM. Ironically, the most influential aspect of Dungeon Master, a hugely influential game on its own terms, might just be its fuzzy-bit copy protection. Various forms of optical media continue to use the same approach to this day.

With duplication a complete non-starter in the case of both this sector numbered 247 and the fuzzy bit, the only way to pirate Dungeon Master must be to crack it. Doing so must entail diving into the game’s actual code, looking for the protection check and modifying it to always return a positive response. In itself, that wasn’t usually too horrible; crackers had long ago learned to root through code to disable look-up-a-word-in-the-manual and code-wheel-based “soft” protection schemes. But FTL, as usual, had a few tricks up their sleeves to make it much harder: they made the protection checks multitudinous and their results non-obvious.

Instead of checking the copy protection just once, Dungeon Master does it over and over, from half-a-dozen or so different places in its code, turning the cracker’s job into a game of whack-a-mole. Every time he thinks he’s got it at last, up pops another check. The most devious of all the checks is the one that’s hidden inside a file called “graphics.dat,” the game’s graphics store. Who would think to look for executable code there?

Compounding the problem of finding the checks is the fact that even on failure they don’t obviously do anything. The game simply continues, only to become unstable and start spitting out error messages minutes later. For this reason, it’s extremely hard to know when and whether the game is finally fully cracked. It was the perfect trap for the young crackers of the scene, who weren’t exactly known for their patience. The pirate boards were flooded with crack after crack of Dungeon Master, all of which turned out to be broken after one had actually played a while. In a perverse way, it amounted to a masterful feat of advertising. Many a habitual pirate got so frustrated with not being able to play this paradigm-shattering game properly that he made Dungeon Master the only original disk in his collection. Publishers had for years already been embedding their protection checks some distance into their games, both to make life harder for crackers and to turn the copies themselves into a sort of demo version that unwitting would-be pirates distributed for them for free. But Dungeon Master used the technique to convert pirated copies into sold originals on an unprecedented scale.
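The delayed failure is easy to picture in code. The fragment below is purely illustrative, my own invention rather than anything from FTL: a failed check doesn’t refuse to run or print an accusation, it just quietly poisons a value that nothing in the game will trip over until the player is well into the dungeon.

    /* Illustration of a delayed-failure check; hypothetical, not FTL's code. */
    #include <stdint.h>

    extern int looks_like_original(void);   /* e.g. the sector-247 or fuzzy-bit test sketched above */

    static uint16_t timer_scale = 1;        /* a made-up value used deep inside the game loop */

    void one_of_many_protection_checks(void)
    {
        if (!looks_like_original())
            timer_scale = 0;                /* nothing visible happens here; the game only
                                               starts to misbehave once this value is used */
    }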

Dungeon Master still stands as one of copy protection’s — or, if you like, DRM’s — relatively few absolutely clear, unblemished success stories. It took crackers more than a year, an extraordinary amount of time by their usual standards, to wrap their heads around the idea of a fuzzy bit and to find all of the checks scattered willy-nilly through the code (and, in the case of “graphics.dat,” out of it). After that amount of time the sales window for any computer game, even one as extraordinary as Dungeon Master, must be closing anyway. Writing about the copy protection twenty years later, Doug Bell of FTL couldn’t resist a bit of crowing.

Dungeon Master exposed the fallacy in the claims of both the pirates and the crackers. The pirates who would never have paid for the game if they could steal it did pay for it. Despite a steadily growing bounty of fame and notoriety for cracking the game, the protection lasted more than a year. And the paying customer was rewarded with not just a minimally invasive copy-protection scheme, but, just as importantly, with the satisfaction of not feeling like a schmuck for paying for something that most people were stealing.

As the developer of both Dungeon Master and the software portion of its copy protection, I knew that eventually the copy protection would be broken, but that the longer it held out the less damage we would suffer when it was broken.

Dungeon Master had a greater than 50-percent market penetration on the Atari ST—that is, more than one copy of Dungeon Master was sold for each two Atari ST computers sold. That’s easily ten times the penetration of any other game of the time on any other platform.

So what’s the lesson? That piracy does take significant money out of the pocket of the developer and that secure anti-piracy schemes are viable.

Whether we do indeed choose to view Dungeon Master as proof of the potential effectiveness of well-crafted DRM as a whole or, as I tend to, as something of an historical aberration produced by a unique combination of personalities and circumstances, it does remain a legend among old sceners, respected as perhaps the worthiest of all the wily opponents they encountered over the years — not just technically brilliant but conceptually and even psychologically so. By its very nature, the long war between the publishers and the crackers could only be a series of delaying actions on the part of the former. For once, the delay created by Dungeon Master’s copy protection was more than long enough.

And on that note we’ll have to conclude this modest little peek behind the curtain of 1980s copy protection. Like so many seemingly narrow and esoteric topics, it only expands and flowers the deeper you go into it. People continue to crack vintage games and other software to this day, and often document their findings in far more detail than I can here. Apple II fans may want to have a look at the work of one “a2_4am” on Mastodon, while those of you who want to know more about RapidLok can dig into the C64 Preservation Project’s detailed RapidLok Handbook, which is several times the length of this article. And if all that’s far, far more information than you want — and no, I really don’t blame you — I hope this article, cursory as it’s been, has instilled some respect for the minds on both sides of the grand software-piracy wars of the 1980s.

(Sources: Beneath Apple DOS by Don Worth and Pieter Lechner; The Anatomy of the 1541 Disk Drive by Lothar Englisch and Norbert Szczepanowski; Inside Commodore DOS by Richard Immers and Gerald G. Newfeld; The Kracker Jax Revealed Trilogy; Commodore Power Play of August/September 1985; Kilobaud of July 1982; New Zealand Bits and Bytes of May 1984; Games Machine of June 1988; Transactor 5.3; 80 Microcomputing of November 1980; Byte of December 1980; Hardcore Computist #9 and #11; Midnite Software Gazette of April 1986. Online sources include Nick Andrew’s home page, the aforementioned C64 Preservation Project, and The Dungeon Master Encyclopedia. See also Jean Louis-Guérin’s paper “Atari Floppy Disk Copy Protection.” Information on the SPA’s activities comes from the archive of SPA-related material donated to the Strong Museum of Play by Doug Carlston, first fruit of my research here in Rochester.

My huge thanks to Nate Lawson for doing something of a peer review of this article prior to publication!)

 
 
