Doing Windows, Part 1: MS-DOS and Its Discontents

22 Jun

Has any successful piece of software ever deserved its success less than the benighted, unloved exercise in minimalism that was MS-DOS? The program that started its life as a stopgap under the name of “The Quick and Dirty Operating System” at a tiny, long-forgotten hardware maker called Seattle Computer Products remained a stopgap when it was purchased by Bill Gates of Microsoft and hastily licensed to IBM for their new personal computer. Archaic even when the IBM PC shipped in October of 1981, MS-DOS immediately sent half the software industry scurrying to come up with something better. Yet actually arriving at a viable replacement would absorb a decade’s worth of disappointment and disillusion, conflict and compromise — and even then the “replacement” would still have to be built on top of the quick-and-dirty operating system that just wouldn’t die.

This, then, is the story of that decade, and of how Microsoft at the end of it finally broke Windows into the mainstream.


When IBM belatedly turned their attention to the emerging microcomputer market in 1980, it was both a case of bold new approaches and business-as-usual. In the willingness they showed to work together with outside partners on the hardware and especially the software front, the IBM PC was a departure for them. In other ways, though, it was a continuation of a longstanding design philosophy.

With the introduction of the System/360 line of mainframes back in 1964, IBM had in many ways invented the notion of a computing platform: a nexus of computer models that could share hardware peripherals and that could all run the same software. To buy an IBM system thereafter wasn’t so much to buy a single computer as it was to buy into a rich computing ecosystem. Long before the saying went around corporate America that “no one ever got fired for buying Microsoft,” the same was said of IBM. When you contacted them, they sent a salesman or two out to discuss your needs, desires, and budget. Then, they tailored an installation to suit and set it up for you. You paid a bit more for an IBM, but you knew it was safe. System/360 models were available at prices ranging from $2500 per month to $115,000 per month, with the latter machine a thousand times more powerful than the former. Their systems were thus designed, as all their sales literature emphasized, to grow with you. When you needed more computer, you just contacted the mother ship again, and another dark-suited fellow came out to help you decide what your latest needs really were. With IBM, no sharp breaks ever came in the form of new models which were incompatible with the old, requiring you to remake from scratch all of the processes on which your business depended. Progress in terms of IBM computing was a gradual evolution, not a series of major, disruptive revolutions. Many a corporate purchasing manager loved them for the warm blanket of safety, security, and compatibility they provided. “Once a customer entered the circle of 360 users,” noted IBM’s president Thomas Watson Jr., “we knew we could keep him there a very long time.”

The same philosophy could be seen all over the IBM PC. Indeed, it would, as much as the IBM name itself, make the first general-purpose IBM microcomputer the accepted standard for business computing on the desktop, just as were their mainframe lines in the big corporate data centers. You could tell right away that the IBM PC was both built to last and built to grow along with you. Opening its big metal case revealed a long row of slots just waiting to be filled, thereby transforming it into exactly the computer you needed. You could buy an IBM PC with one or two floppy drives, or more, or none; with a color or a monochrome display card; with anywhere from 16 K to 256 K of RAM.

But the machine you configured at time of purchase was only the beginning. Both IBM and a thriving aftermarket industry would come to offer heaps more possibilities in the months and years that followed the release of the first IBM PC: hard drives, optical drives, better display cards, sound cards, ever larger RAM cards. And even when you finally did bite the bullet and buy a whole new machine with a faster processor, such as 1984’s PC/AT, said machine would still be able to run the same software as the old, just as its slots would still be able to accommodate hardware peripherals scavenged from the old.

Evolution rather than revolution. It worked out so well that the computer you have on your desk or in your carry-on bag today, whether you prefer Windows, OS X, or Linux, is a direct, lineal descendant of the microcomputer IBM released more than 35 years ago. Long after IBM themselves got out of the PC game, and long after sexier competitors like the Commodore Amiga and the first and second generation Apple Macintosh have fallen by the wayside, the beast they created shambles on. Its long life is not, as zealots of those other models don’t hesitate to point out, down to any intrinsic technical brilliance. It’s rather all down to the slow, steady virtues of openness, expandability, and continuity. The timeline of what’s become known as the “Wintel” architecture in personal computing contains not a single sharp break with the past, only incremental change that’s been carefully managed — sometimes even technologically compromised in comparison to what it might have been — so as not to break compatibility from one generation to the next.

That, anyway, is the story of the IBM PC on the hardware side, and a remarkable story it is. On the software side, however, the tale is more complicated, thanks to the failure of IBM to remember the full lesson of their own System/360.

At first glance, the story of the IBM PC on the software side seems to be just another example of IBM straining to offer a machine that can be made to suit every potential customer, from the casual home user dabbling in games and BASIC to the most rarefied corporate purchaser using it to run mission-critical applications. Thus when IBM announced the computer, four official software operating paradigms were also announced. One could use the erstwhile quick-and-dirty operating system that was now known as MS-DOS; one could use CP/M, the standard for much of pre-IBM business microcomputing, from which MS-DOS had borrowed rather, shall we say, extensively (remember the latter’s original name?); one could use an innovative cross-platform environment, developed by the University of California San Diego’s computer-science department, that was based around the programming language Pascal; or one could choose not to purchase any additional operating software at all, instead relying on the machine’s built-in ROM-hosted Microsoft BASIC environment, which wasn’t at all dissimilar from those the same company had already provided for many or most of the other microcomputers on the market.

In practice, though, this smorgasbord of possibilities only offered one remotely appetizing entrée in the eyes of most users. The BASIC environment was really suited only to home users wanting to tinker with simple programs and save them on cassettes, a market IBM had imagined themselves entering with their first microcomputer but had in reality priced themselves out of. The UCSD Pascal system was ahead of its time with its focus on cross-platform interoperability, accomplished using a form of byte code that would later inspire the Java virtual machine, but it was also rather slow, resource-hungry, and, well, just kind of weird — and it was quite expensive as well. CP/M ought to have been poised for success on the new machine given its earlier dominance, but its parent company Digital Research was unconscionably late making it available for the IBM PC, taking until well after the machine’s October 1981 launch to get it ported from the eight-bit Intel 8080 and Zilog Z80 processors of earlier machines to the Intel 8086/8088 architecture of the IBM PC and its successor models — and when CP/M finally did appear it was, once again, expensive.

That left MS-DOS, which worked, was available, and was fairly cheap. As corporations rushed out to purchase the first safe business microcomputer at a pace even IBM had never anticipated, MS-DOS relegated the other three solutions to a footnote in computing history. Nobody’s favorite operating system, it was about to become the most popular one in the world.

The System/360 line that had made IBM the 800-pound gorilla of large-scale corporate data-processing had used an operating system developed in-house with an eye toward the future every bit as pronounced as that evinced by the same line’s hardware. The emerging IBM PC platform, on the other hand, had gotten only half of that equation down. MS-DOS was locked into the 1 MB address space of the Intel 8088, of which IBM’s design reserved the upper 384 K for video memory, the BIOS, and expansion ROMs — leaving just 640 K of RAM for the operating system and applications combined. When newer Intel processors with larger address spaces began to appear in new IBM models as early as 1984, software and hardware makers and ordinary users alike would be forced to expend huge amounts of time and effort on ugly, inefficient hacks to get around the problem.

Infamous though the 640 K barrier would become, memory was just one of the problems that would dog MS-DOS programmers throughout the operating system’s lifetime. True to its post-quick-and-dirty moniker of the Microsoft Disk Operating System, most of its 27 function calls involved reading and writing to disks. Otherwise, it allowed programmers to read the keyboard and put text on the screen — and not much of anything else. If you wanted to show graphics or play sounds, or even just send something to the printer, the only way to do it was to manually manipulate the underlying hardware. Here the huge amount of flexibility and expandability that had been designed into the IBM PC’s hardware architecture became a programmer’s nightmare. Let’s say you wanted to put some graphics on the screen. Well, a given machine might have an MDA monochrome video card or a CGA color card, or, soon enough, a monochrome Hercules card or a color EGA card. You the programmer had to build into your program a way of figuring out which one of these your host had, and then had to write code for dealing with each possibility on its own terms.

An example of how truly ridiculous things could get is provided by WordPerfect, the most popular business word processor from the mid-1980s on. WordPerfect Corporation maintained an entire staff of programmers whose sole job function was to devour the technical specifications and command protocols of each new printer that hit the market and write drivers for it. Their output took the form of an ever-growing pile of disks that had to be stuffed into every WordPerfect box, even though only one of them would be of any use to any given buyer. Meanwhile another department had to deal with the constant calls from customers who had purchased a printer for which they couldn’t find a driver on their extant mountain of disks, situations that could be remedied in the era before widespread telecommunications only by shipping off yet more disks. It made for one hell of a way to run a software business; at times the word processor itself could almost feel like an afterthought for WordPerfect Printer Drivers, Inc.

But the most glaringly obvious drawback to MS-DOS stared you in the face every time you turned on the computer and were greeted with that blinking, cryptic “C:\>” prompt. Hackers might have loved the command line, but it was a nightmare for a secretary or an executive who saw the computer only as an appliance. MS-DOS contrived to make everything more difficult through its sheer primitive minimalism. Think of the way you work with your computer today. You’re used to having several applications open at once, used to being able to move between them and cut and paste bits and pieces from one to the other as needed. With MS-DOS, you couldn’t do any of this. You could run just one application at a time, which would completely fill the screen. To do something else, you had to shut down the application you were currently using and start another. And if what you were hoping to do was to use something you had made in the first application inside the second, you could almost always forget about it; every application had its own proprietary data formats, and MS-DOS didn’t provide any method of its own of moving data from one to another.

Of course, the drawbacks of MS-DOS spelled opportunity for those able to offer ways to get around them. Thus Lotus Corporation became one of the biggest software success stories of the 1980s by making Lotus 1-2-3, an unwieldy colossus that integrated a spreadsheet, a database manager, and a graph- and chart-maker into a single application. People loved the thing, bloated though it was, because all of its parts could at least talk to one another.

Other solutions to the countless shortcomings of MS-DOS, equally inelegant and partial, were rampant by the time Lotus 1-2-3 hit it big. Various companies published various types of hacks to let users keep multiple applications resident in memory, switching between them using special arcane key sequences. Various companies discussed pacts to make interoperable file formats for data transfer between applications, although few of them got very far. Various companies made a cottage industry out of selling pre-packaged printer drivers to other developers for use in their applications. People wrote MS-DOS startup scripts that brought up easy-to-choose-from menus of common applications on bootup, thereby insulating timid secretaries and executives alike from the terrifying vagueness of the command line. And everybody seemed to be working a different angle when it came to getting around the 640 K barrier.

All of these bespoke solutions constituted a patchwork quilt which the individual user or IT manager would have to stitch together for herself in order to arrive at anything like a comprehensive remedy for MS-DOS’s failings. But other developers had grander plans, and much of their work quickly coalesced around various forms of the graphical user interface. Initially, this fixation may sound surprising if not inexplicable. A GUI built using a mouse, menus, icons, and windows would seem to fix only one of MS-DOS’s problems, that being its legendary user-unfriendliness. What about all the rest of its issues?

As it happens, when we look closer at what a GUI-based operating environment does and how it does it, we find that it must or at least ought to carry with it solutions to MS-DOS’s other issues as well. A windowed environment ideally allows multiple applications to be open at one time, if not actually running simultaneously. Being able to copy and paste pieces from one of those open applications to another requires interoperable data formats. Running or loading multiple applications also means that one of them can’t be allowed to root around in the machine’s innards indiscriminately, lest it damage the work of the others; this, then, must mark the end of the line for bare-metal programming, shifting the onus onto the system software to provide a proper layer of high-level function calls insulating applications from a machine’s actual or potential hardware. And GUIs, given that they need to do all of the above and more, are notoriously memory-hungry, which obligated many of those who made such products in the 1980s to find some way around MS-DOS’s memory constraints. So, a GUI environment proves to be much, much more than just a cutesy way of issuing commands to the computer. Implementing one on an IBM PC or one of its descendants meant that the quick-and-dirty minimalism of MS-DOS had to be chucked forever.

Some casual histories of computing would have you believe that the entire software industry was rigidly fixated on the command line until Steve Jobs came along to show them a better way with the Apple Macintosh, whereupon they were dragged kicking and screaming into computing’s necessary future. Such histories generally do acknowledge that Jobs himself got the GUI religion after a visit to the Xerox Palo Alto Research Center in December of 1979, but what tends to get lost is the fact that he was hardly alone in viewing PARC’s user-interface innovations as the natural direction for computing to go in the more personal, friendlier era of high technology being ushered in by the microcomputer. Indeed, by 1981, two years before a GUI made its debut on an Apple product in the form of the Lisa, seemingly everyone was already talking about them, even if the acronym itself had yet to be invented. This is not meant to minimize the hugely important role Apple really would play in the evolution of the GUI; as we’ll see to a large extent in the course of this very series of articles, they did much original formative work that has made its way into the computer you’re probably using to read these words right now. It’s rather just to say that the complete picture of how the GUI made its way to the personal computer is, as tends to happen when you dig below the surface of any history, more variegated than a tidy narrative of “A caused B which caused C” allows for.

In that spirit, we can note that the project destined to create the MS-DOS world’s first GUI was begun at roughly the same time that a bored and disgruntled Steve Jobs over at Apple, having been booted off the Lisa project, seized control of something called the Macintosh, planned at the time as an inexpensive and user-friendly computer for the home. This other pioneering project in question, also started during the first quarter of 1981, was the work of a brief-lived titan of business software called VisiCorp.

VisiCorp had been founded by one Dan Fylstra under the name of Personal Software in 1978, at the very dawn of the microcomputer age, as one of the first full-service software publishers, trafficking mostly in games which were submitted to him by hobbyists. His company became known for their comparatively slick presentation in a milieu that was generally anything but; MicroChess, one of their first releases, was quite probably the first computer game ever to be packaged in a full-color box rather than a Ziploc baggie. But their course was changed dramatically the following year when a Harvard MBA student named Dan Bricklin contacted Fylstra with a proposal for a software tool that would let accountants and other businesspeople automate most of the laborious financial calculations they were accustomed to doing by hand. Fylstra was intrigued enough to lend the microcomputer-less Bricklin one of his own Apple IIs — whereupon, according to legend at least, the latter proceeded to invent the electronic spreadsheet over the course of a single weekend. He teamed up with a more skilled programmer named Bob Frankston and formed a company called Software Arts to develop that rough prototype into a finished application, which Fylstra’s Personal Software published in October of 1979.

Up to that point, early microcomputers like the Apple II, Radio Shack TRS-80, and Commodore PET had been a hard sell as practical tools for business — even for their most seemingly obvious business application of all, that of word processing. Their screens could often only display 40 columns of big, blocky characters, often only in upper case — about as far away from the later GUI ideal of “what you see is what you get” as it was possible to go — while their user interfaces were arcane at best and their minuscule memories could only accommodate documents of a few pages in length. Most potential business users took one look at the situation, added on the steep price tag for it all, and turned back to their typewriters with a shrug.

VisiCalc, however, was different. It was so clearly, manifestly a better way to do accounting that every accountant Fylstra showed it to lit up like a child on Christmas morning, giggling with delight as she changed a number here or there and watched all of the other rows and columns update automagically. VisiCalc took off like nothing the young microcomputer industry had ever seen, landing tens of thousands of the strange little machines in corporate accounting departments. As the first tangible proof of what personal computing could mean to business, it prompted people to begin asking why IBM wasn’t a part of this new party, doing much to convince the latter to remedy that absence by making a microcomputer of their own. It’s thus no exaggeration to say that the entire industry of business-oriented personal computing was built on the proof of concept that was VisiCalc. It would sell 500,000 copies by January of 1983, an absolutely staggering figure for that time. Fylstra, seeing what was buttering his bread, eventually dropped all of the games and other hobbyist-oriented software from his catalog and reinvented Personal Software as VisiCorp, the first major publisher of personal-computer business applications.

But all was not quite as rosy as it seemed at the new VisiCorp. Almost from the moment of the name change, Dan Fylstra found his relationship with Dan Bricklin growing strained. The latter was suspicious of his publisher’s rebranding themselves in the image of his intellectual property, feeling they had been little more than the passive beneficiaries of his brilliant stroke. This point of view was by no means an entirely fair one. While it may have been true that Fylstra had been immensely lucky to get his hands on Bricklin’s once-in-a-lifetime innovation, he’d also made it possible by loaning Bricklin an Apple II in the first place, then done much to make VisiCalc palatable for corporate America through slick, professional packaging and marketing that projected exactly the right conservative, businesslike image, consciously eschewing the hippie ethos of the Homebrew Computer Club. Nevertheless, Bricklin, perhaps a bit drunk on all the praise of his genius, credited VisiCorp’s contribution to VisiCalc’s success but little. And so Fylstra, nervous about continuing to stake his entire company on Bricklin, set up an internal development team to create more products for the business market.

By the beginning of 1981, the IBM PC project which VisiCalc had done so much to prompt was in full swing, with the finished machine due to be released before the end of the year. Thanks to their status as publisher of the hottest application in business software, VisiCorp had been taken into IBM’s confidence, one of a select number of software developers and publishers given access to prototype hardware in order to have products ready to go on the day the new machine shipped. It seems that VisiCorp realized even at this early point how underwhelming the new machine’s various operating paradigms were likely to be, for even before they had actual IBM hardware to hand, they started mocking up the GUI environment that would become known as Visi On using Apple II and III machines. Already at this early date, it reflected a real, honest, fundamental attempt to craft a more workable model for personal computing than the nightmare that MS-DOS alone could be. William Coleman, the head of the development team, later stated in reference to the project’s founding goals that “we wanted users to be able to have multiple programs on the screen at one time, ease of learning and use, and simple transfer of data from one program to another.”

Visi On seemed to have huge potential. When VisiCorp demonstrated an early version, albeit far later than they had expected to be able to, at a trade show in December of 1982, Dan Fylstra remembers a rapturous reception, “competitors standing in front of [the] booth at the show, shaking their heads and wondering how the company had pulled the product off.” It was indeed an impressive coup; well before the Apple Macintosh or even Lisa had debuted, VisiCorp was showing off a full-fledged GUI environment running on hardware that had heretofore been considered suitable only for ugly old MS-DOS.

Still, actually bringing a GUI environment to market and making a success out of it was a much taller order than it might have first appeared. As even Apple would soon be learning to their chagrin, any such product trying to make a go of it within the increasingly MS-DOS-dominated culture of mainstream business computing ran headlong into a whole pile of problems which lacked clearly best solutions. Visi On, like almost all of the GUI products that would follow for the IBM hardware architecture, was built on top of MS-DOS, using the latter’s low-level function calls to manage disks and files. This meant that users could install it on their hard drive and pop between Visi On and vanilla MS-DOS as the need arose. But a much thornier question was that of running existing MS-DOS applications within the Visi On environment. Those which assumed they had full control of the system — which was practically all of them, because why wouldn’t they? — would flame out as soon as they tried to directly access some piece of hardware that was now controlled by Visi On, or tried to put something in some specific place inside what was now a shared pool of memory, or tried to do any number of other now-forbidden things. VisiCorp thus made the hard decision to not even try to get existing MS-DOS applications to run under Visi On. Software developers would have to make new, native applications for the system; Visi On would effectively be a new computing platform unto itself.

This decision was questionable in commercial if not technical terms, given how hard it would inevitably be to get a new platform accepted in an MS-DOS-dominated marketplace. But VisiCorp then proceeded to make the problem even worse. It would only be possible to program Visi On, they announced, after purchasing an expensive development kit and installing it on a $20,000 DEC PDP-11 minicomputer. They thus chose an approach similar to the one Apple was taking with the Lisa: to allow that machine to be programmed only by yoking it up to a second Lisa. In thus betraying the original promise of the personal computer as an anything machine which ordinary users could program to do their will, both Visi On and the Lisa operating system arguably removed their hosting hardware from that category entirely, converting it into a closed electronic appliance more akin to a game console. Taxonomical debates aside, the barriers to entry even for one who wished merely to use Visi On to run store-bought applications were almost as steep: when this first MS-DOS-based GUI finally shipped on December 16, 1983, after a long series of postponements, it required a machine with 512 K of memory and a hard drive to run and cost more than $1000 to buy.

Visi On was, as the technology pundits like to say, “ahead of the hardware market.” In quite a number of ways it was actually far more ambitious than what would emerge a month or so after it as the Apple Macintosh. Multiple Visi On applications could be open at the same time (although they didn’t actually run concurrently), and a surprisingly sophisticated virtual-memory system was capable of swapping out pages to hard disk if software tried to allocate more memory than was physically available on the computer. Similar features wouldn’t reach MacOS until 1987’s System 5 and 1991’s System 7 respectively.

In the realm of usability, however, Visi On unquestionably fell down in comparison to Apple’s work. The user interfaces for the Lisa and the Macintosh made almost all the right choices right from the beginning, expanding upon the work done at Xerox PARC in all the right ways. Many of the choices made by VisiCorp, on the other hand, feel far more dubious today — and, one has to believe, not just out of the contempt bred by all those intervening decades of user interfaces modeled on Apple’s. Consider the task of moving and sizing windows on the screen, which was implemented so elegantly on the original Lisa and Macintosh that it’s been changed not at all in all the decades since. While Visi On too allows windows to be sized and placed where you will, and allows them to overlay one another — something by no means true of all of the MS-DOS GUI systems that would follow — doing so is a clumsy process involving picking options out of menus rather than simply dragging title bars or sizing widgets. In fact, Visi On uses no icons whatsoever. For anyone still enamored with the old saw that Apple just ripped off the Xerox PARC interface in its entirety and stuck it on the Lisa and Mac, Visi On, being much more slavishly based on the PARC model, provides an instructive demonstration of how far the likes of the Xerox Alto still was from the intuitive ease of Apple’s interface.

A Quick Tour of Visi On


With mice still exotic creatures, VisiCorp provided their own to work with Visi On. Many other early GUI-makers, Microsoft among them, would follow their lead.

Visi On looks like this upon booting up on an original IBM PC with 640 K of memory and a CGA video card, running in high-resolution monochrome mode at 640 × 200. “Services” is Visi On’s terminology for installed applications. The list of them which you see here, all provided by VisiCorp themselves, are the only ones that would ever exist, thanks to Visi On’s complete commercial failure.

We’ve started up a spreadsheet, a graphing application, and a word processor at the same time. These don’t actually run concurrently, as they would under a true multitasking operating system, but are visible onscreen in their separate windows, becoming active when we click them. (Something similar would not have been possible under MacOS prior to 1987.)

Although Visi On does sport windows that can be sized and placed anywhere and can overlap one another, arranging them is made extremely tedious by its lack of any concept of mouse-dragging; the mouse can only be used for single clicks. So, you have to click the “Frame” menu option and see its instructions through step by step. Note also the lack of pull-down menus, another of Apple’s expansions upon the work done at Xerox PARC. Menus here are just one-shot commands, akin to what a modern GUI user would call a button.

Fortunately, you can make a window full-screen with just a couple of clicks. Unfortunately, you then have to laboriously re-“Frame” it when you want to shrink it again; it doesn’t remember where it used to be.

The lack of a mouse-drag affordance makes the “Transfer” function — Visi On’s version of copy-and-paste — extremely tedious.

And, as with most things in Visi On, transferring data is also slow. Moving that little snippet of text from the word processor to the spreadsheet took about ten seconds.

On the plus side, Visi On sports a help system that’s crazily comprehensive for its time — much more so than the one that would ship with MacOS or, for that matter, Microsoft Windows for quite some years.

As if it didn’t have enough intrinsic problems working against it, extrinsic ones also contrived to undo Visi On in the marketplace. By the time it shipped, VisiCorp was a shadow of what they had so recently been. VisiCalc sales had collapsed over the past year, going from nearly 40,000 units in December of 1982 alone to fewer than 6000 units in December of 1983 in the face of competing products — most notably the burgeoning juggernaut Lotus 1-2-3 — and what VisiCorp described as Software Arts’s failure to provide “timely upgrades” amidst a relationship that was growing steadily more tense. With VisiCorp’s marketplace clout thus dissipating like air out of a balloon, it was hardly the ideal moment for them to ask for the sorts of commitments from users and developers required by Visi On.

The very first MS-DOS-based GUI struggled along with no uptake whatsoever for nine months or so; the only applications made for it were the word processor, spreadsheet, and graphing program VisiCorp made themselves. In September of 1984, with VisiCorp and Software Arts now embroiled in a court battle that would benefit only their competitors, the Visi On technology was sold to a veteran manufacturer of mainframes and supercomputers called Control Data Corporation, who proceeded to do very little if anything with it. VisiCorp went bankrupt soon after, while Lotus bought out Software Arts for a paltry $800,000, thus ending the most dramatic boom-and-bust tale of the early business-software industry. “VisiCorp’s auspicious climb and subsequent backslide,” wrote InfoWorld magazine, “will no doubt become a ‘how-not-to’ primer for software companies of the future.”

Visi On’s struggles may have been exacerbated by the sorry state of its parent company, but time would prove them to be by no means atypical of MS-DOS-based GUI systems in general. Already in February of 1984, PC Magazine could point to at least four other GUIs of one sort or another in the works from other third-party developers: Concurrent CP/M with Windows by Digital Research, VisuALL by Trillian Computer Corporation, DesqView by Quarterdeck Office Systems, and WindowMaster by Structured Systems. All of these would make different choices in trying to balance the seemingly hopelessly competing priorities of reasonable speed and reasonable hardware requirements, compatibility with MS-DOS applications and compatibility with post-MS-DOS philosophies of computing. None would find the sweet spot. Neither they nor the still more numerous GUI environments that followed them would be able to offer a combination of features, ease of use, and price that the market found compelling, so much so that by 1985 the whole field of MS-DOS GUIs was coming to be viewed with disdain by computer users who had been disappointed again and again. If you wanted a GUI, went the conventional wisdom, buy a Macintosh and live with the paltry software selection and the higher price. The mainstream of business computing, meanwhile, continued to truck along with creaky old MS-DOS, a shaky edifice made still more unstable by all of the hacks being grafted onto it to expand its memory model or to force it to load more than one application at a time. “Windowing and desktop environments are a solution looking for a problem,” said Robert Lefkowits, director of software services for Infocorp, in the fall of 1985. “Users aren’t really looking for any kind of windowing environment to solve problems. Users are not expressing a need or desire for it.”

The reason they weren’t, of course, was because they hadn’t yet seen a GUI in which the pleasure outweighed the pain. Entrenched as users were in the old way of doing things, accepting as they had become of all of MS-DOS’s discontents as simply the way computing was, it was up to software developers to show them why a GUI was something they had never known they couldn’t live without. Microsoft at least, the very people who had saddled their industry with the MS-DOS albatross, were smart enough to realize that mainstream business computing must be remade in the image of the much-scoffed-at Macintosh at some point. Further, they understood that it behooved them to do the remaking if they didn’t want to go the way of VisiCorp. By the time Lefkowits said his words, the long, winding tale of dogged perseverance in the face of failure and frustration that would become the story of Microsoft Windows had already been playing out for several years. One of these days, the GUI was going to make its breakthrough in one way or another, and it was going to do so with a Microsoft logo on its box — even if Bill Gates had to personally ram it down his customers’ throats.

(Sources: the books The Making of Microsoft: How Bill Gates and His Team Created the World’s Most Successful Software Company by Daniel Ichbiah and Susan L. Knepper and Computer Wars: The Fall of IBM and the Future of Global Technology by Charles H. Ferguson and Charles R. Morris; InfoWorld of October 31 1983, November 14 1983, April 2 1984, July 2 1984, and October 7 1985; Byte of June 1983 and July 1983; PC Magazine of February 7 1984 and October 2 1984; the episode of the Computer Chronicles television program called “Integrated Software.” Finally, I owe a lot to Nathan Lineback for the histories, insights, comparisons, and images found at his wonderful online “GUI Gallery.”)


  1. MS-DOS was known as PC-DOS when sold directly under license by IBM. Its functionality, however, was almost if not entirely identical to the Microsoft-branded version. For simplicity’s sake, I will just refer to “MS-DOS” whenever speaking about either product — or, more commonly, both — in the course of this series of articles. 

 


56 Responses to Doing Windows, Part 1: MS-DOS and Its Discontents

  1. Tomber

    June 22, 2018 at 4:05 pm

    It is such a dense history, that I look forward to seeing which markers you find to be the significant ones.

    I think it is important to note that IBM supposedly was so lax in operating system control on the IBM PC because they were still operating under a consent decree with the US government from a prior anti-trust action (or fearful of triggering another). That’s where many of the ‘dumb’ decisions came from.

     
    • Jimmy Maher

      June 23, 2018 at 9:19 am

      The legal aspects of IBM’s situation are something I wish I’d given proper consideration to in my articles on the making of the IBM PC way back in the day. Unfortunately, these articles aren’t quite the right place to try to get into it now. That will probably have to wait for an updated version of those older articles — maybe for an eventual book about the history of the PC architecture? As it is, though, I’m acutely conscious that a significant piece of the story is being left untold. (A similar situation holds true with my article on the rise of the PC clones: man, I wish I’d given more consideration to the legal aspects of reverse-engineering the original IBM BIOS.)

      As it is, though, yeah, this is a big topic, one that’s been really challenging to corral into a readable narrative. Apart from Visi On, whose pioneering status made it a story worth telling, I’ve tried to limit the principal players to Microsoft, IBM, and to some extent Apple in the interest of not missing the forest for the trees. The story of the shifting alliances among those three should make for good reading if I can pull it off. Wish me luck!

       
      • Tomber

        June 23, 2018 at 11:59 pm

        Yes, people want a readable story and narrative, and history often isn’t so convenient. But you’ve already tackled worse. :)

        Another potential strand is Intel, and the 386, which was really the first post-PC design. IBM didn’t adopt it very quickly, while Compaq did. While OS/2 1.x was a 286-targeted system, the 386 had virtual machine capabilities that were used by Compaq’s EMM386.EXE and Microsoft’s Windows/386 (and finally 3.0). These had a huge impact that ultimately led to Microsoft’s total control and IBM’s defeat. Was that in some sense planned? Most people don’t realize that Windows 3.0 (and probably /386) in 386 Enhanced Mode actually was a 32-bit operating system, very cleverly interwoven with the DOS real mode drivers and data structures (exactly like 32-bit DOS extenders). Windows 95 gets the credit for being 32-bit, because it expanded it to the Windows API. Yet Windows 3.1 could already run some 32-bit apps with ‘Win32s,’ because it had always had a 32-bit kernel. And Windows extenders allowed the same years earlier, back to Windows 3.0.

        The traditional story is that IBM stuck to the 286 because they wanted to protect their customers’ investment in the AT; perhaps an alternative is the 386 was recognized as a way to pull the rug out from under IBM, to the benefit of everybody else. And maybe IBM had seen that too, but bet the wrong way (again!). Once the 386 entered the scene, whoever commands ring 0 owns the computer. Microsoft was there repeatedly, and early.

         
  2. Ken Brubaker

    June 22, 2018 at 4:19 pm

    hacks being grated onto it

    Should be grafted?

     
    • Jimmy Maher

      June 22, 2018 at 7:38 pm

      Thanks!

       
  3. Alex Smith

    June 22, 2018 at 4:29 pm

    much more than just a cutsey was of issuing commands to the computer

    way

     
    • Jimmy Maher

      June 22, 2018 at 7:39 pm

      Thanks!

       
  4. stepped pyramids

    June 22, 2018 at 4:39 pm

    This is a good piece, but I think it somewhat overstates the degree to which PC hardware compatibility is a flaw in MS-DOS. Previous microcomputer platforms didn’t have that problem because they had fairly limited expansion options, and even on platforms where expansion ports were readily available (like the Apple II) your software had to be written specifically to use the hardware.

    Apple’s solution to the same problem with the Macintosh was to create its own hardware ecosystem and protocols. This wasn’t really an option in the larger and less centralized PC market. And third party hardware support was as much of a headache for Macs as for PCs — “extension” was as scary a term for a Mac user as “TSR” was for a DOS user.

    And I think Visi On and the other failed GUIs of the time demonstrate that the hardware just wasn’t ready for a more sophisticated operating system yet. It wasn’t until the 386 introduced protected mode that it was feasible to implement a “real” OS for the platform.

     
    • Jimmy Maher

      June 23, 2018 at 9:32 am

      I’m having trouble parsing your statement that “PC hardware compatibility is a flaw in MS-DOS.” Not only is that not anything I wrote, but it just doesn’t make sense to me on even a basic semantic level.

      My argument is that there was a mismatch on the IBM PC between a hardware architecture that clearly was designed to grow into a long-term future and an operating system that was not, as most infamously demonstrated by the 640 K barrier. That other popular platforms were not well-equipped to grow is immaterial to this argument except as the status quo illuminating the aspect of the IBM PC — its open hardware architecture that was amenable to almost infinite expansion — that made it so special, and would prove its one saving grace in a marketplace full of slicker, sexier machines.

      I have no real quibble, on the other hand, with the statement that Intel hardware probably wasn’t fully ready for GUIs until the 80386 arrived. Indeed, this will become a bit of a running theme as we look at the early travails of Windows.

       
      • stepped pyramids

        June 24, 2018 at 4:16 am

        it was already dawning on careful observers how much more difficult IBM had made everyone’s lives through their eagerness to just get an operating system — any operating system, even a quick-and-dirty one — onto their new microcomputer and get it out the door…
        …memory was just one of the things that MS-DOS handled poorly — or, perhaps better said, failed to handle at all. True to its post-quick-and-dirty moniker of the Microsoft Disk Operating System…
        If you wanted to show graphics or play sounds, or even just send something to the printer, the only way to do it was to manually manipulate the underlying hardware. Here the huge amount of flexibility and expandability that had been designed into the IBM PC’s hardware architecture became a programmer’s nightmare.

        This comes off to me as at least strongly suggesting that the reason PCs had so many hardware compatibility issues is because IBM chose a “quick-and-dirty” operating system like MS-DOS. I’m simply arguing that there existed no alternative operating system — real or hypothetical — that would have handled hardware substantially better. I’m not disputing your overall point that MS-DOS had design flaws that hobbled the PC platform in the longer term, I just don’t think this particular issue is one you can pin on the OS.

         
        • Jimmy Maher

          June 24, 2018 at 8:44 am

          Okay. It’s a reasonable argument, and I agree that there was no other operating system sitting around on the shelf which IBM/Microsoft could have scooped up and employed. I’m not so sure, however, about *hypothetical* operating systems. Visi On implemented virtual memory, multitasking, and device drivers on the original IBM PC hardware. It’s interesting to consider whether an operating system could have been devised that could have been updated fairly easily to make use of the 80286’s protected mode; given the historically close relationship between IBM and Intel, the former likely knew that the 80286 was in the works at the time the IBM PC was being developed. Could an operating system have been devised that would have run in protected mode on an 80286 from the beginning, returning pointers to virtual addresses, while maintaining compatibility with existing applications? Food for thought anyway!

          Of course, such a beast would have required more hardware than IBM, who dreamed at the beginning of making the IBM PC a fixture in homes as well as businesses, was willing to demand that their customers purchase. It also would have required an extra year or two to develop. IBM, concerned that they were missing out on the next big wave in computing, felt that they needed to get a product out there quickly. The hardware could indeed be put together quickly; the software side was much more problematic, but they felt they had to just make do and deal with the consequences later. Ergo, MS-DOS.

          Note that my intention isn’t really to blame IBM (or Microsoft) or to preach about what they should have done. Computer engineering is always going to be a negotiation between the ideal approach and those practical necessities that are generally quantified in terms of money and time. If I state things a little forcefully here, that’s largely out of a desire to make sure my point doesn’t get lost by the general reader. I do handle these issues with a bit more nuance elsewhere, in my original history of the IBM PC and my article on the 640 K barrier. My real priority here is to hammer home just why everyone was in such a hurry to build something better on top of MS-DOS.

           
          • stepped pyramids

            June 26, 2018 at 2:27 pm

            I appreciate your willingness to consider the feedback! The article is very good overall (as usual). I just think there’s a tendency in pop history of computing to view IBM/MS/the PC in a more cynical light while having a certain degree of rose-colored glasses for the underdogs (Mac/Amiga/etc.). Then again, I also think it’s valuable to express the genuine frustration computer users and programmers had with the PC at the time. I think the updates you’ve made to the article strike a good balance.

             
  5. Andy

    June 22, 2018 at 5:12 pm

    A few minor points of correction:
    1) The PC/XT was in fact entirely identical to the original PC, with only the addition of the MFM hard drive controller and hard drive consuming the second floppy bay. It did not have a faster processor. The AT, however, did step up to a 6 MHz 80286.
    2) The TRS-80 always had a 64-column display, not 40, but it’s a minor point at best. :-) Also, there were quite a variety of non-dot-matrix printers available. There were several popular daisy wheel models that at least produced typewriter quality output. To that point about the early 8-bit machines being a “hard sell,” I’d argue that it had much more to do with the fact that their 64K addressable limit virtually always meant that the user interfaces to their word processors and databases were necessarily even more arcane and bizarre than the applications of the MS-DOS era, and could handle only much smaller documents and data sets to boot. By the time you finished loading the program, you were lucky to have a few K left for the actual user data!

    Also, not technically a “correction”, but the original monochrome IBM branded display adapter had no graphic capability whatsoever, other than character-based line graphics. This was the main selling point of the original Hercules adapter, which could actually do true dot-based graphics on the monochrome display in Lotus 1-2-3, in addition to being otherwise compatible with the original text mode. However, the uber-point is correct in that there were no operating system aids to dealing with those hardware capabilities or lack thereof.

     
    • Jimmy Maher

      June 23, 2018 at 9:46 am

      Thanks for this! Made a few edits. For some reason, I’ve had it stuck in my head since forever that it was the PC/XT that introduced the “turbo-mode” 8 MHz 8088…

       
      • whomever

        June 23, 2018 at 3:49 pm

        The XT didn’t introduce Turbo, but it did have 8 slots (vs 5) and dropped cassette support (because no one used it).

         
  6. tedder

    June 22, 2018 at 5:16 pm

    “incompatible with the old, requiring you to remake”
    -> old, which would have required you to remake
    (prevents someone from misreading the sentence fragment)

    “just kind of weird — and it was quite expensive to boot”
    -> remove “to boot”? some possible ambiguity on the meaning- especially without a comma

    “having been booted off the Lisa project”
    -> kicked off?

    “hacks being grated onto it”
    -> grafted

    “in the fall of 1985”
    -> Fall

    ultimately this is a great breakdown of the disadvantages of switching costs and advantages of the ‘worse is better’ philosophy. Windows didn’t make much of a leap until it became more than a layer on top of DOS. Of course, starting with that is part of why OS/2 was late and Visi On failed. (personally, the first gui/graphical directory browser I used on dos was Norton Commander)

    I keep wanting to go back and dabble with building a ‘modern 8080’ simply so I can write on top of those old DOS functions, using support chips like the ESP32 to support it. But at that point I could just virtualize the whole thing. IDK.

     
    • Jimmy Maher

      June 22, 2018 at 7:42 pm

      Thanks, but I’m going to call writer’s discretion in this case on all except the penultimate point. ;)

       
      • tedder

        June 23, 2018 at 6:40 am

        Of course! As you should.

         
  7. Brian

    June 22, 2018 at 5:51 pm

    I grew up with MS-DOS starting at Christmas in 1985… it’s fascinating that such an antiquated operating system lasted into the 90s for so many. The 640k barrier seemed as insurmountable as the speed of light in those days. Voodoo memory for Ultima VII was something else!

    Another mystery solved as well… I’ve heard of VisiCalc, but never understood why they disappeared so quickly. In a way, there is a parallel with Infocom – spending tons of effort and dollars on a large unwieldy system that didn’t really do a great job.

    Another revealing piece Jimmy!

     
  8. Jim Leonard

    June 22, 2018 at 5:51 pm

    It is unfortunate that the screenshots are limited to 600 pixels wide, when they contain 640 pixel columns. Is there any way they can be forced to display 640-wide? They have some visually unappealing scaling artifacts otherwise.

     
    • Jimmy Maher

      June 23, 2018 at 10:02 am

      Can’t say I can see any artifacts. You must have a better eye than I. ;)

      Unfortunately, I scaled the screenshots to suit the blog’s width before I uploaded them. I’d have to recreate them all from scratch, and I’d still have to stretch them vertically in order to fix the non-square-pixel issue and get the correct aspect ratio. (Not sure if this would introduce more artifacts of its own.)

      For what it’s worth, there are a lot of other screenshots here: http://toastytech.com/guis/vision.html. These have the correct width and are scaled up vertically to 400 pixels. This means that the aspect ratio still isn’t quite right, but there shouldn’t be any artifacts, thanks to the vertical scaling by an integral factor. Pick your poison, I guess. ;)

      Will try to preserve the original width on future screenshots. (Although not the ones in the next couple of articles, since they’re already completed and in place.)

       
  9. Lisa H.

    June 22, 2018 at 8:12 pm

    that blinking, cryptic “C>” prompt

    C:\> ? Or did early versions not have the colon and backslash? (I think the first MS-DOS I saw was 3.x.)

    This bit from MST3K is specifically about Mac vs. PC, but germane, I think: https://www.youtube.com/watch?v=ixQE496Pcn8 (Or, there’s also this bit https://youtu.be/vmNZwJTp9nM?t=1h5m17s)

    I’d never even heard of Visi On (man, do my 21st century sensibilities for such names want to remove that space) before this post. Complete commercial failure, indeed.

    My dad was so attached to Lotus 1-2-3 that he nursed it way past its fall from grace. I forget when he finally gave up still trying to get whatever version he had to work under modern Windows, probably sometime in the 2000s – although I’m surprised to learn in my quick Google for dates that IBM didn’t actually end support for it until 2014 (!).

    VisuALL by Trillian Computer Corporation

    Hopefully it did not have Eddie’s personality.

    the hacks being grated onto it

    Like so much Parmesan cheese?

     
    • Jimmy Maher

      June 23, 2018 at 10:17 am

      I believe it was just “C>” at the beginning, but I’m having trouble finding a definitive screenshot to be sure. Problem was that not many people had hard drives then, so “A>” was much more common than “C>”. That said, the statement in question wasn’t really confining itself to the earliest days, so the “C:\>” is probably the better choice anyway. Thanks!

      There’s actually some question as to whether it should be “Visi On” or “VisiOn.” Some sources do have it as the latter. But most primary sources have the former, so that’s what I went with as well. It’s a terrible, overly cutesy name any way you cut it in my opinion.

      Parents and their old software… oh, man. I struggled for years to find ways to let my dad keep using his WordPerfect 5.x for DOS. Our latest problem has come since he upgraded to Windows 10 and his trusty Outlook Express doesn’t work anymore. He doesn’t really understand *why* he can’t keep using what he’s always used, and I know he’s convinced that I’m just being stubborn in trying to force him to use all this new-fangled stuff. He’s at an age now where learning new things doesn’t come easily, and it kind of breaks my heart to subject him to this, but what can I do? Leave him with Windows XP and see how many viruses he can collect?

       
      • Sniffnoy

        June 23, 2018 at 7:36 pm

        George R. R. Martin famously still does write in WordStar 4.0, having a separate DOS computer set up for this!

         
        • Brian

          June 24, 2018 at 7:33 pm

          Hah! I remember forcing my mom to change to MS Word around 2003 when we simply couldn’t get Express Write to work on her newest computer. Express was originally bought for our Tandy 1000 around 1986…

          After great reluctance she found out the MS Word was a 1000x improvement and hasn’t looked back.

           
    • Lars

      June 23, 2018 at 11:24 am

      The prompt was configurable. The default was C>, but many people had “prompt $p$g” in their autoexec.bat, which gave the C:\> prompt.

       
      • Nate

        June 24, 2018 at 7:08 am

        Aw man! I remember typing that far too many times. Ugh.

        Also wasn’t it special that the ‘prompt’ command had its very own variable-substitution syntax which was completely unrelated to cmd.exe’s % notation for environment variables.

         
    • Aula

      June 23, 2018 at 11:33 am

      *All* standalone versions of MS-DOS have a default prompt of A> or C> depending on whether you boot from a floppy or a hard drive; I don’t know if DOS 7 changed that. The earliest versions didn’t have directories, so there was no need to show anything except the drive letter. The versions that had directories were practically always started with an autoexec.bat containing at least “prompt $p$g” because 1) the presence of autoexec.bat prevents DOS from asking the user to enter date and time, and 2) showing the current path in the prompt is a virtual necessity.

       
      • Jimmy Maher

        June 23, 2018 at 11:45 am

        Ah, okay. But “C:\>” is still the best reflection of MS-DOS as most people saw it, so we’ll stay with that. Thanks!

         
  10. Jacen

    June 22, 2018 at 8:18 pm

    The very first MS-DOS-based GUI struggled along with no uptake whatsoever for nine months or so;” updates?

    ther of Apple’s expansions upon the work down at Xerox PARC. Menus here are just one-shot commands” done?

     
    • Jimmy Maher

      June 23, 2018 at 10:26 am

      Thanks, but these are as intended.

       
  11. Andre

    June 22, 2018 at 9:20 pm

    I have to say it, I just can’t get enough of your non-gaming and more hardware/software culture oriented articles! Your history of the IBM PC was a truly interesting read back then and I’m excited the saga is now continued, in a way. Being socialized on the Amiga up into the early 2000s as a teen, I never had much to do with DOS (apart from trying to run old games later) and the early versions of Windows but both felt weirdly clumsy and somewhat cobbled together when I got exposed to them now and then. Looking forward to learn more about the backgrounds ;)

     
  12. Bernie

    June 22, 2018 at 10:57 pm

    Hi Jimmy. Great write-up as always. This new series looks very promising and i’m sure it’s going to become a favorite of your readers.

    But I will dare to dissent with your position that MS-DOS remained “quick and dirty” throughout its history. While it’s true and generally accepted that versions 1.0 through 3.3 were very clunky and impractical to work with, later versions became pretty sophisticated. DOS 5.0 was pretty solid and offered most of the important features of modern operating systems, multi-tasking aside (it wasn’t a market priority then) : advanced memory management, real-mode capability, menu-based shell, etc … DOS 6.22 is what Windows would be built on, and would remain its backbone until Windows 95 came along. DOS 7 was the version developed with Windows 95 and wasn’t sold separately but could be booted up without the GUI layer and gave users a lot of control over the machine. The Windows 2000-XP-Vista-7-8-10 command prompt and shell is directly descended from DOS 7 and offers pretty much all of the functionality of Linux, the king of “GUI-independent OS’s”.

    What I mean is that Microsoft pretty much evolved and modernized MS-DOS in tandem with Windows in order to arrive at the unified platform that was Windows 95 and later versions.

    This was very different from what Apple did with the Finder and System (later called MacOS), which were designed from the beginning to exist without a “DOS-like” layer. Microsoft may have been dead set on imitating the Mac System’s “look and feel” (and even got sued for it) but their software engineering had more in common with the likes of AmigaDOS/Workbench and UNIX/Xwindow than with Apple’s architecture.

     
    • Jimmy Maher

      June 23, 2018 at 10:34 am

      By the time we get to the MS-DOS 5.x era, we’re into the 1990s, and you’re absolutely right that it was being developed as much as the technological underpinning of Windows as an operating system in its own right. I’d argue, however, that it still had an awful lot of problems when taken as a standalone operating system, all of them a direct outgrowth of its “quick and dirty” origins. More on this when we get there…

       
  13. Casey Muratori

    June 22, 2018 at 11:42 pm

    I had trouble parsing “but for a secretary or an executive who saw the computer only as an appliance it was a nightmare”, and had to reread it. Maybe “but it was a nightmare for a secretary or an executive who saw the computer only as an appliance.”

    ?

    – Casey

     
    • Jimmy Maher

      June 23, 2018 at 10:36 am

      Sure. Thanks!

       
  14. Aula

    June 23, 2018 at 11:41 am

    “MGA monochrome video card”

    That should be MDA (for Monochrome Display Adapter).

    “moment moment”

    In some contexts that duplication could be appropriate, since the word has different meanings, but this isn’t one of them.

     
    • Jimmy Maher

      June 23, 2018 at 11:47 am

      Thanks!

       
  15. Aula

    June 23, 2018 at 4:06 pm

    “Unlike the hardware on which it was installed, MS-DOS had been created with no eye whatsoever to the long-term future. The most infamous sign of this was its hard-coded memory model, locked into the 1 MB address space of the Intel 8088, allowing any computer on which it ran just 640 K of RAM at the most.”

    The memory model was hard-coded in hardware by choices made by IBM. What do you think MS-DOS could have done differently about it?

     
    • Jimmy Maher

      June 23, 2018 at 5:19 pm

      A more modern, forward-thinking operating system would have taken memory-allocation tasks to itself, requiring applications to request the memory they wished to use from the operating system. Had MS-DOS done so, Microsoft could have released a new version to accompany the PC/AT, the first IBM hardware design to use memory above the 1 MB address-space barrier of the original IBM PC. There might have still been some issues with allocating huge chunks of memory, as the RAM pool would still be non-contiguous, but it would have been more akin to the Amiga’s fast/chip RAM split than the endless headache that the 640 K barrier remained for years to come.

      If there’s an original sin on the hardware side of all this, it must be placed at the feet of Intel rather than IBM. It was the former who elected to place the 8088’s reset vector at the very top of its address space. Given that situation, IBM’s engineers didn’t have much choice but to employ the memory map they used.

       
      • Aula

        June 24, 2018 at 10:09 am

        “A more modern, forward-thinking operating system would have taken memory-allocation tasks to itself, requiring applications to request the memory they wished to use from the operating system.”

        Huh? MS-DOS *did* do exactly that. What does that even have to do with the 1 MB limit?

        “Had MS-DOS done so, Microsoft could have released a new version to accompany the PC/AT, the first IBM hardware design to use memory above the 1 MB address-space barrier of the original IBM PC.”

        Well yes, new applications written specifically to be run in protected mode under the new OS could have used memory beyond 1 MB. Legacy programs written for 8088/8086 would still have been limited to the 640 kB.

        “There might have still been some issues with allocating huge chunks of memory, as the RAM pool would still be non-contiguous, but it would have been more akin to the Amiga’s fast/chip RAM split than the endless headache that the 640 K barrier remained for years to come.”

        That comparison makes no sense to me. The 68000 had 24-bit addresses so the original Amiga had a lot of unused address space. In contrast, the 8088/8086 (and the compatible real-address mode on 286 and later) is limited to 20-bit addresses, just like the Z80 and 6510 are limited to 16-bit addresses.

        “If there’s an original sin on the hardware side of all this, it must be placed at the feet of Intel rather than IBM.”

        Did Intel somehow force IBM to choose the 8088 for the original PC? If not, then IBM is the only entity to blame for the choice of a processor with completely idiotic memory addressing.

         
        • Jimmy Maher

          June 24, 2018 at 11:20 am

          “A more modern, forward-thinking operating system would have taken memory-allocation tasks to itself, requiring applications to request the memory they wished to use from the operating system.”

          Huh? MS-DOS *did* do exactly that. What does that even have to do with the 1 MB limit?

          Actually, MS-DOS 1 didn’t. There’s no concept of an application *requesting* a block of memory from the operating system which is returned in the form of a (real or virtual) pointer. Because MS-DOS was an operating system designed around the “triangle of ones” — single user, single task, single computer — it happily left memory wide open to applications to do whatever they wished with, wherever they wished.

          I bring it up because an operating system which *did* expect its applications to request the memory they wished to use would have been able to apportion memory beyond the 1 MB barrier… if it could get around the real-mode/protected-mode divide, which is admittedly an open question.

          “Had MS-DOS done so, Microsoft could have released a new version to accompany the PC/AT, the first IBM hardware design to use memory above the 1 MB address-space barrier of the original IBM PC.”

          Well yes, new applications written specifically to be run in protected mode under the new OS could have used memory beyond 1 MB. Legacy programs written for 8088/8086 would still have been limited to the 640 kB.

          I wonder a bit whether it might have been possible to create a “286 DOS” which ran in protected mode and maintained a degree of compatibility with well-behaved applications written for an earlier version, if said earlier version had never allowed/expected software to bang away at the raw metal. See my comment to Stepped Pyramid, which covers a lot of this same territory.

          “There might have still been some issues with allocating huge chunks of memory, as the RAM pool would still be non-contiguous, but it would have been more akin to the Amiga’s fast/chip RAM split than the endless headache that the 640 K barrier remained for years to come.”

          That comparison makes no sense to me. The 68000 had 24-bit addresses so the original Amiga had a lot of unused address space. In contrast, the 8088/8086 (and the compatible real-address mode on 286 and later) is limited to 20-bit addresses, just like the Z80 and 6510 are limited to 16-bit addresses.

          I just meant in the sense that there were two pools of memory on the Amiga, and you couldn’t — according to my recollection, anyway — span them with a single memory-allocation request — i.e., you couldn’t request more memory than was available in the form of chip OR fast memory, even if there was enough free memory in the aggregate. I was thinking of the larger address spaces of the 80286 and later chips in this context. Obviously, 1 MB was all she wrote on the 8088 without going through some truly heroic contortions.
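          A toy model in Python makes the point concrete. This is a deliberately simplified sketch, not the Amiga’s actual AllocMem, and the pool sizes are made up — but it shows how a single allocation request can fail even when the aggregate free memory across both pools would cover it:

```python
# Two separate RAM pools, as in the Amiga's chip/fast split.
# A single allocation must fit entirely inside ONE pool.
pools = {"chip": 512, "fast": 512}  # free kilobytes (hypothetical sizes)

def allocate(size_kb):
    """Grant the request from whichever single pool can hold it whole."""
    for name, free in pools.items():
        if free >= size_kb:
            pools[name] -= size_kb
            return name
    return None  # no one pool is big enough, even if chip + fast combined would be

print(allocate(400))   # fits in one pool -> "chip"
print(allocate(600))   # fails -> None: 624 KB free in total, but split across pools
```

The second request fails despite 624 KB being free overall, which is the sense in which a split memory map constrains “huge chunks” even when total RAM is plentiful.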

          “If there’s an original sin on the hardware side of all this, it must be placed at the feet of Intel rather than IBM.”

          Did Intel somehow force IBM to choose the 8088 for the original PC? If not, then IBM is the only entity to blame for the choice of a processor with completely idiotic memory addressing.

          I wouldn’t call it “completely idiotic,” just one unfortunate choice. But anyway, I was thinking more in terms of the IBM engineers who were told that “we’re going with the Intel processor, now find a way to make it work.” That said, there was some discussion about using the 68000, but it was still new and very expensive. I’m not sure there were a lot of other great alternatives. It would probably have saved everyone a lot of trouble if IBM had just waited for the 80286. ;)
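          For readers following along, the “unfortunate choice” under discussion is the 8086/8088’s segmented addressing: a 16-bit segment value is shifted left four bits and added to a 16-bit offset, and the sum is truncated to the chip’s 20 address lines. A quick Python sketch of the arithmetic (illustrative only, of course):

```python
def physical_address(segment, offset, address_lines=20):
    """Compute the physical address an 8086/8088 forms from segment:offset.
    The segment is shifted left 4 bits, added to the offset, and the
    result truncated to the address-bus width (20 bits on the 8088)."""
    mask = (1 << address_lines) - 1
    return ((segment << 4) + offset) & mask

# The highest byte reachable below the 1 MB line:
print(hex(physical_address(0xFFFF, 0x000F)))  # 0xfffff
# One byte further wraps back around to address 0 on an 8088:
print(hex(physical_address(0xFFFF, 0x0010)))  # 0x0
# And many different segment:offset pairs alias the same physical byte:
print(physical_address(0x1234, 0x0005) == physical_address(0x1000, 0x2345))  # True
```

The 20-bit truncation is the 1 MB barrier itself; the 80286 kept this exact scheme in real mode for compatibility, which is why the barrier outlived the 8088.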

          All that said… you and Stepped Pyramid have convinced me that the article was a little… um… more judgmental than it needed to be on this subject. Perhaps this was just a technical problem with no really good solution. I made some edits that soften or excise the harshest language. Thanks!

           
          • tedder

            June 24, 2018 at 3:39 pm

            I now want to read the canonical history of virtual memory and paged memory.

             
          • Jimmy Maher

            June 24, 2018 at 3:54 pm

            That would be interesting to research… but I’m afraid it might strain my readers’ patience even more than all those Civilization articles. ;)

             
          • tedder

            June 24, 2018 at 7:15 pm

            Yeah, I was careful to not say “you should write about” :) I already get much more than I pay for.

            I love the Civ and hardware articles, more in my wheelhouse than IF. I’m weird like that.

             
  16. Sam Garret

    June 24, 2018 at 10:03 pm

    minor typo:

    much more than just a cutsey way of issuing commands to the computer

    Someone already posted this sentence, but there’s a second error: cutsey is presumably cutesy

    Just as a datapoint, I also initially misread ‘was quite expensive to boot’ – as its idiomatic and jargonistic meanings both fit. I wasn’t sure if you were being punny.

    And finally – I’m surprised you didn’t put in a reminder or crosslink to https://www.filfre.net/2012/01/selling-zork/ as, I assume (with all danger there associated), it’s the same Personal Software that was Zork’s first publisher (if that’s the right term).

     
    • Jimmy Maher

      June 25, 2018 at 8:24 am

      Thanks! I’m a little sheepish to say that I totally forgot about the Infocom connection. Edits made; “to boot” is no more. ;)

       
  17. cpt kangarooski

    June 24, 2018 at 10:26 pm

    Jimmy,
    In several places you say that the Mac would not be able to show multiple applications at once until System 6 in 1988. I’m afraid you’re off by one — MultiFinder shipped in 1987 as part of System Software 5.

    But that’s the official release. Andy Hertzfeld had previously released Switcher, which Apple licensed and made widely available. It allowed for multitasking (and allowed the Clipboard, which always worked between applications, to be a lot more useful in that you didn’t need to quit and launch between programs) but it only showed one program at a time. Switcher had shown up in 1985.

    Between Switcher and Multifinder, there was one other stab at it from the usual sources. This was Servant. It never got out of beta, but it served as a proof of concept for MultiFinder, and had come out in 1986.

    And none of this was conceptually groundbreaking — the Lisa had operated like this for years.

     
    • Jimmy Maher

      June 25, 2018 at 8:12 am

      Thanks! I do think it’s fair to judge these things in terms of the official operating system rather than add-ons to same, even if they come from Apple. So, we’ll settle on the System 5 reference point.

       
  18. Lt. Nitpicker

    June 26, 2018 at 3:11 am

    “…and even then the “replacement” would still have to be build on top of…”
    I think you meant built.

    Anyway, I’m excited to see where this series goes. Just a couple of comments.

    Are you going to mention the “Gabe Newell choosing to sleep in his office in order to get Windows out on time (or at least get it to ship sooner)” anecdote? Given it’s become pretty well known, I think it’s worth mentioning.

    Have you considered contacting Michal Necasek, the proprietor of the OS/2 Museum, regarding his knowledge of the history of OS/2, Windows, and DOS (especially DOS 5)? I’m betting you’re using him as a source, but he has linked to your FTL (the company) article, so he would probably be willing to talk.

     
    • Jimmy Maher

      June 26, 2018 at 7:32 am

      Thanks!

      I hadn’t really thought about whether or not to use that anecdote yet. I can say, though, that I’m generally more interested in writing these articles from a somewhat strategic perspective than I am in giving a bunch of “crazy antics the programmers and engineers got up to” anecdotes. There’s a lot of that sort of thing already about. That’s fine, of course, but I think the bigger picture often gets lost in stories that spend too much time down in the trenches, as it were.

      I don’t really feel like I’m hurting for sources as things are. In this sense these articles are very different from writing about a somewhat obscure game or company, where I really need to conduct personal interviews to have anything to write about. The challenge in this case is rather proving to be that of corralling one hell of a complicated already-extant data set into a readable narrative. I want to do a better job of explaining the tech than the pop-business journalists generally do, but don’t plan to get anywhere near as far down in the weeds as Mr. Necasek’s blog — which doesn’t mean that his work isn’t worthwhile, of course. It’s just a case of different writerly priorities. I’m hoping to hit a sweet spot between the business journalists who don’t know a bit from a byte and the fan blogs that love to talk about exactly how Programmer X developed Obscure Function Y in a flash of insight while eating a Hawaiian pizza after an 18-hour hacking bender.

       
  19. Rowan Lipkovits

    June 26, 2018 at 8:45 pm

    “It would only be possible to program Visi On, they announced, after purchasing an expensive development kit and installing it on a $20,000 DEC PDP-11 minicomputer.”

    Now I’m wondering if emulation fiends have eventually managed to get any homebrew together for the Visi On software platform, and make entries under it for demoparties, etc.?

     
  20. Rafael

    June 27, 2018 at 3:09 am

    Very informative article! Thanks for the insights!

     
  21. int19h

    June 27, 2018 at 9:13 am

    > Let’s say you wanted to put some graphics on the screen. Well, a given machine might have an MDA monochrome video card or a CGA color card, or, soon enough, a monochrome Hercules card or a color EGA card. You the programmer had to build into your program a way of figuring out which one of these your host had, and then had to write code for dealing with each possibility on its own terms.

    BIOS actually provided some higher-level graphics primitives that were adapter-independent – INT 10h. You could change video modes based on standardized mode numbers, output text (even in graphic modes!), and draw pixels of various colors. It was pretty slow if you needed to draw e.g. a line pixel by pixel, but generally workable when animation was not required.

     
    • plam

      June 30, 2018 at 7:54 pm

      As I recall, the write pixel call under INT 10h was unusably slow for EGA or better, and didn’t permit e.g. drawing lines except pixel-by-pixel, so even slower. You basically had to provide your own graphics routines.
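      To make the “pixel-by-pixel” point concrete: a line drawn through the BIOS meant running something like Bresenham’s algorithm and paying the full cost of an INT 10h call (AH=0Ch, write pixel) for every single point. Here’s a minimal Python sketch of the idea — obviously not period DOS code; the callback stands in for the per-pixel BIOS call:

```python
def bresenham_line(x0, y0, x1, y1, put_pixel):
    """Draw a line one pixel at a time via a callback. On real hardware
    each put_pixel would be a separate INT 10h, AH=0Ch interrupt, which
    is why BIOS-drawn lines were so painfully slow."""
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx + dy
    while True:
        put_pixel(x0, y0)  # one full interrupt per pixel on real hardware
        if x0 == x1 and y0 == y1:
            break
        e2 = 2 * err
        if e2 >= dy:
            err += dy
            x0 += sx
        if e2 <= dx:
            err += dx
            y0 += sy

calls = []
bresenham_line(0, 0, 7, 3, lambda x, y: calls.append((x, y)))
print(len(calls))  # 8 pixel calls for this 8-pixel line
```

Multiply that per-pixel overhead across a screenful of lines and it’s easy to see why everyone ended up writing directly to video memory instead.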

       
