
The Deal of the Century (or, The Alliance of Losers)


I think [the] Macintosh accomplished everything we set out to do and more, even though it reaches most people these days as Windows.

— Andy Hertzfeld (original Apple Macintosh systems programmer), 1994

When rumors first began to circulate early in 1991 that IBM and Apple were involved in high-level talks about a major joint initiative, most people dismissed them outright. It was, after all, hard to imagine two companies in the same industry with more diametrically opposed corporate cultures. IBM was Big Blue, a bedrock of American business since the 1920s. Conservative and pragmatic to a fault, it was a Brylcreemed bastion of tradition where casual days meant that employees might remove their jackets to reveal the starched white shirts they wore underneath. Apple, on the other hand, had been founded just fifteen years before by two long-haired children of the counterculture, and its campus still looked more like Woodstock than Wall Street. IBM placed great stock in the character of its workforce; Apple, as journalist Michael S. Malone would later put it in his delightfully arch book Infinite Loop, “seemed to have no character, but only an attitude, a style, a collection of mannerisms.” IBM talked about enterprise integration and system interoperability; Apple prattled on endlessly about changing the world. IBM played Lawrence Welk at corporate get-togethers; Apple preferred the Beatles. (It was an open secret that the name the company shared with the Beatles’ old record label wasn’t coincidental.)

Unsurprisingly, the two companies didn’t like each other very much. Apple in particular had been self-consciously defining itself for years as the sworn enemy of IBM and everything it represented. When Apple had greeted the belated arrival of the IBM PC in 1981 with a full-page magazine advertisement bidding Big Blue “welcome, seriously,” it had been hard to read as anything other than snarky sarcasm. And then, and most famously, had come the “1984” television advertisement to mark the debut of the Macintosh, in which Apple was personified as a hammer-throwing freedom fighter toppling a totalitarian corporate titan — Big Blue recast as Big Brother. What would the rumor-mongers be saying next? That cats would lie down with dogs? That the Russians would tell the Americans they’d given up on the whole communism thing and would like to be friends… oh, wait. It was a strange moment in history. Why not this too, then?

Indeed, when one looked a little harder, a partnership began to make at least a certain degree of sense. Apple’s rhetoric had actually softened considerably since those heady early days of the Macintosh and the acrimonious departure of Steve Jobs which had marked their ending. In the time since, more sober minds at the company had come to realize that insulting conservative corporate customers with money to spend on Apple’s pricey hardware might be counter-productive. Most of all, though, both companies found themselves in strikingly similar binds as the 1990s got underway. After soaring to rarefied heights during the early and middle years of the previous decade, they were now being judged by an increasing number of pundits as the two biggest losers of the last few years of computing history. In the face of the juggernaut that was Microsoft Windows, that irresistible force which nothing in the world of computing could seem to defy for long, it didn’t seem totally out of line to ask whether there even was a future for IBM or Apple. Seen in this light, the pithy clichés practically wrote themselves: “the enemy of my enemy is my friend”; “any port in a storm”; etc. Other, somewhat less generous commentators just talked about an alliance of losers.

Each of the two losers had gotten to this juncture by a uniquely circuitous route.

When IBM released the IBM PC, their first mass-market microcomputer, in August of 1981, they were as surprised as anyone by the way it took off. Even as hackers dismissed it as boring and unimaginative, corporate America couldn’t get enough of the thing; a boring and unimaginative personal computer — i.e., a safe one — was exactly what they had been waiting for. IBM’s profits skyrocketed during the next several years, and the pundits lined up to praise the management of this old, enormous company for having the flexibility and wherewithal to capitalize on an emerging new market; a tap-dancing elephant became the metaphor of choice.

And yet, like so many great successes, the IBM PC bore the seeds of its downfall within it from the start. It was a simple, robust machine, easy to duplicate by plugging together readily available commodity components — a process made even easier by IBM’s commitment to scrupulously documenting every last detail of its design for all and sundry. Further, IBM had made the mistake of licensing its operating system from a small company known as Microsoft rather than buying it outright or writing one of their own, and Bill Gates, Microsoft’s Machiavellian CEO, proved more than happy to license MS-DOS to anyone else who wanted it as well. The danger signs could already be seen in 1982, when an upstart company called Compaq released a “portable” version of IBM’s computer — in those days, this meant a computer which could be packed into a single suitcase — before IBM themselves could get around to it. A more dramatic tipping point arrived in 1986, when the same company made a PC clone built around Intel’s hot new 80386 CPU before IBM managed to do so.

In 1987, IBM responded to the multiplying ranks of the clone makers by introducing the PS/2 line, which came complete with a new, proprietary bus architecture, locked up tight this time inside a cage of patents and legalese. A cynical move on the face of it, it backfired spectacularly in practice. Smelling the overweening corporate arrogance positively billowing out of the PS/2 lineup, many began to ask themselves for the first time whether the industry still needed IBM at all. And the answer they often came to was not the one IBM would have preferred. IBM’s new bus architecture slowly died on the vine, while the erstwhile clone makers put together committees to define new standards of their own which evolved the design IBM had originated in more open, commonsense ways. In short, IBM lost control of the very platform they had created. By 1990, the words “PC clone” were falling out of common usage, to be replaced by talk of the “Wintel Standard.” The new standard bearer, the closest equivalent to IBM in this new world order, was Microsoft, who continued to license MS-DOS and Windows, the software that allowed all of these machines from all of these diverse manufacturers to run the same applications, to anyone willing to pay for it. Meanwhile OS/2, IBM’s mostly-compatible alternative operating system, was struggling mightily; it would never manage to cross the hump into true mass-market acceptance.

Apple’s fall from grace had been less dizzying in some ways, but the position it had left them in was almost as frustrating.

After Steve Jobs walked away from Apple in September of 1985, leaving behind the Macintosh, his twenty-month-old dream machine, the more sober-minded caretakers who succeeded him did many of the reasonable, sober-minded things which their dogmatic predecessor had refused to allow: opening the Mac up for expansion, adding much-requested arrow keys to its keyboard, toning down the revolutionary rhetoric that spooked corporate America so badly. These things, combined with the Apple LaserWriter laser printer, Aldus PageMaker software, and the desktop-publishing niche they spawned between them, saved the odd little machine from oblivion. Yet something did seem to get lost in the process. Although the Mac remained a paragon of vision in computing in many ways — HyperCard alone proved that! — Apple’s management could sometimes seem more interested in competing head-to-head with PC clones for space on the desks of secretaries than nurturing the original dream of the Macintosh as the creative, friendly, fun personal computer for the rest of us.

In fact, this period of Apple’s history must strike anyone familiar with the company of today — or, for that matter, with the company that existed before Steve Jobs’s departure — as just plain weird. Quibbles about character versus attitude aside, Apple’s most notable strength down through the years has been a peerless sense of self, which they have used to carve out their own uniquely stylish image in the ofttimes bland world of computing. How odd, then, to see the Apple of this period almost willfully trying to become the one thing neither the zealots nor the detractors have ever seen them as: just another maker of computer hardware. They flooded the market with more models than even the most dutiful fans could keep up with, none of them evincing the flair for design that marks the Macs of earlier or later eras. Their computers’ bland cases were matched with bland names like “Performa” or “Quadra” — names which all too easily could have come out of Compaq or (gasp!) IBM rather than Apple. Even the tight coupling of hardware and software into a single integrated user experience, another staple of Apple computing before and after, threatened to disappear, as CEO John Sculley took to calling Apple a “software company” and intimated that he might be willing to license MacOS to other manufacturers in the way that Microsoft did MS-DOS and Windows. At the same time, in a bid to protect the software crown jewels, he launched a prohibitively expensive and ethically and practically ill-advised lawsuit against Microsoft for copying MacOS’s “look and feel” in Windows.

Apple’s attempts to woo corporate America by acting just as bland and conventional as everyone else bore little fruit; the Macintosh itself remained too incompatible, too expensive, and too indelibly strange to lure cautious purchasing managers into the fold. Meanwhile Apple’s prices remained too high for any but the most well-heeled private users. And so the Mac soldiered on with a 5 to 10 percent market share, buoyed by a fanatically loyal user base who still saw revolutionary potential in it, even as they complained about how many of its ideas Microsoft and others had stolen. Admittedly, their numbers were not insignificant: there were about 3 and a half million members of the Macintosh family by 1990. They were enough to keep Apple afloat and basically profitable, at least for now, but already by the early 1990s most new Macs were being sold “within the family,” as it were. The Mac became known as the platform where the visionaries tried things out; if said things proved promising, they then reached the masses in the form of Windows implementations. CD-ROM, the most exciting new technology of the early 1990s, was typical. The Mac pioneered this space; Mediagenic’s The Manhole, the very first CD-ROM entertainment product, shipped first on that platform. Yet most of the people who heard the hype and went out to buy a “multimedia PC” in the years that followed brought home a Wintel machine. The Mac was a sort of aspirational showpiece platform; in defiance of the Mac’s old “computer for the rest of us” tagline, Windows was the place where the majority of ordinary people did ordinary things.

The state of MacOS added weight to these showhorse-versus-workhorse stereotypes. Its latest incarnation, known as System 6, had fallen alarmingly behind the state of the art in computing by 1990. Once one looked beyond its famously intuitive and elegant user interface, one found that it lacked robust support for multitasking; lacked for ways to address memory beyond 8 MB; lacked the virtual memory that would allow users to open more and larger applications than the physical memory allowed; lacked the memory protection that could prevent errant applications from taking down the whole system. Having been baked into many of the operating system’s core assumptions from the start — MacOS had originally been designed to run on a machine with no hard drive and just 128 K of memory — these limitations were infuriatingly difficult to remedy after the fact. Thus Apple struggled mightily with the creation of a System 7, their attempt to do just that. When System 7 finally shipped in May of 1991, two years after Apple had initially promised it would, it still lagged behind Windows under the hood in some ways: for example, it still lacked comprehensive memory protection.

The problems which dogged the Macintosh were typical of any computing platform that attempts to survive beyond the technological era which spawned it. Keeping up with the times means hacking and kludging the original vision, as efficiency and technical elegance give way to the need just to make it work, by hook or by crook. The original Mac design team had been given the rare privilege of forgetting about backward compatibility — given permission to build something truly new and “insanely great,” as Steve Jobs had so memorably put it. That, needless to say, was no longer an option. Every decision at Apple must now be made with an eye toward all of the software that had been written for the Mac in the past seven years or so. People depended on it now, which sharply limited the ways in which it could be changed; any new idea that wasn’t compatible with what had come before was an ipso-facto nonstarter. Apple’s clever programmers doubtless could have made a faster, more stable, all-around better operating system than System 7 if they had only had free rein to do so. But that was pie-in-the-sky talk.

Yet the most pressing of all the technical problems confronting the Macintosh as it aged involved its hardware rather than its software. Back in 1984, the design team had hitched their wagon to the slickest, sexiest new CPU in the industry at the time: the Motorola 68000. And for several years, they had no cause to regret that decision. The 68000 and its successor models in the same family were wonderful little chips — elegant enough to live up to even the Macintosh ideal of elegance, an absolute joy to program. Even today, many an old-timer will happily wax rhapsodic about them if given half a chance. (Few, for the record, have similarly fond memories of Intel’s chips.)

But Motorola was both a smaller and a more diversified company than Intel, the international titan of chip-making. As time went on, they found it more and more difficult to keep up with the pace set by their rival. Lacking the same cutting-edge fabrication facilities, it was hard for them to pack as many circuits into the same amount of space. Matters began to come to a head in 1989, when Intel released the 80486, a chip for which Motorola had nothing remotely comparable. Motorola’s response finally arrived in the form of the roughly-equivalent-in-horsepower 68040 — but not until more than a year later, and even then their chip was plagued by poor heat dissipation and heavy power consumption in many scenarios. Worse, word had it that Motorola was getting ready to give up on the whole 68000 line; they simply didn’t believe they could continue to compete head-to-head with Intel in this arena. One can hardly overstate how terrifying this prospect was for Apple. An end to the 68000 line must seemingly mean the end of the Macintosh, at least as everyone knew it; MacOS, along with every application ever written for the platform, were inextricably bound to the 68000. Small wonder that John Sculley started talking about Apple as a “software company.” It looked like their hardware might be going away, whether they liked it or not.

Motorola was, however, peddling an alternative to the 68000 line, embodying one of the biggest buzzwords in computer-science circles at the time: “RISC,” short for “Reduced Instruction Set Chip.” Both the Intel x86 line and the Motorola 68000 line were what had been retroactively named “CISC,” or “Complex Instruction Set Chips”: CPUs whose set of core opcodes — i.e., the set of low-level commands by which they could be directly programmed — grew constantly bigger and more baroque over time. RISC chips, on the other hand, pared their opcodes down to the bone, to only those commands which they absolutely, positively could not exist without. This made them less pleasant for a human programmer to code for — but then, the vast majority of programmers were working by now in high-level languages rather than directly controlling the CPU in assembly language anyway. And it made the programs written for them bigger, generally speaking, since more of those pared-down instructions were needed to do the same work — but then, most people by 1990 were willing to trade a bit more memory usage for extra speed. To compensate for these disadvantages, RISC chips could be simpler in terms of circuitry than CISC chips of equivalent power, making them cheaper and easier to manufacture. They also demanded less energy and produced less heat — the computer engineer’s greatest enemy — at equivalent clock speeds. As yet, only one RISC chip was serving as the CPU in mass-market personal computers: the ARM chip, used in the machines of the British PC maker Acorn, which weren’t even sold in the United States. Nevertheless, Motorola believed RISC’s time had come. By switching to RISC, they wouldn’t need to match Intel in terms of transistors per square millimeter to produce chips of equal or greater speed. Indeed, they’d already made a RISC CPU of their own, called the 88000, in which they were eager to interest Apple.
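To make the tradeoff concrete, here is a deliberately toy sketch in Python. It is my own illustration, with invented addresses and "instructions" that correspond to no real instruction set, not the 68000, the x86, or any actual RISC design; it only shows the shape of the argument above: one complex instruction bundles work that a reduced instruction set spreads across several trivial ones.

```python
# A toy model (not any real instruction set) of the CISC-versus-RISC tradeoff:
# one "complex" instruction does the work of several "reduced" ones.

def run_cisc(memory):
    # CISC-style: a single memory-to-memory add; the chip itself must sequence
    # the two reads, the addition, and the write-back behind one opcode.
    memory[0x1000] += memory[0x1004]          # 1 instruction
    return memory

def run_risc(memory):
    # RISC-style: only loads and stores touch memory; arithmetic happens in
    # registers. Four simple instructions do the same job, so the program is
    # bigger, but each step is easy for the hardware to decode and execute.
    regs = {}
    regs["r0"] = memory[0x1000]               # LOAD  r0, [0x1000]
    regs["r1"] = memory[0x1004]               # LOAD  r1, [0x1004]
    regs["r0"] = regs["r0"] + regs["r1"]      # ADD   r0, r1
    memory[0x1000] = regs["r0"]               # STORE r0, [0x1000]
    return memory

assert run_cisc({0x1000: 5, 0x1004: 7}) == run_risc({0x1000: 5, 0x1004: 7})
```

The RISC version takes four instructions where the CISC version takes one, which is exactly the memory-for-simplicity trade described above.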

They found a receptive audience among Apple’s programmers and engineers, who loved Motorola’s general design aesthetic. Already by the spring of 1990, Apple had launched two separate internal projects to study the possibilities for RISC in general and the 88000 in particular. One, known as Project Jaguar, envisioned a clean break with the past, in the form of a brand new computer that would be so amazing that people would be willing to accept that none of their existing software would run on it. The other, known as Project Cognac, studied whether it might be possible to port the existing MacOS to the new architecture, and then — and this was the really tricky part — find a way to make existing applications which had been compiled for a 68000-based Mac run unchanged on the new machine.

At first, the only viable option for doing so seemed to be a sort of Frankenstein’s monster of a computer, containing both an 88000- and a 68000-series CPU. The operating system would boot and run on the 88000, but when the user started an application written for an older, 68000-based Mac, it would be automatically kicked over to the secondary CPU. Within a few years, so the thinking went, all existing users would upgrade to the newer models, all current software would get recompiled to run natively on the RISC chip, and the 68000 could go away. Still, no one was all that excited by this approach; it seemed the worst Macintosh kludge yet, the very antithesis of what the machine was supposed to be.

A eureka moment came in late 1990, with the discovery of what Cognac project leader Jack McHenry came to call the “90/10 Rule.” Running profilers on typical applications, his team found that in the case of many or most of them it was the operating system, not the application itself, that consumed 90 percent or more of the CPU cycles. This was an artifact — for once, a positive one! — of the original MacOS design, which offered programmers an unprecedentedly rich interface toolbox meant to make coding as quick and easy as possible and, just as importantly, to give all applications a uniform look and feel. Thus an application simply asked for a menu containing a list of entries; it was then the operating system that did all the work of setting it up, monitoring it, and reporting back to the application when the user chose something from it. Ditto buttons, dialog boxes, etc. Even something as CPU-intensive as video playback generally happened through the operating system’s QuickTime library rather than the application actually employing it.

All of this meant that it ought to be feasible to emulate the 68000 entirely in software. The 68000 code would necessarily run slowly and inefficiently through emulation, wiping out all of the speed advantages of the new chip and then some. Yet for many or most applications the emulator would only need to be used about 10 percent of the time. The other 90 percent of the time, when the operating system itself was doing things at native speed, would more than make up for it. In due course, applications would get recompiled and the need for 68000 emulation would largely go away. But in the meanwhile, it could provide a vital bridge between the past and the future — a next-generation Mac that wouldn’t break continuity with the old one, all with a minimum of complication, for Apple’s users and for their hardware engineers alike. By mid-1991, Project Cognac had an 88000-powered prototype that could run a RISC-based MacOS and legacy Mac applications together.
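For readers who like to see the idea in miniature, here is a heavily simplified sketch in Python of that "mixed mode" scheme. It is my own illustration, not Apple's actual 68K emulator; the trap number and toolbox routine are invented. The point is only the shape of the thing: application code is interpreted instruction by instruction, while toolbox calls escape into fast native code, so the native side can dominate overall speed just as the 90/10 Rule suggests.

```python
# A simplified sketch (my illustration, not Apple's emulator) of running legacy
# code through an interpreter while routing system calls to native routines.

def native_draw_menu(state):
    # Stands in for a toolbox routine recompiled to run natively on the new CPU.
    state["menus_drawn"] += 1

NATIVE_TOOLBOX = {0xA900: native_draw_menu}   # trap number -> native routine (invented)

def emulate_68k(program, state):
    pc = 0
    while pc < len(program):
        opcode, operand = program[pc]
        if opcode == "TRAP" and operand in NATIVE_TOOLBOX:
            NATIVE_TOOLBOX[operand](state)    # leave the emulator: run natively
        elif opcode == "MOVE":
            state["d0"] = operand             # slow path: interpret application code
        elif opcode == "ADD":
            state["d0"] += operand
        pc += 1
    return state

app = [("MOVE", 10), ("ADD", 32), ("TRAP", 0xA900)]   # a pretend legacy application
print(emulate_68k(app, {"d0": 0, "menus_drawn": 0}))
# -> {'d0': 42, 'menus_drawn': 1}
```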

And yet this wasn’t to be the final form of the RISC-based Macintosh. For, just a few months later, Apple and IBM made an announcement that the technology press billed — sometimes sarcastically, sometimes earnestly — as the “Deal of the Century.”

Apple had first begun to talk with IBM in early 1990, when Michael Spindler, the former’s president, had first reached out to Jack Kuehler, his opposite number at IBM. It seemed that, while Apple’s technical rank and file were still greatly enamored with Motorola, upper management was less sanguine. Having been burned once with the 68000, they were uncertain about Motorola’s commitment and ability to keep evolving the 88000 over the long term.

It made a lot of sense in the abstract for any company interested in RISC technology, as Apple certainly was, to contact IBM; it was actually IBM who had invented the RISC concept back in the mid-1970s. Not all that atypically for such a huge company with so many ongoing research projects, they had employed the idea for years only in limited, mostly subsidiary usage scenarios, such as mainframe channel controllers. Now, though, they were just introducing a new line of “workstation computers” — meaning extremely high-powered desktop computers, too expensive for the consumer market — which used a RISC chip called the POWER CPU that was the heir to their many years of research in the field. Like the workstations it lay at the heart of, the chip was much too expensive and complex to become the brain of Apple’s next generation of consumer computers, but it might, thought Spindler, be something to build upon. And he knew that, with IBM’s old partnership with Microsoft slowly collapsing into bickering acrimony, Big Blue might just be looking for a new partner.

The back-channel talks were intermittent and hyper-cautious at first, but, as the year wore on and the problems both of the companies faced became more and more obvious, the discussions heated up. The first formal meeting took place in February of 1991 or shortly thereafter, at an IBM facility in Austin, Texas. The Apple people, knowing IBM’s ultra-conservative reputation and wishing to make a good impression, arrived neatly groomed and dressed in three-piece suits, only to find their opposite numbers, having acted on the same motivation, sitting there in jeans and denim shirts.

That anecdote illustrates how very much both sides wanted to make this work. And indeed, the two parties found it much easier to work together than anyone might have imagined. John Sculley, the man who really called the shots at Apple, found that he got along smashingly with Jack Kuehler, to the extent that the two were soon talking almost every day. After beginning as a fairly straightforward discussion of whether IBM might be able and willing to make a RISC chip suitable for the Macintosh, the negotiations just kept growing in scale and ambition, spurred on by both companies’ deep-seated desire to stick it to Microsoft and the Wintel hegemony in any and all possible ways. They agreed to found a joint subsidiary called Taligent, staffed initially with the people from Apple’s Project Jaguar, which would continue to develop a brand new operating system that could be licensed by any hardware maker, just like MS-DOS and Windows (and for that matter IBM’s already extant OS/2). And they would found another subsidiary called Kaleida Labs, to make a cross-platform multimedia scripting engine called ScriptX.

Still, the core of the discussions remained IBM’s POWER architecture — or rather the PowerPC, as the partners agreed to call the cost-reduced, consumer-friendly version of the chip. Apple soon pulled Motorola into these parts of the talks, thus turning a bilateral into a trilateral negotiation, and providing the name for their so-called “AIM alliance” — “AIM” for Apple, IBM, and Motorola. IBM had never made a mass-market microprocessor of their own before, noted Apple, and Motorola’s experience could serve them well, as could their chip-fabrication facilities once actual production began. The two non-Apple parties were perhaps less excited at the prospect of working together — Motorola in particular must have been smarting at the rejection of their own 88000 processor which this new plan would entail — but made nice and got along.

Jack Kuehler and John Sculley brandish what they call their “marriage certificate,” looking rather disturbingly like Neville Chamberlain declaring peace in our time. The marriage would not prove an overly long or happy one.

On October 2, 1991 — just six weeks after the first 68040-based Macintosh models had shipped — Apple and IBM made official the rumors that had been swirling around for months. At a joint press briefing held inside the Fairmont Hotel in downtown San Francisco, they trumpeted all of the initiatives I’ve just described. The Deal of the Century, they said, would usher in the next phase of personal computing. Wintel must soon give way to the superiority of a PowerPC-based computer running a Taligent operating system with ScriptX onboard. New Apple Macintosh models would also use the PowerPC, but the relationship between them and these other, Taligent-powered machines remained vague.

Indeed, it was all horribly confusing. “What Taligent is doing is not designed to replace the Macintosh,” said Sculley. “Instead we think it complements and enhances its usefulness.” But what on earth did that empty corporate speak even mean? When Apple said out of the blue that they were “not going to do to the Macintosh what we did to the Apple II” — i.e., orphan it — it rather made you suspect that that was exactly what they meant to do. And what did it all mean for IBM’s OS/2, which Big Blue had been telling a decidedly unconvinced public was also the future of personal computing for several years now? “I think the message in those agreements for the future of OS/2 is that it no longer has a future,” said one analyst. And then, what was Kaleida and this ScriptX thing supposed to actually do?

So much of the agreement seemed so hopelessly vague. Compaq’s vice president declared that Apple and IBM must be “smoking dope. There’s no way it’s going to work.” One pundit called the whole thing “a con job. There’s no software, there’s no operating system. It’s just a last gasp of extinction by the giants that can’t keep up with Intel.” Apple’s own users were baffled and consternated by this sudden alliance with the company which they had been schooled to believe was technological evil incarnate. A grim joke made the rounds: what do you get when you cross Apple and IBM? The answer: IBM.

While the journalists reported and the pundits pontificated, it was up to the technical staff at Apple, IBM, and Motorola to make PowerPC computers a reality. Like their colleagues who had negotiated the deal, they all got along surprisingly well; once one pushed past the surface stereotypes, they were all just engineers trying to do the best work possible. Apple’s management wanted the first PowerPC-based Macintosh models to ship in January of 1994, to commemorate the platform’s tenth anniversary by heralding a new technological era. The old Project Cognac team, now with the new code name of “Piltdown Man” after the famous (albeit fraudulent) “missing link” in the evolution of humanity, was responsible for making this happen. For almost a year, they worked on porting MacOS to the PowerPC, as they’d previously done to the 88000. This time, though, they had no real hardware with which to work, only specifications and software emulators. The first prototype chips finally arrived on September 3, 1992, and they redoubled their efforts, pulling many an all-nighter. Thus MacOS booted up to the desktop for the first time on a real PowerPC-based machine just in time to greet the rising sun on the morning of October 3, 1992. A new era had indeed dawned.

Their goal now was to make a PowerPC-based Macintosh work exactly like any other, only faster. MacOS wouldn’t even get a new primary version number for the first PowerPC release; this major milestone in Mac history would go under the name of System 7.1.2, a name more appropriate to a minor maintenance release. It looked so identical to what had come before that its own creators couldn’t spot the difference; they wound up lighting up a single extra pixel in the PowerPC version just so they could know which was which.

Their guiding rule of an absolutely seamless transition applied in spades to the 68000 emulation layer, duly ported from the 88000 to the PowerPC. An ordinary user should never have to think about — should not even have to know about — the emulation that was happening beneath the surface. Another watershed moment came in June of 1993, when the team brought a PowerPC prototype machine to the MacHack, a coding conference and competition. Without telling any of the attendees what was inside the machine, the team let them use it to demonstrate their boundary-pushing programs. The emulation layer performed beyond their most hopeful prognostications. It looked like the Mac’s new lease on life was all but a done deal from the engineering side of things.

But alas, the bonhomie exhibited by the partner companies’ engineers and programmers down in the trenches wasn’t so marked in their executive suites after the deal was signed. The very vagueness of so many aspects of the agreement had papered over what were in reality hugely different visions of the future. IBM, a company not usually given to revolutionary rhetoric, had taken at face value the high-flown words spoken at the announcement. They truly believed that the agreement would mark a new era for personal computing in general, with a new, better hardware architecture in the form of PowerPC and an ultra-modern operating system to run on it in the form of Taligent’s work. Meanwhile it was becoming increasingly clear that Apple’s management, who claimed to be changing the world five times before breakfast on most days, had in reality seen Taligent largely as a hedge in case their people should prove unable to create a PowerPC Macintosh that looked like a Mac, felt like a Mac, and ran vintage Mac software. As Project Piltdown Man’s work proceeded apace, Apple grew less and less enamored with those other, open-architecture ideas IBM was pushing. The Taligent people didn’t help their cause by falling headfirst into a pit of airy computer-science abstractions and staying mired there for years, all while Project Piltdown Man just kept plugging away, getting things done.

The first two and a half years of the 1990s were marred by a mild but stubborn recession in the United States, during which the PC industry had a particularly hard time of it. After the summer of 1992, however, the economy picked up steam and consumer computing eased into what would prove its longest and most sustained boom of all time, borne along on a wave of hype about CD-ROM and multimedia, along with the simple fact that personal computers in general had finally evolved to a place where they could do useful things for ordinary people in a reasonably painless way. (A bit later in the boom, of course, the World Wide Web would come along to provide the greatest impetus of all.)

And yet the position of both Apple and IBM in the PC marketplace continued to get steadily worse while the rest of their industry soared. At least 90 percent of the computers that were now being sold in such impressive numbers ran Microsoft Windows, leaving OS/2, MacOS, and a few other oddballs to divide the iconoclasts, the hackers, and the non-conformists of the world among themselves. While IBM continued to flog OS/2, more out of stubbornness than hope, Apple tried a little bit of everything to stop the slide in market share and remain relevant. Still not entirely certain whether their future lay with open architectures or their own closed, proprietary one, they started porting selected software to Windows, including most notably QuickTime, their much-admired tool for encoding and playing video. They even shipped a Mac model that could also run MS-DOS and Windows, thanks to an 80486 housed in its case alongside its 68040. And they entered into a partnership with the networking giant Novell to port MacOS itself to Intel hardware — a partnership that, like many Apple initiatives of these years, petered out without ultimately producing much of anything. Perhaps most tellingly of all, this became the only period in Apple’s history when the company felt compelled to compete solely on price. They started selling Macs in department stores for the first time, where a stream of very un-Apple-like discounts and rebates greeted prospective buyers.

While Apple thus toddled along without making much headway, IBM began to annihilate all previous conceptions of how much money a single company could possibly lose, posting oceans of red that looked more like the numbers found in macroeconomic research papers than entries in an accountant’s books. The PC marketplace was in a way one of their smaller problems. Their mainframe business, their real bread and butter since the 1950s, was cratering as customers fled to the smaller, cheaper computers that could often now do the jobs of those hulking giants just as well. In 1991, when IBM first turned the corner into loss, they did so in disconcertingly convincing fashion: they lost $2.82 billion that year. And that was only the beginning. Losses totaled $4.96 billion in 1992, followed by $8.1 billion in 1993. IBM lost more money during those three years alone than any other company in the history of the world to that point; their losses exceeded the gross domestic product of Ecuador.

The employees at both Apple and IBM paid the toll for the confusions and prevarications of these years: both companies endured rounds of major layoffs. Those at IBM marked the very first such in the long history of the company. Big Blue had for decades fostered a culture of employment for life; their motto had always been, “If you do your job, you will always have your job.” This, it was now patently obvious, was no longer the case.

The bloodletting at both companies reached their executive suites as well within a few months of one another. On April 1, 1993, John Akers, the CEO of IBM, was ousted after a seven-year tenure which one business writer called “the worst record of any chief executive in the history of IBM.” Three months later, following a terrible quarterly earnings report and a drop in share price of 58 percent in the span of six months, Michael Spindler replaced John Sculley as the CEO of Apple.

These, then, were the storm clouds under which the PowerPC architecture became a physical reality.

The first PowerPC computers to be given a public display bore an IBM rather than an Apple logo on their cases. They arrived at the Comdex trade show in November of 1993, running a port of OS/2. IBM also promised a port of AIX — their version of the Unix operating system — while Sun Microsystems announced plans to port their Unix-based Solaris operating system and, most surprisingly of all, Microsoft talked about porting over Windows NT, the more advanced, server-oriented version of their world-conquering operating environment. But, noted the journalists present, “it remains unclear whether users will be able to run Macintosh applications on IBM’s PowerPC” — a fine example of the confusing messaging the two alleged allies constantly trailed in their wake. Further, there was no word at all about the status of the Taligent operating system that was supposed to become the real PowerPC standard.

Meanwhile over at Apple, Project Piltdown Man was becoming that rarest of unicorns in tech circles: a major software-engineering project that is actually completed on schedule. The release of the first PowerPC Macs was pushed back a bit, but only to allow the factories time to build up enough inventory to meet what everyone hoped would be serious consumer demand. Thus the “Power Macs” made their public bow on March 14, 1994, at New York City’s Lincoln Center, in three different configurations clocked at speeds between 60 and 80 MHz. Unlike IBM’s machines, which were shown six months before they shipped, the Power Macs were available for anyone to buy the very next day.

The initial trio of Power Macs.

This speed test, published in MacWorld magazine, shows how all three of the Power Mac machines dramatically outperform top-of-the-line Pentium machines when running native code.

They were greeted with enormous excitement and enthusiasm by the Mac faithful, who had been waiting anxiously for a machine that could go head-to-head with computers built around Intel’s new Pentium chip, the successor to the 80486. This the Power Macs could certainly do; by some benchmarks at least, the PowerPC doubled the overall throughput of a Pentium. World domination must surely be just around the corner, right?

Predictably enough, the non-Mac-centric technology press greeted the machines’ arrival more skeptically than the hardcore Mac-heads. “I think Apple will sell [a] million units, but it’s all going to be to existing Mac users,” said one market researcher. “DOS and Windows running on Intel platforms is still going to be 85 percent of the market. [The Power Mac] doesn’t give users enough of a reason to change.” Another noted that “the Mac users that I know are not interested in using Windows, and the Windows users are not interested in using the Mac. There has to be a compelling reason [to switch].”

In the end, these more guarded predictions proved the most accurate. Apple did indeed sell an impressive spurt of Power Macs in the months that followed, but almost entirely to the faithful. One might almost say that they became a victim of Project Piltdown Man’s success: the Power Mac really did seem exactly like any other Macintosh, except that it ran faster. And even this fact could be obscured when running legacy applications under emulation, as most people were doing in the early months: despite Project Piltdown Man’s heroic efforts, applications like Excel, Word, and Photoshop actually ran slightly slower on a Power Mac than on a top-of-the-line 68040-based machine. So, while the transition to PowerPC allowed the Macintosh to persist as a viable computing platform, it ultimately did nothing to improve upon its small market share. And because the PowerPC MacOS was such a direct and literal port, it still retained all of the shortcomings of MacOS in general. It remained a pretty interface stretched over some almost laughably archaic plumbing. The new generation of Mac hardware wouldn’t receive an operating system truly, comprehensively worthy of it until OS X arrived seven long years later.

Still, these harsh realities shouldn’t be allowed to detract from how deftly Apple — and particularly the unsung coders of Project Piltdown Man — executed the transition. No one before had ever picked up a consumer-computing platform bodily and moved it to an entirely new hardware architecture at all, much less done it so transparently that many or most users never really had to think about what was happening at all. (There would be only one comparable example in computing’s future. And, incredibly, the Mac would once again be the platform in question: in 2006, Apple would move from the fading PowerPC line to Intel’s chips — if you can’t beat ’em, join ’em, right? — relying once again on a cleverly coded software emulator to see them through the period of transition. The Macintosh, it seems, has more lives than Lazarus.)

Although the briefly vaunted AIM alliance did manage to give the Macintosh a new lease on life, it succeeded in very little else. The PowerPC architecture, which had cost the alliance more than $1 billion to develop, went nowhere in its non-Mac incarnations. IBM’s own machines sold in such tiny numbers that the question of whether Apple would ever allow them to run MacOS was all but rendered moot. (For the record, though: they never did.) Sun Solaris and Microsoft Windows NT did come out in PowerPC versions, but their sales couldn’t justify their existence, and within a year or two they went away again. The bold dream of creating a new reference platform for general-purpose computing to rival Wintel never got off the ground, as it became painfully clear that said dream had been taken more to heart by IBM than by Apple. Only after the millennium would the PowerPC architecture find a measure of mass-market success outside the Mac, when it was adopted by Nintendo, Microsoft, and Sony for use in videogame consoles. In this form, then, it finally paid off for IBM; far more PowerPC-powered consoles than even Macs were sold over the lifetime of the architecture. PowerPC also eventually saw use in other specialized applications, such as satellites and planetary rovers employed by NASA.

Success, then, is always relative. But not so the complete lack thereof, as Kaleida and Taligent proved. Kaleida burned through $200 million before finally shipping its ScriptX multimedia-presentation engine years after other products, most notably Macromedia’s Director, had already sewn up that space; it was disbanded and harvested for scraps by Apple in November of 1995. Taligent burned through a staggering $400 million over the same period of time, producing only some tepid programming frameworks in lieu of the revolutionary operating system that had been promised, before being absorbed back into IBM.

There is one final fascinating footnote to this story of a Deal of the Century that turned out to be little more than a strange anecdote in computing history. In the summer of 1994, IBM, having by now stopped the worst of the bleeding and settled into their new life as a smaller, far less dominant company, offered to buy Apple outright for a premium of $5 over their current share price. In IBM’s view, the synergies made sense: the Power Macs were selling extremely well, which was more than could be said for IBM’s PowerPC models. Why not go all in?

Ironically, it was those same healthy sales numbers that scuppered the deal in the end. If the offer had come a year earlier, when a money-losing Apple was just firing John Sculley, they surely would have jumped at it. But now Apple was feeling their oats again, and by no means entirely without reason; sales were up more than 20 percent over the previous year, and the company was once more comfortably in the black. So, they told IBM thanks, but no thanks. The same renewed taste of success also caused them to reject serious inquiries from Philips, Sun Microsystems, and Oracle. Word had it that new CEO Michael Spindler was convinced not only that the Power Mac had saved Apple, but that it had fundamentally altered their position in the marketplace.

The following year revealed how misguided that thinking really was; the Power Mac had fixed none of Apple’s fundamental problems. That year it was Microsoft who cemented their world domination instead, with the release of Windows 95, while Apple grappled with the reality that almost all of those Power Mac sales of the previous year had been to existing members of the Macintosh family, not to the new customers they so desperately needed to attract. What happened now that everyone in the family had dutifully upgraded? The answer to that question wasn’t pretty: Apple plunged off a financial cliff as precipitous in its own way as the one which had nearly destroyed IBM a few years earlier. Now, nobody was interested in acquiring them anymore. The pundits smelled the stink of death; it’s difficult to find an article on Apple written between 1995 and 1998 which doesn’t include the adjective “beleaguered.” Why buy now when you can sift through the scraps at the bankruptcy auction in just a little while?

Apple didn’t wind up dying, of course. Instead a series of improbable events, beginning with the return of prodigal-son Steve Jobs in 1997, turned them into the richest single company in the world — yes, richer even than Microsoft. These are stories for other articles. But for now, it’s perhaps worth pausing for a moment to think about an alternate timeline where the Macintosh became an IBM product, and the Deal of the Century that got that ball rolling thus came much closer to living up to its name. Bizarre, you say? Perhaps. But no more bizarre than what really happened.

(Sources: the books Insanely Great: The Life and Times of Macintosh by Steven Levy, Apple Confidential 2.0: The Definitive History of the World’s Most Colorful Company by Owen W. Linzmayer, Infinite Loop: How the World’s Most Insanely Great Computer Company Went Insane by Michael S. Malone, Big Blues: The Unmaking of IBM by Paul Carroll, and The PowerPC Macintosh Book by Stephan Somogyi; InfoWorld of September 24 1990, October 15 1990, December 3 1990, April 8 1991, May 13 1991, May 27 1991, July 1 1991, July 8 1991, July 15 1991, July 22 1991, August 5 1991, August 19 1991, September 23 1991, September 30 1991, October 7 1991, October 21 1991, November 4 1991, December 30 1991, January 13 1992, January 20 1992, February 3 1992, March 9 1992, March 16 1992, March 23 1992, April 27 1992, May 11 1992, May 18 1992, June 15 1992, June 29 1992, July 27 1992, August 3 1992, August 10 1992, August 17 1992, September 7 1992, September 21 1992, October 5 1992, October 12 1992, October 19 1992, December 14 1992, December 21 1992, December 28 1992, January 11 1993, February 1 1993, February 22 1993, March 8 1993, March 15 1993, April 5 1993, April 12 1993, May 17 1993, May 24 1993, May 31 1993, June 21 1993, June 28 1993, July 5 1993, July 12 1993, July 19 1993, August 2 1993, August 9 1993, August 30 1993, September 6 1993, September 27 1993, October 4 1993, October 11 1993, October 18 1993, November 1 1993, November 15 1993, November 22 1993, December 6 1993, December 13 1993, December 20 1993, January 10 1994, January 31 1994, March 7 1994, March 14 1994, March 28 1994, April 25 1994, May 2 1994, May 16 1994, June 6 1994, June 27 1994; MacWorld of September 1992, February 1993, July 1993, September 1993, October 1993, November 1993, February 1994, and May 1994; Byte of November 1984. Online sources include IBM’s own corporate-history timeline and a vintage IBM lecture on the PowerPC architecture.)

 
 


Doing Windows, Part 9: Windows Comes Home

This series of articles so far has been a story of business-oriented personal computing. Corporate America had been running for decades on IBM before the IBM PC appeared, so it was only natural that the standard IBM introduced would be embraced as the way to get serious, businesslike things done on a personal computer. Yet long before IBM entered the picture, personal computing in general had been pioneered by hackers and hobbyists, many of whom nursed grander dreams than giving secretaries a better typewriter or giving accountants a better way to add up figures. These pioneers didn’t go away after 1981, but neither did they embrace the IBM PC, which most of them dismissed as technically unimaginative and aesthetically disastrous. Instead they spent the balance of the 1980s using computers like the Apple II, the Commodore 64, the Commodore Amiga, and the Atari ST to communicate with one another, to draw pictures, to make music, and of course to write and play lots and lots of games. Dwarfed already in terms of dollars and cents at mid-decade by the business-computing monster the IBM PC had birthed, this vibrant alternative computing ecosystem — sometimes called home computing, sometimes consumer computing — makes a far more interesting subject for the cultural historian of today than the world of IBM and Microsoft, with its boring green screens and boring corporate spokesmen running scared from the merest mention of digital creativity. It’s for this reason that, a few series like this one aside, I’ve spent the vast majority of my time on this blog talking about the cultures of creative computing rather than those of IBM and Microsoft.

Consumer computing did enjoy one brief boom in the 1980s. From roughly 1982 to 1984, a narrative took hold within the mainstream media and the offices of venture capitalists alike that full-fledged computers would replace the Atari VCS and other game consoles in American homes on a massive scale. After all, computers could play games just like the consoles, but they alone could also be used to educate the kids, write school reports and letters, balance the checkbook, and — that old favorite to which the pundits returned again and again — store the family recipes.

All too soon, though, the limitations of the cheap 8-bit computers that had fueled the boom struck home. As a consumer product, those early computers with their cryptic blinking command prompts were hopeless; at least with an Atari VCS you could just put a cartridge in the slot, turn it on, and play. There were very few practical applications for which they weren’t more trouble than they were worth. If you needed to write a school report, a standalone word-processing machine designed for that purpose alone was often a cheaper and better solution, and the family accounts and recipes were actually much easier to store on paper than in a slow, balky computer program. Certainly paper was the safer choice over a pile of fragile floppy disks.

So, what we might call the First Home Computer Revolution fizzled out, with most of the computers that had been purchased over its course making the slow march of shame from closet to attic to landfill. That minority who persisted with their new computers was made up of the same sorts of personalities who had had computers in their homes before the boom — for the one concrete thing the First Home Computer Revolution had achieved was to make home computers in general more affordable, and thus put them within the reach of more people who were inclined toward them anyway. People with sufficient patience continued to find home computers great for playing games that offered more depth than the games on the consoles, while others found them objects of wonder unto themselves, new oceans just waiting to have their technological depths plumbed by intrepid digital divers. It was mostly young people, who had free time on their hands, who were open to novelty, who were malleable enough to learn something new, and who were in love with escapist fictions of all stripes, who became the biggest home-computer users.

Their numbers grew at a modest pace every year, but the real money, it was now clear, was in business computing. Why try to sell computers piecemeal to teenagers when you could sell them in bulk to corporations? IBM, after having made one abortive stab at capturing home computing as well via the ill-fated PCjr, went where the money was, and all but a few other computer makers — most notable among these home-computer loyalists were Commodore, Atari, and Radio Shack — followed them there. The teenagers, for their part, responded to the business-computing majority’s contempt in kind, piling scorn onto the IBM PC’s ludicrously ugly CGA graphics and its speaker that could do little more than beep and fart at you, all while embracing their own more colorful platforms with typical adolescent zeal.

As the 1980s neared their end, however, the ugly old MS-DOS computer started down an unanticipated road of transformation. In 1987, as part of the misbegotten PS/2 line, IBM introduced a new graphics standard called VGA that, with up to 256 onscreen colors from a palette of more than 260,000, outdid all of the common home computers of the time. Soon after, enterprising third parties like Ad Lib and Creative Labs started making add-on sound cards for MS-DOS machines that could make real music and — just as important for game fanatics — real explosions. Many a home hacker woke up one morning to realize that the dreaded PC clone suddenly wasn’t looking all that bad. No, the technical architecture wasn’t beautiful, but it was robust and mature, and the pressure of having dozens of competitors manufacturing machines meeting the standard kept the bang-for-your-buck ratio very good. And if you — or your parents — did want to do any word processing or checkbook balancing, the software for doing so was excellent, honed by years of catering to the most demanding of corporate users. Ditto the programming tools that were nearer to a hacker’s heart; Borland’s Turbo Pascal alone was a thing of wonder, better than any other programming environment on any other personal computer.
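As an aside on the VGA numbers just mentioned, the palette figure falls straight out of the hardware's arithmetic: the VGA DAC stores six bits each of red, green, and blue per palette entry, while each pixel in the 256-color mode is an eight-bit index into that palette. A quick back-of-the-envelope check in Python:

```python
bits_per_channel = 6                          # VGA DAC: 6 bits each for R, G, B
palette_size = 2 ** (3 * bits_per_channel)    # 262,144 possible colors
bits_per_pixel = 8                            # one byte per pixel in 256-color mode
onscreen_at_once = 2 ** bits_per_pixel        # 256 colors visible at a time
print(palette_size, onscreen_at_once)         # 262144 256
```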

Meanwhile 8-bit home computers like the Apple II and the Commodore 64 were getting decidedly long in the tooth, and the companies that made them were doing a peculiarly poor job of replacing them. The Apple Macintosh was so expensive as to be out of reach of most, and even the latest Apple II, known as the IIGS, was priced way too high for what it was; Apple, having joined the business-computing rat race, seemed vaguely embarrassed by the continuing existence of the Apple II, the platform that had made them. The Commodore Amiga 500 was perhaps a more promising contender to inherit the crown of the Commodore 64, but its parent company had mismanaged their brand almost beyond hope of redemption in the United States.

So, in 1988 and 1989 MS-DOS-based computing started coming home, thanks both to its own sturdy merits and a lack of compelling alternatives from the traditional makers of home computers. The process was helped along by Sierra Online, a major publisher of consumer software who had bet big and early on the MS-DOS standard conquering the home in the end, and were thus out in front of its progress now with a range of appealing games that took full advantage of the new graphics and sound cards. Other publishers, reeling before a Nintendo onslaught that was devastating the remnants of the 8-bit software market, soon followed their lead. By 1990, the vast majority of the American consumer-software industry had joined their counterparts in business software in embracing MS-DOS as their platform of first — often, of only — priority.

Bill Gates had always gone where the most money was. In years past, the money had been in business computing, and so Microsoft, after experimenting briefly with consumer software in the period just before the release of the IBM PC, had all but ignored the consumer market in favor of system software and applications targeted squarely at corporate America. Now, though, the times were changing, as home computers became powerful and cheap enough to truly go mainstream. The media was buzzing about the subject as they hadn’t for years; everywhere it was multimedia this, CD-ROM that. Services like Prodigy and America Online were putting a new, friendlier face on the computer as a tool for communicating and socializing, and game developers were buzzing about an emerging new form of mass-market entertainment, a merger of Silicon Valley and Hollywood. Gates wasn’t alone in smelling a Second Home Computer Revolution in the wind, one that would make the computer a permanent fixture of modern American home life in all the ways the first had failed to do so.

This, then, was the zeitgeist into which Microsoft Windows 3.0 made its splashy debut in May of 1990. It was perfectly positioned both to drive the Second Home Computer Revolution and to benefit from it. Small wonder that Microsoft undertook a dramatic branding overhaul this year, striving to project a cooler, more entertaining image — an image appropriate for a company which marketed not to other companies but to individual consumers. One might say that the Microsoft we still know today was born on May 22, 1990, when Bill Gates strode onto a stage — tellingly, not a stage at Comdex or some other stodgy business-oriented computing event — to introduce the world to Windows 3.0 over a backdrop of confetti cannons, thumping music, and huge projection screens.

The delirious sales of Windows 3.0 that followed were not — could not be, given their quantity — driven exclusively by sales to corporate America. The world of computing had turned topsy-turvy; consumer computing was where the real action was now. Even as they continued to own business-oriented personal computing, Microsoft suddenly dominated in the home as well, thanks to the capitulation without much of a fight of all of the potential rivals to MS-DOS and Windows. Countless copies of Windows 3.0 were sold by Microsoft directly to Joe Public to install on his existing home computer, through a toll-free hotline they set up for the purpose. (“Have your credit card ready and call!”) Even more importantly, as new computers entered American homes in mass quantities for the second time in history, they did so with Windows already on their hard drives, thanks to Microsoft’s longstanding deals with the companies that made them.

In April of 1992, Windows 3.1 appeared, sporting as one of its most important new features a set of “multimedia extensions” — this meaning tools for recording and playing back sounds, for playing audio CDs, and, most of all, for running a new generation of CD-ROM-based software filled with digitized voices and music and video clips — which were plainly aimed at the home rather than the business user. Although Windows 3.1 wasn’t as dramatic a leap forward as its predecessor had been, Microsoft nevertheless hyped it to the skies in the mass media, rolling out an $8 million television-advertising campaign among other promotional strategies that would have been unthinkable from the business-focused Microsoft of just a few years earlier. It sold even faster than had its predecessor.

A Quick Tour of Windows for Workgroups 3.1


Released in April of 1992, Windows 3.1 was the ultimate incarnation of Windows’s third generation. (A version 3.11 was released the following year, but it confined itself to bug fixes and modest performance tweaks, introducing no significant new features.) It dropped support for 8088-based machines, and with it the old “real mode” of operation; it now ran only in protected mode or 386 enhanced mode. It made welcome strides in terms of stability, even as it still left much to be desired on that front. And this Windows was the last to be sold as an add-on to an MS-DOS which had to be purchased separately. Consumer-grade incarnations of Windows would continue to be built on top of MS-DOS for the rest of the decade, but from Windows 95 on Microsoft would do a better job of hiding their humble foundation by packaging the whole software stack together as a single product.

Stuff like this is the reason Windows always took such a drubbing in comparison to other, slicker computing platforms. In truth, Microsoft was doing the best they could to support a bewildering variety of hardware, a problem with which vendors of turnkey systems like Apple didn’t have to contend. Still, it’s never a great look to have to tell your customers, “If this crashes your computer, don’t worry about it, just try again.” Much the same advice applied to daily life with Windows, noted the scoffers.

Microsoft was rather shockingly lax about validating Windows 3 installations. The product had no copy protection of any sort, meaning one person in a neighborhood could (and often did) purchase a copy and share it with every other house on the block. Others in the industry had a sneaking suspicion that Microsoft really didn’t mind that much if Windows was widely pirated among their non-business customers — that they’d rather people run pirated copies of Windows than a competing product. It was all about achieving the ubiquity which would open the door to all sorts of new profit potential through the sale of applications. And indeed, Windows 3 was pirated like crazy, but it also became thoroughly ubiquitous. As for the end to which Windows’s ubiquity was the means: by the time applications came to represent 25 percent of Microsoft’s unit sales, they already accounted for 51 percent of their revenue. Bill Gates always had an instinct for sniffing out where the money was.

Probably the most important single enhancement in Windows 3.1 was its TrueType fonts. The rudimentary bitmap fonts which shipped with older versions looked… not all that nice on the screen or on the page, reportedly due to Bill Gates’s adamant refusal to pay a royalty for fonts to an established foundry like Adobe, as Apple had always done. This decision led to a confusion of aftermarket fonts in competing formats. If you used some of these more stylish fonts in a document, you couldn’t share that document with anyone else unless she also had installed the same fonts. So, you could either share ugly documents or keep nice-looking ones to yourself. Some choice! Thankfully, TrueType came along to fix all that, giving Macintosh users at least one less thing to laugh at when it came to Windows.

The TrueType format was the result of an unusual cooperative project led by Microsoft and Apple — yes, even as they were battling one another in court. The system of glyphs and the underlying technology to render them were intended to break the stranglehold Adobe Systems enjoyed over high-end printing; Adobe charged a royalty of up to $100 per gadget that employed their own PostScript font system, and were widely seen in consequence as a retrograde force holding back the entire desktop-publishing and GUI ecosystem. TrueType would succeed splendidly in its monopoly-busting goal, to such an extent that it remains the standard for fonts on Microsoft Windows and Apple’s OS X to this day. Bill Gates, no stranger to vindictiveness, joked that “we made [the widely disliked Adobe head] John Warnock cry.”

The other big addition to Windows 3.1 was the “multimedia extensions.” These let you do things like record sounds using an attached microphone and play your audio CDs on your computer. That they were added to what used to be a very businesslike operating environment says much about how important home users had become to Microsoft’s strategy.

In a throwback to an earlier era of computing, MS-DOS still shipped with a copy of BASIC included, and Windows 3.1 automatically found it and configured it for easy access right out of the box — this even though home computing was now well beyond the point where most users would ever try to become programmers. Bill Gates’s sentimental attachment to BASIC, the language on which he built his company before the IBM PC came along, has often been remarked upon by his colleagues, especially since he wasn’t normally a man given to much sentimentality. It was the widespread perception of Borland’s Turbo Pascal as the logical successor to BASIC — the latest great programming tool for the masses — that drove the longstanding antipathy between Gates and Borland’s flamboyant leader, Philippe Kahn. Later, it was supposedly at Gates’s insistence that Microsoft’s Visual BASIC, a Pascal-killer which bore little resemblance to BASIC as most people knew it, nevertheless bore the name.

Windows for Workgroups — a separate, pricier version of the environment aimed at businesses — was distinguished by having built-in support for networking. This wasn’t, however, networking as we think of it today. It was rather intended to connect machines together only in a local office environment. No TCP/IP stack — the networking technology that powers the Internet — was included.

But you could get on the Internet with the right additional software. Here, just for fun, I’m trying to browse the web using Internet Explorer 5 from 1999, the last version made for Windows 3. Google is one of the few sites that work at all — albeit, as you can see, not very well.

All this success — this reality of a single company now controlling almost all personal computing, in the office and in the home — brought with it plenty of blowback. The metaphor of Microsoft as the Evil Empire, and of Bill Gates as the computer industry’s very own Darth Vader, began in earnest in these years of Windows 3’s dominance. Neither Gates nor his company had ever been beloved among their peers, having always preferred making money to making friends. Now, though, the naysayers came out in force. Bob Metcalfe, a Xerox PARC alum famous in hacker lore as the inventor of the Ethernet networking protocol, talked about Microsoft’s expanding “death grip” on innovation in the computer industry. Indeed, zombie imagery was prevalent among many of Microsoft’s rivals; Mitch Kapor of Lotus called the new Windows-driven industry “the kingdom of the dead”: “The revolution is over, and free-wheeling innovation in the software industry has ground to a halt.” Any number of anonymous commenters fantasized about doing Gates one form of bodily harm or another. “It’s remarkable how widespread the negative feelings toward Microsoft are,” mused Stewart Alsop. “No one wants to work with Microsoft anymore,” said noted Gates-basher Philippe Kahn of Borland. “We sure won’t. They don’t have any friends left.” Channeling such sentiments, Business Month magazine cropped Gates’s nerdy face onto a body-builder’s body and labeled him the “Silicon Bully” on its cover: “How long can Bill Gates kick sand in the face of the computer industry?”

Setting aside the jealousy that always follows great success, even setting aside for the moment the countless ways in which Microsoft really did play hardball with their competitors, something about Bill Gates rubbed many people the wrong way on a personal, visceral level. In keeping with their new, consumer-friendly image, Microsoft had hired consultants to fix up his wardrobe and work on his speaking style — not to mention to teach him the value of personal hygiene — and he could now get through a canned presentation ably enough. When it came to off-the-cuff interactions, though, he continued to strike many as insufferable. To judge him on the basis of his weedy physique and nasal speaking voice — the voice of the kid who always had to show how smart he was to the rest of the class — was perhaps unfair. But one certainly could find him guilty of a thoroughgoing lack of graciousness.

His team of PR coaches could have told him that, when asked who had contributed the most to the personal-computer revolution, he ought to politely decline to answer, or, even better, modestly reflect on the achievements of someone like his old friend Steve Jobs. But they weren’t in the room with him one day when that exact question was put to him by a smiling reporter, and so, after acknowledging that it really should be answered by “others less biased than me,” he proceeded to make the case for himself: “I will say that I started the first microcomputer-software company. I put BASIC in micros before 1980. I was influential in making the IBM PC a 16-bit machine. My DOS is in 50 million computers. I wrote software for the Mac.” I, I, I. Everything he said was true, at least if one presumed that “I” meant “Bill Gates and the others at Microsoft” in this context. Yet there was something unappetizing about this laundry list of achievements he could so easily rattle off, and about the almost pathological competitiveness it betrayed. We love to praise ambition in the abstract, but most of us find such naked ambition as that constantly displayed by Gates profoundly off-putting. The growing dislike for Microsoft in the computer industry and even in much of the technology press was fueled to a large extent by a personal aversion to their founder.

Which isn’t to say that there weren’t valid grounds for concern when it came to Microsoft’s complete dominance of personal-computer system software. Comparisons to the Standard Oil trust of the Gilded Age were in the air, so much so that by 1992 it was already becoming ironically useful for Microsoft to keep the Macintosh and OS/2 alive and allow them their paltry market share, just so the alleged monopolists could point to a couple of semi-viable competitors to Windows. It was clear that Microsoft’s ambitions didn’t end with controlling the operating system installed on the vast majority of computers in the country and, soon, the world. On the contrary, that was only a means to their real end. They were already using their status as the company that made Windows to cut deep into the application market, invading territory that had once belonged to the likes of Lotus 1-2-3 and WordPerfect. Now, those names were slowly being edged out by Microsoft Excel and Microsoft Word. Microsoft wanted to own more or less all of the software on your computer. Any niche left for outside developers in computing’s new order, it seemed, would exist only at Microsoft’s sufferance. The established makers of big-ticket business applications would have been chilled if they had been privy to the words spoken by Mike Maples, Microsoft’s head of applications, to his own people: “If someone thinks we’re not after Lotus and after WordPerfect and after Borland, they’re confused. My job is to get a fair share of the software applications market, and to me that’s 100 percent.” This was always the problem with Microsoft. They didn’t want to compete in the markets they entered; they wanted to own them.

Microsoft’s control of Windows gave them all sorts of advantages over other application developers which may not have been immediately apparent to the non-technical public. Take, for instance, the esoteric-sounding technology of Object Linking and Embedding, or OLE, which debuted with Windows 3.0 and still exists in current versions. OLE allows applications to share all sorts of dynamic data between themselves. Thanks to it, a word-processor document can include charts and graphs from a spreadsheet, with the one updating itself automatically when the other gets updated. Microsoft built OLE support into new versions of Word and Excel that accompanied Windows 3.0’s release, but refused for many months to tell outside developers how to use it. Thus Microsoft’s applications had hugely desirable capabilities which their competitors would lack for a long, long time. Similar stories played out again and again, driving the competition to distraction while Bill Gates shrugged his shoulders and played innocent. “We bend over backwards to make sure we’re not getting special advantage,” he said, while Steve Ballmer talked about a “Chinese wall” between Microsoft’s application and system programmers — a wall which people who had actually worked there insisted simply didn’t exist.

On March 1, 1991, news broke that the Federal Trade Commission was investigating Microsoft for anti-trust violations and monopolistic practices. The investigators specifically pointed to that agreement with IBM that had been announced at the Fall 1989 Comdex, to target low-end computers with Microsoft’s Windows and high-end computers with the two companies’ joint operating system OS/2 — ironically, an “anti-competitive” initiative that Microsoft had never taken all that seriously. Once the FTC started digging, however, they found that there was plenty of other evidence to be turned up, from both the previous decade and this new one.

There was, for instance, little question that Microsoft had always leveraged their status as the maker of MS-DOS in every way they could. When Windows 3.0 came out, they helped to ensure its acceptance by telling hardware makers that the only way they would continue to be allowed to buy MS-DOS for pre-installation on their computers was to buy Windows and start pre-installing that too. Later, part of their strategy for muscling into the application market was to get Microsoft Works, in effect a stripped-down take on the full Microsoft Office suite, pre-installed on computers as well. How many people were likely to go out and buy Lotus 1-2-3 or WordPerfect when they already had similar software on their computer? Of course, if they did need something more powerful, said the little card included with every computer, they could have the more advanced version of Microsoft Works for the cost of a nominal upgrade fee…

And there were other, far more nefarious stories to tell. There was, for instance, the tale of DR-DOS, a 1988 alternative to MS-DOS from Digital Research which was compatible with Microsoft’s operating system but offered a lot of welcome enhancements. Microsoft went after any clone maker who tried to offer DR-DOS pre-installed on their machines with both carrots (they would undercut Digital Research’s price to the point of basically giving MS-DOS away if necessary) and sticks (they would refuse to license them the upcoming, hotly anticipated Windows 3.0 if they persisted in their loyalty to Digital Research). Later, once the DR-DOS threat had been quelled, most of the features that had made it so desirable turned up in the next release of MS-DOS. Digital Research — a company which Bill Gates seemed to delight in tormenting — had once again been, in the industry’s latest parlance, “Microslimed.”

But Digital Research was neither the first nor the last such company. Microsoft, it was often claimed, had a habit of negotiating with smaller companies under false pretenses, learning what made their technology tick under the guise of due diligence, and then launching their own product based on what they had learned. In early 1990, Microsoft told Intuit, a maker of a hugely successful money-management package called Quicken, that they were interested in acquiring them. After several weeks of negotiations, including lots of discussions about how Quicken was programmed, how it was used in the wild, and what marketing strategies had been most effective, Microsoft abruptly broke off the talks, saying they “couldn’t find a way to make it work.” Before the end of 1990, they had announced Microsoft Money, their own money-management product.

More and more of these types of stories were being passed around. A startup who called themselves Go came to Microsoft with a pen-based computing interface. (Pen-based computing was all the rage at the time; Apple as well was working on something called the Newton, a sort of pen-based proto-iPad that, like all of the other initiatives in this direction, would turn into an expensive failure.) After spending weeks examining Go’s technology, Microsoft elected not to purchase it or sign them to a contract. But, just days later, they started an internal project to create a pen-based interface for Windows, headed by the engineer who had been in charge of “evaluating” Go’s technology. A meme was emerging, by no means entirely true but perhaps not entirely untrue either, of Microsoft as a company better at doing business than doing technology, one that would rather copy the innovations of others than do the hard work of coming up with their own ideas.

In a way, though, this very quality was a source of strength for Microsoft, the reason that corporate clients flocked to them now like they once had to IBM; the mantra that “no one ever got fired for buying IBM” was fast being replaced in corporate America by “no one ever got fired for buying Microsoft.” “We don’t do innovative stuff, like completely new revolutionary stuff,” Bill Gates admitted in an unguarded moment. “One of the things we are really, really good at doing is seeing what stuff is out there and taking the right mix of good features from different products.” For businesses and, now, tens of millions of individual consumers, Microsoft really was the new IBM: they were safe. You bought a Windows machine not because it was the slickest or sexiest box on the block but because you knew it was going to be well-supported, knew there would be software on the shelves for it for a long time to come, knew that when you did decide to upgrade the transition would be a relatively painless one. You didn’t get that kind of security from any other platform. If Microsoft’s business practices were sometimes a little questionable, even if Windows crashed sometimes or kept on running inexplicably slower the longer you had it on your computer, well, you could live with that. Alan Boyd, an executive at Microsoft for a number of years:

Does Bill have a vision? No. Has he done it the right way? Yes. He’s done it by being conservative. I mean, Bill used to say to me that his job is to say no. That’s his job.

Which is why I can understand [that] he’s real sensitive about that. Is Bill innovative? Yes. Does he appear innovative? No. Bill personally is a lot more innovative than Microsoft ever could be, simply because his way of doing business is to do it very steadfastly and very conservatively. So that’s where there’s an internal clash in Bill: between his ability to innovate and his need to innovate. The need to innovate isn’t there because Microsoft is doing well. And innovation… you get a lot of arrows in your back. He lets things get out in the market and be tried first before he moves into them. And that’s valid. It’s like IBM.

Of course, the ethical problem with this approach to doing business was that it left no space for the little guys who actually had done the hard work of innovating the technologies which Microsoft then proceeded to co-opt. “Seeing what stuff is out there and taking it” — to use Gates’s own words against him — is a very good way indeed to make yourself hated.

During the 1990s, Windows was widely seen by the tech intelligentsia as the archetypal Microsoft product, an unimaginative, clunky amalgam of other people’s ideas. In his seminal (and frequently hilarious) 1999 essay “In the Beginning… Was the Command Line,” Neal Stephenson described operating systems in terms of vehicles. Windows 3 was a moped in this telling, “a Rube Goldberg contraption that, when bolted onto a three-speed bicycle [MS-DOS], enabled it to keep up, just barely, with Apple-cars. The users had to wear goggles and were always picking bugs out of their teeth while Apple owners sped along in hermetically sealed comfort, sneering out the windows. But the Micro-mopeds were cheap, and easy to fix compared with the Apple-cars, and their market share waxed.”

And yet if we wished to identify one Microsoft product that truly was visionary, we could do worse than boring old ramshackle Windows. Bill Gates first put his people to work on it, we should remember, before the original IBM PC and the first version of MS-DOS had even been released — so strongly did he believe even then, just as much as that more heralded visionary Steve Jobs, that the GUI was the future of computing. By the time Windows finally reached the market four years later, it had had occasion to borrow much from the Apple Macintosh, the platform with which it was doomed always to be unfavorably compared. But Windows 1 also included vital features of modern computing that the Mac did not, such as multitasking and virtual memory. No, it didn’t take a genius to realize that these must eventually make their way to personal computers; Microsoft had fine examples of them to look at from the more mature ecosystems of institutional computing, and thus could be said, once again, to have implemented and popularized but not innovated them.

Still, we should save some credit for the popularizers. Apple, building upon the work done at Xerox, perfected the concept of the GUI to such an extent in LisaOS and MacOS that one could say that all of the improvements made to it since have been mere details. But, entrenched in a business model that demanded high profit margins and proprietary hardware, they were doomed to produce luxury products rather than ubiquitous ones. This was the logical flaw at the heart of the much-discussed “1984” television advertisement and much of the rhetoric that continued to surround the Macintosh in the years that followed. If you want to change the world through better computing, you have to give the people a computer they can afford. Thanks to Apple’s unwillingness or inability to do that, it was Microsoft that brought the GUI to the world in their stead — in however imperfect a form.

The rewards for doing so were almost beyond belief. Microsoft’s revenues climbed by roughly 50 percent every year in the few years after the introduction of Windows 3.0, as the company stormed past Boeing to become the biggest corporation in the Pacific Northwest. Someone who had invested $1000 in Microsoft in 1986 would have seen her investment grow to $30,000 by 1991. By the same point, over 2000 employees or former employees had become millionaires. In 1992, Bill Gates was anointed by Forbes magazine the richest person in the world, a distinction he would enjoy for the next 25 years by most reckonings. The man who had been so excited when his company grew to be bigger than Lotus in 1987 now owned a company that was larger than the next five biggest software publishers combined. And as for Lotus alone? Well, Microsoft was now over four times their size. And the Decade of Microsoft had only just begun.

In 2000, the company’s high-water point, an astonishing 97 percent of all consumer computing devices would have some sort of Microsoft software installed on them. In the vast majority of cases, of course, said software would include Microsoft Windows. There would be all sorts of grounds for concern about this kind of dominance even had it not been enjoyed by a company with such a reputation for playing rough as Microsoft. (Or would a company that didn’t play rough ever have gotten to be so dominant in the first place?) In future articles, we’ll be forced to spend a lot more time dealing with Microsoft’s various scandals and controversies, along with reactions to them that took the form of legal challenges from the American government and the European Union and the rise of an alternative ideology of software called the open-source movement.

But, as we come to the end of this particular series of articles on the early days of Windows, we really should give Bill Gates some credit as well. Had he not kept doggedly on with Windows in the face of a business-computing culture that for years wanted nothing to do with it, his company could very easily have gone the way of VisiCorp, Lotus, WordPerfect, Borland, and, one might even say, IBM and Apple for a while: a star of one era of computing that was unable to adapt to the changing times. Instead, by never wavering in his belief that the GUI was computing’s future, Gates conquered the world. That he did so while still relying on the rickety foundation of MS-DOS is, yes, kind of appalling for anyone who values clean, beautiful computer engineering. Yet it also says much about his programmers’ creativity and skill, belying any notion of Microsoft as a place bereft of such qualities. Whatever else you can say about the sometimes shaky edifices that were Windows 3 and its next few generations of successors, the fact that they worked at all was something of a minor miracle.

Most of all, we should remember the huge role that Windows played in bringing computing home once again — and, this time, permanently. The third generation of Microsoft’s GUI arrived at the perfect time, just when the technology and the culture were ready for it. Once a laughingstock, Windows became for quite some time the only face of computing many people knew — in the office and in the home. Who could have dreamed it? Perhaps only one person: a not particularly dreamy man named Bill Gates.

(Sources: the books Hard Drive: Bill Gates and the Making of the Microsoft Empire by James Wallace and Jim Erickson, Gates: How Microsoft’s Mogul Reinvented an Industry and Made Himself the Richest Man in America by Stephen Manes and Paul Andrews, and In the Beginning… Was the Command Line by Neal Stephenson; Computer Power User of October 2004; InfoWorld of May 20 1991 and January 31 1994. Finally, I owe a lot to Nathan Lineback for the histories, insights, comparisons, and images found at his wonderful online “GUI Gallery.”)

 
 


Doing Windows, Part 7: Third Time’s the Charm

Microsoft entered the last year of the 1980s looking toward a new decade that seemed equally rife with opportunity and danger. On the one hand, profits were up, and Bill Gates and any number of his colleagues could retire as very rich men indeed even if it all ended tomorrow — not that that outcome looked likely. The company was coming to be seen as the standard setter of the personal-computer industry, more important even than an IBM that had been gravely weakened by the PS/2 debacle and the underwhelming reception of OS/2. Microsoft Windows, now once again viewed by Gates as the keystone of his company’s future after the disappointment that had been OS/2, stood to benefit greatly from Microsoft’s new clout. Windows 2 had gained some real traction, and the upcoming Windows 3 was being talked about with mounting expectation by an MS-DOS marketplace that finally seemed to be technologically and psychologically ready for a GUI environment.

The more worrisome aspects of the future, on the other hand, all swirled around the other two most important companies in American business computing. Through most of the decade now about to pass away, Microsoft had managed to maintain cordial if not always warm relationships with both IBM and Apple — until, that is, the latter declared war by filing a copyright-infringement lawsuit against Windows in 1988. The stakes of that lawsuit were far greater than any mere monetary settlement; they were rather the very right of Windows to continue to exist. It wasn’t at all clear what Microsoft could or would do next if they lost the case and with it Windows. Meanwhile their relationship with IBM was becoming almost equally strained. Disagreements about the technical design of OS/2, along with disputes over the best way to market it, had caused Microsoft to assume the posture of little more than subcontractors working on IBM’s operating system of the future at the same time that they pushed hard on their own Windows. OS/2 and Windows, those two grand bids for the future of mainstream business computing, seemingly had to come into conflict with one another at some point. What happened then? IBM’s reputation had unquestionably been tarnished by recent events, but at the end of the day they were still IBM, the legendary Big Blue, the most important and influential company in the history of computing to date. Was Microsoft ready to take on both Apple and IBM as full-fledged enemies?

So, the people working on Windows 3 had plenty of potential distractions to contend with as they tried to devise a GUI environment good enough to leap into the mainstream. “Just buckle down and make it as good as possible,” said Bill Gates, “and let our lawyers and business strategists deal with the distractions.” By all indications, the Windows people managed to do just that; there’s little sign that all of the external chaos had much effect on their work.

That said, when they did raise their heads from their keyboards, they could take note of encouraging signs that Microsoft might be able to navigate through their troubles with Apple and IBM. As I described in my previous article, on March 18, 1989, Judge William Schwarzer ruled that the 1985 agreement between the two companies applied only to those aspects of Windows 2 — and by inference of an eventual Windows 3 — which had also been a part of Windows 1. Thus the 1985 agreement wouldn’t be Microsoft’s ticket to a quick victory; it appeared that they would rather have to invalidate the very premise of “visual copyright” as applied by Apple in this case in order to win. On July 21, however, Microsoft got some more positive news when Judge Schwarzer made his ruling on exactly which features of Windows 2 weren’t covered by the old agreement. He threw out no less than 250 of Apple’s 260 instances of claimed infringement, vastly simplifying the case — and vastly reducing the amount of damages which Apple could plausibly ask for. The case remained a potential existential threat to Windows, but disposing of what Microsoft’s lawyers trumpeted was the “vast bulk” of it at one stroke did give some reason to take heart. Now, what remained of the case seemed destined to grind away quietly in the background for a long, long time to come — a Sword of Damocles perhaps, but one which Bill Gates at any rate was determined not to let affect the rest of his company’s strategy. If he could make Windows a hit — a fundamental piece of the world’s computing infrastructure — while the case was still grinding on, it would be very difficult indeed for any judge to order the nuclear remedy of banning Microsoft from continuing to sell it.

Microsoft’s strategy with regard to IBM was developing along a similarly classic Gatesian line. Inveterate bets-hedger that he was, Gates wasn’t willing to cut ties completely with IBM, just in case OS/2 and possibly even PS/2 turned around and some of Big Blue’s old clout returned. Instead he was careful to maintain at least a semblance of good relations, standing ready to jump off the Windows bandwagon and back onto OS/2, if it should prove necessary. He was helped immensely in this by the unlamented departure from IBM of Bill Lowe, architect of the disastrous PS/2 strategy, an executive with whom Gates by this point was barely on speaking terms. Replacing Lowe as head of IBM’s PC division was one Jim Cannavino, a much more tech-savvy executive who trusted Gates not in the slightest but got along with him much better one-on-one, and was willing to continue to work with him for the time being.

At the Fall 1989 Comdex, the two companies made a big show of coming together — the latest of the series of distancings and rapprochements that had always marked their relationship. They trotted out a new messaging strategy that had Windows as the partnership’s “low-end” GUI and OS/2’s Presentation Manager as the high-end GUI of the future, the latter suitable at present only for machines with an 80386 processor and at least 4 MB of memory. (The former specification was ironic in light of all the bickering IBM and Microsoft had done in earlier years on the issue of supporting the 80286 in OS/2.) The press release stated that “Windows is not intended to be used as a server, nor will future releases contain advanced OS/2 features [some of which were only planned for future OS/2 releases at this point] such as distributed processing, the 32-bit flat memory model, threads, or long filenames.” The pair even went so far as to recommend that developers working on really big, ambitious applications for the longer-term future focus their efforts on OS/2. (“No advice,” InfoWorld magazine would wryly note eighteen months later, “could have been worse.”)

But Microsoft’s embrace of the plan seemed tentative at best even in the moment. It certainly didn’t help IBM’s comfort level when Steve Ballmer, in an unguarded moment, blurted out, “Face it: in the future, everyone’s gonna run Windows.” Likewise, Bill Gates showed little personal enthusiasm for this idea of Windows as the cut-price, temporary alternative to OS/2 and the Presentation Manager. As usual, he was just trying to keep everyone placated while he worked out for himself what the future held. And as time went on, he seemed to find more and more to like about the idea of a Windows-centric future. Several months after the Comdex show, he got slightly drunk at a big industry dinner, and confessed to rather more than he might have intended. “Six months after Windows 3 ships,” he said, “it will have a greater market share than Presentation Manager will ever have — OS/2 applications won’t have a chance.” He further admitted to deliberately dragging his feet on updates to OS/2 in order to ensure that Windows 3.0 got all the attention in 1990.

He needn’t have worried too much on that front: press coverage of the next Windows was reaching a fever pitch, and evincing little of the skepticism that had accompanied Windows 1 and 2. Throughout 1989, rumors and even the occasional technical document leaked out of Microsoft — and not, one senses, by accident. Carefully timed grist for the rumor mill though it may have been, the news was certainly intriguing on its own merits. The press wrote that Tandy Trower, the manager who had done the oft-thankless job of bringing Windows 1 and 2 to fruition, had transferred off the team, but the team itself was growing like never before, and now being personally supervised once again by the ever-flexible Steve Ballmer, who had left Microsoft’s OS/2 camp and rejoined the Windows zealots. Ballmer had hired visual designer Susan Kare, known throughout the industry as the author of MacOS’s clean and crisp look, to apply some of the same magic to their own GUI.

But for those who understood Windows’s longstanding technical limitations, another piece of news was the most intriguing and exciting of all. Already before the end of 1989, Microsoft started talking openly about their plans to accomplish two things which had heretofore been considered mutually exclusive: to continue running Windows on top of hoary old MS-DOS, and yet to shatter the 640 K barrier once and for all.

It had all begun back in June of 1988, when Microsoft programmer David Weise, one of the former Dynamical Systems Research people who had proved such a boon to Windows, bumped into an old friend named Murray Sargent, a physics professor at the University of Arizona who happened to do occasional contract programming for Microsoft on the side. At the moment, he told Weise, he was working on adding new memory-management functionality to Microsoft’s CodeView debugger, using an emerging piece of software technology known as a “DOS extender,” which had been pioneered over the last couple of years by an innovative company in the system-software space called Quarterdeck Office Systems.

As I’ve had occasion to describe in multiple articles by now, the most crippling single disadvantage of MS-DOS had always been its inability to directly access more than 640 K of memory, due to its origins on the Intel 8088 microprocessor, which had a sharply limited address space. Intel’s newer 80286 and 80386 processors could run MS-DOS only in their 8088-compatible “real” mode, where they too were limited to 640 K, rather than being able to use their “protected” mode to address up to 16 MB (in the case of the 80286) or 4 GB (in the case of the 80386). Because they ran on top of MS-DOS, most versions of Windows as well had been forced to run in real mode — the sole exception was Windows/386, which made extensive use of the 80386’s virtual mode to ease some but not all of the constant headache that was memory management in the world of MS-DOS. Indeed, when he asked himself what were the three biggest aggravations which working with Windows entailed, Weise had no doubt about the answer: “memory, memory, and memory.” But now, he thought that Sargent might just have found a solution through his tinkering with a DOS extender.

It turned out that the very primitiveness of MS-DOS could be something of a saving grace. Its functions mostly dealt only with the basics of file management. Almost all of the other functions that we think of as rightfully belonging to an operating system were handled either by an extended operating environment like Windows, or not handled at all — i.e., left to the programmer to deal with by banging directly on the hardware. Quarterdeck Office Systems had been the first to realize that it should be possible to run the computer most of the time in protected mode, if only some way could be found to down-shift into real mode when there was a need for MS-DOS, as when a file on disk needed to be read from or written to. This, then, was what a DOS extender facilitated. Its code was stashed into an unused corner of memory and hooked into the function calls that were used for communicating with MS-DOS. That done, the processor could be switched into protected mode for running whatever software you liked with unfettered access to memory beyond 640 K. When said software tried to talk to MS-DOS after that, the DOS extender trapped that function call and performed some trickery: it copied any data that MS-DOS might need to access in order to carry out the task into the memory space below 640 K, switched the CPU into real mode, and then reissued the function call to let MS-DOS act on that data. Once MS-DOS had done its work, the DOS extender switched the CPU back into protected mode, copied any necessary data back to where the protected-mode software expected it to be, and returned control to it.
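
To make that sequence concrete, here is a purely illustrative sketch in C of the trap-and-forward dance just described. Every function name in it is hypothetical; real DOS extenders did this work in assembly language, with far more bookkeeping around interrupt reflection and saved processor state, but the shape of the logic was the same.

```c
/* Hypothetical sketch of a DOS extender servicing an MS-DOS (INT 21h) call
   issued by a protected-mode program. None of these helpers are real APIs;
   they stand in for the assembly routines an actual extender would contain. */

typedef struct {
    unsigned short ax, bx, cx, dx;   /* register values MS-DOS expects */
    unsigned short ds, es;
} RealModeRegs;

/* Low-level primitives a real extender would implement in assembly. */
extern void copy_to_low_memory(const void *src, unsigned len);  /* stage data below 640 K */
extern void copy_from_low_memory(void *dst, unsigned len);      /* retrieve any results */
extern void switch_to_real_mode(void);                          /* costly on an 80286 */
extern void switch_to_protected_mode(void);
extern void issue_int21(RealModeRegs *regs);                    /* reissue the call to MS-DOS */

/* Invoked whenever the protected-mode program tries to call MS-DOS. */
void dos_call_from_protected_mode(RealModeRegs *regs, void *buffer, unsigned length)
{
    copy_to_low_memory(buffer, length);    /* 1. put the data where MS-DOS can see it */
    switch_to_real_mode();                 /* 2. drop the CPU back into real mode */
    issue_int21(regs);                     /* 3. let MS-DOS do its work */
    switch_to_protected_mode();            /* 4. climb back up into protected mode */
    copy_from_low_memory(buffer, length);  /* 5. hand the results back to the caller */
}
```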

One could argue that a DOS extender was just as much a hack as any of the other workarounds for the 640 K barrier; it certainly wasn’t as efficient as a more straightforward contiguous memory model, like that enjoyed by OS/2, would have been. It was particularly inefficient on the 80286, which unlike the 80386 had to perform a costly processor reset every time it needed to drop from protected mode back into real mode. But even so, it was clearly a better hack than any of the ones that had been devised to date. It finally let Intel’s more advanced processors run, most of the time anyway, as their designers had intended them to run. And from the programmer’s perspective it was, with only occasional exceptions, transparent; you just asked for the memory you needed and went about your business from there, and let the DOS extender worry about all the details going on behind the scenes. The technology was still in an imperfect state that summer of 1988, but if it could be perfected it would be a dream come true for programmers, the next best thing to a world completely free of MS-DOS and its limitations. And it might just be a dream come true for Windows as well, thought David Weise.

Quarterdeck may have pioneered the idea of the DOS extender, but their implementation was lacking in the view of Weise and his sometime colleague Murray Sargent. With Sargent’s help in the early stages, Weise spent three feverish months of nights and weekends implementing his own DOS extender and then a protected-mode version of Windows that used it. “We’re not gonna ask anybody, and then if we’re done and they shoot it down, they shoot it down,” he remembers thinking.

There are all these little gotchas throughout it, but basically you just work through the gotchas one at a time. You just close your eyes, and you just charge ahead. You don’t think of the problems, or you’re not gonna do it. It’s fun. Piece by piece, it’s coming. Okay, here come the keyboard drivers, here come the display drivers, here comes GDI — oh, look, here’s USER!

By the fall of 1988, Weise had his secret project far enough along to present to Bill Gates, Steve Ballmer, and the rest of the Windows team. In addition to plenty of still-unresolved technical issues, the question of whether a protected-mode Windows would step too much on the toes of OS/2, an operating system whose allure over MS-DOS was partially that it could run in protected mode all the time, haunted the discussion. But Gates, exasperated beyond endurance by IBM, wasn’t much inclined to defer to them anymore. Never a boss known for back-patting, he told Weise simply, “Okay, let’s do it.”

Microsoft would eventually release their approach to the DOS extender as an open protocol called the “DOS Protected Mode Interface,” or DPMI. It would change the way MS-DOS-based computers were programmed forever, not only inside Windows but outside of it as well. The revolutionary non-Windows game Doom, for example, would have been impossible without the standalone DOS extender DOS/4GW, which implemented the DPMI specification and was hugely popular among game programmers in particular for years. So, DPMI became by far the most important single innovation of Windows 3.0. Ironically given that it debuted as part of an operating environment designed to hide the ongoing existence of MS-DOS from the user, it single-handedly made MS-DOS a going concern right through the decade of the 1990s, giving the Quick and Dirty Operating System That Refused to Die a lifespan absolutely no one would ever have dreamed for it back in 1981.
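
To give a sense of what this looked like from the application side, here is a minimal sketch of a DPMI memory allocation, written against the DJGPP C library’s <dpmi.h> wrappers around the raw INT 31h interface; the choice of toolchain is mine, purely for illustration.

```c
/* A minimal sketch, assuming the DJGPP toolchain and a DPMI host already in
   place (Windows 3.x's own, or a standalone extender). It simply allocates
   and frees a block of memory far beyond the old 640 K limit. */
#include <stdio.h>
#include <dpmi.h>

int main(void)
{
    __dpmi_meminfo block;
    block.size = 4L * 1024 * 1024;               /* ask for 4 MB */

    if (__dpmi_allocate_memory(&block) != 0) {   /* wraps DPMI function 0501h */
        printf("DPMI host refused the allocation\n");
        return 1;
    }
    printf("Got 4 MB at linear address %08lx (handle %08lx)\n",
           block.address, block.handle);

    /* Before touching the memory, a program would still map this linear
       address into one of its segment descriptors; the DPMI host does the
       mode-switching and bookkeeping behind the scenes. */

    __dpmi_free_memory(block.handle);            /* wraps DPMI function 0502h */
    return 0;
}
```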

But the magic of DPMI wouldn’t initially apply to all Windows systems. Windows 3.0 could still run, theoretically at least, on even a lowly 8088-based PC compatible from the early 1980s — a computer whose processor didn’t have a protected mode to be switched into. For all that he had begged and cajoled IBM to make OS/2 an 80386-exclusive operating system, Bill Gates wasn’t willing to abandon less powerful machines for Microsoft’s latest operating environment. In addition to fueling conspiracy theories that Gates had engineered OS/2 to fail from the beginning, this data point did fit the brief-lived official line that OS/2 was for high-end machines, Windows for low-end machines. Yet the real reasons behind it were more subtle. Partially due to a global chip shortage that made all sorts of computers more expensive in the late 1980s and briefly threatened to derail the inexorable march of Moore’s Law, users hadn’t flocked to the 80386-based machines quite as quickly as Microsoft had anticipated when the OS/2 debate was raging back in 1986. The fattest part of the market’s bell curve circa 1989 was still the 80286 generation of computers, with a smattering of pace-setting 80386s and laggardly 8088s on either side of them. Microsoft thus ironically judged the 80386 to be exactly the bridge too far in 1989 that IBM had claimed it to be in 1986. Even before Windows 3.0 came out, the chip shortage was easing and Moore’s Law was getting back on track; Intel started producing their fourth-generation microprocessor, the 80486, in the last weeks of 1989. (The 80486 was far more efficient than its predecessor, boasting roughly twice the throughput when clocked at the same speed. But, unlike the 80286 and 80386, it didn’t sport any new operating modes or fundamentally new capabilities, and thus didn’t demand any special consideration from software like Windows/386 and Windows 3.0 that was already utilizing the 80386 to its full potential.) For the time being, though, Windows was expected to support the full range of MS-DOS-based computers, reaching all the way back to the beginning.

And yet, as we’ve seen, DPMI was just too brilliant an innovation to give up in the name of maintaining compatibility with antiquated 8088-based machines. MS-DOS had for years been forcing owners of higher-end hardware to use their machines in a neutered fashion, and Microsoft wasn’t willing to continue that dubious tradition in the dawning era of Windows. So, they decided to ship three different versions of Windows in every box. When started on an 8088-class machine, or on any machine without memory beyond 640 K, Windows ran in “real mode.” When started on an 80286 with more than 640 K of memory, or on an 80386 with more than 640 K but less than 2 MB of memory, it ran in “standard mode.” And when started on an 80386 with at least 2 MB of memory, it ran in its ultimate incarnation: “386 enhanced mode.”
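
The choice among the three was made automatically at startup, based on the detected processor and installed memory. A hypothetical sketch of that decision, using my own names rather than anything from Microsoft’s actual loader, looks like this:

```c
/* A hypothetical sketch of the mode selection described above; the names
   are mine, and the real Windows startup program made the equivalent
   choice itself based on the detected CPU and memory. */
typedef enum { CPU_8088, CPU_80286, CPU_80386 } Cpu;
typedef enum { MODE_REAL, MODE_STANDARD, MODE_386_ENHANCED } WindowsMode;

WindowsMode choose_windows_mode(Cpu cpu, unsigned long total_memory_kb)
{
    /* An 8088, or any machine with nothing beyond 640 K: real mode. */
    if (cpu == CPU_8088 || total_memory_kb <= 640)
        return MODE_REAL;

    /* An 80386 with at least 2 MB of memory gets the full treatment. */
    if (cpu == CPU_80386 && total_memory_kb >= 2048)
        return MODE_386_ENHANCED;

    /* Otherwise: an 80286, or an 80386 short on memory. */
    return MODE_STANDARD;
}
```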

In both of the latter modes, Windows 3.0 could offer what had long been the Holy Grail for any MS-DOS-hosted GUI environment: an application could simply request as much memory as it needed, without having to worry about what physical addresses that memory included or whether it added up to more than 640 K. (This wasn’t quite the “32-bit flat memory model” which Microsoft had explicitly promised Windows would never include in the joint statement with IBM. That referred to an addressing mode unique to the 80386 and its successors, which allowed them to access up to 4 GB of memory in a very flexible way. Having been written to support the 80286, Windows 3.0, even in 386 enhanced mode, was still limited to 16 MB of memory, and had to use a somewhat more cumbersome form of addressing known as a segmented memory model. Still, it was close enough that it arguably went against the spirit of the statement, something that wouldn’t be lost on IBM.) No earlier GUI environment, from Microsoft or anyone else, had met this standard.
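
In practical terms, this meant the ordinary Windows memory-allocation calls now simply worked, however much was being asked for. A minimal sketch using the standard Win16 API (declarations as in the 3.x SDKs; error handling kept to the bare minimum):

```c
/* A minimal Win16 sketch: ask Windows 3.x for a full megabyte in one call.
   In standard or 386 enhanced mode the request is satisfied from whatever
   memory the machine has, with no conventional/extended/expanded juggling. */
#include <windows.h>

void UseBigBuffer(void)
{
    HGLOBAL hMem = GlobalAlloc(GMEM_MOVEABLE | GMEM_ZEROINIT, 1024L * 1024L);
    if (hMem == NULL)
        return;                          /* the machine really is out of memory */

    {
        void FAR *p = GlobalLock(hMem);  /* pin the block and get a far pointer */
        if (p != NULL) {
            /* ... use up to 1 MB here; crossing 64 K segment boundaries still
               calls for "huge" pointer arithmetic in Win16 code ... */
            GlobalUnlock(hMem);
        }
    }
    GlobalFree(hMem);
}
```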

In 386 enhanced mode, Windows 3.0 also incorporated elements of the earlier Windows/386 for running vanilla MS-DOS applications. Such applications ran in the 80386’s virtual mode; thus Windows 3.0 used all three operating modes of the 80386 in tandem, maximizing the potential of a chip whose specifications owed a lot to Microsoft’s own suggestions. When running on an 8088 or 80286, Windows still served as little more than a task launcher for MS-DOS applications, but on an 80386 with enough memory they multitasked as seamlessly as native Windows applications — or perhaps more so: vanilla MS-DOS applications running inside their virtual machines actually multitasked preemptively, while normal Windows applications only multitasked cooperatively. So, on an 80386 in particular, Windows 3.0 had a lot going for it even for someone who couldn’t care less about Susan Kare’s slick new icons. It was much, much more than just a pretty face. (Memory management on MS-DOS-based versions of Windows is an extremely complicated subject, one which alone has filled thick technical manuals. This article has presented by no means a complete picture, only the most cursory of overviews intended to convey the importance of Windows 3.0’s central innovation of DPMI. In addition to that innovation, Windows 3.0 and its successors employed plenty of other tricks, many of them making yet more clever use of the 80386’s virtual mode, Intel’s gift that kept on giving. For truly dedicated historians of a technical bent, I recommend a book such as Unauthorized Windows 95 by Andrew Schulman, which does cover memory management under earlier versions of Windows as well, Windows Internals by Matt Pietrek, and/or DOS and Windows Protected Mode by Al Williams.)
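
To put the cooperative multitasking just mentioned in concrete terms: a Windows application of this era only surrendered the processor when it went back to the system for its next message, which made the canonical Win16 message loop double as the scheduler’s yield point. A minimal sketch (the window procedure it feeds is assumed to be defined elsewhere):

```c
/* The canonical Win16 message loop. Under Windows 3.x's cooperative
   multitasking, GetMessage() is also where an application yields: while it
   waits here for input, Windows runs other applications. A program that
   never returns to this loop starves everything else on the machine. */
#include <windows.h>

int RunMessageLoop(void)
{
    MSG msg;
    while (GetMessage(&msg, NULL, 0, 0)) {  /* blocks -- and yields -- here */
        TranslateMessage(&msg);             /* turn raw keystrokes into characters */
        DispatchMessage(&msg);              /* hand the message to the window procedure */
    }
    return (int)msg.wParam;                 /* exit code carried by WM_QUIT */
}
```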

Which isn’t to say that the improved aesthetics weren’t hugely significant in their own right. While the full technical import of Windows 3.0’s new underpinnings would take some time to fully grasp, it was immediately obvious that it was slicker and far more usable than what had come before. Macintosh zealots would continue to scoff, at times with good reason, at the clunkier aspects of the environment, but it unquestionably came far closer than anything yet to that vision which Bill Gates had expressed in an unguarded moment back in 1984 — the vision of “the Mac on Intel hardware.”

A Quick Tour of Windows 3.0


Windows 3.0 really is a dramatic leap compared to what came before. The text-based “MS-DOS Executive” — just the name sounds clunky, doesn’t it? — has been replaced by the “Program Manager.” Applications are now installed, and are represented as icons; we’re no longer forced to scroll through long lists of filenames just to start our word processor. Indeed, the whole environment is much more attractive in general, having finally received some attention from real visual designers like Susan Kare of Macintosh fame.

One area that’s gotten a lot of attention from the standpoint of both usability and aesthetics is the Control Panel. Much of this part of Windows 3.0 is lifted directly from the OS/2 Presentation Manager — with just enough differences introduced to frustrate.

In one of the countless new customization and personalization options, we can now use images as our desktop background.

The help system is extensive and comprehensive. Years before a web browser became a standard Windows component, Windows Help was a full-fledged hypertext reader, a maze of twisty little links complete with embedded images and sounds.

The icons on the desktop still represent only running applications that have been minimized. We would have to wait until Windows 95 for the desktop-as-general-purpose-workspace concept to reach fruition.

For all the aesthetic improvements, the most important leap made by Windows 3.0 is its shattering of the 640 K barrier. When run on an 80286 or 80386, it uses Microsoft’s new DPMI technology to run in those processors’ protected mode, leaving the user and (for the most part) the programmer with just one heap of memory to think about; no more “conventional” and “extended” and “expanded” memory to scratch your head over. It’s difficult to exaggerate what a miracle this felt like after all the years of struggle. Finally, the amount of memory you had in your machine was the amount of memory you had to run Windows and its applications — end of story.

In contrast to all of the improvements in the operating environment itself, the set of standard applets that shipped with Windows 3.0 is almost unchanged since the days of Windows 1.

The Program Manager, like the MS-DOS Executive before it, in a sense is Windows; we close it to exit the operating environment itself and return to the MS-DOS prompt.

A consensus emerged well ahead of Windows 3.0’s release that this was the GUI which corporate America could finally embrace — that the GUI’s time had come, and that this GUI was the one destined to become the standard. One overheated pundit declared that “this is probably the most anticipated product in the history of the world.” Microsoft did everything possible to stoke those fires of anticipation. Rather than aligning the launch with a Comdex show, they opted to put on a glitzy, self-standing, Apple-style media event to mark the beginning of the Windows 3.0 era. In fact, one might even say that they rather outdid the famously showy Apple.

The big rollout took place on May 22, 1990, at New York’s Center City at Columbus Circle. A hundred third-party publishers showed up with Windows 3.0 applications, along with fifty hardware makers who were planning to ship it pre-installed on every machine they sold. Closed-circuit television feeds beamed the proceedings to big-screen theaters in half a dozen other cities in the United States, along with London, Paris, Madrid, Singapore, Stockholm, Milan, and Mexico City. Everywhere standing-room-only crowds clustered, made up of those privileged influence-wielders who could score a ticket to what Bill Gates himself described as “the most extravagant, extensive, and elaborate software introduction ever,” to the tune of a $3 million price tag. Microsoft had tried to go splashy from time to time before, but never had they indulged in anything like this. It was, Gates’s mother reckoned, the “happiest day of Bill’s life” to date.

The industry press was carried away on Microsoft’s river of hype, manifesting on their behalf a messianic complex that was as worthy of Apple as had been the big unveiling. “If you think technology has changed the world in the last few years, hold on to your seats,” wrote one pundit. Gates made the rounds of talk shows like Good Morning America, as Microsoft spent another $10 million on an initial advertising campaign and carpet-bombed the industry with 400,000 demonstration copies of Windows 3.0, sent to anyone who was or might conceivably become a technology taste-maker.

The combination of wall-to-wall hype and a truly compelling product was a winning one; this time, Microsoft wouldn’t have to fudge their Windows sales numbers. When they announced that they had sold 1 million boxed copies of Windows 3.0 in the first four months, each for $80, no one doubted them. “There is nothing that even compares or comes close to the success of this product,” said industry analyst Tim Bajarin. He went on to note in a more ominous vein that “Microsoft is on a path to continue dominating everything in desktop computing when it comes to software. No one can touch or even slow them down.”

Windows 3.0 inevitably won “Best Business Program” for 1990 from the Software Publishers Association, an organization that ran on the hype generated by its members. More persuasive were the endorsements from other sources. For example, after years of skepticism toward previous versions of Windows, the hardcore tech-heads at Byte magazine were effusive in their praise of this latest one, titling their first review thereof simply “Three’s the One.” “On both technical and strategic grounds,” they wrote, “Windows 3.0 succeeds brilliantly. After years of twists and turns, Microsoft has finally nailed this product. Try it. You’ll like it.” PC Computing put an even more grandiose spin on things, straining toward a scriptural note (on the Second Day, Microsoft created the MS-DOS GUI, and it was Good):

When the annals of the PC are written, May 22, 1990, will mark the first day of the second era of IBM-compatible PCs. On that day, Microsoft released Windows 3.0. And on that day, the IBM-compatible PC, a machine hobbled by an outmoded, character-based operating system and 1970s-style programs, was transformed into a computer that could soar in a decade of multitasking graphical operating environments and powerful new applications. Windows 3.0 gets right what its predecessors — Visi On, GEM, earlier versions of Windows, and OS/2 Presentation Manager — got wrong. It delivers adequate performance, it accommodates existing DOS applications, and it makes you believe that it belongs on a PC.

Windows 3.0 sold and sold and sold, like no piece of software had ever sold before, transforming in a matter of months the picture that sprang to most people’s minds when they thought of personal computing from a green screen with a blinking command prompt to a mouse pointer, icons, and windows — thus accomplishing the mainstream computing revolution that Apple had never been able to manage, despite the revolutionary rhetoric of their old “1984” advertisement. Windows became so ubiquitous so quickly that the difficult questions that had swirled around Microsoft prior to its launch — the question of Apple’s legal case and the question of Microsoft’s ongoing relationship with IBM and OS/2 — faded into the background noise, just as Bill Gates had hoped they would.

Sure, Apple zealots and others could continue to scoff, could note that Windows crashed all too easily, that too many things were still implemented clunkily in comparison to MacOS, that the inefficiencies that came with building on such a narrow foundation as MS-DOS meant that it craved far better hardware than it ought to in order to run decently. None of it mattered. All that mattered was that Windows 3.0 was a usable, good-enough GUI that ran on cheap commodity hardware, was free of the worst drawbacks that came with MS-DOS, and had plenty of software available for it — enough native software, in fact, to make its compatibility with vanilla MS-DOS software, once considered so vital for any GUI hoping to make a go of it, almost moot. The bet Bill Gates had first put down on something called the Interface Manager before the IBM PC even officially existed, which he had doubled down on again and again only to come up dry every time, had finally paid off on a scale even he hadn’t ever imagined. Microsoft would sell 2.75 million copies of Windows 3.0 by the end of 1990 — and then the surge really began. Sales hit 15 million copies by the end of 1991. And yet if anything such numbers underestimate its ubiquity at the end of its first eighteen months on the market. Thanks to widespread piracy which Microsoft did virtually nothing to prevent, estimates were that at least two copies of Windows had been installed for every one boxed copy that had been purchased. Windows was the new standard for mainstream personal computing in the United States and, increasingly, all over the world.

At the Comdex show in November of 1990, Bill Gates stepped onstage to announce that Windows 3.0 had already gotten so big that no general-purpose trade show could contain it. Instead Microsoft would inaugurate the Windows World Exposition Conference the following May. Then, after that and the other big announcements were all done, he lapsed into a bit of uncharacteristic (albeit carefully scripted) reminiscing. He remembered coming onstage at the Fall Comdex of seven years before to present the nascent first version of Windows, infamously promising that it would be available by April of 1984. Everyone at that show had talked about how relentlessly Microsoft laid on the Windows hype, how they had never seen anything quite like it. Yet, looking back, it all seemed so unbearably quaint now. Gates had spent all of an hour preparing his big speech to announce Windows 1.0, strolled onto a bare stage carrying his own slide projector, and had his father change the slides for him while he talked. Today, the presentation he had just completed had consisted of four big screens, each featuring people with whom he had “talked” in a carefully choreographed one-man show — all in keeping with the buzzword du jour of 1990, “multimedia.”

The times, they were indeed a-changing. An industry, a man, a piece of software, and, most of all, a company had grown up. Gates left no doubt that it was only the beginning, that he intended for Microsoft to reign supreme over the glorious digital future.

All these new technologies await us. Unless they are implemented in standard ways on standard platforms, any technical benefits will be wasted by the further splintering of the information base. Microsoft’s role is to move the current generation of PC software users, which is quickly approaching 60 million, to an exciting new era of improved desktop applications and truly portable PCs in a way that keeps users’ current applications, and their huge investment in them, intact. Microsoft is in a unique position to unify all those efforts.

Once upon a time, words like these could have been used only by IBM. But now Microsoft’s software, not IBM’s hardware, was to define the new “standard platform” — the new safe choice in personal computing. The PC clone was dead. Long live the Wintel standard.

(Sources: the books The Making of Microsoft: How Bill Gates and His Team Created the World’s Most Successful Software Company by Daniel Ichbiah and Susan L. Knepper, Hard Drive: Bill Gates and the Making of the Microsoft Empire by James Wallace and Jim Erickson, Gates: How Microsoft’s Mogul Reinvented an Industry and Made Himself the Richest Man in America by Stephen Manes and Paul Andrews, Computer Wars: The Fall of IBM and the Future of Global Technology by Charles H. Ferguson and Charles R. Morris, and Apple Confidential 2.0: The Definitive History of the World’s Most Colorful Company by Owen W. Linzmayer; New York Times of March 18 1989 and July 22 1989; PC Magazine of February 12 1991; Byte of June 1990 and January 1992; InfoWorld of May 20 1991; Computer Gaming World of June 1991. Finally, I owe a lot to Nathan Lineback for the histories, insights, comparisons, and images found at his wonderful online “GUI Gallery.”)

Footnotes
1 The 80486 was far more efficient than its predecessor, boasting roughly twice the throughput when clocked at the same speed. But, unlike the 80286 and 80386, it didn’t sport any new operating modes or fundamentally new capabilities, and thus didn’t demand any special consideration from software like Windows/386 and Windows 3.0 that was already utilizing the 80386 to its full potential.
2 This wasn’t quite the “32-bit flat memory model” which Microsoft had explicitly promised Windows would never include in the joint statement with IBM. That referred to an addressing mode unique to the 80386 and its successors, which allowed them to access up to 4 GB of memory in a very flexible way. Having been written to support the 80286, Windows 3.0, even in 386 enhanced mode, was still limited to 16 MB of memory, and had to use a somewhat more cumbersome form of addressing known as a segmented memory model. Still, it was close enough that it arguably went against the spirit of the statement, something that wouldn’t be lost on IBM.
3 Memory management on MS-DOS-based versions of Windows is an extremely complicated subject, one which alone has filled thick technical manuals. This article has by no means presented a complete picture, only the most cursory of overviews intended to convey the importance of Windows 3.0’s central innovation of DPMI. In addition to that innovation, though, Windows 3.0 and its successors employed plenty of other tricks, many of them making yet more clever use of the 80386’s virtual mode, Intel’s gift that kept on giving. For truly dedicated historians of a technical bent, I recommend books such as Unauthorized Windows 95 by Andrew Schulman (which does cover memory management under earlier versions of Windows as well), Windows Internals by Matt Pietrek, and/or DOS and Windows Protected Mode by Al Williams.
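To make the address-space figures in footnotes 2 and 3 a little more concrete, here is a minimal back-of-the-envelope sketch in C — my own illustration, not code drawn from Windows or from any of the books just mentioned — showing how each addressing scheme’s ceiling falls out of the width of the addresses it uses:

#include <stdio.h>

/* Illustrative arithmetic only: the address-space ceilings discussed in
   footnote 2, derived from the width of the addresses each scheme uses. */
int main(void)
{
    /* Real-mode 8086/8088 addressing: linear = segment * 16 + offset,
       a roughly 20-bit result, i.e. about 1 MB of addressable memory. */
    unsigned long long real_mode_bytes = 0xFFFFull * 16 + 0xFFFF + 1;

    /* 80286 protected mode: segment descriptors hold a 24-bit base, and
       offsets within a segment are 16 bits wide. Physical ceiling:
       2^24 = 16 MB, reachable only 64 KB at a time. */
    unsigned long long protected_286_bytes = 1ull << 24;
    unsigned long long segment_286_bytes   = 1ull << 16;

    /* 80386 flat model: a single 32-bit linear address space, 2^32 = 4 GB. */
    unsigned long long flat_386_bytes = 1ull << 32;

    printf("Real mode:      %llu bytes (about 1 MB)\n", real_mode_bytes);
    printf("286 protected:  %llu bytes (16 MB), in %llu-byte segments\n",
           protected_286_bytes, segment_286_bytes);
    printf("386 flat model: %llu bytes (4 GB)\n", flat_386_bytes);
    return 0;
}

The point of the exercise is simply that Windows 3.0, having been written against the 80286’s segmented scheme, stayed within the 16 MB, 64-KB-at-a-time world even in 386 enhanced mode, rather than using the 4 GB flat model discussed in footnote 2.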
 
 


Doing Windows, Part 6: Look and Feel

From left, Dan Fylstra of VisiCorp, Bill Gates of Microsoft, and Gary Kildall of Digital Research in 1984. As usual, Gates looks rumpled, high-strung, and vaguely tortured, while Kildall looks polished, relaxed, and self-assured. (Which of these men would you rather chat with at a party?) Pictures like these perhaps reveal one of the key reasons that Gates consistently won against more naturally charismatic characters like Kildall: he personally needed to win in ways that they did not.

In the interest of clarity and concision, I’ve restricted this series of articles about non-Apple GUI environments to the efforts of Microsoft and IBM, making an exception to that rule only for VisiCorp’s Visi On, the very first product of its type. But, as I have managed to acknowledge in passing, those GUIs hardly constituted the sum total of the computer industry’s efforts in this direction. Among the more impressive and prominent of what we might label the alternative MS-DOS GUIs was a product from none other than Gary Kildall and Digital Research — yes, the very folks whom Bill Gates once so slyly fleeced out of a contract to provide the operating system for the first IBM PC.

To his immense credit, Kildall didn’t let the loss of that once-in-a-lifetime opportunity get him down for very long. Digital Research accepted the new MS-DOS-dominated order with remarkable alacrity, and set about making the best of things by publishing system software, such as the multitasking Concurrent DOS, which tried to do the delicate dance of improving on MS-DOS while maintaining compatibility. In the same spirit, they made a GUI of their own, called the “Graphics Environment Manager” — GEM.

After futzing around with various approaches, the GEM team found their muse on the day in early 1984 when team-member Darrell Miller took Apple’s new Macintosh home to show his wife: “Her eyes got big and round, and she hates computers. If the Macintosh gets that kind of reaction out of her, this is powerful.” Miller is blunt about what happened next: “We copied it exactly.” When they brought their MacOS clone to the Fall 1984 Comdex, Steve Jobs expressed nothing but approbation. “You did a great job!” he said. No one from Apple seemed the slightest bit concerned at this stage about the resemblance to the Macintosh, and GEM hit store shelves the following spring as by far the most elegant and usable MS-DOS GUI yet.

A few months later, though, Apple started singing a very different tune. In the summer of 1985, they sent a legal threat to Digital Research which included a detailed list of all the ways that they believed GEM infringed on their MacOS copyrights. Having neither the stomach nor the cash for an extended court battle and fearing a preliminary injunction which might force them to withdraw GEM from the market entirely, Digital Research caved without a fight. They signed an agreement to replace the current version of GEM with a new one by November 15, doing away with such distinctive and allegedly copyright-protected Macintosh attributes as “the trash-can icon, the disk icons, and the close-window button in the upper-left-hand corner of a window.” They also agreed to an “undisclosed monetary settlement,” and to “provide programming services to Apple at a reduced rate.”

Any chance GEM might have had to break through the crowded field of MS-DOS GUIs was undone by these events. Most of the third-party developers Digital Research so desperately needed were unnerved by the episode, abandoning any plans they might have hatched to make native GEM applications. And so GEM, despite being vastly more usable than the contemporaneous Microsoft Windows even in its somewhat bowdlerized post-agreement form, would go on to become just another also-ran in the GUI race.[1]

For the industry at large, the GEM smackdown was most significant as a sign of changing power structures inside Apple — changes which carried with them a new determination that others shouldn’t be allowed to rip off all of the Mac’s innovations. The former Pepsi marketing manager John Sculley was in the ascendant at Apple by the summer of 1985, Steve Jobs already being eased out the door. Sculley had been taught by the Cola Wars that a product’s secret formula was everything, and had to be protected at all costs. And the Macintosh’s secret formula was its beautiful interface; without it, it was just an overpriced chunk of workmanlike hardware — a bad joke when set next to a better, cheaper Motorola 68000-based computer like the new Commodore Amiga. The complaint against Digital Research was a warning shot to an industry that Sculley believed had gotten far too casual about throwing around phrases like “Mac-like.” “Apple is going after everybody,” warned one fearful software executive to the press. The relationship between Microsoft and Apple in particular was about to get a whole lot more complicated.

Said relationship had been a generally good one during the years when Steve Jobs was calling many of Apple’s shots. Jobs and Bill Gates, dramatically divergent in countless ways but equally ambitious, shared a certain esprit de corps born of having been a part of the microcomputer industry since before there was a microcomputer industry. Jobs genuinely appreciated his counterpart’s refusal to frame business computing as a zero-sum game between the Macintosh and the MS-DOS standard, even when provoked by agitprop like Apple’s famous “1984” Super Bowl advertisement. Instead Gates, contrary to his established popular reputation as the ultimate zero-sum business warrior, supported Apple’s efforts as well as IBM’s with real enthusiasm: signing up to produce Macintosh software two full years before the finished Mac was released, standing at Jobs’s side when Apple made major announcements, coming to trade shows conspicuously sporting a Macintosh tee-shirt. All indications are that the two truly liked and respected one another. For all that Apple and Microsoft through much of these two men’s long careers would be cast as the yin and yang of personal computing — two religions engaged in the most righteous of holy wars — they would have surprisingly few negative words to say about one another personally down through the years.

But when Steve Jobs decided or was forced to submit his resignation letter to Apple on September 17, 1985, trouble for Microsoft was bound to follow. John Sculley, the man now charged with cleaning up the mess Jobs had supposedly made of the Macintosh, enjoyed nothing like the same camaraderie with Bill Gates. He and his management team were openly suspicious of Microsoft, whose Windows was already circulating widely in beta form. Gates and others at Microsoft had gone on the record repeatedly saying they intended for Windows and the Macintosh to be sufficiently similar that they and other software developers would be able to port applications in short order between the two. Few prospects could have sounded less appealing to Sculley. Apple, whose products then as now enjoyed the highest profit margins in the industry thanks to their allure as computing’s hippest luxury brand, could see their whole business model undone by the appearance of cheap commodity clones that had been transformed by the addition of Windows into Mac-alikes. Of course, one look at Windows as it actually existed in 1985 could have disabused Sculley of the notion that it was likely to win any converts among people who had so much as glanced at MacOS. Still, he wasn’t happy about the idea of the Macintosh losing its status, now or in the future, as the only GUI environment that could serve as a true, comprehensive solution to all of one’s computing needs. So, within weeks of Jobs’s departure, feeling his oats after having so thoroughly cowed Digital Research, he threatened to sue Microsoft as well for copying the “look and feel” of the Macintosh in Windows.

He really ought to have thought things through a bit more before doing so. Threatening Bill Gates was always a dangerous game to play, and it was sheer folly when Gates had the upper hand, as he largely did now. Apple was at their lowest ebb of the 1980s when they tried to tell Microsoft that Windows would have to be cancelled or radically redesigned to excise any and all similarities to the Macintosh. Sales of the Mac had fallen to some 20,000 units per month, about one-fifth of Apple’s pre-launch projections for this point. The stream of early adopters with sufficient disposable income to afford the pricey gadget had ebbed away, and other potential buyers had started asking what you could really do with a Macintosh that justified paying two or three times as much for it as for an equivalent MS-DOS-based computer. Aldus PageMaker, the first desktop-publishing package for the Mac, had been released the previous summer, and would eventually go down in history as the product that, when combined with the Apple LaserWriter printer, saved the platform by providing a usage scenario that ugly old MS-DOS clearly, obviously couldn’t duplicate. But the desktop-publishing revolution would take time to show its full import. In the meantime, Apple was hard-pressed, and needed Microsoft — one of the few major publishers of business software actively supporting the Mac — far too badly to go around issuing threats to them.

Gates responded to Sculley’s threat with several of his own. If Sculley followed through with a lawsuit, Gates said, he’d stop all work at Microsoft on applications for the Macintosh and withdraw those that were already on store shelves, treating business computing henceforward as exactly the zero-sum game which he had never believed it to be in the past. This was a particularly potent threat in light of Microsoft’s new Excel spreadsheet, which had just been released to rave reviews and already looked likely to join PageMaker as the leading light among the second generation of Mac applications. In light of the machine’s marketplace travails, Apple was in no position to toss aside a sales driver like that one, the first piece of everyday Mac business software that was not just as good as but in many ways quite clearly better than equivalent offerings for MS-DOS. Yet Gates wouldn’t stop there. He would also, he said, refuse to renew Apple’s license to use Microsoft’s BASIC on their Apple II line of computers. This was a serious threat indeed, given that the aged Apple II line was the only thing keeping Apple as a whole afloat as the newer, sexier Macintosh foundered. Duly chastised, Apple backed down quickly — whereupon Gates, smelling blood in the water, pressed his advantage relentlessly, determined to see what else he could get out of finishing the fight Sculley had so foolishly begun.

One ongoing source of frustration between the two companies, dating back well into the days of Steve Jobs’s power and glory, was the version of BASIC for the Mac which Microsoft had made available for purchase on the day the machine first shipped. In the eyes of Apple and most of their customers, the mere fact of its existence on a platform that wasn’t replete with accessible programming environments was its only virtue. In practice, it didn’t work all that differently from Microsoft’s Apple II BASIC, offering almost no access to the very things which made the Macintosh the Macintosh, like menus, windows, and dialogs. A second release a year later had improved matters somewhat, but nowhere near enough in most people’s view. So, Apple had started work on a BASIC of their own, to be called simply MacBASIC, to supersede Microsoft’s. Microsoft BASIC for the Macintosh was hardly a major pillar of his company’s finances, but Bill Gates was nevertheless bothered inordinately by the prospect of it being cast aside. “Essentially, since Microsoft started their company with BASIC, they felt proprietary towards it,” speculates Andy Hertzfeld, one of the most important of the Macintosh software engineers. “They felt threatened by Apple’s BASIC, which was a considerably better implementation than theirs.” Gates said that Apple would have to kill their own version of BASIC and — just to add salt to the wound — sign over the name “MacBASIC” to Microsoft if they wished to retain the latter’s services as a Mac application developer and retain Microsoft BASIC on the Apple II.

And that wasn’t even the worst form taken by Gates’s escalation. Apple would also have to sign what amounted to a surrender document, granting Microsoft the right to create “derivative works of the visual displays generated by Apple’s Lisa and Macintosh graphic-user-interface programs.” The specific “derivative works” covered by the agreement were the user interfaces already found in Microsoft Windows for MS-DOS and five Microsoft applications for the Macintosh, including Word and Excel. The agreement provided Microsoft with nothing less than a “non-exclusive, worldwide, royalty-free, perpetual, non-transferable license to use those derivative works in present and future software programs, and to license them to and through third parties for use in their software programs.” In return, Microsoft would promise only to support Word and Excel on the Mac until October 1, 1986 — something they would certainly have done anyway. Gates was making another of those deviously brilliant tactical moves that were already establishing his reputation as the computer industry’s most infamous villain. Rather than denying that a “visual display” could fall under the domain of copyright, as many might have been tempted to do, he would rather affirm the possibility while getting Apple to grant Microsoft an explicit exception to being bound by it. Thus Apple — or, for that matter, Microsoft — could continue to sue MacOS’s — and potentially Windows’s — competitors out of existence while Windows trundled on unmolested.

Sculley called together his management team to discuss what to do about this Apple threat against Microsoft that had suddenly boomeranged into a Microsoft threat against Apple. Most at the meeting insisted that Gates had to be bluffing, that he would never cut off several extant revenue streams just to spite Apple and support this long-overdue Windows product of his which had been an industry laughingstock for so long. But Sculley wasn’t sure; he kept coming back to the fact that Microsoft could undoubtedly survive without Apple, but Apple might not be able to survive without Microsoft — at least not right now, given the Mac’s current travails. “I’m not ready to bloody the company,” he said, and signed the surrender document two days after Windows 1.01 first appeared in its boxed form at the Fall 1985 Comdex show’s Microsoft Roast. His tone toward Gates now verged on pleading: “What I’m really asking for, Bill, is a good relationship. I’m glad to give you the rights to this stuff.”

After the full scale of what John Sculley had given away to Bill Gates became clear, Apple fans started drawing pointed comparisons between Sculley and Neville Chamberlain. As it happened, Sculley’s version of “peace for our time” would last scarcely longer than Chamberlain’s. And as for Gates… well, plenty of Apple fans would indeed soon be calling him the Adolf Hitler of the computer industry in the midst of plenty of other overheated rhetoric.

Bill Gates wrote a jubilant email to eleven colleagues at Microsoft’s partner companies, saying that he had “received a release from Apple for any possible copyright, trade-secret, or patent issue relating to our products, including Windows.” The people at Apple were less jubilant. “Everyone was somewhat disgusted over [the agreement],” remembers Donn Denman, the chief programmer of Apple’s much superior but shitcanned MacBASIC. Sculley could only say to him and his colleagues that “it was the right decision for the company. It was a business decision.” They didn’t find him very convincing. The bad feelings engendered by the agreement would never entirely go away, and the relationship between Apple and Microsoft would never be quite the same again — not even when Excel became one of the prime drivers of something of a Macintosh Renaissance in the following year.

We jump forward now to March 17, 1988, by which time the industry had changed considerably. Microsoft was still entangled with IBM in the development of OS/2 and its Presentation Manager, but was also continuing to push Windows, which had come out in a substantially revised version 2 some six months earlier. The Macintosh, meanwhile, had carved out a reasonable niche for itself as a tool for publishers and creative professionals of various stripes, even as the larger world of business-focused personal computing continued to run on MS-DOS.

Sitting in his office that day, Bill Gates agreed to take a call from a prominent technology journalist, who asked him if he had a comment to make about the new lawsuit from Apple against Microsoft. “Lawsuit? What lawsuit?” Gates asked. He had just met with Sculley the day before to discuss Microsoft’s latest Mac applications. “He never mentioned it to me. Not one word,” said Gates to the reporter on the other end of the line.

Sculley, it seemed, had decided not to risk losing his nerve again. Apple had gone straight to filing their lawsuit in court, without giving Microsoft so much as a warning, much less a chance to negotiate a remedy.[2] It appeared that the latest version of Microsoft’s GUI environment for MS-DOS, which with its free-dragging and overlapping windows hewed much closer to the Macintosh than its predecessor, had both scared and enraged Sculley to such an extent that he had judged this declaration of war to be his only option. “Windows 2 is an unconscionable ripoff of MacOS,” claimed Apple. They demanded $50,000 per infringement per unit of Windows sold — adding up to a downright laughable total of $4.5 billion by their current best estimate — and the “impoundment and destruction” of all extant or future copies of Windows. Microsoft replied that Apple had signed over the rights to the Mac’s “visual displays” for use in Windows in 1985, and, even if they hadn’t, such things weren’t really copyrightable anyway.

So, who had the right of this thing? As one might expect, the answer to that question is far more nuanced than the arguments which either side would present in court. Writing years after the lawsuit had passed into history but repeating the arguments he had once made in court, Tandy Trower, the Windows project leader at Microsoft from 1985 to 1988, stated that “the allegation clearly had no merit as I had never intended to copy the Macintosh interface, was never given any directive to do that, and never directed my team to do that. The similarities between the two products were largely due to the fact that both Windows and Macintosh had common ancestors, that being many of the earlier windowing systems, such as those like Alto and Star that were created at Xerox PARC.” This is, to put it bluntly, nonsense. To deny the massive influence of the Macintosh on Windows is well-nigh absurd — although, I should be careful to say, I have no reason to believe that Trower makes his absurd argument out of anything but ignorance here. By the time he arrived on the Windows team, Apple’s implementation of the GUI had already been so thoroughly internalized by the industry in general that the huge strides it had made over the Xerox PARC model were being forgotten, and the profoundly incorrect and unfair meme that Apple had simply copied Xerox’s work and called it a day was already taking hold.

The people at Xerox PARC had indeed originated the idea of the GUI, but, as playing with a Xerox Alto emulator will quickly reveal, hadn’t been able to take it anywhere close to the Macintosh’s place of elegant, intuitive usability. By the time the Xerox GUI made its one appearance as a commercial product, in the form of the Xerox Star office system, it had actually regressed in at least one way even as it progressed in many others: overlapping windows, which had been possible in Xerox PARC’s Smalltalk environment, were not allowed on the Star. Tellingly, the aspect of Windows 1 which attracted the most derision back in the day, and which still makes it appear so hapless today, is a similar rigid system of tiled windows. (The presence of this retrograde-seeming element was largely thanks to Scott MacGregor, who arrived at Microsoft to guide the Windows project after having been one of the major architects of the Star.) Meanwhile, as I also noted in my little tour of Windows 1 in a previous article, many of those aspects of it which do manage to feel natural and intuitive today — such as the drop-down menus — are those that work more like the Macintosh than anything developed at Xerox PARC. In light of this reality, Microsoft’s GUI would only hew closer to the Mac model as time went on, for the incontrovertible reason that the Mac model was just better for getting real stuff done in the real world.

And there are plenty of other disconcerting points of similarity between early versions of MacOS and early versions of Windows. Right from the beginning, Windows 1 shipped with a suite of applets — a calculator, a “control panel” for system settings, a text editor that went by the name of “notepad,” etc. — that were strikingly similar to those included in MacOS. Further, if other members of the Windows team itself are to be believed, Microsoft’s Neil Konzen, a programmer intimately familiar with the Macintosh, duplicated some of MacOS’s internal structures so closely as to introduce some of the same bugs. In short, to believe that the Macintosh wasn’t the most important influence on the Windows user interface by far, given not only the similarities in the finished product but the knowledge that Microsoft had been working daily with the evolving Macintosh since January of 1982, is to actively deny reality out of either ignorance or some ulterior motive.

Which isn’t to say that Microsoft’s designers had no ideas of their own. In fact, some of those ideas are still in place in current versions of Windows. To take perhaps the most immediately obvious example, Windows then and now places its drop-down menus at the top of the windows themselves, while the Macintosh has a menu bar at the top of the screen which changes to reflect the currently selected window.[3] And Microsoft’s embrace of the two-button mouse, contrasted with Apple’s stubborn loyalty to the one-button version of same, has sparked constant debate for decades. Still, differing details like these should be seen as exactly that in light of all the larger-scale similarities.

And yet just acknowledging that Windows was, shall we say, strongly influenced by MacOS hardly got to the bottom of the 1988 case. There was still the matter of that November 1985 agreement, which Microsoft was now waving in the face of anyone in the legal or journalistic professions who would look at it. The bone of contention between the two companies here was whether the “visual displays” of Windows 2 as well as Windows 1 were covered by the agreement. Microsoft naturally contended that they were; Apple contended that the Windows 2 interface had changed so much in comparison to its predecessor — coming to resemble the Macintosh even more in the process — that it could no longer be considered one of the specific “derivative works” to which Apple had granted Microsoft a license.

We’ll return to the court’s view of this question shortly. For now, though, let’s give Apple the benefit of the doubt as we continue to explore the full ramifications of their charges against Microsoft. The fact was that if one accepted Apple’s contention that Windows 2 wasn’t covered by the agreement, the questions surrounding the case grew more rather than less momentous. Could and should one be able to copyright the “look and feel” of a user interface, as opposed to the actual code used to create it? In pressing their claim, Apple was relying on an amorphous, under-explicated area of copyright law known as “visual copyright.”

In terms of computer software, the question of the bounds of visual copyright had been most thoroughly explored in the context of videogames. Back in 1980, Midway, a major producer of standup-arcade games, had sued a much smaller company called Dirkschneider for producing a clone of their popular game Galaxian. The judge in that case ruled in favor of Midway, formulating a new legal standard called the “Ten-Foot Rule”: “If a reasonable person could not, at ten feet, tell the difference between two competitive products, then there was cause to believe an infringement was occurring.” Atari, the biggest videogame producer of all, then proceeded to use this precedent to pressure dozens of companies into withdrawing their clones of Atari games — in arcades, on game consoles, and on computers — from the market.

Somewhat later, in 1985, Brøderbund Software sued Kyocera for bundling with their printers an application called Printmaster, a thinly veiled clone of Brøderbund’s own hugely popular Print Shop package for making signs, greeting cards, and banners. They won their case the following year, with Judge William H. Orrick stating that Brøderbund’s copyright did indeed cover “the overall appearance, structure, and sequence” of screens in their software, and that Kyocera had thus infringed on same. Brøderbund’s Gary Carlston called the ruling “historic”: “If we don’t have copyright protection for our products, then it is going to be significantly more difficult to maintain a competitive advantage.” Encouraged by this ruling, in 1987 a maker of telecommunications software called Digital Communications Associates sued a company called Softklone Corporation — their name certainly didn’t help their cause — for copying the status display of their terminal software, and won their case as well. The Ten-Foot Rule, it seemed, could be successfully applied to software other than games. Both of these cases were cited by Apple’s lawyers in their own suit against Microsoft.

Brøderbund’s Print Shop side-by-side with Kyocera’s Printmaster.

Yet the Ten-Foot Rule, at least when applied to general-purpose software rather than games, struck many as deeply problematic. One of the most important advantages of a GUI was the way it made diverse types of software from diverse developers work and look the same, thereby keeping the user from having to relearn how to do the same basic tasks over and over again. What sort of chaos would follow if people started suing each other willy-nilly over this much-needed uniformity? And what did the Ten-Foot Rule mean for the many GUI environments, on MS-DOS and other platforms, that looked so similar to one another and to MacOS? That, of course, was the real crux of the matter for Microsoft and Apple as they faced one another in court.

The debate over the Ten-Foot Rule and its potential ramifications wasn’t actually a new one, having already been taken up in public by the software industry before Apple filed their lawsuit. Fully thirteen months before that momentous day, Larry Tesler, an Apple executive, clashed heatedly with Bill Gates over this very issue at a technology conference. Tesler insisted that there was no problem inherent in applying the Ten-Foot Rule to operating systems and operating environments: “When someone comes up with a very good and popular look and feel, as we’ve done with the Macintosh, then they can make that available by licensing [it] to other people.”

But Gates was having none of this:

There’s no control of look and feel. I don’t know anybody who has asserted that things like drop-down menus and dialog boxes and just those general form-type aspects are subject to this look-and-feel stuff. Certainly it’s our view that the consistency of the user interface has become a very spreading thing, and that it’s open, generic technology. All of these approaches — how you click on [menu] bars, and certainly all those user-interface techniques and windows — there’s absolutely no restriction in any way on how people use those.

He thus ironically argued against the very premise of the 1985 agreement between Apple and Microsoft — that Apple had created a “visual display” subject to copyright protection, to which they were now granting Microsoft a license for certain products. But then, Gates seldom let deeply-held philosophical beliefs interfere with his pursuit of short-term advantage. In this latest debate as well, Gates’s arguments were undoubtedly self-serving, but they were no less valid in this case for being so. The danger he could point to if this sort of thing should spread was that of every innovative new application seeking copyright protection not just for its code but for the very ideas that made it up. Because form — i.e., look and feel — ideally followed function in software engineering. What would have happened if VisiCorp had been able to copyright the look and feel of the first spreadsheet? (VisiCalc and Lotus 1-2-3 looked pretty much identical from ten feet.) If WordStar had been able to copyright the look and feel of the word processor? If, to choose a truly absurd example, the first individual to devise a command-line interface back in the mists of time had been able to copyright that? It wasn’t at all clear where the lines could be drawn once the law started down this slippery slope. If Apple owned the set of ideas and approaches that everyone now thought of as the GUI in general, where did that leave the rest of the industry?

For this reason, Apple’s lawsuit, when it came, was greeted with deep concern even by many of those who weren’t particularly friendly with Microsoft. “Although Apple has a right to protect the results of its development and marketing efforts,” said the respected Silicon Valley pundit Larry Magid, “it should not try to thwart the obvious direction of the industry.” “If Apple is trying to push this as far as they appear to be trying to push it,” said Dan Bricklin of VisiCalc fame, “this is a sad day for the software industry in America.” More surprisingly, MacOS architect Andy Hertzfeld said that “in general, it’s a horrible thing. Apple could really end up hurting itself.” Most surprisingly of all, even Steve Jobs, now running a new company called NeXT, found Apple’s arguments as dangerous as they were unconvincing: “When we were developing the Macintosh, we kept in mind a famous quote of Picasso: ‘Good artists copy, great artists steal.’ What do I think of the suit? I personally don’t understand it. Can I copyright gravity? No.”

Interestingly, the lawyers pressing the lawsuit on Apple’s behalf didn’t ask for a preliminary injunction that would have forced Microsoft to withdraw Windows from the market. Some legal watchers interpreted this fact as a sign that they themselves weren’t certain about the real merits of their case, and hoped to win it as much through bluster as finely-honed legal arguments. Ditto Apple’s request that the eventual trial be decided by a jury of ordinary people who might be prone to weigh the case based on everyday standards of “fairness,” rather than by a judge who would be well-versed in the niceties of the law and the full ramifications of a verdict against Microsoft.

At this point, and especially given those ramifications, one feels compelled to ask just why Apple chose at this juncture to embark on such a lengthy, expensive, and fraught enterprise as a lawsuit against the company that remained the most important single provider of serious business software for the Macintosh, a platform whose cup still wasn’t exactly running over with such things. By way of an answer, we should consider that John Sculley was as proud a man as most people who rise to his elevated status in business tend to be. The belief, widespread both inside and outside of Apple, that he had let Bill Gates bully, outsmart, and finally rob him blind back in 1985 had to rankle him badly. In addition, Apple in general had long nursed a grievance, unproductive but understandable, against all the outsiders who had copied the interface they had worked so long and hard to perfect; thus those threatened lawsuits against Digital Research and Microsoft all the way back in 1985. A wiser leader might have told his employees to take their competitors’ imperfect copying as proof of Apple’s superiority, might have exhorted them to look toward their next big innovation rather than litigate their innovations of the past. But, at least on March 17, 1988, John Sculley wasn’t that wiser leader. Thus this lawsuit, dangerous not just to Apple and Microsoft but to their entire industry.

Bill Gates, for his part, remained more accustomed to bullying than being bullied. It had been spelled out for him right there in the court filing that a loss to Apple would almost certainly mean the end of Windows, the operating environment which was quite possibly the key to Microsoft’s future. Even widespread fear of such an event, he realized, could be devastating to Windows’s — and thus to Microsoft’s — prospects. So, he struck back fiercely so as to leave no doubt where he stood. Microsoft filed a counter-suit in April of 1988, accusing Apple of breaking the 1985 agreement and of filing their own lawsuit in bad faith, in the hope of creating fear, uncertainty, and doubt around Windows and thus “wrongfully inhibiting” its commercial future. Adding weight to their argument that the original lawsuit was a form of business competition by other means was the fact that Apple was being oddly selective in choosing whom to sue over the alleged copyright violations. Asked why they weren’t going after other products just as similar to MacOS as Windows, such as IBM’s forthcoming OS/2 Presentation Manager, Apple refused to comment.

The first skirmishes took place in the press rather than a courtroom: Sculley accusing Gates of having tricked him into signing the 1985 agreement, Gates saying a contract was a contract, and what sort of a chief executive let himself be tricked anyway? The exchanges just kept getting uglier from there. The technology journalists, naturally, loved every minute of it, while the software industry was thrown into a tizzy, wondering what this would mean for Windows just as it finally seemed to be gaining some traction. Philippe Kahn, CEO of Borland, described the situation in colorful if non-politically-correct language: it was like “waking up and finding your partner might have AIDS.”

The court case marched forward much more slowly than the tabloid war of words. Gates stated in a sworn deposition that “from a user’s perspective, the visual displays which appear in Windows 2 are virtually identical to those which appear in Windows 1.” “This assertion,” Apple replied, “is contradicted by even the most casual observation of the two products.” On March 18, 1989, Judge William Schwarzer of the Federal District Court in San Francisco marked the one-year anniversary of the case by ruling against Microsoft on this issue, stating that only those attributes of Windows 2 which had also existed in Windows 1 were covered by the 1985 agreement. This meant most notably that the newer GUI’s system of overlapping windows stood outside the boundaries of that document, and thus that, as the judge put it, the 1985 agreement alone “was not a complete defense” for Microsoft. It did not, he ruled, give Microsoft the right “to develop future versions of Windows as it pleases. What Microsoft received was a license to use the visual displays in the named software products as they appeared to the user in November 1985. The displays [of Windows 1 and Windows 2] are fundamentally different.” Microsoft’s stock price promptly plummeted by 27 percent. It was an undeniable setback. “Microsoft’s major defense has been shot down,” crowed Apple’s attorneys.

“Major” was perhaps not the right choice of adjectives, but certainly Microsoft’s simplest possible form of defense had proved insufficient to bail them out. It seemed that total victory could be achieved now only by invalidating the whole notion of visual copyright which underlay both the 1985 agreement and Apple’s lawsuit based on its violation. That meant a long, tough slog at best. And with Windows 3 — the version that Microsoft was convinced would finally be the breakthrough version — getting closer and closer to release and looking more and more like the Macintosh all the while, the stakes were higher than ever.

The question of look and feel and visual copyright as applied to software had implications transcending even the fate of Windows or either company. If Apple’s suit succeeded, it would transform the software business overnight, making it extremely difficult to borrow or build on the ideas of others in the way that software had always done in the past. Bill Gates was an avid student of business history. As was his wont, he now looked back to compare his current plight with that of an earlier titan of industry. Back in 1903, just as the Ford Motor Company was getting off the ground, Henry Ford had been hit with a lawsuit from a group of inventors claiming to own a patent on the very concept of the automobile. He had battled them for years, vowing to fight on even after losing in open court in 1909: “There will be no let-up in the legal fight,” he declared on that dark day. At last, in 1911, he won the case on appeal — winning it not only for Ford Motor Company but for the future of the automobile industry as a field of open competition. His own legal war had similar stakes, Gates believed, and he and Microsoft intended to prosecute it in equally stalwart fashion — to win it not just for themselves but for the future of the software industry. This was necessary, he wrote in a memo, “to help set the boundaries of where copyrights should and should not be applied. We will prevail.”

(Sources: the books The Making of Microsoft: How Bill Gates and His Team Created the World’s Most Successful Software Company by Daniel Ichbiah and Susan L. Knepper, Hard Drive: Bill Gates and the Making of the Microsoft Empire by James Wallace and Jim Erickson, Gates: How Microsoft’s Mogul Reinvented an Industry and Made Himself the Richest Man in America by Stephen Manes and Paul Andrews, and Apple Confidential 2.0: The Definitive History of the World’s Most Colorful Company by Owen W. Linzmayer; Wall Street Journal of September 25 1987; Creative Computing of May 1985; InfoWorld of October 7 1985 and October 20 1986; MacWorld of October 1993; New York Times of March 18 1988 and March 18 1989.)

Footnotes
1 Reworked to run under a 68000 architecture, GEM would enjoy some degree of sustained success in another realm: not as an MS-DOS-hosted GUI but as the GUI hosted in the Atari ST’s ROM. In this form, it would survive well into the 1990s.
2 Apple sued Hewlett-Packard at the same time, for an application called NewWave which ran on top of Windows and provided many of the Mac-like features, such as icons representing programs and disks and a desktop workspace, which Windows 2 alone still lacked. But that lawsuit would always remain a sideshow in comparison to the main event to whose fate its own must inevitably be tied. So, in the interest of that aforementioned clarity and concision, we won’t concern ourselves with it here.
3 Both Microsoft and Apple have collected reams of data which they claim prove that their approach is the best one. I do suspect, however, that the original impetus can be found in the fact that MacOS was originally a single-tasking operating system, meaning that only one menu bar would need to be available at any one time. Windows, on the other hand, was designed as a multitasking environment from the start.
 


Doing Windows, Part 5: A Second Try

The beginning of serious work on the operating system that would come to be known as OS/2 left Microsoft’s team of Windows developers on decidedly uncertain ground. As OS/2 ramped up, Windows ramped down in proportion, until by the middle of 1986 Tandy Trower had just a few programmers remaining on his team. What had once been Microsoft’s highest-priority project had now become a backwater. Many were asking what the point of Windows was in light of OS/2 and its new GUI, the Presentation Manager. Steve Ballmer, ironically the very same fellow who had played the role of Windows’s cheerleader-in-chief during 1984 and 1985, was now the most prominent of those voices inside Microsoft who couldn’t understand why Trower’s team continued to exist at all.

Windows survived only thanks to the deep-seated instinct of Bill Gates to never put all his eggs in one basket. Stewart Alsop II, editor-in-chief of InfoWorld magazine during this period:

I know from conversations with people at Microsoft in environments where they didn’t have to bullshit me that they almost killed Windows. It came down to Ballmer and Gates having it out. Ballmer wanted to kill Windows. Gates prevented him from doing it. Gates viewed it as a defensive strategy. Just look at Gates. Every time he does something, he tries to cover his bet. He tries to have more than one thing going at once. He didn’t want to commit everything to OS/2, just on the off chance it didn’t work. And in hindsight, he was right.

Gates’s determination to always have a backup plan showed its value around the beginning of 1987, when IBM informed Microsoft of their marketing plans for their new PS/2 hardware line as well as OS/2. Gates had, for very good reason, serious reservations about virtually every aspect of the plans which IBM now laid out for him, from the fortress of patents being constructed around the proprietary Micro Channel Architecture of the PS/2 hardware to an attitude toward the OS/2 software which seemed to assume that the new operating system would automatically supersede MS-DOS, just because of the IBM name. (How well had that worked out for TopView?) “About this time is when Microsoft really started to get hacked at IBM,” remembers Mark Mackaman, Microsoft’s OS/2 product manager at the time. IBM’s latest misguided plans, the cherry on top of all of Gates’s frustration with his inefficient and bureaucratic partners, finally became too much for him. Beginning to feel a strong premonition that the OS/2 train was going to run off the tracks alongside PS/2, he suddenly started to put some distance between IBM’s plans and Microsoft’s. After having all but ignored Windows for the past year, he started to talk about it in public again. And Tandy Trower found his tiny team’s star rising once again inside Microsoft, even as OS/2’s fell.

The first undeniable sign that a Windows rehabilitation had begun came in March of 1987, when Microsoft announced that they had sold 500,000 copies of the operating environment since its release in November of 1985. This number came as quite a surprise to technology journalists, whose own best guess would have pegged Windows’s sales at 10 percent of that figure at best. It soon emerged that Microsoft was counting all sorts of freebies and bundle deals as regular unit sales, and that even by their own most optimistic estimates no more than 100,000 copies of Windows had ever actually been installed. But no matter. For dedicated Microsoft watchers, their fanciful press release was most significant not for the numbers it trumpeted but as a sign that Windows was on again.

According to Paul and George Grayson, whose company Micrografx was among the few which embraced Windows 1, the public announcement of OS/2 and its Presentation Manager in April of 1987 actually lent Microsoft’s older GUI new momentum:

Everybody predicted when IBM announced OS/2 and PM [that] it was death for Windows developers. It was the exact opposite: sales doubled the next month. Everybody all of a sudden knew that graphical user interfaces were critical to the future of the PC, and they said, “How can I get one?”

You had better graphics, you had faster computers, you had kind of the acknowledgement that graphical user interfaces were in your future, you had the Macintosh being very successful. So you had this thing, this phenomenon called Mac envy, beginning to occur where people had PCs and they’d look at their DOS-based programs and say, “Boy, did I get ripped off.” And mice were becoming popular. People wanted a way to use mice. All these things just kind of happened at one moment in time, and it was like hitting the accelerator.

It did indeed seem that Opportunity was starting to knock — if Microsoft could deliver a version of Windows that was more compelling than the first. And Opportunity, of course, was one house guest whom Bill Gates seldom rejected. On October 6, 1987, Microsoft announced that Windows 2 would ship in not one but three forms within the next month or two. Vanilla Windows 2.03 would run on the same hardware as the previous version, while Windows/386 would be a special supercharged version made just for the 80386 CPU — a raised middle finger to IBM for refusing to let Microsoft make OS/2 an 80386-exclusive operating system.

But the most surprising new Windows product of all actually bore the name “Microsoft Excel” on the box. After struggling fruitlessly for the past two years to get people to write native applications for Windows, Microsoft had decided to flip that script by making a version of Windows that ran as part of an application. The new Excel spreadsheet would ship with what Microsoft called a “run-time” version of Windows 2, sufficient to run Excel and only Excel. When people tried Excel and liked it, they’d go out and buy a proper Windows in order to make all their computing work this way. That, anyway, was the theory.

Whether considered as Excel for Windows or Windows for Excel, Microsoft’s latest attempt to field a competitor to Lotus 1-2-3 already had an interesting history. It was in fact a latecomer to the world of MS-DOS, a port of a Macintosh product that had been very successful over the past two years.

After releasing a fairly workmanlike version of Multiplan for the Macintosh back in 1984, Microsoft had turned their attention to a more ambitious Mac spreadsheet that would be designed from scratch in order to take better advantage of the GUI. The wisdom of committing resources to such a move sparked considerable debate both inside and outside their offices, especially after Lotus announced plans of their own for a Macintosh product called Jazz.

Lotus 1-2-3 on MS-DOS was well on its way to becoming the most successful business application of the 1980s by combining a spreadsheet, a database, and a business-graphics application in one package. Now, Lotus Jazz proposed to add a word processor and telecommunications software to that collection on the Macintosh. Few gave Microsoft’s Excel much chance on the Mac against Lotus, the darling of the Fortune magazine set, a company which could seemingly do no wrong, a company which was arguably better known than Microsoft at the time and certainly more profitable. But when Jazz shipped on May 27, 1985, it was greeted with unexpectedly lukewarm reviews. It felt slow and painfully bloated, with an interface that felt more like a Baroque fugue than smooth jazz. For the first time since toppling VisiCalc from its throne as the queen of American business software, Lotus had left the competition an opening.

Excel for Mac shipped on September 30, 1985. In addition to feeling elegant, fast, and easy in contrast to the Lotus monstrosity, Microsoft’s latest spreadsheet was also much cheaper. It outdistanced its more heralded competitor in remarkably short order, quickly growing into a whale in the relatively small pond that was the Macintosh business-applications market. In December of 1985, Excel alone accounted for 36 percent of said market, compared to 9 percent for Jazz. By the beginning of 1987, 160,000 copies of Excel had been sold, compared to 10,000 copies of Jazz. And by the end of that same year, 255,000 copies of Excel had been sold — approximately one copy for every five Macs in active use.

Such numbers weren’t huge when set next to the cash cow that was MS-DOS, but Excel for the Macintosh was nevertheless a breakthrough product for Microsoft. Prior to it, system software had been their one and only forte; despite lots and lots of trying, their applications had always been to one degree or another also-rans, chasing but never catching market leaders like Lotus, VisiCorp, and WordPerfect. But the virgin territory of the Macintosh — ironically, the one business computer for which Microsoft didn’t make the system software — had changed all that. Microsoft’s programmers did a much better job of embracing the GUI paradigm than did their counterparts at companies like Lotus, resulting in software that truly felt like it was designed for the Macintosh from the start rather than ported over from another, more old-fashioned platform. Through not only Excel but also a Mac version of their Word word processor, Microsoft came to play a dominant role in the Mac applications market, with both products ranking among the top five best-selling Mac business applications more months than not during the latter 1980s. Now the challenge was to translate that success in the small export market that was the Macintosh to Microsoft’s sprawling home country, the land of MS-DOS.

On August 16, 1987, Microsoft received some encouraging news just as they were about to take up that challenge in earnest. For the first time ever, their total sales over the previous year amounted to enough to make them the biggest software company in the world, a title which they inherited from none other than Lotus, who had enjoyed it since 1984. The internal memo which Bill Gates wrote in response says everything about his future priorities. “[Lotus’s] big distinction of being the largest is being taken away,” he said, “before we have even begun to really compete with them.” The long-awaited version of Excel for PC compatibles would be launched with two important agendas in mind: to hit Lotus where it hurt, and to get Windows 2 — some form of Windows 2 — onto people’s computers.

Excel for Windows — or should we say Windows for Excel? — reached stores by the beginning of November 1987, to a press reception that verged on ecstatic. PC Magazine‘s review was typical:

Microsoft Excel, the new spreadsheet from Microsoft Corp., could be one of those milestone programs that change the way we use computers. Not only does Excel have a real chance of giving 1-2-3 its most serious competition since Lotus Development Corp. introduced that program in 1982, it could finally give the graphics interface a respectable home in the starched-shirt world of DOS.

For people who cut their teeth on 1-2-3 and have never played with a Mac, Excel looks more like a video game than a serious spreadsheet. It comes with a run-time version of Microsoft Windows, so it has cheery colors, scroll bars, icons, and menu bars. But users will soon discover the beauty of Windows. Since it treats the whole screen as graphics, you can have different spreadsheets and charts in different parts of the screen and you can change nearly everything about the way anything looks.

Excel won PC Magazine‘s award for “technical excellence” in 1987 — a year that notably included OS/2 among the field of competitors. The only thing to complain about was performance: like Windows itself, Excel ran like a dog on an 8088-based machine and sluggishly even on an 80286, requiring an 80386 to really unleash its potential.

Especially given the much greater demands Excel placed on its hardware, it would have to struggle long and hard to displace the well-entrenched Lotus 1-2-3, but it would manage to capture 12 percent of the MS-DOS spreadsheet market in its first year alone. In the process, the genius move of packaging a run-time version of Windows with by far the most exciting Windows application yet created finally caused significant numbers of people to actually start using Microsoft’s GUI, paving the road to its acceptance among even the most conservative of corporate users. Articles by starch-shirted Luddites asking what GUIs were really good for became noticeably less common in the wake of Excel, a product which answered that question pretty darn comprehensively.

Of course, Excel could never have enjoyed such success as the front edge of Microsoft’s GUI wedge had the version of Windows under which it ran not been much more impressive than the first one. Ironically, many of the most welcome improvements came courtesy of the people from the erstwhile Dynamical Systems Research, the company Microsoft had bought, at IBM’s behest, for their work on a TopView clone that could be incorporated into OS/2. After IBM gave up on that idea, most of the Dynamical folks wound up on the Windows team, where they did stellar work. Indeed, one of them, Nathan Myhrvold, would go on to become Microsoft’s chief technology officer and the founder of Microsoft Research, more than justifying his little company’s $1.5 million purchase price all by himself. Take that, IBM!

From the user’s perspective, the most plainly obvious improvement in Windows 2 was the abandonment of Scott MacGregor’s pedantic old tiled-windows system and the embrace of a system of sizable, draggable, overlappable windows like those found on the Macintosh. For the gearheads, though, the real excitement lay in the improvements hidden under the hood of Windows/386, which made heavy use of the 80386’s “virtual 8086” mode. Windows itself and its MS-DOS plumbing ran in one virtual machine, and each vanilla MS-DOS application the user spawned therefrom got another virtual machine of its own. This meant that as many MS-DOS applications and native Windows applications as one wished could now be run in parallel, under a multitasking model that was preemptive for the former and cooperative for the latter. The 640 K barrier still applied to each of the virtual machines and was thus still a headache, requiring the usual inefficient workarounds in the form of extended or expanded memory for applications that absolutely, positively had to have access to more memory. Still, having multiple 640 K virtual machines at one’s disposal was better than having just one.
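
To make the cooperative half of that model a little more concrete, here is a minimal sketch (mine, not anything drawn from period documentation) of the message loop at the heart of every native Windows application of the era, in the C of the day. Under Windows 2’s cooperative scheme, other Windows applications only got a turn at the processor when a program came back around to this loop and called GetMessage() or otherwise yielded; an application that wandered off into a long computation froze the rest of the GUI. The MS-DOS sessions managed by Windows/386, by contrast, were preempted by the hardware timer whether they cooperated or not.

    #include <windows.h>

    /* A Win16-flavored entry point; window-class registration and the
       CreateWindow() call are omitted for brevity. The parameter names are
       the conventional ones, not anything tied to a specific application. */
    int PASCAL WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance,
                       LPSTR lpCmdLine, int nCmdShow)
    {
        MSG msg;

        /* The cooperative heart of the system: GetMessage() is where this
           application yields, giving Windows the chance to hand the CPU to
           whichever other Windows application has messages waiting. */
        while (GetMessage(&msg, NULL, 0, 0)) {
            TranslateMessage(&msg);   /* turn raw key presses into character messages */
            DispatchMessage(&msg);    /* route the message to the window procedure */
        }
        return msg.wParam;
    }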

Windows/386 was arguably the first version of Windows that wasn’t more trouble than it was worth for most users. If you had the hardware to run it, it was a very compelling product, even if the realities of the software market meant that you used it more frequently to multitask old-school MS-DOS applications than to run native Windows applications.

A Quick Tour of Windows/386 2.11

Microsoft themselves didn’t seem to entirely understand the relationship between Windows and OS/2’s Presentation Manager during the late 1980s. Inexplicably, this version of Windows bears the “Presentation Manager” name on some of its disks as well. No wonder users were often confused as to which product was the real Microsoft GUI of the future. (Version 2.11 of Windows/386, the one we’re looking at here, was released some eighteen months after the initial 2.01 release, which I couldn’t ever manage to get running under emulation due to strange hard-drive errors. But the material differences between the two versions are relatively minor.)

Windows 2 remains less advanced than Presentation Manager in many ways, such as the ongoing lack of any concept of application installation. Without it, we’re still left to root around through the individual files on the hard disk in order to start applications.

Especially in the Windows/386 version, most of the really major improvements that came with Windows 2 are architectural enhancements that are hidden under the hood. There is, however, one glaring exception to that rule: the tiled windows are gone, replaced with a windowing system that works the way we still expect such things to work today. You can drag these windows, you can size them, and you can overlap them as you will.

The desktop concept is still lacking, but we’re making progress. Like in Windows 1, icons on the desktop only represent running applications that have been minimized. Unlike in Windows 1, these icons can now be dragged anywhere on the proto-desktop. From here, it would require only one more leap of logic to start using the icons to represent things other than running applications. Baby steps… baby steps.

Windows/386 removes some, if by no means all, of the sting from the 640 K barrier. Thanks to the 80386’s virtual 8086 mode, vanilla MS-DOS applications can now run in memory above the 640 K barrier, leaving the first 640 K free for Windows itself and native Windows applications. So few compelling examples of the latter existed during Windows/386’s heyday that the average user was likely to spend a lot more time running MS-DOS applications with it anyway. In this scenario, memory was ironically much less of a problem than it would have been had the user attempted to run many native applications.

One of the less heralded of Microsoft’s genius marketing moves has been the use of Microsoft Excel as a sort of Trojan horse to get people using Windows. When installed on a machine without Windows, Excel also installs a “run-time” version of the operating environment sufficient only to run itself. Excel would, as Microsoft’s thinking went, get people used to a GUI and get them asking why they couldn’t use one for the other tasks they had to accomplish on the computer. “Why, as a matter of fact you can,” would go Microsoft’s answer. “You just need to buy this product called Windows.” Uptake wouldn’t be instant, but Excel did become quite successful as a standalone product, and did indeed do much to pave the way for the eventual near-universal acceptance of Windows 3.

Excel running under either a complete or a run-time version of Windows. When it appeared alongside Windows 2 in late 1987, it was by far the most sophisticated and compelling application yet made for the environment, giving the MS-DOS-using masses for the first time a proof of concept of what a GUI could mean in the real world.

Greatly improved though it was, Windows 2 didn’t blow the market away. Plenty of the same old problems remained, beginning, and for many ending, with the fact that seeing it at its best required a pricey 80386-based computer. In light of this, third-party software developers didn’t exactly stampede onto the Windows bandwagon. Still, having been provided with such a wonderful example in the form of Microsoft Excel of how compelling (and profitable) a Windows application done right could be, some of them did begin to commit some resources to Windows as well as vanilla MS-DOS. Throughout the life of Windows 2, Microsoft made a standard practice of giving their run-time version of it to outside application developers as well as their own, all in a bid to give people a taste of a GUI through the word processor, spreadsheet, or paint program they were going to buy anyway. To some extent at least, it worked. Some users turned that taste into a meal by buying boxed copies of Windows 2, and still more were intrigued enough to quit scoffing and start accepting that GUIs in general might truly be a better way to get their work done — if not now, then at some point in the future, when the hardware and the software had both gotten a little bit better.

By the spring of 1988, Windows was still at least an order of magnitude away from meeting the goal Bill Gates had once stated it would manage before the end of 1984: that of being installed on 90 percent of all MS-DOS computers. But, even if Windows 2 hadn’t blown anyone away, it was doing considerably better than Windows 1, and certainly seemed to have more momentum than OS/2’s as-yet-unreleased Presentation Manager. Granted, neither of these were terribly high bars to clear — and yet there was a dawning sense that Windows, six and a half years on from its birth as the humble Interface Manager, might just get the last laugh on the MS-DOS command line after all. Microsoft was already formulating plans for a Windows 3, which was coming to be seen both inside and outside the company as the pivotal version, the point where steadily improving hardware would combine with better software to break the GUI into the business-computing mainstream at long last. No, it wasn’t 1984 any more, but better late than never.

And then a new development threatened to pull the rug out from under all the progress that had been made. On March 17, 1988, Apple blindsided Microsoft by filing a lawsuit against them in federal court, alleging that the latter had stolen the former’s intellectual property by copying the “look and feel” of the Macintosh GUI. With the gauntlet thus thrown down, the stage was set for one of the most titanic legal battles in the history of the computer industry, one with the potential to fundamentally alter the very nature of the software business. At stake as well was the very existence of Windows just as it finally seemed to be getting somewhere. And as went Windows, Bill Gates was coming to believe once again, so went Microsoft. In order for both to survive, he would now have to win a two-front war: one in the marketplace, the other in the court system.

(Sources: the books The Making of Microsoft: How Bill Gates and His Team Created the World’s Most Successful Software Company by Daniel Ichbiah and Susan L. Knepper, Hard Drive: Bill Gates and the Making of the Microsoft Empire by James Wallace and Jim Erickson, Gates: How Microsoft’s Mogul Reinvented an Industry and Made Himself the Richest Man in America by Stephen Manes and Paul Andrews, Computer Wars: The Fall of IBM and the Future of Global Technology by Charles H. Ferguson and Charles R. Morris, and Apple Confidential 2.0: The Definitive History of the World’s Most Colorful Company by Owen W. Linzmayer; PC Magazine of November 10 1987, November 24 1987, December 22 1987, April 12 1988, and September 12 1989; Byte of May 1988 and July 1988; Tandy Trower’s “The Secret Origins of Windows” on the Technologizer website. Finally, I owe a lot to Nathan Lineback for the histories, insights, comparisons, and images found at his wonderful online “GUI Gallery.”)

 
