

The Games of Windows

There are two stories to be told about games on Microsoft Windows during the operating environment’s first ten years on the market. One of them is extremely short, the other a bit longer and far more interesting. We’ll dispense with the former first.

During the first half of the aforementioned decade — the era of Windows 1 and 2 — the big game publishers, like most of their peers making other kinds of software, never looked twice at Microsoft’s GUI. Why should they? Very few people were even using the thing.

Yet even after Windows 3.0 hit the scene in 1990 and makers of other kinds of software stampeded to embrace it, game publishers continued to turn up their noses. The Windows API made life easier in countless ways for makers of word processors, spreadsheets, and databases, allowing them to craft attractive applications with a uniform look and feel. But it certainly hadn’t been designed with games in mind; they were so far down on Microsoft’s list of priorities as to be nonexistent. Games were in fact the one kind of software in which uniformity wasn’t a positive thing; gamers craved diverse experiences. As a programmer, you couldn’t even force a Windows game to go full-screen. Instead you were stuck all the time inside the borders of the window in which it ran; this, needless to say, didn’t do much for immersion. It was true that Windows’s library for programming graphics, known as the Graphics Device Interface, or GDI, liberated programmers from the tyranny of the hardware — from needing to program separate modules to interact properly with every video standard in the notoriously diverse MS-DOS ecosystem. Unfortunately, though, GDI was slow; it was fine for business graphics, but unusable for most of the popular game genres.

For all these reasons, game developers, alone among makers of software, stuck obstinately with MS-DOS throughout the early 1990s, even as everything else in mainstream computing went all Windows, all the time. It wouldn’t be until after the first decade of Windows was over that game developers would finally embrace it, helped along both by a carrot (Microsoft was finally beginning to pay serious attention to their needs) and a stick (the ever-expanding diversity of hardware on the market was making the MS-DOS bare-metal approach to programming untenable).

End of story number one.

The second, more interesting story about games on Windows deals with different kinds of games from the ones the traditional game publishers were flogging to the demographic who were happy to self-identify as gamers. The people who came to play these different kinds of games couldn’t imagine describing themselves in those terms — and, indeed, would likely have been somewhat insulted if you had suggested it to them. Yet they too would soon be putting in millions upon millions of hours every year playing games, albeit more often in antiseptic adult offices than in odoriferous teenage bedrooms. Whatever; the fact was, they were still playing games. In fact, they were playing games enough to make Windows, that alleged game-unfriendly operating environment, quite probably the most successful gaming platform of the early 1990s in terms of sheer number of person-hours spent playing. And all the while the “hardcore” gamers barely even noticed this most profound democratization of computer gaming that the world had yet seen.



Microsoft Windows, like its inspiration the Apple Macintosh, used what’s known as a skeuomorphic interface — an interface built out of analogues to real-world objects, such as paper documents, a desktop,  and a trashcan — to present a friendlier face of computing to people who may have been uncomfortable with the blinking command prompt of yore. It thus comes as little surprise that most of the early Windows games were skeuomorphic as well, being computerized versions of non-threateningly old-fashioned card and board games. In this, they were something of a throwback to the earliest days of personal computing in general, when hobbyists passed around BASIC versions of these same hoary classics, whose simple designs constituted some of the only ones that could be made to fit into the minuscule memories of the first microcomputers. With Windows, it seemed, the old had become new again, as computer gaming started over to try to capture a whole new demographic.

The very first game ever programmed to run in Windows is appropriately prototypical. When Tandy Trower took over the fractious and directionless Windows project at Microsoft in January of 1985, he found that a handful of applets that weren’t, strictly speaking, a part of the operating environment itself had already been completed. These included a calculator, a rudimentary text editor, and a computerized version of a board game called Reversi.

Reversi is an abstract game for two players that looks a bit like checkers and plays like a faster-paced, simplified version of the Japanese classic Go. Its origins are somewhat murky, but it was first popularized as a commercial product in late Victorian England. In 1971, an enterprising Japanese businessman made a couple of minor changes to the rules of this game that had long been considered in the public domain, patented the result, and started selling it as Othello. Under this name, it enjoys modest worldwide popularity to this day. Under both of its names, it also became an early favorite on personal computers, where its simple rules and relatively constrained possibility space lent themselves well to the limitations of programming in BASIC on a 16 K computer; Byte magazine, the bible of early microcomputer hackers, published a type-in Othello as early as its October 1977 issue.

A member of the Windows team named Chris Peters had decided to write a new version of the game under its original (and non-trademarked) name of Reversi in 1984, largely as one of several experiments — proofs of concept, if you will — into Windows application programming. Tandy Trower then pushed to get some of his team’s experimental applets, among them Reversi, included with the first release of Windows in November of 1985:

When the Macintosh was announced, I noted that Apple bundled a small set of applications, which included a small word processor called MacWrite and a drawing application called MacPaint. In addition, Lotus and Borland had recently released DOS products called Metro and SideKick that consisted of a small suite of character-based applications that could be popped up with a keyboard combination while running other applications. Those packages included a simple text editor, a calculator, a calendar, and a business-card-like database. So I went to [Bill] Gates and [Steve] Ballmer with the recommendation that we bundle a similar set of applets with Windows, which would include refining the ones already in development, as well as a few more to match functions comparable to these other products.

Interestingly, MacOS did not include any full-fledged games among its suite of applets; the closest it came was a minimalist sliding-number puzzle that filled all of 600 bytes and a maze on the “Guided Tour of Macintosh” disk that was described as merely a tool for learning to use the mouse. Apple, whose Apple II was found in more schools and homes than businesses and who were therefore viewed with contempt by much of the conservative corporate computing establishment, ran scared from any association of their latest machine with games. But Microsoft, on whose operating system MS-DOS much of corporate America ran, must have felt they could get away with a little more frivolity.

Still, Windows Reversi didn’t ultimately have much impact on much of anyone. Reversi in general was a game more suited to the hacker mindset than the general public, lacking the immediate appeal of a more universally known design, while the execution of this particular version of Reversi was competent but no more. And then, of course, very few people bought Windows 1 in the first place.

For a long time thereafter, Microsoft gave little thought to making more games for Windows. Reversi stuck around unchanged in the only somewhat more successful Windows 2, and was earmarked to remain in Windows 3.0 as well. Beyond that, Microsoft had no major plans for Windows gaming. And then, in one of the stranger episodes in the whole history of gaming, they were handed the piece of software destined to become almost certainly the most popular computer game of all time, reckoned in terms of person-hours played: Windows Solitaire.

The idea of a single-player card game, perfect for passing the time on long coach or railway journeys, had first spread across Europe and then the world during the nineteenth century. The game of Solitaire — or Patience, as it is still more commonly known in Britain — is really a collection of many different games that all utilize a single deck of everyday playing cards. The overarching name is, however, often used interchangeably with the variant known as Klondike, by far the most popular form of Solitaire.

Klondike Solitaire, like the many other variants, has many qualities that make it attractive for computer adaptation on a platform that gives limited scope for programmer ambition. Depending on how one chooses to define such things, a “game” of Solitaire is arguably more of a puzzle than an actual game, and that’s a good thing in this context: the fact that this is a truly single-player endeavor means that the programmer doesn’t have to worry about artificial intelligence at all. In addition, the rules are simple, and playing cards are fairly trivial to represent using even the most primitive computer graphics. Unsurprisingly, then, Solitaire was another favorite among the earliest microcomputer game developers.
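To make the point concrete, here is a minimal sketch — in modern Python, purely for illustration; Cherry's actual game was of course written in C against the Windows API — of just how little data modeling Klondike demands. A card is nothing but a rank and a suit, and the whole opening layout is a shuffle followed by a handful of list slices.

```python
import random

RANKS = "A 2 3 4 5 6 7 8 9 10 J Q K".split()
SUITS = "clubs diamonds hearts spades".split()

def new_deck():
    """A standard 52-card deck; each card is just a (rank, suit) pair."""
    return [(rank, suit) for suit in SUITS for rank in RANKS]

def deal_klondike(deck):
    """Deal the classic Klondike opening layout: seven tableau piles holding
    1, 2, ... 7 cards, with the remaining 24 cards forming the stock.
    (Face-up/face-down state is ignored here to keep the sketch short.)"""
    random.shuffle(deck)
    tableau = []
    pos = 0
    for pile_size in range(1, 8):
        tableau.append(deck[pos:pos + pile_size])
        pos += pile_size
    stock = deck[pos:]  # 24 cards left to draw from
    return tableau, stock

tableau, stock = deal_klondike(new_deck())
```

Everything beyond this — checking whether a red seven may rest on a black eight, shuttling piles around — comes down to a few comparisons; as we'll see, the genuinely hard part on hardware of the era was drawing the cards quickly.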

It was for all the same reasons that a university student named Wes Cherry, who worked at Microsoft as an intern during the summer of 1988, decided to make a version of Klondike Solitaire for Windows that was similar to one he had spent a lot of time playing on the Macintosh. (Yes, even when it came to the games written by Microsoft’s interns, Windows could never seem to escape the shadow of the Macintosh.) There was, according to Cherry himself, “nothing great” about the code of the game he wrote; it was no better nor worse than a thousand other computerized Solitaire games. After all, how much could you really do with Solitaire one way or the other? It either worked or it didn’t. Thankfully, Cherry’s did, and even came complete with a selection of cute little card backs, drawn by his girlfriend Leslie Kooy. Asked what was the hardest aspect of writing the game, he points today to the soon-to-be-iconic cascade of cards that accompanied victory: “I went through all kinds of hoops to get that final cascade as fast as possible.” (Here we have a fine example of why most game programmers held Windows in such contempt…) At the end of his summer internship, he put his Solitaire on a server full of games and other little experiments that Microsoft’s programmers had created while learning how Windows worked, and went back to university.

Months later, some unknown manager at Microsoft sifted through the same server and discovered Cherry’s Solitaire. It seems that Microsoft had belatedly started looking for a new game — something more interesting than Reversi — to include with the upcoming Windows 3.0, which they intended to pitch as hard to consumers as businesspeople. They now decided that Solitaire ought to be that game. So, they put it through a testing process, getting Cherry to fix the bugs they found from his dorm room in return for a new computer. Meanwhile Susan Kare, the famed designer of MacOS’s look who was now working for Microsoft, gave Leslie Kooy’s cards a bit more polishing.

And so, when Windows 3.0 shipped in May of 1990, Solitaire was included. According to Microsoft, its purpose was to teach people how to use a GUI in a fun way, but that explanation was always something of a red herring. The fact was that computing was changing, machines were entering homes in big numbers once again, and giving people a fun game to play as part of an otherwise serious operating environment was no longer anathema. Certainly huge numbers of people would find Solitaire more than compelling enough as an end unto itself.

The ubiquity that Windows Solitaire went on to achieve — and still maintains to a large extent to this day[1] — is as difficult to overstate as it is to quantify. Microsoft themselves soon announced it to be the “most used” Windows application of all, easily besting heavyweight businesslike contenders like Word, Excel, Lotus 1-2-3, and WordPerfect. The game became a staple of office life all over the world, to be hauled out during coffee breaks and down times, to be kept always lurking minimized in the background, much to the chagrin of officious middle managers. By 1994, a Washington Post article would ask, only half facetiously, if Windows Solitaire was sowing the seeds of “the collapse of American capitalism.”

“Yup, sure,” says Frank Burns, a principal in the region’s largest computer bulletin board, the MetaNet. “You used to see offices laid out with the back of the video monitor toward the wall. Now it’s the other way around, so the boss can’t see you playing Solitaire.”

“It’s swallowed entire companies,” says Dennis J. “Gomer” Pyles, president of Able Bodied Computers in The Plains, Virginia. “The water-treatment plant in Warrenton, I installed [Windows on] their systems, and the next time I saw the client, the first thing he said to me was, ‘I’ve got 2000 points in Solitaire.'”

Airplanes full of businessmen resemble not board meetings but video arcades. Large gray men in large gray suits — lugging laptops loaded with spreadsheets — are consumed by beating their Solitaire scores, flight attendants observe.

Some companies, such as Boeing, routinely remove Solitaire from the Windows package when it arrives, or, in some cases, demand that Microsoft not even ship the product with the game inside. Even PC Magazine banned game-playing during office hours. “Our editor wanted to lessen the dormitory feel of our offices. Advertisers would come in and the entire research department was playing Solitaire. It didn’t leave the best impression,” reported Tin Albano, a staff editor.

Such articles have continued to crop up from time to time in the business pages ever since — as, for instance, the time in 2006 when New York City Mayor Michael Bloomberg summarily terminated an employee for playing Solitaire on the job, creating a wave of press coverage both positive and negative. But the crackdowns have always been to no avail; it’s as hard to imagine the modern office without Microsoft Solitaire as it is to imagine it without Microsoft Office.

Which isn’t to say that the Solitaire phenomenon is limited to office life. My retired in-laws, who have quite possibly never played another computer game in either of their lives, both devote hours every week to Solitaire in their living room. A Finnish study from 2007 found it to be the favorite game of 36 percent of women and 13 percent of men; no other game came close to those numbers. Even more so than Tetris, that other great proto-casual game of the early 1990s, Solitaire is, to certain types of personality at any rate, endlessly appealing. Why should that be?

To begin to answer that question, we might turn to the game’s pre-digital past. Whitmore Jones’s Games of Patience for One or More Players, a compendium of many Solitaire variants, was first published in 1898. Its introduction is fascinating, presaging much of the modern discussion about Microsoft Solitaire and casual gaming in general.

In days gone by, before the world lived at the railway speed as it is doing now, the game of Patience was looked upon with somewhat contemptuous toleration, as a harmless but dull amusement for idle ladies, and was ironically described as “a roundabout method of sorting the cards”; but it has gradually won for itself a higher place. For now, when the work, and still more the worries, of life have so enormously increased and multiplied, the value of a pursuit interesting enough to absorb the attention without unduly exciting the brain, and so giving the mind a rest, as it were, a breathing space wherein to recruit its faculties, is becoming more and more recognised and appreciated.

In addition to illustrating how concerns about the pace of contemporary life and nostalgia for the good old days are an eternal part of the human psyche, this passage points to the heart of Solitaire’s appeal, whether played with real cards or on a computer: the way that it can “absorb the attention without unduly exciting the brain.” It’s the perfect game to play when killing time at the end of the workday, as a palate cleanser between one task and another, or, as in the case of my in-laws, as a semi-active accompaniment to the idle practice of watching the boob tube.

Yet Solitaire isn’t a strictly rote pursuit even for those with hundreds of hours of experience playing it; if it was, it would have far less appeal. Indeed, it isn’t even particularly fair. About 20 percent of shuffles will result in a game that isn’t winnable at all, and Wes Cherry’s original computer implementation at least does nothing to protect you from this harsh mathematical reality. Still, when you get stuck there’s always that “Deal” menu option waiting for you up there in the corner, a tempting chance to reshuffle the cards and try your hand at a new combination. So, while Solitaire is the very definition of a low-engagement game, it’s also a game that has no natural end point; somehow the “Deal” option looks equally tempting whether you’ve just won or just lost. After being sucked in by its comfortable similarity to an analog game of cards almost everyone of a certain age has played, people can and do proceed to keep playing it for a lifetime.

As in the case of Tetris, there’s room to debate whether spending so many hours upon such a repetitive activity as playing Solitaire is psychologically healthy. For my own part, I avoid it and similar “time waster” games as just that — a waste of time that doesn’t leave me feeling good about myself afterward. By way of another perspective, though, there is this touching comment that was once left by a Reddit user to Wes Cherry himself:

I just want to tell you that this is the only game I play. I have autism and don’t game due to not being able to cope with the sensory processing – but Solitaire is “my” game.

I have a window of it open all day, every day and the repetitive clicking is really soothing. It helps me calm down and mentally function like a regular person. It makes a huge difference in my quality of life. I’m so glad it exists. Never thought there would be anyone I could thank for this, but maybe I can thank you. *random Internet stranger hugs*

Cherry wrote Solitaire in Microsoft’s offices on company time, and thus it was always destined to be their intellectual property. He was never paid anything at all, beyond a free computer, for creating the most popular computer game in history. He says he’s fine with this. He’s long since left the computer industry, and now owns and operates a cider distillery on Vashon Island in Puget Sound.

The popularity of Solitaire convinced Microsoft, if they needed convincing, that simple games like this had a place — potentially a profitable place — in Windows. Between 1990 and 1992, they released four “Microsoft Entertainment Packs,” each of which contained seven little games of varying degrees of inspiration, largely cobbled together from more of the projects coded by their programmers in their spare time. These games were the polar opposite of the ones being sold by traditional game publishers, which were growing ever more ambitious, with increasingly elaborate storylines and increasing use of video and sound recorded from the real world. The games from Microsoft were instead cast in the mold of Cherry’s Solitaire: simple games that placed few demands on either their players or the everyday office computers Microsoft envisioned running them, as indicated by the blurbs on the boxes: “No more boring coffee breaks!”; “You’ll never get out of the office!” Bruce Ryan, the manager placed in charge of the Entertainment Packs, later summarized the target demographic as “loosely supervised businesspeople.”

The centerpiece of the first Entertainment Pack was a passable version of Tetris, created under license from Spectrum Holobyte, who owned the computer rights to the game. Wes Cherry, still working out of his dorm room, provided a clone of another older puzzle game called Pipe Dream to be the second Entertainment Pack’s standard bearer; he was even compensated this time, at least modestly. As these examples illustrate, the Entertainment Packs weren’t conceptually ambitious in the least, being largely content to provide workmanlike copies of established designs from both the analog and digital realms. Among the other games included were Solitaire variants other than Klondike, a clone of the Activision tile-matching hit Shanghai, a 3D Tic-tac-toe game, a golf game (for the ultimate clichéd business-executive experience), and even a version of John Horton Conway’s venerable study of cellular life cycles, better known as the game of Life. (One does have to wonder what bored office workers made of that).

Established journals of record like Computer Gaming World barely noticed the Entertainment Packs, but they sold more than half a million copies in two years, equaling or besting the numbers of the biggest hardcore hits of the era, such as the Wing Commander series. Yet even that impressive number rather understates the popularity of Microsoft’s time wasters. Given that they had no copy protection, and given that they would run on any computer capable of running Windows, the Entertainment Packs were by all reports pirated at a mind-boggling rate, passed around offices like cakes baked for the Christmas potluck.

For all their success, though, nothing on any of the Entertainment Packs came close to rivaling Wes Cherry’s original Solitaire game in terms of sheer number of person-hours played. The key factor here was that the Entertainment Packs were add-on products; getting access to these games required motivation and effort from the would-be player, along with — at least in the case of the stereotypical coffee-break player from Microsoft’s own promotional literature — an office environment easygoing enough that one could carry in software and install it on one’s work computer. Solitaire, on the other hand, came already included with every fresh Windows installation, so long as an office’s system administrators weren’t savvy and heartless enough to seek it out and delete it. The archetypal low-effort game, its popularity was enabled by the fact that it also took no effort whatsoever to gain access to it. You just sort of stumbled over it while trying to figure out this new Windows thing that the office geek had just installed on your faithful old computer, or when you saw your neighbor in the next cubicle playing and asked what the heck she was doing. Five minutes later, it had its hooks in you.

It was therefore significant when Microsoft added a new game — or rather an old one — to 1992’s Windows 3.1. Minesweeper had actually debuted as part of the first Entertainment Pack, where it had become a favorite of quite a number of players. Among them was none other than Bill Gates himself, who became so addicted that he finally deleted the game from his computer — only to start getting his fix on his colleagues’ machines. (This creates all sorts of interesting fuel for the imagination. How do you handle it when your boss, who also happens to be the richest man in the world, is hogging your computer to play Minesweeper?) Perhaps due to the CEO’s patronage, Minesweeper became part of Windows’s standard equipment in 1992, replacing the unloved Reversi.

Unlike Solitaire and most of the Entertainment Pack games, Minesweeper was an original design, written by staff programmers Robert Donner and Curt Johnson in their spare time. That said, it does owe something to the old board game Battleship, to very early computer games like Hunt the Wumpus, and in particular to a 1985 computer game called Relentless Logic. You click on squares in a grid to uncover their contents, which can be one of three things: nothing at all, indicating that neither this square nor any of its adjacent squares contain mines; a number, indicating that this square is clear but said number of its adjacent squares do contain mines; or — unlucky you! — an actual mine, which kills you, ending the game. Like Solitaire, Minesweeper straddles the line — if such a line exists — between game and puzzle, and it isn’t a terribly fair take on either: while the program does protect you to the extent that the first square you click will never contain a mine, it’s possible to get into a situation through no fault of your own where you can do nothing but play the odds on your next click. But, unlike Solitaire, Minesweeper does have more of the trappings of a conventional videogame, including a timer which encourages you to play quickly to achieve the maximum score.
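The bookkeeping behind those rules is as simple as they suggest. Here is an illustrative sketch in Python (not the real game, which was a small C program): scatter the mines, keep the first-clicked square safe — as the article notes, the original does guarantee this — and count neighboring mines for every other square.

```python
import random

def make_board(width, height, mine_count, first_click):
    """Place mines at random, guaranteeing the first clicked square is safe,
    then compute the number shown on every safe square: how many of its
    (up to eight) neighbors contain a mine."""
    cells = [(x, y) for x in range(width) for y in range(height)]
    cells.remove(first_click)  # the first click is never a mine
    mines = set(random.sample(cells, mine_count))

    def adjacent(x, y):
        return [(x + dx, y + dy)
                for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                if (dx, dy) != (0, 0)
                and 0 <= x + dx < width and 0 <= y + dy < height]

    counts = {}
    for x in range(width):
        for y in range(height):
            if (x, y) not in mines:
                counts[(x, y)] = sum(n in mines for n in adjacent(x, y))
    return mines, counts

# A beginner-sized board: 9x9 with 10 mines, first click in the corner.
mines, counts = make_board(9, 9, 10, (0, 0))
```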

Doubtless because of those more overt videogame trappings, Minesweeper never became quite the office fixture that Solitaire did. Those who did get sucked in by it, however, found it even more addictive, perhaps not least because it does demand a somewhat higher level of engagement. It too became an iconic part of life with Microsoft Windows, and must rank high on any list of most-played computer games of all time, if the data only existed to compile such a thing. After all, it did enjoy one major advantage over even Solitaire for office workers with uptight bosses: it ran in a much smaller window, and thus stood out far less on a crowded screen when peering eyes glanced into one’s cubicle.

Microsoft included a third game with Windows for Workgroups 3.1, the variant of the environment intended for networked offices. True to that theme, Hearts was a version of the evergreen card game which could be played against computer opponents, but which was most entertaining when played together by up to four real people, all on separate computers. Its popularity was somewhat limited by the fact that it came only with Windows for Workgroups, but, again, that adjective is relative. By any normal computer-gaming standard, Hearts was hugely popular indeed for quite some years, serving for many people as their introduction to the very concept of online gaming — a concept destined to remake much of the landscape of computer gaming in general in years to come. Certainly I can remember many a spirited Hearts tournament at my workplaces during the 1990s. The human, competitive element always made Hearts far more appealing to me than the other games I’ve discussed in this article.

But whatever your favorite happened to be, the games of Windows became a vital part of a process I’ve been documenting in fits and starts over the last year or two of writing this history: an expansion of the demographics that were playing games, accomplished not by making parents and office workers suddenly fall in love with the massive, time-consuming science-fiction or fantasy epics upon which most of the traditional computer-game industry remained fixated, but rather by meeting them where they lived. Instead of five-course meals, Microsoft provided ludic snacks suited to busy lives and limited attention spans. None of the games I’ve written about here are examples of genius game design in the abstract; their genius, to whatever extent it exists, is confined to worming their way into the psyche in a way that can turn them into compulsions. Yet, simply by being a part of the software that just about everybody, with the exception of a few Macintosh stalwarts, had on their computers in the 1990s, they got hundreds of millions of people playing computer games for the first time. The mainstream Ludic Revolution, encompassing the gamification of major swaths of daily life, began in earnest on Microsoft Windows.

(Sources: the book A Casual Revolution: Reinventing Video Games and Their Players by Jesper Juul; Byte of October 1977; Computer Gaming World of September 1992; Washington Post of March 9 1994; New York Times of February 10 2006; online articles at Technologizer, The Verge, B3TA, Reddit, Game Set Watch, Tech Radar, Business Insider, and Danny Glasser’s personal blog.)

Footnotes

[1] The game got a complete rewrite for Windows Vista in 2006. Presumably any traces of Wes Cherry’s original code that might have been left were excised at that time. Beginning with Windows 8 in 2012, a standalone Klondike Solitaire game was no longer included as a standard part of every Windows installation — a break with more than twenty years of tradition. Perhaps due to the ensuing public outcry, the advertising-supported Microsoft Solitaire Collection did become a component of Windows 10 upon the latter’s release in 2015.


Doing Windows, Part 9: Windows Comes Home

This series of articles so far has been a story of business-oriented personal computing. Corporate America had been running for decades on IBM before the IBM PC appeared, so it was only natural that the standard IBM introduced would be embraced as the way to get serious, businesslike things done on a personal computer. Yet long before IBM entered the picture, personal computing in general had been pioneered by hackers and hobbyists, many of whom nursed grander dreams than giving secretaries a better typewriter or giving accountants a better way to add up figures. These pioneers didn’t go away after 1981, but neither did they embrace the IBM PC, which most of them dismissed as technically unimaginative and aesthetically disastrous. Instead they spent the balance of the 1980s using computers like the Apple II, the Commodore 64, the Commodore Amiga, and the Atari ST to communicate with one another, to draw pictures, to make music, and of course to write and play lots and lots of games. Dwarfed already in terms of dollars and cents at mid-decade by the business-computing monster the IBM PC had birthed, this vibrant alternative computing ecosystem — sometimes called home computing, sometimes consumer computing — makes a far more interesting subject for the cultural historian of today than the world of IBM and Microsoft, with its boring green screens and boring corporate spokesmen running scared from the merest mention of digital creativity. It’s for this reason that, a few series like this one aside, I’ve spent the vast majority of my time on this blog talking about the cultures of creative computing rather than those of IBM and Microsoft.

Consumer computing did enjoy one brief boom in the 1980s. From roughly 1982 to 1984, a narrative took hold within the mainstream media and the offices of venture capitalists alike that full-fledged computers would replace the Atari VCS and other game consoles in American homes on a massive scale. After all, computers could play games just like the consoles, but they alone could also be used to educate the kids, write school reports and letters, balance the checkbook, and — that old favorite to which the pundits returned again and again — store the family recipes.

All too soon, though, the limitations of the cheap 8-bit computers that had fueled the boom struck home. As a consumer product, those early computers with their cryptic blinking command prompts were hopeless; at least with an Atari VCS you could just put a cartridge in the slot, turn it on, and play. There were very few practical applications for which they weren’t more trouble than they were worth. If you needed to write a school report, a standalone word-processing machine designed for that purpose alone was often a cheaper and better solution, and the family accounts and recipes were actually much easier to store on paper than in a slow, balky computer program. Certainly paper was the safer choice over a pile of fragile floppy disks.

So, what we might call the First Home Computer Revolution fizzled out, with most of the computers that had been purchased over its course making the slow march of shame from closet to attic to landfill. That minority who persisted with their new computers was made up of the same sorts of personalities who had had computers in their homes before the boom — for the one concrete thing the First Home Computer Revolution had achieved was to make home computers in general more affordable, and thus put them within the reach of more people who were inclined toward them anyway. People with sufficient patience continued to find home computers great for playing games that offered more depth than the games on the consoles, while others found them objects of wonder unto themselves, new oceans just waiting to have their technological depths plumbed by intrepid digital divers. It was mostly young people, who had free time on their hands, who were open to novelty, who were malleable enough to learn something new, and who were in love with escapist fictions of all stripes, who became the biggest home-computer users.

Their numbers grew at a modest pace every year, but the real money, it was now clear, was in business computing. Why try to sell computers piecemeal to teenagers when you could sell them in bulk to corporations? IBM, after having made one abortive stab at capturing home computing as well via the ill-fated PCjr, went where the money was, and all but a few other computer makers — most notable among these home-computer loyalists were Commodore, Atari, and Radio Shack — followed them there. The teenagers, for their part, responded to the business-computing majority’s contempt in kind, piling scorn onto the IBM PC’s ludicrously ugly CGA graphics and its speaker that could do little more than beep and fart at you, all while embracing their own more colorful platforms with typical adolescent zeal.

As the 1980s neared their end, however, the ugly old MS-DOS computer started down an unanticipated road of transformation. In 1987, as part of the misbegotten PS/2 line, IBM introduced a new graphics standard called VGA that, with up to 256 onscreen colors from a palette of more than 260,000, outdid all of the common home computers of the time. Soon after, enterprising third parties like Ad Lib and Creative Labs started making add-on sound cards for MS-DOS machines that could make real music and — just as important for game fanatics — real explosions. Many a home hacker woke up one morning to realize that the dreaded PC clone suddenly wasn’t looking all that bad. No, the technical architecture wasn’t beautiful, but it was robust and mature, and the pressure of having dozens of competitors manufacturing machines meeting the standard kept the bang-for-your-buck ratio very good. And if you — or your parents — did want to do any word processing or checkbook balancing, the software for doing so was excellent, honed by years of catering to the most demanding of corporate users. Ditto the programming tools that were nearer to a hacker’s heart; Borland’s Turbo Pascal alone was a thing of wonder, better than any other programming environment on any other personal computer.

Meanwhile 8-bit home computers like the Apple II and the Commodore 64 were getting decidedly long in the tooth, and the companies that made them were doing a peculiarly poor job of replacing them. The Apple Macintosh was so expensive as to be out of reach of most, and even the latest Apple II, known as the IIGS, was priced way too high for what it was; Apple, having joined the business-computing rat race, seemed vaguely embarrassed by the continuing existence of the Apple II, the platform that had made them. The Commodore Amiga 500 was perhaps a more promising contender to inherit the crown of the Commodore 64, but its parent company had mismanaged their brand almost beyond hope of redemption in the United States.

So, in 1988 and 1989 MS-DOS-based computing started coming home, thanks both to its own sturdy merits and a lack of compelling alternatives from the traditional makers of home computers. The process was helped along by Sierra Online, a major publisher of consumer software who had bet big and early on the MS-DOS standard conquering the home in the end, and were thus out in front of its progress now with a range of appealing games that took full advantage of the new graphics and sound cards. Other publishers, reeling before a Nintendo onslaught that was devastating the remnants of the 8-bit software market, soon followed their lead. By 1990, the vast majority of the American consumer-software industry had joined their counterparts in business software in embracing MS-DOS as their platform of first — often, of only — priority.

Bill Gates had always gone where the most money was. In years past, the money had been in business computing, and so Microsoft, after experimenting briefly with consumer software in the period just before the release of the IBM PC, had all but ignored the consumer market in favor of system software and applications targeted squarely at corporate America. Now, though, the times were changing, as home computers became powerful and cheap enough to truly go mainstream. The media was buzzing about the subject as they hadn’t for years; everywhere it was multimedia this, CD-ROM that. Services like Prodigy and America Online were putting a new, friendlier face on the computer as a tool for communicating and socializing, and game developers were buzzing about an emerging new form of mass-market entertainment, a merger of Silicon Valley and Hollywood. Gates wasn’t alone in smelling a Second Home Computer Revolution in the wind, one that would make the computer a permanent fixture of modern American home life in all the ways the first had failed to do so.

This, then, was the zeitgeist into which Microsoft Windows 3.0 made its splashy debut in May of 1990. It was perfectly positioned both to drive the Second Home Computer Revolution and to benefit from it. Small wonder that Microsoft undertook a dramatic branding overhaul this year, striving to project a cooler, more entertaining image — an image appropriate for a company which marketed not to other companies but to individual consumers. One might say that the Microsoft we still know today was born on May 22, 1990, when Bill Gates strode onto a stage — tellingly, not a stage at Comdex or some other stodgy business-oriented computing event — to introduce the world to Windows 3.0 over a backdrop of confetti cannons, thumping music, and huge projection screens.

The delirious sales of Windows 3.0 that followed were not — could not be, given their quantity — driven exclusively by sales to corporate America. The world of computing had turned topsy-turvy; consumer computing was where the real action was now. Even as they continued to own business-oriented personal computing, Microsoft suddenly dominated in the home as well, thanks to the capitulation without much of a fight of all of the potential rivals to MS-DOS and Windows. Countless copies of Windows 3.0 were sold by Microsoft directly to Joe Public to install on his existing home computer, through a toll-free hotline they set up for the purpose. (“Have your credit card ready and call!”) Even more importantly, as new computers entered American homes in mass quantities for the second time in history, they did so with Windows already on their hard drives, thanks to Microsoft’s longstanding deals with the companies that made them.

In April of 1992,  Windows 3.1 appeared, sporting as one of its most important new features a set of “multimedia extensions” — this meaning tools for recording and playing back sounds, for playing audio CDs, and, most of all, for running a new generation of CD-ROM-based software sporting digitized voices and music and video clips — which were plainly aimed at the home rather than the business user.  Although Windows 3.1 wasn’t as dramatic a leap forward as its predecessor had been, Microsoft nevertheless hyped it to the skies in the mass media, rolling out an $8 million television-advertising campaign among other promotional strategies that would have been unthinkable from the business-focused Microsoft of just a few years earlier. It sold even faster than had its predecessor.

A Quick Tour of Windows for Workgroups 3.1


Released in April of 1992, Windows 3.1 was the ultimate incarnation of Windows’s third generation. (A version 3.11 was released the following year, but it confined itself to bug fixes and modest performance tweaks, introducing no significant new features.) It dropped support for 8088-based machines, and with it the old “real mode” of operation; it now ran only in protected mode or 386 enhanced mode. It made welcome strides in terms of stability, even as it still left much to be desired on that front. And this Windows was the last to be sold as an add-on to an MS-DOS which had to be purchased separately. Consumer-grade incarnations of Windows would continue to be built on top of MS-DOS for the rest of the decade, but from Windows 95 on Microsoft would do a better job of hiding their humble foundation by packaging the whole software stack together as a single product.

Stuff like this is the reason Windows always took such a drubbing in comparison to other, slicker computing platforms. In truth, Microsoft was doing the best they could to support a bewildering variety of hardware, a problem with which vendors of turnkey systems like Apple didn’t have to contend. Still, it’s never a great look to have to tell your customers, “If this crashes your computer, don’t worry about it, just try again.” Much the same advice applied to daily life with Windows, noted the scoffers.

Microsoft was rather shockingly lax about validating Windows 3 installations. The product had no copy protection of any sort, meaning one person in a neighborhood could (and often did) purchase a copy and share it with every other house on the block. Others in the industry had a sneaking suspicion that Microsoft really didn’t mind that much if Windows was widely pirated among their non-business customers — that they’d rather people run pirated copies of Windows than a competing product. It was all about achieving the ubiquity which would open the door to all sorts of new profit potential through the sale of applications. And indeed, Windows 3 was pirated like crazy, but it also became thoroughly ubiquitous. As for the end to which Windows’s ubiquity was the means: by the time applications came to represent 25 percent of Microsoft’s unit sales, they already accounted for 51 percent of their revenue. Bill Gates always had an instinct for sniffing out where the money was.

Probably the most important single enhancement in Windows 3.1 was its TrueType fonts. The rudimentary bitmap fonts which shipped with older versions looked… not all that nice on the screen or on the page, reportedly due to Bill Gates’s adamant refusal to pay a royalty for fonts to an established foundry like Adobe, as Apple had always done. This decision led to a confusion of aftermarket fonts in competing formats. If you used some of these more stylish fonts in a document, you couldn’t share that document with anyone else unless she also had installed the same fonts. So, you could either share ugly documents or keep nice-looking ones to yourself. Some choice! Thankfully, TrueType came along to fix all that, giving Macintosh users at least one less thing to laugh at when it came to Windows.

The TrueType format was the result of an unusual cooperative project led by Microsoft and Apple — yes, even as they were battling one another in court. The system of glyphs and the underlying technology to render them were intended to break the stranglehold Adobe Systems enjoyed over high-end printing; Adobe charged a royalty of up to $100 per gadget that employed their own PostScript font system, and were widely seen in consequence as a retrograde force holding back the entire desktop-publishing and GUI ecosystem. TrueType would succeed splendidly in its monopoly-busting goal, to such an extent that it remains the standard for fonts on Microsoft Windows and Apple’s OS X to this day. Bill Gates, no stranger to vindictiveness, joked that “we made [the widely disliked Adobe head] John Warnock cry.”

The other big addition to Windows 3.1 was the “multimedia extensions.” These let you do things like record sounds using an attached microphone and play your audio CDs on your computer. That they were added to what used to be a very businesslike operating environment says much about how important home users had become to Microsoft’s strategy.

In a throwback to an earlier era of computing, MS-DOS still shipped with a copy of BASIC included, and Windows 3.1 automatically found it and configured it for easy access right out of the box — this even though home computing was now well beyond the point where most users would ever try to become programmers. Bill Gates’s sentimental attachment to BASIC, the language on which he built his company before the IBM PC came along, has often been remarked upon by his colleagues, especially since he wasn’t normally a man given to much sentimentality. It was the widespread perception of Borland’s Turbo Pascal as the logical successor to BASIC — the latest great programming tool for the masses — that drove the longstanding antipathy between Gates and Borland’s flamboyant leader, Philippe Kahn. Later, it was supposedly at Gates’s insistence that Microsoft’s Visual BASIC, a Pascal-killer which bore little resemblance to BASIC as most people knew it, nevertheless bore the name.

Windows for Workgroups — a separate, pricier version of the environment aimed at businesses — was distinguished by having built-in support for networking. This wasn’t, however, networking as we think of it today. It was rather intended to connect machines together only in a local office environment. No TCP/IP stack — the networking technology that powers the Internet — was included.

But you could get on the Internet with the right additional software. Here, just for fun, I’m trying to browse the web using Internet Explorer 5 from 1999, the last version made for Windows 3. Google is one of the few sites that work at all — albeit, as you can see, not very well.

All this success — this reality of a single company now controlling almost all personal computing, in the office and in the home — brought with it plenty of blowback. The metaphor of Microsoft as the Evil Empire, and of Bill Gates as the computer industry’s very own Darth Vader, began in earnest in these years of Windows 3’s dominance. Neither Gates nor his company had ever been beloved among their peers, having always preferred making money to making friends. Now, though, the naysayers came out in force. Bob Metcalfe, a Xerox PARC alum famous in hacker lore as the inventor of the Ethernet networking protocol, talked about Microsoft’s expanding “death grip” on innovation in the computer industry. Indeed, zombie imagery was prevalent among many of Microsoft’s rivals; Mitch Kapor of Lotus called the new Windows-driven industry “the kingdom of the dead”: “The revolution is over, and free-wheeling innovation in the software industry has ground to a halt.” Any number of anonymous commenters mused about doing Gates any number of forms of bodily harm. “It’s remarkable how widespread the negative feelings toward Microsoft are,” mused Stewart Alsop. “No one wants to work with Microsoft anymore,” said noted Gates-basher Philippe Kahn of Borland. “We sure won’t. They don’t have any friends left.” Channeling such sentiments, Business Month magazine cropped Gates’s nerdy face onto a body-builder’s body and labeled him the “Silicon Bully” on its cover: “How long can Bill Gates kick sand in the face of the computer industry?”

Setting aside the jealousy that always follows great success, even setting aside for the moment the countless ways in which Microsoft really did play hardball with their competitors, something about Bill Gates rubbed many people the wrong way on a personal, visceral level. In keeping with their new, consumer-friendly image, Microsoft had hired consultants to fix up his wardrobe and work on his speaking style — not to mention to teach him the value of personal hygiene — and he could now get through a canned presentation ably enough. When it came to off-the-cuff interactions, though, he continued to strike many as insufferable. To judge him on the basis of his weedy physique and nasal speaking voice — the voice of the kid who always had to show how smart he was to the rest of the class — was perhaps unfair. But one certainly could find him guilty of a thoroughgoing lack of graciousness.

His team of PR coaches could have told him that, when asked who had contributed the most to the personal-computer revolution, he ought to politely decline to answer, or, even better, modestly reflect on the achievements of someone like his old friend Steve Jobs. But they weren’t in the room with him one day when that exact question was put to him by a smiling reporter, and so, after acknowledging that it really should be answered by “others less biased than me,” he proceeded to make the case for himself: “I will say that I started the first microcomputer-software company. I put BASIC in micros before 1980. I was influential in making the IBM PC a 16-bit machine. My DOS is in 50 million computers. I wrote software for the Mac.” I, I, I. Everything he said was true, at least if one presumed that “I” meant “Bill Gates and the others at Microsoft” in this context. Yet there was something unappetizing about this laundry list of achievements he could so easily rattle off, and about the almost pathological competitiveness it betrayed. We love to praise ambition in the abstract, but most of us find such naked ambition as that constantly displayed by Gates profoundly off-putting. The growing dislike for Microsoft in the computer industry and even in much of the technology press was fueled to a large extent by a personal aversion to their founder.

Which isn’t to say that there weren’t valid grounds for concern when it came to Microsoft’s complete dominance of personal-computer system software. Comparisons to the Standard Oil trust of the Gilded Age were in the air, so much so that by 1992 it was already becoming ironically useful for Microsoft to keep the Macintosh and OS/2 alive and allow them their paltry market share, just so the alleged monopolists could point to a couple of semi-viable competitors to Windows. It was clear that Microsoft’s ambitions didn’t end with controlling the operating system installed on the vast majority of computers in the country and, soon, the world. On the contrary, that was only a means to their real end. They were already using their status as the company that made Windows to cut deep into the application market, invading territory that had once belonged to the likes of Lotus 1-2-3 and WordPerfect. Now, those names were slowly being edged out by Microsoft Excel and Microsoft Word. Microsoft wanted to own more or less all of the software on your computer. Any niche that remained for outside developers in computing’s new order, it seemed, would exist only at Microsoft’s sufferance. The established makers of big-ticket business applications would have been chilled if they had been privy to the words spoken by Mike Maples, Microsoft’s head of applications, to his own people: “If someone thinks we’re not after Lotus and after WordPerfect and after Borland, they’re confused. My job is to get a fair share of the software applications market, and to me that’s 100 percent.” This was always the problem with Microsoft. They didn’t want to compete in the markets they entered; they wanted to own them.

Microsoft’s control of Windows gave them all sorts of advantages over other application developers which may not have been immediately apparent to the non-technical public. Take, for instance, the esoteric-sounding technology of Object Linking and Embedding, or OLE, which debuted with Windows 3.0 and still exists in current versions. OLE allows applications to share all sorts of dynamic data between themselves. Thanks to it, a word-processor document can include charts and graphs from a spreadsheet, with the one updating itself automatically when the other gets updated. Microsoft built OLE support into new versions of Word and Excel that accompanied Windows 3.0’s release, but refused for many months to tell outside developers how to use it.  Thus Microsoft’s applications had hugely desirable capabilities which their competitors did not for a long, long time. Similar stories played out again and again, driving the competition to distraction while Bill Gates shrugged his shoulders and played innocent. “We bend over backwards to make sure we’re not getting special advantage,” he said, while Steve Ballmer talked about a “Chinese wall” between Microsoft’s application and system programmers — a wall which people who had actually worked there insisted simply didn’t exist.
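The real OLE machinery is a sprawling C-level API well beyond the scope of this article, but the idea it delivers is easy to illustrate. The Python sketch below is conceptual only — the class names are invented for illustration and have nothing to do with the actual OLE interfaces — and simply shows the publish-and-subscribe notion at the heart of linking: the document registers its interest in the chart, and any update to the chart’s data propagates to every linked view automatically.

```python
class SpreadsheetChart:
    """Stands in for a chart living in a spreadsheet application."""
    def __init__(self, values):
        self.values = values
        self._subscribers = []

    def link(self, callback):
        """Another application asks to be told whenever the data changes."""
        self._subscribers.append(callback)

    def update(self, values):
        self.values = values
        for notify in self._subscribers:  # every linked document redraws
            notify(self)

class WordDocument:
    """Stands in for a word-processor document that links to that chart."""
    def __init__(self, chart):
        chart.link(self.redraw)
        self.redraw(chart)

    def redraw(self, chart):
        print("document now shows chart data:", chart.values)

chart = SpreadsheetChart([1, 2, 3])
report = WordDocument(chart)   # prints the initial data
chart.update([4, 5, 6])        # the linked view refreshes automatically
```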

On March 1, 1991, news broke that the Federal Trade Commission was investigating Microsoft for anti-trust violations and monopolistic practices. The investigators specifically pointed to that agreement with IBM that had been announced at the Fall 1989 Comdex, to target low-end computers with Microsoft’s Windows and high-end computers with the two companies’ joint operating system OS/2 — ironically, an “anti-competitive” initiative that Microsoft had never taken all that seriously. Once the FTC started digging, however, they found that there was plenty of other evidence to be turned up, from both the previous decade and this new one.

There was, for instance, little question that Microsoft had always leveraged their status as the maker of MS-DOS in every way they could. When Windows 3.0 came out, they helped to ensure its acceptance by telling hardware makers that the only way they would continue to be allowed to buy MS-DOS for pre-installation on their computers was to buy Windows and start pre-installing that too. Later, part of their strategy for muscling into the application market was to get Microsoft Works, a stripped-down version of the full Microsoft Office suite, pre-installed on computers as well. How many people were likely to go out and buy Lotus 1-2-3 or WordPerfect when they already had similar software on their computer? Of course, if they did need something more powerful, said the little card included with every computer, they could have the more advanced version of Microsoft Works for the cost of a nominal upgrade fee…

And there were other, far more nefarious stories to tell. There was, for instance, the tale of DR-DOS, a 1988 alternative to MS-DOS from Digital Research which was compatible with Microsoft’s operating system but offered a lot of welcome enhancements. Microsoft went after any clone maker who tried to offer DR-DOS pre-installed on their machines with both carrots (they would undercut Digital Research’s price to the point of basically giving MS-DOS away if necessary) and sticks (they would refuse to license them the upcoming, hotly anticipated Windows 3.0 if they persisted in their loyalty to Digital Research). Later, once the DR-DOS threat had been quelled, most of the features that had made it so desirable turned up in the next release of MS-DOS. Digital Research — a company which Bill Gates seemed to delight in tormenting — had once again been, in the industry’s latest parlance, “Microslimed.”

But Digital Research was neither the first nor the last such company. Microsoft, it was often claimed, had a habit of negotiating with smaller companies under false pretenses, learning what made their technology tick under the guise of due diligence, and then launching their own product based on what they had learned. In early 1990, Microsoft told Intuit, a maker of a hugely successful money-management package called Quicken, that they were interested in acquiring them. After several weeks of negotiations, including lots of discussions about how Quicken was programmed, how it was used in the wild, and what marketing strategies had been most effective, Microsoft abruptly broke off the talks, saying they “couldn’t find a way to make it work.” Before the end of 1990, they had announced Microsoft Money, their own money-management product.

More and more of these types of stories were being passed around. A startup called Go came to Microsoft with a pen-based computing interface. (Pen-based computing was all the rage at the time; Apple as well was working on something called the Newton, a sort of pen-based proto-iPad that, like all of the other initiatives in this direction, would turn into an expensive failure.) After spending weeks examining Go’s technology, Microsoft elected not to purchase it or sign them to a contract. But, just days later, they started an internal project to create a pen-based interface for Windows, headed by the engineer who had been in charge of “evaluating” Go’s technology. A meme was emerging, by no means entirely true but perhaps not entirely untrue either, of Microsoft as a company better at doing business than doing technology, who would rather copy the innovations of others than do the hard work of coming up with their own ideas.

In a way, though, this very quality was a source of strength for Microsoft, the reason that corporate clients flocked to them now like they once had to IBM; the mantra that “no one ever got fired for buying IBM” was fast being replaced in corporate America by “no one ever got fired for buying Microsoft.” “We don’t do innovative stuff, like completely new revolutionary stuff,” Bill Gates admitted in an unguarded moment. “One of the things we are really, really good at doing is seeing what stuff is out there and taking the right mix of good features from different products.” For businesses and, now, tens of millions of individual consumers, Microsoft really was the new IBM: they were safe. You bought a Windows machine not because it was the slickest or sexiest box on the block but because you knew it was going to be well-supported, knew there would be software on the shelves for it for a long time to come, knew that when you did decide to upgrade the transition would be a relatively painless one. You didn’t get that kind of security from any other platform. If Microsoft’s business practices were sometimes a little questionable, even if Windows crashed sometimes or kept on running inexplicably slower the longer you had it on your computer, well, you could live with that. Alan Boyd, an executive at Microsoft for a number of years:

Does Bill have a vision? No. Has he done it the right way? Yes. He’s done it by being conservative. I mean, Bill used to say to me that his job is to say no. That’s his job.

Which is why I can understand [that] he’s real sensitive about that. Is Bill innovative? Yes. Does he appear innovative? No. Bill personally is a lot more innovative than Microsoft ever could be, simply because his way of doing business is to do it very steadfastly and very conservatively. So that’s where there’s an internal clash in Bill: between his ability to innovate and his need to innovate. The need to innovate isn’t there because Microsoft is doing well. And innovation… you get a lot of arrows in your back. He lets things get out in the market and be tried first before he moves into them. And that’s valid. It’s like IBM.

Of course, the ethical problem with this approach to doing business was that it left no space for the little guys who actually had done the hard work of innovating the technologies which Microsoft then proceeded to co-opt. “Seeing what stuff is out there and taking it” — to use Gates’s own words against him — is a very good way indeed to make yourself hated.

During the 1990s, Windows was widely seen by the tech intelligentsia as the archetypal Microsoft product, an unimaginative, clunky amalgam of other people’s ideas. In his seminal (and frequently hilarious) 1999 essay “In the Beginning… Was the Command Line,” Neal Stephenson described operating systems in terms of vehicles. Windows 3 was a moped in this telling, “a Rube Goldberg contraption that, when bolted onto a three-speed bicycle [MS-DOS], enabled it to keep up, just barely, with Apple-cars. The users had to wear goggles and were always picking bugs out of their teeth while Apple owners sped along in hermetically sealed comfort, sneering out the windows. But the Micro-mopeds were cheap, and easy to fix compared with the Apple-cars, and their market share waxed.”

And yet if we wished to identify one Microsoft product that truly was visionary, we could do worse than boring old ramshackle Windows. Bill Gates first put his people to work on it, we should remember, before the original IBM PC and the first version of MS-DOS had even been released — so strongly did he believe even then, just as much as that more heralded visionary Steve Jobs, that the GUI was the future of computing. By the time Windows finally reached the market four years later, it had had occasion to borrow much from the Apple Macintosh, the platform with which it was doomed always to be unfavorably compared. But Windows 1 also included vital features of modern computing that the Mac did not, such as multitasking and virtual memory. No, it didn’t take a genius to realize that these must eventually make their way to personal computers; Microsoft had fine examples of them to look at from the more mature ecosystems of institutional computing, and thus could be said, once again, to have implemented and popularized but not innovated them.

Still, we should save some credit for the popularizers. Apple, building upon the work done at Xerox, perfected the concept of the GUI to such an extent in LisaOS and MacOS that one could say that all of the improvements made to it since have been mere details. But, entrenched in a business model that demanded high profit margins and proprietary hardware, they were doomed to produce luxury products rather than ubiquitous ones. This was the logical flaw at the heart of the much-discussed “1984” television advertisement and much of the rhetoric that continued to surround the Macintosh in the years that followed. If you want to change the world through better computing, you have to give the people a computer they can afford. Thanks to Apple’s unwillingness or inability to do that, it was Microsoft that brought the GUI to the world in their stead — in however imperfect a form.

The rewards for doing so were almost beyond belief. Microsoft’s revenues climbed by roughly 50 percent every year in the few years after the introduction of Windows 3.0, as the company stormed past Boeing to become the biggest corporation in the Pacific Northwest. Someone who had invested $1000 in Microsoft in 1986 would have seen her investment grow to $30,000 by 1991. By the same point, over 2000 employees or former employees had become millionaires. In 1992, Bill Gates was anointed by Forbes magazine the richest person in the world, a distinction he would enjoy for the next 25 years by most reckonings. The man who had been so excited when his company grew to be bigger than Lotus in 1987 now owned a company that was larger than the next five biggest software publishers combined. And as for Lotus alone? Well, Microsoft was now over four times their size. And the Decade of Microsoft had only just begun.

In 2000, the company’s high-water point, an astonishing 97 percent of all consumer computing devices would have some sort of Microsoft software installed on them. In the vast majority of cases, of course, said software would include Microsoft Windows. There would be all sorts of grounds for concern about this kind of dominance even had it not been enjoyed by a company with such a reputation for playing rough as Microsoft. (Or would a company that didn’t play rough ever have gotten to be so dominant in the first place?) In future articles, we’ll be forced to spend a lot more time dealing with Microsoft’s various scandals and controversies, along with reactions to them that took the form of legal challenges from the American government and the European Union and the rise of an alternative ideology of software called the open-source movement.

But, as we come to the end of this particular series of articles on the early days of Windows, we really should give Bill Gates some credit as well. Had he not kept doggedly on with Windows in the face of a business-computing culture that for years wanted nothing to do with it, his company could very easily have gone the way of VisiCorp, Lotus, WordPerfect, Borland, and, one might even say, IBM and Apple for a while: a star of one era of computing that was unable to adapt to the changing times. Instead, by never wavering in his belief that the GUI was computing’s future, Gates conquered the world. That he did so while still relying on the rickety foundation of MS-DOS is, yes, kind of appalling for anyone who values clean, beautiful computer engineering. Yet it also says much about his programmers’ creativity and skill, belying any notion of Microsoft as a place bereft of such qualities. Whatever else you can say about the sometimes shaky edifices that were Windows 3 and its next few generations of successors, the fact that they worked at all was something of a minor miracle.

Most of all, we should remember the huge role that Windows played in bringing computing home once again — and, this time, permanently. The third generation of Microsoft’s GUI arrived at the perfect time, just when the technology and the culture were ready for it. Once a laughingstock, Windows became for quite some time the only face of computing many people knew — in the office and in the home. Who could have dreamed it? Perhaps only one person: a not particularly dreamy man named Bill Gates.

(Sources: the books Hard Drive: Bill Gates and the Making of the Microsoft Empire by James Wallace and Jim Erickson, Gates: How Microsoft’s Mogul Reinvented an Industry and Made Himself the Richest Man in America by Stephen Manes and Paul Andrews, and In the Beginning… Was the Command Line by Neal Stephenson; Computer Power User of October 2004; InfoWorld of May 20 1991 and January 31 1994. Finally, I owe a lot to Nathan Lineback for the histories, insights, comparisons, and images found at his wonderful online “GUI Gallery.”)

 
 


Doing Windows, Part 8: The Outsiders

Microsoft Windows 3.0’s conquest of the personal-computer marketplace was bad news for a huge swath of the industry. On the software side, companies like Lotus and WordPerfect, only recently so influential that it was difficult to imagine a world that didn’t include them, would never regain the clout they had enjoyed during the 1980s, and would gradually fade away entirely. On the hardware side, it was true that plenty of makers of commodity PC clones were happier to work with a Microsoft who believed a rising tide lifted all their boats than against an IBM that was continually trying to put them out of business. But what of Big Blue themselves, still the biggest hardware maker of all, who were accustomed to dictating the direction of the industry rather than being dictated to by any mere maker of software? And what, for that matter, of Apple? Both Apple and IBM found themselves in the unaccustomed position of being the outsiders in this new Windows era of computing. Each would have to come to terms with Microsoft’s newfound but overwhelming power, even as each remained determined not to give up the heritage of innovation that had gotten them this far.

Having chosen to declare war on Microsoft in 1988, Apple seemed to have a very difficult road indeed in front of them — and that was before Xerox unexpectedly reentered the picture. On December 14, 1989, the latter shocked everyone by filing a $150 million lawsuit of their own, accusing Apple of ripping off the user interface employed by the Xerox Star office system before Microsoft allegedly ripped the same thing off from Apple.

The many within the computer industry who had viewed the implications of Apple’s recent actions with such concern couldn’t help but see this latest development as the perfect comeuppance for their overweening position on “look and feel” and visual copyright. These people now piled on with glee. “Apple can’t have it both ways,” said John Shoch, a former Xerox PARC researcher, to the New York Times. “They can’t complain that Microsoft [Windows has] the look and feel of the Macintosh without acknowledging the Mac has the look and feel of the Star.” In his 1987 autobiography, John Sculley himself had written the awkward words that “the Mac, like the Lisa before it, was largely a conduit for technology” developed by Xerox. How exactly was it acceptable for Apple to become a conduit for Xerox’s technology but unacceptable for Microsoft to become a conduit for Apple’s? “Apple is running around persecuting Microsoft over things they borrowed from Xerox,” said one prominent Silicon Valley attorney. The Xerox lawsuit raised uncomfortable questions of the sort which Apple would have preferred not to deal with: questions about the nature of software as an evolutionary process — ideas building upon ideas — and what would happen to that process if everyone started suing everyone else every time somebody built a better mousetrap.

Still, before we join the contemporary commentators in their jubilation at seeing Apple hoisted with their own petard, we should consider the substance of this latest case in more detail. Doing so requires that we take a closer look at what Xerox had actually created back in the day, and take particularly careful note of which of those creations was named in their lawsuit.

Broadly speaking, Xerox created two different GUI environments in the course of their years of experimentation in this area. The first and most heralded of these was known as the Smalltalk environment, pioneered by the researcher Alan Kay in 1975 on a machine called the Xerox Alto, which had been designed at PARC and was built only in limited quantities, without ever being made available for sale through traditional commercial channels. This was the machine and the environment which Steve Jobs so famously saw on his pair of visits to PARC in December of 1979 — visits which directly inspired first the Apple Lisa and later the Macintosh.

The Smalltalk environment running on a Xerox Alto, a machine built at Xerox PARC in the mid-1970s but never commercially released. Many of the basic ideas of the GUI are here, but much remains to be developed and much is implemented only in a somewhat rudimentary way. For instance, while windows can overlap one another, windows that are obscured by other windows are never redrawn. In this way the PARC researchers neatly avoided one of the most notoriously difficult aspects of implementing a windowing system. When Apple programmer Bill Atkinson was part of the delegation who made that December 1979 visit to PARC, he thought he did see windows that continued to update even when partially obscured by other windows. He then proceeded to find a way to give the Lisa and Macintosh’s windowing engine this capability. Seldom has a misunderstanding had such a fortuitous result.

Xerox’s one belated attempt to parlay PARC’s work on the GUI into a real commercial product took the form of the Xerox Star, an integrated office-productivity system costing $16,500 per workstation upon its release in 1981. Neither Kay nor most of the other key minds behind the Alto and Smalltalk were involved in its development. Yet its GUI strikes modern eyes as far more refined than that of Smalltalk. Importantly, the metaphor of the desktop, and the soon-to-be ubiquitous idea of a skeuomorphic user interface built from stand-ins for real-world office equipment — a trash can, file folders, paper documents, etc. — were apparently the brainchildren of the product-focused Star team rather than the blue-sky researchers who worked at PARC during the 1970s.

The Xerox Star office system, which was released in 1981. This system looks much more familiar to our modern eyes than the Xerox Alto’s Smalltalk, sporting such GUI staples as menus, widgets, and icons. Yet it was still lacking in many areas compared to the GUIs that would follow. Windows were neither free-dragging nor overlapping, and its menus were one-shot commands, not drop-down lists. It most resembles VisiCorp’s Visi On among the GUIs we’ve looked at closely in this series of articles. Both products serve as a telling snapshot of the state of the art in GUIs just before Apple shook everything up with the Lisa and Macintosh.

The Star, which failed dismally due to its high price and Xerox’s lack of marketing acumen, is often reduced to little more than a footnote to the story of PARC, treated as a workmanlike translation of PARC’s grand ideas and technologies into a somewhat problematic product. Yet there’s actually an important philosophical difference between Smalltalk and the Star, born of the different engineering cultures that produced them. Smalltalk emphasized programming, to the point that the environment could literally be re-programmed on the fly as you used it. This was very much in keeping with the early ethos of home computing as well, when all machines booted into BASIC and an ability to program was considered key for every young person’s future — when every high school, it seemed, was instituting classes in BASIC or Pascal. The Star, on the other hand, was engineered to ensure that the non-technical office worker never needed to see a line of code; this machine conformed to the human rather than asking the human to conform to it. One might say that Smalltalk was intended to make the joy of computing — of using the computer as the ultimate anything machine — as accessible as possible, while the Star was intended to make you forget that you were using a computer at all.

While I certainly don’t wish to dismiss or minimize the visionary work down at PARC in the 1970s, I do believe that historians of early microcomputer GUIs have tended to somewhat over-emphasize the innovations of Smalltalk and the Alto while selling the Xerox Star’s influence rather short. Steve Jobs’s early visits to PARC are given much weight in the historical record, but it’s sometimes forgotten that anything Apple wished to copy from Smalltalk had to be done from memory; they had no regular access to the PARC technology after those visits. The Star, on the other hand, did ship as a commercial product some two years before the Lisa. Notably, the Star’s philosophy of hiding the “computery” aspects of computing from the user would turn out to be much more in line with the one that guided the Lisa and Macintosh than was Smalltalk’s approach of exposing its innards for all to see and modify. The Star was a closed black box, capable of running only the software provided for it by Xerox. Similarly, the Lisa couldn’t be programmed at all except by buying a second Lisa and chaining the two machines together, and even the Macintosh never had the reputation of being a hacker’s plaything in the way of the earlier, more hobbyist-oriented Apple II. The Lisa and Macintosh thus joined the Star in embracing a clear divide between coding professionals, who wrote the software, and end users, who bought it and used it to get stuff done. One could thus say that they resemble the Star much more than Smalltalk not only visually but philosophically.

Counter-intuitive though it is to the legend of the Macintosh being a direct descendant of the work Steve Jobs saw at PARC, Xerox sued Apple over the interface elements they had allegedly stolen from the Star rather than Smalltalk. In evaluating the merits of their claim today, I’m somewhat hamstrung by the fact that no working emulators of the original Star exist,[1] forcing me to rely on screenshots, manuals, and contemporary articles about the system. Nevertheless, those sources are enough to identify an influence of the Star upon the Macintosh that’s every bit as clear-cut as that of the Macintosh upon Microsoft Windows. It strains the bounds of credibility to believe that the Mac team coincidentally developed a skeuomorphic interface using many of the very same metaphors — including the central metaphor of the desktop — without taking the example of the Star to heart. To this template they added much innovation, including such modern GUI staples as free-dragging and overlapping windows, drop-down menus, and draggable icons, along with staple mouse gestures like the hold-and-drag and the double-click. Nonetheless, the foundations of the Mac can be seen in the Star much more obviously than they can in Smalltalk. Crudely put, Apple copied the Star while adding a whole lot of original ideas to the mix, and then Microsoft copied Apple, adding somewhat fewer ideas of their own. The people rejoicing over the Xerox lawsuit, in other words, had this aspect of the story basically correct, even if they did have a tendency to confuse Smalltalk and the Star and misunderstand which of them Xerox was actually suing over.

MacOS started with the skeuomorphic desktop model of the Xerox Star and added to it such fundamental modern GUI concepts as pull-down menus, hold-and-drag, the double-click, and free-dragging, overlapping windows that update themselves even when partially occluded by others.

Of course, the Xerox lawsuit against Apple was legally suspect for all the same reasons as the Apple lawsuit against Microsoft. If anything, there were even more reasons to question the good faith of Xerox’s lawsuit than Apple’s. The source of Xerox’s sudden litigiousness was none other than Bill Lowe, the former IBM executive whose disastrous PS/2 brainchild had already made his attitude toward intellectual property all too clear. Lowe had made a soft landing at Xerox after leaving IBM, and was now telling the press about the “aggressive stand on copyright and patent issues” his new company would be taking from now on. It certainly sounded like he intended to weaponize the long string of innovations credited to Xerox PARC and the Star — using these ideas not to develop products, but to sue others who dared to do so. Lowe’s hoped-for endgame was weirdly similar to his misbegotten hopes for the PS/2’s Micro Channel Architecture: Xerox would eventually license the right to make GUIs and other products to companies like Apple and Microsoft, profiting off their innovations of the past without having to do much of anything in the here and now. This understandably struck many of the would-be licensees as a less than ideal outcome. That, at least, was something on which Apple, Microsoft, and just about everyone else in the computer industry could agree.

Apple’s legal team was left in one heck of an awkward fix. They would seemingly have to argue against Xerox’s broad interpretation of visual copyright while arguing for that same broad interpretation in their own lawsuit against Microsoft — and all in the same court in front of the same judge. Any victory against Xerox could lead to their own words being used against them to precipitate a loss against Microsoft, and vice versa.

It was therefore extremely fortunate for Apple that Judge Vaughn R. Walker struck down Xerox’s lawsuit almost before it had gotten started. At the time of their court filing, Xerox was already outside the statute of limitations for a copyright-infringement claim of the type that Apple had filed against Microsoft. They had thus been forced to make a claim of “unfair competition” instead — a claim which carried with it a much higher evidentiary standard. On March 24, 1990, Judge Walker tossed the Xerox lawsuit, saying it didn’t meet this standard and making the unhelpful observation to Xerox that it would have made a lot more sense as a copyright claim. Apple had dodged a bullet, and Bill Lowe would have to find some other way to make money for his new company.

With the Xerox sideshow thus dispensed with, Apple’s lawyers could turn their attention back to the main event, their case against Microsoft. The same Judge Walker who had decided in their favor against Xerox had taken over from Judge William Schwarzer in the other case as well. No longer needing to worry about protecting their flank from Xerox, Apple’s lawyers pushed for what they called “total concept” or “gestalt” look and feel as the metric for deciding whether Windows infringed upon MacOS. But on March 6, 1991, Judge Walker agreed with Microsoft’s contention that the case should be decided on a “function by function” basis instead. Microsoft began assembling reels of video demonstrating what they claimed to be pre-Macintosh examples of each one of the ten interface elements that were at issue in the case.

So, even as Windows 3.0 was conquering the world outside the courtroom, both sides remained entrenched in their positions inside it, and the case, already three years old, ground on and on through motion after counter-motion. “We’re going to trial,” insisted Edward B. Stead, Apple’s general counsel, but it wasn’t at all clear when that trial would take place. Part of the problem was the sheer pace of external events. As Windows 3.0 became the fastest-selling piece of commercial software the world had ever seen, the scale and scope of Apple’s grievances just kept growing to match. From the beginning, a key component of Microsoft’s strategy had been to gum up the works in court while Windows 3.0 became a fait accompli, the new standard in personal computing, too big for any court to dare attack. That strategy seemed to be working beautifully. Meanwhile Apple’s motions grew increasingly far-fetched, beginning to take on a distinct taint of desperation.

In May of 1991, for example, Apple’s lawyers surprised everyone with a new charge. Still looking for a way to expand the case beyond those aspects of Windows 2 and 3 which hadn’t existed in Windows 1, they now claimed that the 1985 agreement which had been so constantly troublesome to them in that respect was invalid. Microsoft had allegedly defrauded Apple by saying they wouldn’t make future versions of Windows any more similar to the Macintosh than the first was, and then going against their word. This new charge was a hopeful exercise at best, especially given that the agreement Apple claimed Microsoft had broken had been, if it ever existed, strictly a verbal one; absolutely no language to this effect was to be found in the text of the 1985 agreement. Microsoft’s lawyers, once they picked their jaws up off the floor, were left fairly spluttering with indignation. Attorney David T. McDonald labeled the argument “desperate” and “preposterous”: “We’re on the five-yard line, the goal is in sight, and Apple now shows up and says, ‘How about lacrosse instead of football?'” Thankfully, Judge Walker found Apple’s argument to be as ludicrous as McDonald did, thus sparing us all any more sports metaphors.

On April 14, 1992 — now more than four years on from Apple’s original court filing, in a computing climate transformed almost beyond recognition by the rise of Windows — Judge Walker ruled against Apple’s remaining contentions in devastating fashion. Much of the 1985 agreement was indeed invalid, he said, but not for the reason Apple had claimed. What Microsoft had licensed in that agreement were largely “generic ideas” that should never be susceptible to copyright protection in the first place. Apple was entitled to protect very specific visual elements of their displays, such as the actual icons they used, but they weren’t entitled to protect the notion of a screen with icons in the abstract, nor even that of icons representing specific real-world objects, such as a disk, a folder, or a trash can. Microsoft or anyone else could, in other words, make a GUI with a trash-can icon if they wished; they just couldn’t transplant Apple’s specific rendering of a trash can into their own work. Applying the notion of visual copyright any more broadly than this “would afford too much protection and yield too little competition,” said the judge. Apple’s slippery notion of look and feel, it appeared, was dead as a basis for copyright. After all the years of struggle and at least $10 million in attorney fees on both sides, Judge Walker ruled that Apple’s case was too weak to even present before a jury. “Through five years, there were many points where the case got continuously refined and focused and narrowed,” said a Microsoft spokesman. “Eventually, there was nothing left.”

Still, one can’t accuse Apple of giving up without a fight. They dragged the case out for almost three more years after this seemingly definitive defeat. When the Ninth Circuit Court of Appeals upheld Judge Walker’s judgment in 1994, Apple tried to take the case all the way to the Supreme Court. That august body announced that they would not hear it on February 21, 1995, thus finally putting an end to the whole tortuous odyssey.

The same press which had been so consumed by the case circa 1988 barely noticed its later developments. The narrative of Microsoft’s utter dominance and Apple’s weakness had become so prevalent by the early 1990s that it had become difficult to imagine any outcome other than a Microsoft victory. Yet the case’s anticlimactic ending obscured how dangerous it had once been, not only for Microsoft but for the software industry as a whole. Whatever one thinks in general of the products and business practices of the opposing sides, a victory for Apple would have been a terrible result for the personal-computer industry. The court got this one right in striking all of Apple’s claims down so thoroughly — something that can’t always be said about collisions between technology and the law. Bill Gates could walk away knowing the long struggle had struck an important blow for an ongoing culture of innovation in the software industry. Indeed, like the victory of his hero Henry Ford over a group of automotive patent trolls eighty years before, his victory would benefit his whole industry along with his company — which isn’t to say, of course, that he would have fought the war purely for the sake of altruism.

John Sculley, for his part, was gone from Apple well before the misguided lawsuit he had fostered came to its final conclusion. He was ousted by his board of directors in 1993, after it became clear that Apple would post a loss of close to $200 million for the year. Yet his departure brought no relief to the problems of dwindling market share, dwindling focus, and, most worrisome of all, a dwindling sense of identity. Apple languished, embittered about the ideas Microsoft had “stolen” from them, while Windows conquered the world. One could certainly argue that they deserved a better fate on the basis of a Macintosh GUI that still felt far slicker and more intuitive than Microsoft’s, but the reality was that their own poor decisions, just as much as Microsoft’s ruthlessness, had led them to this sorry place. The mid-1990s saw them mired in the greatest crisis of confidence of their history, licensing the precious Macintosh technology to clone makers and seriously considering breaking themselves up into two companies to appease their angriest shareholder contingents. For several years to come, there would be a real question of whether any part of the company would survive to see the new millennium. Gone were the Jobsian dreams of changing the world through better computing; Apple was reduced to living on Microsoft’s scraps. Microsoft had won in the marketplace as thoroughly as they had in court.

But the full story of Apple’s 1990s travails is one to take up at another time. Now, we should turn to IBM, to see how they coped after the MS-DOS-based Windows, rather than the OS/2-based Presentation Manager, made the world safe for the GUI.

Throughout 1990, that year of wall-to-wall hype over Windows 3.0, Microsoft persisted in dampening expectations for OS/2 in a way that struck IBM as deliberate. The agreement that MS-DOS and Windows were for low-end computers, OS/2 and the Presentation Manager for high-end ones, seemed to have been forgotten by Microsoft as soon as Bill Gates and Steve Ballmer left the Fall 1989 Comdex at which it had been announced. Gates now said that it could take OS/2 another three or four years to inherit the throne from MS-DOS, and by that time it would probably be running Windows rather than Presentation Manager anyway. Ballmer said that OS/2 was really meant to compete with high-end client/server operating systems like Unix, not with desktop operating systems like MS-DOS. They both said that “there will be a DOS 5, 6, and 7, and a Windows 4 and 5.” Meanwhile IBM was predictably incensed by Windows 3.0’s use of protected mode and the associated shattering of the 640 K barrier; that sort of thing was supposed to have been the purview of the more advanced OS/2.

Back in late 1988, Microsoft had hired a system-software architect from DEC named David Cutler to oversee the development of OS/2 2.0. No shrinking violet, he promptly threw out virtually all of the existing OS/2 code, which he pronounced a bloated mess, and started over from scratch on an operating system that would fulfill Microsoft’s original vision for OS/2, being targeted at machines with an 80386 or better processor. The scope and ambition of this project, along with the fact that Microsoft wished to keep it entirely in-house, had turned into yet one more source of tension between the two companies; it could be years still before Cutler’s OS/2 2.0 was ready. There remained little semblance of any coordinated strategy between the two companies, in public or in private.

And yet, in September of 1990, IBM and Microsoft announced a new roadmap for OS/2’s future. The two companies together would finish up one more version of the first-generation OS/2 — OS/2 1.3, which was scheduled to ship the following month — and that would be the end of that lineage. Then IBM would develop an OS/2 2.0 alone — a project they hoped to have done in a year or so — while Cutler’s team at Microsoft continued with the complete rewrite that was now to be marketed as OS/2 3.0.

The announcement, whose substance amounted to a tacit acknowledgement that the two companies simply couldn’t work together anymore on the same project, caused heated commentary in the press. It seemed a convoluted way to evolve an operating system at best, and it was happening at the same time that Microsoft seemed to be charging ahead — and with massive commercial success at that — on MS-DOS and Windows as the long-term face of personal computing in the 1990s. InfoWorld wrote of a “deepening rift” between Microsoft and IBM, characterizing the latest agreement as IBM “seizing control of OS/2’s future.” “Although in effect IBM and Microsoft will say they won’t divorce ‘for the sake of the children,'” said an inside source to the magazine, “in fact they are already separated, and seeking new relationships.” Microsoft pushed back against the “divorce” meme only in the most tepid fashion. “You may not understand our marriage,” said Steve Ballmer, “but we’re not getting divorced.” (One might note that when a couple have to start telling friends that they aren’t getting a divorce, it usually isn’t a good sign about the state of their relationship…)

Charles Petzold, writing in PC Magazine, summed up the situation created by all the mixed messaging: “The key words in operating systems are confusion, uncertainty, anxiety, and doubt. Unfortunately, the two guiding lights of this industry — IBM and Microsoft — are part of the problem rather than part of the solution.” If anything, this view of IBM as an ongoing “guiding light” was rather charitable.  OS/2 was drowning in the Windows hype. “The success of Windows 3.0 has already caused OS/2 acceptance to go from dismal to cataclysmic,” wrote InfoWorld. “Analysts have now pushed back their estimates of when OS/2 will gain broad popularity to late this decade, with some predicting that the so-called next-generation operating system is all but dead.”

The final divorce of Microsoft from IBM came soon after to give the lie to all of the denials. In July of 1991, Microsoft announced that the erstwhile OS/2 3.0 was to become its own operating system, separate from both OS/2 and MS-DOS, called Windows NT. With this news, which barely made an impression in the press — it took up less than one quarter of page 87 of that week’s InfoWorld — a decade of cooperation came to an end. From now on, Microsoft and IBM would exist strictly as competitors in a marketplace where Microsoft enjoyed all the advantages. In the final divorce settlement, IBM gave up all rights to the upcoming Windows NT and agreed to pay a small royalty on all future sales of OS/2 (whatever those might amount to), while Microsoft paid a lump sum of around $30 million to be free and clear of their last obligations to the computing giant that had made them what they now were. They greeted this watershed moment with no sentimentality whatever. In a memo that leaked to the press, Bill Gates instead rejoiced that Microsoft was finally free of IBM’s “poor code, poor design, and other overhead.”

Even as the unlikely partnership’s decade of dominance was passing away, Microsoft’s decade of sole dominion was just beginning. The IBM PC and its clones had become the Wintel standard, and would require no further input from Big Blue, thank you very much. IBM’s share of the standard’s sales was already down to 17 percent, and would just keep on falling from there. “Microsoft is now driving the industry, not IBM,” wrote the newsletter Software Publishing by way of stating the obvious.

Which isn’t to say that IBM was going away. While Microsoft was celebrating their emancipation, IBM continued plodding forward with OS/2 2.0, which, like the aborted version 3.0 that was now to be known as Windows NT, ran only on an 80386 or better. They made a big deal of the work-in-progress at the Fall 1991 Comdex without managing to change the narrative around it one bit. The total bill for OS/2 was approaching an astonishing $1 billion, and they had very little to show for it. One Wall Street analyst pronounced OS/2 “the greatest disaster in IBM’s history. The reverberations will be felt throughout the decade.”

At the end of that year, IBM had to report — incredibly, for the very first time in their history — an annual loss. And it was no trivial loss either. The deficit was $2.8 billion, on revenues that had fallen 6.1 percent from the year before. The following year would be even worse, to the tune of a $5 billion loss. No company in the history of the world had ever lost this much money this quickly; by the last quarter of 1993, IBM would be losing $45 million every day. Microcomputers were continuing to replace the big mainframes and minicomputers that had once been the heart of IBM’s business. Now, though, fewer and fewer of those replacement machines were IBM personal computers; whole segments of their business were simply evaporating. The vague distrust IBM had evinced toward Microsoft for most of the 1980s now seemed amply justified, as all of their worst nightmares came true. IBM seemed old, bloated, and, worst of all, irrelevant next to the fresh-faced young Microsoft.

OS/2 2.0 started reaching consumers in May of 1992. It was a surprisingly impressive piece of work; perhaps the relationship with Microsoft had been as frustrating for IBM’s programmers as it had been for their counterparts. Certainly OS/2 2.0 was a far more sophisticated environment than Windows 3.0. Being designed to run only on 32-bit microprocessors like the 80386 and 80486, it utilized them to their maximum potential, which was much more than one could say for Windows, while also being much more stable than Microsoft’s notoriously crash-prone environment. In addition to native OS/2 software, it could run multiple MS-DOS applications at the same time with complete compatibility, and, in a new wrinkle added to the mix by IBM, could now run many Windows applications as well. IBM called it “a better DOS than DOS and a better Windows than Windows,” a claim which carried a considerable degree of truth. They pointedly cut its suggested list price of $140 to just $50 for Windows users looking to “upgrade.”

A Quick Tour of OS/2 2.0


Shipping on more than twenty 3.5-inch diskettes, OS/2 2.0 was by far the most elaborate operating system yet made for its family of personal computers. When we boot it up for the first time, we’re given a lengthy interactive tutorial of a sort that was seldom seen in software of 1992 vintage.

The notion of a “Presentation Manager” GUI that’s separate from the core OS/2 operating system has been dropped; OS/2 is now simply OS/2, with a GUI as the standard, built-in interface. From the opening tutorial to the look of its desktop, the whole package reminds one of nothing so much as the much later Windows 95. We have a full-fledged, functioning desktop workspace here, with icons representing folders and disks, and a “shredder” to replace the usual trash can.

After shipping earlier versions of OS/2 with no extra tools or applets whatsoever, IBM got wise this time around and included plenty of stuff to play with, like this neat little music editor.

Some aspects of the interface are a little strange. Dragging with the mouse is accomplished using the right button rather than the left — a fine example of OS/2’s superficial similarity and granular dissimilarity to Windows, which so many users who had to move back and forth between the environments found so frustrating.

Of course, MS-DOS is still around if you need it. Unlike in OS/2 1.x, here you can have as many MS-DOS windows and applications open as you like.

But, despite its many merits, OS/2 2.0 was a lost cause from the start, at least if one’s standard for success was Windows. Windows 3.1 rolled out of Microsoft at almost the same instant, and no amount of comparisons in techie magazines pointing out the alternative operating system’s superiority could have any impact on a mass market that was now thoroughly conditioned to accept Windows as the standard. Giant IBM’s operating system had become, as the New York Times put it, “an unlikely underdog.”

In truth, the contest was so lopsided by this point as to be laughable. Microsoft, who had long-established relationships with the erstwhile clone makers — now known as makers of hardware conforming to the Wintel standard — understood early, as IBM did only much too late, that the best and perhaps only way to get your system software widely accepted was to sell it pre-installed on the computers that ran it. Thus, by the time OS/2 2.0 shipped, Windows already came pre-installed on nine out of ten personal computers on the market, thanks to a smart and well-funded “original equipment manufacturer” sales team that was overseen personally by Steve Ballmer. And thus, simply by buying a new computer, one automatically became a Windows user. Running OS/2, on the other hand, required that the purchaser of one of these machines decide to go out and buy an alternative to the perfectly good Microsoft software already on her hard drive, and then go through all the trouble of installing and configuring it. Very few people had the requisite combination of motivation and technical skill for an exercise like that.

As a final indignity, IBM themselves had to bow to customer demand and offer MS-DOS and Windows as an optional alternative to OS/2 on their own machines. People wanted the system software that they used at the office, that their friends had, that could run all of the products on the shelves of their local computer store with 100-percent fidelity (with the exception of that oddball Mac stuff off in the corner, of course). Only the gearheads were going to buy OS/2 because it was a 32-bit instead of a 16-bit operating system or because it offered preemptive instead of cooperative multitasking, and they were a tiny slice of an exploding mass market in personal computing.

That said, OS/2 did have a better fate than many another alternative operating system during this period of Windows, Windows everywhere. It stayed around for years even in the face of that juggernaut, going through two more major revisions and many minor ones, the very last coming as late as December of 2001. It remained always a well-respected operating system that just couldn’t break through Microsoft’s choke hold on mainstream computing, having to content itself with certain niches — powering automatic teller machines was a big one for a long time — where its stability and robustness served it well.

So, IBM, and Apple as well, had indeed become the outsiders of personal computing. They would retain that dubious status for the balance of the decade of the 1990s, offering alternatives to the monoculture of Windows computing that appealed only to the tech-obsessed, the idealistic, or the just plain contrarian. Even as much of what I’ve related in this article was taking place, they were being forced into one another’s arms for the sake of sheer survival. But the story of that second unlikely IBM partnership — an awkward marriage of two corporate cultures even more dissimilar than those of Microsoft and IBM — must, like so much else, be told at another time. All that’s left to tell in this series is the story of how Windows, with the last of its great rivals bested, finished the job of conquering the world.

(Sources: the books The Making of Microsoft: How Bill Gates and His Team Created the World’s Most Successful Software Company by Daniel Ichbiah and Susan L. Knepper, Hard Drive: Bill Gates and the Making of the Microsoft Empire by James Wallace and Jim Erickson, Gates: How Microsoft’s Mogul Reinvented an Industry and Made Himself the Richest Man in America by Stephen Manes and Paul Andrews, Computer Wars: The Fall of IBM and the Future of Global Technology by Charles H. Ferguson and Charles R. Morris, and Apple Confidential 2.0: The Definitive History of the World’s Most Colorful Company by Owen W. Linzmayer; PC Week of September 24 1990 and January 15 1991; InfoWorld of September 17 1990, May 29 1991, July 29 1991, October 28 1991, and September 6 1993; New York Times of December 29 1989, March 24 1990, March 7 1991, May 24 1991, January 18 1992, August 8 1992, January 20 1993, April 19 1993, and June 2 1993; Seattle Times of June 2 1993. Finally, I owe a lot to Nathan Lineback for the histories, insights, comparisons, and images found at his wonderful online “GUI Gallery.”)

Footnotes

1 This has changed since this article was written; see Ian Crossfield’s comment below.
 
 


Doing Windows, Part 7: Third Time’s the Charm

Microsoft entered the last year of the 1980s looking toward a new decade that seemed equally rife with opportunity and danger. On the one hand, profits were up, and Bill Gates and any number of his colleagues could retire as very rich men indeed even if it all ended tomorrow — not that that outcome looked likely. The company was coming to be seen as the standard setter of the personal-computer industry, more important even than an IBM that had been gravely weakened by the PS/2 debacle and the underwhelming reception of OS/2. Microsoft Windows, now once again viewed by Gates as the keystone of his company’s future after the disappointment that had been OS/2, stood to benefit greatly from Microsoft’s new clout. Windows 2 had gained some real traction, and the upcoming Windows 3 was being talked about with mounting expectation by an MS-DOS marketplace that finally seemed to be technologically and psychologically ready for a GUI environment.

The more worrisome aspects of the future, on the other hand, all swirled around the other two most important companies in American business computing. Through most of the decade now about to pass away, Microsoft had managed to maintain cordial if not always warm relationships with both IBM and Apple — until, that is, the latter declared war by filing a copyright-infringement lawsuit against Windows in 1988. The stakes of that lawsuit were far greater than any mere monetary settlement; they were rather the very right of Windows to continue to exist. It wasn’t at all clear what Microsoft could or would do next if they lost the case and with it Windows. Meanwhile their relationship with IBM was becoming almost equally strained. Disagreements about the technical design of OS/2, along with disputes over the best way to market it, had caused Microsoft to assume the posture of little more than subcontractors working on IBM’s operating system of the future at the same time that they pushed hard on their own Windows. OS/2 and Windows, those two grand bids for the future of mainstream business computing, seemingly had to come into conflict with one another at some point. What happened then? IBM’s reputation had unquestionably been tarnished by recent events, but at the end of the day they were still IBM, the legendary Big Blue, the most important and influential company in the history of computing to date. Was Microsoft ready to take on both Apple and IBM as full-fledged enemies?

So, the people working on Windows 3 had plenty of potential distractions to contend with as they tried to devise a GUI environment good enough to leap into the mainstream. “Just buckle down and make it as good as possible,” said Bill Gates, “and let our lawyers and business strategists deal with the distractions.” By all indications, the Windows people managed to do just that; there’s little indication that all of the external chaos had much effect on their work.

That said, when they did raise their heads from their keyboards, they could take note of encouraging signs that Microsoft might be able to navigate through their troubles with Apple and IBM. As I described in my previous article, on March 18, 1989, Judge William Schwarzer ruled that the 1985 agreement between the two companies applied only to those aspects of Windows 2 — and by inference of an eventual Windows 3 — which had also been a part of Windows 1. Thus the 1985 agreement wouldn’t be Microsoft’s ticket to a quick victory; it appeared that they would rather have to invalidate the very premise of “visual copyright” as applied by Apple in this case in order to win. On July 21, however, Microsoft got some more positive news when Judge Schwarzer made his ruling on exactly which features of Windows 2 weren’t covered by the old agreement. He threw out no less than 250 of Apple’s 260 instances of claimed infringement, vastly simplifying the case — and vastly reducing the amount of damages which Apple could plausibly ask for. The case remained a potential existential threat to Windows, but disposing of what Microsoft’s lawyers trumpeted was the “vast bulk” of it at one stroke did give some reason to take heart. Now, what remained of the case seemed destined to grind away quietly in the background for a long, long time to come — a Sword of Damocles perhaps, but one which Bill Gates at any rate was determined not to let affect the rest of his company’s strategy. If he could make Windows a hit — a fundamental piece of the world’s computing infrastructure — while the case was still grinding on, it would be very difficult indeed for any judge to order the nuclear remedy of banning Microsoft from continuing to sell it.

Microsoft’s strategy with regard to IBM was developing along a similarly classic Gatesian line. Inveterate bets-hedger that he was, Gates wasn’t willing to cut ties completely with IBM, just in case OS/2 and possibly even PS/2 turned around and some of Big Blue’s old clout returned. Instead he was careful to maintain at least a semblance of good relations, standing ready to jump off the Windows bandwagon and back onto OS/2, if it should prove necessary. He was helped immensely in this by the unlamented departure from IBM of Bill Lowe, architect of the disastrous PS/2 strategy, an executive with whom Gates by this point was barely on speaking terms. Replacing Lowe as head of IBM’s PC division was one Jim Cannavino, a much more tech-savvy executive who trusted Gates not in the slightest but got along with him much better one-on-one, and was willing to continue to work with him for the time being.

At the Fall 1989 Comdex, the two companies made a big show of coming together — the latest of the series of distancings and rapprochements that had always marked their relationship. They trotted out a new messaging strategy that had Windows as the partnership’s “low-end” GUI, OS/2’s Presentation Manager as the high-end GUI of the future, suitable at present only for machines with an 80386 processor and at least 4 MB of memory. (The former specification was ironic in light of all the bickering IBM and Microsoft had done in earlier years on the issue of supporting the 80286 in OS/2.) The press release stated that “Windows is not intended to be used as a server, nor will future releases contain advanced OS/2 features [some of which were only planned for future OS/2 releases at this point] such as distributed processing, the 32-bit flat memory model, threads, or long filenames.” The pair even went so far as to recommend that developers working on really big, ambitious applications for the longer-term future focus their efforts on OS/2. (“No advice,” InfoWorld magazine would wryly note eighteen months later, “could have been worse.”)

But Microsoft’s embrace of the plan seemed tentative at best even in the moment. It certainly didn’t help IBM’s comfort level when Steve Ballmer in an unguarded moment blurted out that “face it: in the future, everyone’s gonna run Windows.” Likewise, Bill Gates showed little personal enthusiasm for this idea of Windows as the cut-price, temporary alternative to OS/2 and the Presentation Manager. As usual, he was just trying to keep everyone placated while he worked out for himself what the future held. And as time went on, he seemed to find more and more to like about the idea of a Windows-centric future. Several months after the Comdex show, he got slightly drunk at a big industry dinner, and confessed to rather more than he might have intended. “Six months after Windows 3 ships,” he said, “it will have a greater market share than Presentation Manager will ever have — OS/2 applications won’t have a chance.” He further admitted to deliberately dragging his feet on updates to OS/2 in order to ensure that Windows 3.0 got all the attention in 1990.

He needn’t have worried too much on that front: press coverage of the next Windows was reaching a fever pitch, and evincing little of the skepticism that had accompanied Windows 1 and 2. Throughout 1989, rumors and even the occasional technical document leaked out of Microsoft — and not, one senses, by accident. Carefully timed grist for the rumor mill though it may have been, the news was certainly intriguing on its own merits. The press wrote that Tandy Trower, the manager who had done the oft-thankless job of bringing Windows 1 and 2 to fruition, had transferred off the team, but the team itself was growing like never before, and now being personally supervised once again by the ever-flexible Steve Ballmer, who had left Microsoft’s OS/2 camp and rejoined the Windows zealots. Ballmer had hired visual designer Susan Kare, known throughout the industry as the author of MacOS’s clean and crisp look, to apply some of the same magic to their own GUI.

But for those who understood Windows’s longstanding technical limitations, another piece of news was the most intriguing and exciting of all. Already before the end of 1989, Microsoft started talking openly about their plans to accomplish two things which had heretofore been considered mutually exclusive: to continue running Windows on top of hoary old MS-DOS, and yet to shatter the 640 K barrier once and for all.

It had all begun back in June of 1988, when Microsoft programmer David Weise, one of the former Dynamical Systems Research people who had proved such a boon to Windows, bumped into an old friend named Murray Sargent, a physics professor at the University of Arizona who happened to do occasional contract programming for Microsoft on the side. At the moment, he told Weise, he was working on adding new memory-management functionality to Microsoft’s CodeView debugger, using an emerging piece of software technology known as a “DOS extender,” which had been pioneered over the last couple of years by an innovative company in the system-software space called Quarterdeck Office Systems.

As I’ve had occasion to describe in multiple articles by now, the most crippling single disadvantage of MS-DOS had always been its inability to directly access more than 640 K of memory, due to its origins on the Intel 8088 microprocessor, which had a sharply limited address space. Intel’s newer 80286 and 80386 processors could run MS-DOS only in their 8088-compatible “real” mode, where they too were limited to 640 K, rather than being able to use their “protected” mode to address up to 16 MB (in the case of the 80286) or 4 GB (in the case of the 80386). Because they ran on top of MS-DOS, most versions of Windows as well had been forced to run in real mode — the sole exception was Windows/386, which made extensive use of the 80386’s virtual mode to ease some but not all of the constant headache that was memory management in the world of MS-DOS. Indeed, when he asked himself what were the three biggest aggravations which working with Windows entailed, Weise had no doubt about the answer: “memory, memory, and memory.” But now, he thought that Sargent might just have found a solution through his tinkering with a DOS extender.

It turned out that the very primitiveness of MS-DOS could be something of a saving grace. Its functions mostly dealt only with the basics of file management. Almost all of the other functions that we think of as rightfully belonging to an operating system were handled either by an extended operating environment like Windows, or not handled at all — i.e., left to the programmer to deal with by banging directly on the hardware. Quarterdeck Office Systems had been the first to realize that it should be possible to run the computer most of the time in protected mode, if only some way could be found to down-shift into real mode when there was a need for MS-DOS, as when a file on disk needed to be read from or written to. This, then, was what a DOS extender facilitated. Its code was stashed into an unused corner of memory and hooked into the function calls that were used for communicating with MS-DOS. That done, the processor could be switched into protected mode for running whatever software you liked with unfettered access to memory beyond 640 K. When said software tried to talk to MS-DOS after that, the DOS extender trapped that function call and performed some trickery: it copied any data that MS-DOS might need to access in order to carry out the task into the memory space below 640 K, switched the CPU into real mode, and then reissued the function call to let MS-DOS act on that data. Once MS-DOS had done its work, the DOS extender switched the CPU back into protected mode, copied any necessary data back to where the protected-mode software expected it to be, and returned control to it.
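
For readers who like to see the mechanics spelled out, the sketch below shows in C-flavored pseudocode how a DOS extender of this sort might service a single file read on behalf of a protected-mode program. Every name in it is invented for illustration; real extenders were written in assembly language, hooked the INT 21h interrupt directly, and dealt with far more edge cases than this.

    #include <string.h>

    #define LOW_BUFFER_SIZE 4096u   /* staging buffer somewhere below 640 K */

    /* Hypothetical helpers standing in for the hairy, hardware-specific parts. */
    extern void switch_to_real_mode(void);
    extern void switch_to_protected_mode(void);
    extern unsigned dos_read(int handle, void *buffer, unsigned len); /* re-issues the MS-DOS call */
    extern void *low_buffer;         /* the staging area below 640 K */

    /* Read 'len' bytes into 'dest', which may live far above the 640 K line. */
    unsigned extended_read(int handle, char *dest, unsigned len)
    {
        unsigned done = 0;
        while (done < len) {
            unsigned chunk = len - done;
            if (chunk > LOW_BUFFER_SIZE)
                chunk = LOW_BUFFER_SIZE;

            switch_to_real_mode();                 /* MS-DOS can only run here */
            unsigned got = dos_read(handle, low_buffer, chunk);
            switch_to_protected_mode();            /* back above 640 K */

            memcpy(dest + done, low_buffer, got);  /* deliver the data to the caller */
            done += got;
            if (got < chunk)
                break;                             /* end of file reached */
        }
        return done;
    }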

One could argue that a DOS extender was just as much a hack as any of the other workarounds for the 640 K barrier; it certainly wasn’t as efficient as a more straightforward contiguous memory model, like that enjoyed by OS/2, would have been. It was particularly inefficient on the 80286, which, unlike the 80386, could not switch from protected mode back to real mode without a costly processor reset. But even so, it was clearly a better hack than any of the ones that had been devised to date. It finally let Intel’s more advanced processors run, most of the time anyway, as their designers had intended them to run. And from the programmer’s perspective it was, with only occasional exceptions, transparent; you just asked for the memory you needed and went about your business from there, and let the DOS extender worry about all the details going on behind the scenes. The technology was still in an imperfect state that summer of 1988, but if it could be perfected it would be a dream come true for programmers, the next best thing to a world completely free of MS-DOS and its limitations. And it might just be a dream come true for Windows as well, thought David Weise.

Quarterdeck may have pioneered the idea of the DOS extender, but their implementation was lacking in the view of Weise and his sometime colleague Murray Sargent. With Sargent’s help in the early stages, and over three feverish months of nights and weekends, Weise implemented his own DOS extender and then his own protected-mode version of Windows that used it. “We’re not gonna ask anybody, and then if we’re done and they shoot it down, they shoot it down,” he remembers thinking.

There are all these little gotchas throughout it, but basically you just work through the gotchas one at a time. You just close your eyes, and you just charge ahead. You don’t think of the problems, or you’re not gonna do it. It’s fun. Piece by piece, it’s coming. Okay, here come the keyboard drivers, here come the display drivers, here comes GDI — oh, look, here’s USER!

By the fall of 1988, Weise had his secret project far enough along to present to Bill Gates, Steve Ballmer, and the rest of the Windows team. In addition to plenty of still-unresolved technical issues, the question of whether a protected-mode Windows would step too much on the toes of OS/2, an operating system whose allure over MS-DOS was partially that it could run in protected mode all the time, haunted the discussion. But Gates, exasperated beyond endurance by IBM, wasn’t much inclined to defer to them anymore. Never a boss known for back-patting, he told Weise simply, “Okay, let’s do it.”

Microsoft would eventually release their approach to the DOS extender as an open protocol called the “DOS Protected Mode Interface,” or DPMI. It would change the way MS-DOS-based computers were programmed forever, not only inside Windows but outside of it as well. The revolutionary non-Windows game Doom, for example, would have been impossible without the standalone DOS extender DOS/4GW, which implemented the DPMI specification and was hugely popular among game programmers in particular for years. So, DPMI became by far the most important single innovation of Windows 3.0. Ironically given that it debuted as part of an operating environment designed to hide the ongoing existence of MS-DOS from the user, it single-handedly made MS-DOS a going concern right through the decade of the 1990s, giving the Quick and Dirty Operating System That Refused to Die a lifespan absolutely no one would ever have dreamed for it back in 1981.

But the magic of DPMI wouldn’t initially apply to all Windows systems. Windows 3.0 could still run, theoretically at least, on even a lowly 8088-based PC compatible from the early 1980s — a computer whose processor didn’t have a protected mode to be switched into. For all that he had begged and cajoled IBM to make OS/2 an 80386-exclusive operating system, Bill Gates wasn’t willing to abandon less powerful machines for Microsoft’s latest operating environment. In addition to fueling conspiracy theories that Gates had engineered OS/2 to fail from the beginning, this data point did fit the brief-lived official line that OS/2 was for high-end machines, Windows for low-end machines. Yet the real reasons behind it were more subtle. Partially due to a global chip shortage that made all sorts of computers more expensive in the late 1980s and briefly threatened to derail the inexorable march of Moore’s Law, users hadn’t flocked to the 80386-based machines quite as quickly as Microsoft had anticipated when the OS/2 debate was raging back in 1986. The fattest part of the market’s bell curve circa 1989 was still the 80286 generation of computers, with a smattering of pace-setting 80386s and laggardly 8088s on either side of them. Microsoft thus ironically judged the 80386 to be exactly the bridge too far in 1989 that IBM had claimed it to be in 1986. Even before Windows 3.0 came out, the chip shortage was easing and Moore’s Law was getting back on track; Intel started producing their fourth-generation microprocessor, the 80486, in the last weeks of 1989.[1] For the time being, though, Windows was expected to support the full range of MS-DOS-based computers, reaching all the way back to the beginning.

And yet, as we’ve seen, DPMI was just too brilliant an innovation to give up in the name of maintaining compatibility with antiquated 8088-based machines. MS-DOS had for years been forcing owners of higher-end hardware to use their machines in a neutered fashion, and Microsoft wasn’t willing to continue that dubious tradition in the dawning era of Windows. So, they decided to ship three different versions of Windows in every box. When started on an 8088-class machine, or on any machine without memory beyond 640 K, Windows ran in “real mode.” When started on an 80286 with more than 640 K of memory, or on an 80386 with more than 640 K but less than 2 MB of memory, it ran in “standard mode.” And when started on an 80386 with at least 2 MB of memory, it ran in its ultimate incarnation: “386 enhanced mode.”
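
Microsoft never published the loader’s internal logic, but based on the description above, the decision Windows made at start-up amounted to something like the following sketch, with all names invented for clarity.

    /* Illustrative only: which mode Windows 3.0 picks at start-up,
       per the rules described in the text. */
    enum cpu  { CPU_8088, CPU_80286, CPU_80386 };
    enum mode { MODE_REAL, MODE_STANDARD, MODE_386_ENHANCED };

    enum mode choose_windows_mode(enum cpu cpu, unsigned long total_memory_kb)
    {
        if (cpu == CPU_80386 && total_memory_kb >= 2048)
            return MODE_386_ENHANCED;   /* an 80386 with at least 2 MB */
        if (cpu >= CPU_80286 && total_memory_kb > 640)
            return MODE_STANDARD;       /* an 80286 or 80386 with memory beyond 640 K */
        return MODE_REAL;               /* an 8088, or nothing beyond 640 K */
    }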

In both of the latter modes, Windows 3.0 could offer what had long been the Holy Grail for any MS-DOS-hosted GUI environment: an application could simply request as much memory as it needed, without having to worry about what physical addresses that memory included or whether it added up to more than 640 K.[2] No earlier GUI environment, from Microsoft or anyone else, had met this standard.
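
From the programmer’s side, “just ask for the memory” looked something like the sketch below, written in the style of the 16-bit Windows API. GlobalAlloc and GlobalLock are the genuine calls; everything around them is simplified for illustration, with error handling pared to the bone.

    #include <windows.h>

    /* Request a full megabyte -- more than any single MS-DOS allocation
       could ever supply -- without caring where it physically lives. */
    void FAR *grab_a_megabyte(void)
    {
        HANDLE block = GlobalAlloc(GMEM_MOVEABLE | GMEM_ZEROINIT, 1024L * 1024L);
        if (block == NULL)
            return NULL;              /* the system really is out of memory */
        return GlobalLock(block);     /* lock the block and get a usable far pointer */
    }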

In 386 enhanced mode, Windows 3.0 also incorporated elements of the earlier Windows/386 for running vanilla MS-DOS applications. Such applications ran in the 80386’s virtual mode; thus Windows 3.0 used all three operating modes of the 80386 in tandem, maximizing the potential of a chip whose specifications owed a lot to Microsoft’s own suggestions. When running on an 8088 or 80286, Windows still served as little more than a task launcher for MS-DOS applications, but on an 80386 with enough memory they multitasked as seamlessly as native Windows applications — or perhaps more so: vanilla MS-DOS applications running inside their virtual machines actually multitasked preemptively, while normal Windows applications only multitasked cooperatively. So, on an 80386 in particular, Windows 3.0 had a lot going for it even for someone who couldn’t care less about Susan Kare’s slick new icons. It was much, much more than just a pretty face.[3]
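
The cooperative part is easy to see in code. Every native Windows application of the era was built around a message loop along the lines of the sketch below (window setup omitted for brevity); it hands control back to Windows, and thus to every other application, only when it comes back around to GetMessage. A program that wandered off into a long computation starved the whole environment, whereas an MS-DOS program sealed inside its own virtual machine could be interrupted at will.

    #include <windows.h>

    int PASCAL WinMain(HANDLE inst, HANDLE prev, LPSTR cmdline, int show)
    {
        MSG msg;

        /* ... window-class registration and CreateWindow() omitted ... */

        while (GetMessage(&msg, NULL, 0, 0)) {  /* yields to other applications here */
            TranslateMessage(&msg);             /* keyboard translation */
            DispatchMessage(&msg);              /* hand the message to the window procedure */
        }
        return msg.wParam;
    }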

Which isn’t to say that the improved aesthetics weren’t hugely significant in their own right. While the full technical import of Windows 3.0’s new underpinnings would take some time to fully grasp, it was immediately obvious that the new environment was slicker and far more usable than what had come before. Macintosh zealots would continue to scoff, at times with good reason, at the clunkier aspects of the environment, but it unquestionably came far closer than anything yet to that vision which Bill Gates had expressed in an unguarded moment back in 1984 — the vision of “the Mac on Intel hardware.”

A Quick Tour of Windows 3.0


Windows 3.0 really is a dramatic leap compared to what came before. The text-based “MS-DOS Executive” — just the name sounds clunky, doesn’t it? — has been replaced by the “Program Manager.” Applications are now installed, and are represented as icons; we’re no longer forced to scroll through long lists of filenames just to start our word processor. Indeed, the whole environment is much more attractive in general, having finally received some attention from real visual designers like Susan Kare of Macintosh fame.

One area that’s gotten a lot of attention from the standpoint of both usability and aesthetics is the Control Panel. Much of this part of Windows 3.0 is lifted directly from the OS/2 Presentation Manager — with just enough differences introduced to frustrate.

In one of the countless new customization and personalization options, we can now use images as our desktop background.

The help system is extensive and comprehensive. Years before a web browser became a standard Windows component, Windows Help was a full-fledged hypertext reader, a maze of twisty little links complete with embedded images and sounds.

The icons on the desktop still represent only running applications that have been minimized. We would have to wait until Windows 95 for the desktop-as-general-purpose-workspace concept to reach fruition.

For all the aesthetic improvements, the most important leap made by Windows 3.0 is its shattering of the 640 K barrier. When run on an 80286 or 80386, it uses Microsoft’s new DPMI technology to run in those processors’ protected mode, leaving the user and (for the most part) the programmer with just one heap of memory to think about; no more “conventional” and “extended” and “expanded” memory to scratch your head over. It’s difficult to exaggerate what a miracle this felt like after all the years of struggle. Finally, the amount of memory you had in your machine was the amount of memory you had to run Windows and its applications — end of story.

In contrast to all of the improvements in the operating environment itself, the set of standard applets that shipped with Windows 3.0 is almost unchanged since the days of Windows 1.

The Program Manager, like the MS-DOS Executive before it, in a sense is Windows; we close it to exit the operating environment itself and return to the MS-DOS prompt.

A consensus emerged well ahead of Windows 3.0’s release that this was the GUI which corporate America could finally embrace — that the GUI’s time had come, and that this GUI was the one destined to become the standard. One overheated pundit declared that “this is probably the most anticipated product in the history of the world.” Microsoft did everything possible to stoke those fires of anticipation. Rather than aligning the launch with a Comdex show, they opted to put on a glitzy, Apple-style standalone media event to mark the beginning of the Windows 3.0 era. In fact, one might even say that they rather outdid the famously showy Apple.

The big rollout took place on May 22, 1990, at New York’s Center City at Columbus Circle. A hundred third-party publishers showed up with Windows 3.0 applications, along with fifty hardware makers who were planning to ship it pre-installed on every machine they sold. Closed-circuit television feeds beamed the proceedings to big-screen theaters in half a dozen other cities in the United States, along with London, Paris, Madrid, Singapore, Stockholm, Milan, and Mexico City. Everywhere standing-room-only crowds clustered, made up of those privileged influence-wielders who could score a ticket to what Bill Gates himself described as “the most extravagant, extensive, and elaborate software introduction ever,” to the tune of a $3 million price tag. Microsoft had tried to go splashy from time to time before, but never had they indulged in anything like this. It was, Gates’s mother reckoned, the “happiest day of Bill’s life” to date.

The industry press was carried away on Microsoft’s river of hype, manifesting on their behalf a messianic complex that was as worthy of Apple as had been the big unveiling. “If you think technology has changed the world in the last few years, hold on to your seats,” wrote one pundit. Gates made the rounds of talk shows like Good Morning America, as Microsoft spent another $10 million on an initial advertising campaign and carpet-bombed the industry with 400,000 demonstration copies of Windows 3.0, sent to anyone who was or might conceivably become a technology taste-maker.

The combination of wall-to-wall hype and a truly compelling product was a winning one; this time, Microsoft wouldn’t have to fudge their Windows sales numbers. When they announced that they had sold 1 million boxed copies of Windows 3.0 in the first four months, each for $80, no one doubted them. “There is nothing that even compares or comes close to the success of this product,” said industry analyst Tim Bajarin. He went on to note in a more ominous vein that “Microsoft is on a path to continue dominating everything in desktop computing when it comes to software. No one can touch or even slow them down.”

Windows 3.0 inevitably won “Best Business Program” for 1990 from the Software Publishers Association, an organization that ran on the hype generated by its members. More persuasive were the endorsements from other sources. For example, after years of skepticism toward previous versions of Windows, the hardcore tech-heads at Byte magazine were effusive in their praise of this latest one, titling their first review thereof simply “Three’s the One.” “On both technical and strategic grounds,” they wrote, “Windows 3.0 succeeds brilliantly. After years of twists and turns, Microsoft has finally nailed this product. Try it. You’ll like it.” PC Computing put an even more grandiose spin on things, straining toward a scriptural note (on the Second Day, Microsoft created the MS-DOS GUI, and it was Good):

When the annals of the PC are written, May 22, 1990, will mark the first day of the second era of IBM-compatible PCs. On that day, Microsoft released Windows 3.0. And on that day, the IBM-compatible PC, a machine hobbled by an outmoded, character-based operating system and 1970s-style programs, was transformed into a computer that could soar in a decade of multitasking graphical operating environments and powerful new applications. Windows 3.0 gets right what its predecessors — Visi On, GEM, earlier versions of Windows, and OS/2 Presentation Manager — got wrong. It delivers adequate performance, it accommodates existing DOS applications, and it makes you believe that it belongs on a PC.

Windows 3.0 sold and sold and sold, like no piece of software had ever sold before, transforming in a matter of months the picture that sprang to most people’s minds when they thought of personal computing from a green screen with a blinking command prompt to a mouse pointer, icons, and windows — thus accomplishing the mainstream computing revolution that Apple had never been able to manage, despite the revolutionary rhetoric of their old “1984” advertisement. Windows became so ubiquitous so quickly that the difficult questions that had swirled around Microsoft prior to its launch — the question of Apple’s legal case and the question of Microsoft’s ongoing relationship with IBM and OS/2 — faded into the background noise, just as Bill Gates had hoped they would.

Sure, Apple zealots and others could continue to scoff, could note that Windows crashed all too easily, that too many things were still implemented clunkily in comparison to MacOS, that the inefficiencies that came with building on such a narrow foundation as MS-DOS meant that it craved far better hardware than it ought to in order to run decently. None of it mattered. All that mattered was that Windows 3.0 was a usable, good-enough GUI that ran on cheap commodity hardware, was free of the worst drawbacks that came with MS-DOS, and had plenty of software available for it — enough native software, in fact, to make its compatibility with vanilla MS-DOS software, once considered so vital for any GUI hoping to make a go of it, almost moot. The bet Bill Gates had first put down on something called the Interface Manager before the IBM PC even officially existed, which he had doubled down on again and again only to come up dry every time, had finally paid off on a scale even he hadn’t ever imagined. Microsoft would sell 2.75 million copies of Windows 3.0 by the end of 1990 — and then the surge really began. Sales hit 15 million copies by the end of 1991. And yet if anything such numbers underestimate its ubiquity at the end of its first eighteen months on the market. Thanks to widespread piracy which Microsoft did virtually nothing to prevent, estimates were that at least two copies of Windows had been installed for every one boxed copy that had been purchased. Windows was the new standard for mainstream personal computing in the United States and, increasingly, all over the world.

At the Comdex show in November of 1990, Bill Gates stepped onstage to announce that Windows 3.0 had already gotten so big that no general-purpose trade show could contain it. Instead Microsoft would inaugurate the Windows World Exposition Conference the following May. Then, after that and the other big announcements were all done, he lapsed into a bit of uncharacteristic (albeit carefully scripted) reminiscing. He remembered coming onstage at the Fall Comdex of seven years before to present the nascent first version of Windows, infamously promising that it would be available by April of 1984. Everyone at that show had talked about how relentlessly Microsoft laid on the Windows hype, how they had never seen anything quite like it. Yet, looking back, it all seemed so unbearably quaint now. Gates had spent all of an hour preparing his big speech to announce Windows 1.0, strolled onto a bare stage carrying his own slide projector, and had his father change the slides for him while he talked. Today, the presentation he had just completed had consisted of four big screens, each featuring people with whom he had “talked” in a carefully choreographed one-man show — all in keeping with the buzzword du jour of 1990, “multimedia.”

The times, they were indeed a-changing. An industry, a man, a piece of software, and, most of all, a company had grown up. Gates left no doubt that it was only the beginning, that he intended for Microsoft to reign supreme over the glorious digital future.

All these new technologies await us. Unless they are implemented in standard ways on standard platforms, any technical benefits will be wasted by the further splintering of the information base. Microsoft’s role is to move the current generation of PC software users, which is quickly approaching 60 million, to an exciting new era of improved desktop applications and truly portable PCs in a way that keeps users’ current applications, and their huge investment in them, intact. Microsoft is in a unique position to unify all those efforts.

Once upon a time, words like these could have been used only by IBM. But now Microsoft’s software, not IBM’s hardware, was to define the new “standard platform” — the new safe choice in personal computing. The PC clone was dead. Long live the Wintel standard.

(Sources: the books The Making of Microsoft: How Bill Gates and His Team Created the World’s Most Successful Software Company by Daniel Ichbiah and Susan L. Knepper, Hard Drive: Bill Gates and the Making of the Microsoft Empire by James Wallace and Jim Erickson, Gates: How Microsoft’s Mogul Reinvented an Industry and Made Himself the Richest Man in America by Stephen Manes and Paul Andrews, Computer Wars: The Fall of IBM and the Future of Global Technology by Charles H. Ferguson and Charles R. Morris, and Apple Confidential 2.0: The Definitive History of the World’s Most Colorful Company by Owen W. Linzmayer; New York Times of March 18 1989 and July 22 1989; PC Magazine of February 12 1991; Byte of June 1990 and January 1992; InfoWorld of May 20 1991; Computer Gaming World of June 1991. Finally, I owe a lot to Nathan Lineback for the histories, insights, comparisons, and images found at his wonderful online “GUI Gallery.”)

Footnotes
1 The 80486 was far more efficient than its predecessor, boasting roughly twice the throughput when clocked at the same speed. But, unlike the 80286 and 80386, it didn’t sport any new operating modes or fundamentally new capabilities, and thus didn’t demand any special consideration from software like Windows/386 and Windows 3.0 that was already utilizing the 80386 to its full potential.
2 This wasn’t quite the “32-bit flat memory model” which Microsoft had explicitly promised Windows would never include in the joint statement with IBM. That referred to an addressing mode unique to the 80386 and its successors, which allowed them to access up to 4 GB of memory in a very flexible way. Having been written to support the 80286, Windows 3.0, even in 386 enhanced mode, was still limited to 16 MB of memory, and had to use a somewhat more cumbersome form of addressing known as a segmented memory model. Still, it was close enough that it arguably went against the spirit of the statement, something that wouldn’t be lost on IBM.
3 Memory management on MS-DOS-based versions of Windows is an extremely complicated subject, one which alone has filled thick technical manuals. This article has presented by no means a complete picture, only the most cursory of overviews intended to convey the importance of Windows 3.0’s central innovation of DPMI. In addition to that innovation, though, Windows 3.0 and its successors employed plenty of other tricks, many of them making yet more clever use of the 80386’s virtual mode, Intel’s gift that kept on giving. For truly dedicated historians of a technical bent, I recommend a book such as Unauthorized Windows 95 by Andrew Schulman (which does cover memory management under earlier versions of Windows as well), Windows Internals by Matt Pietrek, and/or DOS and Windows Protected Mode by Al Williams.
 
 
