
Doing Windows, Part 8: The Outsiders

Microsoft Windows 3.0’s conquest of the personal-computer marketplace was bad news for a huge swath of the industry. On the software side, companies like Lotus and WordPerfect, only recently so influential that it was difficult to imagine a world that didn’t include them, would never regain the clout they had enjoyed during the 1980s, and would gradually fade away entirely. On the hardware side, it was true that plenty of makers of commodity PC clones were happier to work with a Microsoft who believed a rising tide lifted all their boats than against an IBM that was continually trying to put them out of business. But what of Big Blue themselves, still the biggest hardware maker of all, who were accustomed to dictating the direction of the industry rather than being dictated to by any mere maker of software? And what, for that matter, of Apple? Both Apple and IBM found themselves in the unaccustomed position of being the outsiders in this new Windows era of computing. Each would have to come to terms with Microsoft’s newfound but overwhelming power, even as each remained determined not to give up the heritage of innovation that had gotten them this far.

Having chosen to declare war on Microsoft in 1988, Apple seemed to have a very difficult road indeed in front of them — and that was before Xerox unexpectedly reentered the picture. On December 14, 1989, the latter shocked everyone by filing a $150 million lawsuit of their own, accusing Apple of ripping off the user interface employed by the Xerox Star office system before Microsoft allegedly ripped the same thing off from Apple.

The many within the computer industry who had viewed the implications of Apple’s recent actions with such concern couldn’t help but see this latest development as the perfect comeuppance for their overweening position on “look and feel” and visual copyright. These people now piled on with glee. “Apple can’t have it both ways,” said John Shoch, a former Xerox PARC researcher, to the New York Times. “They can’t complain that Microsoft [Windows has] the look and feel of the Macintosh without acknowledging the Mac has the look and feel of the Star.” In his 1987 autobiography, John Sculley himself had written the awkward words that “the Mac, like the Lisa before it, was largely a conduit for technology” developed by Xerox. How exactly was it acceptable for Apple to become a conduit for Xerox’s technology but unacceptable for Microsoft to become a conduit for Apple’s? “Apple is running around persecuting Microsoft over things they borrowed from Xerox,” said one prominent Silicon Valley attorney. The Xerox lawsuit raised uncomfortable questions of the sort which Apple would have preferred not to deal with: questions about the nature of software as an evolutionary process — ideas building upon ideas — and what would happen to that process if everyone started suing everyone else every time somebody built a better mousetrap.

Still, before we join the contemporary commentators in their jubilation at seeing Apple hoisted with their own petard, we should consider the substance of this latest case in more detail. Doing so requires that we take a closer look at what Xerox had actually created back in the day, and take particularly careful note of which of those creations was named in their lawsuit.

Broadly speaking, Xerox created two different GUI environments in the course of their years of experimentation in this area. The first and most heralded of these was known as the Smalltalk environment, pioneered by the researcher Alan Kay in 1975 on a machine called the Xerox Alto, which had been designed at PARC and was built only in limited quantities, without ever being made available for sale through traditional commercial channels. This was the machine and the environment which Steve Jobs so famously saw on his pair of visits to PARC in December of 1979 — visits which directly inspired first the Apple Lisa and later the Macintosh.

The Smalltalk environment running on a Xerox Alto, a machine built at Xerox PARC in the mid-1970s but never commercially released. Many of the basic ideas of the GUI are here, but much remains to be developed and much is implemented only in a somewhat rudimentary way. For instance, while windows can overlap one another, windows that are obscured by other windows are never redrawn. In this way the PARC researchers neatly avoided one of the most notoriously difficult aspects of implementing a windowing system. When Apple programmer Bill Atkinson was part of the delegation who made that December 1979 visit to PARC, he thought he did see windows that continued to update even when partially obscured by other windows. He then proceeded to find a way to give the Lisa and Macintosh’s windowing engine this capability. Seldom has a misunderstanding had such a fortuitous result.
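
For the technically curious, the difficulty Atkinson tackled can be sketched in a few lines of code. To redraw a partially obscured window, the engine must work out which parts of it remain visible — subtracting away every window stacked in front of it — and then clip all drawing to those pieces. What follows is a minimal toy model in Python, using plain rectangles and invented names; the real Lisa and Macintosh engines relied on QuickDraw’s far more general “region” structures, so take this as an illustration of the problem rather than of Apple’s actual solution.

```python
# A toy model of the problem described above: redrawing a window that is
# partially obscured by others. All names here are illustrative; the real
# Lisa/Macintosh engine used QuickDraw's far more general "regions"
# rather than simple rectangles.

from dataclasses import dataclass

@dataclass(frozen=True)
class Rect:
    left: int
    top: int
    right: int
    bottom: int

    def intersects(self, other: "Rect") -> bool:
        return (self.left < other.right and other.left < self.right and
                self.top < other.bottom and other.top < self.bottom)

def subtract(rect: Rect, hole: Rect) -> list:
    """Return the parts of rect not covered by hole (at most four rects)."""
    if not rect.intersects(hole):
        return [rect]
    pieces = []
    if hole.top > rect.top:        # strip above the hole
        pieces.append(Rect(rect.left, rect.top, rect.right, hole.top))
    if hole.bottom < rect.bottom:  # strip below the hole
        pieces.append(Rect(rect.left, hole.bottom, rect.right, rect.bottom))
    mid_top = max(rect.top, hole.top)
    mid_bottom = min(rect.bottom, hole.bottom)
    if hole.left > rect.left:      # strip to the left of the hole
        pieces.append(Rect(rect.left, mid_top, hole.left, mid_bottom))
    if hole.right < rect.right:    # strip to the right of the hole
        pieces.append(Rect(hole.right, mid_top, rect.right, mid_bottom))
    return pieces

def visible_region(window: Rect, windows_in_front: list) -> list:
    """The window's frame minus every window stacked in front of it.
    Drawing must be clipped to these pieces — the hard part that Smalltalk
    on the Alto simply sidestepped by never redrawing obscured windows."""
    region = [window]
    for other in windows_in_front:
        region = [piece for r in region for piece in subtract(r, other)]
    return region

# A back window half-covered by a front one: its visible region is L-shaped.
front = Rect(50, 50, 150, 150)
back = Rect(0, 0, 100, 100)
for piece in visible_region(back, [front]):
    print("redraw", piece)
```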

Xerox’s one belated attempt to parlay PARC’s work on the GUI into a real commercial product took the form of the Xerox Star, an integrated office-productivity system costing $16,500 per workstation upon its release in 1981. Neither Kay nor most of the other key minds behind the Alto and Smalltalk were involved in its development. Yet its GUI strikes modern eyes as far more refined than that of Smalltalk. Importantly, the metaphor of the desktop, and the soon-to-be ubiquitous idea of a skeuomorphic user interface built from stand-ins for real-world office equipment — a trash can, file folders, paper documents, etc. — were apparently the brainchildren of the product-focused Star team rather than the blue-sky researchers who worked at PARC during the 1970s.

The Xerox Star office system, which was released in 1981. This system looks much more familiar to our modern eyes than the Xerox Alto’s Smalltalk, sporting such GUI staples as menus, widgets, and icons. Yet it was still lacking in many areas compared to the GUIs that would follow. Windows were neither free-dragging nor overlapping, and its menus were one-shot commands, not drop-down lists. It most resembles VisiCorp’s Visi On among the GUIs we’ve looked at closely in this series of articles. Both products serve as a telling snapshot of the state of the art in GUIs just before Apple shook everything up with the Lisa and Macintosh.

The Star, which failed dismally due to its high price and Xerox’s lack of marketing acumen, is often reduced to little more than a footnote to the story of PARC, treated as a workmanlike translation of PARC’s grand ideas and technologies into a somewhat problematic product. Yet there’s actually an important philosophical difference between Smalltalk and the Star, born of the different engineering cultures that produced them. Smalltalk emphasized programming, to the point that the environment could literally be re-programmed on the fly as you used it. This was very much in keeping with the early ethos of home computing as well, when all machines booted into BASIC and an ability to program was considered key for every young person’s future — when every high school, it seemed, was instituting classes in BASIC or Pascal. The Star, on the other hand, was engineered to ensure that the non-technical office worker never needed to see a line of code; this machine conformed to the human rather than asking the human to conform to it. One might say that Smalltalk was intended to make the joy of computing — of using the computer as the ultimate anything machine — as accessible as possible, while the Star was intended to make you forget that you were using a computer at all.

While I certainly don’t wish to dismiss or minimize the visionary work done at PARC in the 1970s, I do believe that historians of early microcomputer GUIs have tended to somewhat over-emphasize the innovations of Smalltalk and the Alto while selling the Xerox Star’s influence rather short. Steve Jobs’s early visits to PARC are given much weight in the historical record, but it’s sometimes forgotten that anything Apple wished to copy from Smalltalk had to be done from memory; they had no regular access to the PARC technology after those visits. The Star, on the other hand, did ship as a commercial product some two years before the Lisa. Notably, the Star’s philosophy of hiding the “computery” aspects of computing from the user would turn out to be much more in line with the one that guided the Lisa and Macintosh than was Smalltalk’s approach of exposing its innards for all to see and modify. The Star was a closed black box, capable of running only the software provided for it by Xerox. Similarly, the Lisa couldn’t be programmed at all except by buying a second Lisa and chaining the two machines together, and even the Macintosh never had the reputation of being a hacker’s plaything in the way of the earlier, more hobbyist-oriented Apple II. The Lisa and Macintosh thus joined the Star in embracing a clear divide between coding professionals, who wrote the software, and end users, who bought it and used it to get stuff done. One could say that they resemble the Star much more than Smalltalk not only visually but philosophically.

Counter-intuitive though it is to the legend of the Macintosh being a direct descendant of the work Steve Jobs saw at PARC, Xerox sued Apple over the interface elements they had allegedly stolen from the Star rather than Smalltalk. In evaluating the merits of their claim today, I’m somewhat hamstrung by the fact that no working emulators of the original Star exist, forcing me to rely on screenshots, manuals, and contemporary articles about the system. Nevertheless, those sources are enough to identify an influence of the Star upon the Macintosh that’s every bit as clear-cut as that of the Macintosh upon Microsoft Windows. It strains the bounds of credibility to believe that the Mac team coincidentally developed a skeuomorphic interface using many of the very same metaphors — including the central metaphor of the desktop — without taking the example of the Star to heart. To this template they added much innovation, including such modern GUI staples as free-dragging and overlapping windows, drop-down menus, and draggable icons, along with staple mouse gestures like the hold-and-drag and the double-click. Nonetheless, the foundations of the Mac can be seen in the Star much more obviously than they can in Smalltalk. Crudely put, Apple copied the Star while adding a whole lot of original ideas to the mix, and then Microsoft copied Apple, adding somewhat fewer ideas of their own. The people rejoicing over the Xerox lawsuit, in other words, had this aspect of the story basically correct, even if they did have a tendency to confuse Smalltalk and the Star and misunderstand which of them Xerox was actually suing over.

MacOS started with the skeuomorphic desktop model of the Xerox Star and added to it such fundamental modern GUI concepts as pull-down menus, hold-and-drag, the double-click, and free-dragging, overlapping windows that update themselves even when partially occluded by others.

Of course, the Xerox lawsuit against Apple was legally suspect for all the same reasons as the Apple lawsuit against Microsoft. If anything, there were even more reasons to question the good faith of Xerox’s lawsuit than Apple’s. The source of Xerox’s sudden litigiousness was none other than Bill Lowe, the former IBM executive whose disastrous PS/2 brainchild had already made his attitude toward intellectual property all too clear. Lowe had made a soft landing at Xerox after leaving IBM, and was now telling the press about the “aggressive stand on copyright and patent issues” his new company would be taking from now on. It certainly sounded like he intended to weaponize the long string of innovations credited to Xerox PARC and the Star — using these ideas not to develop products, but to sue others who dared to do so. Lowe’s hoped-for endgame was weirdly similar to his misbegotten hopes for the PS/2’s Micro Channel Architecture: Xerox would eventually license the right to make GUIs and other products to companies like Apple and Microsoft, profiting off their innovations of the past without having to do much of anything in the here and now. This understandably struck many of the would-be licensees as a less than ideal outcome. That, at least, was something on which Apple, Microsoft, and just about everyone else in the computer industry could agree.

Apple’s legal team was left in one heck of an awkward fix. They would seemingly have to argue against Xerox’s broad interpretation of visual copyright while arguing for that same broad interpretation in their own lawsuit against Microsoft — and all in the same court in front of the same judge. Any victory against Xerox could lead to their own words being used against them to precipitate a loss against Microsoft, and vice versa.

It was therefore extremely fortunate for Apple that Judge Vaughn R. Walker struck down Xerox’s lawsuit almost before it had gotten started. At the time of their court filing, Xerox was already outside the statute of limitations for a copyright-infringement claim of the type that Apple had filed against Microsoft. They had thus been forced to make a claim of “unfair competition” instead — a claim which carried with it a much higher evidentiary standard. On March 24, 1990, Judge Walker tossed the Xerox lawsuit, saying it didn’t meet this standard and making the unhelpful observation to Xerox that it would have made a lot more sense as a copyright claim. Apple had dodged a bullet, and Bill Lowe would have to find some other way to make money for his new company.

With the Xerox sideshow thus dispensed with, Apple’s lawyers could turn their attention back to the main event, their case against Microsoft. The same Judge Walker who had decided in their favor against Xerox had taken over from Judge William Schwarzer in the other case as well. No longer needing to worry about protecting their flank from Xerox, Apple’s lawyers pushed for what they called “total concept” or “gestalt” look and feel as the metric for deciding whether Windows infringed upon MacOS. But on March 6, 1991, Judge Walker agreed with Microsoft’s contention that the case should be decided on a “function by function” basis instead. Microsoft began assembling reels of video demonstrating what they claimed to be pre-Macintosh examples of each one of the ten interface elements that were at issue in the case.

So, even as Windows 3.0 was conquering the world outside the courtroom, both sides remained entrenched in their positions inside it, and the case, already three years old, ground on and on through motion after counter-motion. “We’re going to trial,” insisted Edward B. Stead, Apple’s general counsel, but it wasn’t at all clear when that trial would take place. Part of the problem was the sheer pace of external events. As Windows 3.0 became the fastest-selling piece of commercial software the world had ever seen, the scale and scope of Apple’s grievances just kept growing to match. From the beginning, a key component of Microsoft’s strategy had been to gum up the works in court while Windows 3.0 became a fait accompli, the new standard in personal computing, too big for any court to dare attack. That strategy seemed to be working beautifully. Meanwhile Apple’s motions grew increasingly far-fetched, beginning to take on a distinct taint of desperation.

In May of 1991, for example, Apple’s lawyers surprised everyone with a new charge. Still looking for a way to expand the case beyond those aspects of Windows 2 and 3 which hadn’t existed in Windows 1, they now claimed that the 1985 agreement which had been so constantly troublesome to them in that respect was invalid. Microsoft had allegedly defrauded Apple by saying they wouldn’t make future versions of Windows any more similar to the Macintosh than the first was, and then going against their word. This new charge was a hopeful exercise at best, especially given that the agreement Apple claimed Microsoft had broken had been, if it ever existed, strictly a verbal one; absolutely no language to this effect was to be found in the text of the 1985 agreement. Microsoft’s lawyers, once they picked their jaws up off the floor, were left fairly spluttering with indignation. Attorney David T. McDonald labeled the argument “desperate” and “preposterous”: “We’re on the five-yard line, the goal is in sight, and Apple now shows up and says, ‘How about lacrosse instead of football?'” Thankfully, Judge Walker found Apple’s argument to be as ludicrous as McDonald did, thus sparing us all any more sports metaphors.

On April 14, 1992 — now more than four years on from Apple’s original court filing, in a computing climate transformed almost beyond recognition by the rise of Windows — Judge Walker ruled against Apple’s remaining contentions in devastating fashion. Much of the 1985 agreement was indeed invalid, he said, but not for the reason Apple had claimed. What Microsoft had licensed in that agreement were largely “generic ideas” that should never be susceptible to copyright protection in the first place. Apple was entitled to protect very specific visual elements of their displays, such as the actual icons they used, but they weren’t entitled to protect the notion of a screen with icons in the abstract, nor even that of icons representing specific real-world objects, such as a disk, a folder, or a trash can. Microsoft or anyone else could, in other words, make a GUI with a trash-can icon if they wished; they just couldn’t transplant Apple’s specific rendering of a trash can into their own work. Applying the notion of visual copyright any more broadly than this “would afford too much protection and yield too little competition,” said the judge. Apple’s slippery notion of look and feel, it appeared, was dead as a basis for copyright. After all the years of struggle and at least $10 million in attorney fees on both sides, Judge Walker ruled that Apple’s case was too weak to even present before a jury. “Through five years, there were many points where the case got continuously refined and focused and narrowed,” said a Microsoft spokesman. “Eventually, there was nothing left.”

Still, one can’t accuse Apple of giving up without a fight. They dragged the case out for almost three more years after this seemingly definitive defeat. When the Ninth Circuit Court of Appeals upheld Judge Walker’s judgment in 1994, Apple tried to take the case all the way to the Supreme Court. That august body announced that they would not hear it on February 21, 1995, thus finally putting an end to the whole tortuous odyssey.

The same press which had been so consumed by the case circa 1988 barely noticed its later developments. The narrative of Microsoft’s utter dominance and Apple’s weakness had become so prevalent by the early 1990s that it had become difficult to imagine any outcome other than a Microsoft victory. Yet the case’s anticlimactic ending obscured how dangerous it had once been, not only for Microsoft but for the software industry as a whole. Whatever one thinks in general of the products and business practices of the opposing sides, a victory for Apple would have been a terrible result for the personal-computer industry. The court got this one right in striking all of Apple’s claims down so thoroughly — something that can’t always be said about collisions between technology and the law. Bill Gates could walk away knowing the long struggle had struck an important blow for an ongoing culture of innovation in the software industry. Indeed, like the victory of his hero Henry Ford over a group of automotive patent trolls eighty years before, his victory would benefit his whole industry along with his company — which isn’t to say, of course, that he would have fought the war purely for the sake of altruism.

John Sculley, for his part, was gone from Apple well before the misguided lawsuit he had fostered came to its final conclusion. He was ousted by his board of directors in 1993, after it became clear that Apple would post a loss of close to $200 million for the year. Yet his departure brought no relief to the problems of dwindling market share, dwindling focus, and, most worrisome of all, a dwindling sense of identity. Apple languished, embittered about the ideas Microsoft had “stolen” from them, while Windows conquered the world. One could certainly argue that they deserved a better fate on the basis of a Macintosh GUI that still felt far slicker and more intuitive than Microsoft’s, but the reality was that their own poor decisions, just as much as Microsoft’s ruthlessness, had led them to this sorry place. The mid-1990s saw them mired in the greatest crisis of confidence of their history, licensing the precious Macintosh technology to clone makers and seriously considering breaking themselves up into two companies to appease their angriest shareholder contingents. For several years to come, there would be a real question of whether any part of the company would survive to see the new millennium. Gone were the Jobsian dreams of changing the world through better computing; Apple was reduced to living on Microsoft’s scraps. Microsoft had won in the marketplace as thoroughly as they had in court.

But the full story of Apple’s 1990s travails is one to take up at another time. Now, we should turn to IBM, to see how they coped after the MS-DOS-based Windows, rather than the OS/2-based Presentation Manager, made the world safe for the GUI.

Throughout 1990, that year of wall-to-wall hype over Windows 3.0, Microsoft persisted in dampening expectations for OS/2 in a way that struck IBM as deliberate. The agreement that MS-DOS and Windows were for low-end computers, OS/2 and the Presentation Manager for high-end ones, seemed to have been forgotten by Microsoft as soon as Bill Gates and Steve Ballmer left the Fall 1989 Comdex at which it had been announced. Gates now said that it could take OS/2 another three or four years to inherit the throne from MS-DOS, and by that time it would probably be running Windows rather than Presentation Manager anyway. Ballmer said that OS/2 was really meant to compete with high-end client/server operating systems like Unix, not with desktop operating systems like MS-DOS. They both said that “there will be a DOS 5, 6, and 7, and a Windows 4 and 5.” Meanwhile IBM was predictably incensed by Windows 3.0’s use of protected mode and the associated shattering of the 640 K barrier; that sort of thing was supposed to have been the purview of the more advanced OS/2.

Back in late 1988, Microsoft had hired a system-software architect from DEC named David Cutler to oversee the development of OS/2 2.0. No shrinking violet, he promptly threw out virtually all of the existing OS/2 code, which he pronounced a bloated mess, and started over from scratch on an operating system that would fulfill Microsoft’s original vision for OS/2, being targeted at machines with an 80386 or better processor. The scope and ambition of this project, along with the fact that Microsoft wished to keep it entirely in-house, had turned into yet one more source of tension between the two companies; it could be years still before Cutler’s OS/2 2.0 was ready. There remained little semblance of any coordinated strategy between the two companies, in public or in private.

And yet, in September of 1990, IBM and Microsoft announced a new roadmap for OS/2’s future. The two companies together would finish up one more version of the first-generation OS/2 — OS/2 1.3, which was scheduled to ship the following month — and that would be the end of that lineage. Then IBM would develop an OS/2 2.0 alone — a project they hoped to have done in a year or so — while Cutler’s team at Microsoft continued with the complete rewrite that was now to be marketed as OS/2 3.0.

The announcement, whose substance amounted to a tacit acknowledgement that the two companies simply couldn’t work together anymore on the same project, caused heated commentary in the press. It seemed a convoluted way to evolve an operating system at best, and it was happening at the same time that Microsoft seemed to be charging ahead — and with massive commercial success at that — on MS-DOS and Windows as the long-term face of personal computing in the 1990s. InfoWorld wrote of a “deepening rift” between Microsoft and IBM, characterizing the latest agreement as IBM “seizing control of OS/2’s future.” “Although in effect IBM and Microsoft will say they won’t divorce ‘for the sake of the children,'” said an inside source to the magazine, “in fact they are already separated, and seeking new relationships.” Microsoft pushed back against the “divorce” meme only in the most tepid fashion. “You may not understand our marriage,” said Steve Ballmer, “but we’re not getting divorced.” (One might note that when a couple have to start telling friends that they aren’t getting a divorce, it usually isn’t a good sign about the state of their relationship…)

Charles Petzold, writing in PC Magazine, summed up the situation created by all the mixed messaging: “The key words in operating systems are confusion, uncertainty, anxiety, and doubt. Unfortunately, the two guiding lights of this industry — IBM and Microsoft — are part of the problem rather than part of the solution.” If anything, this view of IBM as an ongoing “guiding light” was rather charitable. OS/2 was drowning in the Windows hype. “The success of Windows 3.0 has already caused OS/2 acceptance to go from dismal to cataclysmic,” wrote InfoWorld. “Analysts have now pushed back their estimates of when OS/2 will gain broad popularity to late this decade, with some predicting that the so-called next-generation operating system is all but dead.”

The final divorce of Microsoft from IBM came soon after to give the lie to all of the denials. In July of 1991, Microsoft announced that the erstwhile OS/2 3.0 was to become its own operating system, separate from both OS/2 and MS-DOS, called Windows NT. With this news, which barely made an impression in the press — it took up less than one quarter of page 87 of that week’s InfoWorld — a decade of cooperation came to an end. From now on, Microsoft and IBM would exist strictly as competitors in a marketplace where Microsoft enjoyed all the advantages. In the final divorce settlement, IBM gave up all rights to the upcoming Windows NT and agreed to pay a small royalty on all future sales of OS/2 (whatever those might amount to), while Microsoft paid a lump sum of around $30 million to be free and clear of their last obligations to the computing giant that had made them what they now were. They greeted this watershed moment with no sentimentality whatever. In a memo that leaked to the press, Bill Gates instead rejoiced that Microsoft was finally free of IBM’s “poor code, poor design, and other overhead.”

Even as the unlikely partnership’s decade of dominance was passing away, Microsoft’s decade of sole dominion was just beginning. The IBM PC and its clones had become the Wintel standard, and would require no further input from Big Blue, thank you very much. IBM’s share of the standard’s sales was already down to 17 percent, and would just keep on falling from there. “Microsoft is now driving the industry, not IBM,” wrote the newsletter Software Publishing by way of stating the obvious.

Which isn’t to say that IBM was going away. While Microsoft was celebrating their emancipation, IBM continued plodding forward with OS/2 2.0, which, like the aborted version 3.0 that was now to be known as Windows NT, ran only on an 80386 or better. They made a big deal of the work-in-progress at the Fall 1991 Comdex without managing to change the narrative around it one bit. The total bill for OS/2 was approaching an astonishing $1 billion, and they had very little to show for it. One Wall Street analyst pronounced OS/2 “the greatest disaster in IBM’s history. The reverberations will be felt throughout the decade.”

At the end of that year, IBM had to report — incredibly, for the very first time in their history — an annual loss. And it was no trivial loss either. The deficit was $2.8 billion, on revenues that had fallen 6.1 percent from the year before. The following year would be even worse, to the tune of a $5 billion loss. No company in the history of the world had ever lost this much money this quickly; by the last quarter of 1993, IBM would be losing $45 million every day. Microcomputers were continuing to replace the big mainframes and minicomputers that had once been the heart of IBM’s business. Now, though, fewer and fewer of those replacement machines were IBM personal computers; whole segments of their business were simply evaporating. The vague distrust IBM had evinced toward Microsoft for most of the 1980s now seemed amply justified, as all of their worst nightmares came true. IBM seemed old, bloated, and, worst of all, irrelevant next to the fresh-faced young Microsoft.

OS/2 2.0 started reaching consumers in May of 1992. It was a surprisingly impressive piece of work; perhaps the relationship with Microsoft had been as frustrating for IBM’s programmers as it had been for their counterparts at Microsoft. Certainly OS/2 2.0 was a far more sophisticated environment than Windows 3.0. Being designed to run only on 32-bit microprocessors like the 80386 and 80486, it utilized them to their maximum potential, which was much more than one could say for Windows, while also being much more stable than Microsoft’s notoriously crash-prone environment. In addition to native OS/2 software, it could run multiple MS-DOS applications at the same time with complete compatibility, and, in a new wrinkle added to the mix by IBM, could now run many Windows applications as well. IBM called it “a better DOS than DOS and a better Windows than Windows,” a claim which carried a considerable degree of truth. They pointedly cut its suggested list price from $140 to just $50 for Windows users looking to “upgrade.”

A Quick Tour of OS/2 2.0


Shipping on more than twenty 3.5-inch diskettes, OS/2 2.0 was by far the most elaborate operating system yet made for its family of personal computers. When we boot it up for the first time, we’re given a lengthy interactive tutorial of a sort that was seldom seen in software of 1992 vintage.

The notion of a “Presentation Manager” GUI that’s separate from the core OS/2 operating system has been dropped; OS/2 is now simply OS/2, with a GUI as the standard, built-in interface. From the opening tutorial to the look of its desktop, the whole package reminds one of nothing so much as the much later Windows 95. We have a full-fledged, functioning desktop workspace here, with icons representing folders and disks, and a “shredder” to replace the usual trash can.

After shipping earlier versions of OS/2 with no extra tools or applets whatsoever, IBM got wise this time around and included plenty of stuff to play with, like this neat little music editor.

Some aspects of the interface are a little strange. Dragging with the mouse is accomplished using the right button rather than the left — a fine example of OS/2’s superficial similarity and granular dissimilarity to Windows, which so many users who had to move back and forth between the environments found so frustrating.

Of course, MS-DOS is still around if you need it. Unlike in OS/2 1.x, here you can have as many MS-DOS windows and applications open as you like.

But, despite its many merits, OS/2 2.0 was a lost cause from the start, at least if one’s standard for success was Windows. Windows 3.1 rolled out of Microsoft at almost the same instant, and no amount of comparisons in techie magazines pointing out the alternative operating system’s superiority could have any impact on a mass market that was now thoroughly conditioned to accept Windows as the standard. Giant IBM’s operating system had become, as the New York Times put it, “an unlikely underdog.”

In truth, the contest was so lopsided by this point as to be laughable. Microsoft, who had long-established relationships with the erstwhile clone makers — now known as makers of hardware conforming to the Wintel standard — understood early, as IBM did only much too late, that the best and perhaps only way to get your system software widely accepted was to sell it pre-installed on the computers that ran it. Thus, by the time OS/2 2.0 shipped, Windows already came pre-installed on nine out of ten personal computers on the market, thanks to a smart and well-funded “original equipment manufacturer” sales team that was overseen personally by Steve Ballmer. And thus, simply by buying a new computer, one automatically became a Windows user. Running OS/2, on the other hand, required that the purchaser of one of these machines decide to go out and buy an alternative to the perfectly good Microsoft software already on her hard drive, and then go through all the trouble of installing and configuring it. Very few people had the requisite combination of motivation and technical skill for an exercise like that.

As a final indignity, IBM themselves had to bow to customer demand and offer MS-DOS and Windows as an optional alternative to OS/2 on their own machines. People wanted the system software that they used at the office, that their friends had, that could run all of the products on the shelves of their local computer store with 100-percent fidelity (with the exception of that oddball Mac stuff off in the corner, of course). Only the gearheads were going to buy OS/2 because it was a 32-bit instead of a 16-bit operating system or because it offered preemptive instead of cooperative multitasking, and they were a tiny slice of an exploding mass market in personal computing.
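
As an aside for those wondering what that last distinction actually means in practice: under cooperative multitasking, as in Windows 3.x, every application must voluntarily hand control back to the system, while a preemptive kernel like OS/2 2.0’s can interrupt an application whether it cooperates or not. The following toy sketch in Python — invented purely for illustration, and nothing like either operating system’s actual code — shows the cooperative model and its Achilles’ heel.

```python
# A toy contrast between cooperative and preemptive multitasking. Under the
# cooperative model, every application must voluntarily yield control; one
# that never yields freezes the whole system.

def well_behaved(name):
    for step in range(3):
        print(f"{name}: step {step}")
        yield  # politely hands control back to the scheduler

def cooperative_scheduler(tasks):
    """Round-robin: each task runs until it *chooses* to yield."""
    queue = list(tasks)
    while queue:
        task = queue.pop(0)
        try:
            next(task)          # runs until the task's next `yield`
            queue.append(task)  # not finished; give it another turn later
        except StopIteration:
            pass                # task finished and leaves the queue

cooperative_scheduler([well_behaved("Word processor"), well_behaved("Clock")])

# Had one task above looped forever without a `yield`, nothing else would
# ever have run again. A preemptive kernel instead uses timer interrupts to
# snatch the CPU back, so no single application can hang the machine — one
# reason OS/2 2.0 was so much more stable than Windows 3.0.
```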

That said, OS/2 did have a better fate than many another alternative operating system during this period of Windows, Windows everywhere. It stayed around for years even in the face of that juggernaut, going through two more major revisions and many minor ones, the very last coming as late as December of 2001. It remained always a well-respected operating system that just couldn’t break through Microsoft’s choke hold on mainstream computing, having to content itself with certain niches — powering automatic teller machines was a big one for a long time — where its stability and robustness served it well.

So, IBM, and Apple as well, had indeed become the outsiders of personal computing. They would retain that dubious status for the balance of the decade of the 1990s, offering alternatives to the monoculture of Windows computing that appealed only to the tech-obsessed, the idealistic, or the just plain contrarian. Even as much of what I’ve related in this article was taking place, they were being forced into one another’s arms for the sake of sheer survival. But the story of that second unlikely IBM partnership — an awkward marriage of two corporate cultures even more dissimilar than those of Microsoft and IBM — must, like so much else, be told at another time. All that’s left to tell in this series is the story of how Windows, with the last of its great rivals bested, finished the job of conquering the world.

(Sources: the books The Making of Microsoft: How Bill Gates and His Team Created the World’s Most Successful Software Company by Daniel Ichbiah and Susan L. Knepper, Hard Drive: Bill Gates and the Making of the Microsoft Empire by James Wallace and Jim Erickson, Gates: How Microsoft’s Mogul Reinvented an Industry and Made Himself the Richest Man in America by Stephen Manes and Paul Andrews, Computer Wars: The Fall of IBM and the Future of Global Technology by Charles H. Ferguson and Charles R. Morris, and Apple Confidential 2.0: The Definitive History of the World’s Most Colorful Company by Owen W. Linzmayer; PC Week of September 24 1990 and January 15 1991; InfoWorld of September 17 1990, May 29 1991, July 29 1991, October 28 1991, and September 6 1993; New York Times of December 29 1989, March 24 1990, March 7 1991, May 24 1991, January 18 1992, August 8 1992, January 20 1993, April 19 1993, and June 2 1993; Seattle Times of June 2 1993. Finally, I owe a lot to Nathan Lineback for the histories, insights, comparisons, and images found at his wonderful online “GUI Gallery.”)


Doing Windows, Part 6: Look and Feel

From left, Dan Fylstra of VisiCorp, Bill Gates of Microsoft, and Gary Kildall of Digital Research in 1984. As usual, Gates looks rumpled, high-strung, and vaguely tortured, while Kildall looks polished, relaxed, and self-assured. (Which of these men would you rather chat with at a party?) Pictures like these perhaps reveal one of the key reasons that Gates consistently won against more naturally charismatic characters like Kildall: he personally needed to win in ways that they did not.

In the interest of clarity and concision, I’ve restricted this series of articles about non-Apple GUI environments to the efforts of Microsoft and IBM, making an exception to that rule only for VisiCorp’s Visi On, the very first product of its type. But, as I have managed to acknowledge in passing, those GUIs hardly constituted the sum total of the computer industry’s efforts in this direction. Among the more impressive and prominent of what we might label the alternative MS-DOS GUIs was a product from none other than Gary Kildall and Digital Research — yes, the very folks whom Bill Gates once so slyly fleeced out of a contract to provide the operating system for the first IBM PC.

To his immense credit, Kildall didn’t let the loss of that once-in-a-lifetime opportunity get him down for very long. Digital Research accepted the new MS-DOS-dominated order with remarkable alacrity, and set about making the best of things by publishing system software, such as the multitasking Concurrent DOS, which tried to do the delicate dance of improving on MS-DOS while maintaining compatibility. In the same spirit, they made a GUI of their own, called the “Graphics Environment Manager” — GEM.

After futzing around with various approaches, the GEM team found their muse on the day in early 1984 when team member Darrell Miller took Apple’s new Macintosh home to show his wife: “Her eyes got big and round, and she hates computers. If the Macintosh gets that kind of reaction out of her, this is powerful.” Miller is blunt about what happened next: “We copied it exactly.” When they brought their MacOS clone to the Fall 1984 Comdex, Steve Jobs expressed nothing but approbation. “You did a great job!” he said. No one from Apple seemed the slightest bit concerned at this stage about the resemblance to the Macintosh, and GEM hit store shelves the following spring as by far the most elegant and usable MS-DOS GUI yet.

A few months later, though, Apple started singing a very different tune. In the summer of 1985, they sent a legal threat to Digital Research which included a detailed list of all the ways that they believed GEM infringed on their MacOS copyrights. Having neither the stomach nor the cash for an extended court battle and fearing a preliminary injunction which might force them to withdraw GEM from the market entirely, Digital Research caved without a fight. They signed an agreement to replace the current version of GEM with a new one by November 15, doing away with such distinctive and allegedly copyright-protected Macintosh attributes as “the trash-can icon, the disk icons, and the close-window button in the upper-left-hand corner of a window.” They also agreed to an “undisclosed monetary settlement,” and to “provide programming services to Apple at a reduced rate.”

Any chance GEM might have had to break through the crowded field of MS-DOS GUIs was undone by these events. Most of the third-party developers Digital Research so desperately needed were unnerved by the episode, abandoning any plans they might have hatched to make native GEM applications. And so GEM, despite being vastly more usable than the contemporaneous Microsoft Windows even in its somewhat bowdlerized post-agreement form, would go on to become just another also-ran in the GUI race.

For the industry at large, the GEM smackdown was most significant as a sign of changing power structures inside Apple — changes which carried with them a new determination that others shouldn’t be allowed to rip off all of the Mac’s innovations. The former Pepsi marketing manager John Sculley was in the ascendant at Apple by the summer of 1985, Steve Jobs already being eased out the door. The former had been taught by the Cola Wars that a product’s secret formula was everything, and had to be protected at all costs. And the Macintosh’s secret formula was its beautiful interface; without it, it was just an overpriced chunk of workmanlike hardware — a bad joke when set next to a better, cheaper Motorola 68000-based computer like the new Commodore Amiga. The complaint against Digital Research was a warning shot to an industry that Sculley believed had gotten far too casual about throwing around phrases like “Mac-like.” “Apple is going after everybody,” warned one fearful software executive to the press. The relationship between Microsoft and Apple in particular was about to get a whole lot more complicated.

Said relationship had been a generally good one during the years when Steve Jobs was calling many of Apple’s shots. Jobs and Bill Gates, dramatically divergent in countless ways but equally ambitious, shared a certain esprit de corps born of having been a part of the microcomputer industry since before there was a microcomputer industry. Jobs genuinely appreciated his counterpart’s refusal to frame business computing as a zero-sum game between the Macintosh and the MS-DOS standard, even when provoked by agitprop like Apple’s famous “1984” Super Bowl advertisement. Instead Gates, contrary to his established popular reputation as the ultimate zero-sum business warrior, supported Apple’s efforts as well as IBM’s with real enthusiasm: signing up to produce Macintosh software two full years before the finished Mac was released, standing at Jobs’s side when Apple made major announcements, coming to trade shows conspicuously sporting a Macintosh tee-shirt. All indications are that the two truly liked and respected one another. For all that Apple and Microsoft through much of these two men’s long careers would be cast as the yin and yang of personal computing — two religions engaged in the most righteous of holy wars — they would have surprisingly few negative words to say about one another personally down through the years.

But when Steve Jobs decided or was forced to submit his resignation letter to Apple on September 17, 1985, trouble for Microsoft was bound to follow. John Sculley, the man now charged with cleaning up the mess Jobs had supposedly made of the Macintosh, enjoyed nothing like the same camaraderie with Bill Gates. He and his management team were openly suspicious of Microsoft, whose Windows was already circulating widely in beta form. Gates and others at Microsoft had gone on the record repeatedly saying they intended for Windows and the Macintosh to be sufficiently similar that they and other software developers would be able to port applications in short order between the two. Few prospects could have sounded less appealing to Sculley. Apple, whose products then as now enjoyed the highest profit margins in the industry thanks to their allure as computing’s hippest luxury brand, could see their whole business model undone by the appearance of cheap commodity clones that had been transformed by the addition of Windows into Mac-alikes. Of course, one look at Windows as it actually existed in 1985 could have disabused Sculley of the notion that it was likely to win any converts among people who had so much as glanced at MacOS. Still, he wasn’t happy about the idea of the Macintosh losing its status, now or in the future, as the only GUI environment that could serve as a true, comprehensive solution to all of one’s computing needs. So, within weeks of Jobs’s departure, feeling his oats after having so thoroughly cowed Digital Research, he threatened to sue Microsoft as well for copying the “look and feel” of the Macintosh in Windows.

He really ought to have thought things through a bit more before doing so. Threatening Bill Gates was always a dangerous game to play, and it was sheer folly when Gates had the upper hand, as he largely did now. Apple was at their lowest ebb of the 1980s when they tried to tell Microsoft that Windows would have to be cancelled or radically redesigned to excise any and all similarities to the Macintosh. Sales of the Mac had fallen to some 20,000 units per month, about one-fifth of Apple’s pre-launch projections for this point. The stream of early adopters with sufficient disposable income to afford the pricey gadget had ebbed away, and other potential buyers had started asking what you could really do with a Macintosh that justified paying two or three times as much for it as for an equivalent MS-DOS-based computer. Aldus PageMaker, the first desktop-publishing package for the Mac, had been released the previous summer, and would eventually go down in history as the product that, when combined with the Apple LaserWriter printer, saved the platform by providing a usage scenario that ugly old MS-DOS clearly, obviously couldn’t duplicate. But the desktop-publishing revolution would take time to show its full import. In the meantime, Apple was hard-pressed, and needed Microsoft — one of the few major publishers of business software actively supporting the Mac — far too badly to go around issuing threats to them.

Gates responded to Sculley’s threat with several of his own. If Sculley followed through with a lawsuit, Gates said, he’d stop all work at Microsoft on applications for the Macintosh and withdraw those that were already on store shelves, treating business computing henceforward as exactly the zero-sum game which he had never believed it to be in the past. This was a particularly potent threat in light of Microsoft’s new Excel spreadsheet, which had just been released to rave reviews and already looked likely to join PageMaker as the leading light among the second generation of Mac applications. In light of the machine’s marketplace travails, Apple was in no position to toss aside a sales driver like that one, the first piece of everyday Mac business software that was not just as good as but in many ways quite clearly better than equivalent offerings for MS-DOS. Yet Gates wouldn’t stop there. He would also, he said, refuse to renew Apple’s license to use Microsoft’s BASIC on their Apple II line of computers. This was a serious threat indeed, given that the aged Apple II line was the only thing keeping Apple as a whole afloat as the newer, sexier Macintosh foundered. Duly chastised, Apple backed down quickly — whereupon Gates, smelling blood in the water, pressed his advantage relentlessly, determined to see what else he could get out of finishing the fight Sculley had so foolishly begun.

One ongoing source of frustration between the two companies, dating back well into the days of Steve Jobs’s power and glory, was the version of BASIC for the Mac which Microsoft had made available for purchase on the day the machine first shipped. In the eyes of Apple and most of their customers, the mere fact of its existence on a platform that wasn’t replete with accessible programming environments was its only virtue. In practice, it didn’t work all that differently from Microsoft’s Apple II BASIC, offering almost no access to the very things which made the Macintosh the Macintosh, like menus, windows, and dialogs. A second release a year later had improved matters somewhat, but nowhere near enough in most people’s view. So, Apple had started work on a BASIC of their own, to be called simply MacBASIC, to supersede Microsoft’s. Microsoft BASIC for the Macintosh was hardly a major pillar of Microsoft’s finances, but Bill Gates was nevertheless bothered inordinately by the prospect of it being cast aside. “Essentially, since Microsoft started their company with BASIC, they felt proprietary towards it,” speculates Andy Hertzfeld, one of the most important of the Macintosh software engineers. “They felt threatened by Apple’s BASIC, which was a considerably better implementation than theirs.” Gates said that Apple would have to kill their own version of BASIC and — just to add salt to the wound — sign over the name “MacBASIC” to Microsoft if they wished to retain the latter’s services as a Mac application developer and retain Microsoft BASIC on the Apple II.

And that wasn’t even the worst form taken by Gates’s escalation. Apple would also have to sign what amounted to a surrender document, granting Microsoft the right to create “derivative works of the visual displays generated by Apple’s Lisa and Macintosh graphic-user-interface programs.” The specific “derivative works” covered by the agreement were the user interfaces already found in Microsoft Windows for MS-DOS and five Microsoft applications for the Macintosh, including Word and Excel. The agreement provided Microsoft with nothing less than a “non-exclusive, worldwide, royalty-free, perpetual, non-transferable license to use those derivative works in present and future software programs, and to license them to and through third parties for use in their software programs.” In return, Microsoft would promise only to support Word and Excel on the Mac until October 1, 1986 — something they would certainly have done anyway. Gates was making another of those deviously brilliant tactical moves that were already establishing his reputation as the computer industry’s most infamous villain. Rather than denying that a “visual display” could fall under the domain of copyright, as many might have been tempted to do, he would rather affirm the possibility while getting Apple to grant Microsoft an explicit exception to being bound by it. Thus Apple — or, for that matter, Microsoft — could continue to sue MacOS’s — and potentially Windows’s — competitors out of existence while Windows trundled on unmolested.

Sculley called together his management team to discuss what to do about this Apple threat against Microsoft that had suddenly boomeranged into a Microsoft threat against Apple. Most at the meeting insisted that Gates had to be bluffing, that he would never cut off several extant revenue streams just to spite Apple and support this long-overdue Windows product of his which had been an industry laughingstock for so long. But Sculley wasn’t sure; he kept coming back to the fact that Microsoft could undoubtedly survive without Apple, but Apple might not be able to survive without Microsoft — at least not right now, given the Mac’s current travails. “I’m not ready to bloody the company,” he said, and signed the surrender document two days after Windows 1.01 first appeared in its boxed form at the Fall 1985 Comdex show’s Microsoft Roast. His tone toward Gates now verged on pleading: “What I’m really asking for, Bill, is a good relationship. I’m glad to give you the rights to this stuff.”

After the full scale of what John Sculley had given away to Bill Gates became clear, Apple fans started drawing pointed comparisons between Sculley and Neville Chamberlain. As it happened, Sculley’s version of “peace for our time” would last scarcely longer than Chamberlain’s. And as for Gates… well, plenty of Apple fans would indeed soon be calling him the Adolf Hitler of the computer industry in the midst of plenty of other overheated rhetoric.

Bill Gates wrote a jubilant email to eleven colleagues at Microsoft’s partner companies, saying that he had “received a release from Apple for any possible copyright, trade-secret, or patent issue relating to our products, including Windows.” The people at Apple were less jubilant. “Everyone was somewhat disgusted over [the agreement],” remembers Donn Denman, the chief programmer of Apple’s much superior but shitcanned MacBASIC. Sculley could only say to him and his colleagues that “it was the right decision for the company. It was a business decision.” They didn’t find him very convincing. The bad feelings engendered by the agreement would never entirely go away, and the relationship between Apple and Microsoft would never be quite the same again — not even when Excel became one of the prime drivers of something of a Macintosh Renaissance in the following year.

We jump forward now to March 17, 1988, by which time the industry had changed considerably. Microsoft was still entangled with IBM in the development of OS/2 and its Presentation Manager, but was also continuing to push Windows, which had come out in a substantially revised version 2 some six months earlier. The Macintosh, meanwhile, had carved out a reasonable niche for itself as a tool for publishers and creative professionals of various stripes, even as the larger world of business-focused personal computing continued to run on MS-DOS.

Sitting in his office that day, Bill Gates agreed to take a call from a prominent technology journalist, who asked him if he had a comment to make about the new lawsuit from Apple against Microsoft. “Lawsuit? What lawsuit?” Gates asked. He had just met with Sculley the day before to discuss Microsoft’s latest Mac applications. “He never mentioned it to me. Not one word,” said Gates to the reporter on the other end of the line.

Sculley, it seemed, had decided not to risk losing his nerve again. Apple had gone straight to filing their lawsuit in court, without giving Microsoft so much as a warning, much less a chance to negotiate a remedy. It appeared that the latest version of Microsoft’s GUI environment for MS-DOS, which with its free-dragging and overlapping windows hewed much closer to the Macintosh than its predecessor, had both scared and enraged Sculley to such an extent that he had judged this declaration of war to be his only option. “Windows 2 is an unconscionable ripoff of MacOS,” claimed Apple. They demanded $50,000 per infringement per unit of Windows sold — adding up to a downright laughable total of $4.5 billion by their current best estimate — and the “impoundment and destruction” of all extant or future copies of Windows. Microsoft replied that Apple had signed over the rights to the Mac’s “visual displays” for use in Windows in 1985, and, even if they hadn’t, such things weren’t really copyrightable anyway.

So, who had the right of this thing? As one might expect, the answer to that question is far more nuanced than the arguments which either side would present in court. Writing years after the lawsuit had passed into history but repeating the arguments he had once made in court, Tandy Trower, the Windows project leader at Microsoft from 1985 to 1988, stated that “the allegation clearly had no merit as I had never intended to copy the Macintosh interface, was never given any directive to do that, and never directed my team to do that. The similarities between the two products were largely due to the fact that both Windows and Macintosh had common ancestors, that being many of the earlier windowing systems, such as those like Alto and Star that were created at Xerox PARC.” This is, to put it bluntly, nonsense. To deny the massive influence of the Macintosh on Windows is well-nigh absurd — although, I should be careful to say, I have no reason to believe that Trower makes his absurd argument out of anything but ignorance here. By the time he arrived on the Windows team, Apple’s implementation of the GUI had already been so thoroughly internalized by the industry in general that the huge strides it had made over the Xerox PARC model were being forgotten, and the profoundly incorrect and unfair meme that Apple had simply copied Xerox’s work and called it a day was already taking hold.

The people at Xerox PARC had indeed originated the idea of the GUI, but, as playing with a Xerox Alto emulator will quickly reveal, hadn’t been able to take it anywhere close to the Macintosh’s place of elegant, intuitive usability. By the time the Xerox GUI made its one appearance as a commercial product, in the form of the Xerox Star office system, it had actually regressed in at least one way even as it progressed in many others: overlapping windows, which had been possible in Xerox PARC’s Smalltalk environment, were not allowed on the Star. Tellingly, the aspect of Windows 1 which attracted the most derision back in the day, and which still makes it appear so hapless today, is a similar rigid system of tiled windows. (The presence of this retrograde-seeming element was largely thanks to Scott MacGregor, who arrived at Microsoft to guide the Windows project after having been one of the major architects of the Star.) Meanwhile, as I also noted in my little tour of Windows 1 in a previous article, many of those aspects of it which do manage to feel natural and intuitive today — such as the drop-down menus — are those that work more like the Macintosh than anything developed at Xerox PARC. In light of this reality, Microsoft’s GUI would only hew closer to the Mac model as time went on, for the incontrovertible reason that the Mac model was just better for getting real stuff done in the real world.

And there are plenty of other disconcerting points of similarity between early versions of MacOS and early versions of Windows. Right from the beginning, Windows 1 shipped with a suite of applets — a calculator, a “control panel” for system settings, a text editor that went by the name of “notepad,” etc. — that were strikingly similar to those included in MacOS. Further, if other members of the Windows team itself are to be believed, Microsoft’s Neil Konzen, a programmer intimately familiar with the Macintosh, duplicated some of MacOS’s internal structures so closely as to introduce some of the same bugs. In short, to believe that the Macintosh wasn’t the most important influence on the Windows user interface by far, given not only the similarities in the finished product but the knowledge that Microsoft had been working daily with the evolving Macintosh since January of 1982, is to actively deny reality out of either ignorance or some ulterior motive.

Which isn’t to say that Microsoft’s designers had no ideas of their own. In fact, some of those ideas are still in place in current versions of Windows. To take perhaps the most immediately obvious example, Windows then and now places its drop-down menus at the top of the windows themselves, while the Macintosh has a menu bar at the top of the screen which changes to reflect the currently selected window.3 And Microsoft’s embrace of the two-button mouse, contrasted with Apple’s stubborn loyalty to the one-button version of same, has sparked constant debate for decades. Still, differing details like these should be seen as exactly that in light of all the larger-scale similarities.

And yet just acknowledging that Windows was, shall we say, strongly influenced by MacOS hardly got to the bottom of the 1988 case. There was still the matter of that November 1985 agreement, which Microsoft was now waving in the face of anyone in the legal or journalistic professions who would look at it. The bone of contention between the two companies here was whether the “visual displays” of Windows 2 as well as Windows 1 were covered by the agreement. Microsoft naturally contended that they were; Apple contended that the Windows 2 interface had changed so much in comparison to its predecessor — coming to resemble the Macintosh even more in the process — that it could no longer be considered one of the specific “derivative works” to which Apple had granted Microsoft a license.

We’ll return to the court’s view of this question shortly. For now, though, let’s give Apple the benefit of the doubt as we continue to explore the full ramifications of their charges against Microsoft. The fact was that if one accepted Apple’s contention that Windows 2 wasn’t covered by the agreement, the questions surrounding the case grew more rather than less momentous. Could and should one be able to copyright the “look and feel” of a user interface, as opposed to the actual code used to create it? In pressing their claim, Apple was relying on an amorphous, under-explicated area of copyright law known as “visual copyright.”

In terms of computer software, the question of the bounds of visual copyright had been most thoroughly explored in the context of videogames. Back in 1980, Midway, a major producer of standup-arcade games, had sued a much smaller company called Dirkschneider for producing a clone of their popular game Galaxian. The judge in that case ruled in favor of Midway, formulating a new legal standard called the “Ten-foot Rule”: “If a reasonable person could not, at ten feet, tell the difference between two competitive products, then there was cause to believe an infringement was occurring.” Atari, the biggest videogame producer of all, then proceeded to use this precedent to pressure dozens of companies into withdrawing their clones of Atari games — in arcades, on game consoles, and on computers — from the market.

Somewhat later, in 1985, Brøderbund Software sued Kyocera for bundling with their printers an application called Printmaster, a thinly veiled clone of Brøderbund’s own hugely popular Print Shop package for making signs, greeting cards, and banners. They won their case the following year, with Judge William H. Orrick stating that Brøderbund’s copyright did indeed cover “the overall appearance, structure, and sequence” of screens in their software, and that Kyocera had thus infringed on same. Brøderbund’s Gary Carlston called the ruling “historic”: “If we don’t have copyright protection for our products, then it is going to be significantly more difficult to maintain a competitive advantage.” Encouraged by this ruling, in 1987 a maker of telecommunications software called Digital Communications Associates sued a company called Softklone Corporation — their name certainly didn’t help their cause — for copying the status display of their terminal software, and won their case as well. The Ten-Foot Rule, it seemed, could be successfully applied to software other than games. Both of these cases were cited by Apple’s lawyers in their own suit against Microsoft.

Brøderbund’s Print Shop side-by-side with Kyocera’s Printmaster.

Yet the Ten-Foot Rule, at least when applied to general-purpose software rather than games, struck many as deeply problematic. One of the most important advantages of a GUI was the way it made diverse types of software from diverse developers work and look the same, thereby keeping the user from having to relearn how to do the same basic tasks over and over again. What sort of chaos would follow if people started suing each other willy-nilly over this much-needed uniformity? And what did the Ten-Foot Rule mean for the many GUI environments, on MS-DOS and other platforms, that looked so similar to one another and to MacOS? That, of course, was the real crux of the matter for Microsoft and Apple as they faced one another in court.

The debate over the Ten-Foot Rule and its potential ramifications wasn’t actually a new one, having already been taken up in public by the software industry before Apple filed their lawsuit. Fully thirteen months before that momentous day, Larry Tesler, an Apple executive, clashed heatedly with Bill Gates over this very issue at a technology conference. Tesler insisted that there was no problem inherent in applying the Ten-Foot Rule to operating systems and operating environments: “When someone comes up with a very good and popular look and feel, as we’ve done with the Macintosh, then they can make that available by licensing [it] to other people.”

But Gates was having none of this:

There’s no control of look and feel. I don’t know anybody who has asserted that things like drop-down menus and dialog boxes and just those general form-type aspects are subject to this look-and-feel stuff. Certainly it’s our view that the consistency of the user interface has become a very spreading thing, and that it’s open, generic technology. All of these approaches — how you click on [menu] bars, and certainly all those user-interface techniques and windows — there’s absolutely no restriction in any way on how people use those.

He thus ironically argued against the very premise of the 1985 agreement between Apple and Microsoft — that Apple had created a “visual display” subject to copyright protection, to which they were now granting Microsoft a license for certain products. But then, Gates seldom let deeply-held philosophical beliefs interfere with his pursuit of short-term advantage. In this latest debate as well, Gates’s arguments were undoubtedly self-serving, but they were no less valid for being so. The danger he could point to if this sort of thing should spread was that of every innovative new application seeking copyright protection not just for its code but for the very ideas that made it up, because form — i.e., look and feel — ideally followed function in software engineering. What would have happened if VisiCorp had been able to copyright the look and feel of the first spreadsheet? (VisiCalc and Lotus 1-2-3 looked pretty much identical from ten feet.) If WordStar had been able to copyright the look and feel of the word processor? (Try telling WordStar from the much more powerful WordPerfect from ten feet away.) If, to choose a truly absurd example, the first individual to devise a command-line interface back in the mists of time had been able to copyright that? It wasn’t at all clear where the lines could be drawn once the law started down this slippery slope. If Apple owned the set of ideas and approaches that everyone now thought of as the GUI in general, where did that leave the rest of the industry?

For this reason, Apple’s lawsuit, when it came, was greeted with deep concern even by many of those who weren’t particularly friendly with Microsoft. “Although Apple has a right to protect the results of its development and marketing efforts,” said the respected Silicon Valley pundit Larry Magid, “it should not try to thwart the obvious direction of the industry.” “If Apple is trying to push this as far as they appear to be trying to push it,” said Dan Bricklin of VisiCalc fame, “this is a sad day for the software industry in America.” More surprisingly, MacOS architect Andy Hertzfeld said that “in general, it’s a horrible thing. Apple could really end up hurting itself.” Most surprisingly of all, even Steve Jobs, now running a new company called NeXT, found Apple’s arguments as dangerous as they were unconvincing: “When we were developing the Macintosh, we kept in mind a famous quote of Picasso: ‘Good artists copy, great artists steal.’ What do I think of the suit? I personally don’t understand it. Can I copyright gravity? No.”

Interestingly, the lawyers pressing the lawsuit on Apple’s behalf didn’t ask for a preliminary injunction that would have forced Microsoft to withdraw Windows from the market. Some legal watchers interpreted this fact as a sign that they themselves weren’t certain about the real merits of their case, and hoped to win it as much through bluster as finely-honed legal arguments. Ditto Apple’s request that the eventual trial be decided by a jury of ordinary people who might be prone to weigh the case based on everyday standards of “fairness,” rather than by a judge who would be well-versed in the niceties of the law and the full ramifications of a verdict against Microsoft.

At this point, and especially given those ramifications, one feels compelled to ask just why Apple chose at this juncture to embark on such a lengthy, expensive, and fraught enterprise as a lawsuit against the company that remained the most important single provider of serious business software for the Macintosh, a platform whose cup still wasn’t exactly running over with such things. By way of an answer, we should consider that John Sculley was as proud a man as most people who rise to his elevated status in business tend to be. The belief, widespread both inside and outside of Apple, that he had let Bill Gates bully, outsmart, and finally rob him blind back in 1985 had to rankle him badly. In addition, Apple in general had long nursed a grievance, unproductive but understandable, against all the outsiders who had copied the interface they had worked so long and hard to perfect; thus those threatened lawsuits against Digital Research and Microsoft all the way back in 1985. A wiser leader might have told his employees to take their competitors’ imperfect copying as proof of Apple’s superiority, might have exhorted them to look toward their next big innovation rather than litigate their innovations of the past. But, at least on March 17, 1988, John Sculley wasn’t that wiser leader. Thus this lawsuit, dangerous not just to Apple and Microsoft but to their entire industry.

Bill Gates, for his part, remained more accustomed to bullying than being bullied. It had been spelled out for him right there in the court filing that a loss to Apple would almost certainly mean the end of Windows, the operating environment which was quite possibly the key to Microsoft’s future. Even widespread fear of such an event, he realized, could be devastating to Windows’s — and thus to Microsoft’s — prospects. So, he struck back fiercely so as to leave no doubt where he stood. Microsoft filed a counter-suit in April of 1988, accusing Apple of breaking the 1985 agreement and of filing their own lawsuit in bad faith, in the hope of creating fear, uncertainty, and doubt around Windows and thus “wrongfully inhibiting” its commercial future. Adding weight to their argument that the original lawsuit was a form of business competition by other means was the fact that Apple was being oddly selective in choosing whom to sue over the alleged copyright violations. Asked why they weren’t going after other products just as similar to MacOS as Windows, such as IBM’s forthcoming OS/2 Presentation Manager, Apple refused to comment.

The first skirmishes took place in the press rather than a courtroom: Sculley accusing Gates of having tricked him into signing the 1985 agreement, Gates saying a contract was a contract, and what sort of a chief executive let himself be tricked anyway? The exchanges just kept getting uglier from there. The technology journalists, naturally, loved every minute of it, while the software industry was thrown into a tizzy, wondering what this would mean for Windows just as it finally seemed to be gaining some traction. Philippe Kahn, CEO of Borland, described the situation in colorful if non-politically-correct language: it was like “waking up and finding your partner might have AIDS.”

The court case marched forward much more slowly than the tabloid war of words. Gates stated in a sworn deposition that “from a user’s perspective, the visual displays which appear in Windows 2 are virtually identical to those which appear in Windows 1.” “This assertion,” Apple replied, “is contradicted by even the most casual observation of the two products.” On March 18, 1989, Judge William Schwarzer of the Federal District Court in San Francisco marked the one-year anniversary of the case by ruling against Microsoft on this issue, stating that only those attributes of Windows 2 which had also existed in Windows 1 were covered by the 1985 agreement. This meant most notably that the newer GUI’s system of overlapping windows stood outside the boundaries of that document, and thus that, as the judge put it, the 1985 agreement alone “was not a complete defense” for Microsoft. It did not, he ruled, give Microsoft the right “to develop future versions of Windows as it pleases. What Microsoft received was a license to use the visual displays in the named software products as they appeared to the user in November 1985. The displays [of Windows 1 and Windows 2] are fundamentally different.” Microsoft’s stock price promptly plummeted by 27 percent. It was an undeniable setback. “Microsoft’s major defense has been shot down,” crowed Apple’s attorneys.

“Major” was perhaps not the right choice of adjectives, but certainly Microsoft’s simplest possible form of defense had proved insufficient to bail them out. It seemed that total victory could be achieved now only by invalidating the whole notion of visual copyright which underlay both the 1985 agreement and Apple’s lawsuit based on its violation. That meant a long, tough slog at best. And with Windows 3 — the version that Microsoft was convinced would finally be the breakthrough version — getting closer and closer to release and looking more and more like the Macintosh all the while, the stakes were higher than ever.

The question of look and feel and visual copyright as applied to software had implications transcending even the fate of Windows or either company. If Apple’s suit succeeded, it would transform the software business overnight, making it extremely difficult to borrow or build on the ideas of others in the way that software had always done in the past. Bill Gates was an avid student of business history. As was his wont, he now looked back to compare his current plight with that of an earlier titan of industry. Back in 1903, just as the Ford Motor Company was getting off the ground, Henry Ford had been hit with a lawsuit from a group of inventors claiming to own a patent on the very concept of the automobile. He had battled them for years, vowing to fight on even after losing in open court in 1909: “There will be no let-up in the legal fight,” he declared on that dark day. At last, in 1911, he won the case on appeal — winning it not only for Ford Motor Company but for the future of the automobile industry as a field of open competition. His own legal war had similar stakes, Gates believed, and he and Microsoft intended to prosecute it in equally stalwart fashion — to win it not just for themselves but for the future of the software industry. This was necessary, he wrote in a memo, “to help set the boundaries of where copyrights should and should not be applied. We will prevail.”

(Sources: the books The Making of Microsoft: How Bill Gates and His Team Created the World’s Most Successful Software Company by Daniel Ichbiah and Susan L. Knepper, Hard Drive: Bill Gates and the Making of the Microsoft Empire by James Wallace and Jim Erickson, Gates: How Microsoft’s Mogul Reinvented an Industry and Made Himself the Richest Man in America by Stephen Manes and Paul Andrews, and Apple Confidential 2.0: The Definitive History of the World’s Most Colorful Company by Owen W. Linzmayer; Wall Street Journal of September 25 1987; Creative Computing of May 1985; InfoWorld of October 7 1985 and October 20 1986; MacWorld of October 1993; New York Times of March 18 1988 and March 18 1989.)


  1. Reworked to run under a 68000 architecture, GEM would enjoy some degree of sustained success in another realm: not as an MS-DOS-hosted GUI but as the GUI hosted in the Atari ST’s ROM. In this form, it would survive well into the 1990s. 

  2. Apple sued Hewlett-Packard at the same time, for an application called NewWave which ran on top of Windows and provided many of the Mac-like features, such as icons representing programs and disks and a desktop workspace, which Windows 2 alone still lacked. But that lawsuit would always remain a sideshow in comparison to the main event to whose fate its own must inevitably be tied. So, in the interest of that aforementioned clarity and concision, we won’t concern ourselves with it here. 

  3. Both Microsoft and Apple have collected reams of data which they claim prove that their approach is the best one. I do suspect, however, that the original impetus can be found in the fact that MacOS was originally a single-tasking operating system, meaning that only one menu bar would need to be available at any one time. Windows, on the other hand, was designed as a multitasking environment from the start. 

 


The Manhole


Because the CD-ROM version of The Manhole sold in relatively small numbers in comparison to the original floppy version, the late Russell Lieblich’s surprisingly varied original soundtrack is too seldom heard today. So, in the best tradition of multimedia computing (still a very new and sexy idea in the time about which I’m writing), feel free to listen while you read.




Were HyperCard “merely” the essential bridge between Ted Nelson’s Xanadu fantasy and the modern World Wide Web, it would stand as one of the most important pieces of software of the 1980s. But, improbably, HyperCard was even more than that. It’s easy to get so dazzled by its early implementation of hypertext that one loses track entirely of the other part of Bill Atkinson’s vision for the environment. True to the Macintosh, “the computer for the rest of us,” Atkinson designed HyperCard as a sort of computerized erector set for everyday users who might not care a whit about hypertext for its own sake. With HyperCard, he hoped, “a whole new body of people who have creative ideas but aren’t programmers will be able to express their ideas or expertise in certain subjects.”

He made good on that goal. An incredibly diverse group of people worked with HyperCard, a group in which traditional hackers were very much the minority. Danny Goodman, the man who became known as the world’s foremost authority on HyperCard programming, was actually a journalist whose earlier experiences with programming had been limited to a few dabblings in BASIC. In my earlier article about hypertext and HyperCard, I wrote how “a professor of music converted his entire Music Appreciation 101 course into a stack.” Well, readers, I meant that literally. He did it himself. Industry analyst and HyperCard zealot Jan Lewis:

You can do things with it [HyperCard] immediately. And you can do sexy things: graphics, animation, sound. You can do it without knowing how to program. You get immediate feedback; you can make a change and see or hear it immediately. And as you go up on the learning curve — let’s say you learn how to use HyperTalk [the bundled scripting language] — again, you can make changes easily and simply and get immediate feedback. It just feels good. It’s fun!

And yet HyperCard most definitely wasn’t a toy. People could and did make great, innovative, commercial-quality software using it. Nowhere is the power of HyperCard — a cultural as well as a technical power — illustrated more plainly than in the early careers of Rand and Robyn Miller.


Rand and Robyn had a very unusual upbringing. The first and third of the four sons of a wandering non-denominational preacher, they spent their childhoods moving wherever their father’s calling took him: from Dallas to Albuquerque, from Hawaii to Haiti to Spokane. They were a classic pairing of left brain and right brain. Rand had taken to computers from the instant he was introduced to them via a big time-shared system whilst still in junior high, and had made programming them into his career. By 1987, the year HyperCard dropped, he was to all appearances settled in life: 28 years old, married with children, living in a small town in East Texas, working for a bank as a programmer, and nurturing a love for the Apple Macintosh (he’d purchased his first Mac within days of the machine’s release back in 1984). He liked to read books on science. His brother Robyn, seven years his junior, was still trying to figure out what to do with his life. He was attending the University of Washington in somewhat desultory fashion as an alleged anthropology major, but devoted most of his energy to drawing pictures and playing the guitar. He liked to read adventure novels.

HyperCard struck Rand Miller, as it did so many, with all the force of a revelation. While he was an accomplished enough programmer to make a living at it, he wasn’t one who particularly enjoyed the detail work that went with the trade. “There are a lot of people who love digging down into the esoterics of compilers and C++, getting down and dirty with typed variables and all that stuff,” he says. “I wanted a quick return on investment. I just wanted to get things done.” HyperCard offered the chance to “get things done” dramatically faster and more easily than any programming environment he had ever seen. He became an immediate convert.


With two small girls of his own, Rand felt keenly the lack of quality children’s software for the Macintosh. He hit upon the idea of making a sort of interactive storybook using HyperCard, a very natural application for a hypertext tool. Lacking the artistic talent to make a go of the pictures, he thought of his little brother Robyn. The two men, so far apart in years and geography and living such different lives, weren’t really all that close. Nevertheless, Rand had a premonition that Robyn would be the perfect partner for his interactive storybook.

But Robyn, who had never owned a computer and had never had any interest in doing so, wasn’t immediately enticed by the idea of becoming a software developer. Getting him just to consider the idea took quite a number of letters and phone calls. At last, however, Robyn made his way down to the Macintosh his parents kept in the basement of the family home in Spokane and loaded up the copy of HyperCard his brother had sent him. There, like so many others, he was seduced by Bill Atkinson’s creation. He started playing around, just to see what he could make. What he made right away became something very different from the interactive storybook, complete with text and metaphorical pages, that Rand had envisioned. Robyn:

I started drawing this picture of a manhole — I don’t even know why. You clicked on it and the manhole cover would slide off. Then I made an animation of a vine growing out. The vine was huge, “Jack and the Beanstalk”-style. And then I didn’t want to turn the page. I wanted to be able to navigate up the vine, or go down into the manhole. I started creating a navigable world by using the very simple tools [of HyperCard]. I created this place. I improvised my way through this world, creating one thing after another. Pretty soon I was creating little canals, and a forest with stars. I was inventing it as I went. And that’s how the world was born.

For his part, Rand had no problem accepting the change in approach:

Immediately you are enticed to explore instead of turning the page. Nobody sees a hole in the ground leading downward and a vine growing upward and in the distance a fire hydrant that says, “Touch me,” and wants to turn the page. You want to see what those things are. Instead of drawing the next page [when the player clicked a hotspot], he [Robyn] drew a picture that was closer — down in the manhole or above on the vine. It was kind of a stream of consciousness, but it became a place instead of a book. He started sending me these images, and I started connecting them, trying to make them work, make them interactive.


In this fashion, they built the world of The Manhole together: Robyn pulling its elements from the flotsam and jetsam of his consciousness and drawing them on the screen, Rand binding it all together into a contiguous place, and adding sound effects and voice snippets here and there. If they had tried to make a real game of the thing, with puzzles and goals, such a non-designed approach to design would likely have gone badly wrong in a hurry.

Luckily, puzzles and goals were never the point of The Manhole. It was intended always as just an endlessly interesting space to explore. As such, it would prove capable of captivating children and the proverbial young at heart for hours, full as it was of secrets and Easter eggs hidden in the craziest of places. One can play with The Manhole on and off for literally years, and still continue to stumble upon the occasional new thing. Interactions are often unexpected, and unexpectedly delightful. Hop in a rowboat to take a little ride and you might emerge in a rabbit’s teacup. Start watching a dragon’s television — Why does a dragon have a television? Who knows! — and you can teleport yourself into the image shown on the screen to emerge at the top of the world. Search long enough, and you might just discover a working piano you can actually play. The spirit of the thing is perhaps best conveyed by the five books you find inside the friendly rabbit’s home: Alice in Wonderland; The Wind in the Willows; The Lion, the Witch, and the Wardrobe; Winnie the Pooh; and Metaphors of Intercultural Philosophy (“This book isn’t about anything!”). Like all of those books excepting, presumably, the last, The Manhole is pretty wonderful, a perfect blend of sweet cuteness and tart whimsy.


With no contacts whatsoever within the Macintosh software industry, the brothers decided to publish The Manhole themselves via a tiny advertisement in the back of Macworld magazine, taken out under the auspices of Prolog, a consulting company Rand had founded as a moonlighting venture some time before. They rented a tiny booth to show The Manhole publicly for the first time at the Hyper Expo in San Francisco in June of 1988. (Yes, HyperCard mania had gotten so intense that there were entire trade shows dedicated just to it.) There they were delighted to receive a visit from none other than HyperCard’s creator Bill Atkinson, with his daughter Laura in tow; not yet five years old, she had no trouble navigating through their little world. Incredibly, Robyn had never even heard the word “hypertext” prior to the show, had no idea about the decades of theory that underpinned the program he had used, savant-like, to create The Manhole. When he met a band of Ted Nelson’s disgruntled Xanadu disciples on the show floor, come to crash the HyperCard party, he had no idea what they were on about.

But the brothers’ most important Hyper Expo encounter was a meeting with Richard Lehrberg, Vice President for Product Development at Mediagenic,1 who took a copy of The Manhole away with him for evaluation. Lehrberg showed it to William Volk, whom he had just hired away from the small Macintosh and Amiga publisher Aegis to become Mediagenic’s head of technology; he described it to Volk unenthusiastically as “this little HyperCard thing” done by “two guys in Texas.” Volk was much more impressed. He was immediately intrigued by one aspect of The Manhole in particular: the way that it used no buttons or conventional user-interface elements at all. Instead, the pictures themselves were the interface; you could just click where you would and see what happened. It was perhaps a product of Robyn Miller’s sheer naïveté as much as anything else; seasoned computer people, so used to conventional interface paradigms, just didn’t think like that. But regardless of where it came from, Volk thought it was genius, a breaking down of a wall that had heretofore always separated the user from the virtual world. Volk:

The Miller brothers had come up with what I call the invisible interface. They had gotten rid of the idea of navigation buttons, which was what everyone was doing: go forward, go backward, turn right, turn left. They had made the scenes themselves the interface. You’re looking at a fire hydrant. You click on the fire hydrant; the fire hydrant sprays water. You click on the fire hydrant again; you zoom in to the fire hydrant, and there’s a little door on the fire hydrant. That was completely new.
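
What Volk describes is easy to express concretely. Here is a minimal sketch in Python rather than HyperTalk (every scene name, image file, and hotspot coordinate below is hypothetical, invented purely for illustration): each scene is nothing but a picture plus a list of invisible clickable regions, and a click simply jumps to whatever scene the region points to.

    # Each scene is a picture plus invisible clickable regions ("hotspots").
    # There are no visible buttons or navigation arrows: the picture itself
    # is the whole interface, in the spirit of The Manhole.
    SCENES = {
        "street": {
            "image": "street.png",
            "hotspots": [
                # (left, top, right, bottom, target scene)
                (120, 300, 200, 360, "manhole_open"),     # the manhole cover
                (400, 100, 460, 220, "hydrant_closeup"),  # the fire hydrant
            ],
        },
        "manhole_open": {
            "image": "manhole_open.png",
            "hotspots": [(140, 0, 220, 480, "street")],   # climb back out
        },
        "hydrant_closeup": {
            "image": "hydrant_closeup.png",
            "hotspots": [],  # a dead end, kept empty for brevity
        },
    }

    def click(scene, x, y):
        """Return the scene a click at (x, y) leads to, or stay put."""
        for left, top, right, bottom, target in SCENES[scene]["hotspots"]:
            if left <= x <= right and top <= y <= bottom:
                return target
        return scene  # clicks on empty space do nothing

    print(click("street", 150, 330))  # -> "manhole_open"

The point is less the code than its shape: no separate layer of controls stands between the player and the world, just the world itself responding to curiosity.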

Of course, other games did have you clicking “into” their world to make things happen; the point-and-click adventure genre was evolving rapidly during this period to replace the older parser-driven adventure games. But even games like Déjà Vu and Maniac Mansion, brilliantly innovative though they were, still surrounded their windows into their worlds with a clutter of “verb” buttons, legacies of the genre’s parser-driven roots. The Manhole, however, presented the player with nothing but its world. What with its defiantly non-Euclidean — not to say nonsensical — representation of space and its lack of goals and puzzles, The Manhole wasn’t a conventional adventure game by any stretch. Nevertheless, it pointed the way to what the genre would become, not least in the later works of the Miller brothers themselves.

Much of Volk’s working life for the next two years would be spent on The Manhole, by the end of which period he would quite possibly be more familiar with its many nooks and crannies than its own creators were. He became The Manhole‘s champion inside Mediagenic, convincing his colleagues to publish it, thereby bringing it to a far wider audience than the Miller brothers could ever have reached on their own. Released by Mediagenic under their Activision imprint, it became a hit by the modest standards of the Macintosh consumer-software market. Macworld magazine named The Manhole the winner of their “Wild Card” category in a feature article on the best HyperCard stacks, while the Software Publishers Association gave it an “Excellence in Software” award for “Best New Use of a Computer.”


Well aware that The Manhole was collecting a certain chic cachet to itself, Mediagenic/Activision didn’t hesitate to play that angle up in their advertising.

Had matters been left there, The Manhole would remain historically interesting both as a delightful little curiosity of its era and as the starting point of the hugely significant game-development careers of the Miller brothers. Yet there’s more to the story.

William Volk, frustrated with the endless delays of CD-I and the state of paralysis the entire industry was in when it came to the idea of publishing entertainment software on CD, had been looking for some time for a way to break the logjam. It was Stewart Alsop, an influential tech journalist, who first suggested to Volk that the answer to his dilemma was already part of Mediagenic’s catalog — that The Manhole would be perfect for CD-ROM. Volk was just the person to see such a project through, having already experimented extensively with CD-ROM and CD-I at Aegis as well as at Mediagenic. With the permission of the Miller brothers, he recruited Russell Lieblich, Mediagenic’s longstanding guru in all things music- and sound-related, to compose and perform a soundtrack for The Manhole which would play from the CD as the player explored.

An important difference separates the way the music worked in the CD-ROM version of The Manhole from the way it worked in virtually all computer games to appear before it. The occasional brief digitized snippet aside, music in computer games had always been generated on the computer, whether by sound chips like the Commodore 64’s famous SID or entire sound boards like the top-of-its-class Roland MT-32 (we shall endeavor to forget the horrid beeps and squawks that issued from the IBM PC and Apple II’s native sound hardware). But The Manhole’s music, while having been originally generated entirely or almost entirely on computers in Lieblich’s studio, was then recorded onto CD for digital playback, just like a song on a music CD. This method, made possible only by evolving computer sound hardware and, most importantly, by the huge storage capacity of a CD-ROM, would in the years to come slowly become simply the way that computer-game music was done. Today many big-budget titles hire entire orchestras to record soundtracks as elaborate and ambitious as the ones found in big Hollywood feature films, whilst also including digitized recordings of voices, squealing tires, explosions, and all the inevitable rest. In fact, surprisingly little of the sound present in most modern games is synthesized sound, a situation that has long since relegated elaborate setups like the Roland MT-32 to the status of white elephants; just pipe your digitized recording through a digital-to-analog converter and be done with it already.
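
On modern hardware, this once-difficult trick reduces to a few lines. Here is a minimal sketch in Python using the pygame library, assuming a hypothetical file called soundtrack.ogg (any digitized recording in a format pygame supports would do): the library streams and decodes the audio in the background, in the same spirit as The Manhole pulling its soundtrack off the CD, while the rest of the program remains free to respond to the player.

    import pygame

    pygame.mixer.init()                        # open the digital-audio output device
    pygame.mixer.music.load("soundtrack.ogg")  # a digitized recording, not synthesized notes
    pygame.mixer.music.play()                  # playback streams in the background

    # The program can keep doing other work while the music plays;
    # here we simply idle until the track finishes.
    while pygame.mixer.music.get_busy():
        pygame.time.wait(100)                  # sleep 100 ms between checks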

As the very first title to go all digitized all the time, The Manhole didn’t have a particularly easy time of it; getting the music to play without breaking up or stuttering as the player explored presented a huge challenge on the Macintosh, a machine whose minimalist design burdened the CPU with all of the work of sound generation. However, Volk and his colleagues got it going at last. Published in the spring of 1989, the CD-ROM version of The Manhole marked a major landmark in the history of computing, the first American game — or, at least, software toy (another big buzzword of the age, as it happens) — to be released on CD-ROM.2 Volk, infuriated with Philips for the chaos and confusion CD-I’s endless delays had wrought in an industry he believed was crying out for the limitless vistas of optical storage, sent them a copy of The Manhole along with a curt note: “See! We did it! We’re tired of waiting!”

And they weren’t done yet. Having gotten The Manhole working on CD-ROM on the Macintosh, Volk and his colleagues at Mediagenic next tackled the daunting task of porting it to the most popular platform for consumer software, MS-DOS — a platform without HyperCard. To address this lack, Mediagenic developed a custom engine for CD-ROM titles on MS-DOS, dubbing it the Multimedia Applications Development Environment, or MADE.3 Mediagenic’s in-house team of artists redrew Robyn Miller’s original black-and-white illustrations in color, and The Manhole on CD-ROM for MS-DOS shipped in 1990.


In my opinion, The Manhole lost some of its unique charm when it was colorized for MS-DOS. The VGA graphics, impressive in their day, look just a bit garish and overdone today in comparison to the classic pen-and-ink style of the original.

The Manhole, idiosyncratic piece of artsy children’s software that it was, could hardly have been expected to break the industry’s optical logjam all on its own. Its CD-ROM incarnation, for that matter, was in the end little more than the floppy version with a soundtrack playing in the background — a nice addition certainly, but perhaps not quite the transformative experience which all of the rhetoric surrounding CD-ROM’s potential might have led one to expect. It would take another few excruciating years for a CD-ROM drive to become a must-have accessory for everyday American computers. Yet every revolution has to start somewhere, and William Volk deserves his full measure of credit for doing what he could to push this one forward in the only way that could ultimately matter: by stepping up and delivering a real, tangible product at long last. As Steve Jobs used to say, “Real artists ship.”

The importance of The Manhole, existing as it does right there at the locus of so much that was new and important in computing in the late 1980s, can be read in so many ways that there’s always a danger of losing some of them in the shuffle. But it should never be forgotten whilst trying to sort through the tangle that this astonishingly creative little world was principally designed by someone who had barely touched a computer in his life before he sat down with HyperCard. That he wound up with something so fascinating is a huge tribute not just to Robyn Miller and his enabling brother Rand, but also to Bill Atkinson’s HyperCard itself. Apple has long since abandoned HyperCard, and we enjoy no precise equivalent to it today. Indeed, its vision of intuitive, non-pretentious, fun programming is one that we’re in danger of losing altogether. Being one who loves the computer most of all as the most exciting tool for creation ever invented, I can’t help but see that as a horrible shame.

The Miller brothers had, as most of you reading this probably know, a far longer future in front of them than HyperCard would get to enjoy. Already well before 1988 was through, they had rechristened themselves Cyan Productions, a name that felt much more appropriate for a creative development house than the businesslike Prolog. As Cyan, they made two more pieces of children’s software, Cosmic Osmo and the Worlds Beyond the Mackerel and Spelunx and the Caves of Mr. Seudo. Both were once again made using HyperCard, and both were very much made in the spirit of The Manhole. And like The Manhole, both were published on CD-ROM as well as floppy disk; the Miller brothers, having learned much from Mediagenic’s process of moving their first title to CD-ROM, handled the CD-ROM as well as the floppy versions themselves when it came to these later efforts. Opinions are somewhat divided on whether the two later Cyan children’s titles fully recapture the magic that has led so many adults and children alike over the years to spend so much time plumbing the depths of The Manhole. None, however, can argue with the significance of what came next, the Miller brothers’ graduation to games for adults — and, as it happens, another huge milestone in the slow-motion CD-ROM revolution. But that story, like so many others, is one that we’ll have to tell at another time.

(Sources: Amstrad Action of January 1990; Macworld of July 1988, October 1988, November 1988, March 1989, April 1989, and December 1989; Wired of August 1994 and October 1999; The New York Times of November 28 1989. Also the books Myst and Riven: The World of the D’ni by Mark J.P. Wolf and Prima’s Official Strategy Guide: Myst by Rick Barba and Rusel DeMaria, and the Computer Chronicles television episodes entitled “HyperCard,” “MacWorld Special 1988,” “HyperCard Update,” and “Hypertext.” Online sources include Robyn Miller’s Myst postmortem from the 2013 Game Developer’s Conference; Richard Moss’s Ludiphilia podcast; a blog post by Robyn Miller. Finally, my huge thanks to William Volk for sharing his memories and impressions with me in an interview and for sending me an original copy of The Manhole on CD-ROM for my research.

The original floppy-disk-based version of The Manhole can be played online at archive.org. The Manhole: Masterpiece Edition, a remake supervised by the Miller brothers in 1994 which sports much-improved graphics and sound, is available for purchase on Steam.)


  1. Activision was renamed Mediagenic at almost the very instant that Lehrberg first met the Miller brothers. When the name change was greeted with universal derision, Activision/Mediagenic CEO Bruce Davis quickly began backpedaling on his hasty decision. The Manhole, for instance, was released by Mediagenic under their “Activision” label — which was odd because under the new ordering said label was supposed to be reserved for games, and The Manhole was considered children’s software, not a traditional game. I just stick with the name “Mediagenic” in this article as the least confusing way to address a confusing situation. 

  2. The first CD-based software to reach European consumers says worlds about the differences that persisted between American and European computing, and about the sheer can-do ingenuity that so often allowed British programmers in particular to squeeze every last ounce of potential out of hardware that was usually significantly inferior to that enjoyed by their American counterparts. Codemasters, a budget software house based in Warwickshire, came up with a unique shovelware package for the 1989 Christmas season. They transferred thirty old games from cassette to a conventional audio CD, which they then sold along with a special cable to run the output from an ordinary music-CD player into a Sinclair or Amstrad home computer. “Here’s your CD-ROM,” they said. “Have a ball.” By all accounts, Codemasters’s self-proclaimed “CD revolution,” kind of hilarious and kind of brilliant, did quite well for them. When it came to doing more with less in computing, you never could beat the Brits. 

  3. MADE’s scripting language was to some extent based on AdvSys, a language for amateur text-adventure creation that never quite took off like the contemporaneous AGT. 

 
 


The Freedom to Associate

In 1854, an Austrian priest and physics teacher named Gregor Mendel sought and received permission from his abbot to plant a two-acre garden of pea plants on the grounds of the monastery at which he lived. Over the course of the next seven years, he bred together thousands upon thousands of the plants under carefully controlled circumstances, recording in a journal the appearance of every single offspring that resulted, as defined by seven characteristics: plant height, pod shape and color, seed shape and color, and flower position and color. In the end, he collected enough data to formulate the basis of the modern science of genetics, in the form of a theory of dominant and recessive traits passed down in pairs from generation to generation. He presented his paper on the subject, “Experiments on Plant Hybridization,” before the Natural History Society of Austria in 1865, and saw it published in a poorly circulated scientific journal the following year.

And then came… nothing. For various reasons — perhaps due partly to the paper’s unassuming title, perhaps due partly to the fact that Mendel was hardly a known figure in the world of biology, undoubtedly due largely to the poor circulation of the journal in which it was published — few noticed it at all, and those who did dismissed it seemingly without grasping its import. Most notably, Charles Darwin, whose On the Origin of Species had been published while Mendel was in the midst of his own experiments, seems never to have been aware of the paper at all, thereby missing this key gear in the mechanism of evolution. Mendel was promoted to abbot of his monastery shortly after the publication of his paper, and the increased responsibilities of his new post ended his career as a scientist. He died in 1884, remembered as a quiet man of religion who had for a time been a gentleman dabbler in the science of botany.

But then, at the turn of the century, the German botanist Carl Correns stumbled upon Mendel’s work while conducting his own investigations into floral genetics, becoming in the process the first to grasp its true significance. To his huge credit, he advanced Mendel’s name as the real originator of the set of theories which he, along with one or two other scientists working independently, was beginning to rediscover. Correns effectively shamed those other scientists as well into acknowledging that Mendel had figured it all out decades before any of them even came close. It was truly a selfless act; today the name of Carl Correns is unknown except in esoteric scientific circles, while Gregor Mendel’s has been done the ultimate honor of becoming an adjective (“Mendelian”) and a noun (“Mendelism”) locatable in any good dictionary.


Vannevar Bush

So, all’s well that ends well, right? Well, maybe, but maybe not. Some 30 years after the rediscovery of Mendel’s work, an American named Vannevar Bush, dean of MIT’s School of Engineering, came to see the 35 years that had passed between the publication of Mendel’s theory and the affirmation of its importance as a troubling symptom of the modern condition. Once upon a time, all knowledge had been regarded as of a piece, and it had been possible for a great mind to hold within itself huge swathes of this collective knowledge of humanity, everything informing everything else. Think of that classic example of a Renaissance man, Leonardo da Vinci, who was simultaneously a musician, a physicist, a mathematician, an anatomist, a botanist, a geologist, a cartographer, an alchemist, an astronomer, an engineer, and an inventor. Most of all, of course, he was a great visual artist, but he used everything else he was carrying around in that giant brain of his to create paintings and drawings as technically meticulous as they were artistically sublime.

By Bush’s time, however, the world had long since entered the Age of the Specialist. As the sheer quantity of information in every field exploded, those who wished to do worthwhile work in any given field — even those people gifted with giant brains — were increasingly being forced to dedicate their intellectual lives entirely to that field and only that field, just to keep up. The intellectual elite were in danger of becoming a race of mole people, closeted one-dimensionals fixated always on the details of their ever more specialized trades, never on the bigger picture. And even then, the amount of information surrounding them was so vast, and existing systems for indexing and keeping track of it all so feeble, that they could miss really important stuff within their own specialties; witness the way the biologists of the late nineteenth century had missed Gregor Mendel’s work, and the 35-year head start it had cost the new science of genetics. “Mendel’s work was lost,” Bush would later write, “because of the crudity with which information is transmitted between men.” How many other major scientific advances were lying lost in the flood of articles being published every year, a flood that had increased by an order of magnitude just since Mendel’s time? “In this are thoughts,” wrote Bush, “certainly not often as great as Mendel’s, but important to our progress. Many of them become lost; many others are repeated over and over.” “This sort of catastrophe is undoubtedly being repeated all around us,” he believed, “as truly significant attainments become lost in the sea of the inconsequential.”

Bush’s musings were swept aside for a time by the rush of historical events. As the prospect of another world war loomed, he became President Franklin Delano Roosevelt’s foremost advisor on matters involving science and engineering. During the war, he shepherded through countless major advances in the technologies of attack and defense, culminating in the most fearsome weapon the world had ever known: the atomic bomb. It was actually this last that caused Bush to return to the seemingly unrelated topic of information management, a problem he now saw in a more urgent light than ever. Clearly the world was entering a new era, one with far less tolerance for the human folly, born of so much context-less mole-person ideology, that had spawned the current war.

Practical man that he was, Bush decided there was nothing for it but to roll up his sleeves and make a concrete proposal describing how humanity could solve the needle-in-a-haystack problem of the modern information explosion. Doing so must entail grappling with something as fundamental as “how creative men think, and what can be done to help them think. It is a problem of how the great mass of material shall be handled so that the individual can draw from it what he needs — instantly, correctly, and with utter freedom.”

As revolutionary manifestos go, Vannevar Bush’s “As We May Think” is very unusual in terms of both the man that wrote it and the audience that read it. Bush was no Karl Marx, toiling away in discontented obscurity and poverty. On the contrary, he was a wealthy upper-class patrician who was, as a member of the White House inner circle, about as fabulously well-connected as it was possible for a man to be. His article appeared first in the July 1945 edition of the Atlantic Monthly, hardly a bastion of radical thought. Soon after, it was republished in somewhat abridged form by Life, the most popular magazine on the planet. Thereby did this visionary document reach literally millions of readers.

With the atomic bomb still a state secret, Bush couldn’t refer directly to his real reasons for wanting so urgently to write down his ideas now. Yet the dawning of the atomic age nevertheless haunts his article.

It is the physicists who have been thrown most violently off stride, who have left academic pursuits for the making of strange destructive gadgets, who have had to devise new methods for their unanticipated assignments. They have done their part on the devices that made it possible to turn back the enemy, have worked in combined effort with the physicists of our allies. They have felt within themselves the stir of achievement. They have been part of a great team. Now, as peace approaches, one asks where they will find objectives worthy of their best.

Seen in one light, Bush’s essay is similar to many of those that would follow from other Manhattan Project alumni during the uncertain interstitial period between the end of World War II and the onset of the Cold War. Bush was like many of his colleagues in feeling the need to advance a utopian agenda to counter the apocalyptic potential of the weapon they had wrought, in needing to see the ultimate evil that was the atomic bomb in almost paradoxical terms as a potential force for good that would finally shake the world awake.

Bush was true to his engineer’s heart, however, in basing his utopian vision on technology rather than politics. The world was drowning in information, making the act of information synthesis — intradisciplinary and interdisciplinary alike — ever more difficult.

The difficulty seems to be, not so much that we publish unduly in view of the extent and variety of present-day interests, but rather that publication has been extended far beyond our present ability to make real use of the record. The summation of human experience is being expanded at a prodigious rate, and the means we use for threading through the consequent maze to the momentarily important item is the same as was used in the days of square-rigged ships.

Our ineptitude in getting at the record is largely caused by the artificiality of systems of indexing. When data of any sort are placed in storage, they are filed alphabetically or numerically, and information is found (when it is) by tracing it down from subclass to subclass. It can be in only one place, unless duplicates are used; one has to have rules as to which path will locate it, and the rules are cumbersome. Having found one item, moreover, one has to emerge from the system and reenter on a new path.

The human mind does not work that way. It operates by association. With one item in its grasp, it snaps instantly to the next that is suggested by the association of thoughts, in accordance with some intricate web of trails carried by the cells of the brain. It has other characteristics, of course; trails that are not frequently followed are prone to fade, items are not fully permanent, memory is transitory. Yet the speed of action, the intricacy of trails, the detail of mental pictures, is awe-inspiring beyond all else in nature.

Man cannot hope fully to duplicate this mental process artificially, but he certainly ought to be able to learn from it. In minor ways he may even improve it, for his records have relative permanency. The first idea, however, to be drawn from the analogy concerns selection. Selection by association, rather than indexing, may yet be mechanized. One cannot hope thus to equal the speed and flexibility with which the mind follows an associative trail, but it should be possible to beat the mind decisively in regard to the permanence and clarity of the items resurrected from storage.

Bush was not among the vanishingly small number of people who were working in the nascent field of digital computing in 1945. His “memex,” the invention he proposed to let an individual free-associate all of the information in her personal library, was more steampunk than cyberpunk, all whirring gears, snickering levers, and whooshing microfilm strips. But really, those things are just details; he got all of the important stuff right. I want to quote some more from “As We May Think,” and somewhat at length at that, because… well, because its vision of the future is just that important. This is how the memex should work:

When the user is building a trail, he names it, inserts the name in his code book, and taps it out on his keyboard. Before him are the two items to be joined, projected onto adjacent viewing positions. At the bottom of each there are a number of blank code spaces, and a pointer is set to indicate one of these on each item. The user taps a single key, and the items are permanently joined. In each code space appears the code word. Out of view, but also in the code space, is inserted a set of dots for photocell viewing; and on each item these dots by their positions designate the index number of the other item.

Thereafter, at any time, when one of these items is in view, the other can be instantly recalled merely by tapping a button below the corresponding code space. Moreover, when numerous items have been thus joined together to form a trail, they can be reviewed in turn, rapidly or slowly, by deflecting a lever like that used for turning the pages of a book. It is exactly as though the physical items had been gathered together from widely separated sources and bound together to form a new book. It is more than this, for any item can be joined into numerous trails.

The owner of the memex, let us say, is interested in the origin and properties of the bow and arrow. Specifically he is studying why the short Turkish bow was apparently superior to the English long bow in the skirmishes of the Crusades. He has dozens of possibly pertinent books and articles in his memex. First he runs through an encyclopedia, finds an interesting but sketchy article, leaves it projected. Next, in a history, he finds another pertinent item, and ties the two together. Thus he goes, building a trail of many items. Occasionally he inserts a comment of his own, either linking it into the main trail or joining it by a side trail to a particular item. When it becomes evident that the elastic properties of available materials had a great deal to do with the bow, he branches off on a side trail which takes him through textbooks on elasticity and tables of physical constants. He inserts a page of longhand analysis of his own. Thus he builds a trail of his interest through the maze of materials available to him.

And his trails do not fade. Several years later, his talk with a friend turns to the queer ways in which a people resist innovations, even of vital interest. He has an example, in the fact that the outraged Europeans still failed to adopt the Turkish bow. In fact he has a trail on it. A touch brings up the code book. Tapping a few keys projects the head of the trail. A lever runs through it at will, stopping at interesting items, going off on side excursions. It is an interesting trail, pertinent to the discussion. So he sets a reproducer in action, photographs the whole trail out, and passes it to his friend for insertion in his own memex, there to be linked into the more general trail.

Wholly new forms of encyclopedias will appear, ready-made with a mesh of associative trails running through them, ready to be dropped into the memex and there amplified. The lawyer has at his touch the associated opinions and decisions of his whole experience, and of the experience of friends and authorities. The patent attorney has on call the millions of issued patents, with familiar trails to every point of his client’s interest. The physician, puzzled by a patient’s reactions, strikes the trail established in studying an earlier similar case, and runs rapidly through analogous case histories, with side references to the classics for the pertinent anatomy and histology. The chemist, struggling with the synthesis of an organic compound, has all the chemical literature before him in his laboratory, with trails following the analogies of compounds, and side trails to their physical and chemical behavior.

The historian, with a vast chronological account of a people, parallels it with a skip trail which stops only on the salient items, and can follow at any time contemporary trails which lead him all over civilization at a particular epoch. There is a new profession of trail blazers, those who find delight in the task of establishing useful trails through the enormous mass of the common record. The inheritance from the master becomes, not only his additions to the world’s record, but for his disciples the entire scaffolding by which they were erected.
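Strip away the microfilm and the levers, and what Bush describes is recognizably a data structure: named trails, pairwise joins between items, sequential playback, side excursions, trails that can be copied whole into a friend’s library. As a minimal sketch of that structure — illustrative Python only, with every name invented for the occasion — it might look something like this:

```python
# A toy model of Bush's memex trails. Purely illustrative; the memex was
# a mechanical proposal from 1945, not a program, and all names here are
# invented for this example.

trails = {}  # trail name -> ordered list of item identifiers

def join(trail, item_a, item_b):
    """Join two items on a named trail, creating the trail if needed."""
    seq = trails.setdefault(trail, [])
    for item in (item_a, item_b):
        if item not in seq:
            seq.append(item)

# The bow-and-arrow researcher at work:
join("turkish-bow", "encyclopedia-article", "history-chapter")
join("turkish-bow", "history-chapter", "my-longhand-analysis")
join("elasticity", "history-chapter", "elasticity-textbook")  # a side trail

# Reviewing a trail "rapidly or slowly" -- or photographing it out for a
# friend's memex -- is just a matter of walking (or copying) the list:
for item in trails["turkish-bow"]:
    print(item)
```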

Ted Nelson

There is no record of what all those millions of Atlantic Monthly and Life readers made of Bush’s ideas in 1945 — or for that matter if they made anything of them at all. In the decades that followed, however, the article became a touchstone of the burgeoning semi-underground world of creative computing. Among its discoverers was Ted Nelson, who is, depending on whom you talk to, either one of the greatest visionaries in the history of computing or one of the greatest crackpots — or, quite possibly, both. Born in 1937 to a Hollywood director and his actress wife, then raised by his wealthy and indulgent grandparents following the inevitable Hollywood divorce, Nelson would lead a life largely defined by, as Gary Wolf put it in his classic profile for Wired magazine, his “aversion to finishing.” As in, finishing anything at all, or just the concept of finishing in the abstract. Well into middle age, he would be diagnosed with attention-deficit disorder, an alleged malady he came to celebrate as his “hummingbird mind.” This condition perhaps explains why he was so eager to find a way of forging permanent, retraceable associations among all the information floating around inside and outside his brain.

Nelson coined the terms “hypertext” and “hypermedia” at some point during the early 1960s, when he was a graduate student at Harvard. (Typically, he got a score of Incomplete in the course for which he invented them, not to mention an Incomplete on his PhD as a whole.) While they’re widely used all but interchangeably today, in Nelson’s original formulation the former term was reserved for purely textual works, the latter for those incorporating other forms of media, like images and sound. But today we’ll just go with the modern flow, call them all hypertexts, and leave it at that. In his scheme, then, hypertexts were texts capable of being “zipped” together with other hypertexts, memex-like, wherever the reader or writer wished to preserve associations between them. He presented his new buzzwords to the world at a conference of the Association for Computing Machinery in 1965, to little impact. Nelson, possessed of a loudly declamatory style of discourse and all the rabble-rousing fervor of a street-corner anarchist, would never be taken all that seriously by the academic establishment.

Instead, it being the 1960s and all, he went underground, embracing computing’s burgeoning counterculture. His eventual testament, one of the few things he ever did manage to complete — after a fashion, at any rate — was a massive 1200-page tome called Computer Lib/Dream Machines, self-published in 1974, just in time for the heyday of the Altair and the Homebrew Computer Club, whose members embraced Nelson as something of a patron saint. As the name would indicate, Computer Lib/Dream Machines was actually two separate books, bound back to back. Theoretically, Computer Lib was the more grounded volume, full of practical advice about gaining access to and using computers, while Dream Machines was full of the really out-there ideas. In practice, though, they were often hard to distinguish. Indeed, it was hard to even find anything in the books, which were published as mimeographed facsimile copies filled with jotted marginalia and cartoons drafted in Nelson’s shaky hand, with no table of contents or page numbers and no discernible organizing principle beyond the stream of consciousness of Nelson’s hummingbird mind. (I trust that the irony of a book concerned with finding new organizing principles for information itself being such an impenetrable morass is too obvious to be worth belaboring further.) Nelson followed Computer Lib/Dream Machines with 1981’s Literary Machines, a text written in a similar style that dwelt, when it could be bothered, at even greater length on the idea of hypertext.

The most consistently central theme of Nelson’s books, to whatever extent one could be discerned, was an elaboration of the hypertext concept he called Xanadu, after the pleasure palace in Samuel Taylor Coleridge’s poem “Kubla Khan.” The product of an opium-fueled hallucination, the 54-line poem is a mere fragment of a much longer work Coleridge had intended to write. Problem was, in the course of writing down the first part of his waking dream he was interrupted; by the time he returned to his desk he had simply forgotten the rest.

So, Nelson’s Xanadu was intended to preserve information that would otherwise be lost, a goal it would achieve through associative linking on a global scale. Beyond that, it was almost impossible to say precisely what Xanadu was or wasn’t. Certainly it sounds much like the World Wide Web to modern ears, but Nelson insists adamantly that the web is a mere bad implementation of the merest shadow of his full idea. Xanadu has been under allegedly active development since the late 1960s, making it the most long-lived single project in the history of computer programming, and by far history’s most legendary piece of vaporware. As of this writing, the sum total of all those years of work is a set of web pages written in Nelson’s inimitable declamatory style, littered with angry screeds against the World Wide Web, along with some online samples that either don’t work quite right or are simply too paradigm-shattering for my poor mind to grasp.

In my own years on this planet, I’ve come to reserve my greatest respect for people who finish things, a judgment which perhaps makes me less than the ideal critic of Ted Nelson’s work. Nevertheless, even I can recognize that Nelson deserves huge credit for transporting Bush’s ideas to their natural habitat of digital computers, for inventing the term “hypertext,” for defining an approach to links (or “zips”) in a digital space, and, last but far from least, for making the crucial leap from Vannevar Bush’s concept of the single-user memex machine to an interconnected global network of hyperlinks.

But of course ideas, of which both Bush and Nelson had so many, are not finished implementations. During the 1960s, 1970s, and early 1980s, there were various efforts — in addition, that is, to the quixotic effort that was Xanadu — to wrestle at least some of the concepts put forward by these two visionaries into concrete existence. Yet it wouldn’t be until 1987 that a corporation with real financial resources and real commercial savvy would at last place a reasonably complete implementation of hypertext before the public. And it all started with a frustrated programmer looking for a project.

Steve Jobs and Bill Atkinson

Had he never had anything to do with hypertext, Bill Atkinson’s place in the history of computing would still be assured. Coming to Apple Computer in 1978, when the company was only about eighteen months removed from that famous Cupertino garage, Atkinson was instrumental in convincing Steve Jobs to visit the Xerox Palo Alto Research Center, thereby setting in motion the chain of events that would lead to the Macintosh. A brilliant programmer by anybody’s measure, he eventually wound up on the Lisa team. He wrote the routines to draw pixels onto the Lisa’s screen — routines on which, what with the Lisa being a fundamentally graphical machine whose every display was bitmapped, every other program depended. Jobs was so impressed by Atkinson’s work on what Atkinson called LisaGraf that he recruited him to port his routines over to the nascent Macintosh. Atkinson’s routines, now dubbed QuickDraw, would remain at the core of MacOS for the next fifteen years. But Atkinson’s contribution to the Mac went yet further: after QuickDraw, he proceeded to design and program MacPaint, one of the two applications included with the finished machine, and one that’s still justifiably regarded as a little marvel of intuitive user-interface design.

Atkinson’s work on the Mac was so essential to the machine’s success that shortly after its release he became just the fourth person to be named an Apple Fellow — an honor that carried with it, implicitly if not explicitly, a degree of autonomy for the recipient in the choosing of future projects. The first project that Atkinson chose for himself was something he called the Magic Slate, based on a gadget called the Dynabook that had been proposed years before by Xerox PARC alum (and Atkinson’s fellow Apple Fellow) Alan Kay: a small, thin, inexpensive handheld computer controlled via a touch screen. It was, as anyone who has ever seen an iPhone or iPad will attest, a prescient project indeed, but also one that simply wasn’t realizable using mid-1980s computer technology. Having been convinced of this at last by his skeptical managers after some months of flailing, Atkinson wondered whether he might create the next best thing: a software version of the Magic Slate, running on the Macintosh desktop.

In a way, the Magic Slate had always had as much to do with the ideas of Bush and Nelson as it did with those of Kay. Atkinson had envisioned its interface as a network of “pages” which the user navigated among by tapping links therein — a hypertext in its own right. Now he transported the same concept to the Macintosh desktop, whilst making his metaphorical pages into metaphorical stacks of index cards. He called the end result, the product of many months of design and programming, “Wildcard.” Later, when the trademark “Wildcard” proved to be tied up by another company, it turned into “HyperCard” — a much better name anyway in my book.

By the time he had HyperCard in some sort of reasonably usable shape, Atkinson was all but convinced that he would have to either sell the thing to some outside software publisher or start his own company to market it. With Steve Jobs now long gone and with him much of the old Jobsian spirit of changing the world through better computing, Apple was heavily focused on turning the Macintosh into a practical business machine. The new, more sober mood in Cupertino — not to mention Apple’s more buttoned-down public image — would seem to indicate that they were hardly up for another wide-eyed “revolutionary” product. It was Alan Kay, still kicking around Cupertino puttering with this and that, who convinced Atkinson to give CEO John Sculley a chance before he took HyperCard elsewhere. Kay brokered a meeting in which Atkinson personally demonstrated to Sculley what he’d been working on all these months. Much to Atkinson’s surprise, Sculley loved HyperCard. Apparently at least some of the old Jobsian fervor was still alive and well after all inside Apple’s executive suite.

At its most basic, a HyperCard stack to modern eyes resembles nothing so much as a PowerPoint presentation, albeit one which can be navigated non-linearly by tapping links on the slides themselves. Just as in PowerPoint, the HyperCard designer could drag and drop various forms of media onto a card. Taken even at this fairly superficial level, HyperCard was already a full-fledged hypertext-authoring (and hypertext-reading) tool — by no means the first specimen of its kind, but the first with the requisite combination of friendliness, practicality, and attractiveness to make it an appealing environment for the everyday computer user. One of Atkinson’s favorite early demo stacks had many cards with pictures of people wearing hats. If you clicked on a hat, you were sent to another card showing someone else wearing a hat. Ditto for other articles of fashion. It may sound banal, but this really was revolutionary, organization by association in action. Indeed, one might say that HyperCard was Vannevar Bush’s memex, fully realized at last.

But the system showed itself to have much, much more to offer when the author started to dig into HyperTalk, the included scripting language. All sorts of logic, simple or complex, could be accomplished by linking scripts to clicks on the surface of the cards. At this level, HyperCard became an almost magical tool for some types of game development, as we’ll see in future articles. It was also a natural fit for many other applications: information kiosks, interactive tutorials, educational software, expert systems, reference libraries, etc.
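A HyperTalk button script could read almost like English — “on mouseUp / go to card "Turkish Bow" / end mouseUp” is a complete, working handler. For readers who have never seen HyperCard, here is a toy sketch of the card-button-script model in Python; it illustrates the idea only, with all names invented for the example, and is in no way a picture of how HyperCard actually worked internally:

```python
# A toy model of a HyperCard-style stack: cards hold content, buttons hold
# scripts, and clicking a button runs its script. Illustrative only.

class Card:
    def __init__(self, name, text=""):
        self.name = name
        self.text = text
        self.buttons = {}  # button label -> handler (the "script")

class Stack:
    def __init__(self):
        self.cards = {}
        self.current = None

    def add_card(self, card):
        self.cards[card.name] = card
        if self.current is None:
            self.current = card

    def go_to(self, name):
        # The rough equivalent of HyperTalk's "go to card" navigation.
        self.current = self.cards[name]

    def click(self, label):
        # Clicking a button runs whatever script is attached to it.
        self.current.buttons[label](self)

# Two cards linked by association, in the spirit of Atkinson's hat demo:
stack = Stack()
fedora = Card("fedora", "A man wearing a fedora.")
bowler = Card("bowler", "A woman wearing a bowler.")
fedora.buttons["hat"] = lambda s: s.go_to("bowler")
bowler.buttons["hat"] = lambda s: s.go_to("fedora")
stack.add_card(fedora)
stack.add_card(bowler)

stack.click("hat")
print(stack.current.text)  # -> "A woman wearing a bowler."
```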

HyperCard in action

John Sculley himself premiered HyperCard at the August 1987 MacWorld show. Showing unusual largesse in his determination to get HyperCard into the hands of as many people as possible as quickly as possible, he announced that henceforward all new Macs would ship with a free copy of the system, while existing owners could buy copies for their machines for just $49. He called HyperCard the most important product Apple had released during his tenure there. Considering that Sculley had also been present for the launch of the original Macintosh, this was certainly saying something. And yet he wasn’t clearly in the wrong either. The Macintosh gave practical commercial form to the computer-interface paradigms pioneered at Xerox PARC during the 1970s; the concept of associative indexing — hyperlinking — has proved at least as significant to our digital lives of today. But then, the two do go together like strawberries and cream, the point-and-click paradigm providing the perfect way to intuitively navigate through a labyrinth of hyperlinks. It was no coincidence that an enjoyable implementation of hypertext appeared first on the Macintosh; the latter almost seemed a prerequisite for the former.

The full import of the concept of hypertext was far from easy to get across in advertising copy, but Apple gave it a surprisingly serious go, paying due homage to Vannevar Bush in the process.

In the wake of that MacWorld presentation, a towering tide of HyperCard hype rolled from one side of the computer industry to the other, out into the mainstream media, and then back again, over and over. Hypertext’s time had finally come. In 1985, it was an esoteric fringe concept known only to academics and a handful of hackers, treated at real length and depth in print only in Ted Nelson’s own sprawling, well-nigh impenetrable tomes. Four years later, every bookstore in the land sported a shelf positively groaning with trendy paperbacks advertising hypertext this and hypertext that. By then the curmudgeons had also begun to come out in force, always a sure sign that an idea has truly reached critical mass. Presentations showed up in conference catalogs with snarky titles like “Hypertext: Will It Cook Me Breakfast Too?”

The curmudgeons had plenty of rabid enthusiasm to push back against. HyperCard, even more so than the Macintosh itself, had a way of turning the most sober-minded computing veterans into starry-eyed fanatics. Jan Lewis, a longtime business-computing analyst, declared that “HyperCard is going to revolutionize the way computing is done, and possibly the way human thought is done.” Throwing caution to the wind, she abandoned her post at InfoWorld to found HyperAge, the first magazine dedicated to the revolution. “There’s a tremendous demand,” she said. “If you look at the online services, the bulletin boards, the various ad hoc meetings, user groups — there is literally a HyperCulture developing, almost a cult.” To judge from her own impassioned statements, she should know. She recruited Ted Nelson himself — one of the HyperCard holy trinity of Bush, Nelson, and Atkinson — to write a monthly column.

HyperCard effectively amounted to an entirely new computing platform that just happened to run atop the older platform that was the Macintosh. As Lewis noted, user-created HyperCard stacks — this new platform’s word for “programs” or “software” — were soon being traded all over the telecommunications networks. The first commercial publisher to jump into the HyperCard game was, somewhat surprisingly, Mediagenic.1 Bruce Davis, Mediagenic’s CEO, has hardly gone down in history as a paragon of progressive thought in the realms of computer games and software in general, but he defied his modern reputation in this one area at least by pushing quickly and aggressively into “stackware.” One of the first examples of same that Mediagenic published was Focal Point, a collection of business and personal-productivity tools written by one Danny Goodman, who was soon to publish a massive bible called The Complete HyperCard Handbook, thus securing for himself the mantle of the new ecosystem’s go-to programming guru. Focal Point was a fine demonstration that just about any sort of software could be created by the sufficiently motivated HyperCard programmer. But it was another early Mediagenic release, City to City, that was more indicative of the system’s real potential. It was a travel guide to most major American cities — effortlessly browsable and searchable, covering “the best food, lodgings, and other necessities” to be found in each of the metropolises in its database.

City to City

Other publishers — large, small, and just starting out — followed Mediagenic’s lead, releasing a bevy of fascinating products. The people behind The Whole Earth Catalog — themselves the inspiration for Ted Nelson’s efforts in self-publication — converted their current edition into a HyperCard stack filling a staggering 80 floppy disks. A tiny company called Voyager combined HyperCard with a laser-disc player — a very common combination among ambitious early HyperCard developers — to offer an interactive version of the National Gallery of Art which could be explored using such associative search terms as “Impressionist landscapes with boats.” Culture 1.0 let you explore its namesake through “3700 years of Western history — over 200 graphics, 2000 hypertext links, and 90 essays covering topics from the Black Plague to Impressionism,” all on just 7 floppy disks. Mission: The Moon, from the newly launched interactive arm of ABC News, gathered together details of every single Mercury, Gemini, and Apollo mission, including videos of each mission hosted on a companion laser disc. A professor of music converted his entire Music Appreciation 101 course into a stack. The American Heritage Dictionary appeared as stackware. And lots of what we might call “middlestackware” appeared to help budding programmers with their own creations: HyperComposer for writing music in HyperCard, Take One for adding animations to cards.

Just two factors were missing from HyperCard to allow hypertext to reach its full potential. One was a storage medium capable of holding lots of data, to allow for truly rich multimedia experiences, combining the lavish amounts of video, still pictures, music, sound, and of course text that the system clearly cried out for. Thankfully, that problem was about to be remedied via a new technology which we’ll be examining in my very next article.

The other problem was a little thornier, and would take a little longer to solve. For all its wonders, a HyperCard stack was still confined to the single Macintosh on which it ran; there was no provision for linking between stacks running on entirely separate computers. In other words, one might think of a HyperCard stack as equivalent to a single web site running locally off a single computer’s hard drive, without the ability to field external links alongside its internal links. Thus the really key component of Ted Nelson’s Xanadu dream, that of a networked hypertext environment potentially spanning the entire globe, remained unrealized. In 1990, Bill Nisen, the developer of a hypertext system called Guide that slightly predated HyperCard but wasn’t as practical or usable, stated the problem thus:

The one thing that is precluding the wide acceptance of hypertext and hypermedia is adequate broadcast mechanisms. We need to find ways in which we can broadcast the results of hypermedia authoring. We’re looking to in the future the ubiquitous availability of local-area networks and low-cost digital-transmission facilities. Once we can put the results of this authoring into the hands of more users, we’re going to see this industry really explode.

Already at the time Nisen made that statement, a British researcher named Tim Berners-Lee had started to experiment with something he called the Hypertext Transfer Protocol. The first real web site, the beginning of the World Wide Web, would go online in 1991. It would take a few more years even from that point, but a shared hypertextual space of a scope and scale the likes of which few could imagine was on the way. The world already had its memex in the form of HyperCard. Now — and although this equivalency would scandalize Ted Nelson — it was about to get its Xanadu.

Associative indexing permeates our lives so thoroughly today that, as with so many truly fundamental paradigm shifts, the full scope of the change it has wrought can be difficult to fully appreciate. A century ago, education was still largely an exercise in retention: names, dates, Latin verb conjugations. Today’s educational institutions — at least the more enlightened ones — recognize that it’s more important to teach their pupils how to think than it is to fill their heads with facts; facts, after all, are now cheap and easy to acquire when you need them. That such a revolution in the way we think about thought happened in just a couple of decades strikes me as incredible. That I happened to be present to witness it strikes me as amazing.

What I’ve witnessed has been a revolution in humanity’s relationship to information itself that’s every bit as significant as any political revolution in history. Some Singularity proponents will tell you that it marks the first step on the road to a vast worldwide consciousness. But even if you choose not to go that far, the ideas of Vannevar Bush and Ted Nelson are still with you every time you bring up Google. We live in a world in which much of the sum total of human knowledge is available over an electronic connection found in almost every modern home. This is wondrous. Yet what’s still more wondrous is the way that we can find almost any obscure fact, passage, opinion, or idea we like from within that mass, thanks to selection by association. Mama, we’re all cyborgs now.

(Sources: the books Hackers: Heroes of the Computer Revolution and Insanely Great: The Life and Times of the Macintosh, the Computer That Changed Everything by Steven Levy; Computer Lib/Dream Machines and Literary Machines by Ted Nelson; From Memex to Hypertext: Vannevar Bush and the Mind’s Machine, edited by James M. Nyce and Paul Kahn; The New Media Reader, edited by Noah Wardrip-Fruin and Nick Montfort; Multimedia and Hypertext: The Internet and Beyond by Jakob Nielsen; The Making of the Atomic Bomb by Richard Rhodes. Also the June 1995 Wired magazine profile of Ted Nelson; Andy Hertzfeld’s website Folklore; and the Computer Chronicles television episodes entitled “HyperCard,” “MacWorld Special 1988,” “HyperCard Update,” and “Hypertext.”)


  1. Mediagenic was known as Activision until mid-1988. To avoid confusion, I just stick with the name “Mediagenic” in this article. 

 

Cracking Open the Mac

The Macintosh II

The biggest problem with the Macintosh hardware was pretty obvious, which was its limited expandability. But the problem wasn’t really technical as much as philosophical, which was that we wanted to eliminate the inevitable complexity that was a consequence of hardware expandability, both for the user and the developer, by having every Macintosh be identical. It was a valid point of view, even somewhat courageous, but not very practical, because things were still changing too fast in the computer industry for it to work, driven by the relentless tides of Moore’s Law.

— original Macintosh team-member Andy Hertzfeld

Jef Raskin and Steve Jobs didn’t agree on much, but they did agree on their loathing for expansion slots. The absence of slots was one of the bedrock attributes of Raskin’s original vision for the Macintosh, the most immediately obvious difference between it and Apple’s then-current flagship product, the Apple II. In contrast to Steve Wozniak’s beloved hacker plaything, Raskin’s computer for the people would be as effortless to set up and use as a stereo, a television, or a toaster.

When Jobs took over the Macintosh project — some, including Raskin himself, would say stole it — he changed just about every detail except this one. Yet some members of the tiny team he put together, fiercely loyal to their leader and his vision of a “computer for the rest of us” though they were, were beginning to question the wisdom of this aspect of the machine by the time the Macintosh came together in its final form. It was a little hard in January of 1984 not to question the wisdom of shipping an essentially unexpandable appliance with just 128 K of memory and a single floppy-disk drive for a price of $2495. At some level, it seemed, this just wasn’t how the computer market worked.

Jobs would reply that the whole point of the Macintosh was to change how computers worked, and with them the workings of the computer market. He wasn’t entirely without concrete arguments to back up his position. One had only to glance over at the IBM clone market — always Jobs’s first choice as the antonym to the Mac — to see how chaotic a totally open platform could be. Clone users were getting all too familiar with the IRQ and memory-address conflicts that could result from plugging two cards that were determined not to play nice together into the same machine, and software developers were getting used to chasing down obscure bugs that only popped up when their programs ran on certain combinations of hardware.

Viewed in the big picture, we could actually say that Jobs was prescient in his determination to stamp out that chaos, to make every Macintosh the same as every other, to make the platform in general a thoroughly known quantity for software developers. The norm in personal computing as most people know it — whether we’re talking phones, tablets, laptops, or increasingly even desktop computers — has long since become sealed boxes of one stripe or another. But there are some important factors that make said sealed boxes a better idea now than they were back then. For one thing, the pace of hardware and software development alike has slowed enough that a new computer can remain viable, just as it was purchased, for ten years or more. For another, prices have come down enough that throwing a device away and starting over with a new one isn’t so cost-prohibitive as it once was. With personal computers still exotic, expensive machines in a constant state of flux at the time of the Mac’s introduction, the computer as a sealed appliance was a vastly more problematic proposition.

Determined to do everything possible to keep users out of the Mac’s innards, Apple sealed it with Torx screws, for which screwdrivers weren’t then commonly available, and even threatened users with electrocution should they persist in trying to open it. The contrast with the Apple II, whose top could be popped in seconds using nothing more than a pair of hands to reveal seven tempting expansion slots, could hardly have been more striking.

It was the early adopters who spotted the potential in that first slow, under-powered Macintosh, the people who believed Jobs’s promise that the machine’s success or failure would be determined by the number who bought it in its first hundred days on the market, who bore the brunt of Apple’s decision to seal it as tightly as Fort Knox. When Apple in September of 1984 released the so-called “Fat Mac” with 512 K of memory, the quantity that in the opinion of just about everyone — including most of those at Apple not named Steve Jobs — the machine should have shipped with in the first place, owners of the original model were offered the opportunity to bring their machines to their dealers and have them retro-fitted to the new specifications for $995. This “deal” sparked considerable outrage and even a letter-writing campaign that tried to shame Apple into bettering the terms of the upgrade. Disgruntled existing owners pointed out that their total costs for a 512 K Macintosh amounted to $3490, while a Fat Mac could be bought outright by a prospective new member of the Macintosh fold for $2795. “Apple should have bent over backward for the people who supported it in the beginning,” said one of the protest’s ringleaders. “I’m never going to feel the same about Apple again.” Apple, for better or for worse never a company that was terribly susceptible to such public shaming, sent their disgruntled customers a couple of free software packages and told them to suck it up.

The Macintosh Plus

Barely fifteen months later, when Apple released the Macintosh Plus with 1 MB of memory among other advancements, the merry-go-round spun again. This time the upgrade would cost owners of the earlier models over $1000, along with lots of downtime while their machines sat in queues at their dealers. With software developers rushing to take advantage of the increased memory of each successive model, dedicated users could hardly stand to regard each successive upgrade as optional. As things stood, then, they were effectively paying a service charge of about $1000 per year just to remain a part of the Macintosh community. Owning a Mac was like owning a car that had to go into the shop for a week for a complete engine overhaul once every year. Apple, then as now, was famous for the loyalty of their users, but this was stretching even that legendary goodwill to the breaking point.

For some time voices within Apple had been mumbling that this approach simply couldn’t continue if the Macintosh was to become a serious, long-lived computing platform; Apple simply had to open the Mac up, even if that entailed making it a little more like all those hated beige IBM clones. During the first months after the launch, Steve Jobs was able to stamp out these deviations from his dogma, but as sales stalled and his relationship with John Sculley, the CEO he’d hand-picked to run the company he’d co-founded, deteriorated, the grumblers grew steadily more persistent and empowered.

The architect of one of the more startling about-faces in Apple’s corporate history would be Jean-Louis Gassée, a high-strung marketing executive newly arrived in Silicon Valley from Apple’s French subsidiary. Gassée privately — very privately in the first months after his arrival, when Jobs’s word was still law — agreed with many on Apple’s staff that the only way to achieve the dream of making the Macintosh into a standard to rival or beat the Intel/IBM/Microsoft trifecta was to open the platform. Thus he quietly encouraged a number of engineers to submit proposals on what direction they would take the platform in if given free rein. He came to favor the ideas of Mike Dhuey and Brian Berkeley, two young engineers who envisioned a machine with slots as plentiful and easily accessible as those of the Apple II or an IBM clone. Their “Little Big Mac” would be based around the 32-bit Motorola 68020 chip rather than the 16-bit 68000 of the current models, and would also sport color — another Jobsian heresy.

In May of 1985, Jobs made the mistake of trying to recruit Gassée into a rather clumsy conspiracy he was formulating to oust Sculley, with whom he was now in almost constant conflict. Rather than jump aboard the coup train, Gassée promptly blew the whistle to Sculley, precipitating an open showdown between Jobs and Sculley in which, much to Jobs’s surprise, the entirety of Apple’s board backed Sculley. Stripped of his power and exiled to a small office in a remote corner of Apple’s Cupertino campus, Jobs would soon depart amid recriminations and lawsuits to found a new venture called NeXT.

Gassée’s betrayal of Jobs’s confidence may have had a semi-altruistic motivation. Convinced that the Mac needed to open up to survive, perhaps he concluded that that would only happen if Jobs was out of the picture. Then again, perhaps it came down to a motivation as base as personal jealousy. With a penchant for leather and a love of inscrutable phraseology — “the Apple II smelled like infinity” is a typical phrase from his manifesto The Third Apple, “an invitation to voyage into a region of the mind where technology and poetry exist side by side, feeding each other” — Gassée seemed to self-consciously adopt the persona of a Gallic version of Jobs himself. But regardless, with Jobs now out of the picture Gassée was able to consolidate his own power base, taking over Jobs’s old role as leader of the Macintosh division. He went out and bought a personalized license plate for his sports car: “OPEN MAC.”

Coming some four months after Jobs’s final departure, the Mac Plus already included such signs of the changing times as a keyboard with arrow keys and a numeric keypad, anathema to Jobs’s old mouse-only orthodoxy. But much, much bigger changes were also well underway. Apple’s 1985 annual report, released in the spring of 1986, dropped a bombshell: a Mac with slots was on the way. Dhuey and Berkeley’s open Macintosh was now proceeding… well, openly.

The Macintosh II

When it debuted five months behind schedule in March of 1987, the Macintosh II was greeted as a stunning but welcome repudiation of much of what the Mac had supposedly stood for. In place of the compact all-in-one-case designs of the past, the new Mac was a big, chunky box full of empty space and empty slots — six of them altogether — with the monitor an item to be purchased separately and perched on top. Indeed, one could easily mistake the Mac II at a glance for a high-end IBM clone; its big, un-stylish case even included a cooling fan, an item that placed even higher than expansion slots and arrow keys on Steve Jobs’s old list of forbidden attributes.

Apple’s commitment to their new vision of a modular, open Macintosh was so complete that the Mac II didn’t include any on-board video at all; the buyer of the $6500 machine would still have to buy the video card of her choice separately. Apple’s own high-end video card offered display capabilities unprecedented in a personal computer: a palette of over 16 million colors, 256 of them displayable onscreen at any one time at resolutions as high as 640 × 480. And, in keeping with the philosophy behind the Mac II as a whole, the machine was ready and willing to accept a still more impressive graphics card just as soon as someone managed to make one. The Mac II actually represented colors internally using 48 bits — 16 bits for each of the red, green, and blue channels — allowing some 281 trillion (2^48) distinct shades. These idealized colors were then translated automatically into the closest approximations the actual display hardware could manage. This fidelity to the subtlest vagaries of color would make the Mac II the favorite of people working in many artistic and image-processing fields, especially when those aforementioned even better video cards began to hit the market in earnest. Even today no other platform can match the Mac in its persnickety attention to the details of accurate color reproduction.
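That translation — taking an idealized 48-bit color and finding the nearest color the installed hardware could actually show — is easy to sketch in outline. Here is a toy Python illustration of the nearest-color idea only, under the stated assumption of 16-bit channels; it is not Apple’s actual algorithm, and the palette and names are invented for the example:

```python
# Illustrative only: snapping an idealized 16-bits-per-channel color to the
# closest entry in a small display palette, by squared distance in RGB space.
# Not Apple's actual method; names and palette invented for the example.

def nearest(palette, color):
    """Return the palette entry closest to color; components run 0..65535."""
    return min(palette, key=lambda p: sum((a - b) ** 2
                                          for a, b in zip(p, color)))

print(2 ** 48)  # 281474976710656 -- the "281 trillion" shades in question

palette = [(0, 0, 0), (65535, 0, 0), (0, 65535, 0), (65535, 65535, 65535)]
print(nearest(palette, (60000, 52000, 48000)))  # -> (65535, 65535, 65535)
```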

Some of the Mac II’s capabilities truly were ahead of their time. Here we see a desktop extended across two monitors, each powered by its own video card.

The irony wasn’t lost on journalists or users when, just weeks after the Mac II’s debut, IBM debuted their new PS/2 line, marked by sleeker, slimmer cases and many features that would once have been placed on add-on-cards now integrated into the motherboards. While Apple was suddenly encouraging the sort of no-strings-attached hardware hacking on the Macintosh that had made their earlier Apple II so successful, IBM was trying to stamp that sort of thing out on their own heretofore open platform via their new Micro Channel Architecture, which demanded that anyone other than IBM who wanted to expand a PS/2 machine negotiate a license and pay for the privilege. “The original Mac’s lack of slots stunted its growth and forced Apple to expand the machine by offering new models,” wrote Byte. “With the Mac II, Apple — and, more importantly, third-party developers — can expand the machine radically without forcing you to buy a new computer. This is the design on which Apple plans to build its Macintosh empire.” It seemed like the whole world of personal computing was turning upside down, Apple turning into IBM and IBM turning into Apple.

The Macintosh SE

If so, however, Apple’s empire would be a very exclusive place. By the time you’d bought a monitor, video card, hard drive, keyboard — yes, even the keyboard was a separate item — and other needful accessories, a Mac II system could rise uncomfortably close to the $10,000 mark. Those who weren’t quite flush enough to splash out that much money could still enjoy a taste of the Mac’s new spirit of openness via the simultaneously released Mac SE, which cost $3699 for a hard-drive-equipped model. The SE was a 68000-based machine that looked much like its forefathers — built-in black-and-white monitor included — but did have a single expansion slot inside its case. The single slot was a little underwhelming in comparison to the Mac II, but it was better than nothing, even if Apple did still recommend that customers take their machines to their dealers if they wanted to actually install something in it. Apple’s not-terribly-helpful advice for those needing to employ more than one expansion card was to buy an “integrated” card that combined multiple functions. If you couldn’t find a card that happened to combine exactly the functions you needed, you were presumably just out of luck.

During the final years of the 1980s, Apple would continue to release new models of the Mac II and the Mac SE, now established as the two separate Macintosh flavors. These updates enhanced the machines with such welcome goodies as 68030 processors and more memory, but, thanks to the wonders of open architecture, didn’t immediately invalidate the models that had come before. The original Mac II, for instance, could be easily upgraded from the 68020 to the 68030 just by dropping a card into one of its slots.

The Steve Jobs-less Apple, now thoroughly under the control of the more sober and pragmatic John Sculley, toned down the old visionary rhetoric in favor of a more businesslike focus. Even the engineers dutifully toed the new corporate line, at least publicly, and didn’t hesitate to denigrate Apple’s erstwhile visionary-in-chief in the process. “Steve Jobs thought that he was right and didn’t care what the market wanted,” Mike Dhuey said in an interview to accompany the Mac II’s release. “It’s like he thought everyone wanted to buy a size-nine shoe. The Mac II is specifically a market-driven machine, rather than what we wanted for ourselves. My job is to take all the market needs and make the best computer. It’s sort of like musicians — if they make music only to satisfy their own needs, they lose their audience.” Apple, everyone was trying to convey, had grown up and left all that changing-the-world business behind along with Steve Jobs. They were now as sober and serious as IBM, their machines ready to take their places as direct competitors to those of Big Blue and the clonesters.

To a rather surprising degree, the world of business computing accepted Apple and the Mac’s new persona. Through 1986, the machines to which the Macintosh was most frequently compared were the Commodore Amiga and Atari ST. In the wake of the Mac II and Mac SE, however, the Macintosh was elevated to a different plane. Now the omnipresent point of comparison was high-end IBM compatibles; the Amiga and ST, despite their architectural similarities, seldom even saw their existence acknowledged in relation to the Mac. There were some good reasons for this neglect beyond the obvious ones of pricing and parent-company rhetoric. For one, the Macintosh was always a far more polished experience for the end user than either of the other 68000-based machines. For another, Apple had enjoyed a far more positive reputation with corporate America than Commodore or Atari, dating back to well before any of the three platforms in question existed. Still, the nature of the latest magazine comparisons was a clear sign that Apple’s bid to move the Mac upscale was succeeding.

Whatever one thought of Apple’s new, more buttoned-down image, there was no denying that the market welcomed the open Macintosh with a matching set of open arms. Byte went so far as to call the Mac II “the most important product that Apple has released since the original Apple II,” thus elevating it to a landmark status greater even than that of the first Mac model. While history hasn’t been overly kind to that judgment, the fact remains that third-party software and hardware developers, who had heretofore been constipated by the frustrating limitations of the closed Macintosh architecture, burst out now in myriad glorious ways. “We can’t think of everything,” said an ebullient Jean-Louis Gassée. “The charm of a flexible, open product is that people who know something you don’t know will take care of it. That’s what they’re doing in the marketplace.” The biannual Macworld shows gained a reputation as the most exciting events on the industry’s calendar, the beat to which every journalist lobbied to be assigned. The January 1988 show in San Francisco, the first to reflect the full impact of Apple’s philosophical about-face, had 20,000 attendees on its first day, and could have had a lot more than that had there been a way to pack them into the exhibit hall. Annual Macintosh sales more than tripled between 1986 and 1988, with cumulative sales hitting 2 million machines in the latter year. And already fully 200,000 of the Macs out there by that point were Mac IIs, an extraordinary number really given that machine’s high price. Granted, the Macintosh had hit the 2-million mark fully three years behind the pace Steve Jobs had foreseen shortly after the original machine’s introduction. But nevertheless, it did look like at least some of the more modest of his predictions were starting to come true at last.

An Apple Watch 27 years before its time? Just one example of the extraordinary innovation of the Macintosh market was the WristMac from Ex Machina, a “personal information manager” that could be synchronized with a Mac to take the place of your appointment calendar, to-do list, and Rolodex.

While the Macintosh was never going to seriously challenge the IBM standard on the desks of corporate America when it came to commonplace business tasks like word processing and accounting, it was becoming a fixture in design departments of many stripes, and the staple platform of entire niche industries — most notably, the publishing industry, thanks to the revolutionary combination of Aldus PageMaker (or one of the many other desktop-publishing packages that followed it) and an Apple LaserWriter printer (or one of the many other laser printers that followed it). By 1989, Apple could claim about 10 percent of the business-computing market, making them the third biggest player there after IBM and Compaq — and of course the only significant player there not running a Microsoft operating system. What with Apple’s premium prices and high profit margins, third place really wasn’t so bad, especially in comparison with the moribund state of the Macintosh of just a few years before.

Steve Jobs and John Sculley in happier times.

So, the Macintosh was flying pretty high as the curtain began to come down on the 1980s. It’s instructive and more than a little ironic to contrast the conventional wisdom that accompanied that success with the conventional wisdom of today. Despite the strong counterexample of Nintendo’s exploding walled garden over in the videogame-console space, the success the Macintosh had enjoyed since Apple’s decision to open up the platform was taken as incontrovertible proof that openness in terms of software and hardware alike was the only viable model for computing’s future. In today’s world of closed iOS and Android ecosystems and computing via disposable black boxes, such an assertion sounds highly naive.

But even more striking is the shift in the perception of Steve Jobs. In the late 1980s, he was loathed even by many strident Mac fans, whilst being regarded in the business and computer-industry press and, indeed, much of the popular press in general as a dilettante, a spoiled enfant terrible whose ill-informed meddling had very nearly sunk a billion-dollar corporation. John Sculley, by contrast, was lauded as exactly the responsible grown-up Apple had needed to scrub the company of Jobs’s starry-eyed hippie meanderings and lead them into their bright businesslike present. Today popular opinion on the two men has neatly reversed itself: Sculley is seen as the unimaginative corporate wonk who mismanaged Jobs’s brilliant vision, Jobs as the greatest — or at least the coolest — computing visionary of all time. In the end, of course, the truth must lie somewhere in the middle. Sculley’s strengths tended to be Jobs’s weaknesses, and vice versa. Apple would have been far better off had the two been able to find a way to continue to work together. But, in Jobs’s case especially, that would have required a fundamental shift in who these men were.

The loss among Apple’s management of that old Jobsian spirit of zealotry, overblown and impractical though it could sometimes be, was felt keenly by the Macintosh even during these years of considerable success. Only Jean-Louis Gassée was around to try to provide a splash of the old spirit of iconoclastic idealism, and everyone had to agree in the end that he made a rather second-rate Steve Jobs. When Sculley tried on the mantle of visionary — as when he named his fluffy corporate autobiography Odyssey and subtitled it “a journey of adventure, ideas, and the future” — it never quite seemed to fit him right. The diction was always off somehow, like he was playing a Silicon Valley version of Mad Libs. “This is an adventure of passion and romance, not just progress and profit,” he told the January 1988 Macworld attendees, apparently feeling able to wax a little more poetic than usual before this audience of true believers. “Together we set a course for the world which promises to elevate the self-esteem of the individual rather than a future of subservience to impersonal institutions.” (Apple detractors might note that elevating their notoriously smug users’ self-esteem did indeed sometimes seem to be what the company was best at.)

It was hard not to feel that the Mac had lost something. Jobs had lured Sculley from Pepsi because the latter was widely regarded as a genius of consumer marketing; the Pepsi Challenge, one of the most iconic campaigns in the long history of the cola wars, had been his brainchild. And yet, even before Jobs’s acrimonious departure, Sculley, bowing to pressure from Apple’s stockholders, had oriented the Macintosh almost entirely toward taking on the faceless legions of IBM and Compaq that dominated business computing. Consumer computing was largely left to take care of itself in the form of the 8-bit Apple II line, whose final model, the technically impressive but hugely overpriced IIGS, languished with virtually no promotion. Sculley, a little out of his depth in Silicon Valley, was just following the conventional wisdom that business computing was where the real money was. Businesspeople tended to be turned off by wild-eyed talk of changing the world; thus Apple’s new, more sober facade. And they were equally turned off by any whiff of fun or, God forbid, games; thus the old sense of whimsy that had been one of the original Mac’s most charming attributes seemed to leach away a little more with each successive model.

Those who pointed out that business computing was worth many times more than home computing weren’t wrong, but they were missing something important and at least in retrospect fairly obvious: namely, the fact that most of the companies who could make good use of computers had already bought them by now. The business-computing industry would doubtless continue to be profitable for many and even to grow steadily alongside the economy, but its days of untapped potential and explosive growth were behind it. Consumer computing, on the other hand, was still largely virgin territory. Millions of people were out there who had been frustrated by the limitations of the machines at the heart of the short-lived first home-computer boom, but who were still willing to be intrigued by the next generation of computing technology, still willing to be sold on computers as an everyday lifestyle accessory. Give them a truly elegant, easy-to-use computer — like, say, the Macintosh — and who knew what might happen. This was the vision Jef Raskin had had in starting the ball rolling on the Mac back in 1979, the one that had still been present, if somewhat obscured even then by a high price, in the first released version of the machine with its “the computer for the rest of us” tagline. And this was the vision that Sculley betrayed after Jobs’s departure by keeping prices sky-high and ignoring the consumer market.

“We don’t want to castrate our computers to make them inexpensive,” said Jean-Louis Gassée. “We make Hondas, we don’t make Yugos.” Fair enough, but the Mac was priced closer to Mercedes than Honda territory. And it was common knowledge that Apple’s profit margins remained just about the fattest in the industry, thus raising the question of how much “castration” would really be necessary to make a more reasonably priced Mac. The situation reached almost surrealistic levels with the release of the Mac IIfx in March of 1990, an admittedly “wicked fast” addition to the product line but one that cost $9870 sans monitor or video card, thus replacing the metaphorical with the literal in Gassée’s favored comparison: a complete Mac IIfx system cost more than most actual brand-new Hondas. By now, the idea of the Mac as “the computer for the rest of us” seemed a bitter joke.

Apple was choosing to fight over scraps of the business market while an untapped land of milk and honey — the land of consumer computing — lay just over the horizon. It was IBM-compatible machines, not the Macintosh, that lurched over in fits and starts to fill that space, adopting in the process most of the Mac’s best ideas, even if they seldom managed to implement those ideas quite as elegantly. By the time Apple woke up to what was happening in the 1990s and rushed to fill the gap with a welter of more reasonably priced consumer-grade Macs, it was too late. Computing as most Americans knew it was exclusively a Wintel world, with Macs the incompatible, artsy-fartsy oddballs. All but locked out of the fastest-growing sectors of personal computing, the very sectors the Macintosh had been so perfectly poised to absolutely own, Apple was destined to have a very difficult 1990s. So difficult, in fact, that they would survive the decade’s many lows only by the skin of their teeth.

This cartoon by Tom Meyer, published in the San Francisco Chronicle, shows the emerging new popular consensus about Apple by the early 1990s: increasingly overpriced, bloated designs and increasingly clueless management.

Now that the 68000 Wars have faded into history and passions have cooled, we can see that the Macintosh was in some ways almost as ill-served by its parent company as the Commodore Amiga was by its. Apple’s management in the post-Jobs era, like Commodore’s, seemed in some fundamental way not to get the very creation they’d unleashed on the world. And so, as with the Amiga, it was left to the users of the Macintosh to take up the slack, to keep the vision thing in the equation. Thankfully, they did a hell of a job with that. Something in the Mac’s DNA, something which Apple’s new sobriety could mask but never destroy, led it to remain a hotbed of inspiring innovations that had little to do with the nuts and bolts of running a day-to-day business. Sometimes seemingly in spite of Apple’s best efforts, the most committed Mac loyalists never forgot the Jobsian rhetoric that had greeted the platform’s introduction, continuing to see it as something far more compelling and beautiful than a tool for business. A 1988 survey by Macworld magazine revealed that 85 percent of their readers, the true Mac hardcore, kept their Macs at home, where they used them at least some of the time for pleasure rather than business.

So, the Mac world remained the first place to look if you wanted to see what the artists and the dreamers were getting up to with computers. We’ve already seen some examples of their work in earlier articles. In the course of the next few, we’ll see some more.

(Sources: Amazing Computing of February 1988, April 1988, May 1988, and August 1988; Info of July/August 1988; Byte of May 1986, June 1986, November 1986, April 1987, October 1987, and June 1990; InfoWorld of November 26 1984; Computer Chronicles television episodes entitled “The New Macs,” “Macintosh Business Software,” “Macworld Special 1988,” “Business Graphics Part 1,” “Macworld Boston 1988,” “Macworld San Francisco 1989,” and “Desktop Presentation Software Part 1”; the books West of Eden: The End of Innocence at Apple Computer by Frank Rose, Apple Confidential 2.0: The Definitive History of the World’s Most Colorful Computer Company by Owen W. Linzmayer, and Insanely Great: The Life and Times of Macintosh, the Computer that Changed Everything by Steven Levy; Andy Hertzfeld’s website Folklore.)

 