
The Rise of POMG, Part 1: It Takes a Village…

No one on their deathbed ever said, “I wish I had spent more time alone with my computer!”

— Dani Bunten Berry

If you ever want to feel old, just talk to the younger generation.

A few years ago now, I met the kids of a good friend of mine for the very first time: four boys between the ages of four and twelve, all more or less crazy about videogames. As someone who spends a lot of his time and earns a lot of his income writing about games, I arrived at their house with high expectations attached.

Alas, I’m afraid I proved a bit of a disappointment to them. The distance between the musty old games that I knew and the shiny modern ones that they played was just too far to bridge; shared frames of reference were tough to come up with. This was more or less what I had anticipated, given how painfully limited I already knew my knowledge of modern gaming to be. But one thing did genuinely surprise me: it was tough for these youngsters to wrap their heads around the very notion of a game that you played to completion by yourself and then put on the shelf, much as you might a book. The games they knew, from Roblox to Fortnite, were all social affairs that you played online with friends or strangers, that ended only when you got sick of them or your peer group moved on to something else. Games that you played alone, without at the very least leader boards and achievements on-hand to measure yourself against others, were utterly alien to them. It was quite a reality check for me.

So, I immediately started to wonder how we had gotten to this point — a point not necessarily better or worse than the sort of gaming that I knew growing up and am still most comfortable with, just very different. This series of articles should serve as the beginning of an answer to that complicated question. Its primary focus is not so much how computer games went multiplayer, nor even how they first went online; those things are in some ways the easy, obvious parts of the equation. It’s rather how games did those things persistently — i.e., permanently, so that each session became part of a larger meta-game, if you will, embedded in a virtual community. Or perhaps the virtual community is embedded in the game. It all depends on how you look at it, and which precise game you happen to be talking about. Whichever way, it has left folks like me, whose natural tendency is still to read games like books with distinct beginnings, middles, and ends, anachronistic iconoclasts in the eyes of the youthful mainstream.

Which, I hasten to add, is perfectly okay; I’ve always found the ditch more fun than the middle of the road anyway. Still, sometimes it’s good to know how the other 90 percent lives, especially if you claim to be a gaming historian…



“Persistent online multiplayer gaming” (POMG, shall we say?) is a mouthful to be sure, but it will have to do for lack of a better descriptor of the phenomenon that has created such a divide between myself and my friend’s children.  It’s actually older than you might expect, having first come to be in the 1970s on PLATO, a non-profit computer network run out of the University of Illinois but encompassing several other American educational institutions as well. Much has been written about this pioneering network, which uncannily presaged in so many of its particulars what the Internet would become for the world writ large two decades later. (I recommend Brian Dear’s The Friendly Orange Glow for a book-length treatment.) It should suffice for our purposes today to say that PLATO became host to, among other online communities of interest, an extraordinarily vibrant gaming culture. Thanks to the fact that PLATO games lived on a multi-user network rather than standalone single-user personal computers, they could do stuff that most gamers who were not lucky enough to be affiliated with a PLATO-connected university would have to wait many more years to experience.

The first recognizable single-player CRPGs were born on PLATO in the mid-1970s, inspired by the revolutionary new tabletop game known as Dungeons & Dragons. They were followed by the first multiplayer ones in amazingly short order. Already in 1975’s Moria,[1] players met up with their peers online to chat, brag, and sell or trade loot to one another. When they were ready to venture forth to kill monsters, they could do so in groups of up to ten, pooling their resources and sharing the rewards. A slightly later PLATO game called Oubliette implemented the same basic concept in an even more sophisticated way. The degree of persistence of these games was limited by a lack of storage capacity — the only data that was saved between sessions were the statistics and inventory of each player’s character, with the rest of the environment being generated randomly each time out — but they were miles ahead of anything available for the early personal computers that were beginning to appear at the same time. Indeed, Wizardry, the game that cemented the CRPG’s status as a staple genre on personal computers in 1981, was in many ways simply a scaled-down version of Oubliette, with the multiplayer party replaced by a party of characters that were all controlled by the same player.

Chester Bolingbroke, better known online as The CRPG Addict, plays Moria. Note the “Group Members” field at bottom right. Chester is alone here, but he could be adventuring with up to nine others.

A more comprehensive sort of persistence arrived with the first Multi-User Dungeon (MUD), developed by Roy Trubshaw and Richard Bartle, two students at the University of Essex in Britain, and first deployed there in a nascent form in late 1978 or 1979. A MUD borrowed the text-only interface and presentation of Will Crowther and Don Woods’s seminal game of Adventure, but the world it presented was a shared, fully persistent one between its periodic resets to a virgin state, chockablock with other real humans to interact with and perhaps fight. “The Land,” as Bartle dubbed his game’s environs, expanded to more than 600 rooms by the early 1980s, even as its ideas and a good portion of its code were used to set up other, similar environments at many more universities.

In the meanwhile, the first commercial online services were starting up in the United States. By 1984, you could, for the price of a substantial hourly fee, dial into the big mainframes of services like CompuServe using your home computer. Once logged in there, you could socialize, shop, bank, make travel reservations, read newspapers, and do much else that most people wouldn’t begin to do online until more than a decade later — including gaming. For example, CompuServe offered MegaWars, a persistent grand-strategy game of galactic conquest whose campaigns took groups of up to 100 players four to six weeks to complete. (Woe betide the ones who couldn’t log in for some reason of an evening in the midst of that marathon!) You could also find various MUDs, as well as Island of Kesmai, a multiplayer CRPG boasting most of the same features as PLATO’s Oubliette in a genuinely persistent world rather than a perpetually regenerated one. CompuServe’s competitor GEnie had Air Warrior, a multiplayer flight simulator with bitmapped 3D graphics and sound effects to rival any of the contemporaneous single-player simulators on personal computers. For the price of $11 per hour, you could participate in grand Air Warrior campaigns that lasted three weeks each and involved hundreds of other subscribers, organizing and flying bombing raids and defending against the enemy’s attacks on their own lines. In 1991, America Online put up Neverwinter Nights,[2] which did for the “Gold Box” line of licensed Dungeons & Dragons CRPGs what MUD had done for Adventure and Air Warrior had done for flight simulators, transporting the single-player game into a persistent multiplayer space.

All of this stuff was more or less incredible in the context of the times. At the same time, though, we mustn’t forget that it was strictly the purview of a privileged elite, made up of those with login credentials for institutional-computing networks or money in their pockets to pay fairly exorbitant hourly fees to feed their gaming habits. So, I’d like to back up now and tell a different story of POMG — one with more of a populist thrust, focusing on what was actually attainable by the majority of people out there, the ones who neither had access to a university’s mainframe nor could afford to spend hundreds of dollars per month on a hobby. Rest assured that the two narratives will meet before all is said and done.



POMG came to everyday digital gaming in the reverse order of the words that make up the acronym: first games were multiplayer, then they went online, and then these online games became persistent. Let’s try to unpack how that happened.

From the very start, many digital games were multiplayer, optionally if not unavoidably so. Spacewar!, the program generally considered the first fully developed graphical videogame, was exclusively multiplayer from its inception in the early 1960s. Ditto Pong, the game that launched Atari a decade later, and with it a slow-building popular craze for electronic games, first in public arcades and later in living rooms. Multiplayer here was not so much down to design intention as technological affordances. Pong was an elaborate hardwired state machine rather than a full-blown digital computer, relying on discrete resistors and potentiometers and the like to do its “thinking.” It was more than hard enough just to get a couple of paddles and a ball moving around on the screen of a gadget like this; a computerized opponent was a bridge too far.

Very quickly, however, programmable microprocessors entered the field, changing everyone’s cost-benefit analyses. Building dual controls into an arcade cabinet was expensive, and the end result tended to take up a lot of space. The designers of arcade classics like Asteroids and Galaxian soon realized that they could replace the complications of a human opponent with hordes of computer-controlled enemies, flying in rudimentary, partially randomized patterns. Bulky multiplayer machines thus became rarer and rarer in arcades, replaced by slimmer, more standardized single-player cabinets. After all, if you wanted to compete with your friends in such games, there was still a way to do so: you could each play a round against the computerized enemies and compare your scores afterward.

While all of this was taking shape, the Trinity of 1977 — the Radio Shack TRS-80, Apple II, and Commodore PET — had ushered in the personal-computing era. The games these early microcomputers played were sometimes ports or clones of popular arcade hits, but just as often they were more cerebral, conceptually ambitious affairs where reflexes didn’t play as big — or any — role: flight simulations, adventure games, war and other strategy games. The last were often designed to be played optimally or even exclusively against another human, largely for the same reason Pong had been made that way: artificial intelligence was a hard thing to implement under any circumstances on an 8-bit computer with as little as 16 K of memory, and it only got harder when you were asking said artificial intelligence to formulate a strategy for Operation Barbarossa rather than to move a tennis racket around in front of a bouncing ball. Many strategy-game designers in these early days saw multiplayer options almost as a necessary evil, a stopgap until the computer could fully replace the human player, thus alleviating that eternal problem of the war-gaming hobby on the tabletop: the difficulty of finding other people in one’s neighborhood who were able and willing to play such weighty, complex games.

At least one designer, however, saw multiplayer as a positive advantage rather than a kludge — in fact, as the way the games of the future by all rights ought to be. “When I was a kid, the only times my family spent together that weren’t totally dysfunctional were when we were playing games,” remembered Dani Bunten Berry. From the beginning of her design career in 1979, when she made an auction game called Wheeler Dealers for the Apple II,[3] multiplayer was her priority. In fact, she was willing to go to extreme lengths to make it possible; in addition to a cassette tape containing the software, Wheeler Dealers shipped with a custom-made hardware add-on, the only method she could come up with to let four players bid at once. Such experiments culminated in M.U.L.E., one of the first four games ever published by Electronic Arts, a deeply, determinedly social game of economics and, yes, auctions for Atari and Commodore personal computers that many people, myself included, still consider her unimpeachable masterpiece.

A M.U.L.E. auction in progress.

And yet it was Seven Cities of Gold, her second game for Electronic Arts, that became a big hit. Ironically, it was also the first she had ever made with no multiplayer option whatsoever. She was learning to her chagrin that games meant to be played together on a single personal computer were a hard sell; such machines were typically found in offices and bedrooms, places where people went to isolate themselves, not in living rooms or other spaces where they went to be together. She decided to try another tack, thereby injecting the “online” part of POMG into our discussion.

In 1988, Electronic Arts published Berry’s Modem Wars, a game that seems almost eerily prescient in retrospect, anticipating the ludic zeitgeist of more than a decade later with remarkable accuracy. It was a strategy game played in real time (although not quite a real-time strategy of the resource-gathering and army-building stripe that would later be invented by Dune II and popularized by Warcraft and Command & Conquer). And it was intended to be played online against another human sitting at another computer, connected to yours by the gossamer thread of a peer-to-peer modem hookup over an ordinary telephone line. Like most of Berry’s games, it didn’t sell all that well, being a little too far out in front of the state of her nation’s telecommunications infrastructure.

Nevertheless, she continued to push her agenda of computer games as ways of being entertained together rather than alone over the years that followed. She never did achieve the breakout hit she craved, but she inspired countless other designers with her passion. She died far too young in 1998, just as the world was on the cusp of embracing her vision on a scale that even she could scarcely have imagined. “It is no exaggeration to characterize her as the world’s foremost authority on multiplayer computer games,” said Brian Moriarty when he presented Dani Bunten Berry with the first ever Game Developers Conference Lifetime Achievement Award two months before her death. “Nobody has worked harder to demonstrate how technology can be used to realize one of the noblest of human endeavors: bringing people together. Historians of electronic gaming will find in these eleven boxes [representing her eleven published games] the prototypes of the defining art form of the 21st century.” Let this article and the ones that will follow it, written well into said century, serve as partial proof of the truth of his words.

Danielle Bunten Berry, 1949-1998.

For by the time Moriarty spoke them, other designers had been following the trails she had blazed for quite some time, often with much more commercial success. A good early example is Populous, Peter Molyneux’s strategy game in real time (although, again, not quite a real-time strategy) that was for most of its development cycle strictly a peer-to-peer online multiplayer game, its offline single-player mode being added only during the last few months. An even better, slightly later one is DOOM, John Carmack and John Romero’s game of first-person 3D mayhem, whose star attraction, even more so than its sadistic single-player levels, was the “deathmatch” over a local-area network. Granted, these testosterone-fueled, relentlessly zero-sum contests weren’t quite the same as what Berry was envisioning for gaming’s multiplayer future near the end of her life; she wished passionately for games with a “people orientation,” directed toward “the more mainstream, casual players who are currently coming into the PC market.” Still, as the saying goes, you have to start somewhere.

But there is once more a caveat to state here about access, or rather the lack thereof. Being built for local networks only — i.e., networks that lived entirely within a single building or at most a small complex of them — DOOM deathmatches were out of reach on a day-to-day basis for those who didn’t happen to be students or employees at institutions with well-developed data-processing departments and permissive or oblivious authority figures. Outside of those ivory towers, this was the era of the “LAN party,” when groups of gamers would all lug their computers over to someone’s house, wire them together, and go at it over the course of a day or a weekend. These occasions went on to become treasured memories for many of their participants, but they achieved that status precisely because they were so sporadic and therefore special.

And yet DOOM‘s rise corresponded with the transformation of the Internet from an esoteric tool for the technological elite to the most flexible medium of communication ever placed at the disposal of the great unwashed, thanks to a little invention out of Switzerland called the World Wide Web. What if there was a way to move DOOM and other games like it from a local network onto this one, the mother of all wide-area networks? Instead of deathmatching only with your buddy in the next cubicle, you would be able to play against somebody on another continent if you liked. Now wouldn’t that be cool?

The problem was that local-area networks ran over a protocol known as IPX, while the Internet ran on a completely different one called TCP/IP. Whoever could bridge that gap in a reasonably reliable, user-friendly way stood to become a hero to gamers all over the world.
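To make the nature of that gap a little more concrete, here is a minimal conceptual sketch in modern JavaScript (Node.js) of the general technique such a bridge relies on: capture each IPX frame on the LAN side, wrap it whole inside a UDP datagram addressed to the other players, and unwrap incoming datagrams back into frames at the far end. The port number, peer list, and the hooks into the local IPX layer are all invented for illustration; TCPSetup and Kali were MS-DOS programs working at a much lower level, and their actual protocols differed.

// Conceptual sketch only: one way an IPX-to-Internet tunnel can work in
// principle. The port, peer list, and "IPX driver" hooks below are invented
// for illustration and are not how TCPSetup or Kali actually worked.
const dgram = require('dgram');

const TUNNEL_PORT = 21300;             // hypothetical UDP port for the tunnel
const peers = ['203.0.113.7'];         // IP addresses of the other players

const sock = dgram.createSocket('udp4');
sock.bind(TUNNEL_PORT);

// LAN side to Internet: wrap each raw IPX frame in a UDP datagram and send a
// copy to every known peer, so the game's traffic can cross the wide-area net.
function forwardIpxFrame(ipxFrame) {   // ipxFrame: a Buffer captured locally
  for (const ip of peers) {
    sock.send(ipxFrame, TUNNEL_PORT, ip);
  }
}

// Internet to LAN side: unwrap each incoming datagram and hand the payload
// back to the local IPX layer, so DOOM never notices it left the building.
// (Stand-in for the real hook into an IPX driver, which Node.js cannot reach.)
function deliverToLocalIpx(frame) {
  console.log('would hand', frame.length, 'bytes back to the IPX driver');
}

sock.on('message', (datagram) => deliverToLocalIpx(datagram));

The plumbing itself, in other words, is conceptually simple; as we are about to see, the real value of the tools built on top of it lay in everything else they layered over that plumbing.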



Jay Cotton discovered DOOM in the same way as many another data-processing professional: when it brought down his network. He was employed at the University of Georgia at the time, and was assigned to figure out why the university’s network kept buckling under unprecedented amounts of spurious traffic. He tracked the cause down to DOOM, the game that half the students on campus seemed to be playing more than half the time. More specifically, the problem was caused by a bug, which was patched out of existence by John Carmack as soon as he was informed. Problem solved. But Cotton stuck around to play, the warden seduced by the inmates of the asylum.

He was soon so much better at the game than anyone else on campus that he was getting a bit bored. Looking for worthier opponents, he stumbled across a program called TCPSetup, written by one Jake Page, which was designed to translate IPX packets into TCP/IP ones and vice versa on the fly, “tricking” DOOM into communicating across the vast Internet. It was cumbersome to use and extremely unreliable, but on a good day it would let you play DOOM over the Internet for brief periods of time at least, an amazing feat by any standard. Cotton would meet other players on an Internet chat channel dedicated to the game, they’d exchange IP addresses, and then they’d have at it — or try to, depending on the whims of the Technology Gods that day.

On August 22, 1994, Cotton received an email from a fellow out of the University of Illinois — yes, PLATO’s old home — whom he’d met and played in this way (and beaten, he was always careful to add). His name was Scott Coleman. “I have some ideas for hacking TCPSetup to make it a little easier. Care to do some testing later?” Coleman wrote. “I’ve already emailed Jake [Page] on this, but he hasn’t responded (might be on vacation or something). If he approves, I’m hoping some of these ideas might make it into the next release of TCPSetup. In the meantime, I want to do some experimenting to see what’s feasible.”

Jake Page never did respond to their queries, so Cotton and Coleman just kept beavering away on their own, eventually rewriting TCPSetup entirely to create iDOOM, a more reliable and far less fiddly implementation of the same concept, with support for three- or four-player deathmatches instead of just one-on-one duels. It took off like a rocket; the pair were bombarded with feature requests, most notably to make iDOOM work with other IPX-only games as well. In January of 1995, they added support for Heretic, one of the most popular of the first wave of so-called “DOOM clones.” They changed their program’s name to “iFrag” to reflect the fact that it was now about more than just DOOM.

Having come this far, Cotton and Coleman soon made the conceptual leap that would transform their software from a useful tool to a way of life for a time for many, many thousands of gamers. Why not add support for more games, they asked themselves, not in a bespoke way as they had been doing to date, but in a more sustainable one, by turning their program into a general-purpose IPX-to-TCP/IP bridge, suitable for use with the dozens of other multiplayer games out there that supported only local-area networks out of the box. And why not make their tool into a community while they were at it, by adding an integrated chat service? In addition to its other functions, the program could offer a list of “servers” hosting games, which you could join at the click of a button; no more trolling for opponents elsewhere on the Internet, then laboriously exchanging IP addresses and meeting times and hoping the other guy followed through. This would be instant-gratification online gaming. It would also provide a foretaste at least of persistent online multiplayer gaming; as people won matches, they would become known commodities in the community, setting up a meta-game, a sporting culture of heroes and zeroes where folks kept track of win-loss records and where everybody clamored to hear the results when two big wheels faced off against one another.

Cotton and Coleman renamed their software for the third time in less than nine months, calling it Kali, a name suggested by Coleman’s Indian-American girlfriend (later his wife). “The Kali avatar is usually depicted with swords in her hands and a necklace of skulls from those she has killed,” says Coleman, “which seemed appropriate for a deathmatch game.” Largely at the behest of Cotton, always the more commercially-minded of the pair, they decided to make Kali shareware, just like DOOM itself: multiplayer sessions would be limited to fifteen minutes at a time until you coughed up a $20 registration fee. Cotton went through the logistics of setting up and running a business in Georgia while Coleman did most of the coding in Illinois. (Rather astonishingly, Cotton and Coleman had still never met one another face to face in 2013, when gaming historian David L. Craddock conducted an interview with them that has been an invaluable source of quotes and information for this article.)

Kali certainly wasn’t the only solution in this space; a commercial service called DWANGO had existed since December of 1994, with the direct backing of John Carmack and John Romero, whose company id Software collected 20 percent of its revenue in return for the endorsement. But DWANGO ran over old-fashioned direct-dial-up connections rather than the Internet, meaning you had to pay long-distance charges to use it if you weren’t lucky enough to live close to one of its host computers. On top of that, it charged $9 for just five hours of access per month, with the fees escalating from there. Kali, by contrast, was available to you forever for as many hours per month as you liked after you plunked down your one-time fee of $20.

So, Kali was popular right from its first release on April 26, 1995. Yet it was still an awkward piece of software for the casual user despite the duo’s best efforts, being tied to MS-DOS, whose support for TCP/IP relied on a creaky edifice of third-party tools. The arrival of Windows 95 was a godsend for Kali, as it was for computer gaming in general, making the hobby accessible in a way it had never been before. The so-called “Kali95” was available by early 1996, and things exploded from there. Kali struck countless gamers with all the force of a revelation; who would have dreamed that it could be so easy to play against another human online? Lloyd Case, for example, wrote in Computer Gaming World magazine that using Kali for the first time was “one of the most profound gaming experiences I’ve had in a long time.” Reminiscing seventeen years later, David L. Craddock described how “using Kali for the first time was like magic. Jumping into a game and playing with other people. It blew my fourteen-year-old mind.” In late 1996, the number of registered Kali users ticked past 50,000, even as quite possibly just as many or more were playing with cracked versions that bypassed the simplistic serial-number-registration process. First-person-shooter deathmatches abounded, but you could also play real-time strategies like Command & Conquer and Warcraft, or even the Links golf simulation. Computer Gaming World gave Kali a special year-end award for “Online-Enabling Technology.”

Kali for Windows 95.

Competitors were rushing in at a breakneck pace by this time, some of them far more conventionally “professional” than Kali, whose origin story was, as we’ve seen, as underground and organic as that of DOOM itself. The most prominent of the venture-capital-funded startups were MPlayer (co-founded by Brian Moriarty of Infocom and LucasArts fame, and employing Dani Bunten Berry as a consultant during the last months of her life) and the Total Entertainment Network, better known as simply TEN. In contrast to Kali’s one-time fee, they, like DWANGO before them, relied on subscription billing: $20 per month for MPlayer, $15 per month for TEN. Despite slick advertising and countless other advantages that Kali lacked, neither would ever come close to overtaking its scruffy older rival, which had price as well as oodles of grass-roots goodwill on its side. Jay Cotton:

It was always my belief that Kali would continue to be successful as long as I never got greedy. I wanted everyone to be so happy with their purchase that they would never hesitate to recommend it to a friend. [I would] never charge more than someone would be readily willing to pay. It also became a selling point that Kali only charged a one-time fee, with free upgrades forever. People really liked this, and it prevented newcomers (TEN, Heat [a service launched in 1997 by Sega of America], MPlayer, etc.) from being able to charge enough to pay for their expensive overheads.

Kali was able to compete with TEN, MPlayer, and Heat because it already had a large established user base (more users equals more fun) and because it was much, much cheaper. These new services wanted to charge a subscription fee, but didn’t provide enough added benefit to justify the added expense.

It was a heady rush indeed, although it would also prove a short-lived one; Kali’s competitors would all be out of business within a year or so of the turn of the millennium. Kali itself stuck around after that, but as a shadow of what it had been, strictly a place for old-timers to reminisce and play the old hits. “I keep it running just out of habit,” said Jay Cotton in 2013. “I make just enough money on website ads to pay for the server.” It still exists today, presumably as a result of the same force of habit.

One half of what Kali and its peers offered was all too obviously ephemeral from the start: as the Internet went mainstream, developers inevitably began building TCP/IP support right into their games, eliminating the need for an external IPX-to-TCP/IP bridge. (For example, Quake, id Software’s much-anticipated follow-up to DOOM, did just this when it finally arrived in 1996.) But the other half of what they offered was community, which may have seemed a more durable sort of benefit. As it happened, though, one clever studio did an end-run around them here as well.



The folks at Blizzard Entertainment, the small studio and publisher that was fast coming to rival id Software for the title of the hottest name in gaming, were enthusiastic supporters of Kali in the beginning, to the point of hand-tweaking Warcraft II, their mega-hit real-time strategy, to run optimally over the service. They were rewarded by seeing it surpass even DOOM to become the most popular game there of all. But as they were polishing their new action-CRPG Diablo for release in 1996, Mike O’Brien, a Blizzard programmer, suggested that they launch their own service that would do everything Kali did in terms of community, albeit for Blizzard’s games alone. And then he additionally suggested that they make it free, gambling that knowledge of its existence would sell enough games for them at retail to offset its maintenance costs. Blizzard’s unofficial motto had long been “Let’s be awesome,” reflecting their determination to sell exactly the games that real hardcore gamers were craving, honed to a perfect finish, and to always give them that little bit extra. What better way to be awesome than by letting their customers effortlessly play and socialize online, and to do so for free?

The idea was given an extra dollop of urgency by the fact that Westwood Studios, the maker of Warcraft’s chief competitor Command & Conquer, had introduced a service called Westwood Chat that could launch people directly into a licensed version of Monopoly. (Shades of Dani Bunten Berry’s cherished childhood memories…) At the moment it supported only Monopoly, a title that appealed to a very different demographic from the hardcore crowd who favored Blizzard’s games, but who knew how long that would last?[4]

So, when Diablo shipped in the last week of 1996, it included something called Battle.net, a one-click chat and matchmaking service and multiplayer facilitator. Battle.net made everything easier than it had ever been before. It would even automatically patch your copy of the game to the latest version when you logged on, pioneering the “software as a service” model in gaming that has become everyday life in our current age of Steam. “It was so natural,” says Blizzard executive Max Schaefer. “You didn’t think about the fact that you were playing with a dude in Korea and a guy in Israel. It’s really a remarkable thing when you think about it. How often are people casually matched up in different parts of the world?” The answer to that question, of course, was “not very often” in the context of 1997. Today, it’s as normal as computers themselves, thanks to groundbreaking initiatives like this one. Blizzard programmer Jeff Strain:

We believed that in order for it [Battle.net] to really be embraced and adopted, that accessibility had to be there. The real catch for Battle.net was that it was inside-out rather than outside-in. You jumped right into the game. You connected players from within the game experience. You did not alt-tab off into a Web browser to set up your games and have the Web browser try to pass off information or something like that. It was a service designed from Day One to be built into actual games.

The combination of Diablo and Battle.net brought a new, more palpable sort of persistence to online gaming. Players of DOOM or Warcraft II might become known as hotshots on services like Kali, but their reputation conferred no tangible benefit once they entered a game session. A DOOM deathmatch or a Warcraft II battle was a one-and-done event, which everyone started on an equal footing, which everyone would exit again within an hour or so, with nothing but memories and perhaps bragging rights to show for what had transpired.

Diablo, however, was different. Although less narratively and systemically ambitious than many of its recent brethren, it was nevertheless a CRPG, a genre all about building up a character over many gaming sessions. Multiplayer Diablo retained this aspect: the first time you went online, you had to pick one of the three pre-made first-level characters to play, but after that you could keep bringing the same character back to session after session, with all of the skills and loot she had already collected. Suddenly the link between the real people in the chat rooms and their avatars that lived in the game proper was much more concrete. Many found it incredibly compelling. People started to assume the roles of their characters even when they were just hanging out in the chat rooms, started in some very real sense to live the game.

But it wasn’t all sunshine and roses. Battle.net became a breeding ground of the toxic behaviors that have continued to dog online gaming to this day, a social laboratory demonstrating what happens when you take a bunch of hyper-competitive, rambunctious young men and give them carte blanche to have at it any way they wish with virtual swords and spells. The service was soon awash with “griefers,” players who would join others on their adventures, ostensibly as their allies in the dungeon, then literally stab them in the back when they least expected it, killing their characters and running off with all of their hard-won loot. The experience could be downright traumatizing for the victims, who had thought they were joining up with friendly strangers simply to have fun together in a cool new game. “Going online and getting killed was so scarring,” acknowledges David Brevik, Diablo’s original creator. “Those players are still feeling a little bit apprehensive.”

To make matters worse, many of the griefers were also cheaters. Diablo had been born and bred a single-player game; multiplayer had been a very late addition. This had major ramifications. Diablo stored all the information about the character you played online on your local hard drive rather than the Battle.net server. Learn how to modify this file, and you could create a veritable god for yourself in about ten minutes, instead of the dozens of hours it would take playing the honest way. “Trainers” — programs that could automatically do the necessary hacking for you — spread like wildfire across the Internet. Other folks learned to hack the game’s executable files themselves. Most infamously, they figured out ways to attack other players while they were still in the game’s above-ground town, supposedly a safe space reserved for shopping and healing. Battle.net as a whole took on a siege mentality, as people who wanted to play honorably and honestly learned to lock the masses out with passwords that they exchanged only with trusted friends. This worked after a fashion, but it was also a betrayal of the core premise and advantage of Battle.net, the ability to find a quick pick-up game anytime you wanted one. Yet there was nothing Blizzard could do about it without rewriting the whole game from the ground up. They would eventually do this — but they would call the end result Diablo II. In the meanwhile, it was a case of player beware.
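To see why keeping the character on the player’s own disk was such a fatal choice, consider the following tiny sketch, again in JavaScript (Node.js). The file name, layout, and byte offset here are hypothetical, invented purely for illustration; real trainers targeted Diablo’s actual on-disk character format. But the principle was exactly this simple: whatever the local file claimed, Battle.net believed.

// Hypothetical illustration of why a client-side character file can never be
// trusted. The file name and the offset of the "gold" field are made up; the
// point is only that a file the player controls is trivially editable.
const fs = require('fs');

const SAVE_FILE = 'my_warrior.chr';   // hypothetical local character file
const GOLD_OFFSET = 0x40;             // hypothetical offset of the gold field

// Pretend this file came from an honest play session.
fs.writeFileSync(SAVE_FILE, Buffer.alloc(128));

const save = fs.readFileSync(SAVE_FILE);      // read the character into memory
save.writeUInt32LE(4000000, GOLD_OFFSET);     // overwrite gold with an absurd sum
fs.writeFileSync(SAVE_FILE, save);            // write the doctored character back

// Because the servers accepted whatever the client reported, a character
// "improved" this way was indistinguishable from one earned honestly.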

It’s important to understand that, for all that it resembled what would come later all too much from a sociological perspective, multiplayer Diablo was still no more persistent than Moria and Oubliette had been on the old PLATO network: each player’s character was retained from session to session, but nothing about the state of the world. Each world, or instance of the game, could contain a maximum of four human players, and disappeared as soon as the last player left it, leaving as its legacy only the experience points and items its inhabitants had collected from it while it existed. Players could and did kill the demon Diablo, the sole goal of the single-player game, one that usually required ten hours or more of questing to achieve, over and over again in the online version. In this sense, multiplayer Diablo was a completely different game from single-player Diablo, replacing the simple quest narrative of the latter with a social meta-game of character-building and player-versus-player combat.

For lots and lots of people, this was lots and lots of fun; Diablo was hugely popular despite all of the exploits it permitted — indeed, for some players perchance, because of them. It became one of the biggest computer games of the 1990s, bringing online gaming to the masses in a way that even Kali had never managed. Yet there was still a ways to go to reach total persistence, to bring a permanent virtual world to life. Next time, then, we’ll see how mainstream commercial games of the 1990s sought to achieve a degree of persistence that the first MUD could boast of already in 1979. These latest virtual worlds, however, would attempt to do so with all the bells and whistles and audiovisual niceties that a new generation of gamers raised on multimedia and 3D graphics demanded. An old dog in the CRPG space was about to learn a new trick, creating in the process a new gaming acronym that’s even more of a mouthful than POMG.





Sources: the books Stay Awhile and Listen Volumes 1 and 2 by David L. Craddock, Masters of Doom by David Kushner, and The Friendly Orange Glow by Brian Dear; Retro Gamer 43, 90, and 103; Computer Gaming World of September 1996 and May 1997; Next Generation of March 1997. Online sources include “The Story of Battle.net” by Wes Fenlon at PC Gamer, Dan Griliopoulos’s collection of interviews about Command & Conquer, Brian Moriarty’s speech honoring Dani Bunten Berry from the 1998 Game Developers Conference, and Jay Cotton’s history of Kali on the DOOM II fan site. Plus some posts on The CRPG Addict, to which I’ve linked in the article proper.

Footnotes

1 The PLATO Moria was a completely different game from the 1983 single-player roguelike that bore the same name.
2 Not the same game as the 2002 Bioware CRPG of the same name.
3 Wheeler Dealers and all of her other games that are mentioned in this article were credited to Dan Bunten, the name under which she lived until 1992.
4 Westwood Chat would indeed evolve eventually into Westwood Online, with full support for Command & Conquer, but that would happen only after Blizzard had rolled out their own service.
 
 


Doing Windows, Part 12: David and Goliath

Microsoft, intent on its mission to destroy Netscape, rolled out across the industry with all the subtlety and attendant goodwill of Germany invading Poland…

— Merrill R. Chapman

No one reacted more excitedly to the talk of Java as the dawn of a whole new way of computing than did the folks at Netscape. Marc Andreessen, whose head had swollen exactly as much as the average 24-year-old’s would upon being repeatedly called a great engineer, businessman, and social visionary all rolled into one, was soon proclaiming Netscape Navigator to be far more than just a Web browser: it was general-purpose computing’s next standard platform, possibly the last one it would ever need. Java, he said, generously sharing the credit for this development, was “as revolutionary as the Web itself.” As for Microsoft Windows, it was merely “a poorly debugged set of device drivers.” Many even inside Netscape wondered whether he was wise to poke the bear from Redmond so, but he was every inch a young man feeling his oats.

Just two weeks before the release of Windows 95, the United States Justice Department had ended a lengthy antitrust investigation of Microsoft’s business practices with a decision not to bring any charges. Bill Gates and his colleagues took this to mean it was open season on Netscape.

Thus, just a few weeks after the bravura Windows 95 launch, a war that would dominate the business and computing press for the next three years began. The opening salvo from Microsoft came in a weirdly innocuous package: something called the “Windows Plus Pack,” which consisted mostly of slightly frivolous odds and ends that hadn’t made it into the main Windows 95 distribution — desktop themes, screensavers, sound effects, etc. But it also included the very first release of Microsoft’s own Internet Explorer browser, the fruit of the deal with Spyglass. After you put the Plus! CD into the drive and let the package install itself, Internet Explorer proved as hard to get rid of as a virus. For unlike all other applications, there appeared no handy “uninstall” option for Internet Explorer. Once it had its hooks in your computer, it wasn’t letting go for anything. And its preeminent mission in life there seemed to be to run roughshod over Netscape Navigator. It inserted itself in place of its arch-enemy in your file associations and everywhere else, so that it kept turning up like a bad penny every time you clicked a link. If you insisted on bringing up Netscape Navigator in its stead, you were greeted with the pointed “suggestion” that Internet Explorer was the better, more stable option.

Microsoft’s biggest problem at this juncture was that that assertion didn’t hold water; Internet Explorer 1.0 was only a modest improvement over the old NCSA Mosaic browser on whose code it was based. Meanwhile Netscape was pushing aggressively forward with its vision of the browser as a platform, a home for active content of all descriptions. Netscape Navigator 2.0, whose first beta release appeared almost simultaneously with Internet Explorer 1.0, doubled down on that vision by including an email and Usenet client. More importantly, it supported not only Java but a second programming language for creating active content on the Web — a language that would prove much more important to the evolution of the Web in the long run.

Even at this early stage — still four months before Sun would deign to grant Java its own 1.0 release — some of the issues with using it on the Web were becoming clear: namely, the weight of the virtual machine that had to be loaded and started before a Java applet could run, and said applet’s inability to communicate easily with the webpage that had spawned it. Netscape therefore decided to create something that lay between the static simplicity of vanilla HTML and the dynamic complexity of Java. The language called JavaScript would share much of its big brother’s syntax, but it would be interpreted rather than compiled, and would live in the same environment as the HTML that made up a webpage rather than in a sandbox of its own. In fact, it would be able to manipulate that HTML directly and effortlessly, changing the page’s appearance on the fly in response to the user’s actions. The idea was that programmers would use JavaScript for very simple forms of active content — like, say, a popup photo gallery or a scrolling stock ticker — and use Java for full-fledged in-browser software applications — i.e., your word processors and the like.

In contrast to Java, a compiled language walled off inside its own virtual machine, JavaScript is embedded directly into the HTML that makes up a webpage, using the handy “<script>” tag.
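As a small stand-in illustration of the idea, using today’s DOM methods rather than the much more limited object model Netscape Navigator 2.0 actually exposed in 1995, a page might mix its markup and its behavior like so:

<!-- A toy page: the script below lives in the same document as the HTML it
     manipulates and rewrites part of the page when the user clicks. Modern
     DOM calls are used here for brevity; 1995-era Netscape offered a much
     smaller set of objects. -->
<p id="greeting">Welcome to my home page!</p>
<button id="clock-button">What time is it?</button>
<script>
  document.getElementById('clock-button').addEventListener('click', function () {
    // Change the page's own HTML on the fly, with no applet, plugin, or
    // round trip to the server required.
    document.getElementById('greeting').textContent =
      'It is ' + new Date().toLocaleTimeString() + '. Thanks for asking!';
  });
</script>

The script lives inside the very document it modifies, which is precisely the quality that made JavaScript so much lighter on its feet than a Java applet sealed away inside its own virtual machine.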

There’s really no way to say this kindly: JavaScript was (and is) a pretty horrible programming language by any objective standard. Unlike Java, which was the product of years of thought, discussion, and experimentation, JavaScript was the very definition of “quick and dirty” in a computer-science context. Even its principal architect Brendan Eich doesn’t speak of it like an especially proud parent; he calls it “Java’s dumb little brother” and “a rush job.” Which it most certainly was: he designed and implemented JavaScript from scratch in a matter of bare weeks.

What he ended up with would revolutionize the Web not because it was good, but because it was good enough, filling a craving that turned out to be much more pressing and much more satisfiable in the here and now than the likes of in-browser word processing. The lightweight JavaScript could be used to bring the Web alive, to make it a responsive and interactive place, more quickly and organically than the heavyweight Java. Once JavaScript had reached a critical mass in that role, it just kept on rolling with all the relentlessness of a Microsoft operating system. Today an astonishing 98 percent of all webpages contain at least a little bit of JavaScript in addition to HTML, and a cottage industry has sprung up to modify and extend the language — and attempt to fix the many infelicities that haunt the sleep of computer-science professors all over the world. JavaScript has become, in other words, the modern world’s nearest equivalent to what BASIC was in the 1980s, a language whose ease of use, accessibility, and populist appeal make up for what it lacks in elegance. These days we even do online word processing in JavaScript. If you had told Brendan Eich that that would someday be the case back in 1995, he would have laughed as loud and long at you as anyone.

Although no one could know it at the time, JavaScript also represents the last major building block to the modern Web for which Marc Andreessen can take a substantial share of the credit, following on from the “image” tag for displaying inline graphics, the secure sockets layer (SSL) for online encryption (an essential for any form of e-commerce), and to a lesser extent the Java language. Microsoft, by contrast, was still very much playing catch-up.

Nevertheless, on December 7, 1995 — the symbolism of this anniversary of the attack on Pearl Harbor that drew the United States into World War II was lost on no one — Bill Gates gave a major address to the Microsoft faithful and assembled press, in which he made it clear that Microsoft was in the browser war to win it. In addition to announcing that his company too would bite the bullet and license Java for Internet Explorer, he said that the latter browser would no longer be a Windows 95 exclusive, but would soon be made available for Windows 3 and even MacOS as well. And everywhere it appeared, it would continue to sport the very un-Microsoft price tag of free, proof that this old dog was learning some decidedly new tricks for achieving market penetration in this new era of online software distribution. “When we say the browser’s free, we’re saying something different from other people,” said Gates, in a barbed allusion to Netscape’s shareware distribution model. “We’re not saying, ‘You can use it for 90 days,’ or, ‘You can use it and then maybe next year we’ll charge you a bunch of money.’” Netscape, whose whole business revolved around its browser, couldn’t afford to give Navigator away, a fact of which Gates was only too well aware. (Some pundits couldn’t resist contrasting this stance with Gates’s famous 1976 “Open Letter To Hobbyists,” in which he had asked, “Who can afford to do professional work for nothing?” Obviously Microsoft now could…)

Netscape’s stock price dropped by $28.75 that day. For Microsoft’s research budget alone was five times the size of Netscape’s total annual revenues, while the bigger company now had more than 800 people — twice Netscape’s total headcount — working on Internet Explorer alone. Marc Andreessen could offer only vague Silicon Valley aphorisms when queried about these disparities: “In a fight between a bear and an alligator, what determines the victor is the terrain” — and Microsoft, he claimed, had now moved “onto our terrain.” The less abstractly philosophical Larry Ellison, head of the database giant Oracle and a man who had had more than his share of run-ins with Bill Gates in the past, joked darkly about the “four stages” of Microsoft stealing someone else’s innovation. Stage 1: to “ridicule” it. Stage 2: to admit that, “yeah, there are a few interesting ideas here.” Stage 3: to make its own version. Stage 4: to make the world forget that the non-Microsoft version had ever existed.

Yet for the time being the Netscape tail continued to wag the Microsoft dog. A more interactive and participatory vision of the Web, enabled by the magic of JavaScript, was spreading like wildfire by the middle of 1996. You still needed Netscape Navigator to experience this first taste of what would eventually be labelled Web 2.0, a World Wide Web that blurred the lines between readers and writers, between content consumers and content creators. For if you visited one of these cutting-edge sites with Internet Explorer, it simply wouldn’t work. Despite all of Microsoft’s efforts, Netscape in June of 1996 could still boast of a browser market share of 85 percent. Marc Andreessen’s Sun Tzu-lite philosophy appeared to have some merit to it after all; his company was by all indications still winning the browser war handily. Even in its 2.0 incarnation, which had been released at about the same time as Gates’s Pearl Harbor speech, Internet Explorer remained something of a joke among Windows users, the annoying mother-in-law you could never seem to get rid of once she showed up.

But then, grizzled veterans like Larry Ellison had seen this movie before; they knew that it was far too early to count Microsoft out. That August, both Netscape and Microsoft released 3.0 versions of their browsers. Netscape’s was a solid evolution of what had come before, but contained no game changers like JavaScript. Microsoft’s, however, was a dramatic leap forward. In addition to Java support, it introduced JScript, a lightweight scripting language that just so happened to have the same syntax as JavaScript. At a stroke, all of those sites which hadn’t worked with earlier versions of Internet Explorer now displayed perfectly well in either browser.

With his browser itself more or less on a par with Netscape’s, Bill Gates decided it was time to roll out his not-so-secret weapon. In October of 1996, Microsoft began shipping Windows 95’s “Service Pack 2,” the second substantial revision of the operating system since its launch. Along with a host of other improvements, it included Internet Explorer. From now on, the browser would ship with every single copy of Windows 95 and be installed automatically as part of the operating system, whether the user wanted it or not. New Windows users would have to make an active choice and then an active effort to go to Netscape’s site — using Internet Explorer, naturally! — and download the “alternative” browser. Microsoft was counting on the majority of these users not knowing anything about the browser war and/or just not wanting to be bothered.

Microsoft employed a variety of carrots and sticks to pressure other companies throughout the computing ecosystem to give or at the bare minimum to recommend Internet Explorer to their customers in lieu of Netscape Navigator. It wasn’t above making the favorable Windows licensing deals it signed with big consumer-computer manufacturers like Compaq dependent on precisely this. But the most surprising pact by far was the one Microsoft made with America Online (AOL).

Relations between the face of the everyday computing desktop and the face of the Internet in the eyes of millions of ordinary Americans had been anything but cordial in recent years. Bill Gates had reportedly told Steve Case, his opposite number at AOL, that he would “bury” him with his own Microsoft Network (MSN). Meanwhile Case had complained long and loud about Microsoft’s bullying tactics to the press, to the point of mooting a comparison between Gates and Adolf Hitler on at least one occasion. Now, though, Gates was willing to eat crow and embrace AOL, even at the expense of his own MSN, if he could stick it to Netscape in the process.

For its part, AOL had come as far as it could with its Booklink browser. The Web was evolving too rapidly for the little development team it had inherited with that acquisition to keep up. Case grudgingly accepted that he needed to offer his customers one of the Big Two browsers. All of his natural inclinations bent toward Netscape. And indeed, he signed a deal with Netscape to make Navigator the browser that shipped with AOL’s turnkey software suite — or so Netscape believed. It turned out that Netscape’s lawyers had overlooked one crucial detail: they had never stipulated exclusivity in the contract. This oversight wasn’t lost on the interested bystander Microsoft, which swooped in immediately to take advantage of it. AOL soon announced another deal, to provide its customers with Internet Explorer as well. Even worse for Netscape, this deal promised Microsoft not only availability but priority: Internet Explorer would be AOL’s recommended, default browser, Netscape Navigator merely an alternative for iconoclastic techies (of which there were, needless to say, very few in AOL’s subscriber base).

What did AOL get in return for getting into bed with Adolf Hitler and “jilting Netscape at the altar,” as the company’s own lead negotiator would later put it? An offer that was impossible for a man with Steve Case’s ambitions to refuse, as it happened. Microsoft would put an AOL icon on the desktop of every new Windows 95 installation, where the hundreds of thousands of Americans who were buying a computer every month in order to check out this Internet thing would see it sitting there front and center, and know, thanks to AOL’s nonstop advertising blitz, that the wonders of the Web were just one click on it away. It was a stunning concession on Microsoft’s part, not least because it came at the direct cost of MSN, the very online network Bill Gates had originally conceived as his method of “burying” AOL. Now, though, no price was too high to pay in his quest to destroy Netscape.

Which raises the question of why he was so obsessed, given that Microsoft was making literally no money from Internet Explorer. The answer is rooted in all that rhetoric that was flying around at the time about the browser as a computing platform — about the Web effectively turning into a giant computer in its own right, floating up there somewhere in the heavens, ready to give a little piece of itself to anyone with a minimalist machine running Netscape Navigator. Such a new world order would have no need for a Microsoft Windows — perish the thought! But if, on the other hand, Microsoft could wrest the title of leading browser developer out of the hands of Netscape, it could control the future evolution of this dangerously unruly beast known as the World Wide Web, and ensure that it didn’t encroach on its other businesses.

That the predictions which prompted Microsoft’s downright unhinged frenzy to destroy Netscape were themselves wildly overblown is ironic but not material. As tech journalist Merrill R. Chapman has put it, “The prediction that anyone was going to use Navigator or any other browser anytime soon to write documents, lay out publications, build budgets, store files, and design presentations was a fantasy. The people who made these breathless predictions apparently never tried to perform any of these tasks in a browser.” And yet in an odd sort of way this reality check didn’t matter. Perception can create its own reality, and Bill Gates’s perception of Netscape Navigator as an existential threat to the software empire he had spent the last two decades building was enough to make the browser war feel like a truly existential clash for both parties, even if the only one whose existence actually was threatened — urgently threatened! — was Netscape. Jim Clark, Marc Andreessen’s partner in founding Netscape, makes the eyebrow-raising claim that he “knew we were dead” in the long run well before the end of 1996, when the Department of Justice declined to respond to an urgent plea on Netscape’s part to take another look at Microsoft’s business practices.

Perhaps the most surprising aspect of the conflict is just how long Netscape’s long run proved to be. It was in most respects David versus Goliath: Netscape in 1996 had $300 million in annual revenues to Microsoft’s nearly $9 billion. But whatever the disparities of size, Netscape had built up a considerable reservoir of goodwill as the vehicle through which so many millions had experienced the Web for the first time. Microsoft found this soft power oddly tough to overcome, even with a browser of its own that was largely identical in functional terms. A remarkable number of people continued to make the active choice to use Netscape Navigator instead of the passive one to use Internet Explorer. By October of 1997, one year after Microsoft brought out the big gun and bundled Internet Explorer right into Windows 95, its browser’s market share had risen as high as 39 percent — but it was Netscape that still led the way at 51 percent.

Yet Netscape wasn’t using those advantages it did possess all that effectively. It was not a happy or harmonious company: there were escalating personality clashes between Jim Clark and Marc Andreessen, and also between Andreessen and his programmers, who thought their leader had become a glory hound, too busy playing the role of the young dot.com millionaire to pay attention to the vital details of software development. Perchance as a result, Netscape’s drive to improve its browser in paradigm-shifting ways seemed to slowly dissipate after the landmark Navigator 2.0 release.

Netscape, so recently the darling of the dot.com age, was now finding it hard to make a valid case for itself merely as a viable business. The company’s most successful quarter in financial terms was the third of 1996 — just before Internet Explorer became an official part of Windows 95 — when it brought in $100 million in revenue. Receipts fell precipitously after that point, all the way down to just $18.5 million in the last quarter of 1997. By so aggressively promoting Internet Explorer as entirely and perpetually free, Bill Gates had, whether intentionally or inadvertently, instilled in the general public an impression that all browsers were or ought to be free, due to some unstated reason inherent in their nature. (This impression has never been overturned, as has been testified over the years by the failure of otherwise worthy commercial browsers like Opera to capture much market share.) Thus even the vast majority of those who did choose Netscape’s browser no longer seemed to feel any ethical compulsion to pay for it. Netscape was left in a position all too familiar to Web firms of the past and present alike: that of having immense name recognition and soft power, but no equally impressive revenue stream to accompany them. It tried frantically to pivot into back-end server architecture and corporate intranet solutions, but its efforts there were, as its bottom line will attest, not especially successful. It launched a Web portal and search engine known as Netcenter, but struggled to gain traction against Yahoo!, the leader in that space. Both Jim Clark and Marc Andreessen sold off large quantities of their personal stock, never a good sign in Silicon Valley.

Netscape Navigator was renamed Netscape Communicator for its 4.0 release in June of 1997. As the name would imply, Communicator was far more than just a browser, or even just a browser with an integrated email client and Usenet reader, as Navigator had been since version 2.0. Now it also sported an integrated editor for making your own websites from scratch, a real-time chat system, a conference caller, an appointment calendar, and a client for “pushing” usually unwanted content to your screen. It was all much, much too much, weighted down with features most people would never touch, big and bloated and slow and disturbingly crash-prone; small wonder that even many Netscape loyalists chose to stay with Navigator 3 after the release of Communicator. Microsoft had not heretofore been known for making particularly svelte software, but Internet Explorer, which did nothing but browse the Web, was a lean ballerina by comparison with the lumbering Sumo wrestler that was Netscape Communicator. The original Netscape Navigator had sprung from the hacker culture of institutional computing, but the company had apparently now forgotten one of that culture’s key dictums in its desire to make its browser a platform unto itself: the best programs are those that do only one thing, but do that one thing very, very well, leaving all of the other things to other programs.

Netscape Communicator. I’m told that there’s an actual Web browser buried somewhere in this pile. Probably a kitchen sink too, if you look hard enough.

Luckily for Netscape, Internet Explorer 4.0, which arrived three months after Communicator, violated the same dictum in an even more inept way. It introduced what Microsoft called the “Active Desktop,” which let it bury its hooks deeper than ever into Windows itself. The Active Desktop was — or tried to be —  Bill Gates’s nightmare of a Web that was impossible to separate from one’s local computer come to life, but with Microsoft’s own logo on it. Ironically, it blurred the distinction between the local computer and the Internet more thoroughly than anything the likes of Sun or Netscape had produced to date; local files and applications became virtually indistinguishable from those that lived on the Internet in the new version of the Windows desktop it installed in place of the old. The end result served mainly to illustrate how half-baked all of the prognostications about a new era of computing exclusively in the cloud really were. The Active Desktop was slow and clumsy and confusing, and absolutely everyone who was exposed to it seemed to hate it and rush to find a way to turn it off. Fortunately for Microsoft, it was possible to do so without removing the Internet Explorer 4 browser itself.

The dreaded Active Desktop. Surprisingly, it was partially defended on philosophical grounds by Tim Berners-Lee, not normally a fan of Microsoft. “It was ridiculous for a person to have two separate interfaces, one for local information (the desktop for their own computer) and one for remote information (a browser to reach other computers),” he writes. “Why did we need an entire desktop for our own computer, but only get little windows through which to view the rest of the planet? Why, for that matter, should we have folders on our desktop but not on the Web? The Web was supposed to be the universe of all accessible information, which included, especially, information that happened to be stored locally. I argued that the entire topic of where information was physically stored should be made invisible to the user.” For better or for worse, though, the public didn’t agree. And even he had to allow that “this did not have to imply that the operating system and browser should be the same program.”

The Active Desktop damaged Internet Explorer’s reputation, but arguably not as badly as Netscape’s had been damaged by the bloated Communicator. For once you turned off all that nonsense, Internet Explorer 4 proved to be pretty good at doing the rest of its job. But there was no similar method for trimming the fat from Netscape Communicator.

While Microsoft and Netscape, those two for-profit corporations, had been vying with one another for supremacy on the Web, another, quieter party had been looking on with great concern. Before the Web had become the hottest topic of the business pages, it had been an idea in the head of the mild-mannered British computer scientist Tim Berners-Lee. He had built the Web on the open Internet, using a new set of open standards; his inclination had never been to control his creation personally. It was to be a meeting place, a library, a forum, perhaps a marketplace if you liked — but always a public commons. When Berners-Lee formed the non-profit World Wide Web Consortium (W3C) in October of 1994 in the hope of guiding an orderly evolution of the Web that kept it independent of the moneyed interests rushing to join the party, it struck many as a quaint endeavor at best. Key technologies like Java and JavaScript appeared and exploded in popularity without giving the W3C a chance to say anything about them. (Tellingly, the word “JavaScript” never even appears in Berners-Lee’s 1999 book about his history with and vision for the Web, despite the scripting language’s almost incalculable importance to making it the dynamic and diverse place it had become by that point.)

From the days when he had been a mere University of Illinois student making a browser on the side, Marc Andreessen had blazed his own trail without giving much thought to formal standards. When the things he unilaterally introduced proved useful, others rushed to copy them, and they became de-facto standards. This was as true of JavaScript as it was of anything else. As we’ve seen, it began as a Netscape-exclusive feature, but was so obviously transformative to what the Web could do and be that Microsoft had no choice but to copy it, to incorporate its own implementation of it into Internet Explorer.

But JavaScript was just about the last completely new feature to be rolled out and widely adopted in this ad-hoc fashion. As the Web reached a critical mass, with Netscape Navigator and Internet Explorer both powering users’ experiences of it in substantial numbers, site designers had a compelling reason not to use any technology that only worked on the one or the other; they wanted to reach as many people as possible, after all. This brought an uneasy sort of equilibrium to the Web.

Nevertheless, the first instinct of both Netscape and Microsoft remained to control rather than to share the Web. Both companies’ histories amply demonstrated that open standards meant little to them; they preferred to be the standard. What would happen if and when one company won the browser war, as Microsoft seemed slowly to be doing by 1997, what with the trend lines all going in its favor and Netscape in veritable financial free fall? Once 90 percent or more of the people browsing the Web were doing so with Internet Explorer, Microsoft would be free to give its instinct for dominance free rein. With an army of lawyers at its beck and call, it would be able to graft onto the Web proprietary, patented technologies that no upstart competitor would be able to reverse-engineer and copy, and pragmatic website designers would no longer have any reason not to use them, if they could make their sites better. And once many or most websites depended on these features that were available only in Internet Explorer, that would be that for the open Web. Despite its late start, Microsoft would have managed to embrace, extend, and in a very real sense destroy Tim Berners-Lee’s original vision of a World Wide Web. The public commons would have become a Microsoft-branded theme park.

These worries were being bandied about with ever-increasing urgency in January of 1998, when Netscape made what may just have been the most audacious move of the entire dot.com boom. Like most such moves, it was born of sheer desperation, but that shouldn’t blind us to its importance and even bravery. First of all, Netscape made its browser free as in beer, finally giving up on even asking people to pay for the thing. Admittedly, though, this in itself was little more than an acceptance of the reality on the ground, as it were. It was the other part of the move that really shocked the tech world: Netscape also made its browser free as in freedom — it opened up its source code to all and sundry. “This was radical in its day,” remembers Mitchell Baker, one of the prime drivers of the initiative at Netscape. “Open source is mainstream now; it was not then. Open source was deep, deep, deep in the technical community. It never surfaced in a product. [This] was a very radical move.”

Netscape spun off a not-for-profit organization, led by Baker and called Mozilla, after a cartoon dinosaur that had been the company’s office mascot almost from day one. Coming well before the Linux operating system began conquering large swaths of corporate America, this was to be open source’s first trial by fire in the real world. Mozilla was to concentrate on the core code required for rendering webpages — the engine room of a browser, if you will. Then others — not least among them the for-profit arm of Netscape — would build the superstructures of finished applications around that sturdy core.

Alas, Netscape the for-profit company was already beyond saving. If anything, this move only hastened the end; Netscape had chosen to give away the one product it had that some tiny number of people were still willing to pay for. Some pundits talked it up as a dying warrior’s last defiant attempt to pass the sword to others, to continue the fight against Microsoft and Internet Explorer: “From the depths of Hell, I spit at thee!” Or, as Tim Berners-Lee put it more soberly: “Microsoft was bigger than Netscape, but Netscape was hoping the Web community was bigger than Microsoft.” And there may very well be something to these points of view. But regardless of the motivations behind it, the decision to open up Netscape’s browser proved both a landmark in the history of open-source software and a potent weapon in the fight to keep the Web itself open and free. Mozilla has had its ups and downs over the years since, but it remains with us to this day, still providing an alternative to the corporate-dominated browsers almost a quarter-century on, having outlived the more conventional corporation that spawned it by a factor of six.

Mozilla’s story is an important one, but we’ll have to leave the details of it for another day. For now, we return to the other players in today’s drama.

While Microsoft and Netscape were battling one another, AOL was soaring into the stratosphere, the happy beneficiary of Microsoft’s decision to give it an icon on the Windows 95 desktop in the name of vanquishing Netscape. In 1997, in a move fraught with symbolic significance, AOL bought CompuServe, its last remaining competitor from the pre-Web era of closed, proprietary online services. By the time Netscape open-sourced its browser, AOL had 12 million subscribers and annual profits — profits, mind you, not revenues — of over $500 million, thanks not only to subscription fees but to the new frontier of online advertising, where revenues and profits were almost one and the same. At not quite 40 years old, Steve Case had become a billionaire.

“AOL is the Internet blue chip,” wrote the respected stock analyst Henry Blodget. And indeed, for all of its association with new and shiny technology, there was something comfortingly stolid — even old-fashioned — about the company. Unlike so many of his dot.com compatriots, Steve Case had found a way to combine name recognition and a desirable product with a way of getting his customers to actually pay for said product. He liked to compare AOL with a cable-television provider; this was a comparison that even the most hidebound investors could easily understand. Real, honest-to-God checks rolled into AOL’s headquarters every month from real, honest-to-God people who signed up for real, honest-to-God paid subscriptions. So what if the tech intelligentsia laughed and mocked, called AOL “the cockroach of cyberspace,” and took an “@AOL.com” suffix on someone’s email address as a sign that they were too stupid to be worth talking to? Case and his shareholders knew that money from the unwashed masses spent just as well as money from the tech elites.

Microsoft could finally declare victory in the browser war in the summer of 1998, when the two browsers’ trend lines crossed one another. At long last, Internet Explorer’s popularity equaled and then rapidly eclipsed that of Netscape Navigator/Communicator. It hadn’t been clean or pretty, but Microsoft had bludgeoned its way to the market share it craved.

A few months later, AOL acquired Netscape through a stock swap that involved no cash, but was worth a cool $9.8 billion on paper — an almost comical sum in relation to the amount of actual revenue the purchased company had brought in during its lifetime. Jim Clark and Marc Andreessen walked away very, very rich men. Just as Netscape’s big IPO had been the first of its breed, the herald of the dot.com boom, Netscape now became the first exemplar of the boom’s unique style of accounting, which allowed people to get rich without ever having run a profitable business.

Even at the time, it was hard to figure out just what it was about Netscape that AOL thought was worth so much money. The deal is probably best understood as a product of Steve Case’s fear of a Microsoft-dominated Web; despite that AOL icon on the Windows desktop, he still didn’t trust Bill Gates any farther than he could throw him. In the end, however, AOL got almost nothing for its billions. Netscape Communicator was renamed AOL Communicator and offered to the service’s subscribers, but even most of them, technically unsophisticated though they tended to be, could see that Internet Explorer was the cleaner and faster and just plain better choice at this juncture. (The open-source coders working with Mozilla belatedly realized the same; they would wind up spending years writing a brand-new browser engine from scratch after deciding that Netscape’s just wasn’t up to snuff.)

Most of Netscape’s remaining engineers walked soon after the deal was made. They tended to describe the company’s meteoric rise and fall in the terms of a Shakespearean tragedy. “At least the old timers among us came to Netscape to change the world,” lamented one. “Getting killed by the Evil Empire, being gobbled up by a big corporation — it’s incredibly sad.” If that’s painting with rather too broad a brush — one should always run away screaming when a Silicon Valley denizen starts talking about “changing the world” — it can’t be denied that Netscape at no time enjoyed a level playing field in its war against Microsoft.

But times do change, as Microsoft was about to learn to its cost. In May of 1998, the Department of Justice filed suit against Microsoft for illegally exploiting its Windows monopoly in order to crush Netscape. The suit came too late to save the latter, but it was all over the news even as the first copies of Windows 98, the hotly anticipated successor to Windows 95, were reaching store shelves. Bill Gates had gotten his wish; Internet Explorer and Windows were now indissolubly bound together. Soon he would have cause to wish that he had not striven for that outcome quite so vigorously.

(Sources: the books Overdrive: Bill Gates and the Race to Control Cyberspace by James Wallace, The Silicon Boys by David A. Kaplan, Architects of the Web by Robert H. Reid, Competing on Internet Time: Lessons from Netscape and Its Battle with Microsoft by Michael Cusumano and David B. Yoffie, dot.con: The Greatest Story Ever Sold by John Cassidy, Stealing Time: Steve Case, Jerry Levin, and the Collapse of AOL Time Warner by Alec Klein, Fools Rush In: Steve Case, Jerry Levin, and the Unmaking of AOL Time Warner by Nina Munk, There Must be a Pony in Here Somewhere: The AOL Time Warner Debacle by Kara Swisher, In Search of Stupidity: Over Twenty Years of High-Tech Marketing Disasters by Merrill R. Chapman, Coders at Work: Reflections on the Craft of Programming by Peter Seibel, and Weaving the Web by Tim Berners-Lee. Online sources include “1995: The Birth of JavaScript” at Web Development History, the New York Times timeline of AOL’s history, and Mitchell Baker’s talk about the history of Mozilla, which is available on Wikipedia.)

 

Doing Windows, Part 11: The Internet Tidal Wave

On August 6, 1991, when Microsoft was still in the earliest planning stages of creating the operating system that would become known as Windows 95, an obscure British researcher named Tim Berners-Lee, working out of the Conseil Européen pour la Recherche Nucléaire (CERN) in Switzerland, put the world’s first publicly accessible website online. For years to come, these two projects would continue to evolve separately, blissfully unconcerned by if not unaware of one another’s existence. And indeed, it is difficult to imagine two computing projects with more opposite personalities. Mirroring its co-founder and CEO Bill Gates, Microsoft was intensely pragmatic and maniacally competitive. Tim Berners-Lee, on the other hand, was a classic academic, a theorist and idealist rather than a businessman. The computers on which he and his ilk built the early Web ran esoteric operating systems like NeXTSTEP and Unix, or at their most plebeian MacOS, not Microsoft’s mass-market workhorse Windows. Microsoft gave you tools for getting everyday things done, while the World Wide Web spent the first couple of years of its existence as little more than an airy proof of concept, to be evangelized by wide-eyed adherents who often appeared to have read one too many William Gibson novels. Forbes magazine was soon to anoint Bill Gates the world’s richest person, his reward for capturing almost half of the international software market; the nascent Web was nowhere to be found in the likes of Forbes.

Those critics who claim that Microsoft was never a visionary company — that it instead thrived by letting others innovate, then swooping in and taking over the markets thus opened — love to point to its history with the World Wide Web as Exhibit Number One. Despite having a role which presumably demanded that he stay familiar with all leading-edge developments in computing, Bill Gates by his own admission never even heard of the Web until April of 1993, twenty months after that first site went up. And he didn’t actually surf the Web for himself until another six months after that — perhaps not coincidentally, shortly after a Windows version of NCSA Mosaic, the user-friendly graphical browser that made the Web a welcoming place even for those whose souls didn’t burn with a passion for information theory, had finally been released.

Gates focused instead on a different model of online communication, one arguably more in keeping with his instincts than was the free and open Web. For almost a decade and a half by 1993, various companies had been offering proprietary dial-up services aimed at owners of home computers. These came complete with early incarnations of many of the staples of modern online life: email, chat lines, discussion forums, online shopping, online banking, online gaming, even online dating. They were different from the Web in that they were walled gardens that provided no access to anything that lay beyond the big mainframes that hosted them. Yet within their walls lived bustling communities whose citizens paid their landlords by the minute for the privilege of participation.

The 500-pound gorilla of this market had always been CompuServe, which had been in the business since the days when a state-of-the-art home computer had 16 K of memory and used cassette tapes for storage. Of late, however, an upstart service called America Online (AOL) had been making waves. Under Steve Case, its wunderkind CEO, AOL aimed its pitch straight at the heart of Middle America rather than the tech-savvy elite. Over the course of 1993 alone, it went from 300,000 to 500,000 subscribers. But that was only the beginning if one listened to Case. For a second Home Computer Revolution, destined to be infinitely more successful and long-lasting than the first, was now in full swing, powered along by the ease of use of Windows 3 and by the latest consumer-grade hardware, which made computing faster and more aesthetically attractive than it had ever been before. AOL’s quick and easy custom software fit in perfectly with these trends. Surely this model of the online future — of curated content offered up by a firm whose stated ambition was to be the latest big player in mass media as a whole; of a subscription model that functioned much like the cable television which the large majority of Americans were already paying for — was more likely to take hold than the anarchic jungle that was the World Wide Web. It was, at any rate, a model that Bill Gates could understand very well, and naturally gravitated toward. Never one to leave cash on the table, he started asking himself how Microsoft could get a piece of this action as well.

Steve Case celebrates outside the New York Stock Exchange on March 19, 1992, the day America Online went public.

Gates proceeded in his standard fashion: in May of 1993, he tried to buy AOL outright. But Steve Case, who nursed dreams of becoming a media mogul on the scale of Walt Disney or Jack Warner, turned him down flat. At this juncture, Russ Siegelman, a 33-year-old physicist-by-education whom Gates had made his point man for online strategy, suggested a second classically Microsoft solution to the dilemma: they could build their own online service that copied AOL in most respects, then bury their rival with money and sheer ubiquity. They could, Siegelman suggested, make their own network an integral part of the eventual Windows 95, make signing up for it just another step in the installation process. How could AOL possibly compete with that? It was the first step down a fraught road that would lead to widespread outrage inside the computer industry and one of the most high-stakes anti-trust investigations in the history of American business — but for all that, the broad strategy would prove very, very effective once it reached its final form. It had a ways still to go at this stage, though, targeting as it did AOL instead of the Web.

Gates put Siegelman in charge of building Microsoft’s online service, which was code-named Project Marvel. “We were not thinking about the Internet at all,” admits one of the project’s managers. “Our competition was CompuServe and America Online. That’s what we were focused on, a proprietary online service.” At the time, there were exactly two computers in Microsoft’s sprawling Redmond, Washington, campus that were connected to the Internet. “Most college kids knew much more than we did because they were exposed to it,” says the Marvel manager. “If I had wanted to connect to the Internet, it would have been easier for me to get into my car and drive over to the University of Washington than to try and get on the Internet at Microsoft.”

It came down to the old “not built here” syndrome that dogs so many large institutions, as well as the fact that the Web and the Internet on which it lived were free, and Bill Gates tended to hold that which was free in contempt. Anyone who attempted to help him over his mental block — and there were more than a few of them at Microsoft — was greeted with an all-purpose rejoinder: “How are we going to make money off of free?” The biggest revolution in computing since the arrival of the first pre-assembled personal computers back in 1977 was taking place all around him, and Gates seemed constitutionally incapable of seeing it for what it was.

In the meantime, others were beginning to address the vexing question of how you made money out of free. On April 4, 1994, Marc Andreessen, the impetus behind the NCSA Mosaic browser, joined forces with Jim Clark, a veteran Silicon Valley entrepreneur, to found Netscape Communications for the purpose of making a commercial version of the Mosaic browser. A team of programmers, working without consulting the Mosaic source code so as to avoid legal problems, soon did just that, and uploaded Netscape Navigator to the Web on October 13, 1994. Distributed under the shareware model, with a $39 licensing fee requested but not demanded after a 90-day trial period was up, the new browser was installed on more than 10 million computers within nine months.

AOL’s growth had continued apace despite the concurrent explosion of the open Web; by the time of Netscape Navigator’s release, the service had 1.25 million subscribers. Yet Steve Case, no one’s idea of a hardcore techie, was ironically faster to see the potential — or threat — of the Web than was Bill Gates. He adopted a strategy in response that would make him for a time at least a superhero of the business press and the investor set. Instead of fighting the Web, AOL would embrace it — would offer its own Web browser to go along with its proprietary content, thereby adding a gate to its garden wall and tempting subscribers with the best of both worlds. As always for AOL, the whole package would be pitched toward neophytes, with a friendly interface and lots of safeguards — “training wheels,” as the tech cognoscenti dismissively dubbed them — to keep the unwashed masses safe when they did venture out into the untamed wilds of the Web.

But Case needed a browser of his own in order to execute his strategy, and he needed it in a hurry. He needed, in short, to buy a browser rather than build one. He saw three possibilities. One was to bring Netscape and its Navigator into the AOL fold. Another was a small company called Spyglass, a spinoff of the National Center for Supercomputing Applications (NCSA) which was attempting to commercialize the original NCSA Mosaic browser. And the last was a startup called Booklink Technologies, which was making a browser from scratch.

Netscape was undoubtedly the superstar of the bunch, but that didn’t help AOL’s cause any; Marc Andreessen and Jim Clark weren’t about to sell out to anyone. Spyglass, on the other hand, struck Case as an unimaginative Johnny-come-lately that was trying to shut the barn door long after the horse called Netscape had busted out. That left only Booklink. In November of 1994, AOL paid $30 million for the company. The business press scoffed, deeming it a well-nigh flabbergasting over-payment. But Case would get the last laugh.

While AOL was thus rushing urgently to “embrace and extend” the Web, to choose an ominous phrase normally associated with Microsoft, the latter was dawdling along more lackadaisically toward a reckoning with the Internet. During that same busy fall of 1994, IBM released OS/2 3.0, which was marketed as OS/2 Warp in the hope of lending it some much-needed excitement. By either name, it was the latest iteration of an operating system that IBM had originally developed in partnership with Microsoft, an operating system that had once been regarded by both companies as nothing less than the future of mainstream computing. But since the pair’s final falling out in 1991, OS/2 had become an irrelevancy in the face of the Windows juggernaut, winning a measure of affection only in some hacker circles and a few other specialized niches. Despite its snazzy new name and despite being an impressive piece of software from a purely technical perspective, OS/2 Warp wasn’t widely expected to change those fortunes before its release, and this lack of expectations proved well-founded afterward. Yet it was a landmark in another way, being the first operating system to include a Web browser as an integral component, in this case a program called Web Explorer, created by IBM itself because no one else seemed much interested in making a browser for the unpopular OS/2.

This appears to have gotten some gears turning in Bill Gates’s head. Microsoft already planned to include more networking tools than ever before in Windows 95. They had, for example, finally decided to bow to customer demand and build right into the operating system TCP/IP, the networking protocol that allowed a computer to join the Internet; Windows 3 required the installation of a third-party add-on for the same purpose. (“I don’t know what it is, and I don’t want to know what it is,” said Steve Ballmer, Gates’s right-hand man, to his programmers on the subject of TCP/IP. “[But] my customers are screaming about it. Make the pain go away.”) Maybe a Microsoft-branded Web browser for Windows 95 would be a good idea as well, if they could acquire one without breaking the bank.

Just days after AOL bought Booklink for $30 million, Microsoft agreed to give $2 million to Spyglass. In return, Spyglass would give Microsoft a copy of the Mosaic source code, which it could then use as the basis for its own browser. But, lest you be tempted to see this transaction as evidence that Gates’s opinions about the online future had already undergone a sea change by this date, know that the very day this deal went down was also the one on which he chose to publicly announce Microsoft’s own proprietary AOL competitor, to be known as simply the Microsoft Network, or MSN. At most, Gates saw the open Web at this stage as an adjunct to MSN, just as it would soon become to AOL. MSN would come bundled into Windows 95, he told the assembled press, so that anyone who wished to could become a subscriber at the click of a mouse.

The announcement caused alarm bells to ring at AOL. “The Windows operating system is what the dial tone is to the phone industry,” said Steve Case. He thus became neither the first nor the last of Gates’s rivals to hint at the need for government intervention: “There needs to be a level playing field on which companies compete.” Some pundits projected that Microsoft might sign up 20 million subscribers to MSN before 1995 was out. Others — the ones whom time would prove to have been more prescient — shook their heads and wondered how Microsoft could still be so clueless about the revolutionary nature of the World Wide Web.

AOL leveraged the Booklink browser to begin offering its subscribers Web access very early in 1995, whereupon its previously robust rate of growth turned downright torrid. By November of 1995, it would have 4 million subscribers. The personable and photogenic Steve Case became a celebrity in his own right, to the point of starring in a splashy advertising campaign for The Gap’s line of khakis; the man and the pants represented respectively the personification and the uniform of the trend in corporate America toward “business casual.” Meanwhile Case’s company became an indelible part of the 1990s zeitgeist. “You’ve got mail!,” the words AOL’s software spoke every time a new email arrived — something that was still very much a novel experience for many subscribers — was featured as a sample in a Prince song, and eventually became the name of a hugely popular romantic comedy starring Tom Hanks and Meg Ryan. CompuServe and AOL’s other old rivals in the proprietary space tried to compete by setting up Internet gateways of their own, but were never able to negotiate the transition from one era of online life to another with the same aplomb as AOL, and gradually faded into irrelevancy.

Thankfully for Microsoft’s shareholders, Bill Gates’s eyes were opened before his company suffered the same fate. At the eleventh hour, with what were supposed to be the final touches being put onto Windows 95, he made a sharp swerve in strategy. He grasped at last that the open Web was the here, the now, and the future — the first major development in mainstream consumer computing in years that hadn’t been more or less dictated by Microsoft, but one that, like it or not, wasn’t going anywhere. On May 26, 1995, he wrote a memo to every Microsoft employee that exuded an all-hands-on-deck sense of urgency. Gates, the longstanding Internet agnostic, had well and truly gotten the Internet religion.

I want to make clear that our focus on the Internet is critical to every part of our business. The Internet is the most important single development to come along since the IBM PC was introduced in 1981. It is even more important than the arrival of [the] graphical user interface (GUI). The PC analogy is apt for many reasons. The PC wasn’t perfect. Aspects of the PC were arbitrary or even poor. However, a phenomena [sic] grew up around the IBM PC that made it a key element of everything that would happen for the next fifteen years. Companies that tried to fight the PC standard often had good reasons for doing so, but they failed because the phenomena overcame any weakness that [the] resistors identified.

Over the last year, a number of people [at Microsoft] have championed embracing TCP/IP, hyperlinking, HTML, and building clients, tools, and servers that compete on the Internet. However, we still have a lot to do. I want every product plan to try and go overboard on Internet features.

Everything changed that day. Instead of walling its campus off from the Internet, Microsoft put the Web at every employee’s fingertips. Gates himself sent his people lists of hot new websites to explore and learn from. The team tasked with building the Microsoft browser, who had heretofore labored in under-staffed obscurity, suddenly had all the resources of the company at their beck and call. The fact was, Gates was scared; his fear oozes palpably from the aggressive language of the memo above. (Other people talked of “joining” the Internet; Gates wanted to “compete” on it.)

But just what was he so afraid of? A pair of data points provides us with some clues. Three days before he wrote his memo, a new programming language and run-time environment had taken the industry by storm. And the day after he did so, a Microsoft executive named Ben Slivka sent out a memo of his own with Gates’s blessing, bearing the odd title of “The Web Is the Next Platform.” To understand what Slivka was driving at, and why Bill Gates took it as such an imminent existential threat to his company’s core business model, we need to back up a few years and look at the origins of the aforementioned programming language.


Bill Joy, an old-school hacker who had made fundamental contributions to the Unix operating system, was regarded as something between a guru and an elder statesman by 1990s techies, who liked to call him “the other Bill.” In early 1991, he shared an eye-opening piece of his mind at a formal dinner for select insiders. Microsoft was then on the ascendant, he acknowledged, but they were “cruising for a bruising.” Sticking with the automotive theme, he compared their products to the American-made cars that had dominated until the 1970s — until the Japanese had come along peddling cars of their own that were more efficient, more reliable, and just plain better than the domestic competition. He said that the same fate would probably befall Microsoft within five to seven years, when a wind of change of one sort or another came along to upend the company and its bloated, ugly products. Just four years later, people would be pointing to a piece of technology from his own company Sun Microsystems as the prophesied agent of Microsoft’s undoing.

Sun had been founded in 1982 to leverage the skills of Joy along with those of a German hardware engineer named Andy Bechtolsheim, who had recently built an elegant desktop computer inspired by the legendary Alto machines of Xerox’s Palo Alto Research Center. Over the remainder of the 1980s, Sun made a good living as the premier maker of Unix-based workstations: computers that were a bit too expensive to be marketed to even the most well-heeled consumers, but were among the most powerful of their day that could be fit onto or under a single desktop. Sun possessed a healthy antipathy for Microsoft, for all of the usual reasons cited by the hacker contingent: they considered Microsoft’s software derivative and boring, considered the Intel hardware on which it ran equally clunky and kludgy (Sun first employed Motorola chips, then processors of their own design), and loathed Microsoft’s intensely adversarial and proprietorial approach to everything it touched. For some time, however, Sun’s objections remained merely philosophical; occupying opposite ends of the market as they did, the two companies seldom crossed one another’s paths. But by the end of the decade, the latest Intel hardware had advanced enough to be comparable with that being peddled by Sun. And by the time that Bill Joy made his prediction, Sun knew that something called Windows NT was in the works, knew that Microsoft would be coming in earnest for the high-end-computing space very soon.

About six months after Joy played the oracle, Sun’s management agreed to allow one of their star programmers, a fellow named James Gosling, to form a small independent group in order to explore an idea that had little obviously to do with the company’s main business. “When someone as smart as James wants to pursue an area, we’ll do our best to provide an environment,” said Chief Technology Officer Eric Schmidt.

James Gosling

The specific “area” — or, perhaps better said, problem — that Gosling wanted to address was one that still exists to a large extent today: the inscrutability and lack of interoperability of so many of the gadgets that power our daily lives. The problem would be neatly crystallized almost five years later by one of the milquetoast jokes Jay Leno made at the Windows 95 launch, about how the VCR in even Bill Gates’s living room was still blinking “12:00” because he had never figured out how to set the thing’s clock. What if everything in your house could be made to talk together, wondered Gosling, so that setting one clock would set all of them — so that you didn’t have to have a separate remote control for your television and your VCR, each with about 80 buttons whose functions you didn’t understand and never, ever pressed? “What does it take to watch a videotape?” he mused. “You go plunk, plunk, plunk on all of these things in certain magic sequences before you can actually watch your videotape! Why is it so hard? Wouldn’t it be nice if you could just slide the tape into the VCR, [and] the system sort of figures it out: ‘Oh, gee, I guess he wants to watch it, so I ought to power up the television set.'”

But when Gosling and his colleagues started to ponder how best to realize their semi-autonomous home of the future, they tripped over a major stumbling block. While it was true that more and more gadgets were becoming “smart,” in the sense of incorporating programmable microprocessors, the details of their digital designs varied enormously. Each program to link each individual model of, say, VCR into the home network would have to be written, tested, and debugged from scratch. Unless, that is, the program could be made to run in a virtual machine.

A virtual machine is an imaginary computer which a real computer can be programmed to simulate. It permits a “write once, run everywhere” approach to software: once a given real computer has an interpreter for a given virtual machine, it can run any and all programs that have been or will be written for that virtual machine, albeit at some cost in performance.
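To make the concept a bit more concrete, here is a minimal illustrative sketch of a stack-based virtual machine, written in Java for the sake of familiarity. The handful of opcodes are invented for this example and bear no relation to Gosling's actual bytecode design; the point is simply that any computer with an interpreter like this one can run the same "bytecode" program unchanged.

// A toy stack-based virtual machine. The opcodes are invented for illustration;
// any host computer with an interpreter like this can run the same "bytecode"
// program without modification.
public class ToyVM {
    static final int PUSH = 0, ADD = 1, MUL = 2, PRINT = 3, HALT = 4;

    static void run(int[] code) {
        int[] stack = new int[256];
        int sp = 0;  // stack pointer
        int pc = 0;  // program counter
        while (true) {
            switch (code[pc++]) {
                case PUSH:  stack[sp++] = code[pc++]; break;
                case ADD:   { int b = stack[--sp], a = stack[--sp]; stack[sp++] = a + b; break; }
                case MUL:   { int b = stack[--sp], a = stack[--sp]; stack[sp++] = a * b; break; }
                case PRINT: System.out.println(stack[sp - 1]); break;
                case HALT:  return;
            }
        }
    }

    public static void main(String[] args) {
        // "Bytecode" for (2 + 3) * 4 — the same array runs anywhere the interpreter does.
        run(new int[] { PUSH, 2, PUSH, 3, ADD, PUSH, 4, MUL, PRINT, HALT });
    }
}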

Like almost every other part of the programming language that would eventually become known as Java, the idea of a virtual machine was far from new in the abstract. (“In some sense, I would like to think that there was nothing invented in Java,” says Gosling.) For example, a decade before Gosling went to work on his virtual machine, the Apple Pascal compiler was already targeting one that ran on the lowly Apple II, even as the games publisher Infocom was distributing its text adventures across dozens of otherwise incompatible platforms thanks to its Z-Machine.

Unfortunately, Gosling’s new implementation of this old concept proved unable to solve by itself the original problem for which it had been invented. Even Wi-Fi didn’t exist at this stage, much less the likes of Bluetooth. Just how were all of these smart gadgets supposed to actually talk to one another, to say nothing of pulling down the regular software updates which Gosling envisioned as another benefit of his project? (Building a floppy-disk drive into every toaster was an obvious nonstarter.) After reluctantly giving up on their home of the future, the team pivoted for a while toward “interactive television,” a would-be on-demand streaming system much like our modern Netflix. But Sun had no real record in the consumer space, and cable-television providers and other possible investors were skeptical.

While Gosling was trying to figure out just what this programming language and associated runtime environment he had created might be good for, the World Wide Web was taking off. In July of 1994, a Sun programmer named Patrick Naughton did something that would later give Bill Gates nightmares: he wrote a fairly bare-bones Web browser in Java, more for the challenge than anything else. A couple of months later there came the eureka moment: Naughton and another programmer named Jonathan Payne made it possible to run other Java programs, or “applets” as they would soon be known, right inside their browser. They stuck one of the team’s old graphical demos on a server and clicked the appropriate link, whereupon they were greeted with a screen full of dancing Coca-Cola cans. Payne found it “breathtaking”: “It wasn’t just playing an animation. It was physics calculations going on inside a webpage!”

In order to appreciate his awe, we need to understand what a static place the early Web was. HTML, the “language” in which pages were constructed, was an abbreviation for “Hypertext Markup Language.” In form and function, it was more akin to a typesetting specification than a Turing-complete programming language like C or Pascal or Java; the only form of interactivity it allowed for was the links that took the reader from static page to static page, while its only visual pizazz came in the form of static in-line images (themselves a relatively recent addition to the HTML specification, thanks to NCSA Mosaic). Java stood to change all that at a stroke. If you could embed programs running actual code into your page layouts, you could in theory turn your pages into anything you wanted them to be: games, word processors, spreadsheets, animated cartoons, stock-market tickers, you name it. The Web could almost literally come alive.
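For a sense of what this looked like in practice, here is a minimal hypothetical sketch written against the old java.applet API, which has long since been deprecated: a trivial applet, plus, in a comment, the sort of <applet> tag a page author of the era would have used to embed it. The class name and the page details are invented purely for illustration.

import java.applet.Applet;   // the old applet API, long since deprecated
import java.awt.Graphics;

// A page author of the era would have embedded this with something like:
//   <applet code="HelloApplet.class" width="300" height="100"></applet>
// The browser's Java virtual machine downloaded the compiled class and ran it
// right inside the page, sandboxed away from the rest of the local machine.
public class HelloApplet extends Applet {
    @Override
    public void paint(Graphics g) {
        g.drawString("Hello from inside a webpage!", 20, 50);
    }
}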

The potential was so clearly extraordinary that Java went overnight from a moribund project on the verge of the chopping block to Sun’s top priority. Even Bill Joy, now living in blissful semi-retirement in Colorado, came back to Silicon Valley for a while to lend his prodigious intellect to the process of turning Java into a polished tool for general-purpose programming. There was still enough of the old-school hacker ethic left at Sun that management bowed to the developers’ demand that the language be made available for free to individual programmers and small businesses; Sun would make its money on licensing deals with bigger partners, who would pay for the Java logo on their products and the right to distribute the virtual machine. The potential of Java certainly wasn’t lost on Netscape’s Marc Andreessen, who had long been leading the charge to make the Web more visually exciting. He quickly agreed to pay Sun $750,000 for the opportunity to build Java into the Netscape Navigator browser. In fact, it was Andreessen who served as master of ceremonies at Java’s official coming-out party at a SunWorld conference on May 23, 1995 — i.e., three days before Bill Gates wrote his urgent Internet memo.

What was it that so spooked him about Java? On the one hand, it represented a possible if as-yet unrealized challenge to Microsoft’s own business model of selling boxed software on floppy disks or CDs. If people could gain access to a good word processor just by pointing their browsers to a given site, they would presumably have little motivation to invest in Microsoft Office, the company’s biggest cash cow after Windows. But the danger Java posed to Microsoft might be even more extreme. The most maximalist predictions, which were being trumpeted all over the techie press in the weeks after the big debut, had it that even Windows could soon become irrelevant courtesy of Java. This is what Microsoft’s own Ben Slivka meant when he said that “the Web is the next platform.” The browser itself would become the operating system from the perspective of the user, being supported behind the scenes only by the minimal amount of firmware needed to make it go. Once that happened, a new generation of cheap Internet devices would be poised to replace personal computers as the world now knew them. With all software and all of each person’s data being stored in the cloud, as we would put it today, even local hard drives might become passé. And then, with Netscape Navigator and Java having taken over the role of Windows, Microsoft might very well join IBM, the very company it had so recently displaced from the heights of power, in the crowded field of computing’s has-beens.

In retrospect, such predictions seem massively overblown. Officially labeled beta software, Java was in reality more like an alpha release at best at the time it was being celebrated as the Paris to Microsoft’s Achilles, being painfully crash-prone and slow. And even when it did reach a reasonably mature form, the reality of it would prove considerably less than the hype. One crippling weakness that would continue to plague it was the inability of a Java applet to communicate with the webpage that spawned it; applets ran in Web browsers, but weren’t really of them, being self-contained programs siloed off in a sandbox from the environment that hosted them. Meanwhile the prospects of applications like online word processing, or even online gaming in Java, were sharply limited by the fact that at least 95 percent of Web users were accessing the Internet on dial-up connections, over which even the likes of a single high-resolution photograph could take minutes to load. A word processor like the one included with Microsoft Office would require hours of downloading every time you wanted to use it, assuming it was even possible to create such a complex piece of software in the fragile young language. Java never would manage to entirely overcome these issues, and would in the end enjoy its greatest success in other incarnations than that of the browser-embedded applet.

Still, cooler-headed reasoning like this was not overly commonplace in the months after the SunWorld presentation. By the end of 1995, Sun’s stock price had more than doubled on the strength of Java alone, a product yet to see a 1.0 release. The excitement over Java probably contributed as well to Netscape’s record-breaking initial public offering in August. A cavalcade of companies rushed to follow in the footsteps of Netscape and sign Java distribution deals, most of them on markedly more expensive terms. Even Microsoft bowed to the prevailing winds on December 7 and announced a Java deal of its own. (BusinessWeek magazine described it as a “capitulation.”) That all of this was happening alongside the even more intense hype surrounding the release of Windows 95, an operating system far more expansive than any that had come out of Microsoft to date but one that was nevertheless of a very traditionalist stripe at bottom, speaks to the confusion of these go-go times when digital technology seemed to be going anywhere and everywhere at once.

Whatever fear and loathing he may have felt toward Java, Bill Gates had clearly made his peace with the fact that the Web was computing’s necessary present and future. The Microsoft Network duly debuted as an icon on the default Windows 95 desktop, but it was now pitched primarily as a gateway to the open Web, with just a handful of proprietary features; MSN was, in other words, little more than yet another Internet service provider, of the sort that were popping up all over the country like dandelions after a summer shower. Instead of the 20 million subscribers that some had predicted (and that Steve Case had so feared), it attracted only about 500,000 customers by the end of the year. This left it no more than one-eighth as large as AOL, which had by now completed its own deft pivot from proprietary online service of the 1980s type to the very face of the World Wide Web in the eyes of countless computing neophytes.

Yet if Microsoft’s first tentative steps onto the Web had proved underwhelming, people should have known from the history of the company — and not least from the long, checkered history of Windows itself — that Bill Gates’s standard response to failure and rejection was simply to try again, harder and better. The real war for online supremacy was just getting started.

(Sources: the books Overdrive: Bill Gates and the Race to Control Cyberspace by James Wallace, The Silicon Boys by David A. Kaplan, Architects of the Web by Robert H. Reid, Competing on Internet Time: Lessons from Netscape and Its Battle with Microsoft by Michael Cusumano and David B. Yoffie, dot.con: The Greatest Story Ever Sold by John Cassidy, Stealing Time: Steve Case, Jerry Levin, and the Collapse of AOL Time Warner by Alec Klein, Fools Rush In: Steve Case, Jerry Levin, and the Unmaking of AOL Time Warner by Nina Munk, and There Must be a Pony in Here Somewhere: The AOL Time Warner Debacle by Kara Swisher.)

 
 


Games on the Net Before the Web, Part 3: The Persistent Multiplayer CRPG

The first CRPG to go online appeared on The Source, CompuServe’s most prominent early competitor. Black Dragon, written by a programmer of telephone switching systems named Bob Maples, was at bottom a simplified version of Wizardry — not a hugely surprising state of affairs, given that it made its debut in 1981, at the height of the Wizardry craze. The player created a character — just one, not a full party as in Wizardry — and then began a series of expeditions into the game’s ten-level labyrinth, fighting monsters, collecting equipment and experience, and hopefully penetrating a little deeper with each outing. Only the character’s immediate surroundings were described on the scrolling, text-only display, so careful mapping became every bit as critical as it was in Wizardry. The ultimate goal, guaranteed to consume many hours — not to mention a small fortune in connection charges — was to kill Asmodeus, the black dragon of the title, who lurked down on the tenth level. Any player who managed to accomplish that feat and escape back to the surface was rewarded by seeing her name along with her character’s immortalized on the game’s public wall of fame.

Those bragging rights aside, Black Dragon had no multiplayer aspect at all, which might lead one to ask why its players didn’t just pick up a copy of Wizardry instead; doing so would certainly have been cheaper in the long run. But the fact is that not every Source subscriber’s computer could run Wizardry in those early days. Certainly Black Dragon proved quite popular as the first game of its kind. Sadly lost to history now, it has been described by some of its old players as far more cleverly designed than its bare-bones presentation and its willingness to unabashedly ride Wizardry‘s coattails might lead one to believe.

Bill Louden, the “games guy” over at CompuServe, naturally followed developments on The Source closely. The success of Black Dragon led him to launch a somewhat more sophisticated single-player CRPG in 1982. Known as Dungeons of Kesmai, it was, as the name would imply, another work of the indefatigable John Taylor and Kelton Flinn — i.e., Kesmai, the programmers also responsible for CompuServe’s MegaWars III and, a bit later, for GEnie’s Air Warrior. Like so many of CompuServe’s staple games, Dungeons of Kesmai would remain on the service for an absurdly long time, until well into the 1990s.

But more ambitious games would come down the pipe well before then. A few years after these first single-player online CRPGs debuted, CompuServe made the leap to multiplayer virtual worlds. As we’ve already seen in my previous article, MUD washed up from British shores in the spring of 1986 under the name of British Legends, bringing with it the idea of the multiplayer text adventure as virtual world. Yet even before that happened, in December of 1985, the CRPG genre had already made the same leap thanks to still another creation from Kesmai: Island of Kesmai.

Taylor and Flinn had originally hoped to make Dungeons of Kesmai something akin to the game which Island would later become, but that project had been cut back to a single-player game when Bill Louden deemed it simply too ambitious for such an early effort. Undaunted, Kesmai treated Dungeons as a prototype for their real vision for a multiplayer CRPG and just kept plugging away. They had neither seen nor heard of MUD while developing the more advanced game, meaning that said game’s innovations, which actually hew much closer than MUD’s to the massively-multiplayer games to come, were all its own.

Island of Kesmai demonstrated just how far games’ presentation had come on CompuServe in the three years of creeping advancement that had followed Dungeons of Kesmai. While it was still limited to text and crude character graphics, the latest terminal protocols did allow it to make use of color, and to divide the screen into quadrants dedicated to different purposes: a status “window” showing the state of the player’s character, a pseudo-graphical overhead view of the character’s surroundings, a text area for descriptions of the environment, a command line for the player to issue orders. Island of Kesmai looked like a roguelike, a genre of hardcore tactical CRPG that was a bigger favorite with hackers than with commercial game developers. This roguelike, however, was a multiplayer game set in a persistent world, and that changed everything.

Island of Kesmai used the ASCII graphics typical of roguelikes. Here “>” represents the player’s character; “A” is a monster; “B” is another player’s character; “@@” is a spider web; and “$” is a treasure or other item. The brackets are walls, while “–” represents a closed door and “/” an open one.

As with British Legends, up to 100 players could share Island of Kesmai‘s persistent world at the same time. Yet Kesmai’s creation was a far more coherent, far more designed experience than the cheerful insanity that was life on MUD. Players chose a class for their characters, along with an alignment, a gender, and even a land of origin. As befitted the game’s grounding in CRPG rather than text-adventure tradition, combat was a far more elaborate and tactical affair than in MUD. You had to reckon with the position of your character and your opponents; had to worry about initiative and fatigue; could be stunned or poisoned or even fumble your weapon. The magic system, too, was far more advanced and subtle than MUD‘s handful of ad-hoc spells that had often been added as much for comedic value as anything else.

The Island that gave the game its name was divided into five regions, comprising in total some 62,000 discrete locations, over which roamed some 2500 creatures in addition to one’s fellow players. The game was consciously designed to support differing levels of player engagement. “A person can play casually or seriously,” said Ben Shih, a “scenario designer” hired by Kesmai to continue evolving the game. “He or she can relax and take out frustrations on a few goblins or unwind by joining other players in hunting bear and griffin. But to become a superstar, a ‘mega-character,’ takes time.”

Ben Shih, John Taylor, and Kelton Flinn of Kesmai.

Scenario designers like Shih added content on a regular basis to keep the game fresh even for veteran players, sometimes giving a unique artifact to the first player to complete a new quest. Kelton Flinn was still excited about adding new stuff to the game three years after it had first gone online:

We don’t feel we’re designing games. We’re designing simulations. We create a world and then we let the players roam around in it. Of course, we’re always adding to our view of the world, fiddling with things all the time, creating new treasures, making things work better. I suppose at some point you have to call a halt and say, “Let’s see if we want to make a clean break and try something bigger.” But we haven’t reached that stage yet.

For all the changes the game went through, the ultimate achievement in Island of Kesmai remained always to kill the dragon, the toughest monster in the game. Players who did so were rewarded with everlasting fame as part of the true elite. As for the dragon: he of course re-spawned in a few days’ time, ready to serve as fodder for the next champion.

Those who hoped to do well were almost forced to buy the 181-page manual for the game, available for the low, low price of $16.50 directly from CompuServe. A rather stunning percentage of the elements described therein would still ring true to any World of Warcraft player of today. There was, for instance, a questing system, a ladder of challenges offering ever greater rewards in return for surviving ever greater dangers. Even those looking for an equivalent to the endless stream of World of Warcraft expansions can find it with Island of Kesmai. In 1988, Kesmai opened up the new lands of Torii and Annwn, filled with "more powerful weapons, tougher monsters, and a variety of treasures." Advanced players were allowed to travel there only after their characters had hit the old Island's level cap, and weren't allowed to return after they passed through the magic portal, lest they wreak havoc among the less powerful monsters and characters they had left behind.

While play on the Island was much more structured than it was in The Land of MUD, it was still the other players who really made the game what it was. Taylor and Flinn went into the project understanding that, and even anticipating to an extraordinary degree the shape of virtual societies to come. “We fully expect that a political system will evolve,” said Taylor upon the game’s launch, “and someone may even try to proclaim himself King of Kesmai.” Much of the design was put in place to emphasize the social aspect of the game. For example, a conference room was provided for strategizing and conspiring, and many quests were deliberately designed to require the cooperation of several characters. The verbiage adopted by players in relation to the quest system still rings true to modern ears. For example, a verb was coined for those loners determined to undertake quests on their own: to “solo.”

Although player-versus-player combat was allowed, it was restricted to specific areas of the Island; an attempt to attack another character in a "civilized" area, such as the town where new players began their adventures, would be met by the Sheriff, an invincible non-player character guaranteed to grind the brawniest hero into dust. Alignment also played a role: a karma meter kept track of players' actions, such that assault or theft would gradually turn a good character neutral, then finally evil. The last alignment was highly undesirable from many perspectives, not least in that it would prevent you from entering the town, with its shops, bars, and trainers.

And there were still other mechanisms for discouraging the veterans from tormenting the newbies in the way so many MUD players so enjoyed. Players were urged to report griefers who preyed excessively upon newbies, even if they only did so in the dungeons and other “uncivilized” areas where player-versus-player combat was technically allowed. If enough people lodged complaints against them, the griefers might find themselves visited by the wrath of the “ghods of Kesmai,” the game’s administrators — the alternate spelling was used so as not to offend the religious — who might take away experience points, steal back their precious magic items, or just smite them dead as punishment. The game thus tended to foster a less cutthroat, more congenial atmosphere than MUD, with most players preferring to band together against the many monsters rather than fight with one another.

A journalist from the magazine Compute!'s Gazette shared this tale of his own almost unbelievably positive first encounter with another player in the game:

I desperately wish I could afford to buy a few bottles of balm sold by the vendor here in the nave, but at 16 gold pieces each they are far above my limited budget. Another player walks in from the square. “Hello, Cherp!” she says, looking at me. Taking a close look at her, I recognize Lynn, a middle-aged female fighter from my home country of Mnar.

“Howdy to you. Are you headed down into the dungeon? I’ve just arrived and this is my first trip down,” I tell her.

“Ah, I see. Yes, I was headed down, but I don’t think it’s safe for you to hunt where I’ll be going. Do you have any balm yet?” she asks as she stands next to the balm vendor.

“No, I haven’t got the gold to afford it,” I say hesitantly.

“No problem. I have a few extra pieces. Come and get them.”

“Thank you very much,” I say. Lynn drops some gold on the ground, and we wait as the vendor takes the gold and drops the balm bottles for us. I pick up the bottles and add them to my meager possessions.

“I can’t thank you enough for this,” I say. “Is there some way I can repay you? Perhaps we could meet here again later and I could give you some balms in return.”

“No,” she laughs, “I have no need of them. Just remember there are always other players who are just starting out. They may find themselves in the same position you are in now. Try to lend them a hand when you are sufficiently strong.”

At the risk of putting too fine a point on it, I will just note one more time that this attitude stands in marked contrast to the newbie-tormenting that the various incarnations of MUD always seemed to engender. At least one player of Island of Kesmai so distinguished himself through his knowledge of the game and his sense of community spirit that he was hired by Kesmai to design new challenges and serve as a community liaison — a wiz mode of a different and much more lucrative stripe.

But the community spirit of Island of Kesmai at its finest is perhaps best exemplified by Valis, one of the game’s most accomplished players. This online CRPG was actually the first RPG of any stripe he had ever managed to enjoy, despite attending university during the height of the Dungeons & Dragons fad: “I could never get into sitting around eating crackers and cheese doodles and arguing for twelve hours at a time. I can do as much in a half hour in Island of Kesmai as they did in twelve hours.” Valis became the first person to exhaustively map the entire Island, uploading the results to the service’s file libraries for the benefit of all. Further, he put together a series of beginners classes for those new to what could be a very daunting game. CompuServe’s hapless marketers advertised his efforts as an “escort service,” a name which perhaps didn’t convey quite the right impression.

We think we’ve come up with the perfect way of teaching a beginners class. We spend an hour or so in the conference area with a lecture and questions. Then we go on a “field trip” to the Island itself. I lead the beginners onto the Island, where we encounter a few things and look for some treasure. That usually is enough to get them started.

In many respects, the personal stories that emerged from Island of Kesmai will ring very familiar to anyone who’s been reading my recent articles, as they will to anyone familiar with the massively-multiplayer games of today. Carrie Washburn discovered the game in 1986, just after her son was born fourteen weeks premature. During the months the baby spent in intensive care, Island of Kesmai became the “link back to reality” for her and her husband. After spending the day at the hospital, they “would enter a fantasy world in order to forget the real one. The online friends that we met there helped pull us through.” Of course, the escape wasn’t without cost: Washburn’s monthly CompuServe bill routinely topped $500, and once hit $2000. Later she divorced her husband and took to prancing around the Island as the uninhibited Lynn De’Leslie — “more of a slut, really” — until she met her second husband there. Her sentiments about it all echoed those expressed by the CB Simulator fraternity on another part of CompuServe: “One of the great things about meeting people online is that you get to really know them. The entire relationship is built on talking.” (Appropriately enough for a talker, Washburn went on to find employment as the administrator of the Multiplayer Games Roundtable on GEnie.)

Kelton Flinn once called Island of Kesmai “about as complicated as a game can be on a commercial system.” Yet it deserves to be remembered for the thought that went into it even more than for its complexity. Almost every issue that designers of the massively-multiplayer games of today deal with was anticipated and addressed by Kesmai — sometimes imperfectly, yes, but then many of the design questions which swirl around the format have arguably still not been met with perfect answers even today. Incredibly, Island of Kesmai went online in December of 1985 with almost all of its checks and balances already in place, so thoroughly had its designers thought about what they were creating and where it would lead. To use Richard Bartle’s terminology from my previous article, Island of Kesmai was a “product” rather than a “program,” and it was all the better for it. While MUD strikes me as a pioneering work with an awful lot of off-putting aspects, such that I probably wouldn’t have lasted five minutes if I’d stumbled into it as a player, Island of Kesmai still sounds like it must have been fantastic to play.

 

One big name in the field of single-player graphical CRPGs took note of what was going on on the Island quite early. In 1987, a decade before Ultima Online would take the games industry by storm, Richard Garriott and Origin Systems began to do more than just muse about the potential for a multiplayer Ultima. They assigned at least one programmer to work full-time on the technology that could enable just such a product. This multiplayer Ultima was envisioned on a more modest scale than the eventual Ultima Online or even the Island of Kesmai of the time. It was described by Garriott thus: "What you'll buy in the store will be a package containing all the core graphics routines and the game-development stuff (all the commands and so on), which you could even plug into your computer and play as a standalone. But with a modem you could tie a friend into the game, or up to somewhere between eight and sixteen other players, all within the same game." Despite the modest number of players the game would support and the apparent lack of plans for a persistent world, Origin did hold out the prospect of a partnership with CompuServe. In the end, though, none of it went anywhere. After 1987 the idea of a multiplayer Ultima was shelved for a long, long time; Origin presumably deemed it too much of a distraction from their bread-and-butter single-player CRPG franchise.

Another of the big single-player CRPG franchises, however, would make the leap — and not just to multiplayer but all the way to a persistent virtual world like that of MUD or Island of Kesmai. Rather than running on the industry-leading CompuServe or even the gamer haven of GEnie, this pioneering effort would run on the nascent America Online.

Don Daglow was already a grizzled veteran of the games industry when he founded a development company called Beyond Software (no relation to the British company of the same name) in 1988. He had programmed games for fun on his university's DEC PDP-10 in the 1970s, programmed them for money at Intellivision in the early 1980s, been one of the first producers at Electronic Arts in the mid-1980s — working on, among other titles, Thomas M. Disch's flawed but fascinating text adventure Amnesia and the hugely lauded baseball simulation Earl Weaver Baseball — and finally spent some time in the same role at Brøderbund. At last, though, he had "got itchy" to do something that would be all his own. Beyond was his way of scratching that itch.

Thanks to Daglow’s industry connections, Beyond hit the ground running, establishing solid working relationships with two very disparate companies: Quantum Computer Services, who owned and operated America Online, and the boxed-game publisher SSI. Daglow actually signed on with the former the day after forming his company, agreeing to develop some simple games for their young online service which would prove to be the very first Beyond games to see the light of day. Beyond’s relationship with the latter would lead to the publication of another big-name-endorsed baseball simulation: Tony La Russa’s Ultimate Baseball, which would sell an impressive 85,684 copies, thereby becoming SSI’s most successful game to date that wasn’t an entry in their series of licensed Dungeons & Dragons games.

As it happened, though, Beyond's relationship with SSI also came to encompass that license in fairly short order. They contracted to create some new Dungeons & Dragons single-player CRPGs, using the popular but aging Gold Box engine which SSI had heretofore reserved for in-house titles; the Beyond games were seen by SSI as a sort of stopgap while their in-house staff devoted themselves to developing a next-generation CRPG engine. Beyond's efforts on this front would result in a pair of titles, Gateway to the Savage Frontier and Treasures of the Savage Frontier, before the disappointing sales of the latter told both parties that the jig was well and truly up for the Gold Box engine.

By Don Daglow’s account, the first graphical multiplayer CRPG set in a persistent world was the product of a fortunate synergy between the work Beyond was doing for AOL and the work they were doing for SSI.

I realized that I was doing online games with AOL and I was doing Dungeons & Dragons games with SSI. Nobody had done a graphical massively-multiplayer online game yet. Several teams had tried, but nobody had succeeded in shipping one. I looked at that, and said, “Wait, I know how to do this because I understand how the Dungeons & Dragons system works on the one hand, and I understand how online works on the other.” I called up Steve Case [at AOL], and Joel Billings and Chuck Kroegel at SSI, and said, “If you guys want to give it a shot, I can give you a graphical MMO, and we can be the first to have it.”

The game was christened Neverwinter Nights. "Neverwinter" was the area of the Forgotten Realms campaign setting which TSR, makers of the Dungeons & Dragons tabletop RPG, had carved out for Beyond to set their games in; the two single-player Savage Frontier games were also set in the region. The "Nights," meanwhile, was a sly allusion to the fact that AOL — and thus this game — was only available on nights and weekends, when the nation's telecommunications lines could be leased relatively cheaply.

Neverwinter Nights had to be purchased as a boxed game before players could start paying AOL’s connection fees to actually play it. It looked almost indistinguishable from any other Gold Box title on store shelves — unless one noticed the names of America Online and Quantum Computer Services in the fine print.

On the face of it, Neverwinter Nights was the ugliest of kludges. Beyond took SSI's venerable Gold Box engine, which had never been designed to incorporate multiplayer capabilities, and grafted exactly those capabilities onto it. At first glance, the end result looked the same as any of the many other Gold Box titles, right down to the convoluted interface that had been designed before mice were standard equipment on most computers. But when you looked closer, the differences started to show. The player now controlled just one character instead of a full party; parties were formed by multiple players coming together to undertake a quest. To facilitate organizing and socializing, a system for chatting with other players in the same map square had been added. And, in perhaps the trickiest and certainly the kludgiest piece of the whole endeavor, the turn-based Gold Box engine had been converted into a pseudo-real-time proposition that worked just well enough to make multiplayer play possible.
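It's easier to picture what such a conversion entails with a small sketch. The Python below is my own loose illustration, not anything derived from the actual Gold Box code: the basic trick in games of this sort is to give each combatant a short window in which to act and simply move on when the window expires, so that one idle or disconnected player can no longer freeze a battle for everyone else.

```python
import time

# Loose illustration of converting a turn-based combat loop into a
# pseudo-real-time one: each player's turn expires after a timeout,
# so the round keeps advancing even if somebody walks away from the
# keyboard. Nothing here comes from the real Gold Box engine.

TURN_TIMEOUT = 2.0   # seconds a player gets before the engine moves on
POLL_INTERVAL = 0.1  # how often we check for an incoming command

class Combatant:
    def __init__(self, name, is_player):
        self.name = name
        self.is_player = is_player

    def poll_for_action(self):
        """Stand-in for checking whether this player's command has arrived
        over the network; the demo pretends nothing ever does."""
        return None

def run_round(combatants):
    for c in combatants:
        if not c.is_player:
            print(f"{c.name} (computer-controlled) acts immediately.")
            continue
        deadline = time.time() + TURN_TIMEOUT
        action = None
        while time.time() < deadline:
            action = c.poll_for_action()
            if action:
                break
            time.sleep(POLL_INTERVAL)   # don't busy-wait
        if action:
            print(f"{c.name} performs: {action}")
        else:
            print(f"{c.name} hesitated too long; the engine moves on.")

if __name__ == "__main__":
    run_round([Combatant("Goblin", False), Combatant("Alric the Fighter", True)])
```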

It made for a strange hybrid to say the least — one which Richard Bartle for one dismisses as “innovative yet flawed.” Yet somehow it worked. After launching the game in June of 1991 with a capacity of 100 simultaneous players, Beyond and AOL were soon forced by popular demand to raise this number to 500, thus making Neverwinter Nights the most populous virtual world to go online to date. And even at that, there were long lines of players during peak periods waiting for others to drop out of the game so they could get into it, paying AOL’s minute-by-minute connection fee just to stand in the queue.

While players and would-be players of online CRPGs had undoubtedly been dreaming of the graphics which Neverwinter Nights offered for a long time, smart design was perhaps equally important to the game’s long-term popularity. To an even greater degree than Island of Kesmai, Neverwinter Nights strove to provide a structure for play. Don Daglow had been interested in online gaming for a long time, had played just about all of what was available, and had gone into this project with a clear idea of exactly what sort of game he wanted Neverwinter Nights to be. It was emphasized from the get-go that this was not to be a game of direct player-versus-player conflict. In fact, Beyond went even Kesmai one better in this area, electing not just to ban such combat from certain parts of the game but to ban it entirely. Neverwinter Nights was rather to be a game of cooperation and friendly competition. Players would meet on the town’s central square, form themselves into adventuring parties, and be assigned quests by a town clerk — shades of the much-loved first Gold Box game, Pool of Radiance — to kill such-and-such a monster or recover such-and-such a treasure. Everyone in the party would then share equally in the experience and loot that resulted. Even death was treated relatively gently: characters would be revived in town minus all of the stuff they had been toting along with them, but wouldn’t lose the armor, weapons, and magic items they had actually been using — much less lose their lives permanently, as happened in MUD.

One player’s character has just cast feeblemind on another’s, rendering him “stupid.” This became a sadly typical sight in the game.

Beyond's efforts to engender the right community spirit weren't entirely successful; players did find ways to torment one another. While player characters couldn't attack one another physically, they could cast spells at one another — a necessary capability if a party's magic-using characters were to be able to cast "buffing" spells on the fighters before and during combat. A favorite tactic of the griefers was to cast the "feeblemind" spell several times in succession on the newbies' characters, reducing their intelligence and wisdom scores to the rock bottom of 3, thus making them for all practical purposes useless. One could visit a temple to get this sort of thing undone, but that cost gold the newbies didn't have. By most accounts, there was much more of this sort of willful assholery in Neverwinter Nights than there had been in Island of Kesmai, notwithstanding the even greater lengths to which Beyond had gone to prevent it. Perhaps it was somehow down to the fact that Neverwinter Nights was a graphical game — however crude the graphics were even by the standards of its own time — that it attracted a greater percentage of such immature players.

Griefers aside, though, Neverwinter Nights had much to recommend it, as well as plenty of players happy to play it in the spirit Beyond had intended. Indeed, the devotion the game’s most hardcore players displayed remains legendary to this day. They formed themselves into guilds, using that very word for the first time to describe such aggregations. They held fairs, contests, performances, and the occasional wedding. And they started at least two newsletters to keep track of goings-on in Neverwinter. Some issues have been preserved by dedicated fans, allowing us today a glimpse into a community that was at least as much about socializing and role-playing as monster-bashing. The first issue of News of the Realm, for example, tells us that Cyric has just become a proud father in the real world; that Vulcan and Dramia have opened their own weapons shop in the game; that Cold Chill the notorious bandit has shocked everyone by recognizing the errors of his ways and becoming good; that the dwarves Nystramo and Krishara are soon to hold their wedding — or, as dwarves call it, their “Hearth Building.” Clearly there was a lot going on in Neverwinter.

The addition of graphics would ironically limit the lifespan of many an online game; while text is timeless, computer graphics, especially in the fast-evolving 1980s and 1990s, had a definite expiration date. Under the circumstances, Neverwinter Nights had a reasonably long run, remaining available for six years on AOL. Over the course of that period online life and computer games both changed almost beyond recognition. Already looking pretty long in the tooth when Neverwinter Nights made its debut in 1991, the Gold Box engine by 1997 was a positive antique.

Despite the game's all-too-obvious age, AOL's decision to shut it down in July of 1997 was greeted with outrage by its rabid fan base, some of whom still nurse a strong sense of grievance to this day. But exactly how large that fan base still was by 1997 is a little uncertain. The Neverwinter Nights community insisted (and continues to insist) that the game was as popular as ever, making the claim, of uncertain provenance, that AOL was still making good money from it. Richard Bartle makes the eye-popping claim today, also without attribution, that it was still bringing in fully $5 million per year. Yet the reality remains that this was an archaic MS-DOS game at a time when software in general had largely completed the migration to Windows. It was only getting more brittle as it fell further and further behind the times. Just two months after the plug was pulled on Neverwinter Nights, Ultima Online debuted, marking the beginning of the modern era of massively-multiplayer CRPGs as we've come to know them today. Neverwinter Nights would have made for a sad sight in any direct comparison with Ultima Online. It's understandable that AOL, never an overly games-focused service to begin with, would want to get out while the getting was good.

Even in its heyday, when the land of Neverwinter was stuffed to its 500-player capacity every night and more players were lining up outside, its popularity was never all that great in the grand scheme of the games industry; that very capacity limit if nothing else saw to that. Nevertheless, its place in gaming lore as a storied pioneer was such that BioWare chose to revive the name in 2002 in the form of a freestanding boxed CRPG with multiplayer capabilities. That version of Neverwinter Nights was played by many, many times more people than the original — and yet it could never hope to rival its predecessor's claim to historical importance.

The massively-multiplayer online CRPGs that would follow the original Neverwinter Nights would be slicker, faster, in some ways friendlier, but the differences would be of degree, not of kind. MUD, Island of Kesmai, and Neverwinter Nights between them had invented a genre, going a long way in the process toward showing any future designers who happened to be paying attention exactly what worked there and what didn’t. All that remained for their descendants to do was to popularize it, to make it easier and cheaper and more convenient to lose oneself in a shared virtual world of the fantastic.

(Sources: the books MMOs from the Inside Out by Richard Bartle and Gamers at Work: Stories Behind the Games People Play by Morgan Ramsay; Online Today of February 1986, April 1986, August 1986, June 1987, January 1988, August 1988, September 1988, and February 1989; Computer Gaming World of June/July 1986; The Gamers Connection of September/October 1988; Compute!'s Gazette of July 1989; Compute! of November 1991; the SSI archive at the Strong Museum of Play. Online sources include Barbara Baser's Black Dragon walkthrough, as preserved by Arthur J. O'Dwyer; "The Game Archaeologist Discovers the Island of Kesmai" from Engadget. Readers may also be interested in the CRPG Addict's more experiential impression of playing Neverwinter Nights offline — and be sure to check out the comments to that article for some memories of old players.)

 

A Net Before the Web, Part 5: The Pony

Even as Bill von Meister and company were flailing away at GameLine, a pair of former General Electric research scientists in Troy, New York, were working on the idea destined to become Control Video’s real future. Howard S. Goldberg and David Panzl had spent some time looking at online services like CompuServe and The Source, and had decided that they could never become a truly mass-market phenomenon in their current form. In an era when far more people watched television than read books, all that monochrome text unspooling slowly down the screen would cause the vast majority of potential customers to run away screaming.

Goldberg and Panzl thought they saw a better model. The Apple Lisa had just been released, the Macintosh was waiting in the wings, and you couldn't swing a stick at any computer conference without hitting someone with the phrase "graphical user interface" on their lips. Simplicity was the new watchword in computing. Goldberg and Panzl believed that anyone who could make a point-and-shoot online service to go up against the SLR complexity of current offerings could make a killing.

But how to do so, given the current state of technology? It was all a 300-baud modem could do to transfer text at a reasonable speed. Graphics were out of the question.

Or were they? What if the graphics could be stored locally, on the subscriber’s computer, taking most of the load off the modem? Goldberg and Panzl envisioned a sort of hybrid service, in which as much code and data as possible was stored on a disk that would be sent out to subscribers rather than on the service’s big computers. With this approach, you would be able to navigate through the service’s offerings using a full GUI, which would run via a local application on your computer. If you went into a chat room, the chat application itself would be loaded from disk; only the actual words you wrote and read would need to be sent to and from a central computer. If you decided to write an email, a full-featured editor the likes of which a CompuServe subscriber could only dream of could be loaded in from disk, with only the finished product uploaded when you clicked the send button.
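A minimal sketch may help to visualize the division of labor they had in mind. The Python below is purely illustrative, assuming an imaginary chat feature rather than anything from the actual PlayNet software: the whole interface is "loaded from disk" as local code, and only the words the subscriber actually types have to squeeze through the slow modem link.

```python
# A purely illustrative sketch of the PlayNet-style division of labor:
# presentation lives on the subscriber's disk, and only the user's
# actual words cross the slow modem link. The "modem" here is just a
# stand-in function; no real protocol or service is implied.

def load_chat_ui_from_disk():
    """Stand-in for the chat application shipped on the subscriber's disk.
    In the real services this was native machine code with full-screen
    graphics; here it is reduced to a banner."""
    print("=== Chat Room (menus and graphics drawn entirely from local code) ===")

def send_over_modem(text: str) -> int:
    """Pretend to push the message upstream and report how many bytes
    actually had to travel; the interface itself never does."""
    payload = text.encode("ascii", errors="replace")
    print(f"[modem] sending {len(payload)} bytes")
    return len(payload)

if __name__ == "__main__":
    load_chat_ui_from_disk()           # zero bytes of UI over the wire
    message = "Hello from Troy, New York!"
    sent = send_over_modem(message)
    # At 300 baud -- roughly 30 characters per second -- even this short
    # line takes about a second to send, which is exactly why shipping
    # the interface on disk mattered so much.
    print(f"Approximate transfer time at 300 baud: {sent / 30:.1f} seconds")
```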

The PlayNet main menu. Note that system updates could be downloaded and installed on the user’s disks, thus avoiding the most obvious problem of this approach to an online service: that of having to send out new disks to every customer every time the system was updated. The games were also modular, with new ones made available for download to disk at the user’s discretion as they were developed. All told, it was an impressive feat of software engineering that would prove very robust; the software shown here would remain in active use as PlayNet or QuantumLink for a decade, and some of its underpinnings would last even longer than that.

Goldberg and Panzl were particularly taken with the possibilities the approach augured for online multiplayer games, a genre still in its infancy. CompuServe had put up a conquer-the-universe multiplayer strategy game called MegaWars, but it was all text, demanding that players navigate through a labyrinth of arcane typed commands. Otherwise there were perennials like Adventure to go along with even moldier oldies like Hangman, but these were single-player games that just happened to be played online. And they all were, once again, limited to monochrome text; it was difficult indeed to justify paying all those connect charges for them when you could type in better versions from BASIC programming books. But what if you could play poker or checkers online against people from anywhere in the country instead of against the boring old computer, and could do so with graphics? Then online gaming would be getting somewhere. The prospect was so exciting that Goldberg and Panzl called their proposed new online service PlayNet. It seemed the perfect name for the funner, more colorful take on the online experience they hoped to create.

When they shared their idea with others, they found a number who agreed with them about its potential. With backing from Rensselaer Polytechnic Institute, the New York State Science and Technology Foundation, and Key Venture Corporation, they moved into a technology "incubator" run by the first of these in May of 1983. For PlayNet's client computer — one admitted disadvantage of their approach was that it would require them to write a separate version of their software for every personal computer they targeted — they chose the recently released, fast-selling Commodore 64, which sported some of the best graphics in the industry. The back end would run on easily scalable 68000-based servers made by a relatively new company called Stratus. (The progression from CompuServe to PlayNet thus highlights the transition from big mainframes and minicomputers to the microcomputer-based client/server model in networking, just as it does the transition from a textual to a graphical focus.) Facing a daunting programming task on both the client and server sides, Goldberg and Panzl took further advantage of their relationship with Rensselaer Polytechnic Institute to bring in a team of student coders, who worked for a stipend and university credit, applying to the project many of the cutting-edge theoretical constructs they were learning about in their classes.

PlayNet began trials around Troy and Albany in April of 1984, with the service rolling out nationwide in October. Commodore 64 owners had the reputation of being far more price-sensitive than owners of other computers, and Goldberg and Panzl took this conventional wisdom to heart. PlayNet was dramatically cheaper than any of the other services: $35 for the signup package which included the necessary software, followed by $6 per month and $2 per hour actually spent online; this last was a third of what CompuServe would cost you. PlayNet hoped to, as the old saying goes, make it up in volume. Included on the disks were no fewer than thirteen games, whose names are mostly self-explanatory: Backgammon, Boxes, Capture the Flag, Checkers, Chess, Chinese Checkers, Contract Bridge, Four in a Row, Go, Hangman, Quad 64, Reversi, and Sea Strike. While they were all fairly unremarkable in terms of interface and graphics, not to mention lack of originality, it was of course the well-nigh unprecedented ability to play them with people hundreds or thousands of miles away that was their real appeal. You could even chat with your opponent as you played.

In addition to the games, most of the other areas people had come to expect from online services were present, if sometimes a little bare. There were other small problems beyond the paucity of content — some subscribers complained that chunks loaded so slowly from the Commodore 64's notoriously slow disk drive that they might just as well have come in via modem, and technical glitches were far from unknown — but PlayNet was certainly the most user-friendly online service anyone had ever seen, an utterly unique offering in an industry that tended always to define itself in relation to the lodestar that was CompuServe.

Things seemed to go fairly well at the outset, with PlayNet collecting their first 5000 subscribers within a couple of months of launch. But, sadly given how visionary the service really was, they would never manage to get much beyond that. Separated both geographically and culturally from the big wellsprings of technology venture capital around Silicon Valley, forced to deal with a decline in the home-computer market shortly after their launch that made other sources of funding shy away, they were perpetually cash-poor, a situation that was only exacerbated by the rock-bottom pricing — something that, what with prices always being a lot harder to raise on customers than they are to lower, they were now stuck with. An ugly cycle began to perpetuate itself. Sufficient new subscribers would sign up to badly tax the existing servers, but PlayNet wouldn't have enough money to upgrade their infrastructure to match their growth right away. Soon, enough customers would get frustrated by the sluggish response and occasional outright crashes to cancel their subscriptions, bringing the system back into equilibrium. Meanwhile PlayNet existed constantly at the mercy of the big telecommunications networks whose pipes and access numbers they leased, the prospect of sudden rate hikes a Sword of Damocles hanging always over their heads. Indeed, the story of PlayNet could serve as an object lesson in why all of the really big, successful online services seemed to have the backing of the titans of corporate America, like H&R Block, Reader's Digest, General Electric, or Sears. This just wasn't a space with much room for the little guy. PlayNet may have been the most innovative service to arrive since CompuServe and The Source had spawned the consumer-focused online-services industry in the first place, but innovation alone wasn't enough to be successful there.

Still, Goldberg and Panzl could at least take solace in the fact that their company had a reason to exist. While PlayNet was struggling to establish an online presence, Control Video was… continuing to exist, with little clear reason why beyond Jim Kimsey and Steve Case's sheer stubbornness. Kimsey loved to tell an old soldier's joke about a boy who is seen by the roadside, frantically digging into a giant pile of horse manure. When passersby ask him why, he says, "There must be a pony in here somewhere!" There must indeed, thought Kimsey, be a pony for Control Video as well buried somewhere in all this shit they were digging through. He looked for someone he could sell out to, but Control Video's only real asset was the agreements they had signed with telecommunications companies giving them access to a nationwide network they had barely ever used. That was nice, but it wasn't, judged potential purchasers, worth taking on a mountain of debt to acquire.

The way forward — the pony in all the shit — materialized more by chance than anything. Working through his list of potential purchasers, Kimsey made it to Commodore, the home-computer company, in the spring of 1985. Maybe, he thought, they might like to buy him out in order to use Control Video’s network to set up their own online service for their customers. He had a meeting with Clive Smith, an import from Commodore’s United Kingdom branch who was among the bare handful of truly savvy executives the home office ever got to enjoy. (Smith’s marketing instincts had been instrumental in the hugely successful launch of the Commodore 64.) Commodore wasn’t interested in running their own online service, Smith told Kimsey; having released not one but two flop computers in 1984 in the form of the Commodore 16 and Plus/4, they couldn’t afford such distractions. But if Control Video wanted to start an independent online service just for Commodore 64 owners, Commodore would be willing to anoint it as their officially recommended service, including it in the box with every new Commodore 64 and 128 sold in lieu of the CompuServe Snapaks that were found there now. He even knew where Kimsey could get some software that would make his service stand out from all of the others, by taking full advantage of the Commodore 64’s color graphics: a little outfit called PlayNet, up in Troy, New York.

It seemed that PlayNet, realizing that they needed to find a strong corporate backer if they hoped to survive, had already come to Commodore looking for a deal very similar to the one that Clive Smith was now offering Jim Kimsey. But, while he had been blown away by the software they showed him, Smith had been less impressed by the business acumen of the two former research scientists sitting in his office. He’d sent them packing without a deal, but bookmarked the PlayNet software in his mind. While Kimsey’s company was if anything in even worse shape than PlayNet on the surface, Smith thought he saw a much shrewder businessman before him, and knew from the grapevine that Kimsey was still tight with the venture capitalists who had convinced him to take the job with Control Video in the first place. He had, in short, all the business savvy and connections that Goldberg and Panzl lacked. Smith thus brokered a meeting between Control Video and PlayNet to let them see what they could work out.

What followed was a veritable looting of PlayNet’s one great asset. Kimsey acquired all of their software for a reported $50,000, plus ongoing royalty payments that were by all accounts very small indeed. If it wasn’t quite Bill Gates’s legendary fleecing of Seattle Computer Products for the operating system that became MS-DOS, it wasn’t that far behind either. PlayNet’s software would remain for the next nine years the heart of the Commodore 64 online service Kimsey was now about to start.

The best thing Goldberg and Panzl could have done for their company would have been to abandon altogether the idea of hosting their own online service, embracing the role of Control Video's software arm. But they remained wedded to the little community they had fostered, determined to soldier on with the PlayNet service as an independent entity even after having given away the store to a fearsome competitor that enjoyed the official blessing of Commodore which had been so insultingly withheld from them. Needless to say, it didn't go very well; PlayNet finally gave up the ghost in 1987, almost two years after the rival service had launched using PlayNet's own technology. As part of the liquidation, they transferred all title to said technology in perpetuity to Jim Kimsey and Steve Case's company, to do with as they would. Thus was the looting completed.

Well before that happened, the looter was no longer known as Control Video. Wanting a fresh start after all the fiasco and failure of the last couple of years, wanting to put the Bill von Meister era behind him once and for all, Kimsey on May 25, 1985, put Control Video in a shoe box, as he put it, and pulled out Quantum Computer Services. A new company in the eyes of the law, Quantum was in every other way a continuation of the old, with all the same people, all the same assets and liabilities, even the same philosophical orientation. For all that the deal with Commodore and the acquisition of the PlayNet software were down to seeming happenstance, the online service that would come to be known as QuantumLink evinced von Meister's — and Steve Case's — determination to create a more colorful, easier, friendlier online experience that would be as welcoming to homemakers and humanities professors as it would to hardcore hackers. And in running on its own custom software, it allowed Quantum the complete control of the user's experience which von Meister and Case had always craved.

The QuantumLink main menu. Anyone who had used PlayNet would feel right at home…

Continuing to tax the patience of their financiers — patience that would probably have been less forthcoming had Daniel Case III's brother not been on the payroll — Quantum worked through the summer and early fall of 1985 to adapt the PlayNet software to their own needs and to set up the infrastructure of Stratus servers they would need to launch. QuantumLink officially went live on the evening of November 1, 1985. It was a tense group of administrators and techies who sat around the little Vienna, Virginia, data center, watching as the first paying customers logged in, watching what they did once they arrived. (Backgammon, for what it's worth, was an early favorite.) By the time the number of users had climbed into the dozens, beers were being popped and spontaneous cheers were in the air. Simultaneous users would peak at about 100 that night — not exactly a number to leave CompuServe shaking in their boots. But so be it; it just felt so good to have an actual product — an actual, concrete purpose — after their long time in the wilderness.

In keeping with the price-sensitive nature of the Commodore market, Quantum strove to make their service cheaper than the alternatives, but were careful not to price-cut themselves right out of business as had PlayNet. Subscribers paid a flat fee of $10 per month for unlimited usage of so-called “Basic” services, which in all honesty didn’t include much of anything beyond the online encyclopedia and things that made Quantum money in other ways, like the online shopping mall. “Plus” services, including the games and the chat system that together were always the centerpiece of QuantumLink social life, cost $3.60 per hour, with one hour of free Plus usage per month included with every subscription. The service didn’t set the world on fire in the beginning, but the combination of Commodore’s official support, the user-friendliness of the graphical interface, and the aggressive pricing paid off reasonably well in the long term. Within two months, QuantumLink had its first 10,000 subscribers, a number it had taken CompuServe two years to achieve. Less than a year after that, it had hit 50,000 subscribers. By then, Quantum Computer Services had finally become self-sustaining, even able to make a start at paying down the debt they had accumulated during the Control Video years.
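To put that rate card in concrete terms, here is a quick back-of-the-envelope calculation of what a hypothetical subscriber's month might have cost. The rates are the published ones quoted above, but the usage levels are invented for the example.

```python
# Back-of-the-envelope QuantumLink bill for a hypothetical subscriber,
# using the published rates: $10/month flat for Basic services, $3.60
# per hour for Plus services, with the first Plus hour free each month.

BASIC_MONTHLY = 10.00
PLUS_HOURLY = 3.60
FREE_PLUS_HOURS = 1.0

def monthly_bill(plus_hours: float) -> float:
    """Total monthly charge for a given number of Plus hours."""
    billable = max(plus_hours - FREE_PLUS_HOURS, 0.0)
    return BASIC_MONTHLY + billable * PLUS_HOURLY

if __name__ == "__main__":
    for hours in (1, 10, 40):
        print(f"{hours:>3} Plus hours -> ${monthly_bill(hours):.2f}")
    # 1 hour   -> $10.00  (the free hour covers it)
    # 10 hours -> $42.40
    # 40 hours -> $150.40 (a heavy but hardly unheard-of month)
```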

One of QuantumLink’s unique editorial services was an easy-to-navigate buyer’s guide to Commodore software.

Quantum had the advantage of being able to look back on six years of their rivals’ experience for clues as to what worked and what didn’t. For the intensely detail-oriented Steve Case, this was a treasure trove of incalculable value. Recognizing, as had Goldberg and Panzl before him, that other services were still far too hard to use for true mainstream acceptance, he insisted that nothing be allowed on QuantumLink that his mother couldn’t handle.

But Case’s vision for QuantumLink wasn’t only about being, as he put it, “a little easier and cheaper and more useful” than the competition. He grasped that, while people might sign up for an online service for the practical information and conveniences it could offer them, it was the social interchange — the sense of community — that kept them logging on. To a greater degree than that of any of its rivals, QuantumLink’s user community was actively curated by its owner. Every night of the week seemed to offer a chat with a special guest, or a game tournament, or something. If it was more artificial — perhaps in a way more cynical — than CompuServe’s more laissez-faire, organic approach to community-building, it was every bit as effective. “Most services are information- and retrieval-oriented. It doesn’t matter if you get on on Tuesday or Thursday because the information is the same,” said Case; as we’ve seen from earlier articles in this series, this statement wasn’t really accurate at all, but it served his rhetorical purpose. “What we’ve tried to do is create a more event-oriented social system, so you really do want to check in every night just to see what’s happening — because you don’t want to miss anything.” Getting the subscriber to log on every night was of course the whole point of the endeavor. “We recognized that chat and community were so important to keep people on,” remembers Bill Pytlovany, a Quantum programmer. “I joked about it. You get somebody online, we’ve got them by the balls. Plain and simple, they’ll be back tomorrow.”

Indeed, QuantumLink subscribers became if anything even more ferociously loyal — and ferociously addicted — than users of rival services. “For some people, it was their whole social life,” remembers a Quantum copywriter named Julia Wilkinson. “That was their reality.” All of the social phenomena I’ve already described on CompuServe — the friendships and the romances and, inevitably, the dirty talk — happened all over again on QuantumLink. (“The most popular [features of the service] were far and away the sexual chat rooms,” remembers one Quantum manager. “The reality of what was happening was, if you just let these folks plug into each other, middle-aged people start talking dirty to each other.”) Even at the cheaper prices, plenty of subscribers were soon racking up monthly bills in the hundreds of dollars — music to the ears of Steve Case and Jim Kimsey, especially given that the absolute number of QuantumLink subscribers would never quite meet the original expectations of either Quantum or Commodore. While the raw numbers of Commodore 64s had seemingly boded well — it had been by far the most popular home computer in North America when the service had launched — a glance beyond the numbers might have shown that the platform wasn’t quite as ideal as it seemed. Known most of all for its cheap price and its great games, the Commodore 64 attracted a much younger demographic than most other computer models. Such youngsters often lacked the means to pay even QuantumLink’s relatively cheap rates — and, when they did have money, often preferred to spend it on boxed games to play face to face with their friends rather than online games and chat.

Nevertheless, and while I know of no hard numbers that can be applied to QuantumLink at its peak, it had become a reasonably popular service by 1988, with a subscriber base that must have climbed comfortably over the 100,000 threshold. If not a serious threat to the likes of CompuServe, neither was it anything to sneeze at in the context of the times. Considering that QuantumLink was only ever available to owners of Commodore 64s and 128s — platforms that went into rapid decline in North America after 1987 — it did quite well in the big picture in what was always going to be a bit of an uphill struggle.

Even had the service been notable for nothing else, something known as Habitat would have been enough to secure QuantumLink a place in computing history. Developed in partnership with Lucasfilm Games, it was the first graphical massively multiplayer virtual world, one of the most important forerunners to everything from World of Warcraft to Second Life.  It was online in its original form for only a few months in early 1988, in a closed beta of a few hundred users that’s since passed into gaming legend. Quantum ultimately judged Habitat to be technologically and economically unfeasible to maintain on the scale that would have been required in order to offer access to all of their subscribers. It did, however, reemerge a year later in bowdlerized fashion as Club Caribe, more of an elaborate online-chat environment than the functioning virtual world Lucasfilm had envisioned.

But to reduce QuantumLink to the medium for Habitat, as is too often done in histories like this one, is unjust. The fact is that the service is notable for much more than this single pioneering game that tends so to dominate its historical memory. Its graphical interface would prove very influential on the competition, to a degree that is perhaps belied by its relatively modest subscriber roll. In 1988, a new service called Prodigy, backed by IBM and Sears, entered the market with an interface not all that far removed from QuantumLink’s, albeit running on MS-DOS machines rather than the Commodore 64; thanks mostly to its choice of platform, it would far outstrip its inspiration, surpassing even GEnie to become the number-two service behind CompuServe for a time in the early 1990s. Meanwhile virtually all of the traditional text-only services introduced some form of optional graphical front end. CompuServe, as usual, came up with the most thoroughgoing technical solution, offering up a well-documented “Host Micro Interface” protocol which third-party programmers could use to build their own front ends, thus creating a thriving, competitive marketplace with alternatives to suit most any user. Kimsey and Case could at least feel proud that their little upstart service had managed to influence such a giant of online life, even as they wished that QuantumLink’s bottom line was more reflective of its influence.

QuantumLink’s technical approach was proving to be, for all its advantages, something of a double-edged sword. For all that it had let Quantum create an easier, friendlier online service, for all that the Commodore and PlayNet deals had saved them from bankruptcy, it also left said service’s fate tied to that of the platform on which it ran. It meant, in other words, that QuantumLink came with an implacable expiration date.

This hard reality had never been lost on Steve Case. As early as 1986, he had started looking to create alternative services on other platforms, especially ones that might be longer-lived than Commodore’s aging 8-bit line. His dream platform was the Apple Macintosh, with its demographic of well-heeled users who loathed the command-line interfaces of most online services as the very embodiment of The Bad Old Way of pre-Mac computing. Showing the single-minded determination that could make him alternately loved and loathed, he actually moved to Cupertino, California, home of Apple, for a few months at the height of his lobbying efforts. But Apple wasn’t quite sure Quantum was really up to the task of making a next-generation online service for the Macintosh, finally offering him instead only a sort of trial run on the Apple II, their own aging 8-bit platform.

Quantum Computer Services's second online service, a fairly straightforward port of the Commodore QuantumLink software stack to the Apple II, went online in May of 1988. It didn't take off like they had hoped. Part of the problem was doubtless down to the fact that Apple II owners were well-entrenched by 1988 on services like CompuServe and GEnie, and weren't inclined to switch to a rival service. But there was also some uncharacteristically mixed public messaging on the part of an Apple that had always seemed lukewarm about the whole project; people inside both companies joked that they had given the deal to Quantum to make an online service for a platform they didn't much care about anymore just to get Steve Case to quit bugging them. Already having a long-established online support network known as AppleLink for dealers and professional clients, Apple insisted on calling this new, completely unrelated service AppleLink Personal Edition, creating huge confusion. And they rejected most of the initiatives that had made QuantumLink successful among Commodore owners, such as the inclusion of subscription kits in their computers' boxes, thus compounding the feeling at Quantum that their supposed partners weren't really all that committed to the service. Chafing under Apple's rigid rules for branding and marketing, the old soldier Kimsey growled that they were harder to deal with than the Pentagon bureaucracy.

Apple dropped Quantum in the summer of 1989, barely a year after signing the deal with them, and thereby provoked a crisis inside the latter company. The investors weren't at all happy with the way that Quantum seemed to be doing little more than treading water; with so much debt still to service, they were barely breaking even as a business. Meanwhile the Commodore 64 market to which they were still bound was now in undeniable free fall, and they had just seen their grand chance to ride Apple into greener pastures blow up in their faces. The investors blamed Steve Case for the situation; he was the one who had promised them that the world would be theirs if they could just get in the door at Cupertino. Jim Kimsey was forced to rise up in his protege's defense. "You don't take a 25-pound turkey out of the oven and throw it away before it's done," he said, pointing to the bright future that Case was insisting could yet be theirs if they would just stay the course. Kimsey could also deliver the good news from his legal department that terminating their marketing agreement early was going to cost Apple $2.5 million, to be paid directly to Quantum Computer Services. For the time being, it was enough to save Case's job. But the question remained: what was Quantum to do in a post-Commodore world?

In his methodical way, Case had already been plugging away at several potential answers to that question beyond the Apple relationship. One of them, called PC-Link, was in fact just going live as this internal debate was taking place. Produced in partnership with Radio Shack, it was yet another port of the Commodore QuantumLink software stack, this time to Radio Shack's Tandy line of MS-DOS clones. PC-Link would do okay, but Radio Shack stores were no longer the retail Ground Zero of the home-computing market that they had been when CompuServe had gotten into bed with them with such success almost a decade before.

Quantum was also in discussions with no less of a computing giant than IBM, to launch an online service called Promenade in 1990 for a new line of IBM home computers called the PS/1, a sort of successor to the earlier, ill-fated PCjr. On the one hand, this was a huge deal for so tiny a company as Quantum Computer Services. But on the other, taking the legendary flop that had been the PCjr to heart, many in the industry were already expressing skepticism about a model line that had yet to even launch. Even Jim Kimsey was downplaying the deal: “It’s not a make-or-break deal for us. We’re not expecting more than $1 million in revenue from it [the first] year. Down the road, we don’t know how much it will be. If the PS/1 doesn’t work, we’re not in trouble.” A good thing, too: the PS/1 project would prove another expensive fiasco for an IBM who could never seem to figure out how to extend their success in business computing into the consumer marketplace.

So, neither of these potential answers was the answer Quantum sought. In fact, they were just exacerbating a problem that dogged the entire online-services industry: the way that no service could talk to any other service. By the end of the 1980s Quantum had launched or were about to launch four separate online services, none of which could talk to one another, marooning their subscribers on one island or another on the arbitrary basis of the model of computer they happened to have chosen to buy. It was hard enough to nurture one online community to health; to manage four was all but impossible. The deal with Commodore to found QuantumLink had almost certainly saved Quantum from drowning, but the similar bespoke deals with Apple, Radio Shack, and IBM, as impressive as they sounded on their face, threatened to become the millstone around their neck which dragged them under again.

Circa October of 1989, Case therefore decided it was time for Quantum to go it alone, to build a brand of their own instead of building brands for someone else. The perfect place to start was with the moribund AppleLink Personal Edition, which, having just lost its official blessing from Apple, would have to either find a new name or shut down. Case wasn't willing to do the latter, so it would have to be the former. While it would be hard to find a worse name than the one the service already had, he wanted something truly great for what he was coming to envision as the next phase of his company's existence. He held a company-wide contest soliciting names, but in the end the one he chose was the one he came up with himself. AppleLink Personal Edition would become America Online. He loved the sense of sweep, and loved how very Middle American it sounded, like, say, Good Morning, America on the television or America's Top 40 on the radio. It emphasized his dream of building an online community not for the socioeconomic elite but for the heart of the American mainstream. A member of said elite though he himself was, he knew where the real money was in American media. And besides, he thought the natural abbreviation of AOL rolled off the tongue in downright tripping fashion.

In the beginning, the new era which the name change portended was hard to picture; the new AOL was at this point nothing more than a re-branding of the old AppleLink Personal Edition. Only some months after the change, well into 1990, did Case begin to tip his hand. He had had his programmers working on his coveted Macintosh version of the AppleLink software since well before Apple had walked away, in the hope, since proven forlorn, that the latter would decide to expand their agreement with Quantum. Now, Quantum released the Macintosh version anyway — a version that connected to the very same AOL that was being used by Apple II owners. A process that would become known inside Quantum as “The Great Commingling” had begun.

Case had wanted the Mac version of AOL to blend what Jeff Wilkins over at CompuServe would have called “high-tech” and “high-touch.” He wanted, in other words, a product that would impress, but that would do so in a friendly, non-intimidating way. He came up with the idea of using a few voice samples in the software — a potentially very impressive feature indeed, given that the idea of a computer talking was still quite an exotic one among the non-techie demographic he intended to target. A customer-service rep at Quantum named Karen Edwards had a husband, Elwood Edwards, who worked as a professional broadcaster and voice actor. Case took him into a studio and had him record four phrases: “Welcome!,” “File’s done!,” “Goodbye!,” and, most famously, “You’ve got mail!” The last in particular would become one of the most iconic catchphrases of the 1990s, furnishing the title of a big Hollywood romantic comedy and even showing up in a Prince song. Even for those of us who were never on AOL, the sample today remains redolent of its era, when all of the United States seemed to be rushing to embrace its online future all at once. At AOL’s peak, the chirpy voice of Elwood Edwards was easily the most recognizable — and the most widely heard — voice in the country.

You’ve got mail!

But we're getting ahead of the story: recorded in 1990, the Edwards samples wouldn't become iconic for several more years. In the meantime, the Great Commingling continued apace, with PC-Link and Promenade being shut down as separate services and merged into AOL in March of 1991. Only QuantumLink was left out in the cold; running as it was on the most limited hardware, with displays restricted to 40 columns of text, Quantum's programmers judged that it just wasn't possible to integrate what had once been their flagship service with the others. Instead QuantumLink would straggle on alone, albeit increasingly neglected, as a separate service for another four and a half years. The few tens of thousands of loyalists who stuck it out to the bitter end often held onto their old Commodore hardware, now far enough out of date to be all but useless for any other purpose, just to retain access to QuantumLink. The plug was finally pulled on October 31, 1994, one day shy of the service's ninth birthday. Even discounting the role it had played as the technical and philosophical inspiration for America Online, the software that Howard Goldberg and David Panzl and their team of student programmers had created had had one heck of a run. Indeed, QuantumLink is regarded to this day with immense nostalgia by those who used it, to such an extent that they still dream the occasional quixotic dream of reviving it.

The first version of America Online for MS-DOS. Steve Case convinced Isaac Asimov, Bill von Meister’s original celebrity spokesman for The Source all those years ago, to lend his name to a science-fiction area. It seemed that things had come full circle…

For Steve Case, though, QuantumLink was already the past in 1991; AOL was the future. The latter was now available to anyone with an MS-DOS computer — already the overwhelmingly dominant platform in the country, and one whose position would grow to virtual monopoly status as the decade progressed. This was the path to the mainstream. To better reflect the hoped-for future, the name of Quantum Computer Services joined that of Control Video in Jim Kimsey’s shoe box of odds and ends in October of 1991. Henceforward, the company as well as the service would be known as America Online.

Much of the staff’s time continued to be devoted to curating community. Now, though, even more of the online events focused on subject areas that had little to do with computers, or for that matter with the other things that stereotypical computer owners tended to be interested in. Gardening, auto repair, and television were as prominently featured as programming languages. The approach seemed to be paying off, giving AOL, helped along by its easy-to-use software and a meticulously coached customer-support staff, a growing reputation as the online service for the rest of us. It had just under 150,000 subscribers by October of 1991. This was still small by the standards of CompuServe, GEnie, or Prodigy, but AOL was coming on strong. The number of subscribers would double within the next few months, and again over the next few months after that, and so on and so on.

Then CompuServe offered to buy AOL for $50 million. At two and a half times the latter’s current annual revenue, it was a fairly generous offer. Just a few years before, Kimsey would have leaped at a sum a fraction of this size to wash his hands of his problem child of a company. Even now, he was inclined to take the deal, but Steve Case was emphatically opposed, insisting that they were all on the verge of something extraordinary. The first real rift between the pair of unlikely friends threatened to open. But when his attempts to convince CompuServe to pay a little more failed to bear fruit, Kimsey finally agreed to reject the offer. He would later say that, had CompuServe been willing to pay $60 million, he would have corralled his investors and sold out, upset Case or no. Had he done so, the history of online life in the 1990s would have played out in considerably different fashion.

With the CompuServe deal rejected, the die was cast; AOL would make it alone or not at all. At the end of 1991, Kimsey formally passed the baton to Case, bestowing on him the title of CEO of this company in which he had always been far more emotionally invested than his older friend. But then, just a few months later, Kimsey grabbed the title back at the behest of the board of directors. They were on the verge of an initial public offering, and the board had decided that the grizzled and gregarious Kimsey would make a better face of the company on Wall Street than Case, still an awkward public speaker prone to lapse into gaucherie or to just clam up entirely at the worst possible moments. It was only temporary, Kimsey assured his friend, who was bravely trying but failing to hide how badly this latest slap in the face from AOL’s investors stung him.

America Online went public on March 19, 1992, with an initial offering of 2 million shares. Suddenly nearly everyone at the company, now 116 employees strong, was wealthy. Jim Kimsey made $3.2 million that day, Steve Case $2 million. A real buzz was building around AOL, which was indeed increasingly being seen, just as Case had always intended, as the American mainstream’s online service. The Wall Street Journal‘s influential technology reporter Walt Mossberg called AOL “the sophisticated wave of the future,” and no less a tech mogul than Paul Allen of Microsoft fame began buying up shares at a voracious pace. Ten years on from its founding, and already on its third name, AOL was finally getting hot. Which was good, because it would never be cool, would always be spurned by the tech intelligentsia who wrote for Wired and talked about the Singularity. No matter; Steve Case would take being profitable over being cool any day, would happily play Michael Bolton to the other services’ Nirvana.

For all the change and turmoil that Control Video/Quantum Computer/America Online had gone through over the past decade, Bill von Meister’s original vision for the company remained intact to a surprising degree. He had recognized that an online service must offer the things that mainstream America cared about in order to foster mainstream appeal. He had recognized that an online service must be made as easy to use as humanly possible. And he had seen the commercial and technical advantages — not least in fostering that aforementioned ease of use — that could flow from taking complete control of the subscriber’s experience via custom, proprietary software. He had even seen that the mainstream online life of the future would be based around graphics at least as much as text. But, as usual for him, he had come to all these realizations a little too early. Now, the technology was catching up to the vision, and AOL stood poised to reap rewards which even Steve Case could hardly imagine.

(Sources: the books On the Way to the Web: The Secret History of the Internet and its Founders by Michael A. Banks, Stealing Time: Steve Case, Jerry Levin, and the Collapse of AOL Time Warner by Alec Klein, Fools Rush In: Steve Case, Jerry Levin, and the Unmaking of AOL Time Warner by Nina Munk, and There Must be a Pony in Here Somewhere: The AOL Time Warner Debacle by Kara Swisher; Softline of May 1982; New York Times of December 3 1984; Ahoy! of February 1985; Commodore Power/Play of December 1984/January 1985; Info issues 6 and 9; Run of August 1985 and November 1985; Midnite Software Gazette of January/February 1985 and November/December 1985; Washington Post of May 30 1985 and June 29 1990; Compute! of November 1985; Compute!’s Gazette of March 1986 and January 1989; Commodore Magazine of October 1989; Commodore World of August/September 1995; The Monitor of March 1996; the episode of the Computer Chronicles television series entitled “Online Databases, Part 1”; old Usenet posts by C.D. Kaiser and Randell Jesup.)

 