

A Digital Pornutopia, Part 2: The Internet is for Porn


Fair warning: although there’s no nudity in the pictures below, the text of this article does contain frank descriptions of the human anatomy and sexual acts.

When you want to know where the zeitgeist is heading, just look to what the punters are lining up to see on Broadway. To wit: the unexpected breakout hit of the 2003 to 2004 season was Avenue Q, a low-budget send-up of Sesame Street where the puppets cursed, drank, and had sex with one another. They were rude, crude, and weirdly relatable — even lovable, what with their habit of breaking into song at the drop of a hat. The most enduring of their songs was a timeless show tune called “The Internet is for Porn.” (Eat your hearts out, Rodgers and Hammerstein!) It became, inevitably, an Internet meme of its own, reflecting an unnerving feeling that the ground was shifting beneath society’s feet, that the most important practical affordance of the World Wide Web, that noble experiment in the unfettered exchange of information, might indeed be to put porn at the fingertips of every human being with a computer on his desk.

And yet the world hadn’t seen anything yet in 2003; the statistics surrounding Internet porn would become truly gob-smacking after streaming video and smartphones became everyday commodities. By 2016, Pornhub, the biggest smut aggregator on the Internet, would be attracting four visits per year for every man, woman, and child on the planet. There was enough material on that site alone to keep a porn hound glued to his screen for five times as long as Homo sapiens have existed, with more fresh porn being uploaded to the site every few months than the entirety of the twentieth century had managed to produce. Needless to say, the pace of neither porn consumption nor production has cooled off a jot in the years since.

On one level, the sheer size of porn’s digital footprint is kind of hilarious. How many images do we really need of an activity which has only a limited number of possible permutations and combinations in the end, despite the fevered efforts of the imaginations behind it to discover… well, not quite virgin territory, but you know what I mean. I’ve long since come to realize that I am, for better or for worse, a member of the last generation of Western humanity to have grown up thinking of images of naked bodies and sexual activity as a scarce commodity. Cue the anecdotes about the lengths boys like I was used to have to go to in order to get a glimpse of an actual naked or even partially unclothed woman: sneaking into Dad’s Playboy stash, circumventing the child lockout on the family television’s cable box, perusing Big Sister’s Victoria’s Secret catalog, even resorting when worst came to worst to the sturdy maidens in equally sturdy brassieres that used to be found in the lingerie section of the Sears catalog. Such tales read as quaintly as the courtship rituals of Jane Austen novels to the generation after ours, who just have to pull their phones out of their pockets to see sights that would have shocked the young me to my pubescent core.

Yet lurking behind the farcical absurdity of porn’s present-day popularity are serious questions for which none of us have any concrete answers. What does it do to young people to grow up with virtual if not physical sex at their literal fingertips? For that matter, what does it do to those of us who aren’t so young anymore? Some point hopefully to statistics which seem to show that accessible porn leads to dramatically decreased rates of real-world sexual violence. But even those of us who try our darnedest to be open-minded and sex-positive can’t always suppress the uneasy feeling that turning an act as intimate as making love into a commodity as ubiquitous as toilet paper may come at a cost to our humanity.

Of course, we won’t be able to resolve these dilemmas here. What we will do today, however, is learn how the song “The Internet is for Porn” may have been more truthy than even its writers were aware of. For if you look at the technologies and practices that make the modern Web go — not the idealistic building blocks provided by J.C.R. Licklider and Tim Berners-Lee and their many storied colleagues, but the ones behind the commercial Web of today — you find that a crazy number of them came straight out of porn: online payment systems, ad trackers, affiliate marketing, streaming video, video conferencing… all of them and more were made for porn.



It was fully eight years before Avenue Q opened that the mainstream media’s attention was captured for the first time by porn on the Internet. On June 14, 1995, Jim Exon, a 74-year-old Democratic senator from Nebraska, stood up inside the United States Capitol Building to lead his colleagues in a prayer.

Almighty God, lord of all life, we praise you for the advancements in computerized communications that we enjoy in our time. Sadly, however, there are those who are littering this information superhighway with obscene, indecent, and destructive pornography. Now, guide the senators when they consider ways of controlling the pollution of computer communications and how to preserve one of our greatest resources: the minds of our children and the future moral strength of our nation. Amen.

As Exon spoke, he waved a blue binder in front of his face, filled, so he said, with filthy pictures his staff had found online. “I cannot and would not show these pictures to the Senate,” he thundered. “I would not want our cameras to pick them up. If nothing is done now, the pornographers may become the primary beneficiary of the information revolution.”

Most of his audience had no idea what he was on about. An exception was Dan Coats, a Republican senator from Indiana, who had in fact been the one to light a fire under Exon in the first place. “With old Internet technology, retrieving and viewing any graphic image on a PC at home could be laborious,” Coats explained in slightly more grounded diction after Exon had finished his righteous call to arms. “New Internet technology, like browsers for the Web, makes all of this easier.” He cited a study that was about to be published in The Georgetown Law Journal, claiming that 450,000 pornographic images could already be found online, and that these had been downloaded 6.4 million times. What on earth would happen once the Internet became truly ubiquitous, something everyone expected to happen over the next few years? “Think of the children!”

Luckily for that most precious of all social resources, Coats and Exon had legislation ready to go. Their bill would make it a federal crime to “use any interactive computer service to display in a manner available to a person under eighteen years of age, any comment, request, suggestion, proposal, image, or other communication that, in context, depicts or describes, in terms patently offensive as measured by contemporary community standards, sexual or excretory activities or organs.” Nonplussed though they largely were by it all, “few senators wanted to cast a nationally televised vote that might later be characterized as pro-pornography,” as Time magazine put it. The bill passed by a vote of 84 to 16.

On February 8, 1996, President Bill Clinton signed the final version of the Communications Decency Act into law. “Today,” he said, “with the stroke of a pen, our laws will catch up to the future.” Or perhaps not quite today: just as countless legal experts had warned would happen, the new law was immediately tied up in litigation, challenged as an unacceptable infringement on the right to free speech.

Looking back, the most remarkable thing about this first furor over online porn is just how early it came, before the World Wide Web was more than a vague aspirational notion, if that, in the minds of the large majority of Americans. The Georgetown Law study which had prompted it — a seriously flawed if not outright fraudulent study, written originally as an undergraduate research paper — didn’t focus on the Web at all but rather on Usenet, a worldwide textual discussion forum which had been hacked long ago to foster the exchange of binary files as well, among them dirty pictures.

Nevertheless, by the time the Communications Decency Act became one of the shakier laws of the land, the locus of digital porn was migrating quickly from CD-ROM and Usenet to the Web. Like so much else there, porn on the Web began more in a spirit of amateur experimentation than of hard-eyed business acumen. During the early days of Mosaic and Netscape and Web 1.0, hundreds of thousands of ordinary folks who could sense a communications revolution in the offing rushed to learn HTML and set up their own little pages on the Web, dedicated to whatever topic they found most interesting. For some of them, that topic was sex. There are far too many stories here for me to tell you today, but we can make space for one of them at least. It involves Jen Peterson and Dave Miller, a young couple just out of high school who were trying to make ends meet in their cramped Baltimore apartment.

In the spring of 1995, Jen got approved for a Sears credit card, whereupon Dave convinced her that they should buy a computer with their windfall, to find out what this Internet thing that they were seeing in the news was really all about. So, they spent $4000 on a state-of-the-art 75 MHz Packard Bell system, complete with monitor and modem, and lugged it back home on the bus.

Dave’s first destinations on the Internet were Simpsons websites. But one day he asked himself, “I wonder if there’s any nudity on this thing?” Whereupon he discovered that there was an active trade in dirty pictures going on on Usenet. Now, it just so happened that Dave was something of a photographer himself, and his favorite subject was the unclothed Jen: “We would look at [the pictures] afterwards, and that would lead to even better sex. I wanted to share them. I wanted people to see Jen’s body.” Jen was game, so the couple started uploading their own pictures to Usenet.

But Usenet was just so baroque and unfriendly. Dave’s particular sexual kink — not an unusual one, on the spectrum of same — made him want to show Jen to as many people as possible, which meant finding a more accessible medium for the purpose. In or around October of 1995, the couple opened “JENnDAVE’s HOME PAGE!” (“I called us Jen and Dave rather than Dave and Jen,” says Dave, “because I knew nobody was there to see me. I wasn’t being sweet; I was being practical.”) At that time, Internet service providers gave you a home page with which to plant your flag on the Web as part of your subscription, so the pair’s initial financial investment in the site was literally zero. This same figure was, not coincidentally, what they charged their visitors.

Jen and Dave’s home page, from a simpler time when 800 × 600 was a high resolution.

But within five months, the site was attracting 25,000 visitors every day, and their service provider was growing restless under the traffic load; in fact, the amount of bandwidth Jen and Dave’s dirty pictures were absorbing was single-handedly responsible for the provider deleting the promise of “unlimited traffic” from its contracts. Meanwhile the Communications Decency Act had become law — a law which their site was all too plainly violating, placing them at risk of significant fines or even prison terms if the courts should finally decide that it was constitutional.

Yet just how was one to ensure that one’s porn wasn’t “available to a person under eighteen years of age,” as the law demanded, on the wide-open Web? Some folks, Jen and Dave among them, put up entrance pages which asked the visitor to click a button certifying that, “Yep, I’m eighteen, alright!” It was doubtful, however, whether a judge would construe such an honor system to mean that their sites were no longer “available” to youngsters. Out of this dilemma, almost as much as the pure profit motive, arose the need and desire to accept credit cards in return for dirty pictures over the Internet. For in the United States at least, a credit card, which by law could not be issued to anyone under the age of eighteen, was about as trustworthy a signifier of maturity as you were likely to find.

We’ll return to Jen and Dave momentarily. Right now, though, we must shift our focus to a wheeler and dealer named Richard Gordon, a fellow aptly described by journalist Samantha Cole as “a smooth serial entrepreneur with a grifter’s lean.” Certainly he had a sketchy background by almost anyone’s standards. In the late 1970s, he’d worked in New York in insurance and financial planning, and had gotten into the habit of dipping into his customers’ accounts to fund his own lavish lifestyle. He attempted to flee the country after being tipped off that he was under investigation by the feds, only to be dragged out of the closet of a friend’s apartment with a Concorde ticket to Paris in his hand. He served just two years of his seven-year prison sentence, emerging on parole in 1982 to continue the hustle.

Two years later, President Ronald Reagan’s administration effected a long-in-the-offing final breakup of AT&T, the corporate giant that had for well over half a century held an all but ironclad monopoly over telegraphy, telephony, and computer telecommunications in the United States. Overnight, one ginormous company became 23 smaller ones. There followed just the explosion of innovation that the Reagan administration had predicted, as those companies and other, new players all jockeyed for competitive advantage. Among other things, this led to a dramatic expansion in the leasing of “1-900” numbers: commercial telephone numbers which billed the people who called them by the minute. When it had first rolled them out in the 1970s, AT&T had imagined that they would be used for time and temperature updates, sports scores, movie listings, perhaps for dial-a-joke services, polls, and horoscopes. And indeed, they were used for all of these things and more. But if you’ve read this far, you can probably guess where this is going: they were used most of all for phone sex. The go-go 1980s in the telecom sector turned personalized auditory masturbation aids into a thriving cottage industry.

Still, there was a problem that many of those who wanted to get in on the action found well-nigh intractable: the problem of actually collecting money from their customers. The obvious way of doing so was through a credit card, which was quick and convenient and thus highly conducive to impulse buying, and which could serve as an age guarantee to boot. But the credit-card companies were huge corporations with convoluted application processes for merchants, difficult entities for the average shoestring phone-sex provider teetering on the ragged edge of business-world legitimacy to deal with.

Richard Gordon saw opportunity in this state of affairs. He set up an intermediate credit-card-billing service for the phone-sex folks. They just had to sign up with him, and he would take care of all the rest — for a small cut of the transactions he processed, naturally. His system came with an additional advantage which phone-sex customers greatly appreciated: instead of, say, “1-900-HOT-SEXX” appearing on their credit-card statements, there appeared only the innocuously generic name of “Electronic Card Systems,” which was much easier to explain away to a suspicious spouse. Gordon made a lot of money off phone sex, learning along the way an important lesson: that there was far more money to be made in facilitating the exchange of porn than in making the stuff yourself and selling it directly. The venture even came with a welcome veneer of plausible deniability; there was nothing preventing Gordon from signing up other sorts of 1-900 numbers to his billing service as well. These could be the customers he talked about at polite cocktail parties, even as he made the bulk of his money from telephonic masturbation.

The Web came to his attention in the mid-1990s. “What is the Net?” he asked himself. “It’s just a phone call with pictures.” So, Gordon extended his thriving phone-sex billing service to the purveyors of Internet pornography. In so doing, he would “play a significant role in the birth of electronic commerce,” as The New York Times would diffidently put it twelve years later, “laying the groundwork for electronic transactions conducted with credit cards, opening the doors to the first generation of e-commerce startups.”

In truth, it’s difficult to overstate the importance of this step to the evolution of the Web. Somewhat contrary to The New York Times’s statement, Richard Gordon did not invent e-commerce from whole cloth; it had been going on on closed commercial services like CompuServe since the mid-1980s. Because those services were run from the top down and, indeed, were billing their customers’ credit cards every month already, they were equipped out of the box to handle online transactions in a way that the open, anarchic Web was not. Netscape provided the necessary core technology for this purpose when they added support for encrypted traffic to their Navigator browser. But it was Gordon and a handful of others like him who actually made commerce on the Web a practical reality, blazing trails that would soon be followed by more respectable institutions; without Gordon’s Electronic Card Systems to show the way, there would never have been a PayPal.

In the meantime, Gordon happily accepted babes in the woods like Jen Peterson and Dave Miller, who wouldn’t have had a clue how to set up a merchant’s account with any one of the credit-card companies, much less all of them for maximum customer convenience. “He was the house for Internet porn in those days,” says one Steven Peisner, who worked for him. “At that time, if you had anything to do with Internet porn, you called Electronic Card Systems.”

Thanks to Gordon, Jen and Dave were able to sign up with a real hosting company and start charging $5 for six months of full access to their site in early 1996. By the turn of the millennium, the price was $15 for one month.

Dave in booby clover.

The site lost some of its innocence in another sense as well over the course of time. What had begun with cheesecake nudie pics turned into real hardcore porn, as others came to join in on the fun. “I would be with other girls and Jen would be with other dudes and most of the time, that was in the context of picture taking,” says Dave. “People said, ‘Oh Jen and Dave, you’ve gone away from your roots, you’re no longer the sweet innocent couple that you were. Now, you’ll screw anybody.'”

Their unlikely careers in porn largely ended after they had twins in 2005, by which time their quaint little site was already an anachronism in a sea of cutthroat porn aggregators. Today Dave works in medical administration and runs pub quizzes on the weekends, while Jen maintains their sexy archive and runs a home. They have no regrets about their former lives. “We were just looking to have a good time and spread the ideals of body-positivity and sex-positivity,” says Jen. “Even if we didn’t yet have the words for those things.”

Jen and Dave today, in wholesome middle age.

A Pennsylvanian college student named Jennifer Ringley was a trailblazer of a different stripe, billing herself as a “lifecaster.” In 1996, she saw an early webcam, capable of capturing still images only, for sale in the Dickinson College bookstore and just had to have it. “You could become the human version of FishCam,” joked one of her friends, referring to a camera that had been set up in an aquarium in Mountain View, California, to deliver a live feed, refreshed every three to four seconds, to anyone who visited its website. Having been raised in a nudist family, Ringley was no shrinking violet; she found the idea extremely appealing.

The result was JenniCam, which showed what was going on in her dorm room around the clock — albeit, this being the 1990s, in the form of a linear series of still photographs only, snapped at the rather bleary-eyed resolution of 320 × 240. “Whatever you’re seeing isn’t staged or faked,” she said, “and while I don’t claim to be the most interesting person in the world, there’s something compelling about real life that staging it wouldn’t bring to the medium.” She was at pains to point out that JenniCam was a social experiment, one of many that were making the news on the early Web at the time. Whatever else it was, it wasn’t porn; if you happened to catch her changing clothes or making out with a boy she’d brought back to the room or even pleasuring herself alone, that was just another aspect of the life being documented.

One cannot help but feel that she protested a bit too much. After all, her original domain name was boudoir.org, and she wasn’t above performing the occasional striptease for the camera. Even if she hadn’t played for the camera so obviously at times, we would have reason to doubt whether the scenes it captured were the same as they would have been had the camera not been present. For, as documentary-film theory teaches us, the “fly on the wall” is a myth; the camera always changes the scene it captures by the very fact of its presence.


Jennifer Ringley, not performing at all for the camera.

Like Jen and Dave, Ringley first put her pictures online for free, but later she began charging for access. At its peak, her site was getting millions of hits every day. “The peep-show nature of the medium was enough to get viewers turned on,” writes Patchen Barss in The Erotic Engine, a study of pornography. “Just having a window into a real person’s life was plenty — people would pay for the occasional chance to observe Ringley’s non-porn-star-like sex life, or to just catch her walking naked to the shower.”

Ringley inspired countless imitators, some likewise insisting that they were engaged in a social experiment or art project, others leaning more frankly into titillation. Some of the shine went off the experiment for her personally in 2000, when she was captured enjoying a tryst with the fiancé of another “cam girl.” (Ah, what a strange world it was already becoming…) The same mainstream media that had been burning with high-minded questions to ask her a few years earlier now labeled her a “redheaded little minx” and “amoral man-trapper.” Still, she kept her site going until December 31, 2003, making a decent living from a 95-percent male clientele who wanted the thrill of being a Peeping Tom without the dangers.

Sites like Jen and Dave’s and to some extent Jennifer Ringley’s existed on the hazy border between amateur exhibitionism and porn as a business. Much of their charm, if that is a word that applies in your opinion, stems from their ramshackle quality. But other keen minds realized early on that online porn was going to be huge, and set out far more methodically to capitalize on it.

One of the most interesting and distinctive of them was the stripper and model who called herself Danni Ashe, who marketed herself as a living, breathing fantasy of nerdy males, a “geek with big breasts,” as she put it. Well before becoming an online star, she was a headline attraction at strip clubs all over the country, thanks to skin-magazine “profiles” and soft-core videos. “I ventured onto the Internet and quickly got into the Usenet newsgroups, where I was hearing that my pictures were being posted, and started talking to people,” she said later. “I spent several really intense months in the newsgroups, and it was out of those conversations that the idea for Danni’s Hard Drive was born.” According to her official lore, she learned HTML during a vacation to the Bahamas and coded up her site all by herself from scratch.

In contrast to the sites we’ve already met, Danni’s Hard Drive was designed to make money from the start. It went live with a subscription price of $20 per month, which provided access to hundreds of nude and semi-nude photographs of the proprietor and, soon enough, many other women as well. Ashe dealt only in pictorials not much more explicit than those seen in Playboy, both in the case of her own pictures and those of others. As Samantha Cole writes, “Danni never shot any content with men and never posted images of herself with anything — even a sex toy — inside her.” Despite its defiantly soft-core nature in a field where extremism usually reigns supreme, some accounts claim that Danni’s Hard Drive was the busiest single site on the Internet for a couple of years, consuming more bandwidth each day than the entirety of Central America. It was as innovative as it was profitable, setting into place more building blocks of the post-millennial Web. Most notably, it pioneered online video streaming via a proprietary technology called DanniVision more than half a decade before YouTube came to be.

Danni’s Hard Drive had 25,000 subscribers by the time DanniVision was added to its portfolio of temptations in 1999. It weathered the dot.com crash of the year 2000 with nary a scratch. In 2001, the business employed 45 people behind the cameras — almost all of them women — and turned an $8 million annual profit. Savvy businesswoman that she was, Ashe sold out at the perfect moment, walking away with millions in her pocket in 2004.

Equally savvy was one Beth Mansfield, who realized, like Richard Gordon before her, that the easiest and safest way to earn money from porn was as a facilitator rather than a maker. She was a 36-year-old unemployed accountant and single mother living with her children in a trailer in Alabama when she heard the buzz about the Web and resolved to find a way to make a living online. She decided that porn was the easiest content area in which to do so, even though she had no personal interest in it whatsoever. It was just smart business; she saw a hole in an existing market and figured out how to fill it.

Said hole was the lack of a good way to find the porn you found most exciting. With automated site-indexing Web crawlers still in their infancy, most people’s on-ramp to the Web at the time was the Yahoo! home page, an exhaustive list of hand-curated links, a sort of Internet Yellow Pages. But Yahoo! wasn’t about to risk offending the investors who had just rewarded it with the splashiest IPO this side of Netscape’s by getting into porn curation.

So, Mansfield decided to make her own Yahoo! for porn. She called it Persian Kitty, after the family cat. Anyone could submit a porn site to her to be listed, after which she would do her due diligence by ensuring it was what they said it was and add it to one or more of her many fussily specific categories. She compared her relationship with the sex organs she spent hours per day staring at to that of a gynecologist: “I’m probably the strangest adult cruiser there is. I go and look at the structure [of the site, not the sex organs!], look at what they offer, count the images, and I’m out.” While a simple listing on Persian Kitty was free, she made money — quite a lot of money — by getting the owners of porn sites to pay for banner advertisements and priority placement within the categories, long before such online advertising went mainstream. Like Danni Ashe, she eventually sold her business for a small fortune and walked away.

We’ve met an unexpected number of female entrepreneurs thus far. And indeed, if you’re looking for positives to take away from the earliest days of online porn, one of them must surely be the number of women who took advantage of the low barriers to entry in online media to make or facilitate porn on their own terms — a welcome contrast to the notoriously exploitive old-school porn industry, a morally reprehensible place regardless of your views on sexual mores. “People have an idea of who runs a sexually oriented site on the Web,” said Danni Ashe during a chance encounter with the film critic Roger Ebert at Cannes. “They think of a dirty old man with a cigar. A Mafia type.” However you judged her, she certainly didn’t fit that stereotype.

Sadly, though, the stereotype became more and more the rule as time went on and the money waiting to be made from sex on the Web continued to escalate almost exponentially. By the turn of the millennium, the online porn industry was mostly controlled by men, just like the offline one. In the end, that is to say, Richard Gordon rather than Danni Ashe or Beth Mansfield became the archetypal porn entrepreneur online as well.

Another of the new bosses who were the same as the old was Ron “Fantasy Man” Levi, an imposing amalgamation of muscles, tattoos, and hair tonic who looked and lived like a character out of a mob movie. Having made his first fortune as the owner of a network of phone-sex providers, Levi, like Gordon before him, turned to the Web as the logical next frontier. His programmers developed the technology behind what we now refer to as “online affiliate marketing,” yet another of the mainstays of modern e-commerce, in a package he called the “XXX Counter.”

In a sense, it was just a super-powered version of what Beth Mansfield was already doing on Persian Kitty. By taking advantage of cookies — small chunks of persistent information that a Web browser can be asked to store on a user’s hard drive, that can then be used to track that user’s progress from site to site — the XXX Counter was able to see exactly what links had turned a sex-curious surfer into a paying customer of one or more porn sites.
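For the curious, here is a minimal sketch of how that kind of referral tracking works in principle, written in present-day JavaScript rather than with the cruder tools of 1996. The site name, the “ref” parameter, and the cookie name are all invented for the example; only the document.cookie mechanics are standard browser behavior, and nothing here claims to reproduce the actual workings of the XXX Counter.

// A minimal sketch of cookie-based affiliate tracking, assuming a hypothetical
// referral link of the form http://example-site.com/?ref=persiankitty.
// Only the document.cookie mechanics are standard; the rest is invented.

// On the landing page: remember which affiliate link brought the visitor here.
function rememberReferrer() {
  var ref = new URLSearchParams(window.location.search).get("ref");
  if (ref) {
    // Persist the referrer for 30 days, so a sale made on a later visit can still be credited.
    var expires = new Date(Date.now() + 30 * 24 * 60 * 60 * 1000).toUTCString();
    document.cookie = "referrer=" + encodeURIComponent(ref) +
                      "; expires=" + expires + "; path=/";
  }
}

// On the signup or checkout page: read the cookie back and credit the referring site.
function creditReferrer() {
  var match = document.cookie.match(/(?:^|;\s*)referrer=([^;]*)/);
  if (match) {
    // A real system would report this to the server, which keeps the commission ledger.
    console.log("Credit this sale to: " + decodeURIComponent(match[1]));
  }
}

Then as now, the browser-side cookie is only the breadcrumb trail; the actual bookkeeping and commission payouts happen on the server at the moment of sale.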

This technology is almost as important to the commercial Web of today as the ability to accept credit cards. It’s employed by countless online stores from Amazon on down, being among other things the reason that a profession with the title of “online influencer” exists today. (Oh, what a strange world we live in…) Patchen Barss:

The esoteric computer technology which originally merely allowed EuroNubiles.com to know when PantyhosePlanet.com had sent some customers their way is today a key part of how Amazon, iTunes, eBay, and thousands of other online retailers work. Each offers a commission system for referring sites that send paying traffic their way. They rarely acknowledge that this key part of their business model was developed and refined by the adult industry.

All of the stories and players we’ve met thus far, along with many, many more, added up to a thriving industry, long before respectable e-commerce was much more than a twinkle in Jeff Bezos’s eye. Wired magazine reported in its December 1997 issue that an extraordinary 28,000 adult sites now existed on the World Wide Web, and that one or more of them were visited by 30 percent of all Internet-connected computers every single month. Estimates of the annual revenues they were bringing in ranged from $100 million to $1.2 billion. The joke in Silicon Valley and Wall Street alike was that porn was the only thing yet making real money on the Web (as opposed to the funny money of IPOs and venture capitalists, a product of aspirations rather than operations). Porn was the only form of online content that people had as yet definitively shown themselves to be willing — eager, in fact — to pay for. From the same Wired article:

Within the information-and-entertainment category — sales of online content, as opposed to consumer goods and financial services — commercial sex sites are almost the only ones in the black. Other content providers, operating in an environment that puts any offering that doesn’t promise an orgasm at a competitive disadvantage, are still trying to come up with a viable business model. ESPN SportsZone may be one of the most popular content sites on the Web, but most of what it offers is free. Online game developers can’t figure out whether to impose a flat fee or charge by the hour or rely on ad sales. USA Today had to cut the monthly subscription fee on its website from $15 to $13 and finally to nothing. Among major print publications, only The Wall Street Journal has managed to impose a blanket subscription fee.

“Sex and money,” observes Mike Wheeler, president of MSNBC Desktop Video, a Web-based video news service for the corporate market. “Those are the two areas you can charge for.”

The San Francisco Chronicle put it more succinctly at about the same time: “There’s a two-word mantra for people who want to make money on the Internet — sex sells.”

Ironically, the Communications Decency Act — the law that had first prompted so many online porn operators to lock their content behind paywalls — was already history by the time the publications wrote these words. The Supreme Court had struck down its indecency provisions, the heart of the law, once and for all in June of 1997, calling them a gross violation of the right to free speech. Nevertheless, the paywalled porn sites remained. Too many people were making too much money for it to be otherwise. In attempting to stamp out online porn, Senators Coats and Exon had helped to create a monster beyond their wildest nightmares.

In addition to blazing the trails that the Jeff Bezoses of the world would soon follow in terms of online payments and affiliate marketing, porn sites were embracing new technologies like JavaScript before just about anyone else. As Wired magazine wrote, “No matter how you feel about their content, sex sites are among the most visually dazzling around.” “We’re on the cutting edge of everything,” said one porn-site designer. “If there’s a new technology out there and we want to add it to the site, it’s not hard to convince management.”

I could keep on going, through online technology after online technology. For example, take the video-conferencing systems that have become such a mainstay of business life around the world since the pandemic. Porn was their first killer app, after some enterprising entrepreneurs figured out that the only thing better than phone sex was phone sex with video. The porn mavens even anticipated — ominously, some might say — the business models of modern-day social-media sites. “The consumers are the content!” crowed one of them in the midst of setting up a site for amateur porn stars to let it all hang out. The vast majority of that deluge of new porn that now gets uploaded every day comes from amateurs who expect little or nothing in payment beyond the thrill of knowing that others are getting off on watching them. The people hosting this material understand what Richard Gordon, Beth Mansfield and Ron Levi knew before them. Allow me to repeat it one more time, just for good measure: the real money is in porn facilitation, not in porn production.

In light of all this, it’s small wonder that nobody talked much about porn on CD-ROM after 1996, that AdultDex became all about online sex, showcasing products like a “$100,000 turnkey cyberporn system” — a porn site in a box, perfect for those looking to break into the Web’s hottest sector in a hurry. “The whole Internet is being driven by the adult industry,” said one AdultDex exhibitor who asked not to be named. “If all this were made illegal tomorrow, the Internet would go back to being a bunch of scientists discussing geek stuff in email.” That might have been overstating the case just a bit, but there was no denying that virtual sex was at the vanguard of the most revolutionary development in mass communications since the printing press. The World Wide Web had fulfilled the promise of the seedy ROM.



Did you enjoy this article? If so, please think about pitching in to help me make many more like it. You can pledge any amount you like.


Sources: The books How Sex Changed the Internet and the Internet Changed Sex by Samantha Cole, Obscene Profits: The Entrepreneurs of Pornography in the Cyber Age by Frederick S. Lane III, The Players Ball: A Genius, a Con Man, and the Secret History of the Internet’s Rise by David Kushner, The Erotic Engine by Patchen Barss, and The Pornography Wars: The Past, Present, and Future of America’s Obscene Obsession by Kelsy Burke. Wired of February 1997 and December 1997; Time of July 1995; San Francisco Chronicle of November 19, 1997; San Diego Tribune of May 8, 2017; New York Times of August 1, 2003 and May 1, 2004; Wall Street Journal of May 20, 1997. Online sources include “Sex Sells, Doesn’t It?” by Mark Gimein on Salon, Jen and Dave’s current (porn-free) home page, and “‘I Started Really Getting Into It’: Seven Pioneers of Amateur Porn Look Back” by Alexa Tsoulis-Reay at The Cut.

You can find the 1990s-vintage Jen and Dave, JenniCam, Danni’s Hard Drive, and Persian Kitty at archive.org. Needless to say, you should understand what you are getting into before you visit.

Finally, for a highly fictionalized and sensationalized but entertaining and truthy tale about the early days of online porn, see the 2009 movie Middle Men.

 
 


Doing Windows, Part 12: David and Goliath

Microsoft, intent on its mission to destroy Netscape, rolled out across the industry with all the subtlety and attendant goodwill of Germany invading Poland…

— Merrill R. Chapman

No one reacted more excitedly to the talk of Java as the dawn of a whole new way of computing than did the folks at Netscape. Marc Andreessen, whose head had swollen exactly as much as the average 24-year-old’s would upon being repeatedly called a great engineer, businessman, and social visionary all rolled into one, was soon proclaiming Netscape Navigator to be far more than just a Web browser: it was general-purpose computing’s next standard platform, possibly the last one it would ever need. Java, he said, generously sharing the credit for this development, was “as revolutionary as the Web itself.” As for Microsoft Windows, it was merely “a poorly debugged set of device drivers.” Many even inside Netscape wondered whether he was wise to poke the bear from Redmond so, but he was every inch a young man feeling his oats.

Just two weeks before the release of Windows 95, the United States Justice Department had ended a lengthy antitrust investigation of Microsoft’s business practices with a decision not to bring any charges. Bill Gates and his colleagues took this to mean it was open season on Netscape.

Thus, just a few weeks after the bravura Windows 95 launch, a war that would dominate the business and computing press for the next three years began. The opening salvo from Microsoft came in a weirdly innocuous package: something called the “Windows Plus Pack,” which consisted mostly of slightly frivolous odds and ends that hadn’t made it into the main Windows 95 distribution — desktop themes, screensavers, sound effects, etc. But it also included the very first release of Microsoft’s own Internet Explorer browser, the fruit of the deal with Spyglass. After you put the Plus! CD into the drive and let the package install itself, Internet Explorer proved as hard to get rid of as a virus. For unlike all other applications, there appeared no handy “uninstall” option for Internet Explorer. Once it had its hooks in your computer, it wasn’t letting go for anything. And its preeminent mission in life there seemed to be to run roughshod over Netscape Navigator. It inserted itself in place of its arch-enemy in your file associations and everywhere else, so that it kept turning up like a bad penny every time you clicked a link. If you insisted on bringing up Netscape Navigator in its stead, you were greeted with the pointed “suggestion” that Internet Explorer was the better, more stable option.

Microsoft’s biggest problem at this juncture was that that assertion didn’t hold water; Internet Explorer 1.0 was only a modest improvement over the old NCSA Mosaic browser on whose code it was based. Meanwhile Netscape was pushing aggressively forward with its vision of the browser as a platform, a home for active content of all descriptions. Netscape Navigator 2.0, whose first beta release appeared almost simultaneously with Internet Explorer 1.0, doubled down on that vision by including an email and Usenet client. More importantly, it supported not only Java but a second programming language for creating active content on the Web — a language that would prove much more important to the evolution of the Web in the long run.

Even at this early stage — still four months before Sun would deign to grant Java its own 1.0 release — some of the issues with using it on the Web were becoming clear: namely, the weight of the virtual machine that had to be loaded and started before a Java applet could run, and said applet’s inability to communicate easily with the webpage that had spawned it. Netscape therefore decided to create something that lay between the static simplicity of vanilla HTML and the dynamic complexity of Java. The language called JavaScript would share much of its big brother’s syntax, but it would be interpreted rather than compiled, and would live in the same environment as the HTML that made up a webpage rather than in a sandbox of its own. In fact, it would be able to manipulate that HTML directly and effortlessly, changing the page’s appearance on the fly in response to the user’s actions. The idea was that programmers would use JavaScript for very simple forms of active content — like, say, a popup photo gallery or a scrolling stock ticker — and use Java for full-fledged in-browser software applications — i.e., your word processors and the like.

In contrast to Java, a compiled language walled off inside its own virtual machine, JavaScript is embedded directly into the HTML that makes up a webpage, using the handy “<script>” tag.
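For readers who have never seen the pattern, the following is a minimal sketch of what such a page looks like, written in today’s idiom rather than 1995’s; the element name, the function, and the text are all invented for the example.

<html>
  <body>
    <p id="greeting">Welcome to my home page!</p>
    <button onclick="shout()">Click me</button>

    <script>
      // The script lives right alongside the markup it manipulates,
      // with no compiler or virtual machine anywhere in sight.
      function shout() {
        document.getElementById("greeting").innerHTML =
          "You clicked the button, and the page just rewrote itself.";
      }
    </script>
  </body>
</html>

The object model available to Navigator 2.0 was far more limited than the modern document.getElementById; scripts of the day poked at collections like document.forms instead. But the principle of a page that can respond to the user by rewriting itself was there from the start.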

There’s really no way to say this kindly: JavaScript was (and is) a pretty horrible programming language by any objective standard. Unlike Java, which was the product of years of thought, discussion, and experimentation, JavaScript was the very definition of “quick and dirty” in a computer-science context. Even its principal architect Brendan Eich doesn’t speak of it like an especially proud parent; he calls it “Java’s dumb little brother” and “a rush job.” Which it most certainly was: he designed and implemented JavaScript from scratch in a matter of bare weeks.

What he ended up with would revolutionize the Web not because it was good, but because it was good enough, filling a craving that turned out to be much more pressing and much more satisfiable in the here and now than the likes of in-browser word processing. The lightweight JavaScript could be used to bring the Web alive, to make it a responsive and interactive place, more quickly and organically than the heavyweight Java. Once JavaScript had reached a critical mass in that role, it just kept on rolling with all the relentlessness of a Microsoft operating system. Today an astonishing 98 percent of all webpages contain at least a little bit of JavaScript in addition to HTML, and a cottage industry has sprung up to modify and extend the language — and attempt to fix the many infelicities that haunt the sleep of computer-science professors all over the world. JavaScript has become, in other words, the modern world’s nearest equivalent to what BASIC was in the 1980s, a language whose ease of use, accessibility, and populist appeal make up for what it lacks in elegance. These days we even do online word processing in JavaScript. If you had told Brendan Eich that that would someday be the case back in 1995, he would have laughed as loud and long at you as anyone.

Although no one could know it at the time, JavaScript also represents the last major building block to the modern Web for which Marc Andreessen can take a substantial share of the credit, following on from the “image” tag for displaying inline graphics, the secure sockets layer (SSL) for online encryption (an essential for any form of e-commerce), and to a lesser extent the Java language. Microsoft, by contrast, was still very much playing catch-up.

Nevertheless, on December 7, 1995 — the symbolism of this anniversary of the United States’s entry into World War II was lost on no one — Bill Gates gave a major address to the Microsoft faithful and assembled press, in which he made it clear that Microsoft was in the browser war to win it. In addition to announcing that his company too would bite the bullet and license Java for Internet Explorer, he said that the latter browser would no longer be a Windows 95 exclusive, but would soon be made available for Windows 3 and even MacOS as well. And everywhere it appeared, it would continue to sport the very un-Microsoft price tag of free, proof that this old dog was learning some decidedly new tricks for achieving market penetration in this new era of online software distribution. “When we say the browser’s free, we’re saying something different from other people,” said Gates, in a barbed allusion to Netscape’s shareware distribution model. “We’re not saying, ‘You can use it for 90 days,’ or, ‘You can use it and then maybe next year we’ll charge you a bunch of money.'” Netscape, whose whole business revolved around its browser, couldn’t afford to give Navigator away, a fact of which Gates was only too well aware. (Some pundits couldn’t resist contrasting this stance with Gates’s famous 1976 “Open Letter To Hobbyists,” in which he had asked, “Who can afford to do professional work for nothing?” Obviously Microsoft now could…)

Netscape’s stock price dropped by $28.75 that day. For Microsoft’s research budget alone was five times the size of Netscape’s total annual revenues, while the bigger company now had more than 800 people — twice Netscape’s total headcount — working on Internet Explorer alone. Marc Andreessen could offer only vague Silicon Valley aphorisms when queried about these disparities: “In a fight between a bear and an alligator, what determines the victor is the terrain” — and Microsoft, he claimed, had now moved “onto our terrain.” The less abstractly philosophical Larry Ellison, head of the database giant Oracle and a man who had had more than his share of run-ins with Bill Gates in the past, joked darkly about the “four stages” of Microsoft stealing someone else’s innovation. Stage 1: to “ridicule” it. Stage 2: to admit that, “yeah, there are a few interesting ideas here.” Stage 3: to make its own version. Stage 4: to make the world forget that the non-Microsoft version had ever existed.

Yet for the time being the Netscape tail continued to wag the Microsoft dog. A more interactive and participatory vision of the Web, enabled by the magic of JavaScript, was spreading like wildfire by the middle of 1996. You still needed Netscape Navigator to experience this first taste of what would eventually be labelled Web 2.0, a World Wide Web that blurred the lines between readers and writers, between content consumers and content creators. For if you visited one of these cutting-edge sites with Internet Explorer, it simply wouldn’t work. Despite all of Microsoft’s efforts, Netscape in June of 1996 could still boast of a browser market share of 85 percent. Marc Andreessen’s Sun Tzu-lite philosophy appeared to have some merit to it after all; his company was by all indications still winning the browser war handily. Even in its 2.0 incarnation, which had been released at about the same time as Gates’s Pearl Harbor speech, Internet Explorer remained something of a joke among Windows users, the annoying mother-in-law you could never seem to get rid of once she showed up.

But then, grizzled veterans like Larry Ellison had seen this movie before; they knew that it was far too early to count Microsoft out. That August, both Netscape and Microsoft released 3.0 versions of their browsers. Netscape’s was a solid evolution of what had come before, but contained no game changers like JavaScript. Microsoft’s, however, was a dramatic leap forward. In addition to Java support, it introduced JScript, a lightweight scripting language that just so happened to have the same syntax as JavaScript. At a stroke, all of those sites which hadn’t worked with earlier versions of Internet Explorer now displayed perfectly well in either browser.

With his browser itself more or less on a par with Netscape’s, Bill Gates decided it was time to roll out his not-so-secret weapon. In October of 1996, Microsoft began shipping Windows 95’s “Service Pack 2,” the second substantial revision of the operating system since its launch. Along with a host of other improvements, it included Internet Explorer. From now on, the browser would ship with every single copy of Windows 95 and be installed automatically as part of the operating system, whether the user wanted it or not. New Windows users would have to make an active choice and then an active effort to go to Netscape’s site — using Internet Explorer, naturally! — and download the “alternative” browser. Microsoft was counting on the majority of these users not knowing anything about the browser war and/or just not wanting to be bothered.

Microsoft employed a variety of carrots and sticks to pressure other companies throughout the computing ecosystem to give or at the bare minimum to recommend Internet Explorer to their customers in lieu of Netscape Navigator. It wasn’t above making the favorable Windows licensing deals it signed with big consumer-computer manufacturers like Compaq dependent on precisely this. But the most surprising pact by far was the one Microsoft made with America Online (AOL).

Relations between the face of the everyday computing desktop and the face of the Internet in the eyes of millions of ordinary Americans had been anything but cordial in recent years. Bill Gates had reportedly told Steve Case, his opposite number at AOL, that he would “bury” him with his own Microsoft Network (MSN). Meanwhile Case had complained long and loud about Microsoft’s bullying tactics to the press, to the point of mooting a comparison between Gates and Adolf Hitler on at least one occasion. Now, though, Gates was willing to eat crow and embrace AOL, even at the expense of his own MSN, if he could stick it to Netscape in the process.

For its part, AOL had come as far as it could with its Booklink browser. The Web was evolving too rapidly for the little development team it had inherited with that acquisition to keep up. Case grudgingly accepted that he needed to offer his customers one of the Big Two browsers. All of his natural inclinations bent toward Netscape. And indeed, he signed a deal with Netscape to make Navigator the browser that shipped with AOL’s turnkey software suite — or so Netscape believed. It turned out that Netscape’s lawyers had overlooked one crucial detail: they had never stipulated exclusivity in the contract. This oversight wasn’t lost on the interested bystander Microsoft, which swooped in immediately to take advantage of it. AOL soon announced another deal, to provide its customers with Internet Explorer as well. Even worse for Netscape, this deal promised Microsoft not only availability but priority: Internet Explorer would be AOL’s recommended, default browser, Netscape Navigator merely an alternative for iconoclastic techies (of which there were, needless to say, very few in AOL’s subscriber base).

What did AOL get in return for getting into bed with Adolf Hitler and “jilting Netscape at the altar,” as the company’s own lead negotiator would later put it? An offer that was impossible for a man with Steve Case’s ambitions to refuse, as it happened. Microsoft would put an AOL icon on the desktop of every new Windows 95 installation, where the hundreds of thousands of Americans who were buying a computer every month in order to check out this Internet thing would see it sitting there front and center, and know, thanks to AOL’s nonstop advertising blitz, that the wonders of the Web were just one click on it away. It was a stunning concession on Microsoft’s part, not least because it came at the direct cost of MSN, the very online network Bill Gates had originally conceived as his method of “burying” AOL. Now, though, no price was too high to pay in his quest to destroy Netscape.

Which raises the question of why he was so obsessed, given that Microsoft was making literally no money from Internet Explorer. The answer is rooted in all that rhetoric that was flying around at the time about the browser as a computing platform — about the Web effectively turning into a giant computer in its own right, floating up there somewhere in the heavens, ready to give a little piece of itself to anyone with a minimalist machine running Netscape Navigator. Such a new world order would have no need for a Microsoft Windows — perish the thought! But if, on the other hand, Microsoft could wrest the title of leading browser developer out of the hands of Netscape, it could control the future evolution of this dangerously unruly beast known as the World Wide Web, and ensure that it didn’t encroach on its other businesses.

That the predictions which prompted Microsoft’s downright unhinged frenzy to destroy Netscape were themselves wildly overblown is ironic but not material. As tech journalist Merrill R. Chapman has put it, “The prediction that anyone was going to use Navigator or any other browser anytime soon to write documents, lay out publications, build budgets, store files, and design presentations was a fantasy. The people who made these breathless predictions apparently never tried to perform any of these tasks in a browser.” And yet in an odd sort of way this reality check didn’t matter. Perception can create its own reality, and Bill Gates’s perception of Netscape Navigator as an existential threat to the software empire he had spent the last two decades building was enough to make the browser war feel like a truly existential clash for both parties, even if the only one whose existence actually was threatened — urgently threatened! — was Netscape. Jim Clark, Marc Andreessen’s partner in founding Netscape, makes the eyebrow-raising claim that he “knew we were dead” in the long run well before the end of 1996, when the Department of Justice declined to respond to an urgent plea on Netscape’s part to take another look at Microsoft’s business practices.

Perhaps the most surprising aspect of the conflict is just how long Netscape’s long run proved to be. It was in most respects David versus Goliath: Netscape in 1996 had $300 million in annual revenues to Microsoft’s nearly $9 billion. But whatever the disparities of size, Netscape had built up a considerable reservoir of goodwill as the vehicle through which so many millions had experienced the Web for the first time. Microsoft found this soft power oddly tough to overcome, even with a browser of its own that was largely identical in functional terms. A remarkable number of people continued to make the active choice to use Netscape Navigator instead of the passive one to use Internet Explorer. By October of 1997, one year after Microsoft brought out the big gun and bundled Internet Explorer right into Windows 95, its browser’s market share had risen as high as 39 percent — but it was Netscape that still led the way at 51 percent.

Yet Netscape wasn’t using those advantages it did possess all that effectively. It was not a happy or harmonious company: there were escalating personality clashes between Jim Clark and Marc Andreessen, and also between Andreessen and his programmers, who thought their leader had become a glory hound, too busy playing the role of the young dot.com millionaire to pay attention to the vital details of software development. Perchance as a result, Netscape’s drive to improve its browser in paradigm-shifting ways seemed to slowly dissipate after the landmark Navigator 2.0 release.

Netscape, so recently the darling of the dot.com age, was now finding it hard to make a valid case for itself merely as a viable business. The company’s most successful quarter in financial terms was the third of 1996 — just before Internet Explorer became an official part of Windows 95 — when it brought in $100 million in revenue. Receipts fell precipitously after that point, all the way down to just $18.5 million in the last quarter of 1997. By so aggressively promoting Internet Explorer as entirely and perpetually free, Bill Gates had, whether intentionally or inadvertently, instilled in the general public an impression that all browsers were or ought to be free, due to some unstated reason inherent in their nature. (This impression has never been overturned, as has been testified over the years by the failure of otherwise worthy commercial browsers like Opera to capture much market share.) Thus even the vast majority of those who did choose Netscape’s browser no longer seemed to feel any ethical compulsion to pay for it. Netscape was left in a position all too familiar to Web firms of the past and present alike: that of having immense name recognition and soft power, but no equally impressive revenue stream to accompany them. It tried frantically to pivot into back-end server architecture and corporate intranet solutions, but its efforts there were, as its bottom line will attest, not especially successful. It launched a Web portal and search engine known as Netcenter, but struggled to gain traction against Yahoo!, the leader in that space. Both Jim Clark and Marc Andreessen sold off large quantities of their personal stock, never a good sign in Silicon Valley.

Netscape Navigator was renamed Netscape Communicator for its 4.0 release in June of 1997. As the name would imply, Communicator was far more than just a browser, or even just a browser with an integrated email client and Usenet reader, as Navigator had been since version 2.0. Now it also sported an integrated editor for making your own websites from scratch, a real-time chat system, a conference caller, an appointment calendar, and a client for “pushing” usually unwanted content to your screen. It was all much, much too much, weighted down with features most people would never touch, big and bloated and slow and disturbingly crash-prone; small wonder that even many Netscape loyalists chose to stay with Navigator 3 after the release of Communicator. Microsoft had not heretofore been known for making particularly svelte software, but Internet Explorer, which did nothing but browse the Web, was a lean ballerina by comparison with the lumbering Sumo wrestler that was Netscape Communicator. The original Netscape Navigator had sprung from the hacker culture of institutional computing, but the company had apparently now forgotten one of that culture’s key dictums in its desire to make its browser a platform unto itself: the best programs are those that do only one thing, but do that one thing very, very well, leaving all of the other things to other programs.

Netscape Communicator. I’m told that there’s an actual Web browser buried somewhere in this pile. Probably a kitchen sink too, if you look hard enough.

Luckily for Netscape, Internet Explorer 4.0, which arrived three months after Communicator, violated the same dictum in an even more inept way. It introduced what Microsoft called the “Active Desktop,” which let it bury its hooks deeper than ever into Windows itself. The Active Desktop was — or tried to be — Bill Gates’s nightmare of a Web that was impossible to separate from one’s local computer come to life, but with Microsoft’s own logo on it. Ironically, it blurred the distinction between the local computer and the Internet more thoroughly than anything the likes of Sun or Netscape had produced to date; local files and applications became virtually indistinguishable from those that lived on the Internet in the new version of the Windows desktop it installed in place of the old. The end result served mainly to illustrate how half-baked all of the prognostications about a new era of computing exclusively in the cloud really were. The Active Desktop was slow and clumsy and confusing, and absolutely everyone who was exposed to it seemed to hate it and rush to find a way to turn it off. Fortunately for Microsoft, it was possible to do so without removing the Internet Explorer 4 browser itself.

The dreaded Active Desktop. Surprisingly, it was partially defended on philosophical grounds by Tim Berners-Lee, not normally a fan of Microsoft. “It was ridiculous for a person to have two separate interfaces, one for local information (the desktop for their own computer) and one for remote information (a browser to reach other computers),” he writes. “Why did we need an entire desktop for our own computer, but only get little windows through which to view the rest of the planet? Why, for that matter, should we have folders on our desktop but not on the Web? The Web was supposed to be the universe of all accessible information, which included, especially, information that happened to be stored locally. I argued that the entire topic of where information was physically stored should be made invisible to the user.” For better or for worse, though, the public didn’t agree. And even he had to allow that “this did not have to imply that the operating system and browser should be the same program.”

The Active Desktop damaged Internet Explorer’s reputation, but arguably not as badly as Netscape’s had been damaged by the bloated Communicator. For once you turned off all that nonsense, Internet Explorer 4 proved to be pretty good at doing the rest of its job. But there was no similar method for trimming the fat from Netscape Communicator.

While Microsoft and Netscape, those two for-profit corporations, had been vying with one another for supremacy on the Web, another, quieter party had been looking on with great concern. Before the Web had become the hottest topic of the business pages, it had been an idea in the head of the mild-mannered British computer scientist Tim Berners-Lee. He had built the Web on the open Internet, using a new set of open standards; his inclination had never been to control his creation personally. It was to be a meeting place, a library, a forum, perhaps a marketplace if you liked — but always a public commons. When Berners-Lee formed the non-profit World Wide Web Consortium (W3C) in October of 1994 in the hope of guiding an orderly evolution of the Web that kept it independent of the moneyed interests rushing to join the party, it struck many as a quaint endeavor at best. Key technologies like Java and JavaScript appeared and exploded in popularity without giving the W3C a chance to say anything about them. (Tellingly, the word “JavaScript” never even appears in Berners-Lee’s 1999 book about his history with and vision for the Web, despite the scripting language’s almost incalculable importance to making it the dynamic and diverse place it had become by that point.)

From the days when he had been a mere University of Illinois student making a browser on the side, Marc Andreessen had blazed his own trail without giving much thought to formal standards. When the things he unilaterally introduced proved useful, others rushed to copy them, and they became de-facto standards. This was as true of JavaScript as it was of anything else. As we’ve seen, it began as a Netscape-exclusive feature, but was so obviously transformative to what the Web could do and be that Microsoft had no choice but to copy it, to incorporate its own implementation of it into Internet Explorer.

But JavaScript was just about the last completely new feature to be rolled out and widely adopted in this ad-hoc fashion. As the Web reached a critical mass, with Netscape Navigator and Internet Explorer both powering users’ experiences of it in substantial numbers, site designers had a compelling reason not to use any technology that only worked on the one or the other; they wanted to reach as many people as possible, after all. This brought an uneasy sort of equilibrium to the Web.

Nevertheless, the first instinct of both Netscape and Microsoft remained to control rather than to share the Web. Both companies’ histories amply demonstrated that open standards meant little to them; they preferred to be the standard. What would happen if and when one company won the browser war, as Microsoft seemed slowly to be doing by 1997, what with the trend lines all going in its favor and Netscape in veritable financial free fall? Once 90 percent or more of the people browsing the Web were doing so with Internet Explorer, Microsoft could give its instinct for dominance free rein. With an army of lawyers at its beck and call, it would be able to graft onto the Web proprietary, patented technologies that no upstart competitor would be able to reverse-engineer and copy, and pragmatic website designers would no longer have any reason not to use them if doing so made their sites better. And once many or most websites depended on these features that were available only in Internet Explorer, that would be that for the open Web. Despite its late start, Microsoft would have managed to embrace, extend, and in a very real sense destroy Tim Berners-Lee’s original vision of a World Wide Web. The public commons would have become a Microsoft-branded theme park.

These worries were being bandied about with ever-increasing urgency in January of 1998, when Netscape made what may just have been the most audacious move of the entire dot.com boom. Like most such moves, it was born of sheer desperation, but that shouldn’t blind us to its importance and even bravery. First of all, Netscape made its browser free as in beer, finally giving up on even asking people to pay for the thing. Admittedly, though, this in itself was little more than an acceptance of the reality on the ground, as it were. It was the other part of the move that really shocked the tech world: Netscape also made its browser free as in freedom — it opened up its source code to all and sundry. “This was radical in its day,” remembers Mitchell Baker, one of the prime drivers of the initiative at Netscape. “Open source is mainstream now; it was not then. Open source was deep, deep, deep in the technical community. It never surfaced in a product. [This] was a very radical move.”

Netscape spun off a not-for-profit organization, led by Baker and called Mozilla, after a cartoon dinosaur that had been the company’s office mascot almost from day one. Coming well before the Linux operating system began conquering large swaths of corporate America, this was to be open source’s first trial by fire in the real world. Mozilla was to concentrate on the core code required for rendering webpages — the engine room of a browser, if you will. Then others — not least among them the for-profit arm of Netscape — would build the superstructures of finished applications around that sturdy core.

Alas, Netscape the for-profit company was already beyond saving. If anything, this move only hastened the end; Netscape had chosen to give away the one product it had that some tiny number of people were still willing to pay for. Some pundits talked it up as a dying warrior’s last defiant attempt to pass the sword to others, to continue the fight against Microsoft and Internet Explorer: “From the depths of Hell, I spit at thee!” Or, as Tim Berners-Lee put it more soberly: “Microsoft was bigger than Netscape, but Netscape was hoping the Web community was bigger than Microsoft.” And there may very well be something to these points of view. But regardless of the motivations behind it, the decision to open up Netscape’s browser proved both a landmark in the history of open-source software and a potent weapon in the fight to keep the Web itself open and free. Mozilla has had its ups and downs over the years since, but it remains with us to this day, still providing an alternative to the corporate-dominated browsers almost a quarter-century on, having outlived the more conventional corporation that spawned it by a factor of six.

Mozilla’s story is an important one, but we’ll have to leave the details of it for another day. For now, we return to the other players in today’s drama.

While Microsoft and Netscape were battling one another, AOL was soaring into the stratosphere, the happy beneficiary of Microsoft’s decision to give it an icon on the Windows 95 desktop in the name of vanquishing Netscape. In 1997, in a move fraught with symbolic significance, AOL bought CompuServe, its last remaining competitor from the pre-Web era of closed, proprietary online services. By the time Netscape open-sourced its browser, AOL had 12 million subscribers and annual profits — profits, mind you, not revenues — of over $500 million, thanks not only to subscription fees but to the new frontier of online advertising, where revenues and profits were almost one and the same. At not quite 40 years old, Steve Case had become a billionaire.

“AOL is the Internet blue chip,” wrote the respected stock analyst Henry Blodget. And indeed, for all of its association with new and shiny technology, there was something comfortingly stolid — even old-fashioned — about the company. Unlike so many of his dot.com compatriots, Steve Case had found a way to combine name recognition and a desirable product with a way of getting his customers to actually pay for said product. He liked to compare AOL with a cable-television provider; this was a comparison that even the most hidebound investors could easily understand. Real, honest-to-God checks rolled into AOL’s headquarters every month from real, honest-to-God people who signed up for real, honest-to-God paid subscriptions. So what if the tech intelligentsia laughed and mocked, called AOL “the cockroach of cyberspace,” and took an “@AOL.com” suffix on someone’s email address as a sign that they were too stupid to be worth talking to? Case and his shareholders knew that money from the unwashed masses spent just as well as money from the tech elites.

Microsoft could finally declare victory in the browser war in the summer of 1998, when the two browsers’ trend lines crossed one another. At long last, Internet Explorer’s popularity equaled and then rapidly eclipsed that of Netscape Navigator/Communicator. It hadn’t been clean or pretty, but Microsoft had bludgeoned its way to the market share it craved.

A few months later, AOL acquired Netscape through a stock swap that involved no cash, but was worth a cool $9.8 billion on paper — an almost comical sum in relation to the amount of actual revenue the purchased company had brought in during its lifetime. Jim Clark and Marc Andreessen walked away very, very rich men. Just as Netscape’s big IPO had been the first of its breed, the herald of the dot.com boom, Netscape now became the first exemplar of the boom’s unique style of accounting, which allowed people to get rich without ever having run a profitable business.

Even at the time, it was hard to figure out just what it was about Netscape that AOL thought was worth so much money. The deal is probably best understood as a product of Steve Case’s fear of a Microsoft-dominated Web; despite that AOL icon on the Windows desktop, he still didn’t trust Bill Gates any farther than he could throw him. In the end, however, AOL got almost nothing for its billions. Netscape Communicator was renamed AOL Communicator and offered to the service’s subscribers, but even most of them, technically unsophisticated though they tended to be, could see that Internet Explorer was the cleaner and faster and just plain better choice at this juncture. (The open-source coders working with Mozilla belatedly realized the same; they would wind up spending years writing a brand-new browser engine from scratch after deciding that Netscape’s just wasn’t up to snuff.)

Most of Netscape’s remaining engineers walked soon after the deal was made. They tended to describe the company’s meteoric rise and fall in the terms of a Shakespearean tragedy. “At least the old timers among us came to Netscape to change the world,” lamented one. “Getting killed by the Evil Empire, being gobbled up by a big corporation — it’s incredibly sad.” If that’s painting with rather too broad a brush — one should always run away screaming when a Silicon Valley denizen starts talking about “changing the world” — it can’t be denied that Netscape at no time enjoyed a level playing field in its war against Microsoft.

But times do change, as Microsoft was about to learn to its cost. In May of 1998, the Department of Justice filed suit against Microsoft for illegally exploiting its Windows monopoly in order to crush Netscape. The suit came too late to save the latter, but it was all over the news even as the first copies of Windows 98, the hotly anticipated successor to Windows 95, were reaching store shelves. Bill Gates had gotten his wish; Internet Explorer and Windows were now indissolubly bound together. Soon he would have cause to wish that he had not striven for that outcome quite so vigorously.

(Sources: the books Overdrive: Bill Gates and the Race to Control Cyberspace by James Wallace, The Silicon Boys by David A. Kaplan, Architects of the Web by Robert H. Reid, Competing on Internet Time: Lessons from Netscape and Its Battle with Microsoft by Michael Cusumano and David B. Yoffie, dot.con: The Greatest Story Ever Sold by John Cassidy, Stealing Time: Steve Case, Jerry Levin, and the Collapse of AOL Time Warner by Alec Klein, Fools Rush In: Steve Case, Jerry Levin, and the Unmaking of AOL Time Warner by Nina Munk, There Must be a Pony in Here Somewhere: The AOL Time Warner Debacle by Kara Swisher, In Search of Stupidity: Over Twenty Years of High-Tech Marketing Disasters by Merrill R. Chapman, Coders at Work: Reflections on the Craft of Programming by Peter Seibel, and Weaving the Web by Tim Berners-Lee. Online sources include “1995: The Birth of JavaScript” at Web Development History, the New York Times timeline of AOL’s history, and Mitchell Baker’s talk about the history of Mozilla, which is available on Wikipedia.)

 

Doing Windows, Part 11: The Internet Tidal Wave

On August 6, 1991, when Microsoft was still in the earliest planning stages of creating the operating system that would become known as Windows 95, an obscure British researcher named Tim Berners-Lee, working out of the Conseil Européen pour la Recherche Nucléaire (CERN) in Switzerland, put the world’s first publicly accessible website online. For years to come, these two projects would continue to evolve separately, blissfully unconcerned by if not unaware of one another’s existence. And indeed, it is difficult to imagine two computing projects with more opposite personalities. Mirroring its co-founder and CEO Bill Gates, Microsoft was intensely pragmatic and maniacally competitive. Tim Berners-Lee, on the other hand, was a classic academic, a theorist and idealist rather than a businessman. The computers on which he and his ilk built the early Web ran esoteric operating systems like NeXTSTEP and Unix, or at their most plebeian MacOS, not Microsoft’s mass-market workhorse Windows. Microsoft gave you tools for getting everyday things done, while the World Wide Web spent the first couple of years of its existence as little more than an airy proof of concept, to be evangelized by wide-eyed adherents who often appeared to have read one too many William Gibson novels. Forbes magazine was soon to anoint Bill Gates the world’s richest person, his reward for capturing almost half of the international software market; the nascent Web was nowhere to be found in the likes of Forbes.

Those critics who claim that Microsoft was never a visionary company — that it instead thrived by letting others innovate, then swooping in and taking over the markets thus opened — love to point to its history with the World Wide Web as Exhibit Number One. Despite having a role which presumably demanded that he stay familiar with all leading-edge developments in computing, Bill Gates by his own admission had never even heard of the Web until April of 1993, twenty months after that first site went up. And he didn’t actually surf the Web for himself until another six months after that — perhaps not coincidentally, shortly after a Windows version of NCSA Mosaic, the user-friendly graphical browser that made the Web a welcoming place even for those whose souls didn’t burn with a passion for information theory, had finally been released.

Gates focused instead on a different model of online communication, one arguably more in keeping with his instincts than was the free and open Web. For almost a decade and a half by 1993, various companies had been offering proprietary dial-up services aimed at owners of home computers. These came complete with early incarnations of many of the staples of modern online life: email, chat lines, discussion forums, online shopping, online banking, online gaming, even online dating. They were different from the Web in that they were walled gardens that provided no access to anything that lay beyond the big mainframes that hosted them. Yet within their walls lived bustling communities whose citizens paid their landlords by the minute for the privilege of participation.

The 500-pound gorilla of this market had always been CompuServe, which had been in the business since the days when a state-of-the-art home computer had 16 K of memory and used cassette tapes for storage. Of late, however, an upstart service called America Online (AOL) had been making waves. Under Steve Case, its wunderkind CEO, AOL aimed its pitch straight at the heart of Middle America rather than the tech-savvy elite. Over the course of 1993 alone, it went from 300,000 to 500,000 subscribers. But that was only the beginning if one listened to Case. For a second Home Computer Revolution, destined to be infinitely more successful and long-lasting than the first, was now in full swing, powered along by the ease of use of Windows 3 and by the latest consumer-grade hardware, which made computing faster and more aesthetically attractive than it had ever been before. AOL’s quick and easy custom software fit in perfectly with these trends. Surely this model of the online future — of curated content offered up by a firm whose stated ambition was to be the latest big player in mass media as a whole; of a subscription model that functioned much like the cable television which the large majority of Americans were already paying for — was more likely to take hold than the anarchic jungle that was the World Wide Web. It was, at any rate, a model that Bill Gates could understand very well, and naturally gravitated toward. Never one to leave cash on the table, he started asking himself how Microsoft could get a piece of this action as well.

Steve Case celebrates outside the New York Stock Exchange on March 19, 1992, the day America Online went public.

Gates proceeded in his standard fashion: in May of 1993, he tried to buy AOL outright. But Steve Case, who nursed dreams of becoming a media mogul on the scale of Walt Disney or Jack Warner, turned him down flat. At this juncture, Russ Siegelman, a 33-year-old physicist-by-education whom Gates had made his point man for online strategy, suggested a second classically Microsoft solution to the dilemma: they could build their own online service that copied AOL in most respects, then bury their rival with money and sheer ubiquity. They could, Siegelman suggested, make their own network an integral part of the eventual Windows 95, make signing up for it just another step in the installation process. How could AOL possibly compete with that? It was the first step down a fraught road that would lead to widespread outrage inside the computer industry and one of the most high-stakes anti-trust investigations in the history of American business — but for all that, the broad strategy would prove very, very effective once it reached its final form. It had a ways still to go at this stage, though, targeting as it did AOL instead of the Web.

Gates put Siegelman in charge of building Microsoft’s online service, which was code-named Project Marvel. “We were not thinking about the Internet at all,” admits one of the project’s managers. “Our competition was CompuServe and America Online. That’s what we were focused on, a proprietary online service.” At the time, there were exactly two computers on Microsoft’s sprawling Redmond, Washington, campus that were connected to the Internet. “Most college kids knew much more than we did because they were exposed to it,” says the Marvel manager. “If I had wanted to connect to the Internet, it would have been easier for me to get into my car and drive over to the University of Washington than to try and get on the Internet at Microsoft.”

It came down to the old “not invented here” syndrome that dogs so many large institutions, as well as the fact that the Web and the Internet on which it lived were free, and Bill Gates tended to hold that which was free in contempt. Anyone who attempted to help him over his mental block — and there were more than a few of them at Microsoft — was greeted with an all-purpose rejoinder: “How are we going to make money off of free?” The biggest revolution in computing since the arrival of the first pre-assembled personal computers back in 1977 was taking place all around him, and Gates seemed constitutionally incapable of seeing it for what it was.

In the meantime, others were beginning to address the vexing question of how you made money out of free. On April 4, 1994, Marc Andreessen, the impetus behind the NCSA Mosaic browser, joined forces with Jim Clark, a veteran Silicon Valley entrepreneur, to found Netscape Communications for the purpose of making a commercial version of the Mosaic browser. A team of programmers, working without consulting the Mosaic source code so as to avoid legal problems, soon did just that, and uploaded Netscape Navigator to the Web on October 13, 1994. Distributed under the shareware model, with a $39 licensing fee requested but not demanded after a 90-day trial period was up, the new browser was installed on more than 10 million computers within nine months.

AOL’s growth had continued apace despite the concurrent explosion of the open Web; by the time of Netscape Navigator’s release, the service had 1.25 million subscribers. Yet Steve Case, no one’s idea of a hardcore techie, was ironically faster to see the potential — or threat — of the Web than was Bill Gates. He adopted a strategy in response that would make him for a time at least a superhero of the business press and the investor set. Instead of fighting the Web, AOL would embrace it — would offer its own Web browser to go along with its proprietary content, thereby adding a gate to its garden wall and tempting subscribers with the best of both worlds. As always for AOL, the whole package would be pitched toward neophytes, with a friendly interface and lots of safeguards — “training wheels,” as the tech cognoscenti dismissively dubbed them — to keep the unwashed masses safe when they did venture out into the untamed wilds of the Web.

But Case needed a browser of his own in order to execute his strategy, and he needed it in a hurry. He needed, in short, to buy a browser rather than build one. He saw three possibilities. One was to bring Netscape and its Navigator into the AOL fold. Another was a small company called Spyglass, a spinoff of the National Center for Supercomputing Applications (NCSA) which was attempting to commercialize the original NCSA Mosaic browser. And the last was a startup called Booklink Technologies, which was making a browser from scratch.

Netscape was undoubtedly the superstar of the bunch, but that didn’t help AOL’s cause any; Marc Andreessen and Jim Clark weren’t about to sell out to anyone. Spyglass, on the other hand, struck Case as an unimaginative Johnny-come-lately that was trying to shut the barn door long after the horse called Netscape had busted out. That left only Booklink. In November of 1994, AOL paid $30 million for the company. The business press scoffed, deeming it a well-nigh flabbergasting over-payment. But Case would get the last laugh.

While AOL was thus rushing urgently to “embrace and extend” the Web, to choose an ominous phrase normally associated with Microsoft, the latter was dawdling along more lackadaisically toward a reckoning with the Internet. During that same busy fall of 1994, IBM released OS/2 3.0, which was marketed as OS/2 Warp in the hope of lending it some much-needed excitement. By either name, it was the latest iteration of an operating system that IBM had originally developed in partnership with Microsoft, an operating system that had once been regarded by both companies as nothing less than the future of mainstream computing. But since the pair’s final falling out in 1991, OS/2 had become an irrelevancy in the face of the Windows juggernaut, winning a measure of affection only in some hacker circles and a few other specialized niches. Despite its snazzy new name and despite being an impressive piece of software from a purely technical perspective, OS/2 Warp wasn’t widely expected to change those fortunes before its release, and this lack of expectations proved well-founded afterward. Yet it was a landmark in another way, being the first operating system to include a Web browser as an integral component, in this case a program called Web Explorer, created by IBM itself because no one else seemed much interested in making a browser for the unpopular OS/2.

This appears to have gotten some gears turning in Bill Gates’s head. Microsoft already planned to include more networking tools than ever before in Windows 95. They had, for example, finally decided to bow to customer demand and build TCP/IP, the networking protocol that allowed a computer to join the Internet, right into the operating system; Windows 3 required the installation of a third-party add-on for the same purpose. (“I don’t know what it is, and I don’t want to know what it is,” said Steve Ballmer, Gates’s right-hand man, to his programmers on the subject of TCP/IP. “[But] my customers are screaming about it. Make the pain go away.”) Maybe a Microsoft-branded Web browser for Windows 95 would be a good idea as well, if they could acquire one without breaking the bank.

Just days after AOL bought Booklink for $30 million, Microsoft agreed to give $2 million to Spyglass. In return, Spyglass would give Microsoft a copy of the Mosaic source code, which it could then use as the basis for its own browser. But, lest you be tempted to see this transaction as evidence that Gates’s opinions about the online future had already undergone a sea change by this date, know that the very day this deal went down was also the one on which he chose to publicly announce Microsoft’s own proprietary AOL competitor, to be known as simply the Microsoft Network, or MSN. At most, Gates saw the open Web at this stage as an adjunct to MSN, just as it would soon become to AOL. MSN would come bundled into Windows 95, he told the assembled press, so that anyone who wished to could become a subscriber at the click of a mouse.

The announcement caused alarm bells to ring at AOL. “The Windows operating system is what the dial tone is to the phone industry,” said Steve Case. He thus became neither the first nor the last of Gates’s rivals to hint at the need for government intervention: “There needs to be a level playing field on which companies compete.” Some pundits projected that Microsoft might sign up 20 million subscribers to MSN before 1995 was out. Others — the ones whom time would prove to have been more prescient — shook their heads and wondered how Microsoft could still be so clueless about the revolutionary nature of the World Wide Web.

AOL leveraged the Booklink browser to begin offering its subscribers Web access very early in 1995, whereupon its previously robust rate of growth turned downright torrid. By November of 1995, it would have 4 million subscribers. The personable and photogenic Steve Case became a celebrity in his own right, to the point of starring in a splashy advertising campaign for The Gap’s line of khakis; the man and the pants represented respectively the personification and the uniform of the trend in corporate America toward “business casual.” Meanwhile Case’s company became an indelible part of the 1990s zeitgeist. “You’ve got mail!,” the words AOL’s software spoke every time a new email arrived — something that was still very much a novel experience for many subscribers — was featured as a sample in a Prince song, and eventually became the name of a hugely popular romantic comedy starring Tom Hanks and Meg Ryan. CompuServe and AOL’s other old rivals in the proprietary space tried to compete by setting up Internet gateways of their own, but were never able to negotiate the transition from one era of online life to another with the same aplomb as AOL, and gradually faded into irrelevancy.

Thankfully for Microsoft’s shareholders, Bill Gates’s eyes were opened before his company suffered the same fate. At the eleventh hour, with what were supposed to be the final touches being put onto Windows 95, he made a sharp swerve in strategy. He grasped at last that the open Web was the here, the now, and the future: the first major development in mainstream consumer computing in years that hadn’t been more or less dictated by Microsoft, and one that, like it or not, wasn’t going anywhere. On May 26, 1995, he wrote a memo to every Microsoft employee that exuded an all-hands-on-deck sense of urgency. Gates, the longstanding Internet agnostic, had well and truly gotten the Internet religion.

I want to make clear that our focus on the Internet is critical to every part of our business. The Internet is the most important single development to come along since the IBM PC was introduced in 1981. It is even more important than the arrival of [the] graphical user interface (GUI). The PC analogy is apt for many reasons. The PC wasn’t perfect. Aspects of the PC were arbitrary or even poor. However, a phenomena [sic] grew up around the IBM PC that made it a key element of everything that would happen for the next fifteen years. Companies that tried to fight the PC standard often had good reasons for doing so, but they failed because the phenomena overcame any weakness that [the] resistors identified.

Over the last year, a number of people [at Microsoft] have championed embracing TCP/IP, hyperlinking, HTML, and building clients, tools, and servers that compete on the Internet. However, we still have a lot to do. I want every product plan to try and go overboard on Internet features.

Everything changed that day. Instead of walling its campus off from the Internet, Microsoft put the Web at every employee’s fingertips. Gates himself sent his people lists of hot new websites to explore and learn from. The team tasked with building the Microsoft browser, who had heretofore labored in under-staffed obscurity, suddenly had all the resources of the company at their beck and call. The fact was, Gates was scared; his fear oozes palpably from the aggressive language of the memo above. (Other people talked of “joining” the Internet; Gates wanted to “compete” on it.)

But just what was he so afraid of? A pair of data points provides us with some clues. Three days before he wrote his memo, a new programming language and run-time environment had taken the industry by storm. And the day after he did so, a Microsoft executive named Ben Slivka sent out a memo of his own with Gates’s blessing, bearing the odd title of “The Web Is the Next Platform.” To understand what Slivka was driving at, and why Bill Gates took it as such an imminent existential threat to his company’s core business model, we need to back up a few years and look at the origins of the aforementioned programming language.


Bill Joy, an old-school hacker who had made fundamental contributions to the Unix operating system, was regarded as something between a guru and an elder statesman by 1990s techies, who liked to call him “the other Bill.” In early 1991, he shared an eye-opening piece of his mind at a formal dinner for select insiders. Microsoft was then on the ascendant, he acknowledged, but they were “cruising for a bruising.” Sticking with the automotive theme, he compared their products to the American-made cars that had dominated until the 1970s — until the Japanese had come along peddling cars of their own that were more efficient, more reliable, and just plain better than the domestic competition. He said that the same fate would probably befall Microsoft within five to seven years, when a wind of change of one sort or another came along to upend the company and its bloated, ugly products. Just four years later, people would be pointing to a piece of technology from his own company Sun Microsystems as the prophesied agent of Microsoft’s undoing.

Sun had been founded in 1982 to leverage the skills of Joy along with those of a German hardware engineer named Andy Bechtolsheim, who had recently built an elegant desktop computer inspired by the legendary Alto machines of Xerox’s Palo Alto Research Center. Over the remainder of the 1980s, Sun made a good living as the premier maker of Unix-based workstations: computers that were a bit too expensive to be marketed to even the most well-heeled consumers, but were among the most powerful of their day that could be fit onto or under a single desktop. Sun possessed a healthy antipathy for Microsoft, for all of the usual reasons cited by the hacker contingent: they considered Microsoft’s software derivative and boring, considered the Intel hardware on which it ran equally clunky and kludgy (Sun first employed Motorola chips, then processors of their own design), and loathed Microsoft’s intensely adversarial and proprietorial approach to everything it touched. For some time, however, Sun’s objections remained merely philosophical; occupying opposite ends of the market as they did, the two companies seldom crossed one another’s paths. But by the end of the decade, the latest Intel hardware had advanced enough to be comparable with that being peddled by Sun. And by the time that Bill Joy made his prediction, Sun knew that something called Windows NT was in the works, knew that Microsoft would be coming in earnest for the high-end-computing space very soon.

About six months after Joy played the oracle, Sun’s management agreed to allow one of their star programmers, a fellow named James Gosling, to form a small independent group in order to explore an idea that had no obvious connection to the company’s main business. “When someone as smart as James wants to pursue an area, we’ll do our best to provide an environment,” said Chief Technology Officer Eric Schmidt.

James Gosling

The specific “area” — or, perhaps better said, problem — that Gosling wanted to address was one that still exists to a large extent today: the inscrutability and lack of interoperability of so many of the gadgets that power our daily lives. The problem would be neatly crystallized almost five years later by one of the milquetoast jokes Jay Leno made at the Windows 95 launch, about how the VCR in even Bill Gates’s living room was still blinking “12:00” because he had never figured out how to set the thing’s clock. What if everything in your house could be made to talk together, wondered Gosling, so that setting one clock would set all of them — so that you didn’t have to have a separate remote control for your television and your VCR, each with about 80 buttons whose functions you didn’t understand and never, ever pressed? “What does it take to watch a videotape?” he mused. “You go plunk, plunk, plunk on all of these things in certain magic sequences before you can actually watch your videotape! Why is it so hard? Wouldn’t it be nice if you could just slide the tape into the VCR, [and] the system sort of figures it out: ‘Oh, gee, I guess he wants to watch it, so I ought to power up the television set.'”

But when Gosling and his colleagues started to ponder how best to realize their semi-autonomous home of the future, they tripped over a major stumbling block. While it was true that more and more gadgets were becoming “smart,” in the sense of incorporating programmable microprocessors, the details of their digital designs varied enormously. Each program to link each individual model of, say, VCR into the home network would have to be written, tested, and debugged from scratch. Unless, that is, the program could be made to run in a virtual machine.

A virtual machine is an imaginary computer which a real computer can be programmed to simulate. It permits a “write once, run everywhere” approach to software: once a given real computer has an interpreter for a given virtual machine, it can run any and all programs that have been or will be written for that virtual machine, albeit at some cost in performance.
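
To make the idea concrete, here is a minimal, purely illustrative sketch in the Java that would eventually grow out of Gosling’s work. The class name and message are my own inventions; the point is only that the compiler produces bytecode for the virtual machine rather than for any particular processor, so the same compiled file runs anywhere an interpreter for that virtual machine exists.

// HelloVm.java -- a toy illustration of "write once, run everywhere."
// (The class name and the printed message are invented for this sketch.)
//
//   javac HelloVm.java   produces HelloVm.class, bytecode for the virtual machine;
//   java HelloVm         asks the local virtual machine to run that bytecode.
//
// The identical HelloVm.class runs unmodified on Windows, MacOS, Unix, or, in
// Gosling's original vision, a sufficiently clever VCR, at some cost in speed
// compared to code compiled for the native processor.
public class HelloVm {
    public static void main(String[] args) {
        System.out.println("Hello from the virtual machine, running on "
                + System.getProperty("os.name"));
    }
}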

Like almost every other part of the programming language that would eventually become known as Java, the idea of a virtual machine was far from new in the abstract. (“In some sense, I would like to think that there was nothing invented in Java,” says Gosling.) For example, a decade before Gosling went to work on his virtual machine, the Apple Pascal compiler was already targeting one that ran on the lowly Apple II, even as the games publisher Infocom was distributing its text adventures across dozens of otherwise incompatible platforms thanks to its Z-Machine.

Unfortunately, Gosling’s new implementation of this old concept proved unable to solve by itself the original problem for which it had been invented. Even Wi-Fi didn’t exist at this stage, much less the likes of Bluetooth. Just how were all of these smart gadgets supposed to actually talk to one another, to say nothing of pulling down the regular software updates which Gosling envisioned as another benefit of his project? (Building a floppy-disk drive into every toaster was an obvious nonstarter.) After reluctantly giving up on their home of the future, the team pivoted for a while toward “interactive television,” a would-be on-demand streaming system much like our modern Netflix. But Sun had no real record in the consumer space, and cable-television providers and other possible investors were skeptical.

While Gosling was trying to figure out just what this programming language and associated runtime environment he had created might be good for, the World Wide Web was taking off. In July of 1994, a Sun programmer named Patrick Naughton did something that would later give Bill Gates nightmares: he wrote a fairly bare-bones Web browser in Java, more for the challenge than anything else. A couple of months later there came the eureka moment: Naughton and another programmer named Jonathan Payne made it possible to run other Java programs, or “applets” as they would soon be known, right inside their browser. They stuck one of the team’s old graphical demos on a server and clicked the appropriate link, whereupon they were greeted with a screen full of dancing Coca-Cola cans. Payne found it “breathtaking”: “It wasn’t just playing an animation. It was physics calculations going on inside a webpage!”

In order to appreciate his awe, we need to understand what a static place the early Web was. HTML, the “language” in which pages were constructed, was an abbreviation for “Hypertext Markup Language.” In form and function, it was more akin to a typesetting specification than a Turing-complete programming language like C or Pascal or Java; the only form of interactivity it allowed for was the links that took the reader from static page to static page, while its only visual pizazz came in the form of static in-line images (themselves a relatively recent addition to the HTML specification, thanks to NCSA Mosaic). Java stood to change all that at a stroke. If you could embed programs running actual code into your page layouts, you could in theory turn your pages into anything you wanted them to be: games, word processors, spreadsheets, animated cartoons, stock-market tickers, you name it. The Web could almost literally come alive.
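
To make the contrast concrete, here is a bare-bones, hypothetical applet of the kind Naughton and Payne were embedding. The class name and the text it draws are my own, and the java.applet API it rests on has long since been deprecated, but the shape of the thing is accurate: the page names a compiled class, the browser instantiates it, and the browser calls its paint() method whenever that rectangle of the page needs drawing.

// DancingDemo.java -- a minimal applet sketch in the style of 1995.
// (Class name and text are invented; java.applet is deprecated in modern Java.)
import java.applet.Applet;
import java.awt.Graphics;

public class DancingDemo extends Applet {
    public void paint(Graphics g) {
        // The hosting browser calls paint() to render this region of the page.
        g.drawString("Live code running inside a webpage!", 20, 20);
    }
}

// A page would embed the compiled class with markup along these lines:
//   <applet code="DancingDemo.class" width="300" height="50"></applet>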

The potential was so clearly extraordinary that Java went overnight from a moribund project on the verge of the chopping block to Sun’s top priority. Even Bill Joy, now living in blissful semi-retirement in Colorado, came back to Silicon Valley for a while to lend his prodigious intellect to the process of turning Java into a polished tool for general-purpose programming. There was still enough of the old-school hacker ethic left at Sun that management bowed to the developers’ demand that the language be made available for free to individual programmers and small businesses; Sun would make its money on licensing deals with bigger partners, who would pay for the Java logo on their products and the right to distribute the virtual machine. The potential of Java certainly wasn’t lost on Netscape’s Marc Andreessen, who had long been leading the charge to make the Web more visually exciting. He quickly agreed to pay Sun $750,000 for the opportunity to build Java into the Netscape Navigator browser. In fact, it was Andreessen who served as master of ceremonies at Java’s official coming-out party at a SunWorld conference on May 23, 1995 — i.e., three days before Bill Gates wrote his urgent Internet memo.

What was it that so spooked him about Java? On the one hand, it represented a possible if as-yet unrealized challenge to Microsoft’s own business model of selling boxed software on floppy disks or CDs. If people could gain access to a good word processor just by pointing their browsers to a given site, they would presumably have little motivation to invest in Microsoft Office, the company’s biggest cash cow after Windows. But the danger Java posed to Microsoft might be even more extreme. The most maximalist predictions, which were being trumpeted all over the techie press in the weeks after the big debut, had it that even Windows could soon become irrelevant courtesy of Java. This is what Microsoft’s own Ben Slivka meant when he said that “the Web is the next platform.” The browser itself would become the operating system from the perspective of the user, being supported behind the scenes only by the minimal amount of firmware needed to make it go. Once that happened, a new generation of cheap Internet devices would be poised to replace personal computers as the world now knew them. With all software and all of each person’s data being stored in the cloud, as we would put it today, even local hard drives might become passé. And then, with Netscape Navigator and Java having taken over the role of Windows, Microsoft might very well join IBM, the very company it had so recently displaced from the heights of power, in the crowded field of computing’s has-beens.

In retrospect, such predictions seem massively overblown. Officially labeled beta software, Java — painfully crash-prone and slow — was in reality more like an alpha release at best at the time it was being celebrated as the Paris to Microsoft’s Achilles. And even when it did reach a reasonably mature form, the reality of it would prove considerably less than the hype. One crippling weakness that would continue to plague it was the inability of a Java applet to communicate with the webpage that spawned it; applets ran in Web browsers, but weren’t really of them, being self-contained programs siloed off in a sandbox from the pages that hosted them. Meanwhile the prospects of applications like online word processing, or even online gaming in Java, were sharply limited by the fact that at least 95 percent of Web users were accessing the Internet on dial-up connections, over which even a single high-resolution photograph could take minutes to load. A word processor like the one included with Microsoft Office would require hours of downloading every time you wanted to use it, assuming it was even possible to create such a complex piece of software in the fragile young language. Java never would manage to entirely overcome these issues, and would in the end enjoy its greatest success in other incarnations than that of the browser-embedded applet.

Still, cooler-headed reasoning like this was not overly commonplace in the months after the SunWorld presentation. By the end of 1995, Sun’s stock price had more than doubled on the strength of Java alone, a product yet to see a 1.0 release. The excitement over Java probably contributed as well to Netscape’s record-breaking initial public offering in August. A cavalcade of companies rushed to follow in the footsteps of Netscape and sign Java distribution deals, most of them on markedly more expensive terms. Even Microsoft bowed to the prevailing winds on December 7 and announced a Java deal of its own. (BusinessWeek magazine described it as a “capitulation.”) That all of this was happening alongside the even more intense hype surrounding the release of Windows 95, an operating system far more expansive than any that had come out of Microsoft to date but one that was nevertheless of a very traditionalist stripe at bottom, speaks to the confusion of these go-go times when digital technology seemed to be going anywhere and everywhere at once.

Whatever fear and loathing he may have felt toward Java, Bill Gates had clearly made his peace with the fact that the Web was computing’s necessary present and future. The Microsoft Network duly debuted as an icon on the default Windows 95 desktop, but it was now pitched primarily as a gateway to the open Web, with just a handful of proprietary features; MSN was, in other words, little more than yet another Internet service provider, of the sort that were popping up all over the country like dandelions after a summer shower. Instead of the 20 million subscribers that some had predicted (and that Steve Case had so feared), it attracted only about 500,000 customers by the end of the year. This left it no more than one-eighth as large as AOL, which had by now completed its own deft pivot from proprietary online service of the 1980s type to the very face of the World Wide Web in the eyes of countless computing neophytes.

Yet if Microsoft’s first tentative steps onto the Web had proved underwhelming, people should have known from the history of the company — and not least from the long, checkered history of Windows itself — that Bill Gates’s standard response to failure and rejection was simply to try again, harder and better. The real war for online supremacy was just getting started.

(Sources: the books Overdrive: Bill Gates and the Race to Control Cyberspace by James Wallace, The Silicon Boys by David A. Kaplan, Architects of the Web by Robert H. Reid, Competing on Internet Time: Lessons from Netscape and Its Battle with Microsoft by Michael Cusumano and David B. Yoffie, dot.con: The Greatest Story Ever Sold by John Cassidy, Stealing Time: Steve Case, Jerry Levin, and the Collapse of AOL Time Warner by Alec Klein, Fools Rush In: Steve Case, Jerry Levin, and the Unmaking of AOL Time Warner by Nina Munk, and There Must be a Pony in Here Somewhere: The AOL Time Warner Debacle by Kara Swisher.)

 
 


A Web Around the World, Part 11: A Zero-Sum Game

Mosaic Communications was founded on $13 million in venture capital, a pittance by the standards of today but an impressive sum by those of 1994. Marc Andreessen and Jim Clark’s business plan, if you can call it that, would prove as emblematic of the era of American business history they were inaugurating as anything they ever did. “I don’t know how in hell we’re going to make money,” mused Clark, “but I’ll put money behind it, and we’ll figure out a way. A market growing as quickly as that [one] is going to have money to be made in it.” This naïve faith that nebulous user “engagement” must inevitably be transformed into dollars in the end by some mysterious alchemical process would be all over Silicon Valley throughout the dot-com boom — and, indeed, has never entirely left it even after the bust.

Andreessen and Clark’s first concrete action after the founding was to contact everyone at the National Center for Supercomputing Applications who had helped out with the old Mosaic browser, asking them to come to Silicon Valley and help make the new one. Most of their targets were easily tempted away from the staid nonprofit by the glamor of the most intensely watched tech startup of the year, not to mention the stock options that were dangled before them. The poaching of talent from NCSA secured for the new company some of the most seasoned browser developers in the world. And, almost as importantly, it also served to cut NCSA’s browser — the new one’s most obvious competition — off at the knees. For without these folks, how was NCSA to keep improving its browser?

The partners were playing a very dangerous game here. The Mosaic browser and all of its source code were owned by NCSA as an organization. Not only had Andreessen and Clark made the cheeky move of naming their company after a browser they didn’t own, but they had now stolen away from NCSA those people with the most intimate knowledge of how said browser actually worked. Fortunately, Clark was a grizzled enough veteran of business to put some safeguards in place. He was careful to ensure that no one brought so much as a line of code from the old browser with them to Mosaic Communications. The new one would be entirely original in terms of its code if not in terms of the end-user experience; it would be what the Valley calls a “clean-room implementation.”

Andreessen and Clark were keenly aware that the window of opportunity to create the accepted successor to NCSA Mosaic must be short. They made it clearer with every move they made that they saw the World Wide Web as a zero-sum game. They consciously copied the take-no-prisoners approach of Bill Gates, CEO of Microsoft, which had by now replaced IBM as the most powerful and arguably the most hated company in the computer industry. Marc Andreessen:

We knew that the key to success for the whole thing was getting ubiquity on the [browser] side. That was the way to get the company jump-started because that gives you essentially a broad platform to build off of. It’s basically a Microsoft lesson, right? If you get ubiquity, you have a lot of options, a lot of ways to benefit from that. You can get paid by the product that you are ubiquitous on, but you can also get paid on products that benefit as a result. One of the fundamental lessons is that market share now equals revenue later, and if you don’t have the market share now, you are not going to have revenue later. Another fundamental lesson is that whoever gets the volume does win in the end. Just plain wins. There has to be just one single winner in a market like this.

The founders pushed their programmers hard, insisting that the company simply had to get the browser out by the fall of 1994, which gave them a bare handful of months to create it from scratch. To spur their employees on, they devised a semi-friendly competition. They divided the programmers into three teams, one working on a browser for Unix, one on the Macintosh version, and one on the Microsoft Windows version. The teams raced one another from milestone to milestone, and compared their browsers’ rendering speeds down to the millisecond, all for weekly bragging rights and names on walls of fame and/or shame. One mid-level manager remembers how “a lot of times, people were there 48 hours straight, just coding. I’ve never seen anything like it, in terms of honest-to-God, no BS, human endurance.” Inside the office, the stakes seemed almost literally life or death. He recalls an attitude that “we were fighting some war and that we could win.”

In the meantime, Jim Clark was doing some more poaching. He hired away from his old company Silicon Graphics an ace PR woman named Rosanne Siino. She became the architect of the mass-media image of the dot-com founder as genius, visionary, and all-around rock star. “We had this 22-year-old kid who was pretty damn interesting, and I thought, ‘There’s a story here,’” she says. She proceeded to pitch that story to anyone who would take her calls.

Andreessen, for his part, slipped into his role fluidly enough after just a bit of coaching. “If you get more visible,” he reasoned, “it counts as advertising, and it doesn’t cost anything.” By the mid-summer of 1994, he was doing multiple interviews most days. Tall and athletically built, well-dressed and glib — certainly no one’s stereotype of a pasty computer nerd — he was perfect fodder for tech journals, mainstream newspapers, and supermarket tabloids alike. “He’s young, he’s hot, and he’s here!” trumpeted one of the last above a glamor shot of the wunderkind.

The establishment business media found the rest of the company to be almost as interesting if not quite as sexy, from its other, older founder who was trying to make lightning strike a second time to the fanatical young believers who filled the cubicles; stories of crunch time were more novel then than they would soon become. Journalists fixated on the programmers’ whimsical mascot, a huge green and purple lizard named Mozilla who loomed over the office from his perch on one wall. Some were even privileged to learn that his name was a portmanteau of “Mosaic” and “Godzilla,” symbolizing the company’s intention to annihilate the NCSA browser as thoroughly as the movie monster had leveled Tokyo. On the strength of sparkling anecdotes like this, Forbes magazine named Mosaic Communications one of its “25 Cool Companies” — all well before it had any products whatsoever.

Mozilla, the unofficial mascot of Mosaic (later Netscape) Communications. He would prove to be far longer-lived than the company he first represented. Today he still lends his name to the Mozilla Foundation, which maintains an open-source browser and fights for open standards on the Web — somewhat ironically, given that the foundation’s origins lie in the first company to be widely perceived as a threat to those standards.

The most obvious obstacle to annihilating the NCSA browser was the latter’s price: it was, after all, free. Just how was a for-profit business supposed to compete with that price point? Andreessen and Clark settled on a paid model that nevertheless came complete with a nudge and a wink. The browser they called Mosaic Netscape would technically be free only to students and educators. But others would be asked to pay the $39 licensing fee only after a 90-day trial period — and, importantly, no mechanism would be implemented to coerce them into doing so even after the trial expired. Mosaic Communications would thus make Andreessen’s sanguine conviction that “market share now equals revenue later” the cornerstone of its business strategy.

Mosaic Netscape went live on the Internet on October 13, 1994. And in terms of Andreessen’s holy grail of market share at least, it was an immediate, thumping success. Within weeks, Mosaic Netscape had replaced NCSA Mosaic as the dominant browser on the Web. In truth, it had much to recommend it. It was blazing fast on all three of the platforms on which it ran, a tribute to the fierce competition between the teams who had built its different versions. And it sported some useful new HTML tags, such as “<center>” for centering text and “<blink>” for making it do just that. (Granted, the latter was rather less essential than the former, but that wouldn’t prevent thousands of websites from hastening to make use of it; as is typically the case with such things, the evolution of Web aesthetics would happen more slowly than that of Web technology.) Most notably of all, Netscape added the possibility of secure encryption to the Web, via the Secure Sockets Layer (SSL). The company rightly considered SSL to be an essential prerequisite to online commerce; no one in their right mind was going to send credit-card numbers in the clear.

But, valuable though these additions (mostly) were, they raised the ire of many of those who had shepherded the Web through its early years, not least among them Tim Berners-Lee. Although they weren’t patented and thus weren’t proprietary in a legal sense — anyone was free to implement them if they could figure out how they worked — Mosaic Communications had rolled them out without talking to anyone about what they were doing, leaving everyone else to play catch-up in a race of their own making.

Still, such concerns carried little weight with most users. They were just happy to have a better browser.

More pressing for Andreessen and Clark were the legal threats that were soon issuing from NCSA and the University of Illinois, demanding up to 50 percent of the revenue from Mosaic Netscape, which they alleged was by rights at least half theirs. These continued even after Jim Clark produced a report from a forensic software expert which stated that, for all that they might look and feel the same, NCSA Mosaic and Mosaic Netscape shared no code at all. Accepting at last that naming their company after the rival browser whose code they insisted they were not stealing had been terrible optics, Andreessen and Clark rechristened Mosaic Communications as Netscape Communications on November 14, 1994; its browser now became known as Netscape Navigator. Seeking a compromise to make the legal questions go away once and for all, Clark offered NCSA a substantial amount of stock in Netscape, only to be turned down flat. In the end, he agreed to a cash settlement instead; industry rumor placed it in the neighborhood of $2 million. NCSA and the university with which it was affiliated may have felt validated by the settlement, but time would show that it had not been an especially wise decision to reject Clark’s first overture: ten months later, the stock NCSA had been offered was worth $17 million.



For all its exciting growth, the World Wide Web had made relatively few inroads with everyday Americans to this point. But all of that changed in 1995, the year when the Web broke through in earnest. There was now enough content there to make it an interesting place for the ordinary Joe or Jane to visit, as well as a slick, user-friendly browser for him or her to use in the form of Netscape Navigator.

Just as importantly, there were for the first time enough computers in daily use in American homes to make something like the Web a viable proposition. With the more approachable Microsoft Windows having replaced the cryptic, command-line-driven MS-DOS as the typical face of consumer computing, with new graphics cards, sound cards, and CD-ROM drives providing a reasonably pleasing audiovisual experience, with the latest word processors and spreadsheets being more powerful and easier to use than ever before, and with the latest microprocessors and hard drives allowing it all to happen at a reasonably brisk pace, personal computers had crossed a Rubicon in the last half-decade or so, to become gadgets that people who didn’t find computers themselves intrinsically fascinating might nonetheless want to own and use. Netscape Navigator was fortunate enough to hit the scene just as these new buyers were reaching a critical mass. They served to prime the pump. And then, once just about everyone with a computer seemed to be talking about the Web, the whole thing became a self-reinforcing virtuous circle, with computer owners streaming onto the Web and the Web in turn driving computer sales. By the summer of 1995, Netscape Navigator had been installed on at least 10 million computers.

Virtually every major corporation in the country that didn’t have a homepage already set one up during 1995. Many were little more than a page or two of text and a few corporate logos at this point, but a few did go further, becoming in the process harbingers of the digital future. Pizza Hut, for example, began offering an online ordering service in select markets, and Federal Express made it possible for customers to track the progress of their packages around the country and the world from right there in their browsers. Meanwhile Silicon Valley and other tech centers played host to startup after startup, including plenty of names we still know well today: the online bookstore (and later anything-store) Amazon, the online auction house eBay, and the online dating service Match.com among others were all founded that year.

Recognizing an existential threat when they saw one, the old guard of circumscribed online services such as CompuServe, who had pioneered much of the social and commercial interaction that was now moving onto the open Web, rushed to devise hybrid business models that mixed their traditional proprietary content with Internet access. Alas, it would avail most of them nothing in the end; the vast majority of these dinosaurs would shuffle off to extinction before the decade was out. Only an upstart service known as America Online, a comparative latecomer on the scene, would successfully weather the initial storm, thanks mostly to astute marketing that positioned it as the gentler, friendlier, more secure alternative to the vanilla Web for the non-tech-savvy consumer. Its public image as a sort of World Wide Web with training wheels would rake in big profits even as it made the service and its subscribers objects of derision for Internet sophisticates. But even America Online would not be able to maintain its stranglehold on Middle America forever. By shortly after the turn of the millennium — and shortly after an ill-advised high-profile merger with the titan of old media Time Warner — it too would be in free fall.



One question stood foremost in the minds of many of these millions who were flocking onto the Web for the first time: how the heck were they supposed to find anything here? It was, to be sure, an ironic question to be asking, given that Tim Berners-Lee had invented his World Wide Web for the express purpose of making the notoriously confounding pre-Web Internet easier to navigate. Yet as websites bred and spawned like rabbits in a Viagra factory, it became a relevant one once again.

The idea of a network of associative links was as valid as ever — but just where were you to start when you knew that you wanted to, say, find out the latest rumors about your favorite band Oasis? (This was the mid-1990s, after all.) Once you were inside the Oasis ecosystem, as it were, it was easy enough to jump from site to site through the power of association. But how were you to find your way inside in the first place, when you fired up your browser and were greeted with a blank page and an empty text field waiting for you to type in a Web address you didn’t know?

One solution to this conundrum was weirdly old-fashioned: brick-and-mortar bookstore shelves were soon filling up with printed directories that cataloged the Web’s contents. But this was a manifestly inadequate solution as well as a retrograde one; what with the pace of change on the Web, such books were out of date before they were even sold. What people really needed was a jumping-off point on the Web itself, a home base from which to start each journey down the rabbit hole of their particular interests, offering a list of places to go that could grow and change as fast as the Web itself. Luckily, two young men with too much time on their hands had created just such a thing.

Jerry Yang and David Filo were rather unenthusiastic Stanford graduate students in computer science during the early 1990s. Being best friends, they discovered the Web together shortly after the arrival of the NCSA Mosaic browser. Already at this early date, finding the needles in the digital haystack was becoming difficult. Therefore they set up a list of links they found interesting, calling it “Jerry and David’s Guide to the World Wide Web.” This was not unique in itself; thousands of others were putting up similar lists of “cool links.” Yang and Filo were unique, however, in how much energy they devoted to the endeavor.

Jerry Yang and David Filo. Bare feet were something of a staple of Silicon Valley glamor shots, serving as a delightful shorthand for informal eccentricity in the eyes of the mass media.

They were among the first wave of people to discover the peculiar, dubiously healthy dopamine-release mechanism that is online attention, whether measured in page views, as in those days, or likes or retweets, as today. The more traffic that came their way, the more additional traffic they wanted. Instead of catering merely to their personal interests, they gradually turned their site into a comprehensive directory of the Web — all of it, in the ideal at least. They surfed tirelessly day after day, neglecting girlfriends, family, and personal hygiene, not to mention their coursework, trying to keep up with the Sisyphean task of cataloging every new site of note that went up on the Web, then slotting it into a branching hierarchy of hundreds of categories and sub-categories.

In April of 1994, they decided that their site needed a catchier name. Their initial thought was to combine their last names in some ingenious way, but they couldn’t find one that worked. So, they focused on the name of Yang, by nature the more voluble and outgoing of the pair. They were steeped enough in hacker culture to think of a popular piece of software called YACC; it stood for “Yet Another Compiler Compiler,” but was pronounced like the Himalayan beast of burden. That name was obviously taken, but perhaps they could come up with something else along those lines. They looked in a dictionary for words starting with “ya”: “yawn,” “yawp,” “yaw,” “y-axis”… “yahoo.” The good book told them that “yahoo” derived from Jonathan Swift’s Gulliver’s Travels, where it referred to “any of a race of brutish, degraded creatures having the form and all of the vices of man.” Whatever — they just liked the sound of the word. They racked their brains until they had turned it into an acronym: “Yet Another Hierarchical Officious Oracle.” Whatever. It would do. A few months later, they stuck an exclamation point at the end as a finishing touch. And so Yahoo! came to be.

Yahoo! very shortly after it received its name, but before it received its final flourish of an exclamation point.

For quite some time after that, not much changed on the surface. Yang and Filo had by now appropriated a neglected camping trailer on one of Stanford’s back parking lots, which they turned into their squalid headquarters. They tried to keep up with the flood of new content coming onto the Web every day by living in the trailer, trading four-hour shifts with one another around the clock, working like demons for that sweet fix of ever-increasing page-view numbers. “There was nothing else in the world like it,” says Yang. “There was such camaraderie, it was like driving off a cliff.”

But there came a point, not long after the start of that pivotal Web year of 1995, when Yang and Filo had to recognize that they were losing their battle with new content. So, they set off in search of the funding they would need to turn what had already become in the minds of many the Web’s de-facto “front page” into a real business, complete with employees they could pay to do what they had been doing for free. They seriously considered joining America Online, then came even closer to signing on with Netscape, a company which had already done much for their popularity by placing their site behind a button displayed prominently by the Navigator browser. In the end, though, they opted to remain independent. In April of 1995, they secured $4 million in financing, thanks to a far-sighted venture capitalist named Mike Moritz, who made the deal in the face of enormous skepticism from his colleagues. “The venture community [had never] invested in anything that gave a product away for free,” he remembers.

Or had they? It all depended on how you looked at it. Yang and Filo noted that television broadcasters had been giving their product away for free for decades as far as the individual viewer was concerned, making their money instead by selling access to their captive audience to third-party advertisers. Why couldn’t the same thing work on the Web? The demographic that visited Yahoo! regularly was, after all, an advertiser’s dream, being largely comprised of young adults with disposable income, attracted to novelty and with enough leisure time to indulge that attraction.

So, advertising started appearing on Yahoo! very shortly after it became a real business. Adherents to the old, non-commercial Web ideal grumbled, and some of them left in a huff, but their numbers were dwarfed by the continuing flood of new Netizens, who tended to perceive the Web as just another form of commercial media and were thus unfazed when they were greeted with advertising there. With the help of a groundbreaking Web analytics firm known as I/PRO, Yahoo! came up with ways to target its advertisements ever more precisely to each individual user’s interests, which she revealed to the company whether she wanted to or not through the links she clicked. The Web, Yang and Filo were at pains to point out, was the most effective advertising environment ever to appear. Business journalist Robert H. Reid, who profiled Netscape, Yahoo!, I/PRO, and much of the rest of the early dot-com startup scene for a book published in 1997, summed up the advantages of online advertising as follows:

There is a limit to how targeted advertising can be in traditional media. [This is] because any audience that is larger than one, even a fairly small and targeted [audience], will inevitably have its diversity elements (certain readers of the [Wall Street] Journal’s C section surely do not care about new bond issues, while certain readers of Field and Stream surely do). The Web has the potential to let marketers overcome this because, as an interactive medium, it can enable them to target their messages with surgical precision. Database technology can allow entirely unique webpages to be generated and served in moments based upon what is known about a viewer’s background, interests, and prior trajectory through a site. A site with a diverse audience can therefore direct one set of messages to high-school boys and a wholly different one to retired women. Or it could go further than this — after all, not all retired women are interested in precisely the same things — and present each visitor with an entirely unique message or experience.

Then, too, on the Web advertisers could do more than try to lodge an impression in a viewer’s mind and hope she followed up on it later, as was the case with television. They could rather present an advertisement as a clickable link that would take her instantly to their own site, which she could browse to learn far more about their products than she ever could from a one-minute commercial, which she might even be able to use to buy their products then and there — instant gratification for everyone involved.
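
To make the mechanics concrete, here is a minimal sketch of that kind of interest-based ad selection in present-day Python. Everything in it is invented for the sake of illustration (the topic tags, the ad inventory, the scoring rule); it is not anything I/PRO or Yahoo! actually shipped, merely the general idea of matching ads to the interests a visitor reveals through her clicks.

```python
# A toy model of mid-1990s interest targeting: score each ad against the
# topics a visitor has revealed through the links she has already clicked.
# All names and topic tags here are invented for illustration.
from collections import Counter

AD_INVENTORY = {
    "bond-fund":   {"finance", "investing"},
    "fly-rod":     {"fishing", "outdoors"},
    "concert-tix": {"music", "rock"},
}

def record_click(profile: Counter, topics: set[str]) -> None:
    """Every clicked link adds its topics to the visitor's running profile."""
    profile.update(topics)

def pick_ad(profile: Counter) -> str:
    """Serve the ad whose topics best overlap the visitor's inferred interests."""
    def score(ad: str) -> int:
        return sum(profile[topic] for topic in AD_INVENTORY[ad])
    return max(AD_INVENTORY, key=score)

visitor = Counter()
record_click(visitor, {"music", "rock"})       # she clicked an Oasis fan page...
record_click(visitor, {"music", "concerts"})   # ...and a tour-dates listing
print(pick_ad(visitor))                        # -> "concert-tix"
```

Crude as it is, the sketch captures the bargain Reid describes: every click makes the profile a little sharper, and the sharper the profile, the more an advertiser will pay to reach it.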

Unlike so many Web firms before and after it, Yahoo! became profitable right away on the strength of reasoning like that. Even when Netscape pulled the site from Navigator at the end of 1995, replacing it with another one that was willing to pay dearly for the privilege — another sign of the changing times — it only briefly affected Yahoo!’s overall trajectory. As far as the mainstream media was concerned, Yang and Filo — these two scruffy graduate students who had built their company in a camping trailer — were the best business story since the rise of Netscape. If anything, Jerry Yang’s personal history made Yahoo! an even more compelling exemplar of the American Dream: he had come to the United States from Taiwan at the age of ten, when the only word of English he knew was “shoe.” When Yang showed that he could be every bit as charming as Marc Andreessen, that only made the story that much better.

Declaring that Yahoo! was a media rather than a technology company, Yang displayed a flair for branding one would never expect from a lifelong student: “It’s an article of culture. This differentiates Yahoo!, makes it cool, and gives it a market premium.” Somewhat ironically given its pitch that online advertising was intrinsically better than television advertising, Yahoo! became the first of the dot-com startups to air television commercials, all of which concluded with a Gene Autry soundalike yodeling the name, an unavoidable earworm for anyone who heard it. A survey conducted in 1996 revealed that half of all Americans already knew the brand name — a far larger percentage than that which had actually ventured online by that point. It seems safe to say that Yahoo! was the most recognizable of all the early Web brands, more so even than Netscape.


Trailblazing though Yahoo!’s business model was in many ways, its approach to its core competency seems disarmingly quaint today. Yahoo! wasn’t quite a search engine in the way we think of such things; it was rather a collection of sanctioned links, hand-curated and methodically organized by a small army of real human beings. Well before television commercials like the one above had begun to air, the site’s sheer ubiquity had relieved the dozens of “surfers” it employed — many of them with degrees in library science — of the burden of going out and finding new sites for themselves. Owners of sites which wished to be listed were expected to fill out a form, then wait patiently for a few days or weeks for someone to get to their request and, if it passed muster, slot it into Yahoo!’s ever-blossoming hierarchy.

Yahoo! as it looked in October of 1996. A search field has recently been added, but it searches only Yahoo!’s hand-curated database of sites rather than the Web itself.

The alternative approach, which was common among Yahoo!’s competitors even at the time, was to send out automated “web crawlers,” programs that jump from link to link in order to index all of the content on the Web into a searchable database. But as far as many Netizens were concerned in the mid-1990s, that approach just didn’t work all that well. A search for “Oasis” on one of these sites was likely to show you hundreds of pages dealing with desert ecosystems, all jumbled together with those dealing with your favorite rock band. It would be some time before search engines would be developed that could divine what you were really looking for based on context, that could infer from your search for “Oasis band” that you really, really didn’t want to read about deserts at that particular moment. Search engines like the one around which Google would later build its empire require a form of artificial intelligence — still not the computer consciousness of the old “giant brain” model of computing, but a more limited, context-specific form of machine learning — that would not be quick or easy to develop. In the meantime, there was Yahoo! and its army of human librarians.
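
For contrast with Yahoo!’s hand-curated directory, here is a heavily simplified sketch of what those crawler-based competitors were doing, written in present-day Python using only the standard library. It illustrates the general technique rather than any particular engine’s code; real crawlers of the era also had to worry about politeness rules, duplicate pages, and indexes far too large to hold in memory.

```python
# A toy web crawler: follow links breadth-first and build an inverted index
# mapping each word to the pages that contain it, the bare bones of a
# mid-1990s keyword search engine with no notion of context or ranking.
from collections import defaultdict, deque
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen

class PageParser(HTMLParser):
    """Collect the hyperlinks and the visible words from one HTML page."""
    def __init__(self):
        super().__init__()
        self.links, self.words = [], []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)
    def handle_data(self, data):
        self.words.extend(data.lower().split())

def crawl(seed_url: str, limit: int = 50) -> dict[str, set[str]]:
    index: dict[str, set[str]] = defaultdict(set)
    queue, seen = deque([seed_url]), {seed_url}
    while queue and len(seen) <= limit:
        url = queue.popleft()
        try:
            html = urlopen(url, timeout=10).read().decode("utf-8", "replace")
        except OSError:
            continue                       # dead or unreachable link; move on
        parser = PageParser()
        parser.feed(html)
        for word in parser.words:
            index[word].add(url)           # word -> every page containing it
        for link in parser.links:
            absolute = urljoin(url, link)
            if absolute.startswith("http") and absolute not in seen:
                seen.add(absolute)
                queue.append(absolute)
    return index

# index = crawl("https://example.com")
# index.get("oasis", set())   # desert pages and band pages, jumbled together
```

The commented-out query at the bottom is exactly the “Oasis” problem: a bare keyword index records which pages contain a word, but it has no notion of what the searcher actually means by it.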



And there were also the first Internet IPOs. As ever, Netscape rode the crest of the Web wave, the standard bearer for all to follow. On the eve of its IPO on August 9, 1995, the shares were priced at $28 each, valuing the company at over $1 billion, even though its total revenues to date amounted to $17 million and its bottom line to date tallied a loss of $13 million. Nevertheless, when trading opened, the share price immediately soared to $74.75. “It took General Dynamics 43 years to become a corporation worth today’s $2.7 billion,” wrote The Wall Street Journal. “It took Netscape Communications about a minute.”

Yahoo!’s turn came on April 12, 1996. Its shares were priced at $13 when the day’s trading opened, and peaked at $43 over the course of that first day, giving the company an implied value of $850 million.

It was the beginning of an era of almost incomprehensible wealth generated by the so-called “Internet stocks,” often for reasons that were hard for ordinary people to understand, given how opaque the revenue models of so many Web giants could be. Even many of the beneficiaries of the stock-buying frenzy struggled to wrap their heads around it all. “Take, say, a Chinese worker,” said Lou Montulli, a talented but also ridiculously lucky programmer at Netscape. “I’m probably worth a million times the average Chinese worker, or something like that. It’s difficult to rationalize the value there. I worked hard, but did I really work that hard? I mean, can anyone work that hard? Is it possible? Is anyone worth that much?” Four of the ten richest people in the world today according to Forbes magazine — including the two richest of all — can trace the origins of their fortunes directly to the dot-com boom of the 1990s. Three more were already in the computer industry before the boom, and saw their wealth exponentially magnified by it. (The founders I’ve profiled in this article are actually comparatively small fish today. Their rankings on the worldwide list of billionaires as of this writing range from 792 in the case of David Filo to 1717 for Marc Andreessen.)

And what was Tim Berners-Lee doing as people began to get rich from his creation? He did not, as some might have expected, decamp to Silicon Valley to start a company of his own. Nor did he accept any of the “special advisor” roles that were his for the taking at a multitude of companies eager to capitalize on the cachet of his name. He did leave CERN, but made it only as far as Boston, where he founded a non-profit World Wide Web Consortium in partnership with MIT and others. The W3C, as it would soon become known, was created to lead the defense of open standards against those corporate and governmental forces which were already demonstrating a desire to monopolize and balkanize the Web. At times, there would be reason to question who was really leading whom; the W3C would, for example, be forced to write into its HTML standard many of the innovations which Netscape had already unilaterally introduced into its industry-leading browser. Yet the organization has undoubtedly played a vital role in keeping the original ideal of the Web from giving way completely to the temptations of filthy lucre. Tim Berners-Lee remains to this day the only director the W3C has ever known.

So, while Marc Andreessen and Jerry Yang and their ilk were becoming the darlings of the business pages, were buying sports cars and attending the most exclusive parties, Tim Berners-Lee was riding a bus to work every day in Boston, just another anonymous commuter in a gray suit. It was fall when he first arrived in his new home, and so, as he says, “the bus ride gave me time to revel in New England’s autumnal colours.” Many over the years have found it hard to believe he wasn’t bitter that his name had become barely a footnote in the reckoning of the business-page pundits who were declaring the Web — correctly, it must be said — the most important development in mass media in their lifetimes. But he himself insists — believably, it must be said — that he was not and is not resentful over the way things played out.

People sometimes ask me whether I am upset that I have not made a lot of money from the Web. In fact, I made some quite conscious decisions about which way to take in life. Those I would not change. What does distress me, though, is how important a question it seems to be for some. This happens mostly in America, not Europe. What is maddening is the terrible notion that a person’s value depends on how important and financially successful they are, and that that is measured in terms of money. This suggests disrespect for the researchers across the globe developing ideas for the next leaps in science and technology. Core in my upbringing was a value system that put monetary gain well in its place, behind things like doing what I really want to do. To use net worth as a criterion by which to judge people is to set our children’s sights on cash rather than on things that will actually make them happy.

It can be occasionally frustrating to think about the things my family could have done with a lot of money. But in general I’m fairly happy to let other people be in the Royal Family role…

Perhaps Tim Berners-Lee is the luckiest of all the people whose names we still recognize from that go-go decade of the 1990s, being the one who succeeded in keeping his humanity most intact by never stepping onto the treadmill of wealth and attention and “disruption” and Forbes rankings. Heaven help those among us who are no longer able to feel the joy of watching nature change her colors around them.



In 1997, Robert H. Reid wrote that “the inevitable time will come when the Web’s dawning years will seem as remote as the pioneering days of film seem today. Today’s best and most lavishly funded websites will then look as naïve and primitive as the earliest silent movies.” Exactly this has indeed come to pass. And yet if we peer beneath the surface of the early Web’s garish aesthetics, most of what we find there is eerily familiar.

One of the most remarkable aspects of the explosion of the Web into the collective commercial and cultural consciousness is just how quickly it occurred. In the three and one quarter years between the initial release of the NCSA Mosaic browser and the Yahoo! IPO, a new digital society sprang into being, seemingly from nothing and nowhere. It brought with it all of the possibilities and problems we still wrestle with today. For example, the folks at Netscape, Yahoo!, and other startups were the first to confront the tension between free speech and hate speech online. (Straining to be fair to everyone, Yahoo! reluctantly decided to classify the Ku Klux Klan under the heading of “White Power” rather than “Fascism,” instead of simply booting it off the site altogether.) As we’ve seen, the Internet advertising business emerged from whole cloth during this time, along with all of the privacy concerns raised by its determination to track every single Netizen’s voyages in the name of better ad targeting. (It’s difficult to properly tell the story of this little-loved but enormously profitable branch of business in greater depth because it has always been shrouded in so much deliberate secrecy.) Worries about Web-based pornography and the millions of children and adolescents who were soon viewing it regularly took center stage in the mass media, both illuminating and obscuring a huge range of questions — largely still unanswered today — about what effect this had on their psychology. (“Something has to be done,” said one IBM executive who had been charged with installing computers in classrooms, “or children won’t be given access to the Web.”) And of course the tension between open standards and competitive advantage remains of potentially existential importance to the Web as we know it, even if the browser that threatens to swallow the open Web whole is now Google Chrome instead of Netscape Navigator.

All told, the period from 1993 to 1996 was the very definition of a formative one. And yet, as we’ve seen, the Web — this enormous tree of possibility that seemed to so many to sprout fully formed out of nothing — had roots stretching back centuries. If we have learned anything over the course of the last eleven articles, it has hopefully been that no technology lives in a vacuum. The World Wide Web is nothing more nor less than the latest realization of a dream of instantaneous worldwide communication that coursed through the verse of Aeschylus, that passed through Claude Chappe and Samuel Morse and Cyrus Field and Alexander Graham Bell among so many others. Tellingly, almost all of those people who accessed the Web from their homes during the 1990s did so by dialing into it, using modems attached to ordinary telephone lines — a validation not only of Claude Shannon’s truism that information is information but of all of the efforts that led to such a flexible and sophisticated telephone system in the first place. Like every great invention since at least the end of prehistory, the World Wide Web stands on the shoulders of those which came before it.

Was it all worth it? Did all the bright sparks we’ve met in these articles really succeed in, to borrow one of the more odious clichés to come out of Silicon Valley jargon, “making the world a better place?” Clichés aside, I think it was, and I think they did. For all that the telegraph, the telephone, the Internet, and the World Wide Web have plainly not succeeded in creating the worldwide utopia that was sometimes promised by their most committed evangelists, I think that communication among people and nations is always preferable to the lack of same.

And with that said, it is now time to end this extended detour into the distant past — to end it here, with J.C.R. Licklider’s dream of an Intergalactic Computer Network a reality, and right on the schedule he proposed. But of course what I’ve written in this article isn’t really an end; it’s barely the beginning of what the Web came to mean to the world. As we step back into the flow of things and return to talking about digital culture and interactive entertainment on a more granular, year-by-year basis, the Web will remain an inescapable presence for us, being the place where virtually all digital culture lived after 1995 or so. I look forward to seeing it continue to evolve in real time, and to grappling alongside all of you with the countless Big Questions it will continue to pose for us.

(Sources: the books A Brief History of the Future: The Origins of the Internet by John Naughton, Communication Networks: A Concise Introduction by Jean Walrand and Shyam Parekh, Weaving the Web by Tim Berners-Lee, and Architects of the Web by Robert H. Reid. Online sources include the Pew Research Center’s “World Wide Web Timeline” and Forbes‘s up-to-the-minute billionaires scoreboard.)

 


A Web Around the World, Part 10: A Web of Associations

While wide-area computer networking, packet switching, and the Internet were coming of age, all of the individual computers on the wire were becoming exponentially faster, exponentially more capacious internally, and exponentially smaller externally. The pace of their evolution was unprecedented in the history of technology; had automobiles been improved at a similar rate, the Ford Model T would have gone supersonic within ten years of its introduction. We should take a moment now to find out why and how such a torrid pace was maintained.

As Claude Shannon and others realized before World War II, a digital computer in the abstract is an elaborate exercise in boolean logic, a dynamic matrix of on-off switches — or, if you like, of ones and zeroes. The more of these switches a computer has, the more it can be and do. The first Turing-complete digital computers, such as ENIAC and Whirlwind, implemented their logical switches using vacuum tubes, a venerable technology inherited from telephony. Each vacuum tube was about as big as an incandescent light bulb, consumed a similar amount of power, and tended to burn out almost as frequently. These factors made the computers which employed vacuum tubes massive edifices that required as much power as the typical city block, even as they struggled to maintain an uptime of more than 50 percent — and all for the tiniest sliver of one percent of the overall throughput of the smartphones we carry in our pockets today. Computers of this generation were so huge, expensive, and maintenance-heavy in relation to what they could actually be used to accomplish that they were largely limited to government-funded research institutions and military applications.

Computing’s first dramatic leap forward in terms of its basic technological underpinnings also came courtesy of telephony. More specifically, it came in the form of the transistor, a technology which had been invented at Bell Labs in December of 1947 with the aim of improving telephone switching circuits. A transistor could function as a logical switch just as a vacuum tube could, but it was a minute fraction of the size, consumed vastly less power, and was infinitely more reliable. The computers which IBM built for the SAGE project during the 1950s straddled this technological divide, employing a mixture of vacuum tubes and transistors. But by 1960, the computer industry had fully and permanently embraced the transistor. While still huge and unwieldy by modern standards, computers of this era were practical and cost-effective for a much broader range of applications than their predecessors had been; corporate computing started in earnest in the transistor era.

Nevertheless, wiring together tens of thousands of discrete transistors remained a daunting task for manufacturers, and the most high-powered computers still tended to fill large rooms if not entire building floors. Thankfully, a better way was in the offing. Already in 1958, a Texas Instruments engineer named Jack Kilby had come up with the idea of the integrated circuit: a collection of miniaturized transistors and other electrical components embedded in a silicon wafer, the whole being suitable for stamping out quickly in great quantities by automated machinery. Kilby invented, in other words, the soon-to-be ubiquitous computer chip, which could be wired together with its mates to produce computers that were not only smaller but easier and cheaper to manufacture than those that had come before. By the mid-1960s, the industry was already in the midst of the transition from discrete transistors to integrated circuits, producing some machines that were no larger than a refrigerator; among these was the Honeywell 516, the computer which was turned into the world’s first network router.

As chip-fabrication systems improved, designers were able to miniaturize the circuitry on the wafers more and more, allowing ever more computing horsepower to be packed into a given amount of physical space. An engineer named Gordon Moore proposed the principle that has become known as Moore’s Law: he calculated that the number of transistors which can be stamped into a chip of a given size doubles every second year. (When he first stated his law in 1965, Moore actually proposed a doubling every single year, but he revised his calculations in 1975.) In July of 1968, Moore and a colleague named Robert Noyce formed the chip maker known as Intel to make the most of Moore’s Law. The company has remained on the cutting edge of chip fabrication to this day.
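
Expressed as a formula (my paraphrase of the revised 1975 version, not Moore’s own notation), the law says that a chip which holds $N_0$ transistors today should hold roughly

$$N(t) \approx N_0 \cdot 2^{\,t/2}$$

transistors $t$ years from now. A design that starts at 10,000 transistors would thus be expected to reach about 320,000 after a decade, five doublings later.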

The next step was perhaps inevitable, but it nevertheless occurred almost by accident. In 1971, an Intel engineer named Federico Faggin put all of the circuits making up a computer’s arithmetic, logic, and control units — the central “brain” of a computer — onto a single chip. And so the microprocessor was born. No one involved with the project at the time anticipated that the Intel 4004 central-processing unit would open the door to a new generation of general-purpose “microcomputers” that were small enough to sit on desktops and cheap enough to be purchased by ordinary households. Faggin and his colleagues rather saw the 4004 as a fairly modest, incremental advancement of the state of the art, which would be deployed strictly to assist bigger computers by serving as the brains of disk controllers and other single-purpose peripherals. Before we rush to judge them too harshly for their lack of vision, we should remember that they are far from the only inventors in history who have failed to grasp the real importance of their creations.

At any rate, it was left to independent tinkerers who had been dreaming of owning a computer of their own for years, and who now saw in the microprocessor the opportunity to do just that, to invent the personal computer as we know it. The January 1975 issue of Popular Electronics sports one of the most famous magazine covers in the history of American technology: it announces the $439 Altair 8800, from a tiny Albuquerque, New Mexico-based company known as MITS. The Altair was nothing less than a complete put-it-together-yourself microcomputer kit, built around the Intel 8080 microprocessor, a successor model to the 4004.

The magazine cover that launched a technological revolution.

The next milestone came in 1977, when three separate companies announced three separate pre-assembled, plug-em-in-and-go personal computers: the Apple II, the Radio Shack TRS-80, and the Commodore PET. In terms of raw computing power, these machines were a joke compared to the latest institutional hardware. Nonetheless, they were real, Turing-complete computers that many people could afford to buy and proceed to tinker with to their heart’s content right in their own homes. They truly were personal computers: their buyers didn’t have to share them with anyone. It is difficult to fully express today just how extraordinary an idea this was in 1977.

This very website’s early years were dedicated to exploring some of the many things such people got up to with their new dream machines, so I won’t belabor the subject here. Suffice to say that those first personal computers were, although of limited practical utility, endlessly fascinating engines of creativity and discovery for those willing and able to engage with them on their own terms. People wrote programs on them, drew pictures and composed music, and of course played games, just as their counterparts on the bigger machines had been doing for quite some time. And then, too, some of them went online.

The first microcomputer modems hit the market the same year as the trinity of 1977. They operated on the same principles as the modems developed for the SAGE project a quarter-century before — albeit even more slowly. Hobbyists could thus begin experimenting with connecting their otherwise discrete microcomputers together, at least for the duration of a phone call.

But some entrepreneurs had grander ambitions. In July of 1979, not one but two subscription-based online services, known as CompuServe and The Source, were announced almost simultaneously. Soon anyone with a computer, a modem, and the requisite disposable income could dial them up to socialize with others, entertain themselves, and access a growing range of useful information.

Again, I’ve written about this subject in some detail before, so I won’t do so at length here. I do want to point out, however, that many of J.C.R. Licklider’s fondest predictions for the computer networks of the future first became a reality on the dozen or so of these commercial online services that managed to attract significant numbers of subscribers over the years. It was here, even more so than on the early Internet proper, that his prognostications about communities based on mutual interest rather than geographical proximity proved their prescience. Online chatting, online dating, online gaming, online travel reservations, and online shopping first took hold here, first became a fact of life for people sitting in their living rooms. People who seldom or never met one another face to face or even heard one another’s voices formed relationships that felt as real and as present in their day-to-day lives as any others — a new phenomenon in the history of social interaction. At their peak circa 1995, the commercial online services had more than 6.5 million subscribers in all.

Yet these services failed to live up to the entirety of Licklider’s old dream of an Intergalactic Computer Network. They were communities, yes, but not quite networks in the sense of the Internet. Each of them lived on a single big mainframe, or at most a cluster of them, in a single data center, which you dialed into using your microcomputer. Once online, you could interact in real time with the hundreds or thousands of others who might have dialed in at the same time, but you couldn’t go outside the walled garden of the service to which you’d chosen to subscribe. That is to say, if you’d chosen to sign up with CompuServe, you couldn’t talk to someone who had chosen The Source. And whereas the Internet was anarchic by design, the commercial online services were steered by the iron hands of the companies who had set them up. Although individual subscribers could and often did contribute content and in some ways set the tone of the services they used, they did so always at the sufferance of their corporate overlords.

Through much of the fifteen years or so that the commercial services reigned supreme, many or most microcomputer owners failed to even realize that an alternative called the Internet existed. Which is not to say that the Internet was without its own form of social life. Its more casual side centered on an online institution known as Usenet, which had arrived on the scene in late 1979, almost simultaneously with the first commercial services.

At bottom, Usenet was (and is) a set of protocols for sharing public messages, just as email served that purpose for private ones. What set it apart from the bustling public forums on services like CompuServe was its determinedly non-centralized nature. Usenet as a whole was a network of many servers, each storing a local copy of its many “newsgroups,” or forums for discussions on particular topics. Users could read and post messages using any of the servers, either by sitting in front of the server’s own keyboard and monitor or, more commonly, through some form of remote connection. When a user posted a new message to a server, that server sent it on to several other servers, which were then expected to pass it along further, until the message had propagated through the whole network of Usenet servers. The system’s asynchronous nature could distort conversations; messages reached different servers at different times, which meant you could all too easily find yourself replying to a post that had already been retracted, or making a point someone else had already made before you. But on the other hand, Usenet was almost impossible to break completely — just like the Internet itself.
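
As a rough illustration of that flooding scheme, here is a minimal sketch in present-day Python. It models only the propagation logic, not the real NNTP or UUCP mechanics: each server remembers the message IDs it has already seen and passes anything new along to its peers.

```python
# A toy model of Usenet-style flood propagation: every server relays each new
# article to its peers, and duplicates are dropped by message ID, so an article
# eventually reaches the whole network no matter where it was first posted.
class Server:
    def __init__(self, name: str):
        self.name = name
        self.peers: list["Server"] = []
        self.articles: dict[str, str] = {}   # message ID -> article body

    def receive(self, message_id: str, body: str) -> None:
        if message_id in self.articles:      # already seen: stop the flood here
            return
        self.articles[message_id] = body
        for peer in self.peers:              # pass it along (instantly in this toy,
            peer.receive(message_id, body)   # on each server's own schedule in reality)

def connect(a: Server, b: Server) -> None:
    a.peers.append(b)
    b.peers.append(a)

# Three servers in a line: an article posted to A still reaches C by way of B.
a, b, c = Server("A"), Server("B"), Server("C")
connect(a, b)
connect(b, c)
a.receive("<4501@A>", "Has anyone written an HTTP server for VMS?")
print("<4501@A>" in c.articles)              # -> True
```

The duplicate check is what keeps an article from circulating forever; the delay at each hop is what produced the out-of-order conversations described above.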

Strictly speaking, Usenet did not depend on the Internet for its existence. As far as it was concerned, its servers could pass messages among themselves in whatever way they found most convenient. In its first few years, this sometimes meant that they dialed one another up directly over ordinary phone lines and talked via modem. As it matured into a mainstay of hacker culture, however, Usenet gradually became almost inseparable from the Internet itself in the minds of most of its users.

From the three servers that marked its inauguration in 1979, Usenet expanded to 11,000 by 1988. The discussions that took place there didn’t quite encompass the whole of the human experience equally; the demographics of the hacker user base meant that computer programming tended to get more play than knitting, Pink Floyd more play than Madonna, and science-fiction novels more play than romances. Still, the newsgroups were nothing if not energetic and free-wheeling. For better or for worse, they regularly went places the commercial online services didn’t dare allow. For example, Usenet became one of the original bastions of online pornography, first in the form of fevered textual fantasies, then in the somehow even more quaint form of “ASCII art,” and finally, once enough computers had the graphics capabilities to make it worthwhile, as actual digitized photographs. In light of this, some folks expressed relief that it was downright difficult to get access to Usenet and the rest of the Internet if one didn’t teach or attend classes at a university, or work at a tech company or government agency.

The perception of the Internet as a lawless jungle, more exciting but also more dangerous than the neatly trimmed gardens of the commercial online services, was cemented by the Morris Worm, which was featured on the front page of the New York Times for four straight days in November of 1988. Created by a 23-year-old Cornell University graduate student named Robert Tappan Morris, it served as many people’s ironic first notice that a network called the Internet existed at all. The exploit, which its creator later insisted had been meant only as a harmless prank, spread by attaching itself to some of the core networking applications used by Unix, a powerful and flexible operating system that was by far the most popular among Internet-connected computers at the time. The Morris Worm came as close as anything ever has to bringing the entire Internet down when its exponential rate of growth effectively turned it into a network-wide denial-of-service attack — again, accidentally, if its creator is to be believed. (Morris himself came very close to a prison sentence, but escaped with three years of probation, a $10,000 fine, and 400 hours of community service, after which he went on to a lucrative career in the tech sector at the height of the dot-com boom.)

Attitudes toward the Internet in the less rarefied wings of the computing press had barely begun to change even by the beginning of the 1990s. An article from the issue of InfoWorld dated February 4, 1991, encapsulates the contemporary perceptions among everyday personal-computer owners of this “vast collection of networks” which is “a mystery even to people who call it home.”

It is a highway of ideas, a collective brain for the nation’s scientists, and perhaps the world’s most important computer bulletin board. Connecting all the great research institutions, a large network known collectively as the Internet is where scientists, researchers, and thousands of ordinary computer users get their daily fix of news and gossip.

But it is the same network whose traffic is occasionally dominated by X-rated graphics files, UFO sighting reports, and other “recreational” topics. It is the network where renegade “worm” programs and hackers occasionally make the news.

As with all communities, this electronic village has both high- and low-brow neighborhoods, and residents of one sometimes live in the other.

What most people call the Internet is really a jumble of networks rooted in academic and research institutions. Together these networks connect over 40 countries, providing electronic mail, file transfer, remote login, software archives, and news to users on 2000 networks.

Think of a place where serious science comes from, whether it’s MIT, the national laboratories, a university, or [a] private enterprise, [and] chances are you’ll find an Internet address. Add [together] all the major sites, and you have the seeds of what detractors sometimes call “Anarchy Net.”

Many people find the Internet to be shrouded in a cloud of mystery, perhaps even intrigue.

With addresses composed of what look like contractions surrounded by ‘!’s, ‘@’s, and ‘.’s, even Internet electronic mail seems to be from another world. Never mind that these “bangs,” “at signs,” and “dots” create an addressing system valid worldwide; simply getting an Internet address can be difficult if you don’t know whom to ask. Unlike CompuServe or one of the other email services, there isn’t a single point of contact. There are as many ways to get “on” the Internet as there are nodes.

At the same time, this complexity serves to keep “outsiders” off the network, effectively limiting access to the world’s technological elite.

The author of this article would doubtless have been shocked to learn that within just four or five years this confusing, seemingly willfully off-putting network of scientists and computer nerds would become the hottest buzzword in media, and that absolutely everybody, from your grandmother to your kids’ grade-school teacher, would be rushing to get onto this Internet thing before they were left behind, even as stalwart rocks of the online ecosystem of 1991 like CompuServe would already be well on their way to becoming relics of a bygone age.

The Internet had begun in the United States, and the locus of the early mainstream excitement over it would soon return there. In between, though, the stroke of inventive genius that would lead to said excitement would happen in the Old World confines of Switzerland.


Tim Berners-Lee

In many respects, he looks like an Englishman from central casting — quiet, courteous, reserved. Ask him about his family life and you hit a polite but exceedingly blank wall. Ask him about the Web, however, and he is suddenly transformed into an Italian — words tumble out nineteen to the dozen and he gesticulates like mad. There’s a deep, deep passion here. And why not? It is, after all, his baby.

— John Naughton, writing about Tim Berners-Lee

The seeds of the Conseil Européen pour la Recherche Nucléaire — better known in the Anglosphere as simply CERN — were planted amidst the devastation of post-World War II Europe by the great French quantum physicist Louis de Broglie. Possessing an almost religious faith in pure science as a force for good in the world, he proposed a new, pan-European foundation dedicated to exploring the subatomic realm. “At a time when the talk is of uniting the peoples of Europe,” he said, “[my] attention has turned to the question of developing this new international unit, a laboratory or institution where it would be possible to carry out scientific work above and beyond the framework of the various nations taking part. What each European nation is unable to do alone, a united Europe can do, and, I have no doubt, would do brilliantly.” After years of dedicated lobbying on de Broglie’s part, CERN officially came to be in 1954, with its base of operations in Geneva, Switzerland, one of the places where Europeans have traditionally come together for all manner of purposes.

The general technological trend at CERN over the following decades was the polar opposite of what was happening in computing: as scientists attempted to peer deeper and deeper into the subatomic realm, the machines they required kept getting bigger and bigger. Between 1983 and 1989, CERN built the Large Electron-Positron Collider in Geneva. With a circumference of almost seventeen miles, it was the largest single machine ever built in the history of the world. Managing projects of such magnitude, some of them employing hundreds of scientists and thousands of support staff, required a substantial computing infrastructure, along with many programmers and systems architects to run it. Among this group was a quiet Briton named Tim Berners-Lee.

Berners-Lee’s credentials were perfect for his role. He had earned a bachelor’s degree in physics from Oxford in 1976, only to find that pure science didn’t satisfy his urge to create practical things that real people could make use of. As it happened, both of his parents were computer scientists of considerable note; they had both worked on the University of Manchester’s Mark I computer, the world’s very first stored-program von Neumann machine. So, it was natural for their son to follow in their footsteps, to make a career for himself in the burgeoning new field of microcomputing. Said career took him to CERN for a six-month contract in 1980, then back to Geneva on a more permanent basis in 1984. Because of his background in physics, Berners-Lee could understand the needs of the scientists he served better than many of his colleagues; his talent for devising workable solutions to their problems turned him into something of a star at CERN. Among other projects, he labored long and hard to devise a way of making the thousands upon thousands of pages of documentation that were generated at CERN each year accessible, manageable, and navigable.

But, for all that Berners-Lee was being paid to create an internal documentation system for CERN, it’s clear that he began thinking along bigger lines fairly quickly. The same problems of navigation and discoverability that dogged his colleagues at CERN were massively present on the Internet as a whole. Information was hidden there in out-of-the-way repositories that could only be accessed using command-line-driven software with obscure command sets — if, that is, you knew that it existed at all.

His idea of a better way came courtesy of hypertext theory: a non-linear approach to reading texts and navigating an information space, built around associative links embedded within and between texts. First proposed by Vannevar Bush, the World War II-era MIT giant whom we briefly met in an earlier article in this series, hypertext theory had later proved a superb fit with a mouse-driven graphical computer interface which had been pioneered at Xerox PARC during the 1970s under the astute management of our old friend Robert Taylor. The PARC approach to user interfaces reached the consumer market in a prominent way for the first time in 1984 as the defining feature of the Apple Macintosh. And the Mac in turn went on to become the early hotbed of hypertext experimentation on consumer-grade personal computers, thanks to Apple’s own HyperCard authoring system and the HyperCard-driven laser discs and CD-ROMs that soon emerged from companies like Voyager.

The user interfaces found in HyperCard applications were surprisingly similar to those found in the web browsers of today, but they were limited to the curated, static content found on a single floppy disk or CD-ROM. “They’ve already done the difficult bit!” Berners-Lee remembers thinking. Now someone just needed to put hypertext on the Internet, to allow files on one computer to link to files on another, with anyone and everyone able to create such links. He saw how “a single hypertext link could lead to an enormous, unbounded world.” Yet no one else seemed to see this. So, he decided at last to do it himself. In a fit of self-deprecating mock-grandiosity, not at all dissimilar to J.C.R. Licklider’s call for an “Intergalactic Computer Network,” he named his proposed system the “World Wide Web.” He had no idea how perfect the name would prove.

He sat down to create his World Wide Web in October of 1990, using a NeXT workstation computer, the flagship product of the company Steve Jobs had formed after getting booted out of Apple several years earlier. It was an expensive machine — far too expensive for the ordinary consumer market — but supremely elegant, combining the power of the hacker-favorite operating system Unix with the graphical user interface of the Macintosh.

The NeXT computer on which Tim Berners-Lee created the foundations of the World Wide Web. It then went on to become the world’s first web server.

Progress was swift. In less than three months, Berners-Lee coded the world’s first web server and browser, which also entailed developing the Hypertext Transfer Protocol (HTTP) they used to communicate with one another and the Hypertext Markup Language (HTML) for embedding associative links into documents. These were the foundational technologies of the Web, which still remain essential to the networked digital world we know today.
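
Both inventions remain simple enough at their core to show in a few lines. The sketch below is written in present-day Python rather than anything Berners-Lee wrote: an HTML fragment containing a single associative link, and a bare-bones HTTP GET performed by hand over a TCP socket, roughly what an early browser did every time a user clicked a link. (The earliest HTTP was even simpler than the HTTP/1.0 shown here, and the address is illustrative period flavor rather than a guaranteed-live link.)

```python
# The two halves of the early Web in miniature: an HTML document containing one
# hypertext link, and the plain-text HTTP request used to fetch what it points to.
import socket

PAGE = """<html>
<body>
<p>Curious about this new system? Read about
<a href="http://info.cern.ch/hypertext/WWW/TheProject.html">the WorldWideWeb project</a>.</p>
</body>
</html>"""

def fetch(host: str, path: str) -> str:
    """Perform a bare HTTP GET by hand over a TCP socket and return the body."""
    with socket.create_connection((host, 80), timeout=10) as sock:
        sock.sendall(f"GET {path} HTTP/1.0\r\nHost: {host}\r\n\r\n".encode())
        response = b""
        while chunk := sock.recv(4096):
            response += chunk
    _header_block, _, body = response.partition(b"\r\n\r\n")
    return body.decode("utf-8", "replace")

# print(fetch("info.cern.ch", "/hypertext/WWW/TheProject.html"))
```

That really is most of the trick: a markup convention for saying “this phrase points to that address,” and a one-shot request protocol for fetching whatever the address names.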

The first page to go up on the nascent World Wide Web, which belied its name at this point by being available only inside CERN, was a list of phone numbers of the people who worked there. Because clicking through its hypertext links was so much easier than entering commands into the database application CERN had previously used for the purpose, the humble phone book got Berners-Lee’s browser installed on dozens of NeXT computers. But the really big step came in August of 1991, when, having debugged and refined his system as thoroughly as he was able by using his CERN colleagues as guinea pigs, he posted to Usenet his web browser, his web server, and documentation on how to use HTML to create web documents. The response was not immediately overwhelming, but it was gratifying in a modest way. Berners-Lee:

People who saw the Web and realised the sense of unbound opportunity began installing the server and posting information. Then they added links to related sites that they found were complementary or simply interesting. The Web began to be picked up by people around the world. The messages from system managers began to stream in: “Hey, I thought you’d be interested. I just put up a Web server.”

Tim Berners-Lee’s original web browser, which he named Nexus in honor of its host platform. The NeXT computer actually had quite impressive graphics capabilities, but you’d never know it by looking at Nexus.

In December of 1991, Berners-Lee begged for and was reluctantly granted a chance to demonstrate the World Wide Web at that year’s official Hypertext conference in San Antonio, Texas. He arrived with high hopes, only to be accorded a cool reception. The hypertext movement came complete with more than its fair share of stodgy theorists with rigid ideas about how hypertext ought to work — ideas which tended to have more to do with the closed, curated experiences of HyperCard than the anarchic open Internet. Normally modest almost to a fault, the Berners-Lee of today does allow himself to savor the fact that “at the same conference two years later, every project on display would have something to do with the Web.”

But the biggest factor holding the Web back at this point wasn’t the resistance of the academics; it was rather that the Web remained bound so tightly to the NeXT machines, which had a total user base of no more than a few tens of thousands, almost all of them at universities and research institutions like CERN. Although some browsers had been created for other, more popular computers, they didn’t sport the effortless point-and-click interface of Berners-Lee’s original; instead they presented their links as numbered footnotes, and the user had to type in a number to follow one. Thus Berners-Lee and the fellow travelers who were starting to coalesce around him made it their priority in 1992 to encourage the development of more point-and-click web browsers. One for the X Window System, the graphical-interface layer which had been developed for the previously text-only Unix, appeared in April. Even more importantly, a Macintosh browser arrived just a month later; this marked the first time that the World Wide Web could be explored in the way Berners-Lee had envisioned, on a computer that the proverbial ordinary person might own and use.

Amidst the organization directories and technical papers which made up most of the early Web — many of the latter inevitably dealing with the vagaries of HTTP and HTML themselves — Berners-Lee remembers one site that stood out for being something else entirely, for being a harbinger of the more expansive, humanist vision he had had for his World Wide Web almost from the start. It was a site about Rome during the Renaissance, built up from a traveling museum exhibition which had recently visited the American Library of Congress. Berners-Lee:

On my first visit, I wandered to a music room. There was an explanation of the events that caused the composer Carpentras to present a decorated manuscript of his Lamentations of Jeremiah to Pope Clement VII. I clicked, and was glad I had a 21-inch colour screen: suddenly it was filled with a beautifully illustrated score, which I could gaze at more easily and in more detail than I could have done had I gone to the original exhibit at the Library of Congress.

If we could visit this site today, however, we would doubtless be struck by how weirdly textual it was for being a celebration of the Renaissance, one of the most excitingly visual ages in all of history. The reality is that it could hardly have been otherwise; the pages displayed by Berners-Lee’s NeXT browser and all of the others could not mix text with images at all. The best they could do was to present links to images, which, when clicked, would lead to a picture being downloaded and displayed in a separate window, as Berners-Lee describes above.

But already another man on the other side of the ocean was working on changing that — working, one might say, on the last pieces necessary to make a World Wide Web that we can immediately recognize today.


Marc Andreessen barefoot on the cover of Time magazine, creating the archetype of the dot-com entrepreneur/visionary/rock star.

Tim Berners-Lee was the last of the old guard of Internet pioneers. Steeped in an ethic of non-profit research for the abstract good of the human race, he never attempted to commercialize his work. Indeed, he has seemed in the decades since his masterstroke almost to willfully shun the money and fame that some might say are rightfully his for putting the finishing touch on the greatest revolution in communications since the printing press, one which has bound the world together in a way that Samuel Morse and Alexander Graham Bell could never have dreamed of.

Marc Andreessen, by contrast, was the first of a new breed of business entrepreneurs who have dominated our discussions of the Internet from the mid-1990s until the present day. Yes, one can trace the cult of the tech-sector disruptor, “making the world a better place” and “moving fast and breaking things,” back to the dapper young Steve Jobs who introduced the Apple Macintosh to the world in January of 1984. But it was Andreessen and the flood of similar young men that followed him during the 1990s who well and truly embedded the archetype in our culture.

Before any of that, though, he was just a kid who decided to make a web browser of his own.

Andreessen first discovered the Web not long after Berners-Lee first made his tools and protocols publicly available. At the time, he was a twenty-year-old student at the University of Illinois at Urbana-Champaign who held a job on the side at the National Center for Supercomputing Applications, a research institute with close ties to the university. The name sounded very impressive, but he found the job itself to be dull as ditch water. His dissatisfaction came down to the same old split between the “giant brain” model of computing of folks like Marvin Minsky and the more humanist vision espoused in earlier years by people like J.C.R. Licklider. The NCSA was in pursuit of the former, but Andreessen was a firm adherent of the latter.

Bored out of his mind writing menial code for esoteric projects he couldn’t care less about, Andreessen spent a lot of time looking for more interesting things to do on the Internet. And so he stumbled across the fledgling World Wide Web. It didn’t look like much — just a screen full of text — but he immediately grasped its potential.

In fact, he judged, the Web’s not looking like much was a big part of its problem. Casting about for a way to snazz it up, he had the stroke of inspiration that would make him a multi-millionaire within three years. He decided to add a new tag to Berners-Lee’s HTML specification: “<img>,” for “image.” By using it, one would be able to show pictures inline with text. It could make the Web an entirely different sort of place, a wonderland of colorful visuals to go along with its textual content.
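
To see concretely what that one tag changed, compare the two fragments of markup in the short sketch below. This is illustrative, modern-flavored HTML rather than the exact dialect of 1993, and the image file name is invented. Before Mosaic, a picture could only hide behind a link that opened it separately; afterward it could sit in the flow of the text.

```python
# Illustrative only: the same passage marked up without and with the
# new tag. "painting.gif" is a made-up placeholder file name.

# Before Mosaic: the picture lives behind an ordinary hypertext link,
# and clicking it opens the image separately from the page.
link_only = """<p>A view of the piazza.
<a href="painting.gif">Click here to download the picture.</a></p>"""

# After Mosaic: the <img> tag pulls the picture into the page itself,
# inline with the surrounding text.
inline_image = """<p>A view of the piazza:</p>
<img src="painting.gif" alt="A painting of the piazza">
<p>...and the text simply continues around it.</p>"""

print(link_only)
print(inline_image)
```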

As conceptual leaps go, this one really wasn’t that audacious. The biggest buzzword in consumer computing in recent years — bigger than hypertext — had been “multimedia,” a catch-all term describing exactly this sort of digital mixing of content types, something which was now becoming possible thanks to the ever-improving audiovisual capabilities of personal computers since those primitive early days of the trinity of 1977. Hypertext and multimedia had actually been sharing many of the same digs for quite some time. The HyperCard authoring system, for example, boasted capabilities much like those which Andreessen now wished to add to HTML, and the Voyager CD-ROMs already existed as compelling case studies in the potential of interactive multimedia hypertext in a non-networked context.

Still, someone had to be the first to put two and two together, and that someone was Marc Andreessen. An only moderately accomplished programmer himself, he convinced a much better one, another NCSA employee named Eric Bina, to help him create his new browser. The pair fell into roles vaguely reminiscent of those of Steve Jobs and Steve Wozniak during the early days of Apple Computer: Andreessen set the agenda and came up with the big ideas — many of them derived from tireless trawling of the Usenet newsgroups to find out what people didn’t like about the current browsers — and Bina turned his ideas into reality. Andreessen’s relentless focus on the end-user experience led to other important innovations beyond inline images, such as the “forward,” “back,” and “refresh” buttons that remain so ubiquitous in the browsers of today. The higher-ups at NCSA eventually agreed to allow Andreessen to brand his browser as a quasi-official product of their institute; on an Internet still dominated by academics, such an imprimatur was sure to be a useful aid. In January of 1993, the browser known as Mosaic — the name seemed an apt metaphor for the colorful multimedia pages it could display — went up on NCSA’s own servers. After that, “it spread like a virus,” in the words of Andreessen himself.

The slick new browser and its almost aggressively ambitious young inventor soon came to the attention of Tim Berners-Lee. He calls Andreessen “a total contrast to any of the other [browser] developers. Marc was not so much interested in just making the program work as in having his browser used by as many people as possible.” But, lest he sound uncharitable toward his populist counterpart, he hastens to add that “that was, of course, what the Web needed.” Berners-Lee made the Web; the garrulous Andreessen brought it to the masses in a way the self-effacing Briton could arguably never have managed on his own.

About six months after Mosaic hit the Internet, Tim Berners-Lee came to visit its inventor. Their meeting brought with it the first palpable signs of the tension that would surround the World Wide Web and the Internet as a whole almost from that point forward. It was the tension between non-profit idealism and the urge to commercialize, to brand, and finally to control. Even before the meeting, Berners-Lee had begun to feel disturbed by the press coverage Mosaic was receiving, helped along by the public-relations arm of NCSA itself: “The focus was on Mosaic, as if it were the Web. There was little mention of other browsers, or even the rest of the world’s effort to create servers. The media, which didn’t take the time to investigate deeper, started to portray Mosaic as if it were equivalent to the Web.” Now, at the meeting, he was taken aback by an atmosphere that smacked more of a business negotiation than a friendly intellectual exchange, even as he wasn’t sure what exactly was being negotiated. “Marc gave the impression that he thought of this meeting as a poker game,” Berners-Lee remembers.

Andreessen’s recollections of the meeting are less nuanced. Berners-Lee, he claims, “bawled me out for adding images to the thing.” Andreessen:

Academics in computer science are so often out to solve these obscure research problems. The universities may force it upon them, but they aren’t always motivated to just do something that people want to use. And that’s definitely the sense that we always had of CERN. And I don’t want to mis-characterize them, but whenever we dealt with them, they were much more interested in the Web from a research point of view rather than a practical point of view. And so it was no big deal to them to do a NeXT browser, even though nobody would ever use it. The concept of adding an image just for the sake of adding an image didn’t make sense [to them], whereas to us, it made sense because, let’s face it, they made pages look cool.

The first version of Mosaic ran only on the X Window System, but, as the above would indicate, Andreessen had never intended for that to be the case for long. He recruited more programmers to write ports for the Macintosh and, most importantly of all, for Microsoft Windows, whose share of consumer computing in the United States was crossing the threshold of 90 percent. When the Windows version of Mosaic went online in September of 1993, it motivated hundreds of thousands of computer owners to engage with the Internet for the first time; to them, the Internet effectively was Mosaic, just as Berners-Lee had feared would come to pass.

The Mosaic browser. It may not look like much today, but its ability to display inline images was a game-changer.

At this time, Microsoft Windows didn’t even include a TCP/IP stack, the software layer that could make a machine into a full-fledged denizen of the Internet, with its own IP address and all the trimmings. In the brief span of time before Microsoft remedied that situation, a doughty Australian entrepreneur named Peter Tattam filled the gap with an add-on TCP/IP stack, which he distributed as shareware. Meanwhile other entrepreneurs scrambled to set up Internet service providers, offering the unwashed masses an on-ramp to the Web — no university enrollment required! — and the shelves of computer stores filled up with all-in-one Internet kits designed to make the whole process as painless as possible.
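
For readers wondering what, concretely, a TCP/IP stack provides: it is the layer that lets any program look up a host, open a connection to a numbered port, and exchange raw bytes, and everything a browser does rides on those primitives. Here is a minimal client’s-eye sketch, written in modern Python rather than the low-level C interfaces such add-on stacks actually exposed, using the reserved example.com domain as a stand-in for a real server.

```python
# Illustrative sketch of what a TCP/IP stack gives an application:
# look up a host, open a TCP connection to a port, send bytes, and
# receive bytes. ("example.com" is a reserved placeholder domain.)
import socket

HOST = "example.com"

with socket.create_connection((HOST, 80)) as conn:
    # The application protocol (here a bare-bones HTTP request) is
    # just text handed to the stack for delivery.
    conn.sendall(b"GET / HTTP/1.0\r\nHost: example.com\r\n\r\n")
    reply = b""
    while chunk := conn.recv(4096):
        reply += chunk

# Print the start of the reply: the HTTP status line and headers,
# followed by whatever HTML the server sent back.
print(reply.decode("utf-8", errors="replace")[:500])
```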

The unabashed elitists who had been on the Internet for years scorned the newcomers, but there was nothing they could do to stop the invasion, which stormed their ivory towers with overwhelming force. Between December of 1993 and December of 1994, the total amount of Web traffic jumped by a factor of eight. By the latter date, there were more than 10,000 separate sites on the Web, thanks to people all over the world who had rolled up their sleeves and learned HTML so that they could get their own idiosyncratic messages out to anyone who cared to read them. If some (most?) of the sites they created were thoroughly frivolous, well, that was part of the charm of the thing. The World Wide Web was the greatest leveler in the history of media; it enabled anyone to become an author and a publisher rolled into one, no matter how rich or poor, talented or talent-less. The traditional gatekeepers of mass media have been trying to figure out how to respond ever since.

Marc Andreessen himself abandoned the browser that did so much to make all this happen before it celebrated its first birthday. He graduated from university in December of 1993, and, annoyed by the growing tendency of his bosses at NCSA to take credit for his creation, he decamped for — where else? — Silicon Valley. There he bumped into Jim Clark, a huge name in the Valley, who had founded Silicon Graphics twelve years earlier and turned it into the biggest name in digital special effects for the film industry. Feeling hamstrung by Silicon Graphics’s increasing bureaucracy as it settled into corporate middle age, Clark had recently left the company, leading to much speculation about what he would do next. The answer came on April 4, 1994, when he and Marc Andreessen founded Mosaic Communications in order to build a browser even better than the one the latter had built at NCSA. The dot-com boom had begun.

(Sources: the books A Brief History of the Future: The Origins of the Internet by John Naughton, From Gutenberg to the Internet: A Sourcebook on the History of Information Technology edited by Jeremy M. Norman, A History of Modern Computing (2nd ed.) by Paul E. Ceruzzi, Communication Networks: A Concise Introduction by Jean Walrand and Shyam Parekh, Weaving the Web by Tim Berners-Lee, How the Web was Born by James Gillies and Robert Cailliau, and Architects of the Web by Robert H. Reid. InfoWorld of August 24 1987, September 7 1987, April 25 1988, November 28 1988, January 9 1989, October 23 1989, and February 4 1991; Computer Gaming World of May 1993.)

Footnotes
1 When he first stated his law in 1965, Moore actually proposed a doubling every single year, but revised his calculations in 1975.
 

Tags: