
A Web Around the World, Part 10: A Web of Associations

While wide-area computer networking, packet switching, and the Internet were coming of age, all of the individual computers on the wire were becoming exponentially faster, exponentially more capacious internally, and exponentially smaller externally. The pace of their evolution was unprecedented in the history of technology; had automobiles been improved at a similar rate, the Ford Model T would have gone supersonic within ten years of its introduction. We should take a moment now to find out why and how such a torrid pace was maintained.

As Claude Shannon and others realized before World War II, a digital computer in the abstract is an elaborate exercise in boolean logic, a dynamic matrix of on-off switches — or, if you like, of ones and zeroes. The more of these switches a computer has, the more it can be and do. The first Turing-complete digital computers, such as ENIAC and Whirlwind, implemented their logical switches using vacuum tubes, a venerable technology inherited from telephony. Each vacuum tube was about as big as an incandescent light bulb, consumed a similar amount of power, and tended to burn out almost as frequently. These factors made the computers which employed vacuum tubes massive edifices that required as much power as the typical city block, even as they struggled to maintain an uptime of more than 50 percent — and all for the tiniest sliver of one percent of the overall throughput of the smartphones we carry in our pockets today. Computers of this generation were so huge, expensive, and maintenance-heavy in relation to what they could actually be used to accomplish that they were largely limited to government-funded research institutions and military applications.

Computing’s first dramatic leap forward in terms of its basic technological underpinnings also came courtesy of telephony. More specifically, it came in the form of the transistor, a technology which had been invented at Bell Labs in December of 1947 with the aim of improving telephone switching circuits. A transistor could function as a logical switch just as a vacuum tube could, but it was a minute fraction of the size, consumed vastly less power, and was infinitely more reliable. The computers which IBM built for the SAGE project during the 1950s straddled this technological divide, employing a mixture of vacuum tubes and transistors. But by 1960, the computer industry had fully and permanently embraced the transistor. While still huge and unwieldy by modern standards, computers of this era were practical and cost-effective for a much broader range of applications than their predecessors had been; corporate computing started in earnest in the transistor era.

Nevertheless, wiring together tens of thousands of discrete transistors remained a daunting task for manufacturers, and the most high-powered computers still tended to fill large rooms if not entire building floors. Thankfully, a better way was in the offing. Already in 1958, a Texas Instruments engineer named Jack Kilby had come up with the idea of the integrated circuit: a collection of miniaturized transistors and other electrical components embedded in a silicon wafer, the whole being suitable for stamping out quickly in great quantities by automated machinery. Kilby invented, in other words, the soon-to-be ubiquitous computer chip, which could be wired together with its mates to produce computers that were not only smaller but easier and cheaper to manufacture than those that had come before. By the mid-1960s, the industry was already in the midst of the transition from discrete transistors to integrated circuits, producing some machines that were no larger than a refrigerator; among these was the Honeywell 516, the computer which was turned into the world’s first network router.

As chip-fabrication systems improved, designers were able to miniaturize the circuitry on the wafers more and more, allowing ever more computing horsepower to be packed into a given amount of physical space. An engineer named Gordon Moore proposed the principle that has become known as Moore’s Law: he calculated that the number of transistors which can be stamped into a chip of a given size doubles every second year. (When he first stated his law in 1965, Moore actually proposed a doubling every single year, but revised his calculations in 1975.) In July of 1968, Moore and a colleague named Robert Noyce formed the chip maker known as Intel to make the most of Moore’s Law. The company has remained on the cutting edge of chip fabrication to this day.
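Moore’s Law is simple compound-growth arithmetic, which is exactly why its consequences were so dramatic. The sketch below — a rough illustration, not a precise model of any real product line — projects transistor budgets forward from the Intel 4004’s roughly 2,300 transistors of 1971, using the revised two-year doubling period:

```python
# Moore's Law as compound growth: transistor counts double every two years.
# Baseline (an approximation): the Intel 4004 of 1971 held ~2,300 transistors.

def projected_transistors(year, base_year=1971, base_count=2300):
    """Project a chip's transistor budget under the revised (1975)
    formulation of Moore's Law: one doubling every second year."""
    doublings = (year - base_year) / 2
    return base_count * 2 ** doublings

for year in (1971, 1981, 1991, 2001):
    print(year, round(projected_transistors(year)))
```

The counts leap from thousands to tens of millions within three decades — which is the right order of magnitude for what actually happened between the 4004 and the chips of the turn of the millennium.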

The next step was perhaps inevitable, but it nevertheless occurred almost by accident. In 1971, an Intel engineer named Federico Faggin put all of the circuits making up a computer’s arithmetic, logic, and control units — the central “brain” of a computer — onto a single chip. And so the microprocessor was born. No one involved with the project at the time anticipated that the Intel 4004 central-processing unit would open the door to a new generation of general-purpose “microcomputers” that were small enough to sit on desktops and cheap enough to be purchased by ordinary households. Faggin and his colleagues rather saw the 4004 as a fairly modest, incremental advancement of the state of the art, which would be deployed strictly to assist bigger computers by serving as the brains of disk controllers and other single-purpose peripherals. Before we rush to judge them too harshly for their lack of vision, we should remember that they are far from the only inventors in history who have failed to grasp the real importance of their creations.

At any rate, it was left to independent tinkerers who had been dreaming of owning a computer of their own for years, and who now saw in the microprocessor the opportunity to do just that, to invent the personal computer as we know it. The January 1975 issue of Popular Electronics sports one of the most famous magazine covers in the history of American technology: it announces the $439 Altair 8800, from a tiny Albuquerque, New Mexico-based company known as MITS. The Altair was nothing less than a complete put-it-together-yourself microcomputer kit, built around the Intel 8080 microprocessor, a successor model to the 4004.

The magazine cover that launched a technological revolution.

The next milestone came in 1977, when three separate companies announced three separate pre-assembled, plug-em-in-and-go personal computers: the Apple II, the Radio Shack TRS-80, and the Commodore PET. In terms of raw computing power, these machines were a joke compared to the latest institutional hardware. Nonetheless, they were real, Turing-complete computers that many people could afford to buy and proceed to tinker with to their heart’s content right in their own homes. They truly were personal computers: their buyers didn’t have to share them with anyone. It is difficult to fully express today just how extraordinary an idea this was in 1977.

This very website’s early years were dedicated to exploring some of the many things such people got up to with their new dream machines, so I won’t belabor the subject here. Suffice to say that those first personal computers were, although of limited practical utility, endlessly fascinating engines of creativity and discovery for those willing and able to engage with them on their own terms. People wrote programs on them, drew pictures and composed music, and of course played games, just as their counterparts on the bigger machines had been doing for quite some time. And then, too, some of them went online.

The first microcomputer modems hit the market the same year as the trinity of 1977. They operated on the same principles as the modems developed for the SAGE project a quarter-century before — albeit even more slowly. Hobbyists could thus begin experimenting with connecting their otherwise discrete microcomputers together, at least for the duration of a phone call.

But some entrepreneurs had grander ambitions. In July of 1979, not one but two subscription-based online services, known as CompuServe and The Source, were announced almost simultaneously. Soon anyone with a computer, a modem, and a valid credit card could dial them up to socialize with others, entertain themselves, and access a growing range of useful information.

Again, I’ve written about this subject in some detail before, so I won’t do so at length here. I do want to point out, however, that many of J.C.R. Licklider’s fondest predictions for the computer networks of the future first became a reality on the dozen or so of these commercial online services that managed to attract significant numbers of subscribers over the years. It was here, even more so than on the early Internet proper, that his prognostications about communities based on mutual interest rather than geographical proximity proved their prescience. Online chatting, online dating, online gaming, online travel reservations, and online shopping first took hold here, first became a fact of life for people sitting in their living rooms. People who seldom or never met one another face to face or even heard one another’s voices formed relationships that felt as real and as present in their day-to-day lives as any others — a new phenomenon in the history of social interaction. At their peak circa 1995, the commercial online services had more than 3.5 million subscribers in all.

Yet these services failed to live up to the entirety of Licklider’s old dream of an Intergalactic Computer Network. They were communities, yes, but not quite networks in the sense of the Internet. Each of them lived on a single big mainframe, or at most a cluster of them, in a single data center, which you dialed into using your microcomputer. Once online, you could interact in real time with the hundreds or thousands of others who might have dialed in at the same time, but you couldn’t go outside the walled garden of the service to which you’d chosen to subscribe. That is to say, if you’d chosen to sign up with CompuServe, you couldn’t talk to someone who had chosen The Source. And whereas the Internet was anarchic by design, the commercial online services were steered by the iron hands of the companies who had set them up. Although individual subscribers could and often did contribute content and in some ways set the tone of the services they used, they did so always at the sufferance of their corporate overlords.

Through much of the fifteen years or so that the commercial services reigned supreme, many or most microcomputer owners failed to even realize that an alternative called the Internet existed. Which is not to say that the Internet was without its own form of social life. Its more casual side centered on an online institution known as Usenet, which had arrived on the scene in late 1979, almost simultaneously with the first commercial services.

At bottom, Usenet was (and is) a set of protocols for sharing public messages, just as email served that purpose for private ones. What set it apart from the bustling public forums on services like CompuServe was its determinedly non-centralized nature. Usenet as a whole was a network of many servers, each storing a local copy of its many “newsgroups,” or forums for discussions on particular topics. Users could read and post messages using any of the servers, whether by sitting directly at that server’s own keyboard and monitor or, more commonly, through some form of remote connection. When a user posted a new message to a server, that server sent it on to several other servers, which were then expected to send it further, until the message had propagated through the whole network of Usenet servers. The system’s asynchronous nature could distort conversations; messages reached different servers at different times, which meant you could all too easily find yourself replying to a post that had already been retracted, or making a point someone else had already made before you. But on the other hand, Usenet was almost impossible to break completely — just like the Internet itself.
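The propagation scheme just described is a classic “flood fill.” The toy sketch below illustrates the core idea — forward each article to your peers, and use its unique identifier to discard copies you have already seen — under the simplifying assumption of instant delivery; real servers spoke UUCP or (later) NNTP and deduplicated by Message-ID, and the server names here are invented:

```python
# A toy sketch of Usenet-style "flood fill" message propagation.
# Real servers used UUCP or NNTP, but the principle is the same:
# pass each article to every peer, and let the article's unique
# identifier (its Message-ID) stop the flood from looping forever.

class Server:
    def __init__(self, name):
        self.name = name
        self.peers = []      # neighboring servers this one feeds
        self.seen = {}       # message ID -> article body

    def receive(self, msg_id, body):
        if msg_id in self.seen:      # already have it: stop the flood here
            return
        self.seen[msg_id] = body
        for peer in self.peers:      # otherwise pass it along to every peer
            peer.receive(msg_id, body)

# A tiny network with a cycle in it: A <-> B <-> C <-> A.
a, b, c = Server("A"), Server("B"), Server("C")
a.peers = [b, c]
b.peers = [a, c]
c.peers = [a, b]

# Post once to A; the article reaches every server exactly once,
# despite the redundant links.
a.receive("<1979-demo@A>", "Hello, Usenet")
print(all("<1979-demo@A>" in s.seen for s in (a, b, c)))  # True
```

The redundancy that makes duplicate-suppression necessary is also what made Usenet so hard to kill: any single server could vanish and articles would still find their way around it.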

Strictly speaking, Usenet did not depend on the Internet for its existence. As far as it was concerned, its servers could pass messages among themselves in whatever way they found most convenient. In its first few years, this sometimes meant that they dialed one another up directly over ordinary phone lines and talked via modem. As it matured into a mainstay of hacker culture, however, Usenet gradually became almost inseparable from the Internet itself in the minds of most of its users.

From the three servers that marked its inauguration in 1979, Usenet expanded to 11,000 by 1988. The discussions that took place there didn’t quite encompass the whole of the human experience equally; the demographics of the hacker user base meant that computer programming tended to get more play than knitting, Pink Floyd more play than Madonna, and science-fiction novels more play than romances. Still, the newsgroups were nothing if not energetic and free-wheeling. For better or for worse, they regularly went places the commercial online services didn’t dare allow. For example, Usenet became one of the original bastions of online pornography, first in the form of fevered textual fantasies, then in the somehow even more quaint form of “ASCII art,” and finally, once enough computers had the graphics capabilities to make it worthwhile, as actual digitized photographs. In light of this, some folks expressed relief that it was downright difficult to get access to Usenet and the rest of the Internet if one didn’t teach or attend classes at a university, or work at a tech company or government agency.

The perception of the Internet as a lawless jungle, more exciting but also more dangerous than the neatly trimmed gardens of the commercial online services, was cemented by the Morris Worm, which was featured on the front page of the New York Times for four straight days in November of 1988. Created by a 23-year-old Cornell University graduate student named Robert Tappan Morris, it served as many people’s ironic first notice that a network called the Internet existed at all. The exploit, which its creator later insisted had been meant only as a harmless prank, spread by attaching itself to some of the core networking applications used by Unix, a powerful and flexible operating system that was by far the most popular among Internet-connected computers at the time. The Morris Worm came as close as anything ever has to bringing the entire Internet down when its exponential rate of growth effectively turned it into a network-wide denial-of-service attack — again, accidentally, if its creator is to be believed. (Morris himself came very close to a prison sentence, but escaped with three years of probation, a $10,000 fine, and 400 hours of community service, after which he went on to a lucrative career in the tech sector at the height of the dot-com boom.)

Attitudes toward the Internet in the less rarefied wings of the computing press had barely begun to change even by the beginning of the 1990s. An article from the issue of InfoWorld dated February 4, 1991, encapsulates the contemporary perceptions among everyday personal-computer owners of this “vast collection of networks” which is “a mystery even to people who call it home.”

It is a highway of ideas, a collective brain for the nation’s scientists, and perhaps the world’s most important computer bulletin board. Connecting all the great research institutions, a large network known collectively as the Internet is where scientists, researchers, and thousands of ordinary computer users get their daily fix of news and gossip.

But it is the same network whose traffic is occasionally dominated by X-rated graphics files, UFO sighting reports, and other “recreational” topics. It is the network where renegade “worm” programs and hackers occasionally make the news.

As with all communities, this electronic village has both high- and low-brow neighborhoods, and residents of one sometimes live in the other.

What most people call the Internet is really a jumble of networks rooted in academic and research institutions. Together these networks connect over 40 countries, providing electronic mail, file transfer, remote login, software archives, and news to users on 2000 networks.

Think of a place where serious science comes from, whether it’s MIT, the national laboratories, a university, or [a] private enterprise, [and] chances are you’ll find an Internet address. Add [together] all the major sites, and you have the seeds of what detractors sometimes call “Anarchy Net.”

Many people find the Internet to be shrouded in a cloud of mystery, perhaps even intrigue.

With addresses composed of what look like contractions surrounded by ‘!’s, ‘@’s, and ‘.’s, even Internet electronic mail seems to be from another world. Never mind that these “bangs,” “at signs,” and “dots” create an addressing system valid worldwide; simply getting an Internet address can be difficult if you don’t know whom to ask. Unlike CompuServe or one of the other email services, there isn’t a single point of contact. There are as many ways to get “on” the Internet as there are nodes.

At the same time, this complexity serves to keep “outsiders” off the network, effectively limiting access to the world’s technological elite.

The author of this article would doubtless have been shocked to learn that within just four or five years this confusing, seemingly willfully off-putting network of scientists and computer nerds would become the hottest buzzword in media, and that absolutely everybody, from your grandmother to your kids’ grade-school teacher, would be rushing to get onto this Internet thing before they were left behind, even as stalwart rocks of the online ecosystem of 1991 like CompuServe would already be well on their way to becoming relics of a bygone age.

The Internet had begun in the United States, and the locus of the early mainstream excitement over it would soon return there. In between, though, the stroke of inventive genius that would lead to said excitement would happen in the Old World confines of Switzerland.


Tim Berners-Lee

In many respects, he looks like an Englishman from central casting — quiet, courteous, reserved. Ask him about his family life and you hit a polite but exceedingly blank wall. Ask him about the Web, however, and he is suddenly transformed into an Italian — words tumble out nineteen to the dozen and he gesticulates like mad. There’s a deep, deep passion here. And why not? It is, after all, his baby.

— John Naughton, writing about Tim Berners-Lee

The seeds of the Conseil Européen pour la Recherche Nucléaire — better known in the Anglosphere as simply CERN — were planted amidst the devastation of post-World War II Europe by the great French quantum physicist Louis de Broglie. Possessing an almost religious faith in pure science as a force for good in the world, he proposed a new, pan-European foundation dedicated to exploring the subatomic realm. “At a time when the talk is of uniting the peoples of Europe,” he said, “[my] attention has turned to the question of developing this new international unit, a laboratory or institution where it would be possible to carry out scientific work above and beyond the framework of the various nations taking part. What each European nation is unable to do alone, a united Europe can do, and, I have no doubt, would do brilliantly.” After years of dedicated lobbying on de Broglie’s part, CERN officially came to be in 1954, with its base of operations in Geneva, Switzerland, one of the places where Europeans have traditionally come together for all manner of purposes.

The general technological trend at CERN over the following decades was the polar opposite of what was happening in computing: as scientists attempted to peer deeper and deeper into the subatomic realm, the machines they required kept getting bigger and bigger. Between 1983 and 1989, CERN built the Large Electron-Positron Collider in Geneva. With a circumference of almost seventeen miles, it was the largest single machine ever built in the history of the world. Managing projects of such magnitude, some of them employing hundreds of scientists and thousands of support staff, required a substantial computing infrastructure, along with many programmers and systems architects to run it. Among this group was a quiet Briton named Tim Berners-Lee.

Berners-Lee’s credentials were perfect for his role. He had earned a bachelor’s degree in physics from Oxford in 1976, only to find that pure science didn’t satisfy his urge to create practical things that real people could make use of. As it happened, both of his parents were computer scientists of considerable note; they had both worked on the University of Manchester’s Mark I computer, one of the world’s very first stored-program von Neumann machines. So, it was natural for their son to follow in their footsteps, to make a career for himself in the burgeoning new field of microcomputing. Said career took him to CERN for a six-month contract in 1980, then back to Geneva on a more permanent basis in 1984. Because of his background in physics, Berners-Lee could understand the needs of the scientists he served better than many of his colleagues; his talent for devising workable solutions to their problems turned him into something of a star at CERN. Among other projects, he labored long and hard to devise a way of making the thousands upon thousands of pages of documentation that were generated at CERN each year accessible, manageable, and navigable.

But, for all that Berners-Lee was being paid to create an internal documentation system for CERN, it’s clear that he began thinking along bigger lines fairly quickly. The same problems of navigation and discoverability that dogged his colleagues at CERN were massively present on the Internet as a whole. Information was hidden there in out-of-the-way repositories that could only be accessed using command-line-driven software with obscure command sets — if, that is, you knew that it existed at all.

His idea of a better way came courtesy of hypertext theory: a non-linear approach to reading texts and navigating an information space, built around associative links embedded within and between texts. First proposed by Vannevar Bush, the World War II-era MIT giant whom we briefly met in an earlier article in this series, hypertext theory had later proved a superb fit with a mouse-driven graphical computer interface which had been pioneered at Xerox PARC during the 1970s under the astute management of our old friend Robert Taylor. The PARC approach to user interfaces reached the consumer market in a prominent way for the first time in 1984 as the defining feature of the Apple Macintosh. And the Mac in turn went on to become the early hotbed of hypertext experimentation on consumer-grade personal computers, thanks to Apple’s own HyperCard authoring system and the HyperCard-driven laser discs and CD-ROMs that soon emerged from companies like Voyager.

The user interfaces found in HyperCard applications were surprisingly similar to those found in the web browsers of today, but they were limited to the curated, static content found on a single floppy disk or CD-ROM. “They’ve already done the difficult bit!” Berners-Lee remembers thinking. Now someone just needed to put hypertext on the Internet, to allow files on one computer to link to files on another, with anyone and everyone able to create such links. He saw how “a single hypertext link could lead to an enormous, unbounded world.” Yet no one else seemed to see this. So, he decided at last to do it himself. In a fit of self-deprecating mock-grandiosity, not at all dissimilar to J.C.R. Licklider’s call for an “Intergalactic Computer Network,” he named his proposed system the “World Wide Web.” He had no idea how perfect the name would prove.

He sat down to create his World Wide Web in October of 1990, using a NeXT workstation computer, the flagship product of the company Steve Jobs had formed after getting booted out of Apple several years earlier. It was an expensive machine — far too expensive for the ordinary consumer market — but supremely elegant, combining the power of the hacker-favorite operating system Unix with the graphical user interface of the Macintosh.

The NeXT computer on which Tim Berners-Lee created the foundations of the World Wide Web. It then went on to become the world’s first web server.

Progress was swift. In less than three months, Berners-Lee coded the world’s first web server and browser, which also entailed developing the Hypertext Transfer Protocol (HTTP) they used to communicate with one another and the Hypertext Markup Language (HTML) for embedding associative links into documents. These were the foundational technologies of the Web, and they remain essential to the networked digital world we know today.
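The division of labor Berners-Lee established — a server handing out HTML documents over HTTP, a browser fetching and following their links — is easy to demonstrate with modern tools. The sketch below leans on Python’s standard library rather than anything resembling his original 1990 code, and the page content is invented, but the shape of the exchange is the same: a GET request, a `text/html` response, and an `<a href>` tag carrying the associative link:

```python
# A minimal sketch of the Web's two foundational pieces: an HTTP server
# handing out an HTML document, and a client fetching it. The <a href>
# tag is the "associative link" embedded in the document itself.
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# An invented stand-in for an early CERN-style page.
PAGE = b"""<html><body>
<h1>CERN Phone Book (sketch)</h1>
<p>See also the <a href="/people.html">people directory</a>.</p>
</body></html>"""

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):                      # every request gets the same page
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(PAGE)
    def log_message(self, *args):          # silence per-request logging
        pass

# Port 0 asks the OS for any free port; run the server on a background thread.
server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

with urllib.request.urlopen(f"http://127.0.0.1:{server.server_port}/") as resp:
    html = resp.read().decode()
server.shutdown()

print("/people.html" in html)  # True: the fetched page carries a link onward
```

Everything else the Web became — images, forms, scripts — was layered onto this same request-and-response skeleton.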

The first page to go up on the nascent World Wide Web, which belied its name at this point by being available only inside CERN, was a list of phone numbers of the people who worked there. Clicking through its hypertext links being much easier than entering commands into the database application CERN had previously used for the purpose, it served to get Berners-Lee’s browser installed on dozens of NeXT computers. But the really big step came in August of 1991, when, having debugged and refined his system as thoroughly as he was able by using his CERN colleagues as guinea pigs, he posted his web browser, his web server, and documentation on how to use HTML to create web documents on Usenet. The response was not immediately overwhelming, but it was gratifying in a modest way. Berners-Lee:

People who saw the Web and realised the sense of unbound opportunity began installing the server and posting information. Then they added links to related sites that they found were complementary or simply interesting. The Web began to be picked up by people around the world. The messages from system managers began to stream in: “Hey, I thought you’d be interested. I just put up a Web server.”

Tim Berners-Lee’s original web browser, which he named Nexus in honor of its host platform. The NeXT computer actually had quite impressive graphics capabilities, but you’d never know it by looking at Nexus.

In December of 1991, Berners-Lee begged for and was reluctantly granted a chance to demonstrate the World Wide Web at that year’s official Hypertext conference in San Antonio, Texas. He arrived with high hopes, only to be accorded a cool reception. The hypertext movement came complete with more than its fair share of stodgy theorists with rigid ideas about how hypertext ought to work — ideas which tended to have more to do with the closed, curated experiences of HyperCard than the anarchic open Internet. Normally modest almost to a fault, the Berners-Lee of today does allow himself to savor the fact that “at the same conference two years later, every project on display would have something to do with the Web.”

But the biggest factor holding the Web back at this point wasn’t the resistance of the academics; it was rather its being bound so tightly to the NeXT machines, which had a total user base of no more than a few tens of thousands, almost all of them at universities and research institutions like CERN. Although some browsers had been created for other, more popular computers, they didn’t sport the effortless point-and-click interface of Berners-Lee’s original; instead they presented their links like footnotes, whose numbers the user had to type in to visit them. Thus Berners-Lee and the fellow travelers who were starting to coalesce around him made it their priority in 1992 to encourage the development of more point-and-click web browsers. One for the X Window System, the graphical-interface layer which had been developed for the previously text-only Unix, appeared in April. Even more importantly, a Macintosh browser arrived just a month later; this marked the first time that the World Wide Web could be explored in the way Berners-Lee had envisioned on a computer that the proverbial ordinary person might own and use.

Amidst the organization directories and technical papers which made up most of the early Web — many of the latter inevitably dealing with the vagaries of HTTP and HTML themselves — Berners-Lee remembers one site that stood out for being something else entirely, for being a harbinger of the more expansive, humanist vision he had had for his World Wide Web almost from the start. It was a site about Rome during the Renaissance, built up from a traveling museum exhibition which had recently visited the American Library of Congress. Berners-Lee:

On my first visit, I wandered to a music room. There was an explanation of the events that caused the composer Carpentras to present a decorated manuscript of his Lamentations of Jeremiah to Pope Clement VII. I clicked, and was glad I had a 21-inch colour screen: suddenly it was filled with a beautifully illustrated score, which I could gaze at more easily and in more detail than I could have done had I gone to the original exhibit at the Library of Congress.

If we could visit this site today, however, we would doubtless be struck by how weirdly textual it was for being a celebration of the Renaissance, one of the most excitingly visual ages in all of history. The reality is that it could hardly have been otherwise; the pages displayed by Berners-Lee’s NeXT browser and all of the others could not mix text with images at all. The best they could do was to present links to images, which, when clicked, would lead to a picture being downloaded and displayed in a separate window, as Berners-Lee describes above.

But already another man on the other side of the ocean was working on changing that — working, one might say, on the last pieces necessary to make a World Wide Web that we can immediately recognize today.


Marc Andreessen barefoot on the cover of Time magazine, creating the archetype of the dot-com entrepreneur/visionary/rock star.

Tim Berners-Lee was the last of the old guard of Internet pioneers. Steeped in an ethic of non-profit research for the abstract good of the human race, he never attempted to commercialize his work. Indeed, he has seemed in the decades since his masterstroke almost to willfully shirk the money and fame that some might say are rightfully his for putting the finishing touch on the greatest revolution in communications since the printing press, one which has bound the world together in a way that Samuel Morse and Alexander Graham Bell could never have dreamed of.

Marc Andreessen, by contrast, was the first of a new breed of business entrepreneurs who have dominated our discussions of the Internet from the mid-1990s until the present day. Yes, one can trace the cult of the tech-sector disruptor, “making the world a better place” and “moving fast and breaking things,” back to the dapper young Steve Jobs who introduced the Apple Macintosh to the world in January of 1984. But it was Andreessen and the flood of similar young men that followed him during the 1990s who well and truly embedded the archetype in our culture.

Before any of that, though, he was just a kid who decided to make a web browser of his own.

Andreessen first discovered the Web not long after Berners-Lee first made his tools and protocols publicly available. At the time, he was a twenty-year-old student at the University of Illinois at Urbana-Champaign who held a job on the side at the National Center for Supercomputing Applications, a research institute with close ties to the university. The name sounded very impressive, but he found the job itself to be dull as ditch water. His dissatisfaction came down to the same old split between the “giant brain” model of computing of folks like Marvin Minsky and the more humanist vision espoused in earlier years by people like J.C.R. Licklider. The NCSA was in pursuit of the former, but Andreessen was a firm adherent of the latter.

Bored out of his mind writing menial code for esoteric projects he couldn’t care less about, Andreessen spent a lot of time looking for more interesting things to do on the Internet. And so he stumbled across the fledgling World Wide Web. It didn’t look like much — just a screen full of text — but he immediately grasped its potential.

In fact, he judged, the Web’s not looking like much was a big part of its problem. Casting about for a way to snazz it up, he had the stroke of inspiration that would make him a multi-millionaire within three years. He decided to add a new tag to Berners-Lee’s HTML specification: “<img>,” for “image.” By using it, one would be able to show pictures inline with text. It could make the Web an entirely different sort of place, a wonderland of colorful visuals to go along with its textual content.
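Andreessen's tag survives in HTML to this day. A minimal sketch of the sort of page it made possible might have looked something like the following (the page content and file name here are invented for illustration; real early-1990s HTML was even looser about structure than this):

```html
<html>
<head><title>A Multimedia Web Page</title></head>
<body>
<h1>Welcome to my home page!</h1>
<p>Here is a photograph from my last caving trip:</p>
<!-- The new tag: the browser fetches cave.gif and renders it inline with the text. -->
<img src="cave.gif">
</body>
</html>
```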

As conceptual leaps go, this one really wasn’t that audacious. The biggest buzzword in consumer computing in recent years — bigger than hypertext — had been “multimedia,” a catch-all term describing exactly this sort of digital mixing of content types, something which was now becoming possible thanks to the ever-improving audiovisual capabilities of personal computers since those primitive early days of the trinity of 1977. Hypertext and multimedia had actually been sharing many of the same digs for quite some time. The HyperCard authoring system, for example, boasted capabilities much like those which Andreessen now wished to add to HTML, and the Voyager CD-ROMs already existed as compelling case studies in the potential of interactive multimedia hypertext in a non-networked context.

Still, someone had to be the first to put two and two together, and that someone was Marc Andreessen. An only moderately accomplished programmer himself, he convinced a much better one, another NCSA employee named Eric Bina, to help him create his new browser. The pair fell into roles vaguely reminiscent of those of Steve Jobs and Steve Wozniak during the early days of Apple Computer: Andreessen set the agenda and came up with the big ideas — many of them derived from tireless trawling of the Usenet newsgroups to find out what people didn’t like about the current browsers — and Bina turned his ideas into reality. Andreessen’s relentless focus on the end-user experience led to other important innovations beyond inline images, such as the “forward,” “back,” and “refresh” buttons that remain so ubiquitous in the browsers of today. The higher-ups at NCSA eventually agreed to allow Andreessen to brand his browser as a quasi-official product of their institute; on an Internet still dominated by academics, such an imprimatur was sure to be a useful aid. In January of 1993, the browser known as Mosaic — the name seemed an apt metaphor for the colorful multimedia pages it could display — went up on NCSA’s own servers. After that, “it spread like a virus,” in the words of Andreessen himself.

The slick new browser and its almost aggressively ambitious young inventor soon came to the attention of Tim Berners-Lee. He calls Andreessen “a total contrast to any of the other [browser] developers. Marc was not so much interested in just making the program work as in having his browser used by as many people as possible.” But, lest he sound uncharitable toward his populist counterpart, he hastens to add that “that was, of course, what the Web needed.” Berners-Lee made the Web; the garrulous Andreessen brought it to the masses in a way the self-effacing Briton could arguably never have managed on his own.

About six months after Mosaic hit the Internet, Tim Berners-Lee came to visit its inventor. Their meeting brought with it the first palpable signs of the tension that would surround the World Wide Web and the Internet as a whole almost from that point forward. It was the tension between non-profit idealism and the urge to commercialize, to brand, and finally to control. Even before the meeting, Berners-Lee had begun to feel disturbed by the press coverage Mosaic was receiving, helped along by the public-relations arm of NCSA itself: “The focus was on Mosaic, as if it were the Web. There was little mention of other browsers, or even the rest of the world’s effort to create servers. The media, which didn’t take the time to investigate deeper, started to portray Mosaic as if it were equivalent to the Web.” Now, at the meeting, he was taken aback by an atmosphere that smacked more of a business negotiation than a friendly intellectual exchange, even as he wasn’t sure what exactly was being negotiated. “Marc gave the impression that he thought of this meeting as a poker game,” Berners-Lee remembers.

Andreessen’s recollections of the meeting are less nuanced. Berners-Lee, he claims, “bawled me out for adding images to the thing.” Andreessen:

Academics in computer science are so often out to solve these obscure research problems. The universities may force it upon them, but they aren’t always motivated to just do something that people want to use. And that’s definitely the sense that we always had of CERN. And I don’t want to mis-characterize them, but whenever we dealt with them, they were much more interested in the Web from a research point of view rather than a practical point of view. And so it was no big deal to them to do a NeXT browser, even though nobody would ever use it. The concept of adding an image just for the sake of adding an image didn’t make sense [to them], whereas to us, it made sense because, let’s face it, they made pages look cool.

The first version of Mosaic ran only on X-Windows, but, as the above would indicate, Andreessen had never intended for that to be the case for long. He recruited more programmers to write ports for the Macintosh and, most importantly of all, for Microsoft Windows, whose market share of consumer computing in the United States was crossing the threshold of 90 percent. When the Windows version of Mosaic went online in September of 1993, it motivated hundreds of thousands of computer owners to engage with the Internet for the first time; the Internet to them effectively was Mosaic, just as Berners-Lee had feared would come to pass.

The Mosaic browser. It may not look like much today, but its ability to display inline images was a game-changer.

At this time, Microsoft Windows didn’t even include a TCP/IP stack, the software layer that could make a machine into a full-fledged denizen of the Internet, with its own IP address and all the trimmings. In the brief span of time before Microsoft remedied that situation, a doughty Australian entrepreneur named Peter Tattam made a small fortune from his add-on TCP/IP stack, which he distributed as shareware. Meanwhile other entrepreneurs scrambled to set up Internet service providers to give the unwashed masses an on-ramp to the Web — no university enrollment required! — and the shelves of computer stores filled up with all-in-one Internet kits that were designed to make the whole process as painless as possible.

The unabashed elitists who had been on the Internet for years scorned the newcomers, but there was nothing they could do to stop the invasion, which stormed their ivory towers with overwhelming force. Between December of 1993 and December of 1994, the total amount of Web traffic jumped by a factor of eight. By the latter date, there were more than 10,000 separate sites on the Web, thanks to people all over the world who had rolled up their sleeves and learned HTML so that they could get their own idiosyncratic messages out to anyone who cared to read them. If some (most?) of the sites they created were thoroughly frivolous, well, that was part of the charm of the thing. The World Wide Web was the greatest leveler in the history of media; it enabled anyone to become an author and a publisher rolled into one, no matter how rich or poor, talented or talent-less. The traditional gatekeepers of mass media have been trying to figure out how to respond ever since.

Marc Andreessen himself abandoned the browser that did so much to make all this happen before it celebrated its first birthday. He graduated from university in December of 1993, and, annoyed by the growing tendency of his bosses at NCSA to take credit for his creation, he decamped for — where else? — Silicon Valley. There he bumped into Jim Clark, a huge name in the Valley, who had founded Silicon Graphics twelve years earlier and turned it into the biggest name in digital special effects for the film industry. Feeling hamstrung by Silicon Graphics’s increasing bureaucracy as it settled into corporate middle age, Clark had recently left the company, leading to much speculation about what he would do next. The answer came on April 4, 1994, when he and Marc Andreessen founded Mosaic Communications in order to build a browser even better than the one the latter had built at NCSA. The dot-com boom had begun.

(Sources: the books A Brief History of the Future: The Origins of the Internet by John Naughton, From Gutenberg to the Internet: A Sourcebook on the History of Information Technology edited by Jeremy M. Norman, A History of Modern Computing (2nd ed.) by Paul E. Ceruzzi, Communication Networks: A Concise Introduction by Jean Walrand and Shyam Parekh, Weaving the Web by Tim Berners-Lee, How the Web was Born by James Gillies and Robert Cailliau, and Architects of the Web by Robert H. Reid. InfoWorld of August 24 1987, September 7 1987, April 25 1988, November 28 1988, January 9 1989, October 23 1989, and February 4 1991; Computer Gaming World of May 1993.)

Footnotes

1 When he first stated his law in 1965, Moore actually proposed a doubling every single year, but revised his calculations in 1975.
 


A Web Around the World, Part 9: A Network of Networks

UCLA will become the first station in a nationwide computer network which, for the first time, will link together computers of different makes and using different machine languages into one time-sharing system. Creation of the network represents a major step in computer technology and may serve as the forerunner of large computer networks of the future. The ambitious project is supported by the Defense Department’s Advanced Research Projects Agency (ARPA), which has pioneered many advances in computer research, technology, and applications during the past decade.

The system will, in effect, pool the computer power, programs, and specialized know-how of about fifteen computer-research centers, stretching from UCLA to MIT. Other California network stations (or nodes) will be located at the Rand Corporation and System Development Corporation, both of Santa Monica; the Santa Barbara and Berkeley campuses of the University of California; Stanford University and the Stanford Research Institute.

The first stage of the network will go into operation this fall as a sub-net joining UCLA, Stanford Research Institute, UC Santa Barbara, and the University of Utah. The entire network is expected to be operational in late 1970.

Engineering professor Leonard Kleinrock, who heads the UCLA project, describes how the network might handle a sample problem:

Programmers at Computer A have a blurred photo which they want to bring into focus. Their program transmits the photo to Computer B, which specializes in computer graphics, and instructs Computer B’s program to remove the blur and enhance the contrast. If B requires specialized computational assistance, it may call on Computer C for help. The processed work is shuttled back and forth until B is satisfied with the photo, and then sends it back to Computer A. The messages, ranging across the country, can flash between computers in a matter of seconds, Dr. Kleinrock says.

Each computer in the network will be equipped with its own interface message processor (IMP), which will double as a sort of translator among the Babel of computer languages and as a message handler and router.

Computer networks are not an entirely new concept, notes Dr. Kleinrock. The SAGE radar defense system of the fifties was one of the first, followed by the airlines’ SABRE reservation system. However, [both] are highly specialized and single-purpose systems, in contrast to the planned ARPA system which will link a wide assortment of different computers for a wide range of unclassified research functions.

“As of now, computer networks are still in their infancy,” says Dr. Kleinrock. “But as they grow up and become more sophisticated, we will probably see the spread of ‘computer utilities,’ which, like present electric and telephone utilities, will serve individual homes and offices across the country.”

— UCLA press release dated July 3, 1969 (which may include the first published use of the term “router”)



In July of 1968, Larry Roberts sent out a request for bids to build the ARPANET’s interface message processors — the world’s very first computer routers. More than a dozen proposals were received in response, some of them from industry heavy hitters like DEC and Raytheon. But when Roberts and Bob Taylor announced their final decision at the end of the year, everyone was surprised to learn that they had given the contract to the comparatively tiny firm of Bolt Beranek and Newman.

BBN, as the company was more typically called, came up in our previous article as well; J.C.R. Licklider was working there at the time he wrote his landmark paper on “human-computer symbiosis.” Formed in 1948 as an acoustics laboratory, BBN moved into computers in a big way during the 1950s, developing in the process a symbiotic relationship of its own with MIT. Faculty and students circulated freely between the university and BBN, which became a hacker refuge, tolerant of all manner of eccentricity and uninterested in such niceties as dress codes and stipulated working hours. A fair percentage of BBN’s staff came to consist of MIT dropouts, young men who had become too transfixed by their computer hacking to keep up with the rest of their coursework.

BBN’s forte was one-off, experimental contracts, not the sort of thing that led directly to salable commercial products but that might eventually do so ten or twenty years in the future. In this sense, the ARPANET was right up their alley. They won the bid by submitting a more thoughtful, detailed proposal than anyone else, even going so far as to rewrite some of ARPA’s own specifications to make the IMPs operate more efficiently.

Like all of the other bidders, BBN didn’t propose to build the IMPs from scratch, but rather to adapt an existing computer for the purpose. Their choice was the Honeywell 516, one of a new generation of robust integrated-circuit-based “minicomputers,” which distinguished themselves by being no larger than the typical refrigerator and being able to run on ordinary household current. Since the ARPANET would presumably need a lot of IMPs if it proved successful, the relatively cheap and commonplace Honeywell model seemed a wise choice.

The Honeywell 516, the computer model which was transformed into the world’s first router.

Still, the plan was to start as small as possible. The first version of the ARPANET to go online would include just four IMPs, linking four research clusters together. Surprisingly, MIT was not to be one of them; it was left out because the other inaugural sites were out West and ARPA didn’t want to pay AT&T for a transcontinental line right off the bat. Instead the Universities of California at Los Angeles and Santa Barbara each got the honor of being among the first to join the ARPANET, as did the University of Utah and the Stanford Research Institute (SRI), an adjunct to Stanford University. ARPA wanted BBN to ship the first turnkey IMP to UCLA by September of 1969, and for all four of the inaugural nodes to be up and running by the end of the year. Meeting those deadlines wouldn’t be easy.

The project leader at BBN was Frank Heart, a man known for his wide streak of technological paranoia — he had a knack for zeroing in on all of the things that could go wrong with any given plan — and for being “the only person I knew who spoke in italics,” as his erstwhile BBN colleague Severo Ornstein puts it. (“Not that he was inflexible or unpleasant — just definite.”) Ornstein himself, having moved up in the world of computing since his days as a hapless entry-level “Crosstelling” specialist on the SAGE project, worked under Heart as the principal hardware architect, while an intense young hacker named Will Crowther, who loved caving and rock climbing almost as much as computers, supervised the coding. At the start, they all considered the Honeywell 516 a well-proven machine, given that it had been on the market for a few years already. They soon learned to their chagrin, however, that no one had ever pushed it as hard as they were now doing; obscure flaws in the hardware nearly derailed the project on more than one occasion. But they got it done in the end. The first IMP was shipped across the country to UCLA right on schedule.

The team from Bolt Beranek and Newman who created the world’s first routers. Severo Ornstein stands at the extreme right, Will Crowther just next to him. Frank Heart is near the center, the only man wearing a necktie.


On July 20, 1969, American astronaut Neil Armstrong stepped onto the surface of the Moon, marking one culmination of that which had begun with the launch of the Soviet Union’s first Sputnik satellite twelve years earlier. Five and a half weeks after the Moon landing, another, much quieter result of Sputnik became a reality. The first public demonstration of a functioning network router was oddly similar to some of the first demonstrations of Samuel Morse’s telegraph, in that it was an exercise in sending a message around a loop that led it right back to the place where it had first come from. A Scientific Data Systems Sigma 7 computer at UCLA sent a data packet to the IMP that had just been delivered, which was sitting right beside it. Then the IMP duly read the packet’s intended destination and sent it back where it had come from, to appear as text on a monitor screen.

There was literally nowhere else to send it, for only one IMP had been built to date and only this one computer was yet possessed of the ability to talk to it. The work of preparing the latter had been done by a team of UCLA graduate students working under Leonard Kleinrock, the man whose 1964 book had popularized the idea of packet switching. “It didn’t look like anything,” remembers Steve Crocker, a member of Kleinrock’s team. But looks can be deceiving; unlike the crowd of clueless politicians who had once watched Morse send a telegraph message in a ten-mile loop around the halls of the United States Congress, everyone here understood the implications of what they were witnessing. The IMPs worked.

Bob Taylor, the man who had pushed and pushed until he found a way to make the ARPANET happen, chose to make this moment of triumph his ironic exit cue. A staunch opponent of the Vietnam War, he had been suffering pangs of conscience over his role as a cog in the military-industrial complex for a long time, even as he continued to believe in the ARPANET’s future value for the civilian world. After Richard Nixon was elected president in November of 1968, he had decided that he would stay on just long enough to get the IMPs finished, by which point the ARPANET as a whole would hopefully be past the stage where cancellation was a realistic possibility. He stuck to that decision; he resigned just days after the first test of an IMP. His replacement was Larry Roberts — another irony, given that Taylor had been forced practically to blackmail Roberts into joining ARPA in the first place. Taylor himself would land at Xerox’s new Palo Alto Research Center, where over the course of the new decade he would help to invent much else that has become an everyday part of our digital lives.

About a month after the test of the first IMP, BBN shipped a second one, this time to the Stanford Research Institute. It was connected to its twin at UCLA by an AT&T long-distance line. Another, local cable was run from it to SRI’s Scientific Data Systems 940 computer, which was normally completely incompatible with UCLA’s Sigma machine despite coming from the same manufacturer. In this case, however, programmers at the two institutions had hacked together a method of echoing text back and forth between their computers — assuming it worked, that is; they had had no way of actually finding out.

On October 29, 1969, a UCLA student named Charlie Kline, sitting behind his Sigma 7 terminal, called up SRI on an ordinary telephone to initiate the first real test of the ARPANET. Computer rooms in those days were noisy places, what with all of the ventilation the big beasts required, so the two human interlocutors had to fairly shout into their respective telephones. “I’m going to type an L,” Kline yelled, and did so. “Did you get the L?” His opposite number acknowledged that he had. Kline typed an O. “Did you get the O?” Yes. He typed a G.

“The computer just crashed,” said the man at SRI.

“History now records how clever we were to send such a prophetic first message, namely ‘LO,'” says Leonard Kleinrock today with a laugh. They had been trying to manage “LOGIN,” which itself wouldn’t have been a challenger to Samuel Morse’s “What hath God wrought?” in the eloquence sweepstakes — but then, these were different times.

At any rate, the bug which had caused the crash was fixed before the day was out, and regular communications began. UC Santa Barbara came online in November, followed by the University of Utah in December. Satisfied with this proof of concept, ARPA agreed to embark on the next stage of the project, extending the network to the East Coast. In March of 1970, the ARPANET reached BBN itself. Needless to say, this achievement — computer networking’s equivalent to telephony’s spanning of the continent back in 1915 — went entirely unnoticed by an oblivious public. BBN was followed before the year was out by MIT, Rand, System Development Corporation, and Harvard University.


It would make for a more exciting tale to say that the ARPANET revolutionized computing immediately, but such was not the case. In its first couple of years, the network was neither a raging success nor an abject failure. On the one hand, its technical underpinnings advanced at a healthy clip; BBN steadily refined their IMPs, moving them away from modified general-purpose computers and toward the specialized routers we know today. Likewise, the network they served continued to grow; by the end of 1971, the ARPANET had fifteen nodes. But despite it all, it remained frustratingly underused; a BBN survey conducted about two years in revealed that the ARPANET was running at just 2 percent of its theoretical capacity.

The problem was one of computer communication at a higher level than that of the IMPs. Claude Shannon had told the world that information was information in a networking context, and the minds behind the ARPANET had taken his tautology to heart. They had designed a system for shuttling arbitrary blocks of data about, without concerning themselves overmuch about the actual purpose of said data. But the ability to move raw data from computer to computer availed one little if one didn’t know how to create meaning out of all those bits. “It was like picking up the phone and calling France,” Frank Heart of BBN would later say. “Even if you get the connection to work, if you don’t speak French you’ve got a little problem.”

What was needed were higher-level protocols that could run on top of the ARPANET’s packet switching — a set of agreed-upon “languages” for all of these disparate computers to use when talking with one another in order to accomplish something actually useful. Seeing that no one else was doing so, BBN and MIT finally deigned to provide them. First came Telnet, a protocol to let one log into a remote computer and interact with it at a textual command line just as if one were sitting right next to it at a local terminal. And then came the File Transfer Protocol, or FTP, which allowed one to move files back and forth between two computers, optionally performing useful transformations on them in the process, such as going from EBCDIC to ASCII text encoding or vice versa. It is a testament to how well the hackers behind these protocols did their jobs that both have remained with us to this day. Still, the application that really made the ARPANET come alive — the one that turned it almost overnight from a technological experiment to an indispensable tool for working and even socializing — was the next one to come along.
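That sort of character-set translation is easy to sketch in modern code. The snippet below uses Python's standard "cp037" codec, one common EBCDIC variant, to show why the translation mattered: the same five letters are entirely different byte sequences on an IBM mainframe and an ASCII machine.

```python
# A minimal sketch of the kind of in-transit translation FTP could perform:
# EBCDIC (used by IBM mainframes) to ASCII and back. "cp037" is one common
# EBCDIC code page shipped with Python's standard codecs.

def ebcdic_to_ascii(data: bytes) -> bytes:
    """Translate EBCDIC-encoded bytes to ASCII bytes."""
    return data.decode("cp037").encode("ascii")

def ascii_to_ebcdic(data: bytes) -> bytes:
    """Translate ASCII-encoded bytes to EBCDIC bytes."""
    return data.decode("ascii").encode("cp037")

ebcdic = ascii_to_ebcdic(b"LOGIN")
print(ebcdic)                   # EBCDIC bytes, gibberish to an ASCII machine
print(ebcdic_to_ascii(ebcdic))  # prints b'LOGIN'
```

The round trip is lossless for plain text, which is exactly what made the transformation safe for FTP to offer as an option.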

Jack Ruina was long gone as the head of all of ARPA; that role had passed to a respected physicist named Steve Lukasik. Lukasik would later remember how Larry Roberts came into his office one day in April of 1972 to try to convince him to use the ARPANET personally. “What am I going to do on the ARPANET?” the non-technical Lukasik asked skeptically.

“Well,” mused Roberts, “you could do email.”

Email wasn’t really a new idea at the time. By the mid-1960s, the largest computer at MIT had hundreds of users, who logged in as many as 30 at a time via local terminals. An undergraduate named Tom Van Vleck noticed that some users had gotten in a habit of passing messages to one another by writing them up in text files with names such as “TO TOM,” then dropping them into a shared directory. In 1965, he created what was probably the world’s first true email system in order to provide them with a more elegant solution. Just like all of the email systems that would follow it, it gave each user a virtual mailbox to which any other user could direct a virtual letter, then see it delivered instantly. Replying, forwarding, address books, carbon copies — all of the niceties we’ve come to expect — followed in fairly short order, at MIT and in many other institutions. Early in 1972, a BBN programmer named Ray Tomlinson took what struck him as the logical next step, by creating a system for sending email between otherwise separate computers — or “hosts,” as they were known in the emerging parlance of the ARPANET.

Thanks to FTP, Tomlinson already had a way of doing the grunt work of moving the individual letters from computer to computer. His biggest dilemma was a question of addressing. It was reasonable for the administrators of any single host to demand that every user have a unique login ID, which could also function as her email address. But it would be impractical to insist on unique IDs across the entire ARPANET. And even if it was possible, how was the computer on which an electronic missive had been composed to know which other computer was home to the intended recipient? Trying to maintain a shared central database of every login for every computer on the ARPANET didn’t strike Tomlinson as much of a solution.

His alternative approach, which he would later describe as no more than “obvious,” would go on to become an icon of the digital age. Each email address would consist of a local user name followed by an “at” sign (@) and the name of the host on which it lived. Just as a paper letter moves from an address in a town, then to a larger postal hub, then onward to a hub in another region, and finally to another individual street address, email would use its suffix to find the correct host on the ARPANET. Once it arrived there, said host could drill down further and route it to the correct user. “Now, there’s a nice hack,” said one of Tomlinson’s colleagues; that was about as effusive as a compliment could get in hacker circles.
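Tomlinson's scheme is simple enough to sketch in a few lines of modern code. The user and host names below are hypothetical; the point is only that splitting on the final "@" cleanly separates the two levels of routing, network-wide host lookup and host-local user lookup:

```python
# A sketch of Tomlinson's two-level addressing: the network routes on the
# host name after the "@"; the destination host resolves the local user.

def route_email(address: str) -> tuple[str, str]:
    """Return (user, host) for a user@host address."""
    user, _, host = address.rpartition("@")
    if not user or not host:
        raise ValueError(f"not a valid address: {address!r}")
    return user, host

print(route_email("lukasik@arpa-tip"))  # prints ('lukasik', 'arpa-tip')
```

No central database of every user is needed: each host only has to know its own users, while the network only has to know its hosts.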

Stephen Lukasik, ARPA head and original email-obsessed road warrior.

Steve Lukasik reluctantly allowed Larry Roberts to install an ARPANET terminal in his office for the purpose of reading and writing email. Within days, the skeptic became an evangelist. He couldn’t believe how useful email actually was. He sent out a directive to anyone who was anyone at ARPA, whether their work involved computers or not: all were ordered to accept a terminal in their office. “The way to communicate with me is through electronic mail,” he announced categorically. He soon acquired a “portable” terminal which was the size of a suitcase and weighed 30 pounds, but which came equipped with a modem that would allow him to connect to the ARPANET from any location from which he could finagle access to an ordinary telephone. He became the prototype for millions of professional road warriors to come, dialing into the office constantly from conference rooms, from hotel rooms, from airport lounges. He became perhaps the first person in the world who wasn’t already steeped in computing to make the services the ARPANET could provide an essential part of his day-to-day life.

But he was by no means the last. “Email was the biggest surprise about the ARPANET,” says Leonard Kleinrock. “It was an ad-hoc add-on by BBN, and it just blossomed. And that sucked a lot of people in.” Within a year of Lukasik’s great awakening, three quarters of all the traffic on the ARPANET consisted of emails flying to and fro, and the total volume of traffic on the network had grown by a factor of five and a half.



With a supportive ARPA administrator behind them and applications like email beginning to prove their network’s real-world usefulness, it struck the people who had designed and built the ARPANET that it was time for a proper coming-out party. They settled on the International Conference on Computers and Communications, which was to be held at the Washington, D.C., Hilton hotel in October of 1972. Almost every institution connected to the ARPANET sent representatives toting terminals and demonstration software, while AT&T ran a special high-capacity line into the hotel’s ballroom to get them all online.

More than a thousand people traipsed through the exhibition over the course of two and a half days, taking in several dozen demonstrations of what the ARPANET could do now and might conceivably be able to do in the future. It was the first that some of them had ever heard of the network, or even of the idea of computer networking in general.

One of the demonstrations bore an ironic resemblance to the SAGE system that had first proved that wide-area computer networking could work at all. Leonard Kleinrock:

One of the things that was demonstrated there was a distributed air-traffic-control system. The idea was there would be some air traffic on the system. And as a plane moved out of its region, it would be picked up by another computer in the next region, data would be exchanged, and collectively the computers would be managing airspace over a large region. This package also had the ability to freeze the simulation at any given point, take the program on Machine A, squeeze it over to Machine B, and then continue the simulation with the user being none the wiser. So there were really some sophisticated things going on there.

Severo Ornstein, who was also present, must have been having flashbacks to his early experiences with Crosstelling…

A Stanford researcher named Vint Cerf remembers three principal reactions among the attendees.

The first came from the die-hard circuit-switching people from the telephone industry, who didn’t believe packet switching could possibly work. And they were stunned because it did work. It was almost like the rail industry disbelieving that airplanes could really fly until they saw one in flight. The second group were the people who didn’t know anything about computer communications at all to speak of, and who were sort of overwhelmed by the whole thing. But then there was the third group: the people who were just as excited as little kids because of all these neat things that were going on.

Amidst the demonstrations of email, file transfers, remote logins, even remote software development, the hit of the show was something far more frivolous. Stanford’s AI Lab had brought a program called Parry, a “simulation” — we would call it a chatbot today — of a patient undergoing psychological therapy. They had wired Parry up to talk with another program called Eliza, a computerized psychotherapist created by one Joseph Weizenbaum at MIT back in 1966.

Eliza: Good evening. Tell me your problems. Please terminate input with a period or a question mark.

Parry: People get on my nerves sometimes.

Eliza: I am not sure I understand you.

Parry: You should pay more attention.

Eliza: Suppose you should pay more attention.

Parry: You're entitled to your own opinion.

Eliza: What makes you think I am entitled to my own opinion?

Parry: It bothers me just to be around people in general.

Eliza: Please go on.

Parry: I went to the track at Bay Meadows a while back.

Eliza: Why do you say that just now?

Parry: There's a lot of crooked gambling going on there.

Eliza: Please go on.

Parry: I gamble on horses.

Eliza: What does that suggest to you?

Parry: Do you know anything about bookies?

Eliza: What makes you think I know anything about bookies?

Parry: A bookie didn't pay me off once.

Eliza: What does that suggest to you?

Parry: I beat up the bookie who did not pay off, and I thought he might use his friends in the underworld to get even with me.

Eliza: What are you really talking about?

Parry: Have you ever really thought about the mafia?

Eliza: Suppose you ever really thought about the mafia.

Parry: Let's talk about something else.

And so it went. As Joseph Weizenbaum himself was at pains to point out later in his career, Eliza and Parry were nothing more than clever parlor tricks, a set of rote semantic transformations and randomized catchphrases meant to convey a tissue-thin simulacrum of real understanding, convincing only to the naïve and those actively determined to believe. Their presence here as the shabby best that the strong-AI contingent could offer, surrounded by so many genuinely visionary demonstrations of computing’s humanistic, networked future, ought to have demonstrated to the thoughtful observer how one vision of computing was delivering on its promises while the other manifestly was not. But no matter: the crowd ate it up. It seems there was no shortage of gullible true believers in the Hilton ballroom during those exciting two and a half days.
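The flavor of those "rote semantic transformations" is easy to sketch: swap first- and second-person pronouns in the user's statement, wrap the result in a canned template, and fall back on stock catchphrases when no rule fires. The following Python toy uses invented keyword rules and phrasings, not Weizenbaum's actual script, but it reproduces the general trick:

```python
# A drastically simplified Eliza-style responder. The rules and
# phrasings here are invented for illustration; the real 1966 program
# used a richer keyword-and-template script.
import random
import re

PRONOUN_SWAP = {"i": "you", "me": "you", "my": "your",
                "you": "I", "your": "my", "am": "are"}

def reflect(text):
    """Swap first- and second-person pronouns, word by word."""
    words = re.findall(r"[a-z']+", text.lower())
    return " ".join(PRONOUN_SWAP.get(w, w) for w in words)

def respond(statement, rng=random):
    # One "semantic transformation" rule plus randomized catchphrases.
    if statement.lower().startswith("you "):
        return f"What makes you think {reflect(statement)}?"
    return rng.choice(["Please go on.",
                       "What does that suggest to you?",
                       "I am not sure I understand you."])

print(respond("You know anything about bookies"))
# -> What makes you think I know anything about bookies?
```

Note how the single reflection rule is enough to generate the "What makes you think I know anything about bookies?" exchange from the transcript above; everything else is a shuffle of canned filler.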


The International Conference on Computers and Communications provided the ARPANET with some of its first press coverage beyond academic journals. Within computing circles, however, the ARPANET’s existence hadn’t gone unnoticed even by those who, thanks to accidents of proximity, had no opportunity to participate in it. During the early 1970s, would-be ARPANET equivalents popped up in a number of places outside the continental United States. There was ALOHANET, which used radio waves to join the various campuses of the University of Hawaii, which were located on different islands, into one computing neighborhood. There was the National Physical Laboratory (NPL) network in Britain, which served that country’s research community in much the same way that ARPANET served computer scientists in the United States. (The NPL network’s design actually dated back to the mid-1960s, and some of its proposed architecture had influenced the ARPANET, making it arguably more a case of parallel evolution than of learning from example.) Most recently, there was a network known as CYCLADES in development in France.

All of which is to say that computer networking in the big picture was looking more and more like the early days of telephony: a collection of discrete networks that served their own denizens well but had no way of communicating with one another. This wouldn’t do at all; ever since the time when J.C.R. Licklider had been pushing his Intergalactic Computer Network, proponents of wide-area computer networking had had a decidedly internationalist, even utopian streak. As far as they were concerned, the world’s computers — all of the world’s computers, wherever they happened to be physically located — simply had to find a way to talk to one another.

The problem wasn’t one of connectivity in its purest sense. As we saw in earlier articles, telephony had already found ways of going where wires could not easily be strung decades before. And by now, many of telephony’s terrestrial radio and microwave beams had been augmented or replaced by communications satellites — another legacy of Sputnik — that served to bind the planet’s human voices that much closer together. There was no intrinsic reason that computers couldn’t talk to one another over the same links. The real problem was rather that the routers on each of the extant networks used their own protocols for talking among themselves and to the computers they served. The routers of the ARPANET, for example, used something called the Network Control Program, or NCP, which had been codified by a team from Stanford led by Steve Crocker, based upon the early work of BBN hackers like Will Crowther. Other networks used completely different protocols. How were they to make sense of one another? Larry Roberts came to see this as computer networking’s next big challenge.

He happened to have working just under him at ARPA a fellow named Bob Kahn, a bright spark who had already achieved much in computing in his 35 years. Roberts now assigned Kahn the task of trying to make sense of the international technological Tower of Babel that was computer networking writ large. Kahn in turn enlisted Stanford’s Vint Cerf as a collaborator.

Bob Kahn

Vint Cerf

The two theorized and argued with one another and with their academic colleagues for about a year, then published their conclusions in the May 1974 issue of IEEE Transactions on Communications, in an article entitled “A Protocol for Packet Network Intercommunication.” It introduced to the world a new word: the “Internet,” shorthand for Kahn and Cerf’s envisioned network of networks. The linchpin of their scheme was a sort of meta-network of linked “gateways,” special routers that handled all traffic going in and out of the individual networks; if the routers on the ARPANET were that network’s interstate highway system, its gateway would become its international airport. A host wishing to send a packet to a computer outside its own network would pass it to its local gateway using its network’s standard protocols, but would include within the packet information about the particular “foreign” computer it was trying to reach. The gateway would then rejigger the packet into a universal standard format and send it over the meta-network to the gateway of the network to which the foreign computer belonged. Then this gateway would rejigger the packet yet again, into a format suitable for passing over the network behind it to reach its ultimate destination.
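The double rejiggering can be modeled in miniature. The packet formats and function names below are invented for illustration (the real NCP and TCP wire layouts were far more involved); the point is only the shape of the scheme: native format in, universal envelope across the meta-network, native format out.

```python
# A toy model of the gateway scheme: each network keeps its own native
# packet format internally, and gateways translate to and from a shared
# inter-network envelope. All formats here are invented.

def rejigger_outbound(native_packet, dest_net, dest_host):
    """Gateway A: native format -> universal inter-network envelope."""
    return {"dest_net": dest_net,
            "dest_host": dest_host,
            "payload": native_packet["data"]}

def rejigger_inbound(envelope):
    """Gateway B: universal envelope -> destination's native format."""
    return {"data": envelope["payload"]}

# A host hands its gateway a packet bound for a "foreign" computer:
native = {"data": b"hello, foreign host"}
envelope = rejigger_outbound(native, dest_net=42, dest_host=7)
delivered = rejigger_inbound(envelope)
assert delivered == native  # the payload survives both translations
```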

Kahn and Cerf detailed a brand-new protocol to allow the gateways on the meta-network to talk among themselves. They called it the Transmission Control Protocol, or TCP. It gave each computer on the networks served by the gateways the equivalent of a telephone number. These “TCP addresses” — which we now call “IP addresses,” for reasons we’ll get to shortly — originally consisted of three fields, each containing a number between 0 and 255. The first field stipulated the network to which the host belonged; think of it as a telephone number’s country code. The other two fields identified the specific computer on that network. “Network identification allows up to 256 distinct networks,” wrote Kahn and Cerf. “This seems sufficient for the foreseeable future. Similarly, the TCP identifier field permits up to 65,536 distinct [computers] to be addressed, which seems more than sufficient for any given network.” Time would prove these statements to be among their few failures of vision.
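Following the description above, the original address can be modeled as an 8-bit network field (the "country code") plus 16 bits of host identifier; the exact 1974 wire layout is simplified here to the packing sketched below.

```python
# A sketch of the original three-field "TCP address": one 8-bit field
# naming the network, and two more 8-bit fields (16 bits in all)
# identifying the host on that network. A simplification of the
# layout in Kahn and Cerf's 1974 paper.

def pack_tcp_address(network: int, host: int) -> int:
    if not 0 <= network <= 255:
        raise ValueError("only 256 distinct networks fit in 8 bits")
    if not 0 <= host <= 65535:
        raise ValueError("only 65,536 hosts fit in 16 bits")
    return (network << 16) | host   # 24 bits in all

def unpack_tcp_address(addr: int) -> tuple[int, int]:
    return addr >> 16, addr & 0xFFFF

assert unpack_tcp_address(pack_tcp_address(10, 515)) == (10, 515)
```

The two `ValueError` branches are exactly the limits Kahn and Cerf waved away as "sufficient for the foreseeable future."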

It wasn’t especially easy to convince the managers of other networks, who came from different cultures and were all equally convinced that their way of doing things was the best way, to accept the standard being shoved in their faces by the long and condescending arm of the American government. Still, the reality was that TCP was as solid and efficient a protocol as anyone could ask for, and there were huge advantages to be had by linking up with the ARPANET, where more cutting-edge computer research was happening than anywhere else. Late in 1975, the NPL network in Britain, the second largest in the world, officially joined up. After that, the Internet began to take on an unstoppable momentum of its own. In 1981, with the number of individual networks on it barreling with frightening speed toward the limit of 256, a new addressing scheme was hastily adopted, one which added a fourth field to each computer’s telephone number to create the format we are still familiar with today.
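That fourth field yielded the 32-bit, four-number format still used for IPv4 addresses today. Python's standard `ipaddress` module can demonstrate the packing; this is a modern illustration, not period software.

```python
# The 1981 scheme: four fields of 0-255 each, packed into 32 bits --
# the familiar dotted quad.
import ipaddress

fields = (10, 0, 2, 3)
packed = (fields[0] << 24) | (fields[1] << 16) | (fields[2] << 8) | fields[3]

addr = ipaddress.IPv4Address(packed)
assert str(addr) == "10.0.2.3"

# And back again: the dotted-quad string is just those 32 bits.
assert int(ipaddress.IPv4Address("10.0.2.3")) == packed
```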

Amidst all the enthusiasm for communicating across networks, the distinctions between them were gradually lost. The Internet became just the Internet, and no one much knew or cared whether any given computer was on the ARPANET or the NPL network or somewhere else. The important thing was, it was on the Internet. The individual networks’ internal protocols came slowly to resemble that of the Internet, just because it made everything easier from a technical standpoint. In 1978, in a reflection of these trends, the TCP protocol was split into a matched pair of them called TCP/IP. The part that was called the Transmission Control Protocol was equally useful for pushing packets around a network behind a gateway, while the Internet Protocol was reserved for the methods that gateways used to pass packets across network boundaries. (This is the reason that we now refer to IP addresses rather than TCP addresses.) Beginning on January 1, 1983, all computers on the ARPANET were required to use TCP rather than NCP even when they were only talking among themselves behind their gateway.



Alas, by that point ARPA itself was not what it once had been; the golden age of blue-sky computer research on the American taxpayer’s dime had long since faded into history. One might say that the beginning of the end came as early as the fall of 1969, when a newly fiscally conservative United States Congress, satisfied that the space race had been won and the Soviets left in the country’s technological dust once again, passed an amendment to the next year’s Department of Defense budget which specified that any and all research conducted by agencies like ARPA must have “a direct and apparent relationship” to the actual winning of wars by the American military. Dedicated researchers and administrators found that they could still keep their projects alive afterward by providing such justifications in the form of lengthy, perhaps deliberately obfuscated papers, but it was already a far cry from the earlier days of effectively blank checks. In 1972, as if to drive home a point to the eggheads in its ranks who made a habit of straying too far out of their lanes, the Defense Department officially renamed ARPA to DARPA: the Defense Advanced Research Projects Agency.

Late in 1973, Larry Roberts left ARPA. His replacement the following January was none other than J.C.R. Licklider, who had reluctantly agreed to another tour of duty in the Pentagon only when absolutely no one else proved willing to step up.

But, just as this was no longer quite the same ARPA, it was no longer quite the same Lick. He had continued to be a motivating force for computer networking from behind the scenes at MIT during recent years, but his decades of burning the candle at both ends, of living on fast food and copious quantities of Coca Cola, were now beginning to take their toll. He suffered from chronic asthma which left him constantly puffing at an inhaler, and his hands had a noticeable tremor that would later reveal itself to be an early symptom of Parkinson’s disease. In short, he was not the man to revive ARPA in an age of falling rather than rising budgets, of ever increasing scrutiny and internecine warfare as everyone tried to protect their own pet projects, at the expense of those of others if necessary. “When there is scarcity, you don’t have a community,” notes Vint Cerf, who perchance could have made a living as a philosopher if he hadn’t chosen software engineering. “All you have is survival.”

Lick did the best he could, but after Steve Lukasik too left, replaced by a tough cookie who grilled everyone proposing a project about its concrete military value, he felt he could hold on no longer. Lick’s second tenure at ARPA ended in September of 1975. Many computing insiders would come to mark that day as the one when a door shut forever on this Defense Department agency’s oddly idealistic past. When it came to new projects at least, DARPA from now on would content itself with being exactly what its name said it ought to be. Luckily, the Internet already existed, and had already taken on a life of its own.



Lick wound up back at MIT, the congenial home to which this prodigal son had been regularly returning since 1950. He took his place there among the younger hackers of the Dynamic Modeling Group, whose human-focused approach to computing caused him to favor them over their rivals at the AI Lab. If Lick wasn’t as fast on his feet as he once had been, he could still floor you on occasion with a cogent comment or the perfect question.

Some of the DMG folks who now surrounded him would go on to form Infocom, an obsession of the early years of this website, a company whose impact on the art of digital storytelling can still be felt to this day.[1] One of them was a computer-science student named Tim Anderson, who met the prophet in their ranks often in the humble surroundings of a terminal room.

He signed up for his two hours like everybody else. You’d come in and find this old guy sitting there with a bottle of Coke and a brownie. And it wasn’t even a good brownie; he’d be eating one of those vending-machine things as if that was a perfectly satisfying lunch. Then I also remember that he had these funny-colored glasses with yellow lenses; he had some theory that they helped him see better.

When you learned what he had done, it was awesome. He was clearly the father of us all. But you’d never know it from talking to him. Instead, there was always a sense that he was playing. I always felt that he liked and respected me, even though he had no reason to: I was no smarter than anybody else. I think everybody in the group felt the same way, and that was a big part of what made the group the way it was.

In 1979, Lick penned the last of his periodic prognostications of the world’s networked future, for a book of essays about the abstract future of computing that was published by the MIT Press. As before, he took the year 2000 as the watershed point.

On the whole, computer technology continues to advance along the curve it has followed in its three decades of history since World War II. The amount of information that can be stored for a given period or processed in a given way at unit cost doubles every two years. (The 21 years from 1979 to 2000 yielded ten doublings, for a factor of about 1000.) Wave guides, optical fibers, rooftop satellite antennas, and coaxial cables provide abundant bandwidth and inexpensive digital transmission both locally and over long distances. Computer consoles with good graphics displays and speech input and output have become almost as common as television sets. Some pocket computers are fully programmable, as powerful as IBM 360/40s used to be, and are equipped with both metallic and radio connectors to computer-communication networks.

An international network of digital computer-communication networks serves as the main and essential medium of informational interaction for governments, institutions, corporations, and individuals. The Multinet [i.e., Internet], as it is called, is hierarchical — some of the component networks are themselves networks of networks — and many of the top-level networks are national networks. The many sub-networks that comprise this network of networks are electronically and physically interconnected. Most of them handle real-time speech as well as computer messages, and some handle video.

The Multinet has supplanted the postal system for letters, the dial-telephone system for conversations and teleconferences, standalone batch-processing and time-sharing systems for computation, and most filing cabinets, microfilm repositories, document rooms, and libraries for information storage and retrieval. Many people work at home, interacting with clients and coworkers through the Multinet, and many business offices (and some classrooms) are little more than organized interconnections of such home workers and their computers. People shop through the Multinet, using its funds-transfer functions, and a few receive delivery of small items through adjacent pneumatic-tube networks. Routine shopping and appointment scheduling are generally handled by private-secretary-like programs called OLIVERs which know their masters’ needs. Indeed, the Multinet handles scheduling of almost everything schedulable. For example, it eliminates waiting to be seated at restaurants, and if you place your order through it, it can eliminate waiting to be served…

But for the first time, Lick also chose to describe a dystopian scenario to go along with the utopian one, stating that the former was just as likely as the latter if open standards like TCP/IP, and the spirit of cooperation that they personified, got pushed away in favor of closed networks and business models. If that happened, the world’s information spaces would be siloed off from one another, and humanity would have lost a chance it never even realized it had.

Because their networks are diverse and uncoordinated, recalling the track-gauge situation in the early days of railroading, the independent “value-added-carrier” companies capture only the fringes of the computer-communication market, the bulk of it being divided between IBM (integrated computer-communication systems based on satellites) and the telecommunications companies (transmission services but not integrated computer-communication services, no remote-computing services)…

Electronic funds transfer has not replaced money, as it turns out, because there were too many uncoordinated bank networks and too many unauthorized and inexplicable transfers of funds. Electronic message systems have not replaced mail, either, because there were too many uncoordinated governmental and commercial networks, with no network at all reaching people’s homes, and messages suffered too many failures of transfers…

Looking back on these two scenarios from the perspective of 2022, when we stand almost exactly as far beyond Lick’s watershed point as he stood before it, we can note with gratification that his more positive scenario turned out to be the more correct one; if some niceties such as computer speech recognition didn’t arrive quite on his time frame, the overall network ecosystem he described certainly did. We might be tempted to contemplate at this point that the J.C.R. Licklider of 1979 may have been older in some ways than his 64 years, being a man who had known as much failure as success over the course of a career spanning four and a half impossibly busy decades, and we might be tempted to ascribe his newfound willingness to acknowledge the pessimistic as well as the optimistic to these factors alone.

But I believe that to do so would be a mistake. It is disarmingly easy to fall into a mindset of inevitability when we consider the past, to think that the way things turned out are the only way they ever could have. In truth, the open Internet we are still blessed with today, despite the best efforts of numerous governments and corporations to capture and close it, may never have been a terribly likely outcome; we may just possibly have won an historical lottery. When you really start to dig into the subject, you find that there are countless junctures in the story where things could have gone very differently indeed.

Consider: way back in 1971, amidst the first rounds of fiscal austerity at ARPA, Larry Roberts grew worried about whether he would be able to convince his bosses to continue funding the fledgling ARPANET at all. Determined not to let it die, he entered into serious talks with AT&T about the latter buying the whole kit and caboodle. After months of back and forth, AT&T declined, having decided there just wasn’t any money to be made there. What would have happened if AT&T had said yes, and the ARPANET had fallen into the hands of such a corporation at this early date? Not only digital history but a hugely important part of recent human history would surely have taken a radically different course. There would not, for instance, have ever been a TCP/IP protocol to run the Internet if ARPA had washed their hands of the whole thing before Robert Kahn and Vint Cerf could create it.

And so it goes, again and again and again. It was a supremely unlikely confluence of events, personalities, and even national moods that allowed the ARPANET to come into being at all, followed by an equally unlikely collection of same that let its child the Internet survive down to the present day with its idealism a bit tarnished but basically intact. We spend a lot of time lamenting the horrific failures of history. This is understandable and necessary — but we should also make some time here and there for its crazy, improbable successes.



On October 4, 1985, J.C.R. Licklider finally retired from MIT for good. His farewell dinner that night had hundreds of attendees, all falling over themselves to pay him homage. Lick himself, now 70 years old and visibly infirm, accepted their praise shyly. He seemed most touched by the speakers who came to the podium late in the evening, after the big names of academia and industry: the group of students who had taken to calling themselves “Lick’s kids” — or, in hacker parlance, “lixkids.”

“When I was an undergraduate,” said one of them, “Lick was just a nice guy in a corner office who gave us all a wonderful chance to become involved with computers.”

“I’d felt I was the only one,” recalled another of the lixkids later. “That somehow Lick and I had this mystical bond, and nobody else. Yet during that evening I saw that there were 200 people in the room, 300 people, and that all of them felt that same way. Everybody Lick touched felt that he was their hero and that he had been an extraordinarily important person in their life.”

J.C.R. Licklider died on June 26, 1990, just as the networked future he had so fondly envisioned was about to become a tangible reality for millions of people, thanks to a confluence of three factors: an Internet that was descended from the original ARPANET, itself the realization of Lick’s own Intergalactic Computer Network; a new generation of cheap and capable personal computers that were small enough to sit on desktops and yet could do far more than the vast majority of the machines Lick had had a chance to work on; and a new and different way of navigating texts and other information spaces, known as hypertext theory. In the next article, we’ll see how those three things yielded the World Wide Web, a place as useful and enjoyable for the ordinary folks of the world as it is for computing’s intellectual elites. Lick, for one, wouldn’t have had it any other way.

(Sources: the books A Brief History of the Future: The Origins of the Internet by John Naughton, Where Wizards Stay Up Late: The Origins of the Internet by Katie Hafner and Matthew Lyon, Hackers: Heroes of the Computer Revolution by Steven Levy, From Gutenberg to the Internet: A Sourcebook on the History of Information Technology edited by Jeremy M. Norman, The Dream Machine by M. Mitchell Waldrop, A History of Modern Computing (2nd ed.) by Paul E. Ceruzzi, Communication Networks: A Concise Introduction by Jean Walrand and Shyam Parekh, Computing in the Middle Ages by Severo M. Ornstein, and The Computer Age: A Twenty-Year View edited by Michael L. Dertouzos and Joel Moses.)

Footnotes
1 In fact, Lick agreed to join Infocom’s board of directors, although his role there was a largely ceremonial one; he was not a gamer himself, and had little knowledge of or interest in the commercial market for home-computer games that had begun to emerge by the beginning of the 1980s. Still, everyone involved with the company remembers that he genuinely exulted at Infocom’s successes and commiserated with their failures, just as he did with those of all of his former students.

A Web Around the World, Part 8: The Intergalactic Computer Network

One could make a strong argument for the manned Moon landing and the Internet as the two greatest technological achievements of the second half of the twentieth century. Remarkably, the roots of both reach back to the same event — in fact, to the very same American government agency, hastily created in response to that event.


A replica of the Sputnik 1 satellite, the source of the beep heard round the world.

At dawn on October 5, 1957, a rocket blasted off from southern Kazakhstan. Just under half an hour later, at an altitude of about 140 miles, it detached its payload: a silver sphere the size of a beach ball, from which four antennas extended in vaguely insectoid fashion. Sputnik 1, the world’s first artificial satellite, began to send out a regular beep soon thereafter.

It became the beep heard round the world, exciting a consternation in the West such as hadn’t been in evidence since the first Soviet test of an atomic bomb eight years earlier. In many ways, this panic was even worse than that one. The nuclear test of 1949 had served notice that the Soviet Union had just about caught up with the West, prompting a redoubled effort on the part of the United States to develop the hydrogen bomb, the last word in apocalyptic weaponry. This effort had succeeded in 1952, restoring a measure of peace of mind. But now, with Sputnik, the Soviet Union had done more than catch up to the Western state of the art; it had surpassed it. The implications were dire. Amateur radio enthusiasts listened with morbid fascination to the telltale beep passing overhead, while newspaper columnists imagined the Soviets colonizing space in the name of communism and dropping bombs from there on the heads of those terrestrial nations who refused to submit to tyranny.

The Soviets themselves proved adept at playing to such fears. Just one month after Sputnik 1, they launched Sputnik 2. This satellite had a living passenger: a bewildered mongrel dog named Laika who had been scooped off the streets of Moscow. We now know that the poor creature was boiled alive in her tin can by the unshielded heat of the Sun within a few hours of reaching orbit, but it was reported to the world at the time that she lived fully six days in space before being euthanized by lethal injection. Clearly the Soviets’ plans for space involved more than beeping beach balls.

These events prompted a predictable scramble inside the American government, a circular firing squad of politicians, bureaucrats, and military brass casting aspersions upon one another as everyone tried to figure out how the United States could have been upstaged so badly. President Dwight D. Eisenhower delivered a major address just four days after Laika had become the first living Earthling to reach space (and to die there). He would remedy the crisis of confidence in American science and technology, he said, by forming a new agency that would report directly to the Secretary of Defense. It would be called the Advanced Research Projects Agency, or ARPA. Naturally, its foremost responsibility would be the space race against the Soviets.

But this mission statement for ARPA didn’t last very long. Many believed that to treat the space race as a purely military endeavor would be unwise; far better to present it to the world as a peaceful, almost utopian initiative, driven by pure science and the eternal human urge to explore. These cooler heads eventually prevailed, and as a result almost the entirety of ARPA’s initial raison d’être was taken away from it in the weeks after its formal creation in February of 1958. A brand new, civilian space agency called the National Aeronautics and Space Administration was formed to carry out the utopian mission of space exploration — albeit more quickly than the Soviets, if you please. ARPA was suddenly an agency without any obvious reason to exist. But the bills to create it had all been signed and office space in the Pentagon allocated, and so it was allowed to shamble on toward destinations that were uncertain at best. It became just another acronym floating about in the alphabet soup of government bureaucracy.

Big government having an inertia all its own, it remained that way for quite some time. While NASA captured headlines with the recruitment of its first seven human astronauts and the inauguration of a Project Mercury to put one of them into space, ARPA, the agency originally slated to have all that glory, toiled away in obscurity with esoteric projects that attracted little attention outside the Pentagon. ARPA had nothing whatsoever to do with computing until mid-1961. At that point — as the nation was smarting over the Soviets stealing its thunder once again, this time by putting a man into space before NASA could — ARPA was given four huge IBM mainframes, leftovers from the SAGE project which nobody knew what to do with, for their hardware design had been tailored for the needs of SAGE alone. The head of ARPA then was a man named Jack Ruina, who just happened to be an electrical engineer, and one who was at least somewhat familiar with the latest developments in computing. Rather than looking a gift horse — or a white elephant — in the mouth, he decided to take his inherited computers as a sign that this was a field where ARPA could do some good. He asked for and was given $10 million per year to study computer-assisted command-and-control systems — basically, for a continuation of the sort of work that the SAGE project had begun. Then he started looking around for someone to run the new sub-agency. He found the man he felt to be the ideal candidate in one J.C.R. Licklider.


J.C.R. Licklider

Lick was probably the most gifted intuitive genius I have ever known. When I would finally come to Lick with the proof of some mathematical relation, I’d discover that he already knew it. He hadn’t worked it out in detail. He just… knew it. He could somehow envision the way information flowed, and see relations that people who just manipulated the mathematical symbols could not see. It was so astounding that he became a figure of mystery to the rest of us. How the hell does Lick do it? How does he see these things? Talking with Lick about a problem amplified my own intelligence about 30 IQ points.

— William J. McGill, colleague of J.C.R. Licklider at MIT

Joseph Carl Robnett Licklider is one of history’s greatest rarities, a man who changed the world without ever making any enemies. Almost to a person, no one who worked with him had or has a bad word to say about him — not even those who stridently disagreed with him about the approach to computing which his very name came to signify. They prefer to wax rhapsodic about his incisive intellect, his endless good humor, his incomparable ability to inspire and motivate, and perhaps most of all his down-to-earth human kindness — not exactly the quality for which computer brainiacs are most known. He was the kind of guy who, when he’d visit the office soda machine, would always come back with enough Cokes for everyone. When he’d go to sharpen a pencil, he’d ask if anyone else needed theirs sharpened as well. “He could strike up a conversation with anybody,” remembered a woman named Louise Carpenter Thomas who worked with him early in his career. “Waitresses, bellhops, janitors, gardeners… it was a facility I marveled at.”

“I can’t figure it out,” she once told a friend. “He’s too… nice.” She soon decided he wasn’t too good to be true after all; she became his wife.

“Lick,” as he was universally known, wasn’t a hacker in the conventional sense. He was rather the epitome of a big-picture guy. Uninterested in the administrative details of the agencies he ostensibly led, and not much more interested in the nitty-gritty of programming or engineering, he excelled at creating an atmosphere that allowed other people to become their best selves and then setting a direction they could all pull toward. One might be tempted to call him a prototype of the modern Silicon Valley “disruptor,” except that he lacked the toxic narcissism of that breed of Steve Jobs wannabes. In fact, Lick was terminally modest. “If someone stole an idea from him,” said his wife Louise, “I’d pound the table and say it’s not fair, and he’d say, ‘It doesn’t matter who gets the credit. It matters that it gets done.’”

His unwillingness to blow his own horn is undoubtedly one of the contributing factors to Lick’s being one of the most under-recognized of computing’s pioneers. He published relatively little, both because he hated to write and because he genuinely loved to see one of his protégés recognized for fleshing out and popularizing one of his ideas. Yet the fact remains that his vision of computing’s necessary immediate future was actually far more prescient than that of many of his more celebrated peers.

To understand that vision and the ways in which it contrasted with that of some of his colleagues, we should begin with Lick’s background. Born in 1915 in St. Louis, Missouri, the son of a Baptist minister, he grew up a boy who was good at just about everything, from sports to mathematics to auto mechanics, but already had a knack for never making anyone feel jealous about it. After much tortured debate and a few abrupt changes of course at university, he finally settled on studying psychology, and was awarded his master’s degree in the field from St. Louis’s Washington University in 1938. According to his biographer M. Mitchell Waldrop, the choice of majors made all the difference in what he would go on to do.

Considering all that happened later, Lick’s youthful passion for psychology might seem like an aberration, a sideline, a long diversion from his ultimate career in computers. But in fact, his grounding in psychology would prove central to his very conception of computers. Virtually all the other computer pioneers of his generation would come to the field in the 1940s and 1950s with backgrounds in mathematics, physics, or electrical engineering, technological orientations that led them to focus on gadgetry — on making the machines bigger, faster, and more reliable. Lick was unique in bringing to the field a deep appreciation for human beings: our capacity to perceive, to adapt, to make choices, and to devise completely new ways of tackling apparently intricate problems. As an experimental psychologist, he found these abilities every bit as subtle and as worthy of respect as a computer’s ability to execute an algorithm. And that was why to him, the real challenge would always lie in adapting computers to the humans who used them, thereby exploiting the strengths of each.

Still, Lick might very well have remained a “pure” psychologist if the Second World War hadn’t intervened. His pre-war research focus had been the psychological processes of human hearing. After the war began, this led him to Harvard University’s Psycho-Acoustic Laboratory, where he devised technologies to allow bomber crews to better communicate with one another inside their noisy airplanes. Thus he found the focus that would mark the rest of his career: the interaction between humans and technology. After moving to MIT in 1950, he joined the SAGE project, where he helped to design the user interface — not that the term yet existed! — which allowed the SAGE ground controllers to interact with the display screens in front of them; among his achievements here was the invention of the light pen. Having thus been bitten by the computing bug, he moved on in 1957 to Bolt Beranek and Newman, a computing laboratory and think tank with close ties to MIT.

He was still there in 1960, when he published perhaps the most important of all his rare papers, a piece entitled “Man-Computer Symbiosis,” in the journal Transactions on Human Factors in Electronics. In order to appreciate what a revolutionary paper it was, we should first step back to look at the view of computing to which it was responding.

The most typical way of describing computers in the mass media of the time was as “giant brains,” little different in qualitative terms from those of humans. This conception of computing would soon be all over pop culture — for example, in the rogue computers that Captain Kirk destroyed on almost a monthly basis on Star Trek, or in the computer HAL 9000, the villain of 2001: A Space Odyssey. A large number of computer researchers who probably ought to have known better subscribed to a more positive take on essentially the same view. Their understanding was that, if artificial intelligence wasn’t yet up to human snuff, it was only a matter of time. These proponents of “strong AI,” such as Stanford University’s John McCarthy and MIT’s own Marvin Minsky, were already declaring by the end of the 1950s that true computer consciousness was just twenty years away. (This would eventually lead to a longstanding joke in hacker culture, that strong AI is always exactly two decades away…) Even such an undeniable genius as Alan Turing, who had been dead six years already when Lick published his paper, had spent much effort devising a “Turing test” that could serve as a determiner of true artificial intelligence, and had attempted to teach a computer to play chess as a sort of opening proof of concept.

Lick, on the other hand, well recognized that to use the completely deterministic and algorithm-friendly game of chess for that purpose was not quite honest; a far better demonstration of artificial intelligence would be a computer that could win at poker, what with all of the intuition and social empathy that game required. But rather than chase such chimeras at all, why not let computers do the things they already do well and let humans do likewise, and teach them both to work together to accomplish things neither could on their own? Many of computing’s leading theorists, Lick implied, had developed delusions of grandeur, moving with inordinate speed from computers as giant calculators for crunching numbers to computers as sentient beings in their own right. They didn’t have to become the latter, Lick understood, to become one of the most important tools humanity had ever invented for itself; there was a sweet spot in between the two extremes. He chose to open his paper with a metaphor from the natural world, describing how fig trees are pollinated by the wasps which feed upon their fruit. “The tree and the insect are thus heavily interdependent,” he wrote. “The tree cannot reproduce without the insect; the insect cannot eat without the tree; they constitute not only a viable but a productive and thriving partnership.” A symbiosis, in other words.

A similar symbiosis could and should become the norm in human-computer interactions, with the humans always in the catbird seat as the final deciders — no Star Trek doomsday scenarios here.

[Humans] will set the goals and supply the motivations. They will formulate hypotheses. They will ask questions. They will think of mechanisms, procedures, and models. They will define criteria and serve as evaluators, judging the contributions of the equipment and guiding the general line of thought. The information-processing equipment, for its part, will convert hypotheses into testable models and then test the models against the data. The equipment will answer questions. It will simulate the mechanisms and models, carry out the procedures, and display the results to the operator. It will transform data, plot graphs. [It] will interpolate, extrapolate, and transform. It will convert static equations or logical statements into dynamic models so that the human operator can examine their behavior. In general, it will carry out the routinizable, clerical operations that fill the intervals between decisions.

Perhaps in a bid not to offend his more grandiose colleagues, Lick did hedge his bets on the long-term prospects for strong artificial intelligence. It might very well arrive at some point, he said, although he couldn’t say whether that would take ten years or 500 years. Regardless, the years before its arrival “should be intellectually and creatively the most exciting in the history of mankind.”

In the end, however, even Lick’s diplomatic skills would prove insufficient to smooth out the differences between two competing visions of computing. By the end of the 1960s, the argument would literally split MIT’s computer-science research in two. One part would become the AI Lab, dedicated to artificial intelligence in its most expansive form; the other, known as the Dynamic Modeling Group, would take its mission statement as well as its name almost verbatim from Lick’s 1960 paper. For all that some folks still love to talk excitedly and/or worriedly of a “Singularity” after which computer intelligence will truly exceed human intelligence in all its facets, the way we actually use computers today is far more reflective of J.C.R. Licklider’s vision than that of Marvin Minsky or John McCarthy.

But all of that lay well in the future at the dawn of the 1960s. Viewing matters strictly through the lens of that time, we can now begin to see why Jack Ruina at ARPA found J.C.R. Licklider and the philosophy of computing he represented so appealing. Most of the generals and admirals Ruina talked to were much like the general public; they still thought of computers as giant brains that would crunch a bunch of data and then unfold for them the secrets of the universe — or at least of the Soviets. “The idea was that you take this powerful computer and feed it all this qualitative information, such as ‘the air-force chief drank two martinis’ or ‘Khrushchev isn’t reading Pravda on Mondays,'” laughed Ruina later. “And the computer would play Sherlock Holmes and reveal that the Russians must be building an MX-72 missile or something like that.” Such hopes were, as Lick put it to Ruina at their first meeting, “asinine.”

SAGE existed already as a shining example of Lick’s take on computers — computers as aids to rather than replacements for human intelligence. Ruina was well aware that command-and-control was one of the most difficult problems in warfare; throughout history, it has often been the principal reason that wars are won or lost. Just imagine what SAGE-like real-time information spaces could do for the country’s overall level of preparedness if spread throughout the military chain of command…

On October 1, 1962, following a long courtship on the part of Ruina, Lick officially took up his new duties in a small office in the Pentagon. Like Lick himself, Ruina wasn’t much for micromanagement; he believed in hiring smart people and stepping back to let them do their thing. Thus he turned over his $10 million per year to Lick with basically no strings attached. Just find a way to make interactive computing better, he told him, preferably in ways useful to the military. For his part, Lick made it clear that “I wasn’t doing battle planning,” as he later remembered. “I was doing the technical substrate that would one day support battle planning.” Ruina said that was just fine with him. Lick had free rein.

Ironically, he never did do much of anything with the leftover SAGE computers that had gotten the whole ball rolling; they were just too old, too specialized, too big. Instead he set about recruiting the smartest people he knew of to do research on the government’s dime, using the equipment found at their own local institutions.

If I tried to describe everything these folks got up to here, I would get hopelessly sidetracked. So, we’ll move directly to ARPA’s most famous computing project of all. A Licklider memo dated April 25, 1963, is surely one of the most important of its type in all of modern history. For it was here that Lick first made his case for a far-flung general-purpose computer network. The memo was addressed to “members and affiliates of the Intergalactic Computer Network,” which serves as an example of Lick’s tendency to avoid sounding too highfalutin by making the ideas about which he felt most strongly sound a bit ridiculous instead. Strictly speaking, the phrase “Intergalactic Computer Network” didn’t apply to the thing Lick was proposing; the network in question here was rather the human network of researchers that Lick was busily assembling. Nevertheless, a computer network was the topic of the memo, and its salutation and its topic would quickly become conflated. Before it became the Internet, even before it became the ARPANET, everyone would call it the Intergalactic Network.

In the memo, Lick notes that ARPA is already funding a diverse variety of computing projects at an almost equally diverse variety of locations. In the interest of not reinventing the wheel, it would make sense if the researchers involved could share programs and data and correspond with one another easily, so that every researcher could benefit from the efforts of the others whenever possible. Therefore he proposes that all of their computers be tied together on a single network, such that any machine can communicate at any time with any other machine.

Lick was careful to couch his argument in the immediate practical benefits it would afford to the projects under his charge. Yet it arose from more abstract discussions that had been swirling around MIT for years. Lick’s idea of a large-scale computer network was in fact inextricably bound up with his humanist vision for computing writ large. In a stunningly prescient article published in the May 1964 issue of Atlantic Monthly, Martin Greenberger, a professor with MIT’s Sloan School of Management, made the case for a computer-based “information utility” — essentially, for the modern Internet, which he imagined arriving at more or less exactly the moment it really did become an inescapable part of our day-to-day lives. In doing all of this, he often seemed to be parroting Lick’s ideology of better living through human-computer symbiosis, to the point of employing many of the same idiosyncratic word choices.

The range of application of the information utility includes medical-information systems for hospitals and clinics, centralized traffic controls for cities and highways, catalogue shopping from a convenient terminal at home, automatic libraries linked to home and office, integrated management-control systems for companies and factories, teaching consoles in the classroom, research consoles in the laboratory, design consoles in the engineering firm, editing consoles in the publishing office, [and] computerized communities.

Barring unforeseen obstacles, an online interactive computer service, provided commercially by an information utility, may be as commonplace by 2000 AD as a telephone service is today. By 2000 AD, man should have a much better comprehension of himself and his system, not because he will be innately any smarter than he is today, but because he will have learned to use imaginatively the most powerful amplifier of intelligence yet devised.

In 1964, the idea of shopping and socializing through a home computer “sounded a bit like working a nuclear reactor in your home,” as M. Mitchell Waldrop writes. Still, there it was — and Greenberger’s uncannily accurate predictions almost certainly originated with Lick.

Lick himself, however, was about to step back and entrust his dream to others. In September of 1964, he resigned from his post in the Pentagon to accept a job with IBM. There were likely quite a number of factors behind this decision, which struck many of his colleagues at the time as perplexing as it strikes us today. As we’ve seen, he was not a hardcore techie, and he may have genuinely believed that a different sort of mind would do a better job of managing the projects he had set in motion at ARPA. Meanwhile his family wasn’t overly thrilled at life in their cramped Washington apartment, the best accommodations his government salary could pay for. IBM, on the other hand, compensated its senior employees very generously — no small consideration for a man with two children close to university age. After decades of non-profit service, he may have seen this, reasonably enough, as his chance to finally cash in. Lastly and perhaps most importantly, he probably truly believed that he could do a lot of good for the world at IBM, by convincing this most powerful force in commercial computing to fully embrace his humanistic vision of computing’s potential. That wouldn’t happen in the end; his tenure there would prove short and disappointing. He would find the notoriously conservative culture of IBM impervious to his charms, a thoroughly novel experience for him. But of course he couldn’t know that prior to the fact.

Lick’s successor at ARPA was Ivan Sutherland, a young man of just 26 years who had recently created a sensation at MIT with his PhD project, a program called Sketchpad that let a user draw arbitrary pictures on a computer screen using one of the light pens that Lick had helped to invent for SAGE. But Sutherland proved no more interested in the details of administration than Lick had been, even as he demonstrated why a more typical hacker type might not have been the best choice for the position after all, being too fixated on his own experiments with computer graphics to have much time to inspire and guide others. Lick’s idea for a large-scale computer network lay moribund during his tenure. It remained so for almost two full years in all, until Sutherland too left what really was a rather thankless job. His replacement was one Robert Taylor. Critically, this latest administrator came complete with Lick’s passion for networking, along with something of his genius for interpersonal relations.


Robert Taylor, as photographed by Annie Leibovitz in 1972 for a Rolling Stone feature article on Xerox PARC, his destination after leaving ARPA.

Coming across as a veritable stereotype of a laid-back country boy, right down to his laconic Texan accent, Robert Taylor was a disarmingly easy man to underestimate. He was born seventeen years after Lick, but there were some uncanny similarities in their backgrounds. Taylor too grew up far from the intellectual capitals of the nation as the son of a minister. Like Lick, he gradually lost his faith in the course of trying to decide what to do with his life, and like Lick he finally settled on psychology. More or less, anyway; he graduated from the University of Texas at age 25 in 1957 with a bachelor’s degree in psychology and minors in mathematics, philosophy, English, and religion. He was working at Martin Marietta in a “stopgap” job in the spring of 1960, when he stumbled across Lick’s article on human-computer symbiosis. It changed his life. “Lick’s paper opened the door for me,” he says. “Over time, I became less and less interested in brain research, and more and more heartily subscribed to the Licklider vision of interactive computing.” The interest led him to NASA the following year, where he helped to design the displays used by ground controllers on the Mercury, Gemini, and Apollo manned-spaceflight programs. In early 1965, he moved to ARPA as Sutherland’s deputy, then took over Sutherland’s job following his departure in June of 1966.

In the course of it all, Taylor got to talk with Lick himself on many occasions. Unsurprisingly given the similarities in their backgrounds and to some extent in their demeanors, the two men hit it off famously. Soon Taylor felt the same zeal that his mentor did for a new, unprecedentedly large and flexible computer network. And once he found himself in charge of ARPA’s computer-research budget, he was in a position to do something about it. He was determined to make Lick’s Intergalactic Network a reality.

Alas, instilling the same determination in the researchers working with ARPA would not be easy. Many of them would later be loath to admit their reluctance, given that the Intergalactic Network would prove to be one of the most important projects in the entire history of computing, but it was there nonetheless. Severo Ornstein, who was working at Lick’s old employer of Bolt Beranek and Newman at this time, confesses to a typical reaction: “Who would want such a thing?” Computer cycles were a precious resource in those days, a commodity which researchers coveted for their personal use as much as Scrooge coveted his shillings. Almost no one was eager to share their computers with people in other cities and states. The strong AI contingent under Minsky and McCarthy, whose experiments not coincidentally tended to be especially taxing on a computer’s resources, were among the loudest objectors. It didn’t help matters that Taylor suffered from something of a respect deficit. Unlike Lick and Sutherland before him, he wasn’t quite of this group of brainy and often arrogant cats which he was attempting to herd, having never made a name for himself through research at one of their universities — indeed, lacking even the all-important initials “PhD” behind his name.

But Bob Taylor shared one more similarity with J.C.R. Licklider: he was all about making good things happen, not about taking credit for them. If the nation’s computer researchers refused to take him seriously, he would find someone else whom they couldn’t ignore. He settled on Larry Roberts, an MIT veteran who had helped Sutherland with Sketchpad and done much groundbreaking work of his own in the field of computer graphics, such as laying the foundation for the compressed file formats that are used to shuffle billions of images around the Internet today. Roberts had been converted by Lick to the networking religion in November of 1964, when the two were hanging out in a bar after a conference. Roberts:

The conversation was, what was the future? And Lick, of course, was talking about his concept of an Intergalactic Network.

At that time, Ivan [Sutherland] and I had gone farther than anyone else in graphics. But I had begun to realize that everything I did was useless to the rest of the world because it was on the TX-2, and that was a unique machine. The TX-2, [the] CTSS, and so forth — they were all incompatible, which made it almost impossible to move data. So everything we did was almost in isolation. The only thing we could do to get the stuff out into the world was to produce written technical papers, which was a very slow process.

It seemed to me that civilization would change if we could move all this [over a network]. It would be a whole new way of sharing knowledge.

The only problem was that Roberts had no interest in becoming a government bureaucrat. So Taylor, whose drawl masked a steely resolve when push came to shove, did what he had to in order to get his man. He went to the administrators of MIT and Lincoln Lab, which were heavily dependent on government funding, and strongly hinted that said funding might be contingent on one member of their staff stepping away from his academic responsibilities for a couple of years. Before 1966 was out, Larry Roberts reported for duty at the Pentagon, to serve as the technical lead of what was about to become known as the ARPANET.

In March of 1967, as the nation’s adults were reeling from the fiery deaths of three Apollo astronauts on the launchpad and its youth were ushering in the Age of Aquarius, Taylor and Roberts brought together 25 or so of the most brilliant minds in computing in a University of Michigan classroom in the hope of fomenting a different sort of revolution. Despite the addition of Roberts to the networking cause, most of them still didn’t want to be there, and thought this ARPANET business a waste of time. They arrived all too ready to voice objections and obstacles to the scheme, of which there was no shortage.

The computers that Taylor and Roberts proposed to link together were a motley crew by any standard, ranging from the latest hulking IBM mainframes to mid-sized machines from companies like DEC to bespoke hand-built jobs. The problem of teaching computers from different manufacturers — or even different models of computer from the same manufacturer — to share data with one another had only recently been taken up in earnest. Even moving text from one machine to another could be a challenge; it had been just half a decade since a body called the American Standards Association had approved a standard way of encoding alphanumeric characters as binary numbers, constituting the computer world’s would-be equivalent to Morse Code. Known as the American Standard Code for Information Interchange, or ASCII, it was far from universally accepted, with IBM in particular clinging obstinately to an alternative, in-house-developed system known as the Extended Binary Coded Decimal Interchange Code, or EBCDIC. Uploading a text file generated on a computer that used one standard to a computer that used the other would result in gibberish. How were such computers to talk to one another?

The ARPANET would run on ASCII, Taylor and Roberts replied. Those computers that used something else would just have to implement a translation layer for communicating with the outside world.
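The mismatch is easy to demonstrate in modern terms. The sketch below uses Python, which happens to ship an EBCDIC codec (“cp037,” one common variant); the sample text is invented, but it shows how the same characters produce entirely different bytes under the two encodings, and how a “translation layer” amounts to nothing more than a decode-and-re-encode pass:

```python
# The same text under ASCII and under one common EBCDIC variant (cp037):
text = "HELLO, ARPANET"

ascii_bytes = text.encode("ascii")
ebcdic_bytes = text.encode("cp037")

# The two byte streams have nothing in common; reading one as if it were
# the other produces exactly the gibberish described above.
assert ascii_bytes != ebcdic_bytes

# A "translation layer" is simply a decode-and-re-encode pass:
translated = ebcdic_bytes.decode("cp037").encode("ascii")
assert translated == ascii_bytes
```

In ASCII the letter “A” is byte 0x41, while in cp037 EBCDIC it is 0xC1 — which is why a file uploaded without translation came out as nonsense.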

Fair enough. But then, how was the physical cabling to work? ARPA couldn’t afford to string its own wires all over the country, and the ones that already existed were designed for telephones, not computers.

No problem, came the reply. ARPA would be able to lease high-capacity lines from AT&T, and Claude Shannon had long since taught them all that information was information. Naturally, there would be some degree of noise on the lines, but error-checking protocols were by now commonplace. Tests had shown that one could push information down one of AT&T’s best lines at a rate of up to 56,000 baud before the number of corrupted packets reached a point of diminishing returns. So, this was the speed at which the ARPANET would run.

The next objection was the gnarliest. At the core of the whole ARPANET idea lay the stipulation that any computer on the network must be able to talk to any other, just like any telephone was able to ring up any other. But existing wide-area computer networks, such as the ones behind SAGE and Sabre, all operated on the railroad model of the old telegraph networks: each line led to exactly one place. To use the same approach as existing telephone networks, with individual computers constantly dialing up one another through electro-mechanical switches, would be way too inefficient and inflexible for a high-speed data network such as this one. Therefore Taylor and Roberts had another approach in mind.

We learned in the last article about R.W. Hamming’s system of error correction, which worked by sending information down a line as a series of packets, each followed by a checksum. In 1964, in a book entitled simply Communication Nets, an MIT researcher named Leonard Kleinrock extended the concept. There was no reason, he noted, that a packet couldn’t contain additional meta-information beyond the checksum. It could, for example, contain the destination it was trying to reach on a network. This meta-information could be used to pass it from hand to hand through the network in the same way that the postal system used the address on the envelope of a paper letter to guide it to its intended destination. This approach to data transfer over a network would soon become known as “packet switching,” and would prove of incalculable importance to the world’s digital future.[1]

[1] As Kleinrock himself would hasten to point out, he was not the sole originator of the concept, which has a long and somewhat convoluted history as a theory. His book was, however, the way that many or most of the folks behind the ARPANET first encountered packet switching.
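The essence of the idea fits in a few lines of modern code. Here is a toy packet carrying a destination address alongside a simple additive checksum; the field names and the checksum scheme are invented for illustration, not drawn from any historical protocol:

```python
def make_packet(dest: str, payload: bytes) -> dict:
    """Bundle a payload with its destination and a checksum, so that any
    intermediate node can forward or verify it without understanding it."""
    return {
        "dest": dest,
        "payload": payload,
        "checksum": sum(payload) % 256,  # a simple additive checksum
    }

def verify(packet: dict) -> bool:
    """Recompute the checksum and compare it to the one sent along."""
    return sum(packet["payload"]) % 256 == packet["checksum"]

pkt = make_packet("UCLA", b"LOGIN")
assert verify(pkt)

pkt["payload"] = b"LOGIM"  # a single byte corrupted in transit...
assert not verify(pkt)     # ...is caught when the checksums disagree
```

The point of Kleinrock’s extension is the `dest` field: because the address travels with the data, no node needs a dedicated line to the destination — it only needs to know which neighbor to hand the packet to next.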

A “star” network topology, in which every computer communicates with every other by passing packets through a single “Grand Central Station.”

How exactly might packet switching work on the ARPANET? At first, Taylor and Roberts had contemplated using a single computer as a sort of central postal exchange. Every other computer on the ARPANET would be wired into this machine, whose sole duty would be to read the desired destination of each incoming packet and send it there. But the approach came complete with a glaring problem: if the central hub went down for any reason, it would take the whole ARPANET down with it.

A “distributed” network topology in which all of the computers work together to move messages through the system. It lacks a single point of failure, but is much more complicated to implement from a technical perspective.

Instead Taylor and Roberts settled on a radically de-centralized approach. Each computer would be directly connected to no more than a handful of other machines. When it received a packet from one of them, it would check the address. If it was not the intended final destination, it would consult a logical map of the network and send the packet along to the peer computer able to get it there most efficiently; then it would forget all about it and go about its own business again. The advantage of the approach was that, if any given computer went down, the others could route their way around it until it came online again. Thus there would be no easy way to “break” the ARPANET, since there would be no single point of failure. This quality of being de-centralized and self-correcting remains the most important of all the design priorities of the modern Internet.
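The forwarding logic just described can be sketched as follows. Each node consults its map of the network and hands a packet to whichever neighbor lies on the shortest live path to the destination; the site names and topology here are invented for illustration:

```python
from collections import deque

# An invented five-node topology; each entry lists a node's direct peers.
links = {
    "MIT":     ["BBN", "Lincoln"],
    "BBN":     ["MIT", "UCLA"],
    "Lincoln": ["MIT", "UCLA"],
    "UCLA":    ["BBN", "Lincoln", "SRI"],
    "SRI":     ["UCLA"],
}

def next_hop(src, dest, down=frozenset()):
    """Breadth-first search over the live nodes; returns the neighbor of
    src that lies on a shortest path to dest, or None if unreachable."""
    queue = deque([(src, None)])  # (current node, first hop taken)
    seen = {src}
    while queue:
        node, first = queue.popleft()
        if node == dest:
            return first
        for peer in links[node]:
            if peer not in seen and peer not in down:
                seen.add(peer)
                queue.append((peer, first or peer))
    return None

# Normally MIT reaches SRI through BBN...
assert next_hop("MIT", "SRI") == "BBN"
# ...but if BBN goes down, the network simply routes around it:
assert next_hop("MIT", "SRI", down={"BBN"}) == "Lincoln"
```

A real node would of course rebuild its map dynamically as peers came and went; this static lookup only illustrates the principle that no single failure can sever the whole network.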

Everyone at the meeting could agree that all of this was quite clever, but they still weren’t won over. The naysayers’ arguments still hinged on how precious computing horsepower was. Every nanosecond a computer spent acting as an electronic postal sorter was a nanosecond that computer couldn’t spend doing other sorts of more useful work. For once, Taylor and Roberts had no real riposte for this concern, beyond vague promises to invest ARPA funds into more and better computers for those who had need of them. Then, just as the meeting was breaking up, with skepticism still hanging palpably in the air, a fellow named Wesley Clark passed a note to Larry Roberts, saying he thought he had a solution to the problem.

It seemed to him, he elaborated to Taylor and Roberts after the meeting, that running the ARPANET straight through all of its constituent machines was rather like running an interstate highway system right through the center of every small town in the country. Why not make the network its own, largely self-contained thing, connected to each computer it served only by a single convenient off- and on-ramp? Instead of asking the computer end-users of the ARPANET to also direct its flow of traffic, one could use dedicated machines as the traffic wardens on the highway itself. These “Interface Message Processors,” or IMPs, would be able to move packets through the system quickly, without taxing the other computers. And they too could allow for a non-centralized, fail-safe network if they were set up the right way. Today IMPs are known as routers, but the principle of their operation remains the same.

A network that uses the IMPs proposed by Wesley Clark. Each IMP sits at the center of a cluster of computers, and is also able to communicate with its peers to send messages to computers on other clusters. A failed IMP actually can take a substantial chunk of the network offline under the arrangement shown here, but redundant IMPs and connections between them all could and eventually would be built into the design.

When Wesley Clark spoke, people listened; his had been an important voice in hacker circles since the days of MIT’s Project Whirlwind. Taylor and Roberts immediately saw the wisdom in his scheme.

The advocacy of the highly respected Clark, combined with the promise that the ARPANET need not cost them computer cycles if it used his approach, was enough to finally bring most of the rest of the research community around. Over the months that followed, while Taylor and Roberts worked out a project plan and budget, skepticism gradually morphed into real enthusiasm. J.C.R. Licklider had by now left IBM and returned to the friendlier confines of MIT, whence he continued to push the ARPANET behind the scenes. The younger generation coming up behind the old guard, especially, tended to be less enamored of the “giant brain” model of computing and more receptive to Lick’s vision, and thus to the nascent ARPANET. “We found ourselves imagining all kinds of possibilities [for the ARPANET],” remembers one Steve Crocker, a UCLA graduate student at the time. “Interactive graphics, cooperating processes, automatic database query, electronic mail…”

In the midst of the building buzz, Lick and Bob Taylor co-authored an article which appeared in the April 1968 issue of the journal Science and Technology. Appropriately entitled “The Computer as a Communications Device,” it included Lick’s most audacious and uncannily accurate prognostications yet, particularly when it came to the sociology, if you will, of its universal computer network of the future.

What will online interactive communities be like? They will consist of geographically separated members. They will be communities not of common location but of common interest [emphasis original]…

Each secretary’s typewriter, each data-gathering instrument, conceivably each Dictaphone microphone, will feed into the network…

You will not send a letter or a telegram; you will simply identify the people whose files should be linked to yours — and perhaps specify a coefficient of urgency. You will seldom make a telephone call; you will ask the network to link your consoles together…

You will seldom make a purely business trip because linking consoles will be so much more efficient. You will spend much more time in computer-facilitated teleconferences and much less en route to meetings…

Available within the network will be functions and services to which you subscribe on a regular basis and others that you call for when you need them. In the former group will be investment guidance, tax counseling, selective dissemination of information in your field of specialization, announcement of cultural, sport, and entertainment events that fit your interests, etc. In the latter group will be dictionaries, encyclopedias, indexes, catalogues, editing programs, teaching programs, testing programs, programming systems, databases, and — most important — communication, display, and modeling programs…

When people do their informational work “at the console” and “through the network,” telecommunication will be as natural an extension of individual work as face-to-face communication is now. The impact of that fact, and of the marked facilitation of the communicative process, will be very great — both on the individual and on society…

Life will be happier for the online individual because the people with whom one interacts most strongly will be selected more by commonality of interests and goals than by accidents of proximity. There will be plenty of opportunity for everyone (who can afford a console) to find his calling, for the whole world of information, with all its fields and disciplines, will be open to him…

For the society, the impact will be good or bad, depending mainly on the question: Will “to be online” be a privilege or a right? If only a favored segment of the population gets to enjoy the advantage of “intelligence amplification,” the network may exaggerate the discontinuity in the spectrum of intellectual opportunity…

On the other hand, if the network idea should prove to do for education what a few have envisioned in hope, if not in concrete detailed plan, and if all minds should prove to be responsive, surely the boon to humankind would be beyond measure…

The dream of a nationwide, perhaps eventually a worldwide web of computers fostering a new age of human interaction was thus laid out in black and white. The funding to embark on at least the first stage of that grand adventure was also there, thanks to the largess of the Cold War military-industrial complex. And solutions had been proposed for the thorniest technical problems involved in the project. Now it was time to turn theory into practice. It was time to actually build the Intergalactic Computer Network.

(Sources: the books A Brief History of the Future: The Origins of the Internet by John Naughton, Where Wizards Stay Up Late: The Origins of the Internet by Katie Hafner and Matthew Lyon, Hackers: Heroes of the Computer Revolution by Steven Levy, From Gutenberg to the Internet: A Sourcebook on the History of Information Technology edited by Jeremy M. Norman, The Dream Machine by M. Mitchell Waldrop, A History of Modern Computing (2nd ed.) by Paul E. Ceruzzi, Communication Networks: A Concise Introduction by Jean Walrand and Shyam Parekh, Communication Nets by Leonard Kleinrock, and Computing in the Middle Ages by Severo M. Ornstein. Online sources include the companion website to Where Wizards Stay Up Late and “The Computers of Tomorrow” by Martin Greenberger on The Atlantic Online.)

Footnotes
1 As Kleinrock himself would hasten to point out, he was not the sole originator of the concept, which has a long and somewhat convoluted history as a theory. His book was, however, the way that many or most of the folks behind the ARPANET first encountered packet switching.
A Web Around the World, Part 7: Computers On the Wire

The world’s first digital network actually predates the world’s first computer, in the sense that we understand the word “computer” today.

It began with a Bell Labs engineer named George Stibitz, who worked on the electro-mechanical relays that were used to route telephone calls. One evening in late 1937, he took a box of parts home with him and started to put together on his kitchen table a contraption that distinctly resembled the one that Claude Shannon had recently described in his MIT master’s thesis. By the summer of the following year, it worked well enough that Stibitz took it to the office to show it around. In a testament to the spirit of freewheeling innovation that marked life at Bell Labs, his boss promptly told him to take a break from telephone switches and see if he could turn it into a truly useful calculating machine. The result emerged fifteen months later as the Complex Computer, made from some 450 telephone relays and many other off-the-shelf parts from telephony’s infrastructure. It was slow, as all machines of its electro-mechanical ilk inevitably were: it needed about a minute to multiply two eight-digit numbers together. And it was not quite as capable as the machine Shannon had described in print: it had no ability to make decisions at branch points, only to perform rote calculations. But it worked.

It is a little unclear to what extent the Complex Computer was derived from Shannon’s paper. Stibitz gave few interviews during his life. To my knowledge he never directly credited Shannon as his inspiration, but neither was he ever quizzed in depth about the subject. It strikes me as reasonable to grant that his initial explorations may have been entirely serendipitous, but one has to assume that he became aware of the Shannon paper after the Complex Computer became an official Bell Labs project; the paper was, after all, being widely disseminated and discussed at that time, and even the most cursory review of existing literature would have turned it up.

At any rate, another part of the Complex Computer project most definitely was completely original. Stibitz’s managers wanted to make the machine available to Bell and AT&T employees working all over the country. At first glance, this would seem to entail making a lot more Complex Computers, at considerable cost, even though the individual offices that received them would need to make use of them only occasionally. Might there be a better way, Stibitz wondered. Might it be possible to let the entire country share a single machine instead?

Stibitz enlisted a more experienced switching engineer named Samuel B. Williams, who figured out how to connect the Complex Computer to a telegraph line. By this point, telegraphy’s old manually operated Morse keys had long since been replaced by teletype machines that looked and functioned like typewriters, doing the grunt work of translating letters into Morse Code for the operator; similarly, the various arcane receiving mechanisms of old had been replaced by a teleprinter.

The world’s first digital network made its debut in September of 1940, at a meeting of the American Mathematical Society that was held at Dartmouth College in New Hampshire. The attendees were given the chance to type out mathematical problems on the teletype, which sent them up the line as Morse Code to the Complex Computer installed at Bell Labs’s facilities in New York City. The latter translated the dots and dashes of Morse Code into numbers, performed the requested calculations, and sent the results back to Dartmouth, where they duly appeared on the teleprinter. The tectonic plates subtly shifted on that sunny September afternoon, while the assembled mathematicians nodded politely, with little awareness of the importance of what they were witnessing. The computer networks of the future would be driven by a binary code known as ASCII rather than Morse Code, but the principle behind them would be the same.
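The modern half of that comparison is easy to sketch: ASCII assigns every character a fixed seven-bit number, so any text can travel down a wire as nothing but on-off pulses, just as the Dartmouth problems traveled as dots and dashes. The two helper functions below are a hypothetical illustration, not part of any historical system.

```python
# Illustration only: encoding a message to ASCII bit strings and back,
# the modern counterpart of the Morse Code used in the 1940 demonstration.
def to_bits(text):
    return [format(ord(ch), "07b") for ch in text]  # 7-bit ASCII code per character

def from_bits(bits):
    return "".join(chr(int(b, 2)) for b in bits)

wire = to_bits("2+3i")        # a complex number, fittingly enough
print(wire)                   # ['0110010', '0101011', '0110011', '1101001']
print(from_bits(wire))        # 2+3i
```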

As it happened, Stibitz and Williams never took their invention much further; it never did become a part of Bell’s everyday operations. The war going on in Europe was already affecting research priorities everywhere, and was soon to make the idea of developing a networked calculating device simply for the purpose of making civilian phone networks easier to install and repair seem positively quaint. In fact, the Complex Computer was destined to go down in history as the last of its breed: the last significant blue-sky advance in American computing for a long time to come that wasn’t driven by the priorities and the funding of the national-security state.

That reality would give many of the people who worked in the field pause, for their own worldviews would not always be in harmony with those of the generals and statesmen who funded their projects in the cause of winning actual or hypothetical wars, with all the associated costs in human suffering and human lives. Nevertheless, as a consequence of this (Faustian?) bargain, the early-modern era of computers and computer networks in the United States is almost the polar opposite of that of telegraphy and telephony in an important sense: rather than being left to the private sphere, computing at the cutting edge became a non-profit, government-sponsored activity. The ramifications of this were and remain enormous, yet have become so embedded in the way we see computing writ large that we seldom consider them. Government funding explains, for example, why the very concept of a modern digital computer was never locked up behind a patent like the telegraph and the telephone were. Perhaps it even explains in a roundabout way why the digital computer has no single anointed father figure, no equivalent to a Samuel Morse or Alexander Graham Bell — for the people who made computing happen were institutionalists, not lone-wolf inventors.

Most of all, though, it explains why the World Wide Web, when it finally came to be, was designed to be open in every sense of the word, easily accessible from any computer that implements its well-documented protocols. Even today, long after the big corporations have moved in, a spirit of egalitarianism and idealism underpins the very technical specifications that make the Internet go. Had the moment when the technology was ripe to create an Internet not corresponded with the handful of decades in American history when the federal government was willing and able to fund massive technological research projects of uncertain ultimate benefit, the world we live in would be a very different place.


Programming ENIAC.

There is plenty of debate surrounding the question of the first “real” computer in the modern sense of the word, with plenty of fulsome sentiment on display from the more committed partisans. Some point to the machines built by Konrad Zuse in Nazi Germany in the midst of World War II, others to the ones built by the British code breakers at Bletchley Park around the same time. But the consensus, establishment choice has long been and still remains the American “Electronic Numerical Integrator and Computer,” or ENIAC. It was designed primarily by the physicist John Mauchly and the electrical engineer J. Presper Eckert at the University of Pennsylvania, and was funded by the United States Army for the purpose of calculating the ideal firing trajectories of artillery shells. Because building it was largely a process of trial and error from the time that the project was officially launched on June 1, 1943, it is difficult to give a precise date when ENIAC “worked” for the first time. It is clear, however, that it wasn’t able to do the job the Army expected of it until after the war that had prompted its creation was over. ENIAC wasn’t officially accepted by the Army until July of 1946.

ENIAC’s claim to being the first modern computer rests on the fact that it was the first machine to combine two key attributes: it was purely electrical rather than electro-mechanical — no clanking telephone relays here! — and it was Turing complete. The latter quality requires some explanation.

First defined by the British mathematician and proto-computer scientist Alan Turing in the 1930s, the phrase “Turing complete” describes a machine that is able to store numerical data in internal memory of some sort, perform calculations and transformations upon that data, and make conditional jumps in the program it is running based upon the results. Anyone who has ever programmed a computer of the present day is familiar with branching decision points such as BASIC’s “if, then” construction — if such-and-such is the case, then do this — as well as loops such as its “for, next” construction, which are used to repeat sections of a program multiple times. The ability to write such statements and see them carried out means that one is working on a Turing-complete computer. ENIAC was the first purely electrical computer that could deal with the contemporary equivalent of “if, then” and “for, next” statements, and thus the patriarch of the billions more that would follow.
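In modern terms, those two constructions look like the snippet below. It is a generic illustration rather than anything from the machines under discussion, but any computer that can execute its equivalent is Turing complete in the sense just described.

```python
# The two building blocks the text describes: a conditional branch
# ("if, then") and a counted loop ("for, next").
total = 0
for n in range(1, 11):   # FOR N = 1 TO 10 ... NEXT N, in BASIC terms
    if n % 2 == 0:       # IF N MOD 2 = 0 THEN ...
        total = total + n
print(total)  # 30: the sum of the even numbers from 1 through 10
```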

That said, there are ways in which ENIAC still fails to match our expectations of a computer — not just quantitatively, in the sense that it was 80 feet long, 8 feet tall, weighed 30 tons, and yet could manage barely one half of one percent of the instructions per second of an Apple II from the dawn of the personal-computing age, but qualitatively, in the sense that ENIAC just didn’t function like we expect a computer to do.

For one thing, it had no real concept of software. You “programmed” ENIAC by physically rewiring it, a process that generally consumed far more time than did actually running the program thus created. The room where it was housed looked like nothing so much as a manual telephone exchange from the old days, albeit on an enormous scale; it was a veritable maze of wires and plugboards. Perhaps we shouldn’t be surprised to learn, then, that its programmers were mostly women, next-generation telephone operators who wandered through the machine’s innards with clipboards in their hands, remaking their surroundings to match the schematics on the page.

Another distinction between ENIAC and what came later is more subtle, but in its way even more profound. If you were to ask the proverbial person on the street what distinguishes a computer program from any other form of electronic media, she would probably say something about its “interactivity.” The word has become inescapable, the defining adjective of the computer age: “interactive fiction,” “interactive learning,” “interactive entertainment,” etc. And yet ENIAC really wasn’t so interactive at all. It operated under what would later become known as the “batch-processing” model. After programming it — or, if you like, rewiring it — you fed it a chunk of data, then sat back and waited however long it took for the result to come out the metaphorical other side of the pipeline. And then, if you wished, you could feed it some more data, to be massaged in exactly the same way. Ironically, this paradigm is much closer to the literal meaning of the word “computer” than the one with which we are familiar; ENIAC was a device for computing things. No more and no less. This made it useful, but far from the mind-expanding anything machine that we’ve come to know as the computer.

Thus the story of computing in the decade or two after ENIAC is largely that of how these two paradigms — programming by rewiring and batch processing — were shattered to yield said anything machine. The first paradigm fell away fairly quickly, but the second would persist for years in many computing contexts.


John von Neumann

In November of 1944, when ENIAC was still very much a work in progress, it was visited by John von Neumann. After immigrating to the United States from Hungary more than a decade earlier, von Neumann had become one of the most prominent intellectuals in the country, an absurdly accomplished mathematician and all-around genius for all seasons, with deep wells of knowledge in everything from atomic physics to Byzantine history. He was, writes computer historian M. Mitchell Waldrop, “a scientific superstar, the very Hollywood image of what a scientist ought to be, up to and including that faint, delicious touch of a Middle European accent.” A man who hobnobbed routinely with the highest levels of his adopted nation’s political as well as scientific establishment, he was now attached to the Manhattan Project that was charged with creating an atomic bomb before the Nazis could manage to do so. He came to see ENIAC in that capacity, to find out whether it or a machine like it might be able to help himself and his colleagues with the fiendishly complicated calculations that were part and parcel of their work.

Truth be told, he was somewhat underwhelmed by what he saw that day. He was taken aback by the laborious rewiring that programming ENIAC entailed, and judged the machine to be far too balky and inflexible to be of much use on the Manhattan Project.

But discussion about what the next computer after ENIAC ought to be like was already percolating, so much so that Mauchly and Eckert had given the unfunded, entirely hypothetical machine a catchy acronym: EDVAC, for “Electronic Discrete Variable Automatic Computer.” Von Neumann decided to throw his own hat into the ring, to offer up his own proposal for what EDVAC should be. Written in the gaps of his day job in the New Mexico desert, the resulting document laid out five abstract components of any computer. There must be a way of inputting data and a way of outputting it. There must be memory for storing the data, and a central arithmetic unit for performing calculations upon it. And finally, there must be a central control unit capable of executing programmed instructions and making conditional jumps.

But the paper’s real stroke of genius was its description of a new way of carrying out this programming, one that wouldn’t entail rewiring the computer. It should be possible, von Neumann wrote, to store not only the data a program manipulated in memory but the program itself. This way new programs could be input just the same way as other forms of data. This approach to computing — the only one most of us are familiar with — is sometimes called a “von Neumann machine” today, or simply a “stored-program computer.” It is the reason that, writes M. Mitchell Waldrop, the anything machine sitting on your desk today “can transform itself into the cockpit of a fighter jet, a budget projection, a chapter of a novel, or whatever else you want” — all without changing its physical form one iota.
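The stored-program idea can be sketched in a few lines. Everything here (the opcodes and the memory layout) is invented for illustration; the point is only that the instructions at the front of memory and the numbers at the back of it live in the same address space, so a new program could be loaded in just like any other data.

```python
# A toy stored-program ("von Neumann") machine, for illustration only:
# one memory holds both the instructions and the data they work on, and
# the control unit fetches and executes them one by one.
def run(memory):
    acc, pc = 0, 0                      # accumulator and program counter
    while True:
        op, arg = memory[pc]            # fetch and decode
        pc += 1
        if op == "LOAD":    acc = memory[arg]
        elif op == "ADD":   acc += memory[arg]
        elif op == "STORE": memory[arg] = acc
        elif op == "DEC":   acc -= 1
        elif op == "JNZ":   pc = arg if acc != 0 else pc   # conditional jump
        elif op == "HALT":  return memory

# Multiply 6 * 4 by repeated addition. Cells 0-7 hold the program;
# cells 8 (multiplicand), 9 (counter), and 10 (result) hold the data.
memory = [
    ("LOAD", 10), ("ADD", 8), ("STORE", 10),   # result += multiplicand
    ("LOAD", 9), ("DEC", None), ("STORE", 9),  # counter -= 1
    ("JNZ", 0),                                # repeat while counter != 0
    ("HALT", None),
    6, 4, 0,
]
print(run(memory)[10])  # 24
```

Replacing the tuples at the front of the list changes what the machine does, with no "rewiring" of `run` itself; that is the whole of the stored-program insight in miniature.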

Von Neumann began to distribute his paper, labeled a “first draft,” in late June of 1945, just three weeks before the Manhattan Project conducted the first test of an atomic bomb. The paper ignited a brouhaha that will ring all too familiar to readers of earlier articles in this series. Mauchly and Eckert had already resolved to patent EDVAC in order to exploit it for commercial purposes. They now rushed to do so, whilst insisting that the design had included the stored-program idea from the start, that von Neumann had in fact picked it up from them. Von Neumann himself begged to differ, saying it was all his own conception and filing a patent application of his own. Then the University of Pennsylvania entered the fray as well, saying it automatically owned any invention conceived by its employees as part of their duties. The whole mess was yet further complicated by the fact that the design of ENIAC, from which much of EDVAC was derived, had been funded by the Army, and was still considered classified.

Thus the three-way dispute wound up in the hands of the Army’s lawyers, who decided in April of 1947 that no one should get a patent. They judged that von Neumann’s paper constituted “prior disclosure” of the details of the design, effectively placing it in the public domain. The upshot of this little-remarked decision was that, in contrast to the telegraph and telephone among many other inventions, the abstract design of a digital electronic stored-program computer was to be freely available for anyone and everyone to build upon right from the start.[1]

[1] Inevitably, that wasn’t quite the end of it. Mauchly and Eckert continued their quest to win the patent they thought was their due, and were finally granted it at the rather astonishingly late date of 1964, by which time they were associated with the Sperry Rand Corporation, a maker of mainframes and minicomputers. But this victory only ignited another legal battle, pitting Sperry Rand against virtually every other company in the computer industry, none of whom were eager to start paying one of their competitors a royalty on every single computer they made. The patent was thrown out once and for all in 1973, primarily on the familiar premise that von Neumann’s paper constituted prior disclosure.

Mauchly and Eckert had left the University of Pennsylvania in a huff by the time the Army’s lawyers made their decision. Without its masterminds, the EDVAC project suffered delay after delay. By the time it was finally done in 1952, it did sport stored programs, but its thunder had been stolen by other computers that had gotten there first.


The Whirlwind computer in testing, circa 1950. Jay Forrester is second from left, Robert Everett the man standing by his side.

The first stored-program computer to be actually built was known as the Manchester Mark I, after its home, the University of Manchester in Britain. It ran its first program in April of 1949, a landmark moment in the proud computing history of Britain, which stretches back to such pioneers as Charles Babbage and Ada Lovelace. But this series of articles is concerned with how the World Wide Web came to be, and that is primarily an American story prior to its final stages. So, I hope you will forgive me if I continue to focus on the American scene. More specifically, I’d like to turn to the Whirlwind, the first stored-program all-electrical computer to be built in the United States — and, even more importantly, the first to break away from the batch-processing paradigm.

The Whirlwind had a long history behind it by the time it entered regular service at MIT in April of 1951. It had all begun in December of 1944, when the Navy had asked MIT to build it a new flight simulator for its trainees, one that could be rewired to simulate the flight characteristics of any present or future model of aircraft. The task was given to Jay Forrester, a 26-year-old engineering graduate student who would never have been allowed near such a project if all of his more senior colleagues hadn’t been busy with other wartime tasks. He and his team struggled for months to find a way to meet the Navy’s expectations, with little success. Somewhat to his chagrin, the project wasn’t cancelled even after the war ended. Then, one afternoon in October of 1945, in the course of a casual chat on the front stoop of Forrester’s research lab, a representative of the Navy brass mentioned ENIAC, and suggested that a digital computer like that one might be the solution to his problems. Forrester took the advice to heart. “We are building a digital computer!” he barked to his bewildered team just days later.

Forrester’s chief deputy Robert Everett would later admit that they started down the road of what would become known as “real-time computing” only because they were young and naïve and had no clue what they were getting into. For all that it was the product of ignorance as much as intent, the idea was nevertheless an audacious conceptual leap for computing. A computer responsible for running a flight simulator would have to do more than provide one-off answers to math problems at its own lackadaisical pace. It would need to respond to a constant stream of data about the state of the airplane’s controls, to update a model of the world in accord with that data, and provide a constant stream of feedback to the trainee behind the controls. And it would need to do it all to a clock, fast enough to give the impression of real flight. It was a well-nigh breathtaking explosion of the very idea of what a computer could be — not least in its thoroughgoing embrace of interactivity, its view of a program as a constant feedback loop of input and output.
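The shape of such a program can be suggested in a few lines. The "physics" below is a made-up one-dimensional stand-in rather than anything from the Whirlwind, but the structure is the real-time loop itself: read the latest input, advance the model by one fixed time step, and feed the result straight back out.

```python
# A skeletal real-time feedback loop of the kind the Whirlwind pioneered.
# The flight model is a hypothetical one-dimensional stand-in: the stick
# input sets the rate of change of the climb rate.
def simulate(stick_inputs, dt=0.1):
    altitude, climb_rate = 1000.0, 0.0
    frames = []
    for stick in stick_inputs:             # one reading of the controls per tick
        climb_rate += stick * dt           # update the model from the input...
        altitude += climb_rate * dt
        frames.append(round(altitude, 2))  # ...and emit feedback for the display
    return frames

print(simulate([1.0, 1.0, 0.0, -2.0]))  # [1000.01, 1000.03, 1000.05, 1000.05]
```

In a real simulator the loop would also have to finish each pass within its time slice, which is exactly why real-time work demanded so much more speed than batch work did.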

The project gradually morphed from a single-purpose flight simulator to an even more expansive concept, an all-purpose digital computer that would be able to run a variety of real-time interactive applications. Like ENIAC before it, the machine which Forrester and Everett dubbed the Whirlwind was built and tested in stages over a period of years. In keeping with its real-time mission statement, it ended up doing seven times as many instructions per second as ENIAC, mostly thanks to a new type of memory — known as “core memory” — invented by Forrester himself for the project.

In the midst of these years of development, on August 29, 1949, the Soviet Union tested its first atomic bomb, creating panic all over the Western world; most intelligence analysts had believed that the Soviets were still years away from such a feat. The Cold War began in earnest on that day, as all of the post-World War II dreams of a negotiated peace based on mutual enlightenment gave way to one based on the terrifying brinkmanship of mutually assured destruction. The stakes of warfare had shifted overnight; a single bomb dropped from a single Soviet aircraft could now spell the end of millions of American lives. Desperate to protect the nation against this ghastly new reality, the Air Force asked Forrester whether the Whirlwind could be used to provide a real-time picture of American airspace, to become the heart of a control center which kept track of friendlies and potential enemies 24 hours per day. As it happened, the project’s other sponsors had been growing impatient and making noises about cutting their funding, so Forrester had every motivation to jump on this new chance; the likes of flight simulation was entirely forgotten for the time being. On April 20, 1951, as its first official task, the newly commissioned Whirlwind successfully tracked two fighter planes in real time.

Satisfied with that proof of concept, the Air Force offered to lavishly fund a Project Lincoln that would build upon what had been learned from the Whirlwind, with the mission of protecting the United States from Soviet bombers at any cost — almost literally, given the sum of money the Air Force was willing to throw at it. It began in November of 1951, with Forrester in charge. The air-defense system it set out to create would become known as SAGE, for “Semi-Automatic Ground Environment.”

Whatever its implications about the gloomy state of the world, SAGE was a truly visionary technological project, enough so as to warm the cockles of even a peacenik engineer’s heart. Soviet bombers, if they came someday, were expected to come in at low altitudes in order to minimize their radar exposure. This created a tremendous logistical problem. Even if the Air Force built enough radar stations to spot all of the aircraft before they reached their targets — a task it was willing to undertake despite the huge cost of it — there would be very little time to coordinate a response. Enter SAGE; it was meant to provide that rapid coordination, which would be impossible by any other means. Data from hundreds of radar stations would pour into its control centers in real time, to be digested by a computer and displayed as a single comprehensible strategic map on the screens of operators, who would then be able to deploy fighters and ground-based antiaircraft weapons as needed in response, with nary a moment’s delay.

All of this seems old hat today, but it was unprecedented at the time. It would require computers whose power must dwarf even that of the Whirlwind. And it would also require something else: each computer would need to be networked to all the radar stations in its sector, and to its peers in other control centers. This was a staggering task in itself. To appreciate why Jay Forrester and his people thought they had a ghost of a chance of bringing it off, we need to step back from the front lines of the Cold War for a moment and check in with an old friend.


Claude Shannon in middle age, after he had become a sort of all-purpose public intellectual for the press to trot out for big occasions. He certainly looked the part…

Claude Shannon had left MIT to work for Bell Labs on various military projects during World War II, and had remained there after the end of the war. Thus when he published the second earthshaking paper of his career in 1948, he did so in the pages of the Bell System Technical Journal.

“A Mathematical Theory of Communication” belies its name to some extent, in that it can be explained in its most basic form without recourse to any mathematics at all. Indeed, it starts off so simply as to seem almost childish. Shannon breaks the whole of communication — of any act of communication — into seven elements, six of them proactive or positive, the last one negative. In addition to the message itself, there are the “source,” the person or machine generating the message; the “transmitter,” the device which encodes the message for transport and sends it on its way; the “channel,” the medium over which the message travels; the “receiver,” which decodes the message at the other end; and the “destination,” the person or machine which accepts and comprehends the message. And then there is “noise”: any source of entropy that impedes the progress of the message from source to destination or garbles its content. Let’s consider a couple of examples of Shannon’s framework in action.

One of the oldest methods of human communication is direct speech. Here the source is a person with something to say, the transmitter the mouth with which she speaks, the channel the air through which the resulting sound waves travel, the receiver the ear of a second person, and the destination that second person herself. Noise in the system might be literal background or foreground noise such as another person talking at the same time, or a wind blowing in the wrong direction, or sheer distance.

We can break telegraphy down in the same way. Here the source is the operator with a message to send, the transmitter his Morse key or teletype, the channel the wire over which the Morse Code travels, the receiver an electromagnet-actuated pencil or a teleprinter, and the destination the human operator at the other end of the wire. Noise might be static on the line, or a poor signal caused by a weak battery or something else, or any number of other technical glitches.

But if we like we can also examine the process of telegraphy from a greater remove. We might prefer to think of the source as the original source of the message — say, a soldier overseas who wants to tell his fiancée that he loves her. Here the telegraph operator who sends the message is in a sense a part of the transmitter, while the operator who receives the message is indeed a part of the receiver. The girl back home is of course the destination. When using this scheme, we consider the administration of telegraph stations and networks also to be a part of the overall communications process. In this conception, then, strictly human mistakes, such as a message dropped under a desk and overlooked, become a part of the noise in the system. Shannon provides us, in other words, with a framework for conceptualizing communication at whatever level of granularity might happen to suit our current goals.

Notably absent in all of this is any real concern over the content of the message being sent. Shannon treats content with blithe disinterest, not to say contempt. “The ‘meaning’ of a message is generally irrelevant,” he writes. “The fundamental problem of communication is that of reproducing at one point either exactly or approximately a message selected at another point. [The] semantic aspects of communication are irrelevant to the engineering problem.” Rather than content or meaning, Shannon is interested in what he calls “information,” which is related to the actual meaning of the message but not quite the same thing. It is rather the encoded form the meaning takes as it passes down the channel.

And here Shannon clearly articulated an idea of profound importance, one which network engineers had been groping toward for some time: any channel is ultimately capable of carrying any type of content — text, sound, still or moving images, computer code, you name it. It’s just a matter of having an agreed-upon protocol for the transmitter and receiver to use to package it into information at one end and then unpack it at the other.

In practical terms, however, some types of content take longer to send over any given channel than others; while a telegraph line could theoretically be used to transmit video, it would take so long to send even a single frame using its widely spaced dots and dashes that it is effectively useless for the purpose, even though it is perfectly adequate for sending text as Morse Code. Some forms of content, that is to say, are denser than others, require more information to convey. In order to quantify this, one needs a unit for measuring quantities of information itself. This Shannon provides, in the form of a single on-or-off state — a yes or a no, a one or a zero. “The units may be called binary digits,” he writes, “or, more briefly, bits.”

And so a new word entered the lexicon. An entire universe of meaning can be built out of nothing but bits if you have enough of them, as our modern digital world proves. But some types of channel can send more bits per second than others, which makes different channels more or less suitable for different types of content.
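Shannon's measure is easy to state concretely: a choice among N equally likely alternatives carries log₂ N bits of information. A few lines of Python (a modern illustration, obviously not anything from Shannon's paper) make the point:

```python
import math

def bits_required(alternatives: int) -> float:
    """Bits of information conveyed by one choice among
    equally likely alternatives: log2(alternatives)."""
    return math.log2(alternatives)

# One coin flip distinguishes between 2 alternatives: exactly 1 bit.
# One byte distinguishes among 256 alternatives: exactly 8 bits.
# One letter from a 26-letter alphabet: about 4.7 bits.
```

A telegraph key tapping out one of 26 letters and a television camera scanning a frame are thus doing the same kind of work, just at wildly different numbers of bits per second.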

There is still one more thing to consider: the noise that might come along to corrupt the information as it travels from transmitter to receiver. A message intended for a human is actually quite resistant to noise, for our human minds are very good at filling in gaps and working around mistakes in communication. A handful of garbled characters seldom destroys the meaning of a textual message for us, and we are equally adept at coping with a bad telephone connection or a static-filled television screen. Having a lot of noise in these situations is certainly not ideal, but the amount of entropy in the system has to get pretty extreme before the process of communication itself breaks down completely.

But what of computers? Shannon was already looking forward to a world in which one computer would need to talk directly to another, with no human middleman. Computers cannot use intuition and experience to fill in gaps and correct mistakes in an information stream. If they are to function, they need every single message to reach them in its original, pristine state. But, as Shannon well realized, some amount of noise is a fact of life with any communications channel. What could be done?

What could be done, Shannon wrote, was to design error correction into a communication protocol. The transmitter could divide the information to be sent into packets of fixed length. After sending a packet, it could send a checksum, a number derived from performing a series of agreed-upon calculations on the bits in the packet. The receiver at the other end of the line would then be expected to perform the same set of calculations on the information it had received, and compare it with the transmitter’s checksum. If the numbers matched, all must be well; it could send an “okay” back to the transmitter and wait on the next packet. But if the numbers didn’t match, it knew that noise on the channel must have corrupted the information. So, it would ask the transmitter to try sending the last packet again. It was in essence the same principle as the one that had been employed on Claude Chappe’s optical-telegraph networks of 150 years earlier.
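The loop of send, verify, and re-send can be sketched in a few lines of Python. The checksum here is a deliberately simple sum-of-bytes, and the toy channel flips at most one bit per packet; real protocols use stronger checksums, but the logic is the same:

```python
import random

PACKET_SIZE = 8   # bytes per packet; tune larger for clean channels, smaller for noisy ones

def checksum(packet: bytes) -> int:
    # A deliberately simple checksum: the sum of all bytes modulo 256.
    return sum(packet) % 256

def noisy_channel(packet: bytes, error_rate: float) -> bytes:
    # With the given probability, flip one random bit in transit.
    if packet and random.random() < error_rate:
        data = bytearray(packet)
        i = random.randrange(len(data))
        data[i] ^= 1 << random.randrange(8)
        return bytes(data)
    return packet

def transmit(message: bytes, error_rate: float = 0.3) -> bytes:
    """Send a message packet by packet, re-sending any packet whose
    checksum fails to match at the receiving end."""
    received = bytearray()
    for start in range(0, len(message), PACKET_SIZE):
        packet = message[start:start + PACKET_SIZE]
        expected = checksum(packet)             # sent alongside the packet
        while True:
            arrived = noisy_channel(packet, error_rate)
            if checksum(arrived) == expected:   # receiver reports "okay"
                received.extend(arrived)
                break                           # ...otherwise: "send that one again"
    return bytes(received)
```

Note that a single flipped bit always changes this checksum, so every corrupted packet is caught and re-sent; a real-world checksum must also cope with multiple errors per packet.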

To be sure, there were parameters in the scheme to be tinkered with on a situational basis. Larger packets, for example, would be more efficient on a relatively clean channel that gave few problems, smaller ones on a noisy channel where re-transmission was often necessary. Meanwhile the larger the checksum and the more intensive the calculations done to create it, the more confident one could be that the information really had been received correctly, that the checksums didn’t happen to match by mere coincidence. But this extra insurance came with a price of its own, in the form of the extra computing horsepower required to generate the more complex checksums and the extra time it took to send them down the channel. It seemed that success in digital communications was, like success in life, a matter of making wise compromises.

Two years after Shannon published his paper, another Bell Labs employee by the name of R.W. Hamming published “Error Detecting and Error Correcting Codes” in the same journal. It made Shannon’s abstractions concrete, laying out in careful detail the first practical algorithms for error detection and correction on a digital network, using checksums that would become known as “Hamming codes.”
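Hamming’s best-known construction, the (7,4) code, protects four data bits with three parity bits, and can not only detect but locate and correct any single flipped bit. A compact Python sketch of it:

```python
def hamming_encode(data):
    """Encode four data bits as a seven-bit Hamming (7,4) codeword."""
    d1, d2, d3, d4 = data
    p1 = d1 ^ d2 ^ d4          # parity over codeword positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4          # parity over codeword positions 2, 3, 6, 7
    p4 = d2 ^ d3 ^ d4          # parity over codeword positions 4, 5, 6, 7
    return [p1, p2, d1, p4, d2, d3, d4]   # positions 1 through 7

def hamming_decode(code):
    """Recover the four data bits, correcting at most one flipped bit."""
    c = [0] + list(code)       # pad so indexing is 1-based, as Hamming did
    s1 = c[1] ^ c[3] ^ c[5] ^ c[7]
    s2 = c[2] ^ c[3] ^ c[6] ^ c[7]
    s4 = c[4] ^ c[5] ^ c[6] ^ c[7]
    error_pos = s1 + 2 * s2 + 4 * s4   # the syndrome spells out the bad position
    if error_pos:
        c[error_pos] ^= 1              # flip it back
    return [c[3], c[5], c[6], c[7]]
```

The elegance of the scheme is that the three parity checks, read as a binary number, directly name the position of the erroneous bit, with zero meaning no error at all.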

Even before Hamming’s work came along to complement it, Shannon’s paper sent shock waves through the nascent community of computing, whilst inventing at a stroke a whole new field of research known as “information theory.” The printers of the Bell System Technical Journal, accustomed to turning out perhaps a few hundred copies for internal distribution through the company, were swamped by thousands of requests for that particular issue. Many of those involved with computers and/or communications would continue to speak of the paper and its author with awe for the rest of their lives. “It was like a bolt out of the blue, a really unique thing,” remembered a Bell Labs researcher named John Pierce. “I don’t know of any other theory that came in a complete form like that, with very few antecedents or history.” “It was a revelation,” said MIT’s Oliver Selfridge. “Around MIT the reaction was, ‘Brilliant! Why didn’t I think of that?’ Information theory gave us a whole conceptual vocabulary, as well as a technical vocabulary.” Word soon spread to the mainstream press. Fortune magazine called information theory that “proudest and rarest [of] creations, a great scientific theory which could profoundly and rapidly alter man’s view of the world.” Scientific American proclaimed it to encompass “all of the procedures by which one mind may affect another. [It] involves not only written and oral speech, but also music, the pictorial arts, the theatre, the ballet, and in fact all human behavior.” And that was only the half of it: in the midst of their excitement, the magazine’s editors failed to even notice its implications for computing.

And those implications were enormous. The fact was that all of the countless digital networks of the future would be built from the principles first described by Claude Shannon. Shannon himself largely stepped away from the table he had so obligingly set. A playful soul who preferred tinkering to writing or working to a deadline, he was content to live off the prestige his paper had brought him, accepting lucrative seats on several boards of directors and the like. In the meantime, his theories were about to be brought to vivid life by Project Lincoln.


The Lincoln Lab complex, future home of SAGE research, under construction.

In their later years, many of the mostly young people who worked on Project Lincoln would freely admit that they had had only the vaguest notion of what they were doing during those halcyon days. Having very little experience with the military or aviation among their ranks, they extrapolated from science-fiction novels, from movies, and from old newsreel footage of the command-and-control posts whence the Royal Air Force had guided defenses during the Battle of Britain. Everything they used in their endeavors had to be designed and made from whole cloth, from the input devices to the display screens to the computers behind it all, which were to be manufactured by a company called IBM that had heretofore specialized in strictly analog gadgets (typewriters, time clocks, vote recorders, census tabulators, cheese slicers). Fortunately, they had effectively unlimited sums of money at their disposal, what with the Air Force’s paranoid sense of urgency. The government paid to build a whole new complex to house their efforts, at Laurence G. Hanscom Airfield, about fifteen miles away from MIT proper. The place would become known as Lincoln Lab, and would long outlive Project Lincoln itself and the SAGE system it made; it still exists to this day.

AT&T — who else? — was contracted to set up the communications lines that would link all of the individual radar stations to control centers scattered all over the country, and in turn link those centers to one another; it was considered essential not to have a single main control center which, if knocked out of action, could take the whole system down with it. The lines AT&T provided were at bottom ordinary telephone connections, for nothing better existed at the time. No matter; an engineer named John V. Harrington took to heart Claude Shannon’s assertion that, in the end, all information is the same. He made something called a “modulator/de-modulator”: a gadget which could convert a stream of binary data into a waveform and send it down a telephone line when playing the role of transmitter, or convert such a waveform back into binary data when playing the role of receiver, all at the impressive rate of 1300 bits per second. Its name was soon shortened to “modem,” and bits-per-second to “baud,” borrowing a term that had earlier been applied to the dots and dashes of telegraphy. Combined with the techniques of error correction developed by Shannon and R.W. Hamming, Harrington’s modems would become the basis of the world’s first permanent wide-area computer network.
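The principle of a modem is easy to demonstrate with simple frequency-shift keying, in which one audio tone stands for a zero and another for a one. (The sample rate, baud rate, and tones below are illustrative choices for this sketch, not the parameters of Harrington’s actual hardware, whose modulation scheme was more sophisticated.)

```python
import math

SAMPLE_RATE = 48000                        # samples per second (illustrative)
BAUD = 1200                                # bits per second (illustrative)
SAMPLES_PER_BIT = SAMPLE_RATE // BAUD      # 40 samples per bit
FREQ_ZERO, FREQ_ONE = 1200.0, 2400.0       # one audio tone per bit value

def modulate(bits):
    """Transmitter side: turn a bit stream into an audio-band waveform."""
    samples = []
    for i, bit in enumerate(bits):
        freq = FREQ_ONE if bit else FREQ_ZERO
        for n in range(SAMPLES_PER_BIT):
            t = (i * SAMPLES_PER_BIT + n) / SAMPLE_RATE
            samples.append(math.sin(2 * math.pi * freq * t))
    return samples

def demodulate(samples):
    """Receiver side: decide, bit slot by bit slot, which tone is present."""
    def energy(chunk, start, freq):
        # Correlate the slot against sine and cosine at the candidate tone.
        c = s = 0.0
        for n, sample in enumerate(chunk):
            phase = 2 * math.pi * freq * (start + n) / SAMPLE_RATE
            c += sample * math.cos(phase)
            s += sample * math.sin(phase)
        return c * c + s * s

    bits = []
    for start in range(0, len(samples), SAMPLES_PER_BIT):
        chunk = samples[start:start + SAMPLES_PER_BIT]
        one = energy(chunk, start, FREQ_ONE)
        zero = energy(chunk, start, FREQ_ZERO)
        bits.append(1 if one > zero else 0)
    return bits
```

Whatever passes between the two tones on the wire — static, crosstalk, attenuation — the receiver only has to answer one question per bit slot: which tone is stronger?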

At a time when the concept of software was just struggling into existence as an entity separate from computer hardware, the SAGE system would demand programs an order of magnitude more complex than anyone had ever attempted before — interactive programs that must run indefinitely and respond constantly to new stimuli, not mere algorithms to be run on static sets of data. In the end, SAGE would employ more than 800 individual programmers. Lincoln Lab created the first tools to separate the act of programming from the bare metal of the machine itself, introducing assemblers that could do some of the work of keeping track of registers, memory locations, and the like for the programmer, to allow her to better concentrate on the core logic of her task. Lincoln Lab’s official history of the project goes so far as to boast that “the art of computer programming was essentially invented for SAGE.”

In marked contrast to later years, programmers themselves were held in little regard at the time; hardware engineers ruled the roost. With no formal education programs in the discipline yet in existence, Lincoln Lab was willing to hire anyone who could get a security clearance and pass a test of basic reasoning skills. A substantial percentage of them wound up being women.

Among the men who came to program for SAGE was Severo Ornstein, a geologist who would go on to a notable career in computing over the following three decades. In his memoir, he captures the bizarre mixture of confusion and empowerment that marked life with SAGE, explaining how he was thrown in at the deep end as soon as he arrived on the job.

It seemed that not only was an operational air-defense program lacking, but the overall system hadn’t yet been fully designed. The OP SPECS (Operational Specifications) which defined the system were just being written, and, with no more background in air defense than a woodchuck, I was unceremoniously handed the task of writing the Crosstelling Spec. What in God’s name was Crosstelling? The only thing I knew about it was that it came late in the schedule, thank heavens, after everything else was finished.

It developed that the country was divided into sectors, and that the sectors were in turn divided into sub-sectors (which were really the operational units) with a Direction Center at the heart of each. Since airplanes, especially those that didn’t belong to the Air Force (or even the U.S.), could hardly be forbidden from crossing between sub-sectors, some coordination was required for handing over the tracking of planes, controlling of interceptors, etc., between the sub-sectors. This function was called Crosstelling, a name inherited from an earlier manual system in which human operators followed the tracks of aircraft on radar screens and coordinated matters by talking to one another on telephones. Now it had somehow fallen to me to define how this coordination should be handled by computers, and then to write it all down in an official OP SPEC with a bright-red cover stamped SECRET.

I was horrified. Not only did I feel incapable of handling the task, but what was to become of a country whose Crosstelling was to be specified by an ignoramus like me? My number-two daughter was born at about that time, and for the first time I began to fear for my children’s future…

In spite of it all, SAGE more or less worked out in the end. The first control center became officially operational at last in July of 1958, at McGuire Air Force Base in New Jersey. It was followed by 21 more of its kind over the course of the next three and a half years, each housing two massive IBM computers; the second was provided for redundancy, to prevent the survival of the nation from being put at risk by a blown vacuum tube. These computers could communicate with radar stations and with their peers on the network for the purpose of “Crosstelling.” The control centers went on to become one of the iconic images of the Cold War era, featuring prominently in the likes of Dr. Strangelove.[2] SAGE remained in service until the early 1980s, by which time its hardware was positively neolithic but still did the job asked of it.

Thankfully for all of us, the system was never subjected to a real trial by fire. Would it have actually worked? Most military experts are doubtful — as, indeed, were many of the architects of SAGE after all was said and done. Severo Ornstein, for his part, says bluntly that “I believe SAGE would have failed utterly.” During a large-scale war game known as Operation Sky Shield which was carried out in the early 1960s, SAGE succeeded in downing no more than a fourth of the attacking enemy bombers. All of the tests conducted after that fiasco were, some claim, fudged to one degree or another.

But then, the fact is that SAGE was already something of a white elephant on the day the very first control center went into operation; by that point the principal nuclear threat was shifting from bombers to ballistic missiles, a form of attack the designers had not anticipated and against which their system could offer no real utility. For all its cutting-edge technology, SAGE thus became a classic example of a weapon designed to fight the last war rather than the next one. Historian Paul N. Edwards has noted that the SAGE control centers were never placed in hardened bunkers, which he believes constitutes a tacit admission on the part of the Air Force that they had no chance of protecting the nation from a full-on Soviet first nuclear strike. “Strategic Air Command,” he posits, “intended never to need SAGE warning and interception; it would strike the Russians first. After SAC’s hammer blow, continental air defenses would be faced only with cleaning up a weak and probably disorganized counter-strike.” There is by no means a consensus that SAGE could have managed to coordinate even that much of a defense.

But this is not to say that SAGE wasn’t worth it. Far from it. Bringing so many smart people together and giving them such an ambitious, all-encompassing task to accomplish in such an exciting new field as computing could hardly fail to yield rich dividends for the future. Because so much of it was classified for so long, not to mention its association with passé Cold War paranoia, SAGE’s role in the history of computing — and especially of networked computing — tends to go underappreciated. And yet many of our most fundamental notions about what computing is and can be were born here. Paul N. Edwards credits SAGE and its predecessor the Whirlwind computer with inventing:

  • magnetic-core memory
  • video displays
  • light guns [what we call light pens today]
  • the first effective algebraic computer language
  • graphic display techniques
  • simulation techniques
  • synchronous parallel logic (digits transmitted simultaneously rather than serially through the computer)
  • analog-to-digital and digital-to-analog conversion techniques
  • digital data transmission over telephone lines
  • duplexing
  • multiprocessing
  • networks (automatic data exchange among different computers)

Readers unfamiliar with computer technology may not appreciate the extreme importance of these developments to the history of computing. Suffice it to say that much-evolved versions of all of them remain in use today. Some, such as networking and graphic displays, comprise the very backbone of modern computing.

M. Mitchell Waldrop elaborates in a more philosophical mode:

SAGE planted the seeds of a truly powerful idea, the notion that humans and computers working together could be far more effective than either working separately. Of course, SAGE by itself didn’t get us all the way to the modern idea of personal computers being used for personal empowerment; the SAGE computers were definitely not “personal,” and the controllers could use them only for that one, tightly constrained task of air defense. Nonetheless, it’s no coincidence that the basic setup still seems so eerily familiar. An operator watching his CRT display screen, giving commands to a computer via a keyboard and a handheld light gun, and sending data to other computers via a digital communications link: SAGE may not have been the technological ancestor of the modern PC, mouse, and network, but it was definitely their conceptual and spiritual ancestor.

So, ineffective though it probably was as a means of national defense, the real legacy of SAGE is one of swords turning into plowshares. Consider, for example, its most direct civilian progeny.


SAGE in operation. For a quarter of a century, hundreds of Air Force personnel were to be found sitting in antiseptic rooms like this one at any given time, peering at their displays in case something showed up there. It’s one way to make a living…

One day in the summer of 1953, long before any actual SAGE computers had been built, a senior IBM salesman named R. Blair Smith, who was privy to the project, chanced to sit next to another Smith on a flight from Los Angeles to New York City. This other Smith was none other than Cyrus Rowlett Smith, the president of American Airlines.

Blair Smith had caught the computer fever, and believed that they could be very useful for airline reservations. Being a salesman, he didn’t hesitate to tell his seatmate all about this as soon as he learned who he was. He was gratified to find his companion receptive. “Now, Blair,” said Cyrus Smith just before their airplane landed, “our reservation center is at LaGuardia Airport. You go out there and look it over. Then you write me a letter and tell me what I should do.”

In his letter, Blair Smith envisioned a network that would bind together booking agents all over the country, allowing them to search to see which seats were available on which flights and to reserve them instantly for their customers. Blair Smith:

We didn’t know enough to call it anything. Later on, the word “Sabre” was adopted. By the way, it was originally spelled SABER — the only precedent we had was SAGE. SAGE was used to detect incoming airplanes. Radar defined the perimeter of the United States and then the information was signaled into a central computer. The perimeter data was then compared with what information they had about friendly aircraft, and so on. That was the only precedent we had. When the airline system was in research and development, they adopted the code name SABER for “Semi-Automatic Business Environment Research.” Later on, American Airlines changed it to Sabre.

Beginning in 1960, Sabre was gradually rolled out over the entire country. It became the first system of its kind, an early harbinger of the world’s networked future. Spun off as an independent company in 2000, it remains a key part of the world’s travel infrastructure today, when the vast majority of the reservations it accepts come from people sitting behind laptops and smartphones.

Sabre and other projects like it led to the rise of IBM as the virtually unchallenged dominant force in business computing from the middle of the 1950s until the end of the 1980s. But even as systems like Sabre were beginning to demonstrate the value of networked computing in everyday life, another, far more expansive vision of a networked world was taking shape in the clear blue sky of the country’s research institutions. The computer networks that existed by the start of the 1960s all operated on the “railroad” model of the old telegraph networks: a set of fixed stations joined together by fixed point-to-point links. What about a computer version of a telephone network instead — a national or international network of computers all able to babble happily together, with one computer able to call up any other any time it wished? Now that would really be something…

(Sources: the books A Brief History of the Future: The Origins of the Internet by John Naughton; From Gutenberg to the Internet: A Sourcebook on the History of Information Technology edited by Jeremy M. Norman, The Information by James Gleick, The Dream Machine by M. Mitchell Waldrop, The Closed World: Computers and the Politics of Discourse in Cold War America by Paul N. Edwards, Project Whirlwind: The History of a Pioneer Computer by Kent C. Redmond and Thomas M. Smith, From Whirlwind to MITRE: The R&D Story of the SAGE Air Defense Computer by Kent C. Redmond and Thomas M. Smith, The SAGE Air Defense System: A Personal History by John F. Jacobs, A History of Modern Computing (2nd ed.) by Paul E. Ceruzzi, Computing in the Middle Ages by Severo M. Ornstein, and Robot: Mere Machine to Transcendent Mind by Hans Moravec. Online sources include Lincoln Lab’s history of SAGE and the Charles Babbage Institute’s interview with R. Blair Smith.)

Footnotes
1 Inevitably, that wasn’t quite the end of it. Mauchly and Eckert continued their quest to win the patent they thought was their due, and were finally granted it at the rather astonishingly late date of 1964, by which time they were associated with the Sperry Rand Corporation, a maker of mainframes and minicomputers. But this victory only ignited another legal battle, pitting Sperry Rand against virtually every other company in the computer industry, who were not eager to start paying one of their competitors a royalty on every single computer they made. The patent was thrown out once and for all in 1973, primarily on the familiar premise that Von Neumann’s paper constituted prior disclosure.
2 That film’s title character was partially based on John Von Neumann, who after his work on the Manhattan Project and before his untimely death from cancer in 1957 became as strident a Cold Warrior as they came. “I believe there is no such thing as saturation,” he once told his old Manhattan Project boss Robert Oppenheimer. “I don’t think any weapon can be too large.” Many have attributed his bellicosity to his pain at seeing the Iron Curtain come down over his homeland of Hungary, separating him from friends and family forever.

A Web Around the World, Part 6: Routing Calls

The telegraph networks of the late nineteenth century functioned much like the railroad networks with which they were so closely associated in the minds of the public. Each pair of Morse keys and receivers was connected to exactly one other pair via a fixed “track.” Messages traveled from station to station through the network like railroad passengers. A telegram sent from Smalltown, USA, would first be sent up the line to a larger hub station, where it would be dropped into the “outgoing” basket of another line connected to the same station that would take it to its next stop. And so on and so on, until it reached its final destination.

But the telephone wasn’t conducive to this approach. Alexander Graham Bell’s dream of being “able to chat pleasantly with friends in Europe while sitting in his Boston home” would require a different sort of network model, one more akin to the roads that would soon be built to handle automobile traffic. It would need to be possible for a message to steer its own way down a multitude of highways and byways to reach one of thousands or millions of individual addresses accessible on the network. And each message would need to do so at the same time that many other messages were doing the same thing, using the same roads. Network engineers would never again have it so easy as they had in the days when the telegraph was the only game in town.

Indeed, in contrast to this puzzle of dynamic routing, the invention of the telephone itself would soon seem a fairly minor challenge to have overcome. This new problem was too difficult, diffuse, and abstract to be solved in one eureka moment, or even a dozen of them. The worldwide telecommunications network that came into existence by the middle of the twentieth century was instead the result of steady incremental progress over the course of the decades, guided by people whose names have not found a place in history textbooks alongside those of Samuel Morse, Alexander Graham Bell, and Thomas Alva Edison. Yet the worldwide web these institutional inventors slowly pieced together was in its way more remarkable than any of the aforementioned men’s discrete creations. And it was also both the necessary precursor to and the medium of the computer-communications networks that would follow in the second half of the twentieth century.


The New Haven District Telephone Company’s exchange was the first of its type, heralding as much as the telephone itself a new era in communications.

The first system for letting any one telephone on a large network communicate with any other came into being in New Haven, Connecticut, on January 28, 1878. It was operated by the New Haven District Telephone Company, a spinoff of Bell Telephone, and connected 21 founding subscribers using a very simple, very physical method. The wire from each telephone on the network ran to a central exchange manned by a human operator. When you picked up your home phone to make a call, you were thus immediately connected to this individual. You told him which other subscriber you wished to speak to — the concept of phone numbers did not yet exist — whereupon he cranked a magneto to cause a bell to ring at the other end of your desired interlocutor’s line. If the individual in question picked up, the operator then linked your two telephones together using a patch cable.

It may strike us as a crude arrangement today. Certainly it was beset by obvious practical problems (what happened when more people tried to make calls than the operator could handle?) and privacy concerns (the operator could tell if a call was finished only by periodically listening in). Yet it spread like wildfire for want of any alternatives. The world’s second telephone exchange opened just three days after the first; by the end of 1878 there were several dozen of them in the United States, and a ringer had become an essential piece of telephony’s standard equipment. By the beginning of 1881, only nine cities in the United States with a population over 10,000 did not boast at least one telephone exchange.

An early telephone exchange manned by boys, circa 1880. Such a place was called the “operating room” in telephony parlance, creating some amusing connotations.

The first exchange operators were, in the words of John Brooks,

an instant and memorable disaster. The lads, most of them in their late teens, who manned the telephone exchanges were simply too impatient and high-spirited for the job, which, in view of the imperfections of the equipment and inexperience of the subscribers, demanded above all patience and calm. They were given to lightening the tedium of their work by roughhousing, shouting constantly at each other, and swearing frequently at the customers.

Southwestern Bell historian David G. Park shares a typical anecdote:

In Little Rock, [Arkansas,] a prominent saloon keeper rang up and told one of the boy operators, fifteen-year-old Ashley Peay, “Connect me with my telephone at home. I want to talk to my wife.”

Ashley replied, “Your wife is talking to someone else.”

“What do you mean, my wife is talking to someone else?” the saloon keeper growled.

“I mean your line is busy,” Ashley snapped.

The saloon keeper wasn’t accustomed to being turned down by fifteen-year-old boys. “Get my wife on the line right now!” he shouted.

Young Peay’s reaction was to say, “Aw, shut up,” or words to that effect, and yank the connection.

The boy went on to handle other calls. Suddenly he was seized from behind, lifted from the floor, and shaken up and down by a furious saloon keeper. Just as the man was about to fling Peay through a glass window onto the street below, a man in the office came to the operator’s rescue.

Incidents like these occurred throughout the country…

But soon the telephone exchanges hit upon a solution: they replaced the boys with girls, who were not only more demure but willing to work for even lower wages. A newspaper article listed the job requirements:

The physical requirements of girls who are given positions in the telephone exchange are almost as stringent as those insisted upon in men enlisting in the army. To become a “hello” girl, the applicant must be not more than 30 years old [and] not less than five feet six inches tall. Her sight must be good, her hearing excellent, her voice soft, her perception quick, and her temper agile.

Every girl’s sight and hearing is tested and her height is measured before she is hired. Tall, slim girls with long arms are preferred for work on the switchboards. Fat, short girls occupy too much room and are not able to reach all of the six feet of space allocated to each operator.

With regard to nationality, it is said that girls of Irish parentage make the best operators.

The Little Rock, Arkansas, telephone exchange circa 1920, long after the unruly boys had been replaced with girls.

Almost from the very beginning, then, the job of telephone operator was seen as a female occupation, joining the jobs of schoolteacher and nanny in the eyes of the broader culture as another transitory way station for women between the onset of adulthood and marriage. The standard pay of between $1.00 and $1.50 per day reflected this. Those numbers would go up with inflation, but the other parameters of the job would remain the same for well over a century, for as long as it existed. Meanwhile the realization that female voices tend to be less threatening and more soothing in the ears of both genders would become even more embedded in the culture. (When was the last time a computer, smartphone, or GPS gadget spoke to you in a male voice?)

The systems and processes that drove the telephone exchanges improved steadily after 1878, even as the core model of a subscriber asking an operator to manually route his call via a patch wire and a switchboard remained in place for a surprisingly long time. The first telephone numbers made an appearance as early as 1879, and quickly became commonplace, what with the way they eased the burden on the operators’ memory and provided telephony’s customers with at least an impression of anonymity. In December of 1887, the first Switchboard Conference was held in New York City. Tellingly, it devoted as much time to social engineering as it did to the technical side of telephony. Many a hand was wrung over the tendency of operators to say, “They won’t answer,” rather than “They don’t answer” in the case of a call that wasn’t picked up, what with the former’s intimation of neglectful intent. And it was agreed that operators should employ short rather than long rings when placing a call because “a short ring excites the curiosity of the subscriber.”

It wasn’t that no one was interested in an automated alternative to manual exchanges. The latter were inherently inefficient; a rule of thumb said that one operator was required during peak hours for every 100 telephone subscribers on a network, constituting an enormous financial drain on service providers even given the minimal salaries they paid to these employees. Despite this ample incentive, the problem kept engineers stymied for years. It was first partially solved by, of all people, an undertaker living in Kansas City, Missouri. Coming along in the last decade of the nineteenth century, Almon B. Strowger was one of the last of the breed of maverick independent inventors cum entrepreneurs who had built the telegraphy and telephony industries in earlier decades, a breed soon to give way once and for all to the corporate institutionalists.


Almon B. Strowger

That said, Strowger conformed to no one’s stereotype of the genius inventor. Already 50 years old at the time of his achievement, he was a crotchety character whose irascibility verged on paranoia. The stage was set for his stroke of genius when he became convinced that the operators at his local telephone exchange had it in for him, and were deliberately misrouting his calls or not even bothering to place them. (If the anecdotes about his personality are anything to go by, there was perhaps another reason that so few people wanted to talk to him…) One of the operators was the wife of his principal rival in the undertaking business; he believed she was routing his potential customers’ calls to her husband’s establishment instead of his own.

So, he set out to remove the human operator from the equation altogether. His pique and grievance became the impetus behind the first workable automated switching system in the field of telephony.

Imagine a telephone whose cable terminates in a rotating electro-mechanical switch or relay, which looks rather like a windshield wiper. There is a button on the telephone. Every time the user presses it, a pulse of current goes down the line which causes the wiper to rotate one step, making a connection with a different receiving telephone. When the user has pressed the button a number of times corresponding to the “phone number” of the person she wishes to call, she presses a second button to cause that phone to ring, and proceeds to have a conversation. When she sets her phone down again, a switch is triggered that resets the system, dropping the wiper back to its home position in preparation for the next call. This is the Strowger system in its most basic form. Routing is still based on changing the physical connections between wires, but those physical changes are themselves now driven by electricity. For this reason, we call it an “electro-mechanical” design.

A very basic single-stage Strowger switch.

A network of more than ten or so nodes would be irredeemably tedious for the end-user of such a system, what with all the button-pressing it would require. But, crucially, the system could also be expanded by wiring more relays into it, and adding more buttons to the individual phones to control them. The system which Strowger first publicly demonstrated, for example, used two relay/button combinations to accommodate up to 100 phones, each with a unique two-digit number; the user tapped out the tens digit on one button, the ones digit on the other. In principle, the system could be extended to infinity by wiring yet more relays and buttons into the circuit.
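The stepping logic described above is easy to see in miniature. Below is a toy simulation of a two-stage Strowger-style switch; the class and method names are my own inventions for illustration, not period terminology, and a real switch was of course built from relays and ratchets rather than software.

```python
# Toy simulation of a two-stage, pulse-driven Strowger-style switch.
# Purely illustrative: names and structure are invented for this sketch.

class StrowgerSwitch:
    """Routes a caller to one of 100 lines via two pulse-driven wipers."""

    def __init__(self):
        self.tens = 0   # position of the first (tens-digit) wiper
        self.ones = 0   # position of the second (ones-digit) wiper

    def pulse_tens(self, count):
        # Each pulse of current steps the first wiper one contact onward.
        self.tens = (self.tens + count) % 10

    def pulse_ones(self, count):
        self.ones = (self.ones + count) % 10

    def connect(self):
        # The second button "rings" whichever line the wipers now rest on.
        return self.tens * 10 + self.ones

    def hang_up(self):
        # Setting the phone down releases the relays back to home position.
        self.tens = self.ones = 0

switch = StrowgerSwitch()
switch.pulse_tens(4)     # caller presses the tens button four times
switch.pulse_ones(2)     # then the ones button twice
print(switch.connect())  # connects to line 42
switch.hang_up()
```

Extending the scheme is just a matter of chaining more wiper stages, which is exactly how Strowger's design scaled from 100 lines toward arbitrarily large exchanges.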

Strowger was awarded a patent for his invention on March 10, 1891, and formed his own company soon after to exploit it. The first fully automated telephone exchange opened in La Porte, Indiana, on November 3, 1892. It was billed as the “girl-less, cuss-less, and wait-less telephone.” Strowger’s company would continue in the exchange business until 1983, first under the name of the Strowger Automatic Telephone Exchange Company and then as simply Automatic Electric.

But automated telephone exchanges would remain the exception to the rule for a long time after 1892; most people understandably preferred speaking a number to a fellow human being over pecking out long strings of digits manually and hoping for the best. Not until the 1920s would automated exchanges come to outnumber the manual ones, relegating the job of telephone operator to that of an occasional provider of information or extra help rather than the essential conduit of every single call. The key breakthrough that finally led to automated telephony’s widespread acceptance was the replacement of Strowger’s push buttons with spring-loaded dials; such “rotary phones” would remain the standard for decades to come, and would continue to function into the 1980s and beyond.

Rotary telephones like this one replaced buttons with a spring-loaded dial that sent the necessary bursts of electricity to move the switching relays at the exchange as it spun back to its resting position.



In the meantime, telephony made do with the manual exchanges. All of their inefficiencies and infelicities were thoroughly outweighed by the magic of the telephone itself. By the turn of the century, 1.4 million telephones were in service in the United States, and 25,000 or more girls and women were employed as operators. The impact of the telephone was different in nature from that of the telegraph, but no less socially significant. While it perhaps didn’t have the same immediate transformative effect on big business and international diplomacy, it was a vastly more democratic instrument, making a far more tangible change in the lives of its millions of individual users. The telegraph was a service, and thus to a large extent an abstraction; the telephone was a personally empowering technology, one you could literally hold in your hand.

Like the smartphones and tablets of our own day, telephones were condemned by certain segments of the intelligentsia, for destroying the old art of letter writing and for being a nuisance and a distraction from the truly important things in life; one article called them “an unmitigated domestic curse,” only good for “the exchange of twaddle between foolish women.” In another uncanny harbinger of more recent history, local newspapers fretted that telephones would slake the public’s thirst for their articles, columns, and calendars. (Unlike our more recent history, such fears would prove largely unfounded in this case.)

But the people couldn’t get enough of the telephone. American Bell — as Bell Telephone was now known, having adopted the new name in 1880 — was rather surprised to discover that the allegedly backward, rural areas of the country actually took to the telephone more readily than many of the nation’s urban centers. Farmers and particularly farmers’ wives, some of whom had heretofore been accustomed to going months at a time without talking to anyone outside their household, jumped on the telephone like a Titanic survivor on a lifeboat. The rural exchanges fostered a welcome new sense of community, becoming deeply embedded in the lives of the people they served, spreading news and gossip to all and sundry. Before Siri and “Hey, Google!,” there was the friendly local telephone operator to play the role of personal assistant, as captured in one housewife’s dialog from a gently satirical magazine article: “Oh, Central! Ring me up in fifteen minutes, so I don’t forget to take the bread out of the oven.” “Central, ring me up half an hour before the 2:17 train in the morning. See if it’s late before you call, please.”


For all the social changes it wrought, telephony extended its range much more slowly than telegraphy had. Cyrus Field’s transatlantic telegraph line had come to be just 22 years after the first telegraph line of any stripe was placed in service. The first transatlantic phone call, by contrast, didn’t take place until January 7, 1927, almost precisely 50 years after Roswell C. Downer had become the first person to have a telephone installed in his home. The delay was down to the nature of the two technologies.

The electrification of the Western world was in full swing at the turn of the century, to telephony’s immense benefit: hand-cranked magnetos and discrete batteries disappeared as companies like American Bell began to flood their networks with current from the grid. But the complex waveforms of telephony required much more power than a telegraph signal to travel an equivalent distance, due to a phenomenon known as attenuation: the tendency of a waveform to shed its peaks and valleys of amplitude and collapse toward uniformity as it travels farther and farther. Attenuation is in fact the same phenomenon in the broad strokes as the “signal retardation” which dogged the early days of undersea telegraphy, but it was never really an issue in terrestrial telegraphy, what with its staccato on-off approach to signaling. It could, however, play havoc with a sound waveform on a wire. The only way anyone knew of to fight attenuation was to add more power to the circuit, which in turn required thicker and thicker cables made of pure copper. This made the telephone into a peculiarly localized technology for instantaneous communication; it could and did foster a new sense of togetherness within communities, but struggled to reach between them. For decades, the American telephone network writ large was actually a bunch of local networks, connected to their peers if at all by just one or two long-distance lines.
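To see why attenuation punished telephony so much harder than telegraphy, it helps to sketch the standard textbook model, in which amplitude falls off exponentially with distance. The decay constant below is invented purely for illustration; real figures depended on cable gauge, insulation, and signal frequency.

```python
import math

# Textbook attenuation model: amplitude decays exponentially with distance,
# A(x) = A0 * exp(-a * x). The constant a is a made-up illustrative figure,
# not a measured property of any real cable.
A0 = 1.0   # amplitude at the sending end (arbitrary units)
a = 0.01   # attenuation per mile (illustrative)

def amplitude(miles):
    return A0 * math.exp(-a * miles)

# A telegraph sounder only has to distinguish "current" from "no current",
# so a pulse arriving at a fraction of its original strength still registers.
# A voice waveform must keep its whole shape intact, which fails long before
# the signal disappears entirely.
for miles in (50, 300, 1000):
    print(f"{miles:4d} miles: {amplitude(miles):.4f}")
```

In this toy model a telegraph click at 300 miles still clears a crude on-off threshold, while the nuances of a voice waveform are already lost; adding raw power and thicker copper only shifts the curve rightward, which is why repeaters, not bigger batteries, were the eventual answer.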

Although the market for local telephone service became much more competitive after the expiration of the first of Alexander Graham Bell’s telephone patents in 1891, American Bell remained the 800-pound gorilla. The Bell executives had realized even well before that date that long-distance telephony was an area where their superior resources combined with their head start in the telephone business could allow them to sustain their monopoly without leaning on the crutch of patent law. Accordingly, American Bell on February 28, 1885, had formed a new subsidiary to specialize in long-distance telephony, with a name destined to outlive even that of its parent: the American Telephone and Telegraph Company, better known then and now as AT&T.[1]

The thick, custom-made cables that AT&T employed were expensive to buy and string up, and could only carry one call at a time. These realities were reflected in the prices AT&T charged its subscribers: a ten-minute call over the 292-mile line from Boston to New York City — the longest and most celebrated line on the network at the turn of the century — cost $2 during the day or $1 at night. These were prices that only bankers and investors and other members of the well-heeled set could afford. Long-distance telephony would continue to be their prerogative alone for quite some time to come. Everyone else would have to rely on the telegraph or the even more old-fashioned medium of the hand-written paper missive for their long-distance communications needs. And needless to say, there was little point in thinking about a transatlantic telephone line while the length of even a terrestrial line was limited to 300 miles at the outside.

Rather than crossing the Atlantic, telephony’s overarching goal became to bridge the continent — to string a single telephone cable from the East to the West Coast. In addition to its practical utility, it would be an achievement of immense symbolic significance, a sort of telephonic parallel to the famous driving of the golden spike that had marked the completion of the transcontinental railroad in 1869.

One milestone came courtesy of a Serbian immigrant named Mihajlo Pupin. In 1900, he patented something called a loading coil, which, when placed at intervals along a telephone wire, could greatly reduce if not entirely eliminate a signal’s attenuation by magnetically increasing its inductance, or resistance to change. But there were limits to what loading coils could do. In combination with a very thick cable, they were enough to get a signal from New York City to Denver, but it couldn’t be coaxed any further. What was needed was an equivalent to Samuel Morse’s old telegraphic concept of the repeater: a way of actively boosting a signal as it traveled down a wire. Unfortunately, the simple system of discrete circuits joined by electromagnetic switches which Morse had proposed, and which had indeed become commonplace on telegraph lines by now, was useless for telephony, being unable to preserve the character of an audio waveform.

Then, in 1906, a researcher named Lee De Forest proposed something he called an audion. It was nothing less than the world’s first self-contained audio amplifier, itself a form of vacuum tube, a technology that would become hugely important outside as well as inside of telephony in the decades to come. The engineers at AT&T realized that it should be possible to install these audions — or simply repeaters, as they would quickly become known — along a terrestrial telephone line to make the voices it carried travel absolutely any distance. The details turned out to be a little bit more complicated than they first appeared, as generally happens in any form of engineering, but AT&T found a way to make it work at last. The company’s marketers came up with the perfect way to mark the occasion.

Alexander Graham Bell, center, prepares to make the first transcontinental phone call.

On January 25, 1915, a 67-year-old Alexander Graham Bell, stouter and grayer than once upon a time but still bursting with his old Scottish bonhomie, picked up a telephone before assembled press and public in New York City. “Hoy! Hoy!” he said in his booming brogue. (From the first days of his invention until the end of his own days, Bell loathed the standard telephonic greeting of “Hello.”) “Mr. Watson? Are you there? Do you hear me?”

In front of another assemblage in San Francisco, Bell’s old friend and helper Thomas A. Watson answered him. “Yes, Mr. Bell. I hear you perfectly. Do you hear me well?”

“Yes, your voice is perfectly distinct,” said Bell. “It is as clear as if you were in New York.”

Inevitably, Bell was soon cajoled into repeating those famous first words ever spoken into a working telephone: “Mr. Watson, come here. I want to see you.” Whereupon Watson noted that, instead of seven seconds, the journey would now take him seven days. It may not have been a transatlantic link quite yet, but it did feel like a culmination of sorts.



Alexander Graham Bell and Thomas Watson weren’t the only ones on the line that memorable day. Theodore N. Vail, the erstwhile mastermind of Bell Telephone’s successful legal campaign against Western Union, had returned after a lengthy hiatus to serve as president of the company once again in 1907. He listened in to the historic conversation from a telephone on Jekyll Island, Georgia, where he was convalescing from the heart and kidney afflictions that would kill him in 1920.

But before his death, Vail established a new research-and-development division unlike any seen before in corporate America, a place designed to bring the best engineers in the country together and give them carte blanche to solve problems that the world might not even know it had yet. It would become known as Bell Labs, at first informally and then officially, and it would do much to shape the course of not just communications but the entirety of technology — not least the field of computing — over the balance of the twentieth century.

On its home turf of telephony, Bell Labs steadily improved the state of the art of automated switching and developed techniques for multiplexing, so that calls could be routed together along trunk lines instead of always requiring a wire of their own. And it devised ways to integrate Italian inventor Guglielmo Marconi’s technology of wireless radio with the network, in order to bridge gaps where wired telephony simply wouldn’t serve. Because no one had yet found a way of installing repeaters on an undersea cable, a transatlantic connection would have to depend on these new techniques of “radiotelephony.”

The call of January 7, 1927, was a curiously muted affair in contrast to the completion of the first transatlantic telegraph cable or even the first transcontinental phone call, involving no greater luminaries than Walter S. Gifford, Vail’s successor as president of American Bell and AT&T, and Evelyn P. Murray, the head of the British mail service, which held a government-granted monopoly over telephony in that country. Nevertheless, it was a landmark moment; while Alexander Graham Bell’s dream of easy, casual conversation across an ocean was still decades away from fulfillment, a conversation was at least possible now, four and a half years after his death. Wireless links such as the one which facilitated this conversation would remain a vital part of the telephone networks of the future, whether in the form of conventional radio waves, microwave beams, or satellite feeds. “Distance doesn’t mean anything anymore,” said one of the engineers behind the first transatlantic call. “We are on the verge of a very high-speed world.” Truer words were never spoken.



Outside of telephony, the Bell Labs boffins created the first motion-picture projector with audio as well as video, and saw it used in 1927’s The Jazz Singer, that harbinger of a new era of cinema. That same year — a banner one in its history — Bell Labs conducted the first American demonstration of television, starring Secretary of Commerce (and future President) Herbert Hoover. Two years later, it broadcast television for the first time in color. AT&T and American Bell may very well have extended their telephone empire to television in the next decade, had the Great Depression not intervened to put the damper on the consumer economy.

As it was, the fallout from the stock-market crash of late 1929 slowed the march of technology, but could hardly turn back the hands of time. By that point there were more than 15 million telephones in service under the auspices of American Bell alone. Their numbers dropped for a while in the aftermath of the crash, but relatively modestly. By 1937, there were more telephones than ever in the United States and, indeed, around the world.

A review of the literature surrounding the telephone during the decade provides yet more evidence that the concerns surrounding the trendy communications mediums of our own age are not as unique as we might like to think. It seems that worries about communications technologies leading to a dumbing-down of the populace and egotism running rampant did not begin with Facebook and Instagram. A sociological study of 1000 telephone conversations, for example, revealed with horror that only 2240 separate words were used in the course of all of them, which amounted to no more than 10 percent of the words heretofore considered fairly commonplace in English. Worse, the most frequently used words of all were “I” and “me.”

On a more positive note, the telephone was promoted — perchance a bit excessively — as the Great Leveler which would allow the proverbial little people to communicate directly with the movers and shakers of the world, just as Twitter and its ilk sometimes are today. An Ohioan with the delightfully folksy name of Abe Pickens took this lesson to heart, attempting to call up Francisco Franco, Benito Mussolini, Neville Chamberlain, Emperor Hirohito, and Adolf Hitler among others to give them a piece of his mind. He reportedly did manage to get himself connected directly to Hitler at one point, but Pickens spoke no German and Hitler spoke no English; the baffled Führer quickly fobbed his interlocutor off on an aide. Sadly, Pickens did not succeed in preventing World War II.

Even by this late date, the telephone had not yet annihilated its more static predecessor the telegraph. Western Union’s tacit bargain with Bell Telephone of 1878 — you take telephony, we’ll take telegraphy — could still be construed as a wise move on the part of both, in that both companies were still hugely powerful and hugely profitable. The field of journalism remained completely in thrall to telegraphy, as did large swaths of government and business. During the war to come, telegraphy would provide a precious lifeline to loved ones back home for countless soldiers serving in faraway places where telephones couldn’t reach. Still, the telegraph had now become a legacy technology, destined only for stagnation and gradual decline. The future lay in telephony.

This sprawling amalgamation of transmitters, receivers, lines, switches, and gates was one of the wonders of its world — so wondrous that it can still inspire awe when we step back to really think about it today. You could pick up a phone at any arbitrary location and, by dialing some numbers and perhaps talking with an operator or two, make a connection with any arbitrary other phone elsewhere in your country — or in many cases elsewhere on your continent or even planet. And then you could chat with the person who answered that other phone as if the two of you were sitting together in the same parlor. If you ask me, this is still amazing.

The technological web which allowed such interconnections was arguably the most complex thing yet created by human ingenuity — so complex that no one fully understood all of its nooks and crannies. The fact that it actually worked was flabbergasting; the fact that it did so less than a century after Samuel Morse had first figured out how to send single bursts of electric current down a single wire was nothing short of mind-blowing. When we look at it today, when we think about its bustling dynamism, its little packets of conversation and meaning flying to and fro, it’s easy to see it as a sort of massive cyber-organic computer, doing the work of the world. If most contemporary people weren’t discussing the telephone network in those terms, it was because half of the analogy literally didn’t yet exist for them: the concept of an “anything machine” in the form of a programmable computer, while by no means a new one in some academic and intellectual circles, was still a foreign one to the general public.

But it wasn’t foreign to a young man named Claude Shannon.


Anything but a stuffy academic, Claude Shannon was one of the archetypes of the playful hacker spirit which would fully emerge at MIT during the postwar years. “When researchers at the Massachusetts Institute of Technology or Bell Laboratories had to leap aside to let a unicycle pass,” writes James Gleick in The Information, “that was Claude Shannon.”

Shannon had grown up on a farm in rural Michigan, tinkering with homemade telegraphs that repurposed barbed-wire fences for communication. After taking a bachelor’s degree in electrical engineering and mathematics from the University of Michigan, he came to the Massachusetts Institute of Technology as a 20-year-old prodigy in 1936, having been personally recruited by Dean of Engineering Vannevar Bush to work on the Differential Analyzer, a 100-ton semi-programmable analog calculating machine designed to relieve the grunt work of solving complex mathematical problems. Inside Shannon’s fecund mind, the Differential Analyzer collided with his abiding interest in telegraphy and telephony and his memories of a class he had taken in Michigan on symbolic logic, and out popped “A Symbolic Analysis of Relay and Switching Circuits,” a paper which has been called “the most important master’s thesis of the twentieth century.”

Within his thesis, Shannon presented a plan for an electro-mechanical computer built around the digital logic of ones and zeroes — a machine far more flexible than the likes of the Differential Analyzer, yet one that required only the off-the-shelf equipment of telephony rather than the many bespoke wheels and gears of its gargantuan steampunk inspiration. Shannon’s pivotal insight was that switches on a circuit could not only route information but constitute information: an open switch could indicate a one, a closed switch a zero, and everything else could be built up from there. Abstract logic could be rendered concrete in circuitry: “Any operation that can be completely described in a finite number of steps using the words ‘if,’ ‘or,’ ‘and,’ etc., can be done automatically with relays.” I should hasten to clarify that the only way to reprogram one of Shannon’s hypothetical computers was to physically rewire it — effectively to remake it into a brand new machine. And again, it was still at bottom an electro-mechanical rather than a purely electrical device. Still, it was a major milestone on the road to the modern digital computer.
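Shannon's correspondence between circuits and logic can be sketched in a few lines. The snippet below uses the modern convention in which a closed switch is a 1 (current flows); Shannon's own thesis used the inverse "hindrance" notation, in which an open switch is a 1, but the algebra works out the same. The function names are mine, for illustration only.

```python
# A minimal sketch of Shannon's insight: networks of switches compute
# boolean functions. A switch here is just a bool, using the modern
# convention (closed = 1 = current flows), the inverse of the "hindrance"
# notation in Shannon's own thesis.

def series(x, y):
    # Two switches in series pass current only if both are closed: AND.
    return x and y

def parallel(x, y):
    # Two switches in parallel pass current if either is closed: OR.
    return x or y

def relay_not(x):
    # A relay's "break" contact conducts when the coil is NOT energized.
    return not x

# Any operation describable with "if," "or," "and," etc. can be wired up
# from these pieces. For example, a half adder, the first step toward
# doing arithmetic with relays:
def half_adder(x, y):
    carry = series(x, y)                      # AND
    sum_ = parallel(series(x, relay_not(y)),  # XOR built from AND/OR/NOT
                    series(relay_not(x), y))
    return sum_, carry

print(half_adder(True, True))  # one plus one: sum 0, carry 1
```

Chaining such adders digit by digit yields a relay circuit that adds binary numbers, which is precisely the kind of construction Shannon's thesis worked through.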

The technologies of telephony would continue to be repurposed to suit the needs of the burgeoning field of computing in the years that followed. The vacuum tubes that served American Bell so well for so long, for example, found a new application at the heart of the first programmable digital computers of the postwar era. And that technology in turn gave way to another one first developed for telephony: the transistor, which was invented at Bell Labs in 1947 and went on to become, as John Brooks wrote in 1976, “the key to modern electronics,” facilitating everything from hearing aids to the Moon landing. The transistor also lay behind the first wave of truly widespread institutional computing, over the two decades prior to the arrival of personal computers on the scene in the late 1970s.

But these developments, important though they are, are not the main reason I’ve chosen to tell the story of the analog technologies of the telegraph and telephone on a site about the history of digital culture. I’ve rather done so because computer engineers did more than borrow from the tool kits of the electrical-communications infrastructure of their day: they also came to borrow the existing communication networks themselves. This was the result of an insight which seems so self-evident as to be almost banal once it has been grasped, but which took the brilliant mind of Claude Shannon to appreciate and articulate for the first time: the fact that an electric current which could carry the dots and dashes of Morse code or the sound of a human voice could be made to carry any kind of information. This simple realization was the key that opened the door to the Internet.

(Sources: the books Alexander Graham Bell and the Conquest of Solitude by Robert V. Bruce, Telephone: The First Hundred Years by John Brooks, Good Connections: A Century of Service by the Men and Women of Southwestern Bell by David G. Park Jr., From Gutenberg to the Internet: A Sourcebook on the History of Information Technology edited by Jeremy M. Norman, The Information by James Gleick, The Dream Machine by M. Mitchell Waldrop, and The Practical Telephone Exchange Handbook by Joseph Poole. Online sources include Bob’s Old Phones by Bob Estreich, “Telephone History” by Tom Farley, “Telephone Switches” by Mark Csele, “The Strowger Telecomms Page” of SEG Communications, and “Today in History: The First Transatlantic Phone Call” by Priscilla Escobedo for UTA Libraries.)

Footnotes

1 Even at the time of its inception, the name behind the acronym was anomalous if not meaningless, given that AT&T had no holdings in telegraphy; AT&T was content to leave that monopoly to Western Union. The name is perhaps best explained as a warning shot across Western Union’s bows, in case it should ever feel tempted to reenter the telephone market…