The Next Generation in Graphics, Part 2: Three Dimensions in Hardware

Most of the academic papers about 3D graphics that John Carmack so assiduously studied during the 1990s stemmed from, of all times and places, the Salt Lake City, Utah, of the 1970s. This state of affairs was a credit to one man by the name of Dave Evans.

Born in Salt Lake City in 1924, Evans was a physicist by training and an electrical engineer by inclination, who found his way to the highest rungs of computing research by way of the aviation industry. By the early 1960s, he was at the University of California, Berkeley, where he did important work in the field of time-sharing, taking the first step toward the democratization of computing by making it possible for multiple people to use one of the ultra-expensive big computers of the day at the same time, each of them accessing it through a separate dumb terminal. During this same period, Evans befriended one Ivan Sutherland, who deserves perhaps more than any other person the title of Father of Computer Graphics as we know them today.

For, in the course of earning his PhD at MIT, Sutherland developed a landmark software application known as Sketchpad, the first interactive computer-based drawing program of any stripe. Sketchpad did not do 3D graphics. It did, however, record its user’s drawings as points and lines on a two-dimensional plane. The potential for adding a third dimension to its Flatland-esque world — a Z coordinate to go along with X and Y — was lost on no one, least of all Sutherland himself. His 1963 thesis on Sketchpad rocketed him into the academic stratosphere.

Sketchpad in action.

In 1964, at the ripe old age of 26, Sutherland succeeded J.C.R. Licklider as head of the computer division of the Defense Department’s Advanced Research Projects Agency (ARPA), the most remarkable technology incubator in computing history. Alas, he proved ill-suited to the role of administrator: he was too young, too introverted — just too nerdy, as a later generation would have put it. But during the unhappy year he spent there before getting back to the pure research that was his real passion, he put the University of Utah on the computing map, largely as a favor to his friend Dave Evans.

Evans may have left Salt Lake City more than a decade before, but he remained a devout Mormon, who found the counterculture values of the Berkeley of the 1960s rather uncongenial. So, he had decided to take his old alma mater up on an offer to come home and build a computer-science department there. Sutherland now awarded said department a small ARPA contract, one fairly insignificant in itself. What was significant was that it brought the University of Utah into the ARPA club of elite research institutions that were otherwise clustered on the coasts. An early place on the ARPANET, the predecessor to the modern Internet, was not the least of the perks which would come its way as a result.

Evans looked for a niche for his university amidst the august company it was suddenly joining. The territory of time-sharing was pretty much staked; extensive research in that field was already going full steam ahead at places like MIT and Berkeley. Ditto networking and artificial intelligence and the nuts and bolts of hardware design. Computer graphics, though… that was something else. There were smart minds here and there working on them — count Ivan Sutherland as Exhibit Number One — but no real research hubs dedicated to them. So, it was settled: computer graphics would become the University of Utah’s specialty. In what can only be described as a fantastic coup, in 1968 Evans convinced Sutherland himself to abandon the East Coast prestige of Harvard, where he had gone after leaving his post as the head of ARPA, in favor of the Mormon badlands of Utah.

Things just snowballed from there. Evans and Sutherland assembled around them an incredible constellation of bright young sparks, who over the course of the next decade defined the terms and mapped the geography of the field of 3D graphics as we still know it today, writing papers that remain as relevant today as they were half a century ago — or perchance more so, given the rise of 3D games. For example, the two most commonly used algorithms for calculating the vagaries of light and shade in 3D games stem directly from the University of Utah: Gouraud shading was invented by a Utah student named Henri Gouraud in 1971, while Phong shading was invented by another named Bui Tuong Phong in 1973.

But of course, lots of other students passed through the university without leaving so indelible a mark. One of these was Jim Clark, who would still be semi-anonymous today if he hadn’t gone on to become an entrepreneur who co-founded two of the most important tech companies of the late twentieth century.



When you’ve written as many capsule biographies as I have, you come to realize that the idea of the truly self-made person is for the most part a myth. Certainly almost all of the famous names in computing history were, long before any of their other qualities entered into the equation, lucky: lucky in their time and place of birth, in their familial circumstances, perhaps in (sad as it is to say) their race and gender, definitely in the opportunities that were offered to them. This isn’t to disparage their accomplishments; they did, after all, still need to have the vision to grasp the brass ring of opportunity and the talent to make the most of it. Suffice to say, then, that luck is a prerequisite but the farthest thing from a guarantee.

Every once in a while, however, I come across someone who really did almost literally make something out of nothing. One of these folks is Jim Clark. If today as a soon-to-be octogenarian he indulges as enthusiastically as any of his Old White Guy peers in the clichéd trappings of obscene wealth, from the mansions, yachts, cars, and wine to the Victoria’s Secret model he has taken for a fourth wife, he can at least credibly claim to have pulled himself up to his current station in life entirely by his own bootstraps.

Clark was born in 1944, in a place that made Salt Lake City seem like a cosmopolitan metropolis by comparison: the small Texas Panhandle town of Plainview. He grew up dirt poor, the son of a single mother living well below the poverty line. Nobody expected much of anything from him, and he obliged their lack of expectations. “I thought the whole world was shit and I was living in the middle of it,” he recalls.

An indifferent student at best, he was expelled from high school his junior year for telling a teacher to go to hell. At loose ends, he opted for the classic gambit of running away to sea: he joined the Navy at age seventeen. It was only when the Navy gave him a standardized math test, and he scored the highest in his group of recruits on it, that it began to dawn on him that he might actually be good at something. Encouraged by a few instructors to pursue his aptitude, he enrolled in correspondence courses to fill his free time when out plying the world’s oceans as a crewman on a destroyer.

Ten years later, in 1971, the high-school dropout, now six years out of the Navy and married with children, found himself working on a physics PhD at Louisiana State University. Clark:

I noticed in Physics Today an article that observed that physicists getting PhDs from places like Harvard, MIT, Yale, and so on didn’t like the jobs they were getting. And I thought, well, what am I doing — I’m getting a PhD in physics from Louisiana State University! And I kept thinking, well, I’m married, and I’ve got these obligations. By this time, I had a second child, so I was real eager to get a good job, and I just got discouraged about physics. And a friend of mine pointed to the University of Utah as having a computer-graphics specialty. I didn’t know much about it, but I was good with geometry and physics, which involves a lot of geometry.

So, Clark applied for a spot at the University of Utah and was accepted.

But, as I already implied, he didn’t become a star there. His 1974 thesis was entitled “3D Design of Free-Form B-Spline Surfaces”; it was a solid piece of work addressing a practical problem, but not anything to really get the juices flowing. Afterward, he spent half a decade bouncing around from campus to campus as an adjunct professor: the Universities of California at Santa Cruz and Berkeley, the New York Institute of Technology, Stanford. He was fairly miserable throughout. As an academic of no special note, he was hired primarily as an instructor rather than a researcher, and he wasn’t at all cut out for the job, being too impatient, too irascible. Proving the old adage that the child is the father of the man, he was fired from at least one post for insubordination, just like that angry teenager who had once told off his high-school teacher. Meanwhile he went through not one but two wives. “I was in this kind of downbeat funk,” he says. “Dark, dark, dark.”

It was now early 1979. At Stanford, Clark was working right next door to Xerox’s famed Palo Alto Research Center (PARC), which was inventing much of the modern paradigm of computing, from mice and menus to laser printers and local-area networking. Some of the colleagues Clark had known at the University of Utah were happily ensconced over there. But he was still on the outside looking in. It was infuriating — and yet he was about to find a way to make his mark at last.

Hardware engineering at the time was in the throes of a revolution and its backlash, over a technology that went by the mild-mannered name of “Very Large Scale Integration” (VLSI). The integrated circuit, which packed multiple transistors onto a single microchip, had been invented at Texas Instruments at the end of the 1950s, and had become a staple of computer design already during the following decade. Yet those early implementations often put only a relative handful of transistors on a chip, meaning that they still required lots of chips to accomplish anything useful. A turning point came in 1971 with the Intel 4004, the world’s first microprocessor — i.e., the first time that anyone put the entire brain of a computer on a single chip. Barely remarked at the time, that leap would result in the first kit computers being made available for home users in 1975, followed by the Trinity of 1977, the first three plug-em-in-and-go personal computers suitable for the home. Even then, though, there were many in the academic establishment who scoffed at the idea of VLSI, which required a new, in some ways uglier approach to designing circuitry. In a vivid illustration that being a visionary in some areas doesn’t preclude one from being a reactionary in others, many of the folks at PARC were among the scoffers. Look how far we’ve come doing things one way, they said. Why change?

A PARC researcher named Lynn Conway was enraged by such hidebound thinking. A rare female hardware engineer, she had made scant progress to date getting her point of view through to the old boys’ club that surrounded her at PARC. So, broadening her line of attack, she wrote a paper about the basic techniques of modern chip design, and sent it out to a dozen or so universities along with a tempting offer: if any students or faculty wished to draw up schematics for a chip of their own and send them to her, she would arrange to have the chip fabricated in real silicon and sent back to its proud parent. The point of it all was just to get people to see the potential of VLSI, not to push forward the state of the art. And indeed, just as she had expected, almost all of the designs she received were trivially simple by the standards of even the microchip industry of 1979: digital timekeepers, adding machines, and the like. But one was unexpectedly, even crazily complex. Alone among the submissions, it bore a precautionary notice of copyright, from one James Clark. He called his creation the Geometry Engine.

The Geometry Engine was the first and, it seems likely, only microchip that Jim Clark ever personally attempted to design in his life. It was created in response to a fundamental problem that had been vexing 3D modelers since the very beginning: that 3D graphics required shocking quantities of mathematical calculations to bring to life, scaling almost exponentially with the complexity of the scene to be depicted. And worse, the type of math they required was not the type that the researchers’ computers were especially good at.

Wait a moment, some of you might be saying. Isn’t math the very thing that computers do? It’s right there in the name: they compute things. Well, yes, but not all types of math are created equal. Modern computers are also digital devices, meaning they are naturally equipped to deal only with discrete things. Like the game of DOOM, theirs is a universe of stair steps rather than smooth slopes. They like integer numbers, not decimals. Even in the 1960s and 1970s, they could approximate the latter through a storage format known as floating point, but they dealt with these floating-point numbers at least an order of magnitude slower than they did whole numbers, as well as requiring a lot more memory to store them. For this reason, programmers avoided them whenever possible.

And it actually was possible to do so a surprisingly large amount of the time. Most of what computers were commonly used for could be accomplished using only whole numbers — for example, by using Euclidean division that yields a quotient and a remainder in place of decimal division. Even financial software could be built using integers only to count the total number of cents rather than floating-point values to represent dollars and cents. 3D-graphics software, however, was one place where you just couldn’t get around them. Creating a reasonably accurate mathematical representation of an analog 3D space forced you to use floating-point numbers. And this in turn made 3D graphics slow.
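To make the trick concrete, here is a minimal sketch of my own (not drawn from any historical program) of that integer-only bookkeeping: money lives in memory as whole cents, and ordinary integer quotient and remainder recover dollars and cents only when it is time to print them.

#include <stdio.h>

int main(void) {
    /* Store money as whole cents, never as a floating-point dollar amount. */
    long price_cents = 1999;            /* $19.99 */
    long quantity    = 3;
    long total_cents = price_cents * quantity;

    /* Integer quotient and remainder stand in for decimal division. */
    printf("Total: $%ld.%02ld\n", total_cents / 100, total_cents % 100);
    return 0;
}

No floating-point instruction is ever issued, which on the hardware of the era meant the difference between a fast program and a sluggish one.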

Jim Clark certainly wasn’t the first person to think about designing a specialized piece of hardware to lift some of the burden from general-purpose computer designs, an add-on optimized for doing the sorts of mathematical operations that 3D graphics required and nothing else. Various gadgets along these lines had been built already, starting a decade or more before his Geometry Engine. Clark was the first, however, to think of packing it all onto a single chip — or at worst a small collection of them — that could live on a microcomputer’s motherboard or on a card mounted in a slot, that could be mass-produced and sold in the thousands or millions. His description of his “slave processor” sounded disarmingly modest (not, it must be said, a quality for which Clark is typically noted): “It is a four-component vector, floating-point processor for accomplishing three basic operations in computer graphics: matrix transformations, clipping, and mapping to output-device coordinates [i.e., going from an analog world space to pixels in a digital raster].” Yet it was a truly revolutionary idea, the genesis of the graphical processing units (GPUs) of today, which are in some ways more technically complex than the CPUs they serve. The Geometry Engine still needed to use floating-point numbers — it was, after all, still a digital device — but the old engineering doctrine that specialization yields efficiency came into play: it was optimized to do only floating-point calculations, and only a tiny subset of all the ones possible at that, just as quickly as it could.
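For readers who want to see what those “three basic operations” amount to, here is a rough sketch in C of the same work done in software. The function names and structure are illustrative assumptions of mine, not the Geometry Engine’s actual design; they simply walk one vertex through transformation, a clipping test, and the mapping to device coordinates.

#include <stdio.h>

typedef struct { float x, y, z, w; } Vec4;   /* the four-component vector */

/* 1. Matrix transformation: multiply a column vector by a 4x4 matrix. */
Vec4 transform(const float m[4][4], Vec4 v) {
    Vec4 r;
    r.x = m[0][0]*v.x + m[0][1]*v.y + m[0][2]*v.z + m[0][3]*v.w;
    r.y = m[1][0]*v.x + m[1][1]*v.y + m[1][2]*v.z + m[1][3]*v.w;
    r.z = m[2][0]*v.x + m[2][1]*v.y + m[2][2]*v.z + m[2][3]*v.w;
    r.w = m[3][0]*v.x + m[3][1]*v.y + m[3][2]*v.z + m[3][3]*v.w;
    return r;
}

/* 2. Clipping test: is the point inside the canonical view volume? */
int inside_view_volume(Vec4 v) {
    return -v.w <= v.x && v.x <= v.w &&
           -v.w <= v.y && v.y <= v.w &&
           -v.w <= v.z && v.z <= v.w;
}

/* 3. Mapping to output-device coordinates: perspective divide, then
   scaling the result onto a raster of the given width and height. */
void to_device(Vec4 v, int width, int height, int *px, int *py) {
    float ndc_x = v.x / v.w;                       /* -1 .. +1 */
    float ndc_y = v.y / v.w;
    *px = (int)((ndc_x + 1.0f) * 0.5f * width);
    *py = (int)((1.0f - ndc_y) * 0.5f * height);   /* y grows downward */
}

int main(void) {
    float identity[4][4] = { {1,0,0,0}, {0,1,0,0}, {0,0,1,0}, {0,0,0,1} };
    Vec4 v = { 0.5f, -0.25f, 0.0f, 1.0f };
    v = transform(identity, v);                    /* in practice, a full camera matrix */
    if (inside_view_volume(v)) {
        int px, py;
        to_device(v, 640, 480, &px, &py);
        printf("pixel: (%d, %d)\n", px, py);
    }
    return 0;
}

Every line of it leans on floating-point multiplication and division, repeated for every vertex in a scene, which is exactly why Clark wanted dedicated silicon to do nothing else.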

The Geometry Engine changed Clark’s life. At last, he had something exciting and uniquely his. “All of these people started coming up and wanting to be part of my project,” he remembers. Always an awkward fit in academia, he turned his thinking in a different direction, adopting the mindset of an entrepreneur. “He reinvented his relationship to the world in a way that is considered normal only in California,” writes journalist Michael Lewis in a book about Clark. “No one who had been in his life to that point would be in it ten years later. His wife, his friends, his colleagues, even his casual acquaintances — they’d all be new.” Clark himself wouldn’t hesitate to blast his former profession in later years with all the fury of a professor scorned.

I love the metric of business. It’s money. It’s real simple. You either make money or you don’t. The metric of the university is politics. Does that person like you? Do all these people like you enough to say, “Yeah, he’s worthy?”

But by whatever metric, success didn’t come easy. The Geometry Engine and all it entailed proved a harder sell with the movers and shakers in commercial computing than it had with his colleagues at Stanford. It wasn’t until 1982 that he was able to scrape together the funding to found a company called Silicon Graphics, Incorporated (SGI), and even then he was forced to give 85 percent of his company’s shares to others in order to make it a reality. Then it took another two years after that to actually ship the first hardware.

The market segment SGI was targeting is one that no longer really exists. The machines it made were technically microcomputers, being built around microprocessors, but they were not intended for the homes of ordinary consumers, nor even for the cubicles of ordinary office workers. These were much higher-end, more expensive machines than those, even if they could fit under a desk like one of them. They were called workstation computers. The typical customer spent tens or hundreds of thousands of dollars on them in the service of some highly demanding task or another.

In the case of the SGI machines, of course, that task was almost always related to graphics, usually 3D graphics. Their expense wasn’t bound up with their CPUs; in the beginning, these were fairly plebeian chips from the Motorola 68000 series, the same line used in such consumer-grade personal computers as the Apple Macintosh and the Commodore Amiga. No, the justification of their high price tags rather lay with their custom GPUs, which even in 1984 already went far beyond the likes of Clark’s old Geometry Engine. An SGI GPU was a sort of black box for 3D graphics: feed it all of the data that constituted a scene on one side, and watch a glorious visual representation emerge at the other, thanks to an array of specialized circuitry designed for that purpose and no other.

Now that it had finally gotten off the ground, SGI became very successful very quickly. Its machines were widely used in staple 3D applications like computer-aided industrial design (CAD) and flight simulation, whilst also opening up new vistas in video and film production. They drove the shift in Hollywood from special effects made using miniature models and stop-motion techniques dating back to the era of King Kong to the extensive use of computer-generated imagery (CGI) that we see even in the purportedly live-action films of today. (Steven Spielberg and George Lucas were among SGI’s first and best customers.) “When a moviegoer rubbed his eyes and said, ‘What’ll they think of next?’,” writes Michael Lewis, “it was usually because SGI had upgraded its machines.”

The company peaked in the early 1990s, when its graphics workstations were the key to CGI-driven blockbusters like Terminator 2 and Jurassic Park. Never mind the names that flashed by in the opening credits; everyone could agree that the computer-generated dinosaurs were the real stars of Jurassic Park. SGI was bringing in over $3 billion in annual revenue and had close to 15,000 employees by 1993, the year that movie was released. That same year, President Bill Clinton and Vice President Al Gore came out personally to SGI’s offices in Silicon Valley to celebrate this American success story.

SGI’s hardware subsystem for graphics, the beating heart of its business model, was known in 1993 as the RealityEngine2. This latest GPU was, wrote Byte magazine in a contemporary article, “richly parallel,” meaning that it could do many calculations simultaneously, in contrast to a traditional CPU, which could only execute one instruction at a time. (Such parallelism is the reason that modern GPUs are so often used for some math-intensive non-graphical applications, such as crypto-currency mining and machine learning.) To support this black box and deliver to its well-heeled customers a complete turnkey solution for all their graphics needs, SGI had also spearheaded an open standard for 3D programming, known as the Open Graphics Library, or OpenGL. Even the CPUs in its latest machines were SGI’s own; it had purchased a maker of same called MIPS Technologies in 1990.
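For a flavor of what programming against OpenGL looked like from the application side, here is the classic minimal example in C. It uses the GLUT toolkit for the window, which is not part of OpenGL proper, and it is of course my illustration rather than anything SGI shipped.

#include <GL/glut.h>

/* Draw a single shaded triangle using the original immediate-mode API. */
static void display(void) {
    glClear(GL_COLOR_BUFFER_BIT);
    glBegin(GL_TRIANGLES);
        glColor3f(1.0f, 0.0f, 0.0f); glVertex3f(-0.5f, -0.5f, 0.0f);
        glColor3f(0.0f, 1.0f, 0.0f); glVertex3f( 0.5f, -0.5f, 0.0f);
        glColor3f(0.0f, 0.0f, 1.0f); glVertex3f( 0.0f,  0.5f, 0.0f);
    glEnd();
    glFlush();
}

int main(int argc, char **argv) {
    glutInit(&argc, argv);
    glutCreateWindow("Hello, OpenGL");
    glutDisplayFunc(display);
    glutMainLoop();     /* hands control to the windowing event loop */
    return 0;
}

The appeal for developers was exactly the black-box quality described above: the program states what it wants drawn, and the library and whatever hardware sits beneath it worry about how.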

But all of this success did not imply a harmonious corporation. Jim Clark was convinced that he had been hard done by back in 1982, when he was forced to give up 85 percent of his brainchild in order to secure the funding he needed, then screwed over again when he was compelled by his board to give up the CEO post to a former Hewlett Packard executive named Ed McCracken in 1984. The two men had been at vicious loggerheads for years; Clark, who could be downright mean when the mood struck him, reduced McCracken to public tears on at least one occasion. At one memorable corporate retreat intended to repair the toxic atmosphere in the board room, recalls Clark, “the psychologist determined that everyone else on the executive committee was passive aggressive. I was just aggressive.”

Clark claims that the most substantive bone of contention was McCracken’s blasé indifference to the so-called low-end market, meaning all of those non-workstation-class personal computers that were proliferating in the millions during the 1980s and early 1990s. If SGI’s machines were advancing by leaps and bounds, these consumer-grade computers were hopscotching on a rocket. “You could see a time when the PC would be able to do the sort of graphics that [our] machines did,” says Clark. But McCracken, for one, couldn’t see it, was content to live fat and happy off of the high prices and high profit margins of SGI’s current machines.

He did authorize some experiments at the lower end, but his heart was never in it. In 1990, SGI deigned to put a limited subset of the RealityEngine smorgasbord onto an add-on card for Intel-based personal computers. The company called the card IrisVision and hopefully talked up its price of “under $5000,” which really was absurdly low by the company’s usual standards. What with its complete lack of software support and its way-too-high price for this marketplace, IrisVision went nowhere, whereupon McCracken took the failure as a vindication of his position. “This is a low-margin business, and we’re a high-margin company, so we’re going to stop doing that,” he said.

Despite McCracken’s indifference, Clark eventually managed to broker a deal with Nintendo to make a MIPS microprocessor and an SGI GPU the heart of the latter’s Nintendo 64 videogame console. But he quit after yet another shouting match with McCracken in 1994, two years before it hit the street.

He had been right all along about the inevitable course of the industry, however undiplomatically he may have stated his case over the years. Personal computers did indeed start to swallow the workstation market almost at the exact point in time that Clark bailed. The profits from the Nintendo deal were rich, but they were largely erased by another of McCracken’s pet projects, an ill-advised acquisition of the struggling supercomputer maker Cray. Meanwhile, with McCracken so obviously more interested in selling a handful of supercomputers for millions of dollars each than millions upon millions of consoles for a few hundred dollars each, a group of frustrated SGI employees left the company to help Nintendo make the GameCube, the followup to the Nintendo 64, on their own. It was all downhill for SGI after that, bottoming out in a 2009 bankruptcy and liquidation.

As for Clark, he would go on to a second entrepreneurial act as remarkable as his first, abandoning 3D graphics to make a World Wide Web browser with Marc Andreessen. We will say farewell to him here, but you can read the story of his second company Netscape’s meteoric rise and fall elsewhere on this site.



Now, though, I’d like to return to the scene of SGI’s glory days, introducing in the process three new starring players. Gary Tarolli and Scott Sellers were talented young engineers who were recruited to SGI in the 1980s; Ross Smith was a marketing and business-development type who initially worked for MIPS Technologies, then ended up at SGI when it acquired that company in 1990. The three became fast friends. Being of a younger generation, they didn’t share the contempt for everyday personal computers that dominated among their company’s upper management. Whereas the latter laughed at the primitiveness of games like Wolfenstein 3D and Ultima Underworld, if they bothered to notice them at all, our trio saw a brewing revolution in gaming, and thought about how much it could be helped along by hardware-accelerated 3D graphics.

Convinced that there was a huge opportunity here, they begged their managers to get into the gaming space. But, still smarting from the recent failure of IrisVision, McCracken and his cronies rejected their pleas out of hand. (One of the small mysteries in this story is why their efforts never came to the attention of Jim Clark, why an alliance was never formed. The likely answer is that Clark had, by his own admission, largely removed himself from the day-to-day running of SGI by this time, being more commonly seen on his boat than in his office.) At last, Tarolli, Sellers, Smith, and some like-minded colleagues ran another offer up the flagpole. You aren’t doing anything with IrisVision, they said. Let us form a spinoff company of our own to try to sell it. And much to their own astonishment, this time management agreed.

They decided to call their new company Pellucid — not the best name in the world, sounding as it did rather like a medicine of some sort, but then they were still green at all this. The technology they had to peddle was a couple of years old, but it still blew just about anything else in the MS-DOS/Windows space out of the water, being able to display 16 million colors at a resolution of 1024 X 768, with 3D acceleration built-in. (Contrast this with the SVGA card found in the typical home computer of the time, which could do 256 colors at 640 X 480, with no 3D affordances). Pellucid rebranded the old IrisVision the ProGraphics 1024. Thanks to the relentless march of chip-fabrication technology, they found that they could now manufacture it cheaply enough to be able to sell it for as little as $1000 — still pricey, to be sure, but a price that some hardcore gamers, as well as others with a strong interest in having the best graphics possible, might just be willing to pay.

The problem, the folks at Pellucid soon came to realize, was a well-nigh intractable deadlock between the chicken and the egg. Without software written to take advantage of its more advanced capabilities, the ProGraphics 1024 was just another SVGA graphics card, selling for a ridiculously high price. So, consumers waited for said software to arrive. Meanwhile software developers, looking at the as-yet non-existent installed base, saw no reason to begin supporting the card. Breaking this logjam would require a concentrated public-relations and developer-outreach effort, the likes of which the shoestring spinoff couldn’t possibly afford.

They thought they had done an end-run around the problem in May of 1993, when they agreed, with the blessing of SGI, to sell Pellucid, kit and caboodle, to a major up-and-comer in consumer computing known as Media Vision, which at the time sold “multimedia upgrade kits” consisting of CD-ROM drives and sound cards. But Media Vision’s ambitions knew no bounds: they intended to branch out into many other kinds of hardware and software. With proven people like Stan Cornyn, a legendary hit-maker from the music industry, on their management rolls and with millions and millions of dollars on hand to fund their efforts, Media Vision looked poised to dominate.

It seemed the perfect landing place for Pellucid; Media Vision had all the enthusiasm for the consumer market that SGI had lacked. The new parent company’s management said, correctly, that the ProGraphics 1024 was too old by now and too expensive to ever become a volume product, but that 3D acceleration’s time would come as soon as the current wave of excitement over CD-ROM and multimedia began to ebb and people started looking for the next big thing. When that happened, Media Vision would be there with a newer, more reasonably priced 3D card, thanks to the people who had once called themselves Pellucid. It sounded pretty good, even if in the here and now it did seem to entail more waiting around than anything else.

The ProGraphics 1024 board in Media Vision livery.

There was just one stumbling block: “Media Vision was run by crooks,” as Scott Sellers puts it. In April of 1994, a scandal erupted in the business pages of the nation’s newspapers. It turned out that Media Vision had been an experiment in “fake it until you make it” on a gigantic scale. Its founders had engaged in just about every form of malfeasance imaginable, creating a financial house of cards whose honest revenues were a minuscule fraction of what everyone had assumed them to be. By mid-summer, the company had blown away like so much dust in the wind, still providing income only for the lawyers who were left to pick over the corpse. (At least two people would eventually be sent to prison for their roles in the conspiracy.) The former Pellucid folks were left as high and dry as everyone else who had gotten into bed with Media Vision. All of their efforts to date had led to the sale of no more than 2000 graphics cards.

That same summer of 1994, a prominent Silicon Valley figure named Gordon Campbell was looking for interesting projects in which to invest. Campbell had earned his reputation as one of the Valley’s wise men through a company called Chips and Technologies (C&T), which he had co-founded in 1984. One of those hidden movers in the computer industry, C&T had largely invented the concept of the chipset: chips or small collections of them that could be integrated directly into a computer’s motherboard to perform functions that used to be placed on add-on cards. C&T had first made a name for itself by reducing IBM’s bulky nineteen-chip EGA graphics card to just four chips that were cheaper to make and consumed less power. Campbell’s firm thrived alongside the cost-conscious PC clone industry, which by the beginning of the 1990s was rendering IBM itself, the very company whose products it had once so unabashedly copied, all but irrelevant. Onboard video, onboard sound, disk controllers, basic firmware… you name it, C&T had a cheap, good-enough-for-the-average-consumer chipset to handle it.

But now Campbell had left C&T “in pursuit of new opportunities,” as they say in Valley speak. Looking for a marketing person for one of the startups in which he had invested a stake, he interviewed a young man named Ross Smith who had SGI on his résumé — always a plus. But the interview didn’t go well. Campbell:

It was the worst interview I think I’ve ever had. And so finally, I just turned to him and I said, “Okay, your heart’s not in this interview. What do you really want to do?”

And he kind of looks surprised and says, well, there are these two other guys, and we want to start a 3D-graphics company. And the next thing I know, we had set up a meeting. And we had, over a lot of beers, a discussion which led these guys to all come and work at my office. And that set up the start of 3Dfx.

It seemed to all of them that, after all of the delays and blind alleys, it truly was now or never to make a mark. For hardware-accelerated 3D graphics were already beginning to trickle down into the consumer space. In standup arcades, games like Daytona USA and Virtua Fighter were using rudimentary GPUs. Ditto the Sega Saturn and the Sony PlayStation, the latest in home-videogame consoles, both of which were on the verge of release in Japan, with American debuts expected in 1995. Meanwhile the software-only, 2.5D graphics of DOOM were taking the world of hardcore computer gamers by storm. The men behind 3Dfx felt that the next move must surely seem obvious to many other people besides themselves. The only reason the masses of computer-game players and developers weren’t clamoring for 3D graphics cards already was that they didn’t yet realize what such gadgets could do for them.

Still, they were all wary of getting back into the add-on board market, where they had been burned so badly before. Selling products directly to consumers required retail access and marketing muscle that they still lacked. Instead, following in the footsteps of C&T, they decided to sell a 3D chipset only to other companies, who could then build it into add-on boards for personal computers, standup-arcade machines, whatever they wished.

At the same time, though, they wanted their technology to be known, in exactly the way that the anonymous chipsets made by C&T were not. In the pursuit of this aspiration, Gordon Campbell found inspiration from another company that had become a household name despite selling very little directly to consumers. Intel had launched the “Intel Inside” campaign in 1990, just as the era of the PC clone was giving way to a more amorphous commodity architecture. The company introduced a requirement that the makers of computers which used its CPUs include the Intel Inside logo on their packaging and on the cases of the computers themselves, even as it made the same logo the centerpiece of a standalone advertising campaign in print and on television. The effort paid off; Intel became almost as identified with the Second Home Computer Revolution in the minds of consumers as was Microsoft, whose own logo showed up on their screens every time they booted into Windows. People took to calling the emerging duopoly the “Wintel” juggernaut, a name which has stuck around to this day.

So, it was decided: a requirement to display a similarly snazzy 3Dfx logo would be written into that company’s contracts as well. The 3Dfx name itself was a vast improvement over Pellucid. As time went on, 3Dfx would continue to display a near-genius for catchy branding: “Voodoo” for the chipset itself, “GLide” for the software library that controlled it. All of this reflected a business savvy the likes of which hadn’t been seen from Pellucid, that was a credit both to Campbell’s steady hand and the accumulating experience of the other three partners.

But none of it would have mattered without the right product. Campbell told his trio of protégés in no uncertain terms that they were never going to make a dent in computer gaming with a $1000 video card; they needed to get the price down to a third of that at the most, which meant the chipset itself could cost the manufacturers who used it in their products not much more than $100 a pop. That was a tall order, especially considering that gamers’ expectations of graphical fidelity weren’t diminishing. On the contrary: the old Pellucid card hadn’t even been able to do 3D texture mapping, a failing that gamers would never accept post-DOOM.

It was left to Gary Tarolli and Scott Sellers to figure out what absolutely had to be in there, such as the aforementioned texture mapping, and what they could get away with tossing overboard. Driven by the remorseless logic of chip-fabrication costs, they wound up going much farther with the tossing than they ever could have imagined when they started out. There could be no talk of 24-bit color or unusually high resolutions: 16-bit color (offering a little over 65,000 onscreen shades) at a resolution of 640 X 480 would be the limit.[1] Likewise, they threw out the capability of handling any polygons except for the simplest of them all, the humble triangle. For, they realized, you could make almost any solid you liked by combining triangular surfaces together. With enough triangles in your world — and their chipset would let you have up to 1 million of them — you needn’t lament the absence of the other polygons all that much.
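To see why losing the other polygons wasn’t much of a sacrifice, consider how even a simple quadrilateral gets fed to a triangles-only rasterizer. The little data structures below are purely illustrative, not anything from the Voodoo’s own interface.

#include <stdio.h>

typedef struct { float x, y, z; } Vertex;

int main(void) {
    /* One square face, defined by its four corners... */
    Vertex quad[4] = {
        { -1.0f, -1.0f, 0.0f },
        {  1.0f, -1.0f, 0.0f },
        {  1.0f,  1.0f, 0.0f },
        { -1.0f,  1.0f, 0.0f },
    };

    /* ...expressed as the two triangles a triangles-only chip expects:
       three indices into the vertex list per triangle, sharing one edge. */
    int triangles[2][3] = { { 0, 1, 2 }, { 0, 2, 3 } };

    for (int t = 0; t < 2; t++) {
        printf("triangle %d:", t);
        for (int i = 0; i < 3; i++) {
            Vertex v = quad[triangles[t][i]];
            printf(" (%.0f, %.0f, %.0f)", v.x, v.y, v.z);
        }
        printf("\n");
    }
    return 0;
}

Curved or irregular solids work the same way, just with more triangles; the chip never needs to know about anything but three vertices at a time.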

Sellers had another epiphany soon after. Intel’s latest CPU, to which gamers were quickly migrating, was the Pentium. It had a built-in floating-point co-processor which was… not too shabby, actually. It should therefore be possible to take the first phase of the 3D-graphics pipeline — the modeling phase — out of the GPU entirely and just let the CPU handle it. And so another crucial decision was made: they would concern themselves only with the rendering or rasterization phase, which was a much greater challenge to tackle in software alone, even with a Pentium. Another huge piece of the puzzle was thus neatly excised — or rather outsourced back to the place where it was already being done in current games. This would have been heresy at SGI, whose ethic had always been to do it all in the GPU. But then, they were no longer at SGI, were they?
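Here is a hedged sketch of that division of labor, under the assumption of a generic card interface: the CPU’s floating-point unit does the per-vertex geometry work in software, and only finished screen-space triangles go to the hardware. The submit_triangle_to_card() routine below is a stand-in of my own, not a real Glide call, and the printout merely takes the place of the rasterizer.

#include <stdio.h>

typedef struct { float x, y, z; } ObjVertex;      /* model space            */
typedef struct { float sx, sy; }  ScreenVertex;   /* pixels, after division */

/* Geometry phase, done in software on the CPU (the Pentium's FPU):
   a toy perspective projection onto a 640 X 480 raster. */
static ScreenVertex project(ObjVertex v) {
    float inv_z = 1.0f / (v.z + 4.0f);            /* camera sits 4 units back */
    ScreenVertex s;
    s.sx = 320.0f + v.x * inv_z * 320.0f;
    s.sy = 240.0f - v.y * inv_z * 240.0f;
    return s;
}

/* Rendering phase: in real life this is where the Voodoo takes over.
   Here a printout stands in for the hardware rasterizer. */
static void submit_triangle_to_card(ScreenVertex a, ScreenVertex b, ScreenVertex c) {
    printf("rasterize (%.1f,%.1f) (%.1f,%.1f) (%.1f,%.1f)\n",
           a.sx, a.sy, b.sx, b.sy, c.sx, c.sy);
}

int main(void) {
    ObjVertex tri[3] = { { -1, -1, 0 }, { 1, -1, 0 }, { 0, 1, 0 } };
    submit_triangle_to_card(project(tri[0]), project(tri[1]), project(tri[2]));
    return 0;
}

Everything above the submit call stays on the CPU; everything below it is the only part 3Dfx proposed to build in silicon.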

Undoubtedly their bravest decision of all was to throw out any and all 2D-graphics capabilities — i.e., the neat rasters of pixels used to display Windows desktops and word processors and all of those earlier, less exciting games. Makers of Voodoo boards would have to include a cable to connect the existing, everyday graphics cards inside their customers’ machines to their new 3D ones. When you ran non-3D applications, the Voodoo card would simply pass the video signal on to the monitor unchanged. But when you fired up a 3D game, it would take over from the other board. A relay inside made a distinctly audible click when this happened. Far from considering it a bug, gamers would soon come to regard the noise as a feature. “Because you knew it was time to have fun,” as Ross Smith puts it.

It was a radical plan, to be sure. These new cards would be useful only for games, would have no other purpose whatsoever; there would be no justifying this hardware purchase to the parents or the spouse with talk of productivity or educational applications. Nevertheless, the cost savings seemed worth it. After all, almost everyone who initially went out to buy the new cards would already have a perfectly good 2D video card in their computer. Why make them pay extra to duplicate those functions?

The final design used just two custom chips. One of them, internally known as the T-Rex (Jurassic Park was still in the air), was dedicated exclusively to the texture mapping that had been so conspicuously missing from the Pellucid board. The other, called the FBI (“Frame Buffer Interface”), did everything else required in the rendering phase. Add to this pair a few less exciting off-the-shelf chips and four megabytes’ worth of RAM chips, put it all on a board with the appropriate connectors, and you had yourself a 3Dfx Voodoo GPU.

Needless to say, getting this far took some time. Tarolli, Sellers, and Smith spent the last half of 1994 camped out in Campbell’s office, deciding what they wanted to do and how they wanted to do it and securing the funding they needed to make it happen. Then they spent all of 1995 in offices of their own, hiring about a dozen people to help them, praying all the time that no other killer product would emerge to make all of their efforts moot. While they worked, the Sega Saturn and Sony PlayStation did indeed arrive on American shores, becoming the first gaming devices equipped with 3D GPUs to reach American homes in quantity. The 3Dfx crew were not overly impressed by either console — and yet they found the public’s warm reception of the PlayStation in particular oddly encouraging. “That showed, at a very rudimentary level, what could be done with 3D graphics with very crude texture mapping,” says Scott Sellers. “And it was pretty abysmal quality. But the consumers were just eating it up.”

They got their first finished chipsets back from their Taiwanese fabricator at the end of January 1996, then spent Super Bowl weekend soldering them into place and testing them. There were a few teething problems, but in the end everything came together as expected. They had their 3D chipset, at the beginning of a year destined to be dominated by the likes of Duke Nukem 3D and Quake. It seemed the perfect product for a time when gamers couldn’t get enough 3D mayhem. “If it had been a couple of years earlier,” says Gary Tarolli, “it would have been too early. If it had been a couple of years later, it would have been too late.” As it was, they were ready to go at the Goldilocks moment. Now they just had to sell their chipset to gamers — which meant they first had to sell it to game developers and board makers.






(Sources: the books The Dream Machine by M. Mitchell Waldrop, Dealers of Lightning: Xerox PARC and the Dawn of the Computer Age by Michael A. Hiltzik, and The New New Thing: A Silicon Valley Story by Michael Lewis; Byte of May 1992 and November 1993; InfoWorld of April 22 1991 and May 31 1993; Next Generation of October 1997; ACM’s Computer Graphics journal of July 1982; Wired of January 1994 and October 1994. Online sources include the Computer History Museum’s “oral histories” with Jim Clark, Forest Baskett, and the founders of 3Dfx; Wayne Carlson’s “Critical History of Computer Graphics and Animation”; “Fall of Voodoo” by Ernie Smith at Tedium; Fabian Sanglard’s reconstruction of the workings of the Voodoo 1 chips; “Famous Graphics Chips: 3Dfx’s Voodoo” by Dr. Jon Peddie at the IEEE Computer Society’s site; an internal technical description of the Voodoo technology archived at bitsavers.org.)

Footnotes

1 A resolution of 800 X 600 was technically possible using the Voodoo chipset, but using this resolution meant that the programmer could not use a vital affordance known as Z-buffering. For this reason, it was almost never seen in the wild.
 


Byron Preiss’s Games (or, The Promise and Peril of the Electronic Book)

Byron Preiss in 1982 with some of his “Fair People.”

We humans always seek to understand the new in terms of the old. This applies as much to new forms of media as it does to anything else.

Thus at the dawn of the 1980s, when the extant world of media began to cotton onto the existence of computer software that was more than strictly utilitarian but not action-oriented videogames like the ones being played in coin-op arcades and on home consoles such as the Atari VCS, it looked for a familiar taxonomic framework by which to understand it. One of the most popular of the early metaphors was that of the electronic book. For the graphics of the first personal computers were extremely crude, little more than thick lines and blotches of primary colors. Text, on the other hand, was text, whether it appeared on a monitor screen or on a page. Some of the most successful computer games of the first half of the 1980s were those of Infocom, who drove home the literary associations by building their products out of nothing but text, for which they were lauded in glowing features in respected mainstream magazines and newspapers. In the context of the times, it seemed perfectly natural to sell Infocom’s games and others like them in bookstores. (I first discovered these games that would become such an influence on my future on the shelves of my local shopping mall’s B. Dalton bookstore…)

Small wonder, then, that several of the major New York print-publishing houses decided to move into software. As is usually the case in such situations, they were driven by a mixture of hope and fear: hope that they could expand the parameters of what a book could do and be in exciting ways, and fear that, if they failed to do it, someone else would. The result was the brief-lived era of bookware.

Byron Preiss was perhaps the most important of all the individual book people who now displayed an interest in software. Although still very young by the standards of his tweedy industry — he turned 30 in 1983 — he was already a hugely influential figure in genre publishing, with a rare knack for mobilizing others to get lots and lots of truly innovative things done. In fact, long before he did anything with computers, he was already all about “interactivity,” the defining attribute of electronic books during the mid-1980s, as well as “multimedia,” the other buzzword that would be joined to the first in the early 1990s.

Preiss’s Fiction Illustrated line produced some of the world’s first identifiable graphic novels. These were comics that didn’t involve superheroes or cartoon characters, that were bound and sold as first-run paperbacks rather than flimsy periodicals. Preiss would remain a loyal supporter of comic-book storytelling in all its forms throughout his life.

Preiss rarely published a book that didn’t have pictures; in fact, he deserves a share of the credit for inventing what we’ve come to call the graphic novel, through a series known as Fiction Illustrated which he began all the way back in 1975 as a bright-eyed 22-year-old. His entire career was predicated on the belief that books should be beautiful aesthetic objects in their own right, works of visual as well as literary art that could and should take the reader’s breath away, that reading books should be an intensely immersive experience. He innovated relentlessly in pursuit of that goal. In 1981, for example, he published a collection of stories by Samuel R. Delany that featured “the first computer-enhanced illustrations developed for a science-fiction book.” His non-fiction books on astronomy and paleontology remain a feast for the eyes, as does his Science Fiction Masterworks series of illustrated novels and stories from the likes of Arthur C. Clarke, Fritz Leiber, Philip Jose Farmer, Frank Herbert, and Isaac Asimov.

As part and parcel of his dedication to immersive literature, Preiss also looked for ways to make books interactive, even without the benefit of computers. In 1982, he wrote and published The Secret: A Treasure Hunt, a puzzle book and real-world scavenger hunt in the spirit of Kit Williams’s Masquerade. As beautifully illustrated as one would expect any book with which Preiss was involved to be, it told of “The Fair People,” gnomes and fairies who fled from the Old to the New World when Europeans began to cut down their forests and dam the rivers along which they lived: “They came over and they stayed, and they were happy. But then they saw that man was following the same path [in the Americas] and that what had happened in the Old World would probably happen in the New. So the ones who had already come over and the ones who followed them all decided they would have to go into hiding.” They took twelve treasures with them. “I have been entrusted by the Fair People to reveal the whereabouts of the [treasures] through paintings in the book,” Preiss claimed. “There are twelve treasures hidden throughout North America and twelve color paintings that contain clues to the whereabouts of the treasure. Then, there is a poem for each treasure. So, if you can correctly figure out the poem and the painting, you will find one of the treasures.” Each treasure carried a bounty for the discoverer of $1000. Preiss’s self-professed ultimate goal was to use the interactivity of the scavenger hunt as another tool for immersing the reader, “like in the kids’ books where you choose your own ending.”

The Secret failed to become the sales success or the pop-culture craze that Masquerade had become in Britain three years earlier. Only one of the treasures was found in the immediate wake of its publication, in Chicago in 1983. Yet it had a long shelf life: a second treasure was found in Cleveland more than twenty years later. A 2018 documentary film about the book sparked a renewal of interest, and the following year a third treasure was recovered in Boston. A small but devoted cult continues to search for the remaining ones today, sharing information and theories via websites and podcasts.

In a less enduring but more commercially successful vein, Preiss also published three different lines of gamebooks to feed the hunger ignited by the original Choose Your Own Adventure books of Edward Packard and R.A. Montgomery. Unsurprisingly, his books were much more visual than the typical example of the breed, with illustrations that often doubled as puzzles for the reader to solve. A dedicated nurturer of young writing and illustrating talent, he passed the contracts to make books in these lines and others to up-and-comers who badly needed the cash and the measure of industry credibility they brought with them.

Being a man with a solid claim to the woefully overused title of “visionary,” Preiss was aware of what computers could mean for our relationship with storytelling and information from a very early date. He actually visited Xerox PARC during its 1970s heyday and marveled at the potential he saw there, told all of his friends that this was the real future of information spaces. Later he became the driving force behind the most concentrated and in many ways the most interesting of all the bookware software projects of the 1980s: the Telarium line of literary adaptations, which turned popular science-fiction, fantasy, and mystery novels into illustrated text adventures. I won’t belabor this subject here because I already wrote histories and reviews of all of the Telarium games years ago for this site. I will say, however, that the line as a whole bears all the hallmarks of a Byron Preiss project, from the decision to include colorful pictures in the games — something Infocom most definitely did not provide — to the absolutely gorgeous packaging, which arguably outdid Infocom’s own high standard for same. (The packaging managed to provide a sensory overload which transcended even the visual; one of my most indelible memories of gaming in my childhood is of the rich smell those games exuded, thanks to some irreplicable combination of cardboard, paper, ink, and paste. Call it my version of Proust’s madeleine.) The games found on the actual disks were a bit hit-or-miss, but nobody could say that Telarium didn’t put its best foot forward.

Unfortunately, it wasn’t enough; the Telarium games weren’t big sellers, and the line lasted only from 1984 to 1986. Afterward, Preiss went back to his many and varied endeavors in book publishing, while computer games switched their metaphor of choice from interactive novels to interactive movies in response to the arrival of new, more audiovisually capable gaming computers like the Commodore Amiga. Even now, though, Preiss continued to keep one eye on what was going on with computers. For example, he published novelizations of some of Infocom’s games, thus showing that he bore no ill will toward the company that had both inspired his own Telarium line and outlived it. More importantly in the long run, he took note of Apple’s HyperCard, with its new way of navigating non-linearly through association among multimedia texts which could include pictures, sound, music, and even movie clips alongside their words. By the turn of the 1990s, Bob Stein’s Voyager Software was starting to make waves with “electronic books” on CD-ROM that took full advantage of all of these affordances. The nature of electronic books had changed since the heyday of the text adventure, but the idea lived on in the abstract.

In fact, the advances in computer technology as the 1990s wore on were so transformative as to give everyone a bad case of mixed metaphors. The traditional computer-games industry, entranced by the new ability to embed video clips of real actors in their creations, was more fixated on interactive movies than ever. At the same time, though, the combination of hypertext with multimedia continued to give life to the notion of electronic books. Huge print publishers like Simon & Schuster and Random House, who had jumped onto the last bookware bandwagon only to bail out when the sales didn’t come, now made new investments in CD-ROM-based software that were an order of magnitude bigger than their last ones, even as huge names in moving pictures, from Disney to The Discovery Channel, were doing the same. The poster child for all of the taxonomical confusion was undoubtedly the pioneering Voyager, a spinoff from the Criterion Collection of classic movies on laserdisc and VHS whose many and varied releases all seemed to live on a liminal continuum between book and movie.

One has to assume that Byron Preiss felt at least a pang of jealousy when he saw the innovative work Voyager was doing. Exactly one decade after launching Telarium, he took a second stab at bookware, with the same high hopes as last time but on a much, much more lavish scale, one that was in keeping with the burgeoning 1990s tech boom. In the spring of 1994, Electronic Entertainment magazine brought the news that the freshly incorporated Byron Preiss Multimedia Company “is planning to flood the CD-ROM market with interactive titles this year.”

They weren’t kidding. Over the course of the next couple of years, Preiss published a torrent of CD-ROMs, enough to make Voyager’s prolific release schedule look downright conservative. There was stuff for the ages in high culture, such as volumes dedicated to Frank Lloyd Wright and Albert Einstein. There was stuff for the moment in pop culture, such as discs about Seinfeld, Beverly Hills 90210, and Melrose Place, not to forget The Sci-Fi Channel Trivia Game. There was stuff reflecting Preiss’s enduring love for comics (discs dedicated to R. Crumb and Jean Giraud) and animation (The Multimedia Cartoon Studio). There were electronic editions of classic novels, from John Steinbeck to Raymond Chandler to Kurt Vonnegut. There was educational software suitable for older children (The Planets, The Universe, The History of the United States), and interactive storybooks suitable for younger ones. There were even discs for toddlers, which line Preiss dubbed “BABY-ROMS.” A lot of these weren’t bad at all; Preiss’s CD-ROM library is almost as impressive as that of Voyager, another testament to the potential of a short-lived form of media that arguably deserved a longer day in the sun before it was undone by the maturation of networked hypertexts on the World Wide Web.

But then there are the games, a field Bob Stein was wise enough to recognize as outside of Voyager’s core competency and largely stay away from. Alas, Preiss was not, and did not.



The first full-fledged game from Byron Preiss Multimedia was an outgrowth of some of Preiss’s recent print endeavors. In the late 1980s, he had the idea of enlisting some of his stable of young writers to author new novels in the universes of aging icons of science fiction whose latest output had become a case of diminishing returns — names like Isaac Asimov, Ray Bradbury, and Arthur C. Clarke. Among other things, this broad concept led to a series of six books by five different authors that was called Robot City, playing with the tropes, characters, and settings of Asimov’s “Robot” stories and novels. In 1994, two years after Asimov’s death, Preiss also published a Robot City computer game. Allow me to quote the opening paragraph of Martin E. Cirulis’s review of same for Computer Gaming World magazine, since it does such a fine job of pinpointing the reasons that so many games of this sort tended to be so underwhelming.

With all the new interest in computer entertainment, it seems that a day doesn’t go by without another company throwing their hat, as well as wads of startup money, into the ring. More often than not, the first thing offered by these companies is an adventure-game title, because of the handy way the genre brings out all the bells and whistles of multimedia. I’m always a big fan of new blood, but a lot of the first offerings get points for enthusiasm, then lose ground and reinvent the wheel. Design and management teams new to the field seem so eager to show us how dumb our old games are that they fail to learn any lessons from the fifteen-odd years of successful and failed games that have gone before. Unfortunately, Robot City, Byron Preiss Multimedia’s initial game release, while impressive in some aspects, suffers from just these kinds of birthing pains.

If anything, Cirulis is being far too kind here. Robot City is a game where simply moving from place to place is infuriating, thanks to a staggeringly awful interface, city streets that are constantly changing into random new configurations, and the developers’ decision to put exterior scenes on one of its two CDs and interior scenes on the other, meaning you can look forward to swapping CDs roughly every five minutes.

Robot City. If you don’t like the look of this city street, rest assured that it will have changed completely next time you walk outside. Why? It’s not really clear… something to do with The Future.

Yet the next game from Byron Preiss Multimedia makes Robot City seem like a classic. I’d like to dwell on The Martian Chronicles just a bit today — not because it’s good, but because it’s so very, very bad, so bad in fact that I find it oddly fascinating.

Another reason for it to pique my interest is that it’s such an obvious continuation of what Preiss had begun with Telarium. One of Telarium’s very first games was an adaptation of the 1953 Ray Bradbury novel Fahrenheit 451. This later game, of course, adapts his breakthrough book The Martian Chronicles, a 1950 “fix-up novel” of loosely linked stories about the colonization — or, perhaps better said, invasion — of Mars by humans. And the two games are of a piece in many other ways once we make allowances for the technological changes in computing between 1984 and 1994.

For example, Bradbury himself gave at least a modicum of time and energy to both game projects, which was by no means always true of the authors Preiss chose to honor with an adaptation of some sort. In the Telarium game, you can call Bradbury up on a telephone and shoot the breeze; in the multimedia one, you can view interview clips of him. In the Telarium game, a special “REMEMBER” verb displays snippets of prose from the novel; in the multimedia one, a portentous narrator recites choice extracts from Bradbury’s Mars stories from time to time as you explore the Red Planet. Then, too, neither game is formally innovative in the least: the Telarium one is a parser-driven interactive fiction, the dominant style of adventure game during its time, while the multimedia game takes all of its cues from Myst, the hottest phenomenon in adventures at the time of its release. (The box even sported a hype sticker which named it the answer to the question of “Where do you go after Myst?”) About the only thing missing from The Martian Chronicles that its predecessor can boast about is Fahrenheit 451‘s gorgeous bespoke packaging. (That ship had largely sailed for computer games by 1994; as the scenes actually shown on the monitor got prettier, the packaging got more uniform and unambitious.)

By way of compensation, The Martian Chronicles emphasizes its bookware bona fides by bearing on its box the name of the book publisher Simon & Schuster, back for a second go-round after failing to make a worthwhile income stream out of publishing games in the 1980s. But sadly, once you get past all the meta-textual elements, what you are left with in The Martian Chronicles is a Myst clone notable only for its unusually extreme level of unoriginality and its utter ineptness of execution.

I must confess that I’ve enjoyed very few of the games spawned by Myst during my life, and that’s still the case today, after I’ve made a real effort to give several of them a fair shake for these histories. It strikes me that the sub-genre is, more than just about any other breed of game I know of, defined by its limitations rather than its allowances. The first-person node-based movement, with its plethora of pre-rendered 3D views, was both the defining attribute of the lineage during the 1990s and an unsatisfying compromise in itself: what you really want to be doing is navigating through a seamless 3D space, but technical limitations have made that impossible, so here you are, lurching around, discrete step by discrete step. In many of these games, movement is not just unsatisfying but actively confusing, because clicking the rotation arrows doesn’t always turn you 90 degrees as you expect it to. I often find just getting around a room in a Myst clone to be a challenge, what with the difficulty of constructing a coherent mental map of my surroundings using the inconsistent movement controls. There inevitably seems to be that one view that I miss — the one that contains something I really, really need. This is what people in the game-making trade sometimes call “fake difficulty”: problems the game throws up in front of you where no problem would exist if you were really in this environment. In other schools of software development, it’s known by the alternative name of terrible interface design.

Yet I have to suspect that the challenges of basic navigation are partially intentional, given that there’s so little else the designer can really do with these engines. Most were built in either HyperCard or the multimedia presentation manager Macromedia Director; the latter was the choice for The Martian Chronicles. These “middleware” tools were easy to work with but slow and limiting. Their focus was the media they put on the screen; their scripting languages were never intended to be used for the complex programming that is required to present a simulated world with any dynamism to it. Indeed, Myst clones are the opposite of dynamic, being deserted, static spaces marked only by the buttons, switches, and set-piece spatial puzzles which are the only forms of gameplay that can be practically implemented using their tool chains. While all types of games have constraints, I can’t think of any other strand of them that makes its constraints the veritable core of its identity. In addition to the hope of selling millions and millions of copies like Myst did, I can’t help but feel that their prevalence during the mid-1990s was to a large extent a reflection of how easy they were to make in terms of programming. In this sense, they were a natural choice for a company like the one Byron Preiss set up, which was more replete with artists and writers from the book trade than with ace programmers from the software trade.

The Martian Chronicles is marked not just by all of the usual Myst constraints but by a shocking degree of laziness that makes it play almost like a parody of the sub-genre. The plot is most kindly described as generic, casting you as the faceless explorer of the ruins of an ancient — and, needless to say, deserted — Martian city, searching for a legendary all-powerful McGuffin. You would never connect this game with Bradbury’s book at all if it weren’t for the readings from it that inexplicably pop up from time to time. What you get instead of the earnest adaptation advertised on the box is the most soul-crushingly dull Myst clone ever: a deserted static environment around which are scattered a dozen or so puzzles which you’ve seen a dozen or more times before. Everything is harder than it ought to be, thanks to a wonky cursor whose hot spot seems to float about its surface randomly, a cursor which disappears entirely whenever an animation loop is playing. This is the sort of game that, when you go to save, requires you to delete the placeholder name of “Save1” character by character before you can enter your own. This game is death by a thousand niggling little aggravations like that one, which taken in the aggregate tell you that no actual human being ever tried to play it before it was shoved into a box and shipped. Even the visuals, the one saving grace of some Myst clones and the defining element of Byron Preiss’s entire career, are weirdly slapdash, making The Martian Chronicles useless even as a tech demo. Telarium’s Fahrenheit 451 had its problems, but it’s Infocom’s Trinity compared to this thing.


It’s telling that many reviewers labelled the fifteen minutes of anodyne interview clips with Ray Bradbury the best part of the game.

Some Myst clones have the virtue of being lovely to look at. Not this one, with views that look like they were vandalized by a two-year-old Salvador Dali wannabe with only two colors of crayon to hand.



Computer Gaming World justifiably savaged The Martian Chronicles. It “is as devoid of affection and skill as any game I have ever seen,” noted Charles Ardai, by far the magazine’s deftest writer, in his one-star review. Two years after its release, Computer Gaming World named it the sixteenth worst game of all time, outdone only by such higher-profile crimes against their players as Sierra’s half-finished Outpost and Cosmi’s DefCon 5, an “authentic SDI simulation” whose level of accuracy was reflected in its name. (DefCon 5 is the lowest level of nuclear threat, not the highest.) As for The Martian Chronicles, the magazine called it “tired, pointless, and insulting to Bradbury’s poetic genius.” Most of the other magazines had little better to say — those, that is, which didn’t simply ignore it. For it was becoming abundantly clear that games like these really weren’t made for the hardcore set who read the gaming magazines. The problem was, it wasn’t clear who they were made for.

Still, Byron Preiss Multimedia continued to publish games betwixt and between their other CD-ROMs for another couple of years. The best of a pretty sorry bunch was probably the one called Private Eye, which built upon the noir novels of Raymond Chandler, one of Preiss’s favorite touchstones. Tellingly, it succeeded — to whatever extent it did — by mostly eschewing puzzles and other traditional forms of game design, being driven instead by conversations and lengthy non-interactive cartoon cut scenes; a later generation might have labeled it a visual novel. Charles Ardai rewarded it with a solidly mediocre review, acknowledging that “it don’t stink up da joint.” Faint praise perhaps, but beggars can’t be choosers.

The Spider-Man game, by contrast, attracted more well-earned vitriol from Ardai: “The graphics are jagged, the story weak, the puzzles laughable (cryptograms, anyone?), and the action sequences so dismal, so minor, so clumsy, so basic, so dull, so Atari 2600 as to defy comment.” Tired of what Ardai called Preiss’s “gold-into-straw act,” even Computer Gaming World stopped bothering with his games after this. That’s a pity in a way; I would have loved to see Ardai fillet Forbes Corporate Warrior, a simplistic DOOM clone that replaced monsters with rival corporations, to be defeated with weapons like Price Bombs, Marketing Missiles, Ad Blasters, Takeover Torpedoes, and Alliance Harpoons, with all of it somehow based on “fifteen years of empirical data from an internationally recognized business-simulation firm.” “Business is war, cash is ammo!” we were told. Again, one question springs to mind. Who on earth was this game for?

Corporate Warrior came out in 1997, near the end of the road for Byron Preiss Multimedia, which, like almost all similar multimedia startups, had succeeded only in losing buckets and buckets of money. Preiss finally cut his losses and devoted all of his attention to paper-based publishing again, a realm where his footing was much surer.

I hasten to add that, for all that he proved an abject failure at making games, his legacy in print publishing remains unimpeachable. You don’t have to talk to many who were involved with genre and children’s books in the 1980s and 1990s before you meet someone whose career was touched by him in a positive way. The expressions of grief were painfully genuine after he was killed in a car accident in 2005. He was called a “nice guy and honest person,” “an original,” “a business visionary,” “one of the good guys,” “a positive force in the industry,” “one of the most likable people in publishing,” “an honest, dear, and very smart man,” “warm and personable,” “charming, sophisticated, and the best dresser in the room.” “You knew one of his books would be something you couldn’t get anywhere else, and [that] it would be amazing,” said one of the relatively few readers who bothered to dig deep enough into the small print of the books he bought to recognize Preiss’s name on an inordinate number of them. Most readers, however, “never think about the guy who put it together. He’s invisible, although it wouldn’t happen without him.”

But regrettably, Preiss was a textbook dilettante when it came to digital games, more intrigued by the idea of games than willing to engage with the practical reality of what goes into a playable one. It must be said that he was far from alone in this. As I already noted, many veterans of other forms of media tried to set up similar multimedia-focused alternatives to conventional gaming, and failed just as abjectly. And yet, dodgy though these games almost invariably were in execution, there was something noble about them in concept: they really were trying to move the proverbial goalposts, trying to appeal to new demographics. What the multimedia mavens behind them failed to understand was that fresh themes and surface aesthetics do not great games make all by themselves; you have to devote attention to design as well. Their failure to do so doomed their games to becoming a footnote in history.

For in the end, games are neither books nor movies; they are their own things, which may occasionally borrow approaches from one or the other but should never delude themselves into believing that they can just stick the adjective “interactive” in front of their preferred inspiration and call it a day. Long before The Martian Chronicles stank up the joint, the very best game designers had come to understand that.


Postscript: On a more positive note…

Because I don’t like to be a complete sourpuss, let me note that the efforts of the multimedia dilettantes of the 1990s weren’t always misbegotten. I know of at least one production in this style that’s well worth your time: The Dark Eye, an exploration of the nightmare consciousness of Edgar Allan Poe that was developed by Inscape and released in 1995. On the surface, it’s alarmingly similar to The Martian Chronicles: a Myst-like presentation created in Macromedia Director, featuring occasional readings from the master’s works. But it hangs together much, much better, thanks to a sharp aesthetic sense and a willingness to eschew conventional puzzles completely in favor of atmosphere — all the atmosphere, I daresay, that you’ll be able to take, given the creepy subject matter. I encourage you to read my earlier review of it and perhaps to check it out for yourself. If nothing else, it can serve as proof that no approach to game-making is entirely irredeemable.

Another game that attempts to do much the same thing as The Martian Chronicles but does it much, much better is Rama, which was developed by Dynamix and released by Sierra in 1996. Here as well, the link to the first bookware era is catnip for your humble author; not only was Arthur C. Clarke adapted by a Telarium game before this one, but the novel chosen for that adaptation was Rendezvous with Rama, the same one that is being celebrated here. As in The Martian Chronicles, the lines between game and homage are blurred in Rama, what with the selection of interview clips in which Clarke himself talks about his storied career and one of the most lauded books it produced. And once again the actual game, when you get around to playing it, is very much in the spirit of Myst.

But Dynamix came from the old school of game development, and were in fact hugely respected in the industry for their programming chops; they wouldn’t have been caught dead using lazy middleware like Macromedia Director. Rama instead runs in a much more sophisticated engine, and was designed by people who had made games before and knew what led to playable ones. It’s built around bone-hard puzzles that often require a mathematical mind comfortable with solving complex equations and translating between different number bases. I must admit that I find it all a bit dry — but then, as I’ve said, games in this style are not usually to my taste; I’ve just about decided that the games in the “real” Myst series are all the Myst I need. Nevertheless, Rama is a vastly better answer to the question of “Where do you go after Myst?” than most of the alternatives. If you like this sort of thing, by all means, check it out. Call it another incarnation of Telarium 2.0, done right this time.

(Sources: Starlog of November 1981, December 1981, November 1982, January 1984, June 1984, April 1986, March 1987, November 1992, December 1992, January 1997, April 1997, February 1999, June 2003, May 2005, and October 2005; Compute!’s Gazette of December 1984; STart of November 1990; InCider of May 1993; Electronic Entertainment of June 1994, December 1994, January 1995, May 1995, and December 1995; MacUser of October 1995; Computer Games Strategy Plus of November 1995; Computer Gaming World of December 1995, January 1996, October 1996, November 1996, and February 1997; Next Generation of October 1996; Chicago Tribune of November 16 1982. Online sources include the announcement of Byron Preiss’s death and the outpouring of memories and sentiment that followed on COMICON.com.

A search on archive.org will reveal a version of The Martian Chronicles that has been modified to run on Windows 10. The Collection Chamber has a version of Rama that’s ready to install and run on Windows 10. Mac and Linux users can import the data files there into their computer’s version of ScummVM.)

 

A Web Around the World, Part 10: A Web of Associations

While wide-area computer networking, packet switching, and the Internet were coming of age, all of the individual computers on the wire were becoming exponentially faster, exponentially more capacious internally, and exponentially smaller externally. The pace of their evolution was unprecedented in the history of technology; had automobiles been improved at a similar rate, the Ford Model T would have gone supersonic within ten years of its introduction. We should take a moment now to find out why and how such a torrid pace was maintained.

As Claude Shannon and others realized before World War II, a digital computer in the abstract is an elaborate exercise in boolean logic, a dynamic matrix of on-off switches — or, if you like, of ones and zeroes. The more of these switches a computer has, the more it can be and do. The first Turing-complete digital computers, such as ENIAC and Whirlwind, implemented their logical switches using vacuum tubes, a venerable technology inherited from telephony. Each vacuum tube was about as big as an incandescent light bulb, consumed a similar amount of power, and tended to burn out almost as frequently. These factors made the computers which employed vacuum tubes massive edifices that required as much power as the typical city block, even as they struggled to maintain an uptime of more than 50 percent — and all for the tiniest sliver of one percent of the overall throughput of the smartphones we carry in our pockets today. Computers of this generation were so huge, expensive, and maintenance-heavy in relation to what they could actually be used to accomplish that they were largely limited to government-funded research institutions and military applications.

Computing’s first dramatic leap forward in terms of its basic technological underpinnings also came courtesy of telephony. More specifically, it came in the form of the transistor, a technology which had been invented at Bell Labs in December of 1947 with the aim of improving telephone switching circuits. A transistor could function as a logical switch just as a vacuum tube could, but it was a minute fraction of the size, consumed vastly less power, and was infinitely more reliable. The computers which IBM built for the SAGE project during the 1950s straddled this technological divide, employing a mixture of vacuum tubes and transistors. But by 1960, the computer industry had fully and permanently embraced the transistor. While still huge and unwieldy by modern standards, computers of this era were practical and cost-effective for a much broader range of applications than their predecessors had been; corporate computing started in earnest in the transistor era.

Nevertheless, wiring together tens of thousands of discrete transistors remained a daunting task for manufacturers, and the most high-powered computers still tended to fill large rooms if not entire building floors. Thankfully, a better way was in the offing. Already in 1958, a Texas Instruments engineer named Jack Kilby had come up with the idea of the integrated circuit: a collection of miniaturized transistors and other electrical components embedded in a silicon wafer, the whole being suitable for stamping out quickly in great quantities by automated machinery. Kilby invented, in other words, the soon-to-be ubiquitous computer chip, which could be wired together with its mates to produce computers that were not only smaller but easier and cheaper to manufacture than those that had come before. By the mid-1960s, the industry was already in the midst of the transition from discrete transistors to integrated circuits, producing some machines that were no larger than a refrigerator; among these was the Honeywell 516, the computer which was turned into the world’s first network router.

As chip-fabrication systems improved, designers were able to miniaturize the circuitry on the wafers more and more, allowing ever more computing horsepower to be packed into a given amount of physical space. An engineer named Gordon Moore proposed the principle that has become known as Moore’s Law: he calculated that the number of transistors which can be stamped into a chip of a given size doubles every second year.[1] In July of 1968, Moore and a colleague named Robert Noyce formed the chip maker known as Intel to make the most of Moore’s Law. The company has remained on the cutting edge of chip fabrication to this day.
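Treated as a back-of-the-envelope formula (a rough gloss rather than anything Moore himself wrote down), the law says that a chip which holds N_0 transistors today should hold roughly

N(t) \approx N_0 \cdot 2^{t/2}

transistors t years from now: a factor of 32 after a decade, and better than a thousandfold after twenty years.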

The next step was perhaps inevitable, but it nevertheless occurred almost by accident. In 1971, an Intel engineer named Federico Faggin put all of the circuits making up a computer’s arithmetic, logic, and control units — the central “brain” of a computer — onto a single chip. And so the microprocessor was born. No one involved with the project at the time anticipated that the Intel 4004 central-processing unit would open the door to a new generation of general-purpose “microcomputers” that were small enough to sit on desktops and cheap enough to be purchased by ordinary households. Faggin and his colleagues rather saw the 4004 as a fairly modest, incremental advancement of the state of the art, which would be deployed strictly to assist bigger computers by serving as the brains of disk controllers and other single-purpose peripherals. Before we rush to judge them too harshly for their lack of vision, we should remember that they are far from the only inventors in history who have failed to grasp the real importance of their creations.

At any rate, it was left to independent tinkerers who had been dreaming of owning a computer of their own for years, and who now saw in the microprocessor the opportunity to do just that, to invent the personal computer as we know it. The January 1975 issue of Popular Electronics sports one of the most famous magazine covers in the history of American technology: it announces the $439 Altair 8800, from a tiny Albuquerque, New Mexico-based company known as MITS. The Altair was nothing less than a complete put-it-together-yourself microcomputer kit, built around the Intel 8080 microprocessor, a successor model to the 4004.

The magazine cover that launched a technological revolution.

The next milestone came in 1977, when three separate companies announced three separate pre-assembled, plug-em-in-and-go personal computers: the Apple II, the Radio Shack TRS-80, and the Commodore PET. In terms of raw computing power, these machines were a joke compared to the latest institutional hardware. Nonetheless, they were real, Turing-complete computers that many people could afford to buy and proceed to tinker with to their heart’s content right in their own homes. They truly were personal computers: their buyers didn’t have to share them with anyone. It is difficult to fully express today just how extraordinary an idea this was in 1977.

This very website’s early years were dedicated to exploring some of the many things such people got up to with their new dream machines, so I won’t belabor the subject here. Suffice to say that those first personal computers were, although of limited practical utility, endlessly fascinating engines of creativity and discovery for those willing and able to engage with them on their own terms. People wrote programs on them, drew pictures and composed music, and of course played games, just as their counterparts on the bigger machines had been doing for quite some time. And then, too, some of them went online.

The first microcomputer modems hit the market the same year as the trinity of 1977. They operated on the same principles as the modems developed for the SAGE project a quarter-century before — albeit even more slowly. Hobbyists could thus begin experimenting with connecting their otherwise discrete microcomputers together, at least for the duration of a phone call.

But some entrepreneurs had grander ambitions. In July of 1979, not one but two subscription-based online services, known as CompuServe and The Source, were announced almost simultaneously. Soon anyone with a computer, a modem, and the requisite disposable income could dial them up to socialize with others, entertain themselves, and access a growing range of useful information.

Again, I’ve written about this subject in some detail before, so I won’t do so at length here. I do want to point out, however, that many of J.C.R. Licklider’s fondest predictions for the computer networks of the future first became a reality on the dozen or so of these commercial online services that managed to attract significant numbers of subscribers over the years. It was here, even more so than on the early Internet proper, that his prognostications about communities based on mutual interest rather than geographical proximity proved their prescience. Online chatting, online dating, online gaming, online travel reservations, and online shopping first took hold here, first became a fact of life for people sitting in their living rooms. People who seldom or never met one another face to face or even heard one another’s voices formed relationships that felt as real and as present in their day-to-day lives as any others — a new phenomenon in the history of social interaction. At their peak circa 1995, the commercial online services had more than 6.5 million subscribers in all.

Yet these services failed to live up to the entirety of Licklider’s old dream of an Intergalactic Computer Network. They were communities, yes, but not quite networks in the sense of the Internet. Each of them lived on a single big mainframe, or at most a cluster of them, in a single data center, which you dialed into using your microcomputer. Once online, you could interact in real time with the hundreds or thousands of others who might have dialed in at the same time, but you couldn’t go outside the walled garden of the service to which you’d chosen to subscribe. That is to say, if you’d chosen to sign up with CompuServe, you couldn’t talk to someone who had chosen The Source. And whereas the Internet was anarchic by design, the commercial online services were steered by the iron hands of the companies who had set them up. Although individual subscribers could and often did contribute content and in some ways set the tone of the services they used, they did so always at the sufferance of their corporate overlords.

Through much of the fifteen years or so that the commercial services reigned supreme, many or most microcomputer owners failed to even realize that an alternative called the Internet existed. Which is not to say that the Internet was without its own form of social life. Its more casual side centered on an online institution known as Usenet, which had arrived on the scene in late 1979, almost simultaneously with the first commercial services.

At bottom, Usenet was (and is) a set of protocols for sharing public messages, just as email served that purpose for private ones. What set it apart from the bustling public forums on services like CompuServe was its determinedly non-centralized nature. Usenet as a whole was a network of many servers, each storing a local copy of its many “newsgroups,” or forums for discussions on particular topics. Users could read and post messages using any of the servers, either by sitting in front of the server’s own keyboard and monitor or, more commonly, through some form of remote connection. When a user posted a new message to a server, it sent it on to several other servers, which were then expected to send it further, until the message had propagated through the whole network of Usenet servers. The system’s asynchronous nature could distort conversations; messages reached different servers at different times, which meant you could all too easily find yourself replying to a post that had already been retracted, or making a point someone else had already made before you. But on the other hand, Usenet was almost impossible to break completely — just like the Internet itself.

Strictly speaking, Usenet did not depend on the Internet for its existence. As far as it was concerned, its servers could pass messages among themselves in whatever way they found most convenient. In its first few years, this sometimes meant that they dialed one another up directly over ordinary phone lines and talked via modem. As it matured into a mainstay of hacker culture, however, Usenet gradually became almost inseparable from the Internet itself in the minds of most of its users.

From the three servers that marked its inauguration in 1979, Usenet expanded to 11,000 by 1988. The discussions that took place there didn’t quite encompass the whole of the human experience equally; the demographics of the hacker user base meant that computer programming tended to get more play than knitting, Pink Floyd more play than Madonna, and science-fiction novels more play than romances. Still, the newsgroups were nothing if not energetic and free-wheeling. For better or for worse, they regularly went places the commercial online services didn’t dare allow. For example, Usenet became one of the original bastions of online pornography, first in the form of fevered textual fantasies, then in the somehow even more quaint form of “ASCII art,” and finally, once enough computers had the graphics capabilities to make it worthwhile, as actual digitized photographs. In light of this, some folks expressed relief that it was downright difficult to get access to Usenet and the rest of the Internet if one didn’t teach or attend classes at a university, or work at a tech company or government agency.

The perception of the Internet as a lawless jungle, more exciting but also more dangerous than the neatly trimmed gardens of the commercial online services, was cemented by the Morris Worm, which was featured on the front page of the New York Times for four straight days in November of 1988. Created by a 23-year-old Cornell University graduate student named Robert Tappan Morris, it served as many people’s ironic first notice that a network called the Internet existed at all. The exploit, which its creator later insisted had been meant only as a harmless prank, spread by breaking in through flaws in some of the core networking applications used by Unix, a powerful and flexible operating system that was by far the most popular among Internet-connected computers at the time. The Morris Worm came as close as anything ever has to bringing the entire Internet down when its exponential rate of growth effectively turned it into a network-wide denial-of-service attack — again, accidentally, if its creator is to be believed. (Morris himself came very close to a prison sentence, but escaped with three years of probation, a $10,000 fine, and 400 hours of community service, after which he went on to a lucrative career in the tech sector at the height of the dot-com boom.)

Attitudes toward the Internet in the less rarefied wings of the computing press had barely begun to change even by the beginning of the 1990s. An article from the issue of InfoWorld dated February 4, 1991, encapsulates the contemporary perceptions among everyday personal-computer owners of this “vast collection of networks” which is “a mystery even to people who call it home.”

It is a highway of ideas, a collective brain for the nation’s scientists, and perhaps the world’s most important computer bulletin board. Connecting all the great research institutions, a large network known collectively as the Internet is where scientists, researchers, and thousands of ordinary computer users get their daily fix of news and gossip.

But it is the same network whose traffic is occasionally dominated by X-rated graphics files, UFO sighting reports, and other “recreational” topics. It is the network where renegade “worm” programs and hackers occasionally make the news.

As with all communities, this electronic village has both high- and low-brow neighborhoods, and residents of one sometimes live in the other.

What most people call the Internet is really a jumble of networks rooted in academic and research institutions. Together these networks connect over 40 countries, providing electronic mail, file transfer, remote login, software archives, and news to users on 2000 networks.

Think of a place where serious science comes from, whether it’s MIT, the national laboratories, a university, or [a] private enterprise, [and] chances are you’ll find an Internet address. Add [together] all the major sites, and you have the seeds of what detractors sometimes call “Anarchy Net.”

Many people find the Internet to be shrouded in a cloud of mystery, perhaps even intrigue.

With addresses composed of what look like contractions surrounded by ‘!’s, ‘@’s, and ‘.’s, even Internet electronic mail seems to be from another world. Never mind that these “bangs,” “at signs,” and “dots” create an addressing system valid worldwide; simply getting an Internet address can be difficult if you don’t know whom to ask. Unlike CompuServe or one of the other email services, there isn’t a single point of contact. There are as many ways to get “on” the Internet as there are nodes.

At the same time, this complexity serves to keep “outsiders” off the network, effectively limiting access to the world’s technological elite.

The author of this article would doubtless have been shocked to learn that within just four or five years this confusing, seemingly willfully off-putting network of scientists and computer nerds would become the hottest buzzword in media, and that absolutely everybody, from your grandmother to your kids’ grade-school teacher, would be rushing to get onto this Internet thing before they were left behind, even as stalwart rocks of the online ecosystem of 1991 like CompuServe would already be well on their way to becoming relics of a bygone age.

The Internet had begun in the United States, and the locus of the early mainstream excitement over it would soon return there. In between, though, the stroke of inventive genius that would lead to said excitement would happen in the Old World confines of Switzerland.


Tim Berners-Lee

In many respects, he looks like an Englishman from central casting — quiet, courteous, reserved. Ask him about his family life and you hit a polite but exceedingly blank wall. Ask him about the Web, however, and he is suddenly transformed into an Italian — words tumble out nineteen to the dozen and he gesticulates like mad. There’s a deep, deep passion here. And why not? It is, after all, his baby.

— John Naughton, writing about Tim Berners-Lee

The seeds of the Conseil Européen pour la Recherche Nucléaire — better known in the Anglosphere as simply CERN — were planted amidst the devastation of post-World War II Europe by the great French quantum physicist Louis de Broglie. Possessing an almost religious faith in pure science as a force for good in the world, he proposed a new, pan-European foundation dedicated to exploring the subatomic realm. “At a time when the talk is of uniting the peoples of Europe,” he said, “[my] attention has turned to the question of developing this new international unit, a laboratory or institution where it would be possible to carry out scientific work above and beyond the framework of the various nations taking part. What each European nation is unable to do alone, a united Europe can do, and, I have no doubt, would do brilliantly.” After years of dedicated lobbying on de Broglie’s part, CERN officially came to be in 1954, with its base of operations in Geneva, Switzerland, one of the places where Europeans have traditionally come together for all manner of purposes.

The general technological trend at CERN over the following decades was the polar opposite of what was happening in computing: as scientists attempted to peer deeper and deeper into the subatomic realm, the machines they required kept getting bigger and bigger. Between 1983 and 1989, CERN built the Large Electron-Positron Collider in Geneva. With a circumference of almost seventeen miles, it was the largest single machine ever built in the history of the world. Managing projects of such magnitude, some of them employing hundreds of scientists and thousands of support staff, required a substantial computing infrastructure, along with many programmers and systems architects to run it. Among this group was a quiet Briton named Tim Berners-Lee.

Berners-Lee’s credentials were perfect for his role. He had earned a bachelor’s degree in physics from Oxford in 1976, only to find that pure science didn’t satisfy his urge to create practical things that real people could make use of. As it happened, both of his parents were computer scientists of considerable note; they had both worked on the University of Manchester’s Mark I computer, one of the world’s first stored-program von Neumann machines. So, it was natural for their son to follow in their footsteps, to make a career for himself in the burgeoning new field of microcomputing. Said career took him to CERN for a six-month contract in 1980, then back to Geneva on a more permanent basis in 1984. Because of his background in physics, Berners-Lee could understand the needs of the scientists he served better than many of his colleagues; his talent for devising workable solutions to their problems turned him into something of a star at CERN. Among other projects, he labored long and hard to devise a way of making the thousands upon thousands of pages of documentation that were generated at CERN each year accessible, manageable, and navigable.

But, for all that Berners-Lee was being paid to create an internal documentation system for CERN, it’s clear that he began thinking along bigger lines fairly quickly. The same problems of navigation and discoverability that dogged his colleagues at CERN were massively present on the Internet as a whole. Information was hidden there in out-of-the-way repositories that could only be accessed using command-line-driven software with obscure command sets — if, that is, you knew that it existed at all.

His idea of a better way came courtesy of hypertext theory: a non-linear approach to reading texts and navigating an information space, built around associative links embedded within and between texts. First proposed by Vannevar Bush, the World War II-era MIT giant whom we briefly met in an earlier article in this series, hypertext theory had later proved a superb fit with a mouse-driven graphical computer interface which had been pioneered at Xerox PARC during the 1970s under the astute management of our old friend Robert Taylor. The PARC approach to user interfaces reached the consumer market in a prominent way for the first time in 1984 as the defining feature of the Apple Macintosh. And the Mac in turn went on to become the early hotbed of hypertext experimentation on consumer-grade personal computers, thanks to Apple’s own HyperCard authoring system and the HyperCard-driven laser discs and CD-ROMs that soon emerged from companies like Voyager.

The user interfaces found in HyperCard applications were surprisingly similar to those found in the web browsers of today, but they were limited to the curated, static content found on a single floppy disk or CD-ROM. “They’ve already done the difficult bit!” Berners-Lee remembers thinking. Now someone just needed to put hypertext on the Internet, to allow files on one computer to link to files on another, with anyone and everyone able to create such links. He saw how “a single hypertext link could lead to an enormous, unbounded world.” Yet no one else seemed to see this. So, he decided at last to do it himself. In a fit of self-deprecating mock-grandiosity, not at all dissimilar to J.C.R. Licklider’s call for an “Intergalactic Computer Network,” he named his proposed system the “World Wide Web.” He had no idea how perfect the name would prove.

He sat down to create his World Wide Web in October of 1990, using a NeXT workstation computer, the flagship product of the company Steve Jobs had formed after getting booted out of Apple several years earlier. It was an expensive machine — far too expensive for the ordinary consumer market — but supremely elegant, combining the power of the hacker-favorite operating system Unix with the graphical user interface of the Macintosh.

The NeXT computer on which Tim Berners-Lee created the foundations of the World Wide Web. It then went on to become the world’s first web server.

Progress was swift. In less than three months, Berners-Lee coded the world’s first web server and browser, which also entailed developing the Hypertext Transfer Protocol (HTTP) they used to communicate with one another and the Hypertext Markup Language (HTML) for embedding associative links into documents. These were the foundational technologies of the Web, which still remain essential to the networked digital world we know today.
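Berners-Lee’s original markup has long since been superseded by formal standards, but the core idea is simple enough to sketch. The fragment below is written in roughly modern HTML syntax rather than his 1990 dialect, and the address in it is purely a placeholder: a page is ordinary text sprinkled with tags, and the anchor tag is the associative link that, when followed, causes the browser to issue a fresh HTTP request for whatever document the address names.

<html>
  <head>
    <title>A minimal hypertext page</title>
  </head>
  <body>
    <p>
      The full project documentation lives on
      <!-- example.org is a reserved placeholder address, not a real document -->
      <a href="http://example.org/docs/overview.html">another machine entirely</a>;
      following that link makes the browser fetch the named page over HTTP.
    </p>
  </body>
</html>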

The first page to go up on the nascent World Wide Web, which belied its name at this point by being available only inside CERN, was a list of phone numbers of the people who worked there. Clicking through its hypertext links being much easier than entering commands into the database application CERN had previously used for the purpose, it served to get Berners-Lee’s browser installed on dozens of NeXT computers. But the really big step came in August of 1991, when, having debugged and refined his system as thoroughly as he was able by using his CERN colleagues as guinea pigs, he posted to Usenet his web browser, his web server, and documentation on how to use HTML to create web documents. The response was not immediately overwhelming, but it was gratifying in a modest way. Berners-Lee:

People who saw the Web and realised the sense of unbound opportunity began installing the server and posting information. Then they added links to related sites that they found were complementary or simply interesting. The Web began to be picked up by people around the world. The messages from system managers began to stream in: “Hey, I thought you’d be interested. I just put up a Web server.”

Tim Berners-Lee’s original web browser, which he named Nexus in honor of its host platform. The NeXT computer actually had quite impressive graphics capabilities, but you’d never know it by looking at Nexus.

In December of 1991, Berners-Lee begged for and was reluctantly granted a chance to demonstrate the World Wide Web at that year’s official Hypertext conference in San Antonio, Texas. He arrived with high hopes, only to be accorded a cool reception. The hypertext movement came complete with more than its fair share of stodgy theorists with rigid ideas about how hypertext ought to work — ideas which tended to have more to do with the closed, curated experiences of HyperCard than the anarchic open Internet. Normally modest almost to a fault, the Berners-Lee of today does allow himself to savor the fact that “at the same conference two years later, every project on display would have something to do with the Web.”

But the biggest factor holding the Web back at this point wasn’t the resistance of the academics; it was rather its being bound so tightly to the NeXT machines, which had a total user base of no more than a few tens of thousands, almost all of them at universities and research institutions like CERN. Although some browsers had been created for other, more popular computers, they didn’t sport the effortless point-and-click interface of Berners-Lee’s original; instead they presented their links like footnotes, whose numbers the user had to type in to visit them. Thus Berners-Lee and the fellow travelers who were starting to coalesce around him made it their priority in 1992 to encourage the development of more point-and-click web browsers. One for the X Window System, the graphical-interface layer which had been developed for the previously text-only Unix, appeared in April. Even more importantly, a Macintosh browser arrived just a month later; this marked the first time that the World Wide Web could be explored in the way Berners-Lee had envisioned on a computer that the proverbial ordinary person might own and use.

Amidst the organization directories and technical papers which made up most of the early Web — many of the latter inevitably dealing with the vagaries of HTTP and HTML themselves — Berners-Lee remembers one site that stood out for being something else entirely, for being a harbinger of the more expansive, humanist vision he had had for his World Wide Web almost from the start. It was a site about Rome during the Renaissance, built up from a traveling museum exhibition which had recently visited the American Library of Congress. Berners-Lee:

On my first visit, I wandered to a music room. There was an explanation of the events that caused the composer Carpentras to present a decorated manuscript of his Lamentations of Jeremiah to Pope Clement VII. I clicked, and was glad I had a 21-inch colour screen: suddenly it was filled with a beautifully illustrated score, which I could gaze at more easily and in more detail than I could have done had I gone to the original exhibit at the Library of Congress.

If we could visit this site today, however, we would doubtless be struck by how weirdly textual it was for being a celebration of the Renaissance, one of the most excitingly visual ages in all of history. The reality is that it could hardly have been otherwise; the pages displayed by Berners-Lee’s NeXT browser and all of the others could not mix text with images at all. The best they could do was to present links to images, which, when clicked, would lead to a picture being downloaded and displayed in a separate window, as Berners-Lee describes above.

But already another man on the other side of the ocean was working on changing that — working, one might say, on the last pieces necessary to make a World Wide Web that we can immediately recognize today.


Marc Andreessen barefoot on the cover of Time magazine, creating the archetype of the dot-com entrepreneur/visionary/rock star.

Tim Berners-Lee was the last of the old guard of Internet pioneers. Steeped in an ethic of non-profit research for the abstract good of the human race, he never attempted to commercialize his work. Indeed, he has seemed in the decades since his masterstroke almost to willfully shirk the money and fame that some might say are rightfully his for putting the finishing touch on the greatest revolution in communications since the printing press, one which has bound the world together in a way that Samuel Morse and Alexander Graham Bell could never have dreamed of.

Marc Andreessen, by contrast, was the first of a new breed of business entrepreneurs who have dominated our discussions of the Internet from the mid-1990s until the present day. Yes, one can trace the cult of the tech-sector disruptor, “making the world a better place” and “moving fast and breaking things,” back to the dapper young Steve Jobs who introduced the Apple Macintosh to the world in January of 1984. But it was Andreessen and the flood of similar young men that followed him during the 1990s who well and truly embedded the archetype in our culture.

Before any of that, though, he was just a kid who decided to make a web browser of his own.

Andreessen first discovered the Web not long after Berners-Lee first made his tools and protocols publicly available. At the time, he was a twenty-year-old student at the University of Illinois at Urbana-Champaign who held a job on the side at the National Center for Supercomputing Applications, a research institute with close ties to the university. The name sounded very impressive, but he found the job itself to be dull as ditch water. His dissatisfaction came down to the same old split between the “giant brain” model of computing of folks like Marvin Minsky and the more humanist vision espoused in earlier years by people like J.C.R. Licklider. The NCSA was in pursuit of the former, but Andreessen was a firm adherent of the latter.

Bored out of his mind writing menial code for esoteric projects he couldn’t care less about, Andreessen spent a lot of time looking for more interesting things to do on the Internet. And so he stumbled across the fledgling World Wide Web. It didn’t look like much — just a screen full of text — but he immediately grasped its potential.

In fact, he judged, the Web’s not looking like much was a big part of its problem. Casting about for a way to snazz it up, he had the stroke of inspiration that would make him a multi-millionaire within three years. He decided to add a new tag to Berners-Lee’s HTML specification: “<img>,” for “image.” By using it, one would be able to show pictures inline with text. It could make the Web an entirely different sort of place, a wonderland of colorful visuals to go along with its textual content.
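The shift is easiest to see in the markup itself. The two fragments below are only an illustration, written in roughly modern HTML syntax with an invented file name rather than copied from Mosaic: in the older browsers a picture could only be pointed at by an ordinary link, which opened it separately, while Andreessen’s new tag drew the same picture inline, right in the flow of the page.

<!-- The old way: the image is merely linked, and is displayed elsewhere. -->
<p>A photograph of the lander is <a href="lander.gif">available here</a>.</p>

<!-- Andreessen's way: the image itself appears inline with the text. -->
<p>Here is the lander itself: <img src="lander.gif"></p>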

As conceptual leaps go, this one really wasn’t that audacious. The biggest buzzword in consumer computing in recent years — bigger than hypertext — had been “multimedia,” a catch-all term describing exactly this sort of digital mixing of content types, something which was now becoming possible thanks to the ever-improving audiovisual capabilities of personal computers since those primitive early days of the trinity of 1977. Hypertext and multimedia had actually been sharing many of the same digs for quite some time. The HyperCard authoring system, for example, boasted capabilities much like those which Andreessen now wished to add to HTML, and the Voyager CD-ROMs already existed as compelling case studies in the potential of interactive multimedia hypertext in a non-networked context.

Still, someone had to be the first to put two and two together, and that someone was Marc Andreessen. An only moderately accomplished programmer himself, he convinced a much better one, another NCSA employee named Eric Bina, to help him create his new browser. The pair fell into roles vaguely reminiscent of those of Steve Jobs and Steve Wozniak during the early days of Apple Computer: Andreessen set the agenda and came up with the big ideas — many of them derived from tireless trawling of the Usenet newsgroups to find out what people didn’t like about the current browsers — and Bina turned his ideas into reality. Andreessen’s relentless focus on the end-user experience led to other important innovations beyond inline images, such as the “forward,” “back,” and “refresh” buttons that remain so ubiquitous in the browsers of today. The higher-ups at NCSA eventually agreed to allow Andreessen to brand his browser as a quasi-official product of their institute; on an Internet still dominated by academics, such an imprimatur was sure to be a useful aid. In January of 1993, the browser known as Mosaic — the name seemed an apt metaphor for the colorful multimedia pages it could display — went up on NCSA’s own servers. After that, “it spread like a virus,” in the words of Andreessen himself.

The slick new browser and its almost aggressively ambitious young inventor soon came to the attention of Tim Berners-Lee. He calls Andreessen “a total contrast to any of the other [browser] developers. Marc was not so much interested in just making the program work as in having his browser used by as many people as possible.” But, lest he sound uncharitable toward his populist counterpart, he hastens to add that “that was, of course, what the Web needed.” Berners-Lee made the Web; the garrulous Andreessen brought it to the masses in a way the self-effacing Briton could arguably never have managed on his own.

About six months after Mosaic hit the Internet, Tim Berners-Lee came to visit its inventor. Their meeting brought with it the first palpable signs of the tension that would surround the World Wide Web and the Internet as a whole almost from that point forward. It was the tension between non-profit idealism and the urge to commercialize, to brand, and finally to control. Even before the meeting, Berners-Lee had begun to feel disturbed by the press coverage Mosaic was receiving, helped along by the public-relations arm of NCSA itself: “The focus was on Mosaic, as if it were the Web. There was little mention of other browsers, or even the rest of the world’s effort to create servers. The media, which didn’t take the time to investigate deeper, started to portray Mosaic as if it were equivalent to the Web.” Now, at the meeting, he was taken aback by an atmosphere that smacked more of a business negotiation than a friendly intellectual exchange, even as he wasn’t sure what exactly was being negotiated. “Marc gave the impression that he thought of this meeting as a poker game,” Berners-Lee remembers.

Andreessen’s recollections of the meeting are less nuanced. Berners-Lee, he claims, “bawled me out for adding images to the thing.” Andreessen:

Academics in computer science are so often out to solve these obscure research problems. The universities may force it upon them, but they aren’t always motivated to just do something that people want to use. And that’s definitely the sense that we always had of CERN. And I don’t want to mis-characterize them, but whenever we dealt with them, they were much more interested in the Web from a research point of view rather than a practical point of view. And so it was no big deal to them to do a NeXT browser, even though nobody would ever use it. The concept of adding an image just for the sake of adding an image didn’t make sense [to them], whereas to us, it made sense because, let’s face it, they made pages look cool.

The first version of Mosaic ran only on X-Windows, but, as the above would indicate, Andreessen had never intended for that to be the case for long. He recruited more programmers to write ports for the Macintosh and, most importantly of all, for Microsoft Windows, whose market share of consumer computing in the United States was crossing the threshold of 90 percent. When the Windows version of Mosaic went online in September of 1993, it motivated hundreds of thousands of computer owners to engage with the Internet for the first time; the Internet to them effectively was Mosaic, just as Berners-Lee had feared would come to pass.

The Mosaic browser. It may not look like much today, but its ability to display inline images was a game-changer.

At this time, Microsoft Windows didn’t even include a TCP/IP stack, the software layer that could make a machine into a full-fledged denizen of the Internet, with its own IP address and all the trimmings. In the brief span of time before Microsoft remedied that situation, a doughty Australian entrepreneur named Peter Tattam provided an add-on TCP/IP stack, which he distributed as shareware. Meanwhile other entrepreneurs scrambled to set up Internet service providers to provide the unwashed masses with an on-ramp to the Web — no university enrollment required! —  and the shelves of computer stores filled up with all-in-one Internet kits that were designed to make the whole process as painless as possible.

The unabashed elitists who had been on the Internet for years scorned the newcomers, but there was nothing they could do to stop the invasion, which stormed their ivory towers with overwhelming force. Between December of 1993 and December of 1994, the total amount of Web traffic jumped by a factor of eight. By the latter date, there were more than 10,000 separate sites on the Web, thanks to people all over the world who had rolled up their sleeves and learned HTML so that they could get their own idiosyncratic messages out to anyone who cared to read them. If some (most?) of the sites they created were thoroughly frivolous, well, that was part of the charm of the thing. The World Wide Web was the greatest leveler in the history of media; it enabled anyone to become an author and a publisher rolled into one, no matter how rich or poor, talented or talent-less. The traditional gatekeepers of mass media have been trying to figure out how to respond ever since.

Marc Andreessen himself abandoned the browser that did so much to make all this happen before it celebrated its first birthday. He graduated from university in December of 1993, and, annoyed by the growing tendency of his bosses at NCSA to take credit for his creation, he decamped for — where else? — Silicon Valley. There he bumped into Jim Clark, a huge name in the Valley, who had founded Silicon Graphics twelve years earlier and turned it into the biggest name in digital special effects for the film industry. Feeling hamstrung by Silicon Graphics’s increasing bureaucracy as it settled into corporate middle age, Clark had recently left the company, leading to much speculation about what he would do next. The answer came on April 4, 1994, when he and Marc Andreessen founded Mosaic Communications in order to build a browser even better than the one the latter had built at NCSA. The dot-com boom had begun.

(Sources: the books A Brief History of the Future: The Origins of the Internet by John Naughton, From Gutenberg to the Internet: A Sourcebook on the History of Information Technology edited by Jeremy M. Norman, A History of Modern Computing (2nd ed.) by Paul E. Ceruzzi, Communication Networks: A Concise Introduction by Jean Walrand and Shyam Parekh, Weaving the Web by Tim Berners-Lee, How the Web was Born by James Gillies and Robert Cailliau, and Architects of the Web by Robert H. Reid. InfoWorld of August 24 1987, September 7 1987, April 25 1988, November 28 1988, January 9 1989, October 23 1989, and February 4 1991; Computer Gaming World of May 1993.)

Footnotes
1 When he first stated his law in 1965, Moore actually proposed a doubling every single year, but revised his calculations in 1975.

A Web Around the World, Part 3: …Try, Try Again

A major financial panic struck the United States in August of 1857, just as the Niagara was making the first attempt to lay the Atlantic cable. Cyrus Field had to mortgage his existing businesses heavily just to keep them going. But he was buoyed by one thing: as the aftershocks of the panic spread to Europe, packet steamers took to making St. John’s, Newfoundland, their first port of call in the Americas for the express purpose of passing the financial news they carried to the island’s telegraph operators so that it could reach Wall Street as quickly as possible. It had taken the widespread threat of financial ruin, but Frederick Gisborne’s predictions about the usefulness of a Newfoundland telegraph were finally coming true. Now just imagine if the line could be extended all the way across the Atlantic…

While he waited for the return of good weather to the Atlantic, Field sought remedies for everything that had gone wrong with the first attempt to lay a telegraph cable across an ocean. The Niagara’s chief engineer, a man named William Everett, had examined Charles Bright’s paying-out mechanism with interest during the last expedition, and come up with a number of suggestions for improving it. Field requested and was granted Everett’s temporary release from the United States Navy, and brought him to London to redesign the machine. The result was actually simpler in most ways, being just one-fourth of the weight and one-third of the size of Bright’s design. But it incorporated a critical new feature: the brake now set and released itself automatically in response to the level of tension on the cable. “It seemed to have the intelligence of a human being, to know when to hold on and when to let go,” writes Henry Field. In reality, it was even better than a human being, in that it never got tired and never let its mind wander; no longer would a moment’s inattention on the part of a fallible human operator be able to wreck the whole project.

Charles Bright accepted the superseding of his original design with good grace; he was an engineer to the core, the new paying-out machine was clearly superior to the old one, and so there wasn’t much to discuss in his view. There was ongoing discord, however, between two more of Cyrus Field’s little band of advisors.

Wildman Whitehouse and William Thomson had been competing for Field’s ear for quite some time now. At first the former had won out, largely because he told Field what he most wished to hear: that a transatlantic telegraph could be made to work with an unusually long but otherwise fairly plebeian cable, using bog-standard sending and receiving mechanisms. But Field was a thoughtful man, and of late he’d begun losing faith in the surgeon and amateur electrical experimenter. He was particularly bothered by Whitehouse’s blasé attitude toward the issue of signal retardation.

Meanwhile Thomson was continuing to whisper contrary advice in his ear. He said that he still thought it would be best to use a thicker cable like the one he had originally proposed, but, when informed that there just wasn’t money in the budget for such a thing, he said that he thought he could get even Whitehouse’s design to work more efficiently. His scheme exploited the fact that even a heavily retarded signal probably wouldn’t become completely uniform: the current at the far end of the wire would still be full of subtle rises and falls where the formerly discrete dots and dashes of Morse Code had been. Thomson had been working on a new, ultrasensitive galvanometer, which ingeniously employed a lamp, a magnet, and a tiny mirror to detect the slightest variation in current amplitude. Two operators would work together to translate a signal on the receiving end of the cable: one, trained to interpret the telltale patterns of reflected light bobbing up and down in front of him, would translate them into Morse Code and call it out to his partner. Over the strident objections of Whitehouse, Field agreed to install the system, and also agreed to give Thomson access to the enormous spools of existing cable that were now warehoused in Plymouth, England, waiting for the return of spring. Thomson meticulously tested the cable one stretch at a time, and convinced Field to let him cut out those sections where its conductivity was worst.
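For readers who want to see the principle at work, here is a toy sketch in Python. It is my own crude model with made-up numbers, not anything taken from Thomson’s actual analysis; the idea is simply that if the cable smears the signal like a sluggish low-pass filter, the received current still drifts upward while the key is down and downward while it is up, so reading the direction of that drift recovers the keying.

import numpy as np

# A toy model: treat the cable as a first-order low-pass filter that smears crisp
# on/off keying into a slow drift, then recover the keying from the direction of
# that drift, i.e. the faint rises and falls a sensitive detector could still register.
dt, tau = 0.01, 2.0                                 # time step and "retardation" constant, arbitrary units
keying = np.repeat([1, 0, 1, 1, 0, 1, 0], 300)      # a made-up on/off pattern standing in for dots and dashes

received = np.zeros(len(keying))
for i in range(1, len(keying)):
    # the received current creeps toward the transmitted level instead of jumping to it
    received[i] = received[i - 1] + (keying[i] - received[i - 1]) * dt / tau

decoded = (np.diff(received) > 0).astype(int)       # rising current = key down, falling current = key up
transitions = np.flatnonzero(np.diff(decoded)) + 1
print("recovered key transitions near samples:", transitions)

In the real system, of course, the detector was not a line of arithmetic but the flicker of a spot of lamplight thrown by Thomson’s mirror galvanometer, read by eye.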

The United States and Royal Navies agreed to lend the Atlantic Telegraph Company the same two vessels as last time for a second attempt at laying the cable. To save time, however, it was decided that the ships would work simultaneously: they would sail to the middle of the Atlantic, splice their cables together there, then each head toward a separate continent. So, in April of 1858, the Niagara and the Agamemnon arrived in Plymouth to begin the six-week process of loading the cable. They sailed together from there on June 10. Samuel Morse elected not to travel with the expedition this time, but Charles Bright, William Thomson, Cyrus Field and his two brothers, and many of the other principals were aboard one or the other ship.

They had been told that “June was the best month for crossing the Atlantic,” as Henry Field writes. They should be “almost sure of fair weather.” On the contrary, on June 13 the little fleet sailed into the teeth of one of the worst Atlantic storms of the nineteenth century. The landlubbers aboard had never imagined that such a natural fury as this could exist. For three days, the ships were lashed relentlessly by the wind and waves. With 1250 tons of cable each on their decks and in their holds, both the Niagara and the Agamemnon rode low in the water and were a handful to steer under the best of circumstances; now they were in acute danger of foundering, capsizing, or simply breaking to pieces under the battering.

The Agamemnon was especially hard-pressed: bracing beams snapped below decks, and the hull sprang leaks in multiple locations. “The ship was almost as wet inside as out,” wrote a horrified Times of London reporter who had joined the expedition. The crew’s greatest fear was that one of the spools of cable in the hold would break loose and punch right through the hull; they fought a never-ending battle to secure the spools against each successive onslaught. While they were thus distracted, the ship’s gigantic coal hampers gave way instead, sending tons of the filthy stuff skittering everywhere, injuring many of the crew. That the Agamemnon survived the storm at all was thanks to masterful seamanship on the part of its captain, who remained awake on the bridge for 72 hours straight, plotting how best to ride out each wave.

An artist’s rendering of the Agamemnon in the grip of the storm, as published in the Illustrated London News.

Separated from one another by the storm, the two ships met up again on June 25 smack dab in the middle of an Atlantic Ocean that was once again so tranquil as to “seem almost unnatural,” as Henry Field puts it. The men aboard the Niagara were shocked at the state of the Agamemnon; it was so badly battered and so covered in coal dust that it looked more like a garbage scow than a proud Royal Navy ship of the line. But no matter: it was time to begin the task they had come here to carry out.

So, the cables were duly spliced on June 26, and the process of laying them began — with far less ceremony than last time, given that there were no government dignitaries on the scene. The two ships steamed away from one another, the Niagara westward toward Newfoundland, the Agamemnon eastward toward Ireland, with telegraph operators aboard each ship constantly testing the tether that bound them together as they went. They had covered a combined distance of just 40 miles when the line suddenly went dead. Following the agreed-upon protocol in case of such an eventuality, both crews cut their end of the cable, letting it drop uselessly into the ocean, then turned around and steamed back to the rendezvous point; neither crew had any idea what had happened. Still, the break had at least occurred early enough that there ought still to be enough cable remaining to span the Atlantic. There was nothing for it but to splice the cables once more and try again.

This time, the distance between the ships steadily increased without further incident: 100 miles, 200 miles, 300 miles. “Why not lay 2000 [miles]!” thought Henry Field with a shiver of excitement. Then, just after the Agamemnon had made a routine splice from one spool to the next, the cable snapped in the ship’s wake. Later inspection would reveal that that section of it had been damaged in the storm. Nature’s fury had won the day after all. Again following protocol for a break this far into the cable-laying process, the two ships sailed separately back to Britain.

It was a thoroughly dejected group of men who met soon after in the offices of the Atlantic Telegraph Company. Whereas last year’s attempt to lay the cable had given reason for guarded optimism in the eyes of some of them, this latest attempt seemed an unadulterated fiasco. The inexplicable loss of signal the first time this expedition had tried to lay the cable was in its way much more disconcerting than the second, explicable disaster of a physically broken cable, as our steadfast Times of London reporter noted: “It proves that, after all that human skill and science can effect to lay the wire down with safety has been accomplished, there may be some fatal obstacle to success at the bottom of the ocean, which can never be guarded against, for even the nature of the peril must always remain as secret and unknown as the depths in which it is encountered.” The task seemed too audacious, the threats to the enterprise too unfathomable. Henry Field:

The Board was called together. It met in the same room where, six weeks before, it had discussed the prospects of the expedition with full confidence of success. Now it met as a council of war is summoned after a terrible defeat. When the Directors came together, the feeling — to call it by the mildest name — was one of extreme discouragement. They looked blankly in each other’s faces. With some, the feeling was almost one of despair. Sir William Brown of Liverpool, the first Chairman, wrote advising them to sell the cable. Mr. Brooking, the Vice-Chairman, who had given more time than any other Director, sent in his resignation, determined to take no further part in an undertaking which had proved hopeless, and to persist in which seemed mere rashness and folly.

Most of the members of the board assumed they were meeting only to deal with the practical matter of winding up the Atlantic Telegraph Company. But Cyrus Field had other ideas. When everyone was settled, he stood up to deliver the speech of his life. He told the room that he had talked to the United States and Royal Navies, and they had agreed to extend the loan of the Niagara and the Agamemnon for a few more weeks, enough to make one more attempt to lay the cable. And he had talked to his technical advisors as well, and they had agreed that there ought to be just enough cable left to span the Atlantic if everything went off without a hitch. Even if the odds against success were a hundred to one, why not try one more time? Why not go down swinging? After all, the money they stood to recoup by selling a second-hand telegraph cable wasn’t that much compared to what had already been spent.

It is a tribute to his passion and eloquence that his speech persuaded this roomful of very gloomy, very pragmatic businessmen. They voted to authorize one more attempt to create an electric bridge across the Atlantic.

The Niagara and the poor, long-suffering Agamemnon were barely given time to load coal and provisions before they sailed again, on July 17, 1858. This time the weather was propitious: blue skies and gentle breezes the whole way to the starting point. On July 29, after conducting tests to ensure that the entirety of the remaining cable was still in working order, they began the laying of it once more. Plenty of close calls ensued in the days that followed: a passing whale nearly entangled itself in the cable, then a passing merchant ship nearly did the same; more sections of cable turned up with storm-damaged insulation aboard the Agamemnon and had to be cut away, to the point that it was touch and go whether Ireland or the end of the last spool would come first. And yet the telegraph operators aboard each of the ships remained in contact with one another day after day as they crept further and further apart.

At 1:45 AM on August 6, the Niagara dropped anchor in Newfoundland at a point some distance west of St. John’s, in Trinity Bay, where a telegraph house had already been built to receive the cable. One hour later, the telegraph operator aboard the ship received a message from the Agamemnon that it too had made landfall, in Ireland. Cyrus Field’s one-chance-in-a-hundred gamble had apparently paid off.

Shouting like a lunatic, Field burst upon the crew manning the telegraph house, who had been blissfully asleep in their bunks. At 6:00 AM, the men spliced the cable that had been carried over from the Niagara with the one that went to St. John’s and beyond. Meanwhile, on the other side of the ocean, the crew of the Agamemnon was doing the same with a cable that stretched from the backwoods of southern Ireland to the heart of London. “The communication between the Old and the New World [has] been completed,” wrote the Times of London reporter.


The (apparently) successful laying of the cable in 1858 sparked an almost religious fervor, as shown in this commemorative painting by William Simpson, in which the Niagara is given something very like a halo as it arrives in Trinity Bay.

The news of the completed Atlantic cable was greeted with elation everywhere it traveled. Joseph Henry wrote in a public letter to Cyrus Field that the transatlantic telegraph would “mark an epoch in the advancement of our common humanity.” Scientific American wrote that “our whole country has been electrified by the successful laying of the Atlantic telegraph,” and Harper’s Monthly commissioned a portrait of Field for its cover. Countless cities and towns on both sides of the ocean held impromptu jubilees to celebrate the achievement. Ringing church bells, booming cannon, and 21-gun rifle salutes were the order of the day everywhere. Men who had or claimed to have sailed aboard the Niagara or the Agamemnon sold bits and pieces of leftover cable at exorbitant prices. Queen Victoria knighted the 26-year-old Charles Bright, and said she only wished Cyrus Field was a British citizen so she could do the same for him. On August 16, she sent a telegraph message to the American President James Buchanan and was answered in kind; this herald of a new era of instantaneous international diplomacy brought on yet another burst of public enthusiasm.

Indeed, the prospect of a worldwide telegraph network — for, with the Atlantic bridged, could the Pacific and all of the other oceans be far behind? — struck many idealistic souls as the facilitator of a new era of global understanding, cooperation, and peace. Once we allow for the changes that took place in rhetorical styles over a span of 140 years, we find that the most fulsome predictions of 1858 have much in common with those that would later be made with regard to the Internet and its digital World Wide Web. “The whole earth will be belted with electric current, palpitating with human thoughts and emotions,” read the hastily commissioned pamphlet The Story of the Telegraph.[1] “It is impossible that old prejudices and hostilities should longer exist, while such an instrument has been created for the exchange of thoughts between all the nations of the earth.” Indulging in a bit of peculiarly British wishful thinking, the Times of London decided that “the Atlantic telegraph has half undone the Declaration of 1776, and has gone far to make us once again, in spite of ourselves, one people.” Others found prose woefully inadequate for the occasion, and could give proper vent to their feelings only in verse.

‘Tis done! The angry sea consents,
The nations stand no more apart,
With clasped hands the continents
Feel throbbings of each other’s heart.

Speed, speed the cable; let it run
A loving girdle round the earth,
Till all the nations ‘neath the sun
Shall be as brothers of one hearth;

As brothers pledging, hand in hand,
One freedom for the world abroad,
One commerce every land,
One language and one God.

But one fact was getting lost — or rather was being actively concealed — amidst all the hoopla: the Atlantic cable was working after a fashion, but it wasn’t working very well. Even William Thomson’s new galvanometer struggled to make sense of a signal that grew weaker and more diffuse by the day. To compensate, the operators were forced to transmit more and more slowly, until the speed of communication became positively glacial. Queen Victoria’s 99-word message to President Buchanan, for example, took sixteen and a half hours to send — a throughput of all of one word every ten minutes. The entirety of another day’s traffic consisted of:

Repeat please.

Please send slower for the present.

How?

How do you receive?

Send slower.

Please send slower.

How do you receive?

Please say if you can read this.

Can you read this?

Yes.

How are signals?

Do you receive?

Please send something.

Please send Vs and Bs.

How are signals?

Cyrus Field managed to keep these inconvenient facts secret for some time while his associates scrambled fruitlessly for a solution. When Thomson could offer him no miracle cure, he turned back to Wildman Whitehouse. Insisting that there was no problem with his cable design which couldn’t be solved by more power, Whitehouse hooked it up to giant induction coils to try to force the issue. Shortly after he did so, on September 1, the cable failed completely. Thomson and others were certain that Whitehouse had burned right through the cable’s insulation with his high-voltage current, but of course it is impossible to know for sure. Still, that didn’t stop Field from making an irrevocable break with Whitehouse; he summarily fired him from the company. In response, Whitehouse went on a rampage in the British press, denouncing the “frantic fooleries of the Americans in the person of Cyrus W. Field”; he would soon publish a book giving his side of the story, filled with technical conclusions which history has demonstrated to be wrong.

On October 20, with all further recourse exhausted, Field bit the bullet and announced to the world that his magic thread was well, truly, and hopelessly severed. The press at both ends of the cable turned on a dime. The Atlantic Telegraph Company and its principal face were now savaged with the same enthusiasm with which they had so recently been praised. Many suspected loudly that it had all been an elaborate fraud. “How many shares of stock did Mr. Field sell in August?” one newspaper asked. (The answer: exactly one share.) The Atlantic Telegraph Company remained nominally in existence after the fiasco of 1858, but it would make no serious plans to lay another cable for half a decade.

Cyrus Field himself was, depending on whom you asked, either a foolish dreamer or a cynical grifter. His financial situation too was not what it once had been. His paper business had suffered badly in the panic of 1857; then came a devastating warehouse fire in 1860, and he sold it shortly thereafter at a loss. In April of 1861, the American Civil War, the product of decades of slowly building tension between the country’s industrial North and the agrarian, slave-holding South, finally began in earnest. Suddenly the paeans to universal harmony which had marked a few halcyon weeks in August of 1858 seemed laughable, and the moneyed men of Wall Street turned their focus to engines of war instead of peace.

Yet the British government at least was still wondering in its stolid, sluggish way how a project to which it had contributed considerable public resources, which had in fact nearly gotten one of Her Majesty’s foremost ships of the line sunk, had wound up being so useless. The same month that the American Civil War began, it formed a commission of inquiry to examine both this specific failure and the future prospects for undersea telegraphy in general. The commission numbered among its members none other than Charles Wheatstone, who along with William Cooke had set up the first commercial telegraph line in the world. It read its brief very broadly, and ranged far afield to address many issues of importance to a slowly electrifying world. Most notably, it defined the standardized units of electrical measurement that we still use today: the watt, the volt, the ohm, and the ampere.

But much of its time was taken up by a war of words between Wildman Whitehouse and William Thomson, each of whom presented his case at length and in person. While Whitehouse laid the failure of the first transatlantic telegraph at the feet of a wide range of factors that had nothing to do with his cable but much to do with the gross incompetence of the Atlantic Telegraph Company in laying and operating it, Thomson argued that the choice of the wrong type of cable had been the central, precipitating mistake from which all of the other problems had cascaded. In the end, the commission found Thomson’s arguments more convincing; it agreed that “the heavier the cable, the greater its durability.” Its final conclusions, delivered in July of 1863, were simultaneously damning toward many of the specific choices of the Atlantic Telegraph Company and optimistic that a transatlantic telegraph should be possible, given much better planning and preparation. The previous failures were, it said, “due to causes which might have been guarded against had adequate preliminary investigation been made.” Nevertheless, “we are convinced that this class of enterprise may prove as successful as it has hitherto been disastrous.”

Meanwhile, even in the midst of the bloodiest conflict in American history, all Cyrus Field seemed to care about was his once and future transatlantic telegraph. Graduating from the status of dreamer or grifter, he now verged on becoming a laughingstock in some quarters. In New York City, for example, “he addressed the Chamber of Commerce, the Board of Brokers, and the Corn Exchange,” writes Henry Field, “and then he went almost literally door to door, calling on merchants and bankers to enlist their aid. Even of those who subscribed, a large part did so more from sympathy and admiration of his indomitable spirit than from confidence in the success of the enterprise.” One of his marks labeled him with grudging admiration “the most obstinately determined man in either hemisphere.” Yet in the course of some five years of such door-knocking, he managed to raise pledges amounting to barely one-third of the purchase price of the first Atlantic cable — never mind the cost of actually laying it. This was unsurprising, in that a huge unanswered question lay at the heart of any renewal of the enterprise: the first cable, much thinner than the one which almost everyone except Wildman Whitehouse now agreed was necessary, had already dangerously overburdened two of the largest ships in the world, very nearly with tragic results for one of them. And yet, in contrast to the 2500 tons of Whitehouse’s cable, Thomson’s latest design was projected to weigh 4000 tons. How on earth was it to be laid?

But Cyrus Field’s years in the wilderness were not to last forever. In January of 1864, in the course of yet another visit to London, he secured a meeting with Thomas Brassey, one of the most famous of the new breed of financiers who were making fortunes from railroads all over the world. Field wrote in a letter immediately after the meeting that “he put me through such a cross-examination as I had never before experienced. I thought I was in the witness box.” (He doesn’t state in his letter whether he noticed the ironic contrast with the way this whole adventure had begun exactly one decade earlier, when it had been Frederick Gisborne who had come with hat in hand to his own stateroom for an equally skeptical cross-examination.)

It seems that Field passed the test. Brassey agreed to put some of his money and, even more importantly, his sterling reputation as one of the world’s foremost men of business behind the project. And just like that, things started to happen again. “The wheels were unloosed,” writes Henry Field, “and the gigantic machinery began to revolve.” The money poured in; the transatlantic telegraph was on again. Cyrus Field placed an order for a thick, well-insulated cable matching Thomson’s specifications. The only problem remaining was the same old one of how to actually get it aboard a ship. But, miraculously, Thomas Brassey believed he had a solution for that problem too.

During the previous decade, Isambard Kingdom Brunel, arguably the greatest steam engineer of the nineteenth century, had designed and overseen the construction of what he intended as his masterpiece: an ocean liner called the Great Eastern, which displaced a staggering 19,000 tons, could carry 4000 passengers, and could sail from Britain to Australia without ever stopping for coal. It was 693 feet long and 120 feet wide, with ten steam engines producing up to 10,000 horsepower and delivering it through both paddle wheels and a screw propeller. And, most relevantly for Brassey and Field, it could carry up to 7000 tons of cargo in its hold.

T.G. Dutton’s celebratory 1859 rendering of the Great Eastern.

Alas, its career to date read like a Greek tragedy about the sin of hubris. The Great Eastern almost literally killed its creator; undone by the stresses involved in getting his “Great Babe” built, Brunel died at the age of only 53 shortly after it was completed in 1859. During its sea trials, the ship suffered a boiler explosion that killed five men. And once it entered service, those who had paid to build it discovered that it was just too big: there just wasn’t enough demand to fill its holds and staterooms, even as it cost a fortune to operate. “Her very size was against her,” writes Henry Field, “and while smaller ships, on which she looked down with contempt, were continually flying to and fro across the sea, this leviathan could find nothing worthy of her greatness.” The Great Eastern developed the reputation of an ill-starred, hard-luck ship. Over the course of its career, it was involved in ten separate ship-to-ship collisions. In 1862, it ran aground outside New York Harbor; it was repaired and towed back to open waters only at enormous effort and expense, further burnishing its credentials as an unwieldy white elephant. Eighteen months later, the Great Eastern was retired from service and put up for sale. A financier named Daniel Gooch bought the ship for just £25,000, less than its value as scrap metal. And indeed, scrapping it for profit was quite probably foremost on his mind at the time.

But then Thomas Brassey came calling on his friend, asking what it would cost to acquire the ship for the purpose of laying the transatlantic cable. Gooch agreed to loan the Great Eastern to him in return for £50,000 in Atlantic Telegraph Company stock. And so Cyrus Field’s project acquired the one ship in the world that was actually capable of carrying Thomson’s cable. One James Anderson, a veteran captain with the Cunard Line, was hired to command it.

Observing the checkered record of the Atlantic Telegraph Company in laying working telegraph cables to date, Brassey and his fellow investors insisted that the latest attempt be subcontracted out to the recently formed Telegraph Construction and Maintenance Company, the entity which also provided the cable itself. During the second half of 1864, the latter company extensively modified the Great Eastern for the task before it. Intended as it was for a life lived underwater, the cable was to be stored aboard the ship immersed in water tanks in order to prevent its vital insulation from drying out and cracking.

Then, from January to July of 1865, the Great Eastern lay at a dock in Sheerness, England, bringing about 20 miles of cable per day onboard. The pendulum had now swung again with the press and public: the gargantuan ship became a place of pilgrimage for journalists, politicians, royalty, titans of industry, and ordinary folks, all come to see the progress of this indelible sign of Progress in the abstract. Cyrus Field was so caught up in the excitement of an eleven-year-old dream on the cusp of fulfillment that he hardly noticed when the final battle of the American Civil War ended with Southern surrender on April 9, 1865, nor the shocking assassination of the victorious President Abraham Lincoln just a few days later.

On July 15, the Great Eastern put to sea at last, laden with the 4000 tons of cable plus hundreds more tons of dead weight in the form of the tanks of water that were used to store it. Also aboard was a crew of 500 men, but only a small contingent of observers from the Atlantic Telegraph Company, among them the Field brothers and William Thomson. Due to its deep draft, the Great Eastern had to be very cautious when sailing near land; witness its 1862 grounding in New York Harbor. Therefore a smaller steamer, the Caroline, was enlisted to bring the cable ashore on the treacherous southern coast of Ireland and to lay the first 23 miles of it from there. On the evening of July 23, the splice was made and the Great Eastern took over responsibility for the rest of the journey.

So, the largest ship in the world made its way westward at an average speed of a little over six knots. Cyrus Field, who was prone to seasickness, noted with relief how different an experience it was to sail on a behemoth like this one even in choppy seas. He and everyone else aboard were filled with optimism, and with good reason on the whole; this was a much better planned, better thought-through expedition than those of the Niagara and the Agamemnon. Each stretch of cable was carefully tested before it fell off the stern of the ship, and a number of stretches were discarded for failing to meet Thomson’s stringent standards. Then, too, William Everett’s paying-out mechanism had been improved such that it could now reel cable back in again if necessary; this capability did indeed get used twice, when stretches of cable turned out not to be as water-resistant as they ought to have been despite all of Thomson’s efforts.

The days went by, filled with minor snafus to be sure, but nothing that hadn’t been anticipated. The stolid and stable Great Eastern, writes Henry Field, “seemed as if made by Heaven to accomplish this great work of civilization.” And the cable itself continued to work even better than Thomson had said it would; the link with Ireland remained rock-solid, with a throughput to which Whitehouse’s cable could never have aspired.

At noon on August 2, the Great Eastern was well ahead of schedule, already almost two-thirds of the way to Newfoundland, when a fault was detected in the stretch of cable just laid. This was annoying, but nothing more than that; it had, after all, happened twice before and been dealt with by pulling the bad stretch out of the water and discarding it. But in the course of hauling it back in this time, an unfortunate burst of wind and current spelled disaster: the cable was pulled taut by the movement of the ship and snapped.

Captain Anderson had one gambit left — one more testament to the Telegraph Construction and Maintenance Company’s determination to plan for every eventuality. He ordered the huge grappling hook with which the Great Eastern had been equipped to be deployed over the side. It struck the naïve observers from the Atlantic Telegraph Company as an absurd proposition; the ocean here was two and a half miles deep — so deep that it took the hook two hours just to touch bottom. The ship steamed back and forth across its former course all night long, dragging the hook patiently along the ocean floor. Early in the morning, it caught on something. The crew saw with excitement that, as the grappling machinery pulled the hook gently up, its apparent weight increased. This was consistent with a cable, but not with anything else that anyone could conceive. But in the end, the increasing weight of it proved too much. When the hook was three quarters of a mile above the ocean floor, the rope snapped. Two more attempts with fresh grappling hooks ended the same way, until there wasn’t enough rope left aboard to touch bottom.

It had been a noble attempt, and had come tantalizingly close to succeeding, but there was nothing left to do now but mark the location with a buoy and sail back to Britain. “We thought you went down!” yelled the first journalist to approach the Great Eastern when it reached home. It seemed that, in the wake of the abrupt loss of communication with the ship, a rumor had spread that it had struck an iceberg and sunk.



Although the latest attempt to lay a transatlantic cable had proved another failure, one no longer had to be a dyed-in-the-wool optimist like Cyrus Field to believe that the prospects for a future success were very, very good. The cable had outperformed expectations by delivering a clear, completely usable signal from first to last. The final sticking point had not even been the cable’s own tensile strength but rather that of the ropes aboard the Great Eastern. Henry Field:

This confidence appeared at the first meeting of directors. The feeling was very different from that after the return of the first expedition of 1858. So animated were they with hope, and so sure of success the next time, that all felt that one cable was not enough, they must have two, and so it was decided to take measures not only to raise the broken end of the cable and to complete it to Newfoundland, but also to construct and lay an entirely new one, so as to have a double line in operation the following summer.

Nothing was to be left to chance next time around. William Thomson worked with the Telegraph Construction and Maintenance Company to make the next cable even better, incorporating everything that had been learned on the last expedition plus all the latest improvements in materials technology. The result was even more durable, whilst weighing about 10 percent less. The paying-out mechanism was refined further, with special attention paid to the task of pulling the cable in again without breaking it. And the Great Eastern too got a refit that made it even more suited to its new role in life. Its paddle wheels were decoupled from one another so each could be controlled separately; by spinning one forward and one backward, the massive ship could be made to turn in its own length, an improvement in maneuverability which would make grappling for a lost cable much easier. Likewise, twenty miles of much stronger grappling rope was taken onboard. Meanwhile the Atlantic Telegraph Company was reorganized and reincorporated as the appropriately trans-national Anglo-American Telegraph Company, with an initial capitalization of £600,000.

This time the smaller steamer William Corry laid the part of the cable closest to the Irish shore. On Friday, July 13, 1866, the splice was made and the Great Eastern took over. The weather was gray and sullen more often than not over the following days, but nothing seemed able to dampen the spirit of optimism and good cheer aboard; many a terrible joke was made about “shuffling off this mortal coil.” As they sailed along, the crew got a preview of the interconnected world they were so earnestly endeavoring to create: the long tether spooling out behind the ship brought them up-to-the-minute news of the latest stock prices on the London exchange and debates in Parliament, as well as dispatches from the battlefields of the Third Italian War of Independence, all as crystal clear as the weather around them was murky.

The Great Eastern maintained a slightly slower pace this time, averaging about five knots, because some felt that the difficulties of the previous expedition had stemmed in part from rushing things a bit too much. Whether due to the slower speed or all of the other improvements in equipment and procedure, the process did indeed go even more smoothly; the ship never failed to cover at least 100 miles — usually considerably more — every day. The Great Eastern sailed unperturbed beyond the point where it had lost the cable last time. By July 26, after almost a fortnight of steady progress, the excitement had reached a fever pitch, as the seasoned sailors aboard began to sight birds and declared that they could smell the approaching land.

The following evening, they reached their destination. “The Great Eastern,” writes Henry Field, “gliding in as if she had done nothing remarkable, dropped her anchor in front of the telegraph house, having trailed behind her a chain of 2000 miles, to bind the Old World to the New.” A different telegraph house had been built in Trinity Bay to receive this cable, in a tiny fishing village with the delightful name of Heart’s Content. The entire village rowed out to greet the largest ship by almost an order of magnitude ever to enter their bay, all dressed in their Sunday best.

The Great Eastern in Trinity Bay, 1866. This photograph does much to convey the sheer size of the ship. The three vessels lying alongside it are all oceangoing ships in their own right.

But there was one more fly in the ointment. When he came ashore, Cyrus Field learned that the underwater telegraph line he had laid between Newfoundland and Cape Breton ten years before had just given up the ghost. So, there was a little bit more work to be done. He chartered a coastal steamer to take onboard eleven miles of Thomson’s magic cable from the Great Eastern and use it to repair the vital span; such operations in relatively shallow water like this had by now become routine, a far cry from the New York, Newfoundland, and London Telegraph Company’s wild adventure of 1855. While he waited for that job to be completed, Field hired another steamer to bring news of his achievement to the mainland along with a slew of piping-hot headlines from Europe to serve as proof of it. It was less dramatic than an announcement via telegraph, but it would have to do.

Thus word of the completion of the first truly functional transatlantic telegraph cable, an event which took place on July 27, 1866, didn’t reach the United States until July 29. It was the last delay of its kind. Two separate networks had become one, two continents sewn together using an electric thread; the full potential of the telegraph had been fulfilled. The first worldwide web, the direct ancestor and prerequisite of the one we know today, was a reality.

(Sources: the books The Victorian Internet by Tom Standage, Power Struggles: Scientific Authority and the Creation of Practical Electricity Before Edison by Michael B. Schiffer, Lightning Man: The Accursed Life of Samuel F.B. Morse by Kenneth Silverman, A Thread across the Ocean: The Heroic Story of the Transatlantic Telegraph by John Steele Gordon, and The Story of the Atlantic Telegraph by Henry M. Field. Online sources include “Heart’s Content Cable Station” by Jerry Proc, Distant Writing: A History of the Telegraph Companies in Britain between 1838 and 1868 by Steven Roberts, and History of the Atlantic Cable & Undersea Communications.)

Footnotes
1 No relation to the much more comprehensive history of the endeavor which Henry Field would later write under the same title.

The Neo-Classical Interactive Fiction of 1995

For all that it was a period with some significant sparks of heat and light, we might reasonably call the time between 1989 and 1994 the Dark Ages of Interactive Fiction. It was only in 1995 that the lights were well and truly turned on again and the Interactive Fiction Renaissance began in earnest. This was the point when a number of percolating trends — the evolving TADS and Inform programming languages, the new generation of Z-Machine interpreters, the serious discussions of design craft taking place on Usenet — bore a sudden and rather shockingly verdant fruit. It became, one might say, Year One of the interactive-fiction community as we know it today.

The year is destined always to be remembered most of all for the very first Interactive Fiction Competition, better known as simply the “IF Comp” to its friends. Its influence on the design direction of what used to be called text adventures would soon become as undeniable as it was unwelcome in the eyes of some ultra-traditionalists: its guidance that entries should be finishable in two hours or so led in the course of things to an interest in depth in place of breadth, in literary and formal experimentation in place of the “gamier” pleasures of point-scoring and map-making.

But the Comp’s influence would take time to make itself known. This first edition of it, organized by an early community pillar named G. Kevin Wilson, was a relatively modest affair, with just twelve entries, six in each of the two categories into which it was divided: one for TADS games, one for Inform games. (This division would fall by the wayside in future Comps.) The entries did prefigure some of the self-referential experimentation to come: Undo by Neil deMause placed you at the very end of a (deliberately) broken, corrupted game and expected you to muddle your way to victory; Mystery Science Theater 3000 Presents Detective by C.E. Forman made somewhat mean-spirited, television-inspired fun of a really, really bad game released a few years earlier by a twelve-year-old author; The Magic Toyshop by Gareth Rees took place all in one room, thus becoming the perfect treat for mapping haters. Yet in my opinion none of these games join the ranks of the year’s very best works.

In retrospect, the lineup of games in that first Comp is perhaps most notable for becoming the venue for the first polished work of interactive fiction by Andrew Plotkin; his influence on the future direction of the community, in terms of both aesthetics and technology, would be comparable only to that of Mike Roberts and Graham Nelson among the figures we’ve already met in previous articles. But his A Change in the Weather, a punishingly difficult meta-puzzle of a game which one couldn’t hope to solve without many replays, stands as a fairly minor entry in his impressive oeuvre today, despite winning the Inform category of that first Comp.

So, I’d like to reserve any more discussion of this and subsequent IF Comps for future articles, and focus today on what I consider to be the real standout text adventures of 1995, of which there are a gratifying number. The games below evince no concern whatsoever about keeping their playing time down to a couple of hours. On the contrary: all of the games that follow are big enough that Infocom could conceivably have released them, while at least one or two of them are actually bigger than Infocom’s technology could possibly have allowed. Over the years, I’ve come to realize that works like these are my personal sweet spot for interactive fiction: big, puzzly works which are well-written but which aren’t afraid to be games — albeit games which incorporate the design lessons of those pioneers that came before them. Neo-classical interactive fiction, if you will. (Yes, I’m aware that we’ve jumped from the Renaissance to Neoclassicism with dizzying speed. Such is life when you’re making broad — overly broad? — historical metaphors.) If your preferences are anything like mine, the games that follow will be heaven for you.

In fact, let me close this introduction with something of a personal plea. I’ve noticed a reluctance on the part of many diehard Infocom fans to give what came afterward a fair shake. I do understand that nostalgia is a big part of the reason people read sites like this one and play the games that are featured here, and there’s nothing inherently wrong with that. Although I do try very hard to keep nostalgia out of my own game criticism, I firmly believe that no reason to play a game is ever a wrong one, as long as you’re enjoying yourself. And yet I also believe, and with equal firmness, that the games you’ll find below aren’t just as good as those of Infocom: in a lot of ways, they’re superior. There’s nothing postmodern or pretentious or precious here (all of these being labels I’ve heard applied to other strands of post-Infocom interactive fiction as a reason for not engaging with it), just good clean old-school fun, generally absent the worst old-school annoyances. Please do consider giving one or more of these games a try, if you happen to be a fan of Infocom who hasn’t yet explored what came afterward. Nostalgia is all well and good, but sometimes it’s nice to make new memories.


Christminster

You haven't seen your brother Malcolm since he received his fellowship at Biblioll College - pressure of work was his excuse not to come down to London. So when you received that telegram from him you leapt at the excuse to come up to the university town of Christminster for the day and visit him.

It’s all too easy to dismiss Gareth Rees’s “interactive conspiracy” Christminster as a sort of Curses-lite. It shares with Graham Nelson’s epic a droll, very English prose style, an arch sense of humor, and a casual erudition manifested in a love of literary quotations and classical references. Indeed, the connections between the games go deeper still: Graham and Gareth were not only both Oxbridge academics but friends who helped one another out creatively and technically. If you spend enough time poking around in Christminster’s library, you’ll discover that their games apparently belong to the same universe, when you uncover numerous references to the Meldrew family of Curses fame. But going too far with this line of description is doing Christminster a disservice. It may be smaller than Curses — to be fair, very few games aren’t — but it’s plenty rich in its own right, whilst being vastly more soluble by a reasonably motivated person in a reasonable amount of time.

Christminster takes place in the fictional English university town of the same name, but is obviously drawn to a large extent from the author’s lived experience.[1]For example, Graham Nelson informs us that “the appalling Professor Bungay,” the principal villain of the piece, “is a thinly disguised portrait of [name withheld], a Cambridge tutor, an awful man in a number of respects though not quite so bad as Gareth makes out. There is a wonderful bit where he can be heard gratuitously bullying a history undergraduate, winding up with a line like ‘Perhaps you had better change to Land Economy.’ This was an eccentric Cambridge degree which combined the second sons of the gentry, who would actually have to run large landed estates as their career, with a random selection of hapless students washed out of more high-brow subjects. Switching to Land Economy was Cambridge jargon for failing maths.” The time in which it occurs is kept deliberately vague; I vote for the 1950s, but one could almost equally opt for any point within a few decades to either side of that one. You play Christabel, a prim young lady who’s come up to Christminster to visit her brother Malcolm. But she soon discovers that he’s nowhere to be found, and that a shadowy occult enterprise seems to be afoot within his college’s ivy-covered walls. And so the hunt is on to find out what’s become of him and who is responsible.

None of this need be taken overly seriously. The game’s milieu of bumbling, slightly cracked old dons comes straight from the pages of Waugh, Amis, and Wodehouse, while its gloriously contrived central mystery would doubtless have pleased Agatha Christie. Thankfully, Christminster runs on plot time rather than clock time: the story evolves in response to your progress rather than placing you in thrall to some inexorable turn counter, in the way of the polarizing early Infocom mysteries. This leaves plenty of time to poke at every nook and cranny of the musty old campus and to enjoy some ingenious puzzles. In a few places, the design does show its age; the very first puzzle of the game is one of the very hardest, leaving you trapped outside of the college’s walls with nothing to do until you solve it — not exactly the most welcoming opening! But by all means do try to carry on, as the English like to say. If you do, you might just find Christminster to be one of the best cozy mysteries you’ll ever play.


John’s Fire Witch

It’s a cold weekend in December of 1990, and it’s been far too long since you have seen your friend John Baker! But you’ve finally managed to take some time out of your schedule to drive to Columbus and spend some “quality time” together. Quality time, of course, means that you and he are going to sample every bar that Ohio State University’s High Street has to offer.

John was to meet you at a favorite pizza and beer spot to start off the evening, but he hasn’t showed up. John’s always been rather spontaneous (read that as ‘erratic’), so you think he’ll show up eventually. But as the night wears on and you tire of downing beers by yourself, you decide to drive to his place and see if he’s left a note or something for you there.

You find his front door unlocked and John nowhere to be found. Pretty tired from your earlier drive, and also buzzing a bit from the beer you drank, you quickly doze off in the living room.

It is now morning. A terrible snow storm is raging outside, the worst you’ve ever seen. You can’t believe how much snow has piled up over the night. You still haven’t heard from John, and you seem to now be trapped in his apartment.

John’s Fire Witch by John Baker is an example of what we used to call “snack-sized interactive fiction” back in the day. Although the shortest game featured in this collection of reviews, it would be considered medium-sized today, with a typical play time in the range of two to five hours — i.e., not much if any shorter than, say, Infocom’s The Witness.

But no self-respecting member of the interactive-fiction literati would dare to release a game that opens like this one today. Waking up in your slovenly friend’s apartment is just one step removed from that ultimate in text-adventure clichés: the game that starts in your — or rather the author’s — bedroom. Make that half a step removed: note that the guy whose apartment you wake up in and the author of this game are the same person. “John, like many IF characters,” wrote David Welbourn in an online play-through of the game, “seems to live in a pigsty and eat nothing but snow.”

So, John’s Fire Witch is willfully unambitious; all it wants to do is entertain you for a few hours. Poking around your vanished friend’s apartment, you discover that he’s gotten himself caught up in a metaphysical struggle between an “ice wizard” and a “fire witch.” It’s up to you to rescue him by completing a number of unlikely tasks, such as collecting a handy grab bag of the seven deadly sins for a certain pitchfork-wielding character who dwells in the Down Below. (Luckily, good old John tends to partake in just about all of them on a regular basis, so his apartment makes a pretty good hunting ground.)

For two and a half decades now, critics like me have been intermittently trying to explain why John’s Fire Witch succeeds in being so appealing almost in spite of itself. Its prose treads that fine line between breezy and tossed-off, its thematic aspirations are non-existent, its puzzles are enjoyable but never breathtaking. In the end, maybe it just comes down to being good company. Its author’s personality comes through in droves, and you can’t help but like him. Beyond that… well, if it never does anything all that amazingly great, it never does anything all that egregiously wrong either.

The real John Baker disappeared without a trace after making this modest little game — good luck Googling that name! — leaving it behind as his only interactive-fiction legacy. He tells us that he’d like his players to send him $6, for lunch: “My favorite lunch is a soup & sandwich combo at a restaurant on Sawmill Road.” I for one would be happy to pay. Just drop me a line, John.


Lethe Flow Phoenix

A cool wind whips across the peak you stand on, sending tiny dust-devils whirling about your feet. The stars above you seem especially bright tonight, their silver light reaching across generations to speak to you. It is midnight, the hour of magic. The moon is not in sight tonight. All is still. All is waiting.

Perhaps it was a mistake to come and camp out here on this night. Not something you could have predicted in advance, of course, but still ... perhaps it was a little foolish. All Hallows’ Eve is not the most auspicious of nights. Still, you packed your bags up, tossed them next to the one-man tent in your trunk, and drove out here to spend a few days and get your life sorted out.

You were awakened in the middle of the night by something. You weren’t quite sure what, but you could tell something was wrong when you woke up. The desert was too quiet, too dark ... too eager. Like a sleep walker, you stumbled to the cliff nearby. You stood for a minute, catching your breath, and looked around. Behind you, at the other end of the shaky dirt trail, your car and tent wait patiently for your return. In other directions, you have a wide-open view of the desert, and can see it stretches in all directions, until it touches the feet of the mountains. The missing moon, curiously, does not concern you, nor does the fact that you can see as well now as if it were there.

You absentmindedly take another step forwards. If possible, the night becomes even more quiet, and the stars even brighter. Another step, and then another. You stand silently at the very edge of the cliff, looking outwards.

Then the ground gives way. “I’ve gone too far,” you think, almost casually. Not even screaming, you fall from the edge of the cliff.

***


There is a sudden sense of a presence around you as you fall. When you are rescued in mid-air, the event seems almost natural – bluesilver wings surround you, feathers caress you, and merciful darkness embraces you.

***

You awaken, and find yourself in a grassy field. The sun is shining brightly overhead, and a brook babbles gently as it flows along. A small tree grows in the center of the field, its branches ripe with apples.

If John’s Fire Witch is the My Stupid Apartment sub-genre of interactive fiction elevated to a weirdly sublime pitch, then Dan Shiovitz’s Lethe Flow Phoenix does the same for another hackneyed perennial of the post-commercial era: the Deeply Meaningful Exploration of the Subconscious. One always seems to find one or two games of this stripe, generally the products of younger scribes whose earnestness is almost painfully palpable, sloshing about in the lower rungs of any given IF Comp. Alas, their attempts to reveal inner truths through surrealistic imagery tend to come off as more banal than profound — rather like reading the diary of that angst-ridden fifteen-year-old so many of us used to be.

Dan Shiovitz was himself a fairly young man when he wrote Lethe Flow Phoenix, a game whose labored Latinate title doesn’t appear to bode well. Yet it turns out to be far better than one would ever dare to hope. Shiovitz has a knack for devising and describing beautifully twisted landscapes, through which he then proceeds to thread a series of deviously satisfying puzzles. At times, this game almost plays like a textual version of Myst, with much the same atmosphere of stately desolation and the same style of otherworldly but oddly logical dilemmas to overcome.

And then, around the halfway point, Lethe Flow Phoenix turns into something else entirely. Shiovitz provides an explanation for his protagonist’s personal problems, and it’s not at all what you might expect. I hesitate to say too much more here, but will go so far as to reveal that aliens from outer space — as opposed to just alienated humans — suddenly come into the picture. Again, this development should be disastrous, but somehow it works. The game manages to maintain your interest right up to its happy ending.

Dan Shiovitz went on to write several other text adventures after this one, perhaps most notably Bad Machine, an exploration of the frontiers of language sufficient to set any postmodern linguistic theorist’s heart aflutter. But even that experimental masterstroke shouldn’t be allowed to overshadow this early piece of work. Yes, the author of Lethe Flow Phoenix is clearly a young man, but this particular young man is also an observant, talented writer. His protagonist’s final redemption is genuinely moving, the journey to that point satisfying on several levels. Lethe Flow Phoenix pairs heart with craftsmanship, and the results are pretty great.


The Light: Shelby’s Addendum

A strangeness has fallen. You first became aware of it with the darkening of the skies: the majestic, threatening storm clouds that seemed on the verge of deluging the earth in a torrent, yet hung motionless, impatient, as though awaiting further instructions from some unseen and malignant higher power. Of course Holcroft had on many occasions disproved to you the existence of such higher beings with his charts and calculations, and you do not believe in such foolishness as ghosts, gods and goblins, but events such as those unfolding before you now are causing you to question all that you have learned.

First the clouds, then the sudden silence of the birdsong, and the people. Where were the people? The village was deserted as you passed through. Not a soul to be seen. You knew you had to alert Barclay and Holcroft that something was terribly wrong with the balance of things, but before you had reached even the main gate an impenetrable mist had rolled in from below the cliffs and obscured the path to the lighthouse.

You decided to wait in the drum shed until the mist had lifted, rather than risk life and limb on the cliff walk, but you were weary from your journey and fell into a deep sleep. When you awoke it was near nightfall. The mist had barely dissipated, but your task was too important, so you took your chances on the cliff walk regardless. It was so dark. Why hadn’t Barclay or Holcroft lit the beacon? In the two years since beginning your apprenticeship you had never known the Regulators to neglect their duties. On the contrary, you found them to be slavishly by the book. “Routine begets knowledge,” Barclay once told you. (He had obviously never cleaned the septic tank every month for two straight years).

When, at last, you reached the courtyard entrance, something even stranger happened. You began to feel suddenly and inexplicably weak, as though the very life were being drawn from your bones. You had eaten well on the train journey from the Commission’s headquarters in the capital city, and passed your last physical with glowing colors, yet you felt as though you were at death’s door.

You had to see Holcroft. He, perhaps, could explain....

Colm McCarthy’s The Light: Shelby’s Addendum is another game that’s better than its vaguely pretentious name. You play the eponymous Shelby, a junior — very junior — apprentice in a lonely lighthouse that provides more than just illumination: its beam maintains a delicate balance between our reality and other, alternate planes of existence. The hows and wherefores of its functioning are never explained all that well; ditto just when and where this story is supposed to be taking place. (We’re definitely on the Earth, probably in the near future, but is this our Earth or an alternate Earth?) In the end, the vagueness matters not a whit. A more thorough explanation would only interfere with the game’s atmosphere of mysterious Lovecraftian dread. You can almost smell the fetid seaside air as you play.

As the game opens, you’re returning to your post from a much-deserved holiday, only to find the lighthouse and even the village near it devoid of their usual inhabitants. Worse, the beacon itself has gone haywire, and the multiverse is slipping out of harmony as a result, producing unsettling effects all around you. Exploring the environs, you turn up evidence of the all-too-human disputes that gave rise to this slow-moving cosmic disaster. It looks like you are the only one who can correct the fault in our stars.

A big, lavish game, carefully written and implemented in most ways, The Light does from time to time trade in its polished personality for a more ramshackle old-school feel. If you don’t solve a pivotal puzzle within the first 100 turns — and you almost certainly won’t the first time through — it’s game over, thanks for playing. And there’s a mid-game submarine ride where the atmosphere suddenly changes from Lovecraftian dread to a scene straight out of the Beatles’ Yellow Submarine. Like most reviewers, I can only shake my head at this bit’s existence and wonder what the heck McCarthy was thinking.

Still, such breakdowns are very much the exception to the rule here. I’m nonplussed by some reviewers’ struggles with the puzzles; I solved the entire game without a hint, a feat which I’m happy to consider a testament to good design rather than any genius on my part. I’m kind of bummed that the sequel Colm McCarthy promises us in his denouement has never materialized. I’d love to know whether poor Shelby finally got a promotion after saving the multiverse and all.


Theatre

Another day, another dollar! Life is good at the moment, the property market is booming. Still, it does have its down side; when showing those Mulluer Corporation executives around that old theatre dump, err, opportunity you must have left your pager down in the basement. Better hurry, you have to meet the others at the opera in an hour, and be careful. It wouldn’t do to show up with your clothes all dirty.

Brendon Wyber’s “interactive night of horror” Theatre does us the favor of including its inspiration right in the game itself. As Wyber writes in his introduction, he made Theatre after reading an allegedly true haunted-house story by Joel Furr, one of the early Internet’s more prominent online characters, whose claims to fame include popularizing the term “spam.” Furr’s story, which is readable in its entirety via an in-game menu, is riveting whether you choose to go on to play said game itself or not. It involves the Lyric Theatre of Blacksburg, Virginia, a rambling old place stemming from 1930 that has been restored and is enjoying a new lease on life today, but was at its lowest ebb when Furr made its acquaintance in the early 1990s. As a Kiwi, Wyber had never been to the Lyric, yet that didn’t stop him from using Furr’s description of it as the basis for the setting if not the plot of his game.

You play a yuppie real-estate agent who rushes back inside the old theater he’s trying to unload to retrieve his forgotten pager — this is the 1990s, after all! — only to emerge again to find his car stolen. Rather than venturing out into the seedy neighborhood around the theater on foot, you opt to spend the night inside. Let the haunting begin…

Our frustrations with the medium understandably cause us to spend a lot of time talking about the things that textual interactive fiction, and adventure games in general for that matter, struggle to do well. For better or for worse, we tend to spend less time on the medium’s natural strengths. I’ll just note here, then, that setting must top any list of same. All of the games I’ve featured in this piece make this point, but none do it better than this one. Its name is no misnomer: the theater truly is this game’s main attraction. Its geography expands slowly and organically as you solve puzzles to open up new areas; there’s always some new cranny or crawlspace to uncover in the building, always some new aspect of its sinister history to bring to light. And it’s a fresh spine-shivering delight every time you do.

Before you become a full-fledged participant in the proceedings, you learn about the horror story at the center of it all through the journal pages you discover as you worm your way deeper and deeper into the theater’s bowels, deeper and deeper into its past. I must say that I like the first two-thirds of the game best, when it has a Gothic flavor in complete harmony with Joel Furr’s story. In time, however, it goes full Lovecraft, and not even in the relatively understated way of The Light. Still, one can’t accuse Wyber of pulling any punches; the big climax is as exciting as you could ask for.

Through it all, the real star remains the theater itself, whose faded elegance and delicious decay will remain with you long after you’ve exorcised the malevolent spirits that roam its spaces. You might want to save this one for Halloween.


Jigsaw

New Year's Eve, 1999, a quarter to midnight and where else to be but Century Park! Fireworks cascade across the sky, your stomach rumbles uneasily, music and lasers howl across the parkland... Not exactly your ideal party (especially as that rather attractive stranger in black has slipped back into the crowds) - but cheer up, you won't live to see the next.

As the follow-up to his two-year-old Curses, Graham Nelson’s “interactive history” Jigsaw was the most hotly anticipated text adventure of 1995. This game is even bigger than Curses — so big that Nelson had to employ a new, post-Infocom incarnation of the Z-Machine, a version 8 standard with the ability to handle story files of up to 512 K in size, in order to run the full version.[2] Although it will never be able to compete with its predecessor in terms of its importance to the history of its medium, in this critic’s opinion Jigsaw is the more accessible and enjoyable of the two games to play today.

It definitely doesn’t lack for ambition. Written just as millennial jitters were beginning to find a home in the minds of many of us, it’s a time-travel caper focusing on the horrible, magnificent century that was about to pass away, ranging in time and space from Kitty Hawk, North Carolina, on the day of the Wright brothers’ first flight to Berlin on the day the Wall came down. The principal antagonist and possible love interest — a timeline-wrecking “rather attractive stranger” of indeterminate gender, whom the game refers to only as “Black” after his or her choice of wardrobe — is misguided rather than evil, attempting to alleviate some of the century’s many injustices rather than bring on any apocalypse. But such retroactive changes are out of our mortal purview, of course, and can only lead to worse tragedies. “The time is out of joint,” as Hamlet said. Now, it’s up to you to set it right.

The amount of research required for the game’s fourteen historical vignettes was considerable to say the least — and that before a universe of information was only a visit to Wikipedia away, when one still had to go to brick-and-mortar libraries with printed encyclopedias on their shelves. Nelson doesn’t always get every detail correct: I could nitpick that the Titanic was actually not the first ship in history to send an SOS distress signal, for example, or note that his depiction of the Beatles of 1967 (“lurching wildly from one project to the next, hardly collaborating, always arguing”) seems displaced in time by at least a year.[3] Likewise, he’s sometimes a bit too eager to place ironic twists on the things we learned in our grade-school history classes. In light of what Nelson took on here, though, we can forgive him for all of this. He does a wonderful job of capturing the feel of each historical event. I also appreciate that his choices of historical linchpins aren’t always the obvious ones. For every voyage aboard a Titanic, there’s a visit to the cork-lined Parisian flat of Marcel Proust; for every trip to the Moon, there’s a sojourn in the filthy and disorganized laboratory of Alexander Fleming, the luckiest microbiologist who ever lived.

The episodic structure keeps Jigsaw manageable despite its overall sprawl, in marked contrast to Curses. Nelson, who had been thinking and writing seriously about design since his first game, went so far as to include a helpful little gadget which can alert you as to whether you’re leaving behind anything vital in each time period. Meanwhile the puzzles themselves are never less than solid, and are often inspired. One of them, in which you must decode a secret message using an only slightly simplified example of the German Enigma machines from the Second World War, has justly gone down in interactive-fiction lore as one of the best ever. Like so much of Jigsaw, it teaches even as it intrigues and entertains. I missed an important clue when I played through the game recently, which made this particular puzzle much harder than it was supposed to be. No worries — I enjoyed my two or three hours as a member of Alan Turing’s legendary team immensely, and positively jumped for joy when I finally produced a clear, cogent message from a meaningless scramble of letters.

My one real design complaint is the endgame, which takes place in a surreal fantasy landscape of the sort we’ve seen in too many other adventure games already. It feels both extraneous and thoroughly out of keeping with what has come before — and too darn hard to boot. I’ve said it before and I’ll say it again: by the time an adventurer reaches the endgame, especially of a work of this size, she just wants to be made to feel smart once or twice more and then to win. The designer’s job is to oblige her rather than to try to make himself feel smart. I must confess that I broke down and used hints for the endgame of Jigsaw, after solving the entirety of the rest of the game all by myself.

But the frustration of the endgame pales before the other delights on offer here. Nelson would never attempt a game of this size and scope again, making Jigsaw only that much more worth cherishing. Curses may be his most important game, but by my lights Jigsaw is his masterpiece.

Bonus:

Graham Nelson on Jigsaw


Curses had been written under the spell of the great cave games – Colossal Cave, Zork, Acheton. Games delving into a miscellany of doors, light puzzles, collection puzzles, and the like. Games written incrementally which ended up with epic, sprawling maps, but which started out only as entertainments written for friends. Each of those things is true about Curses as well.

But not Jigsaw. Once again Gareth Rees and Richard Tucker were the playtesters and de-facto editors, and the two games were recognisably from the same stable. There are many similarities, even down to having a one-word title, which I liked because it meant that the filename on an FTP server would likely be the whole title. It was always going to be a Z-machine story file once again, written with Inform. And it was playable under the same .z5 format as Curses, though I also offered a sort of director’s cut version with some extra annotation using the new .z8 format. (This was a sneaky way to try to persuade interpreter-writers to adopt .z8, which I worried people might think bogus and non-canonical, and so would not implement.)

Unlike Curses, though, Jigsaw was conceived holistically, had a rigorous plan, and was meant for the public rather than for friends. I set out to make the sort of rounded cultural artefact which middle-period Infocom might have offered — Dave Lebling’s Spellbreaker and Brian Moriarty’s Trinity are the obvious antecedents, but not the only ones. (Let me also praise Mike Dornbrook here, who was instrumental in making those games into clearly delineated works.) Those mature works of Infocom were satisfying to start, and satisfying to finish, and distinctive from each other. Infocom wasn’t big on historical settings (a shame that Stu Galley never completed his draft about the Boston of 1776), but in presentation, Jigsaw wouldn’t look out of place in their catalogue. In that sense, it’s rather derivative, even imitative, but this wasn’t seen as an eccentric or retro choice at the time; more of a mark of quality. But in any case, Jigsaw had other ambitions as well, and it’s on those other ambitions that it stands or falls.

Jigsaw strains to be a work of art, and though the strain shows from time to time, I think it mostly gets there. There are little embedded prose poems, generally at hinges in the story. Certain images – the nightjar, for example – are suggestive rather than explicated. There is also something a little poetic — and here I’m perhaps thinking more of the modernism of Ezra Pound’s cantos than of his more famous friend Eliot — about the interleaving of old formulations, old turns of speech. Jigsaw plays on the tantalising way that past times were so confident at being themselves. Nobody using an Apollo Guidance Computer thought of it as twee or retro. And you could say the same about a tram-ticket or a gas lamp, things that people used without a second thought. We have absolute confidence only about our own present moment, while the past seems hazy and uncertain. But the people who lived in that past felt exactly the same about their own present moments. For historical fiction to work, it has to side with them, not with us.

And on the other hand, while it is a modernist impulse to clash the old and the new, it’s a Romantic one to re-enact the old, to imaginatively take part in it. I’ve always liked the biographer Richard Holmes’s observation that to write a biography is an inherently Romantic act.

As I wrote Jigsaw in 1995, the twentieth century was coming to a relatively placid end — I hope anyone caught up in the Yugoslav civil wars will forgive me writing that. It was zeitgeisty to see the story of the age as being mostly done, even with a few years still to go. Francis Fukuyama’s The End of History and the Last Man (1992) was less sceptically received at the time than its later reputation might suggest. People were already gathering and tidying up the twentieth century. So I wasn’t the only one to jump the gun in writing about it.

Jigsaw has a classical IF structure, with a prologue, a middle game, and an end game. Less conventionally, a form of the end game – an area called “The Land” – is seen in a ghostly way throughout, while the middle game is divided into a grid of what amount to mini-games. Notably, these have named chapter headings.

The prologue takes place on the final night of 1999, on the margins of a public festival. I anticipated an event at a London park, and that was indeed the English response, though it turned out to be the ultra-modern Millennium Dome at Greenwich (begun in 1997) and not my more Victorian-sounding “Century Park”. The setting has something of the flavour of H. G. Wells’s The Time Machine, but in fact I semi-lifted it from an episode of Charles Chilton’s iconic BBC radio serial Journey into Space. That involved an enigmatic character named Whittaker who had been taken out of normally-running time in 1924 from a London park celebration (“There are special trains from Baker Street”). Other than scene-setting, the prologue’s goal is to make the complex jigsaw mechanism comprehensible. It’s a familiar IF travel-around-the-map mechanism, with the puzzle pieces serving as objects of desire which unlock further play. But at the same time, it is also the game’s organising metaphor. So these mechanics have to seem natural and fun to players. Getting the textual display and command verbs right was a major concern in early play-testing.

With prologue out of the way, we enter the past. Jigsaw claims in its banner to be “an interactive history”, which is awfully bold of it. As we’ve already established, it’s a work of fantasy. But perhaps the claim to be “a history” can just about be made. Attempts to define what that even means — cf. E. H. Carr, “What Is History?”; Richard Evans, “In Defence of History” — end up devoting much of their space just to enumerating lines of approach, after all. Mine is odder than most, but less odd than some. At its crudest, the historian’s choice is between asking “who took what decisions?” and asking “what was life like?”. Is 19th-century Europe the story of Napoleon and Bismarck and Garibaldi, who started wars and redrew maps, or is all of that froth compared to railways, manufacturing, anaesthetics, and newspapers? Jigsaw goes the second way, with Lenin being I think the only world leader seen close up.

The Titanic sequence, the first one I wrote, is the one I would now leave out. Rich people drowned, but other rich people took their places, and history wasn’t much dented. Perhaps it left a greater sense of possible catastrophe in the popular imagination, but the Sarajevo 1914 sequence makes that point better anyway. Besides, having an accidental time traveller arrive on the Titanic is a very hackneyed plot device. (I’ve just been dismayed to find from Wikipedia that it’s even the pilot episode plot of Irwin Allen’s spangly TV show The Time Tunnel.) Still, the ocean liner was fun to recreate as a period piece. The bit where a passenger says, “Never mind, worse things happen at sea,” is my favourite joke in the whole game. And researching this did lead to one happy accident. Going through a heap of books and pamphlets in the Bodleian Library, I chanced on something I remembered from somewhere else, and this led to a short paper in the literary-discoveries journal Notes & Queries. That squib of a paper is still occasionally cited, and I was amused to see “Nelson, Graham” back to back with “Nietzsche, Friedrich” in the bibliography of a monograph as a result.

A better choice was the Apollo programme. The lunar module was controlled using VERB and NOUN commands, which made it pleasingly IF-sounding: why not send the player to the moon? I also wanted to have something about the mid-century zenith of big-state action — a world in which Kennedy could just decide that the United States would do something immense, and it would happen. (The Manhattan Project is another example, but Trinity had already done that.) Another take on Apollo would be that it changed our sensibility, forcing us to see ourselves from the outside. The cover art for Jigsaw is the Apollo 8 shot of the earth rising from lunar orbit, maybe the most reproduced photo of the century. But I also tried to evoke Apollo’s troubling sense of abandonment. First steps were last steps. The century’s most powerful civilisation did something astonishing and then just lost interest. To me, the question about the Pyramids is not why the pharaohs built them, but why they stopped.

In fact, even as I wrote, Apollo’s posthumous reputation was beginning a slow comeback. A new generation of geeks devoured Andrew Chaikin’s landmark book A Man on the Moon (1994). Also, the Internet had arrived. In 1995, Eric Jones’s Apollo Lunar Surface Journal became an extremely useful website. I corresponded a little with Eric at the time; he was, tellingly, having trouble finding a publisher. But thanks to his work, the Apollo sequence of Jigsaw — whatever its fantastical additions — is quite true to the actual Taurus-Littrow valley of the moon, and not a grey abstraction.

Fourteen historical vignettes is too many. It was hard to do much with so few rooms and items in each, especially as they had to be playable in multiple orders. A fundamentally un-cave-like quality of Jigsaw is that you can’t wander about from era to era, and it is only rarely that something in one era is helpful in another. (Even then, alternative solutions are sometimes provided.) But I worried that the lack of space made these mini-games too easy, and over-compensated with highly convoluted device-based puzzles. Fly your very own B-52! I truly repent of how difficult that sequence is to play.

A happier example was the Enigma machine. I’ve used one in real life, encoding a very short message on a surviving Enigma which belongs to the science writer Simon Singh. Still, this section was really based on the oral histories of Bletchley Park edited by Hinsley and Stripp in 1993; accounts which, a bit madly, had only just been declassified. I imbibed some of the recherché jargon of the codebreakers, who lived in a strangely appealing world of their own. I was very taken with the vulnerability of Enigma, caused by the frequent presence of double letters in German words. One of the myths of Bletchley was that the invention of the computer flat-out defeated Enigma, as if you just had to press a button. It would be fairer to say that the computer made breaking the code just on the edge of what was possible. A certain cunning was still needed, and luck as well. They found ways to make their own luck, but there were also terrible periods when they failed, and when many sailors went to the bottom of the Atlantic as a result. My grandfather served on two Royal Navy convoys to Murmansk, and he was fortunate that those coincided with a good run at Bletchley, though he never knew it. That, and the thought that I might have been there myself if I had been an Oxford maths post-doc in 1942 rather than 1995, made this vignette more personal to me.

Fourteen vignettes is also too few. I chose Marcel Proust and the Beatles as my artists of the century, for example, and with them I had used up the entire space available for cultural history. My fourteen moments have to spread themselves very thinly over a lot of ground, and there is clearly no single or perfect solution to this. Still, Jigsaw has a clear Western bias. I probably should have chosen the release of Nelson Mandela in 1990 rather than the fall of the Berlin Wall in 1989. Africa appears only tangentially, in the Suez Crisis of 1956, which has to stand for the whole of postcolonialism. Even then, my main inspiration was Christopher Hampton’s autobiographical play White Chameleon, and Hampton is British. China does not appear at all, which from a 21st-century viewpoint seems very jarring. From the vantage point of 2021, civil rights also look pretty salient, but in 1995 it did not seem that way: the movement for women’s suffrage is all you get. Why no M.L.K.? That now seems very odd, except that I had plenty of the 1960s already. Some potential topics were also dropped just for lack of puzzles about them, or because they didn’t really fit anywhere. Though I don’t know to what extent players were ever aware of it, the connection points on the jigsaw pieces tried to suggest thematic links. The Wright brothers to Apollo, and so on.

Another consideration was, for want of a better word, taste. Fascism seemed mostly done in 1995, but it had clearly been a big part of the story. It isn’t a big part of Jigsaw because, in the end, is there any ethical way to recreate the experience of being massacred for no reason? The Holocaust does have a presence in Jigsaw, but very indirectly. Buried somewhere is a little anecdote about a young Jewish boy in Berlin in the 1930s, who had picked up a shiny badge in the street with no understanding that it was Nazi regalia which he could be killed just for touching – one of the few moments in Jigsaw told to me by an eye-witness, the boy himself, who survived to be a retired professor. What I really did not want to do was to recreate a version of Auschwitz which came with an escape hatch. And then of course Vietnam, Cambodia, the genocide of the Armenians, Kosovo, Rwanda, you name it. Quite the charnel house we made for ourselves, you have to say. In a room of the end game which, if memory serves, was called the Toll Gate, there is a cumulative graph of humans deliberately killed, plotted against time. This graph surges at the World Wars but it certainly isn’t flat in between them.

There are a few other grim moments like that in the endgame, too. The endgame is the strangest part of Jigsaw and probably the least successful. But here’s what I think I was trying for. The Land does partly bring in concerns not tied to specific moments – pollution, for example, though not global warming, which we were all cheerfully ignoring in 1995. (But not now, right? Right?) At the same time, I didn’t want bleakness to dominate, and I wanted to end on brighter, more fantastical colours. There is supposed to be a sort of Eden-like rebirth as another century is coming, with this endgame area as the Garden of that Eden. Underlying all of history, but often invisible from it, there is always the goodness of the world, our one place of happiness. The chapter title for the endgame is “The Living Land”, and it’s about life in opposition to death.

But it is also too fiddly and is not the enjoyable romp I intended it to be. I don’t like the self-indulgent references to past IF games: what are they even for? The extent of the Land was a more understandable mistake — it’s because of the structural obsession of Jigsaw with its key mechanic. Rooms in The Land correspond to the original pieces, but that meant having quite a lot of them, which in turn meant padding out this space with puzzles. In fact, the endgame is so long that it has a little endgame of its own, taking us back to Century Park. But that was absolutely the right way to end. When you are composing a set of variations, finish on a da capo repetition of the original theme.

Finally, whereas Curses has no significant characters other than the protagonist, in Jigsaw the player has a significant other, called Black. In timecop sci-fi novels, the hero generally does battle with a rival time traveller. One tries to rewrite history, the other to keep it on track. Well, that is basically the situation here. Emphasising this, Black is a symbolic and non-human sort of name: White’s opponent in a game. (The Apollo lunar lander shared with Black has the call-sign “Othello”, and this is a reference to the strategy game, not the Shakespeare play.) The neutral name Black also worked better for blurring gender than having to use contrived unisex forenames like Hilary, Pat, or Stevie.

In retrospect, this genderless romance is the main thing people remember about Jigsaw. I wouldn’t make much claim for the depth or solidity of that romantic subplot: but at least it was there, and was something you wouldn’t find in the Nancy Drew/Hardy Boys sort of milieu of most earlier IF. There is even, however glancingly, a presence of sex. That much was deliberate. But when I was writing, the absence of genders seemed just another narrative choice. I wanted a certain universalism, a sort of every-person quality to the player. And I didn’t want some sort of performative nonsense like the barroom scene at the start of Leather Goddesses of Phobos, where you demonstrate your gender by picking a bathroom, but have no way to demonstrate your orientation.

Anyway, this seemed like a statement only after publication, when I began to get rather touching emails from players. I think Jigsaw may have been quite widely played, and this was easily the aspect most responded to. Happy emails were often from women. I did also get a smaller amount of homophobic mail, and that was invariably from men, who reacted as if they’d been catfished.

We easily forget now that in 1995 gay relationships were socially invisible. There were no openly gay characters on Star Trek: The Next Generation or in the other prime-time staples of the day. A handful of New York sitcoms were just starting to go there, but for the most part, in popular culture, gay people existed as people with problems. Tom Hanks won an Oscar for Philadelphia (1993), but it’s a movie about a closeted man with AIDS. Sleepless in Seattle, the same year, could easily have played some non-binary games with its two lovers, since they don’t meet until the very end. But it doesn’t. In the 1990s, romance in popular culture was almost exclusively straight. Nobody thought that odd at the time, and nor did I. I didn’t write a gay romance at all, I simply wrote a romance which was whatever you wanted to imagine it was. I would like to say that the gender games in Jigsaw were a nod to the gradual emancipation of love in the twentieth century. But that was the one thing about Jigsaw which was completely unplanned.

One of those emails I received was from the young Emily Short, though we did not meet for many years, and it was in another century that we married. History is full of surprises.


(All of the games reviewed in this article are freely available via the individual links provided above and are playable on Windows, Macintosh, and Linux using the Gargoyle interpreter among other options.)

Footnotes

1 For example, Graham Nelson informs us that “the appalling Professor Bungay,” the principal villain of the piece, “is a thinly disguised portrait of [name withheld], a Cambridge tutor, an awful man in a number of respects though not quite so bad as Gareth makes out. There is a wonderful bit where he can be heard gratuitously bullying a history undergraduate, winding up with a line like ‘Perhaps you had better change to Land Economy.’ This was an eccentric Cambridge degree which combined the second sons of the gentry, who would actually have to run large landed estates as their career, with a random selection of hapless students washed out of more high-brow subjects. Switching to Land Economy was Cambridge jargon for failing maths.”
2 Nelson did also provide a version of Jigsaw that could run on older interpreters by moving his historical notes and some other bits to a separate story file.
3 Still less can I agree with his opinion that “a good deal of their music was dross by this stage.” I’ll be the first to argue that the Beatles never made a better album than A Hard Day’s Night, only different ones, but come on…
 

Posted on September 3, 2021 in Digital Antiquaria, Interactive Fiction

 
