Tales of the Mirror World, Part 2: From Mainframes to Micros

The BESM-6

Seen from certain perspectives, Soviet computer hardware as an innovative force of its own peaked as early as 1968, the year the first BESM-6 computer was powered up. The ultimate evolution of the line of machines that had begun with Sergei Lebedev’s original MESM, the BESM-6 was the result of a self-conscious attempt on the part of Lebedev’s team at ITMVT to create a world-class supercomputer. By many measures, they succeeded. Despite still being based on transistors rather than the integrated circuits that were becoming more and more common in the West, the BESM-6’s performance was superior to all but the most powerful of its Western peers. The computers generally acknowledged as the fastest in the world at the time, a line of colossi built by Control Data in the United States, were just a little over twice as fast as the BESM-6, which had nothing whatsoever to fear from the likes of the average IBM mainframe. And in comparison to other Soviet computers, the BESM-6 was truly a monster, ten times as fast as anything the country had managed to produce before. In its way, the BESM-6 was as amazing an achievement on Lebedev’s part as had been the MESM almost two decades earlier. Using all home-grown technology, Lebedev and his people had created a computer almost any Western computer lab would have been proud to install.

At the same time, though, the Soviet computer industry’s greatest achievement to date was, almost paradoxically, symbolic of all its limitations. Sparing neither expense nor effort to build the best computer they possibly could, Lebedev’s team had come close to but not exceeded the Western state of the art, which in the meantime continued marching inexorably forward. All the usual inefficiencies of the Soviet economy conspired to prevent the BESM-6 from becoming a true game changer rather than a showpiece. BESM-6s would trickle only slowly out of the factories; only about 350 of them would be built over the course of the next 20 years. They became useful tools for the most well-heeled laboratories and military bases, but there simply weren’t enough of them to implement even a fraction of the cybernetics dream.

A census taken in January of 1970 held that there were just 5500 computers operational in the Soviet Union, as compared with 62,500 in the United States and 24,000 in Western Europe. Even if one granted that the BESM-6 had taken strides toward solving the problem of quality, the problem of quantity had yet to be addressed. Advanced though the BESM-6 was in so many ways, for Soviet computing in general the same old story held sway. A Rand Corporation study from 1970 noted that “the Soviets are known to have designed micro-miniaturized circuits far more advanced than any observed in Soviet computers.” The Soviet theory of computing, in other words, continued to far outstrip the country’s ability to make practical use of it. “In the fundamental design of hardware and software the Russian computer art is as clever as that to be found anywhere in the world,” said an in-depth Scientific American report on the state of Soviet computing from the same year. “It is in the quality of production, not design, that the USSR is lagging.”

One way to build more computers more quickly, the Moscow bureaucrats concluded, was to share the burden among their partners (more accurately known to the rest of the world as their vassal states) in the Warsaw Pact. Several member states — notably East Germany, Czechoslovakia, and Hungary — had fairly advanced electronics industries whose capabilities in many areas exceeded the Soviets’ own, not least because their geographical locations left them less isolated from the West. At the first conference of the International Center of Scientific and Technical Information in January of 1970, following at least two years of planning and negotiating, the Soviet Union signed an agreement with East Germany, Czechoslovakia, Bulgaria, Hungary, Poland, and Romania to make the first full-fledged third-generation computer — one based on integrated circuits rather than transistors — to come out of Eastern Europe. The idea of dividing the labor of producing the new computer was taken very literally. In a testimony to the “from each according to his means” tenet of communism, Poland would make certain ancillary processors, tape readers, and printers; East Germany would make other peripherals; Hungary would make magnetic memories and some systems software; Czechoslovakia would make many of the integrated circuits; Romania and Bulgaria, the weakest sisters in terms of electronics, would make various mechanical and structural odds and ends; and the Soviet Union would design the machines, make the central processors, and be the final authority on the whole project, which was dubbed “Ryad,” a word meaning “row” or “series.”

The name was no accident. On the contrary, it was key to the nature of the computer — or, rather, computers — the Soviet Union and its partners were now planning to build. With the BESM-6 having demonstrated that purely home-grown technology could get their countries close to the Western state of the art but not beyond it, they would give up on trying to outdo the West. Instead they would take the West’s best, most proven designs and clone them, hoping to take advantage of the eye toward mass production that had been baked into them from the start. If all went well, 35,000 Ryad computers would be operational across the Warsaw Pact by 1980.

In a sense, the West had made it all too easy for them, given Project Ryad all too tempting a target for cloning. In 1964, in one of the most important developments in the history of computers, IBM had introduced a new line of mainframes called the System/360. The effect it had on the mainframe industry of the time was very similar to the one which the IBM PC would have on the young microcomputer industry 17 years later: it brought order and stability to what had been a confusion of incompatible machines. For the first time with the System/360, IBM created not just a single machine or even line of machines but an entire computing ecosystem built around hardware and software compatibility across a wide swathe of models. The effect this had on computing in the West is difficult to overstate. There was, for one thing, soon a large enough installed base of System/360 machines that companies could make a business out of developing software and selling it to others; this marked the start of the software industry as we’ve come to know it today. Indeed, our modern notion of computing platforms really begins with the System/360. Dag Spicer of the Computer History Museum calls it IBM’s Manhattan Project. Even at the time, IBM’s CEO Thomas Watson Jr. called it the most important product in his company’s already storied history, a distinction which is challenged today only by the IBM PC.

The System/360 ironically presaged the IBM PC in another respect: as a modular platform built around well-documented standards, it was practically crying out to be cloned by companies that might have trailed IBM in terms of blue-sky technical innovation, but who were more than capable of copying IBM’s existing technology and selling it at a cheaper price. Companies like Amdahl — probably the nearest equivalent to IBM’s later arch-antagonist Compaq in this case of parallel narratives — lived very well on mainframes compatible with those of IBM, machines which were often almost as good as IBM’s best but were always cheaper. None too pleased about this, IBM responded with various sometimes shady countermeasures which landed them in many years of court cases over alleged antitrust violations. (Yes, the histories of mainframe computing and PC computing really do run on weirdly similar tracks.)

If the System/360 from the standpoint of would-be Western cloners was an unlocked door waiting to be opened, from the standpoint of the Soviet Union, which recognized no Western intellectual-property rights whatsoever, the door was already flung wide. Thus, instead of continuing down the difficult road of designing its high-end computers from scratch, the Soviet Union decided to stroll on through.

An early propaganda shot shows a Ryad machine in action.

There’s much that could be said about what this decision symbolized for Soviet computing and, indeed, for Soviet society in general. For all the continuing economic frustrations lurking below the surface of the latest Pravda headlines, Khrushchev’s rule had been the high-water mark of Soviet achievement, when the likes of the Sputnik satellite and Yuri Gagarin’s flight into space had seemed to prove that communism really could go toe-to-toe with capitalism. But the failure to get to the Moon before the United States among other disappointments had taken much of the shine off that happy thought.1 In the rule of Leonid Brezhnev, which began with Khrushchev’s unceremonious toppling from power in October of 1964, the Soviet Union gradually descended into a lazy decrepitude that gave only the merest lip service to the old spirit of revolutionary communism. Corruption had always been a problem, but now, taking its cue from its new leader, the country became a blatant oligarchy. While Brezhnev and his cronies collected dachas and cars, their countryfolk at times literally starved. Perhaps the greatest indictment of the system Brezhnev perpetuated was the fact that by the 1970s the Soviet Union, in possession of more arable land than any nation on earth and with one of the sparsest populations of any nation in relation to its land mass, somehow still couldn’t feed itself, being forced to import millions upon millions of tons of wheat and other basic foodstuffs every year. Thus Brezhnev found himself in the painful position, all too familiar to totalitarian leaders, of being in some ways dependent on the good graces of the very nations he denigrated.

In the Soviet Union of Leonid Brezhnev, bold ideas like the dream of cybernetic communism fell decidedly out of fashion in favor of nursing along the status quo. Every five years, the Party Congress reauthorized ongoing research into what had become known as the “Statewide Automated Management System for Collection and Processing of Information for the Accounting, Planning, and Management of the National Economy” (whew!), but virtually nothing got done. The bureaucratic infighting that had always negated the perceived advantages of communism — as perceived optimistically by the Soviets, and with great fear by the West — was more pervasive than ever in these late years. “The Ministry of Metallurgy decides what to produce, and the Ministry of Supplies decides how to distribute it. Neither will yield its power to anyone,” said one official. Another official described each of the ministries as being like a separate government unto itself. Thus there might not be enough steel to make the tractors the country’s farmers needed to feed its people one year; the next, the steel might pile up to rust on railway sidings while the erstwhile tractor factories were busy making something else.

Amidst all the infighting, Project Ryad crept forward, behind schedule but doggedly determined. This new face of computing behind the Iron Curtain made its public bow at last in May of 1973, when six of the seven planned Ryad “Unified System” models were in attendance at the Exposition of Achievements of the National Economy in Moscow. All were largely hardware- and software-compatible with the IBM System/360 line. Even the operating systems that were run on the new machines were lightly modified copies of Western operating systems like IBM’s DOS/360. Project Ryad and its culture of copying would come to dominate Soviet computing during the remainder of the 1970s. A Rand Corporation intelligence report from 1978 noted that “by now almost everything offered by IBM to 360 installations has been acquired” by the Soviet Union.

Project Ryad even copied the white lab coats worn by the IBM “priesthood” (and gleefully scorned by the scruffier hackers who worked on the smaller but often more innovative machines produced by companies like DEC).

During the five years after the Ryad machines first appeared, IBM sold about 35,000 System/360 machines, while the Soviet Union and its partners managed to produce about 5000 Ryad machines. Still, compared to what the situation had been before, 5000 reasonably modern machines was real progress, even if the ongoing inefficiencies of the Eastern Bloc economies kept Project Ryad from ever reaching more than a third of its stated yearly production goals. (A telling sign of the ongoing disparities between West and East was the way that all Western estimates of future computer production tended to vastly underestimate the reality that actually arrived, while Eastern estimates did just the opposite.) If it didn’t exactly allow Eastern Europe to make strides toward any bold cybernetic future — on the contrary, the Warsaw Pact economies continued to limp along in as desultory a fashion as ever — Project Ryad did do much to keep its creator nations from sliding still further into economic dysfunction. Unsurprisingly, a Ryad-2 generation of computers was soon in the works, cloning the System/370, IBM’s anointed successor to the System/360 line. Other projects cloned the DEC PDP line of machines, smaller so-called “minicomputers” suitable for more modest — but, at least in the West, often more interesting and creative — tasks than the hulking mainframes of IBM. Soviet watcher Seymour Goodman summed up the current situation in an article for the journal World Politics in 1979:

The USSR has learned that the development of its national computing capabilities on the scale it desires cannot be achieved without a substantial involvement with the rest of the world’s computing community. Its considerable progress over the last decade has been characterized by a massive transfer of foreign computer technology. The Soviet computing industry is now much less isolated than it was during the 1960s, although its interfaces with the outside world are still narrowly defined. It would appear that the Soviets are reasonably content with the present “closer but still at a distance” relationship.

Reasonable contentment with the status quo would continue to be the Kremlin’s modus operandi in computing, as in most other things. The fiery rhetoric of the past had little relevance to the morally and economically bankrupt Soviet state of the 1970s and 1980s.

Even in this gray-toned atmosphere, however, the old Russian intellectual tradition remained. Many of the people designing and programming the nation’s computers barely paid attention to the constant bureaucratic turf wars. They’d never thought that much about philosophical abstractions like cybernetics, which had always been more a brainchild of the central planners and social theorists than the people making the Soviet Union’s extant computer infrastructure, such as it was, work. Like their counterparts in the West, Soviet hackers were more excited by a clever software algorithm or a neat hardware re-purposing than they were by high-flown social theory. Protected by the fact that the state so desperately needed their skills, they felt free at times to display an open contempt for the supposedly inviolate underpinnings of the Soviet Union. Pressed by his university’s dean to devote more time to the ideological studies that were required of every student, one young hacker said bluntly that “in the modern world, with its super-speedy tempo of life, time is too short to study even more necessary things” than Marxism.

Thus in the realm of pure computing theory, where advancement could still be made without the aid of cutting-edge technology, the Soviet Union occasionally made news on the world stage with work evincing all the originality that Project Ryad and its ilk so conspicuously lacked. In October of 1978, a quiet young researcher at the Moscow Computer Center of the Soviet Academy of Sciences named Leonid Genrikhovich Khachiyan submitted a paper to his superiors with the uninspiring — to non-mathematicians, anyway — title of “Polynomial Algorithms in Linear Programming.” Following its publication in the Soviet journal Reports of the Academy of Sciences, the paper spread like wildfire across the international community of mathematics and computer science, even garnering a write-up in the New York Times in November of 1979. (Such reports were always written in a certain tone of near-disbelief, of amazement that real thinking was going on in the Mirror World.) What Khachiyan’s paper actually said was almost impossible to clearly explain to people not steeped in theoretical mathematics, but the New York Times did state that it had the potential to “dramatically ease the solution of problems involving many variables that up to now have required impossibly large numbers of separate computer calculations,” with potential applications in fields as diverse as economic planning and code-breaking. In other words, Khachiyan’s new algorithms, which have indeed stood the test of time in many and diverse fields of practical application, can be seen as a direct response to the very lack of computing power with which Soviet researchers constantly had to contend. Sometimes less really could be more.
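Linear programming itself is easy to demonstrate, even if the ellipsoid method behind Khachiyan’s proof is not. Below is a toy two-variable instance in Python (the numbers are my own illustrative choices, not anything from Khachiyan’s paper), solved by the brute-force tactic of checking every corner of the feasible region. That tactic works fine here, but the number of corners explodes combinatorially as variables and constraints are added; Khachiyan’s achievement was proving that the optimum can always be found in polynomial time instead.

```python
from itertools import combinations

# A toy linear program: maximize 3x + 2y subject to
#   x + y <= 4,  x <= 2,  x >= 0,  y >= 0.
# Each constraint is stored as (a, b, c), meaning a*x + b*y <= c.
constraints = [(1, 1, 4), (1, 0, 2), (-1, 0, 0), (0, -1, 0)]

def solve_by_vertex_enumeration(constraints, objective):
    """Check every intersection of two constraint boundaries.

    The optimum of a linear program always lies at a vertex of the
    feasible region, but the number of candidate vertices grows
    combinatorially with the number of variables and constraints,
    which is exactly the inefficiency Khachiyan's result sidesteps.
    """
    best_point, best_value = None, None
    for (a1, b1, c1), (a2, b2, c2) in combinations(constraints, 2):
        det = a1 * b2 - a2 * b1
        if abs(det) < 1e-12:          # parallel boundaries: no vertex
            continue
        x = (c1 * b2 - c2 * b1) / det
        y = (a1 * c2 - a2 * c1) / det
        # Keep the point only if it satisfies *every* constraint.
        if all(a * x + b * y <= c + 1e-9 for a, b, c in constraints):
            value = objective[0] * x + objective[1] * y
            if best_value is None or value > best_value:
                best_point, best_value = (x, y), value
    return best_point, best_value

point, value = solve_by_vertex_enumeration(constraints, (3, 2))
print(point, value)   # → (2.0, 2.0) 10.0
```

With two variables there are only a handful of corners to test; with the “many variables” the New York Times mentioned, this approach becomes hopeless, which is why a polynomial-time guarantee mattered so much.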

As Khachiyan’s discoveries were spreading across the world, the computer industries of the West were moving into their most world-shaking phase yet. A fourth generation of computers, defined by the placing of the “brain” of the machine, or central processing unit, all on a single chip, had arrived. Combined with a similar miniaturization of the other components that went into a computer, this advancement meant that people were able for the first time to buy these so-called “microcomputers” to use in their homes — to write letters, to write programs, to play games. Likewise, businesses could now think about placing a computer on every single desk. Still relatively unremarked by devotees of big-iron institutional computing as the 1970s expired, over the course of the 1980s and beyond the PC revolution would transform the face of business and entertainment, empowering millions of people in ways that had heretofore been unimaginable. How was the Soviet Union to respond to this?

Anatoly Alexandrov, the president of the Soviet Academy of Sciences, responded with a rhetorical question: “Have [the Americans] forgotten that problems of no less complexity, such as the creation of the atomic bomb or space-rocket technology… [we] were able to solve ourselves without any help from abroad, and in a short time?” Even leaving aside the fact that the Soviet atomic bomb was itself built largely using stolen Western secrets, such words sounded like they heralded a new emphasis on original computer engineering, a return to the headier days of Khrushchev. In reality, though, the old ways were difficult to shake loose. The first Soviet microprocessor, the KR580VM80A of 1977, had its “inspiration” couched inside its very name: the Intel 8080, which, along with the Motorola 6800, was one of the two chips that had launched the PC revolution in the West in 1974.

Yet in the era of the microchip the Soviet Union ran into problems continuing the old practices. While technical schematics for chips much newer and more advanced than the Intel 8080 were soon readily enough available, they were of limited use in Soviet factories, which lacked the equipment to stamp out the ever more miniaturized microchip designs coming out of Western companies like Intel.

One solution might have been for the Soviets to hold their noses and outright buy the chip-fabricating equipment they needed from the West. In earlier decades, such deals had hardly been unknown, although they tended to be kept quiet by both parties for reasons of pride (on the Eastern side) and public relations (on the Western side). But, unfortunately for the Soviets, the West had finally woken up to the reality that microelectronics were as critical to a modern war machine as missiles and fighter planes. A popular story that circulated around Western intelligence circles for years involved Viktor Belenko, a Soviet pilot who went rogue, flying his state-of-the-art MiG-25 fighter jet to a Japanese airport and defecting there in 1976. When American engineers examined his MiG-25, they found a plane that was indeed a technological marvel in many respects, able to fly faster and higher than any Western fighter. Yet its electronics used unreliable vacuum tubes rather than transistors, much less integrated circuits — a crippling disadvantage on the field of battle. The contrast with the West, which had left the era of the vacuum tube behind almost two decades before, was so extreme that there was some discussion of whether Belenko might be a double agent, his whole defection a Soviet plot to convince the West that they were absurdly far behind in terms of electronics technology. Sadly for the Soviets, the vacuum tubes weren’t the result of any elaborate KGB plot, but rather just a backward electronics industry.

In 1979, the Carter Administration began to take a harder line against the Soviet Union, pushing through Congress as part of the Export Administration Act a long list of restrictions on what sorts of even apparently non-military computer technology could legally be sold to the Eastern Bloc. Ronald Reagan then enforced and extended these restrictions upon becoming president in 1981, working with the rest of the West in what was known as the Coordinating Committee for Multilateral Export Controls, or COCOM — a body that included all of the NATO member nations, plus Japan and Australia — to present a unified front. By this point, with the Cold War heading into its last series of dangerous crises thanks to Reagan’s bellicosity and the Soviet invasion of Afghanistan, the United States in particular was developing a real paranoia about the Soviet Union’s long-standing habits of industrial espionage. The paranoia was reflected in CIA director William Casey’s testimony to Congress in 1982:

The KGB has developed a large, independent, specialized organization which does nothing but work on getting access to Western science and technology. They have been recruiting about 100 young scientists and engineers a year for the last 15 years. They roam the world looking for technology to pick up. Back in Moscow, there are 400 to 500 assessing what they might need and where they might get it — doing their targeting and then assessing what they get. It’s a very sophisticated and far-flung organization.

By the mid-1980s, restrictions on Western computer exports to the East were quite draconian, a sometimes bewildering maze of regulations to be navigated: 8-bit microcomputers could be exported but 16-bit microcomputers couldn’t be; a single-user accounting package could be exported but not a multi-user version; a monochrome monitor could be exported but not a color monitor.

Even as the barriers between East and West were being piled higher than ever, Western fascination with the Mirror World remained as strong as ever. In August of 1983, an American eye surgeon named Leo D. Bores, organizer of the first joint American/Soviet seminar in medicine in Moscow and a computer hobbyist in his spare time, had an opportunity to spend a week with what was billed as the first ever general-purpose Soviet microcomputer. It was called the “Agat” — just a pretty name, being Russian for the mineral agate — and it was largely a copy — in Bores’s words a bad copy — of the Apple II. His report, appearing belatedly in the November 1984 issue of Byte magazine, proved unexpectedly popular among the magazine’s readership.

The Agat computer

The Agat was, first of all, much, much bigger and heavier than a real Apple II; Bores generously referred to it as “robust.” It was made in a factory more accustomed to making cars and trucks, and, indeed, it looked much as one might imagine a computer built in an automotive plant would look. The Soviets had provided software for displaying text in Cyrillic, albeit with some amount of flicker, using the Apple II’s bitmap-graphics modes. The keyboard also offered Cyrillic input, thus solving, after a fashion anyway, a big problem in adapting Western technology to Soviet needs. But that was about the extent to which the Agat impressed. “The debounce circuitry [on the keyboard] is shaky,” noted Bores, “and occasionally a stray character shows up, especially during rapid data entry. The elevation of the keyboard base (about 3.5 centimeters) and the slightly steeper-than-normal board angle would cause rapid fatigue as well as wrist pain after prolonged use.” Inside the case was a “nightmarish wiring maze.” Rather than being built into a single motherboard, the computer’s components were all mounted on separate breadboards cobbled together by all that cabling, the way Western engineers worked only in the very early prototyping stage of hardware development. The Soviet clone of the MOS 6502 chip found at the heart of the Agat was as clumsily put together as the rest of the machine, spanning across several breadboards; thus this “first Soviet microcomputer” arguably wasn’t really a microcomputer at all by the strict definition of the term. The kicker was the price: about $17,000. As that price would imply, the Agat wasn’t available to private citizens at all, being reserved for use in universities and other centers of higher learning.

With the Cold War still going strong, Byte‘s largely American readership was all too happy to jeer at this example of Soviet backwardness, which certainly did show a computer industry lagging years behind the West. That said, the situation wasn’t quite as bad as Bores’s experience would imply. It’s very likely that the machine he used was a pre-production model of the Agat, and that many of the problems he encountered were ironed out in the final incarnation.

For all the engineering challenges, the most important factor impeding truly personal computing in the Soviet Union was more ideological than technical. As so many of the visionaries who had built the first PCs in the West had so well recognized, these were tools of personal empowerment, of personal freedom, the most exciting manifestation yet of Norbert Wiener’s original vision of cybernetics as a tool for the betterment of the human individual. For an Eastern Bloc still tossing and turning restlessly under the blanket of collectivism, this was anathema. Poland’s propaganda ministry made it clear that they at least feared the existence of microcomputers far more than they did their absence: “The tendency in the mass-proliferation of computers is creating a variety of ideological endangerments. Some programmers, under the inspiration of Western centers of ideological subversion, are creating programs that help to form anti-communistic political consciousness.” In countries like Poland and the Soviet Union, information freely exchanged could be a more potent weapon than any bomb or gun. For this reason, photocopiers had been guarded with the same care as military hardware for decades, and even owning a typewriter required a special permit in many Warsaw Pact countries. These restrictions had led to the long tradition of underground defiance known simply as “samizdat,” or self-publishing: the passing of “subversive” ideas from hand to hand as one-off typewritten or hand-written texts. Imagine what a home computer with a word processor and a printer could mean for samizdat. The government of Romania was so terrified by the potential of the computer for spreading freedom that it banned the very word for a time. Harry R. Meyer, an American Soviet watcher with links to the Russian expatriate community, made these observations as to the source of such terror:

I can imagine very few things more destructive of government control of information flow than having a million stations equivalent to our Commodore 64 randomly distributed to private citizens, with perhaps a thousand in activist hands. Even a lowly Commodore 1541 disk drive can duplicate a 160-kilocharacter disk in four or five minutes. The liberating effect of not having to individually enter every character every time information is to be shared should dramatically increase the flow of information.

Information distributed in our society is mainly on paper rather than magnetic media for reasons of cost-effectiveness: the message gets to more people per dollar. The bottleneck of samizdat is not money, but time. If computers were available at any cost, it would be more effective to invest the hours now being spent in repetitive typing into earning cash to get a computer, no matter how long it took.

If I were circulating information the government didn’t like in the Soviet Bloc, I would have little interest in a modem — too easily monitored. But there is a brisk underground trade in audio cassettes of Western music. Can you imagine the headaches (literal and figurative) for security agents if text files were transported by overwriting binary onto one channel in the middle of a stereo cassette of heavy-metal music? One would hope it would be less risk to carry such a cassette than a disk, let alone a compromising manuscript.
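Meyer’s cassette scheme was less fanciful than it might sound: home computers of the era really did store programs on audio tape, most commonly via frequency-shift keying, with one tone standing for a 0 bit and another for a 1. A minimal sketch of the idea in Python (the sample rate, tone frequencies, and bit timing are my own illustrative choices, not any real cassette standard):

```python
import math

SAMPLE_RATE = 8000           # samples per second (illustrative choice)
BIT_SAMPLES = 80             # 10 ms per bit, i.e. 100 bits per second
FREQ_0, FREQ_1 = 1200, 2400  # one tone per bit value, as in real FSK

def encode(data: bytes) -> list[float]:
    """Turn bytes into an audio waveform, one tone burst per bit."""
    samples = []
    for byte in data:
        for bit in range(8):  # least-significant bit first
            freq = FREQ_1 if (byte >> bit) & 1 else FREQ_0
            for n in range(BIT_SAMPLES):
                samples.append(math.sin(2 * math.pi * freq * n / SAMPLE_RATE))
    return samples

def decode(samples: list[float]) -> bytes:
    """Recover the bits by counting zero crossings in each bit window:
    the 2400 Hz tone crosses zero twice as often as the 1200 Hz tone."""
    out, bits = bytearray(), []
    # A sine of frequency f crosses zero about 2*f times per second,
    # so the decision threshold sits midway between the two tones.
    threshold = 3 * FREQ_0 * BIT_SAMPLES / SAMPLE_RATE
    for i in range(0, len(samples), BIT_SAMPLES):
        window = samples[i:i + BIT_SAMPLES]
        crossings = sum(1 for a, b in zip(window, window[1:]) if a * b < 0)
        bits.append(1 if crossings > threshold else 0)
        if len(bits) == 8:
            out.append(sum(bit << k for k, bit in enumerate(bits)))
            bits = []
    return bytes(out)

message = "svoboda".encode("utf-8")   # "freedom," transliterated
assert decode(encode(message)) == message
```

Decoding by counting zero crossings is crude but robust, which is one reason schemes of this general kind survived contact with cheap cassette hardware, and why a text file hidden in one channel of a music tape would have been entirely practical.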

If we accept Meyer’s arguments, there’s an ironic follow-on argument to be made: that, in working so hard to keep the latest versions of these instruments of freedom out of the hands of the Soviet Union and its vassal states, the COCOM was actually hurting rather than helping the cause of freedom. As many a would-be autocrat has learned to his dismay in the years since, it’s all but impossible to control the free flow of information in a society with widespread access to personal-computing technology. The new dream of personal computing, of millions of empowered individuals making things and communicating, stood in marked contrast to the Soviet cyberneticists’ old dream of perfect, orderly, top-down control implemented via big mainframe computers. For the hard-line communists, the dream of personal computing sounded more like a nightmare. The Soviet Union faced a stark dilemma: embrace the onrushing computer age despite the loss of control it must imply, or accept that it must continue to fall further and further behind the West. A totalitarian state like the Soviet Union couldn’t survive alongside the free exchange of ideas, while a modern economy couldn’t survive without the free exchange of ideas.

Thankfully for everyone involved, a man now stepped onto the stage who was willing to confront the seemingly insoluble contradictions of Soviet society. On March 11, 1985, Mikhail Gorbachev was named General Secretary of the Communist Party of the Soviet Union, the eighth and, as it would transpire, the last man to hold that title. He almost immediately signaled a new official position toward computing, as he did toward so many other things. In one of his first major policy speeches just weeks after assuming power, Gorbachev announced a plan to put personal computers into every classroom in the Soviet Union.

Unlike the General Secretaries who had come before him, Gorbachev recognized that the problems of rampant corruption and poor economic performance which had dogged the Soviet Union throughout its existence were not obstacles external to the top-down collectivist state envisioned by Vladimir Lenin but its inevitable results. “Glasnost,” the introduction of unprecedented levels of personal freedom, and “Perestroika,” the gradual replacement of the planned economy with a more market-oriented version permitting a degree of private ownership, were his responses. These changes would snowball in a way that no one — certainly not Gorbachev himself — had quite anticipated, leading to the effective dissolution of the Warsaw Pact and the end of the Cold War before the 1980s were over. Unnerved by it all though he was, Gorbachev, to his everlasting credit, let it happen, rejecting the calls for a crackdown like those that had ended the Hungarian Revolution of 1956 and the Prague Spring of 1968 in such heartbreak and tragedy.

The Elektronika BK 0010

Very early in Gorbachev’s tenure, well before its full import had even started to become clear, it became at least theoretically possible for the first time for individuals in the Soviet Union to buy a private computer of their own for use in the home. Said opportunity came in the form of the Elektronika BK-0010. Costing about one-fifth as much as the Agat, the BK-0010 was a predictably slapdash product in some areas, such as its horrid membrane keyboard. In other ways, though, it impressed far more than anyone had a right to expect. The BK-0010, the very first Soviet microcomputer designed to be a home computer, was a 16-bit machine, placing it in this respect at least ahead of the typical Western Apple II, Commodore 64, or Sinclair Spectrum of the time. The microprocessor inside it was a largely original creation, borrowing the instruction set from the DEC PDP-11 line of minicomputers but borrowing its actual circuitry from no one. The Soviets’ struggles to stamp out the ever denser circuitry of the latest Western CPUs in their obsolete factories were ironically forcing them to be more innovative, to start designing chips of their own which their factories could manage to produce.

Supplies of the BK-0010 were always chronically short and the waiting lists long, but as early as 1985 a few lucky Soviet households could boast real, usable computers. Those who were less lucky might be able to build a bare-bones computer from schematics published in do-it-yourself technology magazines like Tekhnika Molodezhi, the Soviet equivalent to Popular Electronics. Just as had happened in the United States, Britain, and many other Western countries, a vibrant culture of hobbyist computing spread across the Soviet Union and the other Warsaw Pact nations. In time, as the technology advanced in rhythm with Perestroika, these hobbyists would become the founding spirits of a new Soviet computer industry — a capitalist computer industry. “These are people who have felt useless — useless — all their lives!” said American business pundit Esther Dyson after a junket to a changing Eastern Europe. “Do you know what it is like to feel useless all your life? Computers are turning many of these people into entrepreneurs. They are creating the entrepreneurs these countries need.” As one glance at the flourishing underground economy of the Soviet Union of any era had always been enough to prove, Russians had a natural instinct for capitalism. Now, they were getting the chance to exercise it.

In August of 1988, in a surreal sign of these changing times, a delegation including many senior members of the Soviet Academy of Sciences — the most influential theoretical voice in Soviet computing dating back to the early 1950s — arrived in New York City on a mission that would have been unimaginable just a couple of years before. To a packed room of technology journalists — the Mirror World remained as fascinating as ever — they demonstrated a variety of software which they hoped to sell to the West: an equation solver; a database responsive to natural-language input; a project manager; an economic-modelling package. Byte magazine called the presentation “clever, flashy, and unabashedly commercial,” with “lots of colored windows popping up everywhere” and lots of sound effects. The next few years would bring several ventures which served to prove to any doubters from that initial gathering that the Soviets were capable of programming world-class software if given half a chance. In 1991, for instance, Soviet researchers sold a system of handwriting recognition to Apple for use in the pioneering Apple Newton personal digital assistant. Reflecting the odd blend of greed and idealism that marked the era, a Russian programmer wrote to Byte magazine that “I do hope the world software market will be the only battlefield for American and Soviet programmers and that we’ll become friends during this new battle now that we’ve stopped wasting our intellects on the senseless weapons race.”

As it would transpire, though, the greatest Russian weapon in this new era of happy capitalism wasn’t a database, a project manager, or even a handwriting-recognition system. It was instead a game — a piece of software far simpler than any of those aforementioned things but with perhaps more inscrutable genius than all of them put together. Its unlikely story is next.

(Sources: the academic-journal articles “Soviet Computing and Technology Transfer: An Overview” by S.E. Goodman, “InterNyet: Why the Soviet Union Did Not Build a Nationwide Computer Network” by Slava Gerovitch, and “The Soviet Bloc’s Unified System of Computers” by N.C. Davis and S.E. Goodman; the January 1970 and May 1972 issues of Rand Corporation’s Soviet Cybernetics Review; The New York Times of August 28 1966, May 7 1973, and November 27 1979; Scientific American of October 1970; Bloomberg Businessweek of November 4 1991; Byte of August 1980, April 1984, November 1984, July 1985, November 1986, February 1987, October 1988, and April 1989; a video recording of the Computer History Museum’s commemoration of the IBM System/360 on April 7 2004. Finally, my huge thanks to Peter Sovietov, who grew up in the Soviet Union of the 1980s and the Russia of the 1990s and has been an invaluable help in sharing his memories and his knowledge and saving me from some embarrassing errors.)

  1. Some in the Soviet space program actually laid their failure to get to the Moon, perhaps a bit too conveniently, directly at the feet of the computer technology they were provided, noting that the lack of computers on the ground equal to those employed by NASA — which happened to be System/360s — had been a crippling disadvantage. Meanwhile the computers that went into space with the Soviets were bigger, heavier, and less capable than their American counterparts. 



Tales of the Mirror World, Part 1: Calculators and Cybernetics

Back in my younger days, when the thought of sleeping for nights on end in campground tents and hostel cots awakened a spirit of adventure instead of a premonition of an aching back, I used to save up my vacation time and undertake a big backpacker-style journey every summer. In 2002, this habit took me to Russia.

I must confess that I found St. Petersburg and Moscow a bit of a disappointment. They just struck me as generic big cities of the sort that I’d seen plenty of in my life. While I’m sure they have their unique qualities — even I couldn’t fail to notice a few such qualities1 — much of what I saw there didn’t look all that distinct from what one could expect to see in any of dozens of major European cities. What I was looking for was the Russia — or, better said, the Soviet Union — of my youth, that semi-mythical Mirror World of fascination and nightmare.

I could feel myself coming closer to my goal as soon as I quit Moscow to board the Trans-Siberian Railroad for the long, long journey to Vladivostok. As everyone who lived in Siberia was all too happy to tell me, I was now experiencing the real Russia. In the city of Ulan-Ude, closed to all outsiders until 1991, I found the existential goal I hadn’t consciously known I’d been seeking. From the central square of Ulan-Ude, surrounded on three sides by government offices still bearing faded hammers and sickles on their facades, glowered a massive bust of Vladimir Lenin. I’d later learn that at a weight of 42 tons the bust was the largest such ever built in the Soviet Union, and that it had been constructed in 1971 as one of the last gasps of the old tradition of Stalinist monumentalism. But the numbers didn’t matter on that scorching-hot summer day when I stood in that square, gazing up in awe. In all my earlier travels, I’d never seen a sight so alien to me. This was it, my personal Ground Zero of the Mirror World, where all the values in which I’d been indoctrinated as a kid growing up deep in the heart of Texas were flipped. Lenin was the greatest hero the world had ever known, the United States the nation of imperialist oppression… it was all so wrong, and because of that it was all so right. I’ve never felt so far from home as I did on that day — and this feeling, of course, was exactly the reason I’d come.

I’m a child of the 1980s, the last decade during which the Soviet Union was an extant power in the world. The fascination which I still felt so keenly in 2002 had been a marked feature of my childhood. Nothing, after all, gives rise to more fascination than telling people that something is forbidden to them, as the Kremlin did by closing off their country from the world. Certainly I wasn’t alone in jumping after any glimpse I could get behind the Iron Curtain.

Thus the bleakly alluring version of Moscow found in Martin Cruz Smith’s otherwise workmanlike crime novel Gorky Park turned it into a bestseller, and then a hit film a couple of years later. (I remember the film well because it was the first R-rated movie my parents ever allowed me to see; I remember being intrigued and a little confused by my first glimpse of bare breasts on film — as if the glimpse behind the Iron Curtain wasn’t attraction enough!) And when David Willis, an American journalist who had lived several years in Moscow, purported to tell his countrymen “how Russians really live” in a book called Klass, it too became a bestseller. Even such a strident American patriot as Tom Clancy could understand the temptation of the Mirror World. In Red Storm Rising, his novel of World War III, straitlaced intelligence officer Robert Toland gets a little too caught up in the classic films of Sergei Eisenstein.

The worst part of the drive home was the traffic to the Hampton Roads tunnel, after which things settled down to the usual superhighway ratrace. All the way home, Toland’s mind kept going over the scenes from Eisenstein’s movie. The one that kept coming back was the most horrible of all, a German knight wearing a crusader’s cross tearing a Pskov infant from his mother’s breast and throwing him — her? — into a fire. Who could see that and not be enraged? No wonder the rabble-rousing song “Arise, you Russian People” had been a genuinely popular favorite for years. Some scenes cried out for bloody revenge, the theme for which was Prokofiev’s fiery call to arms. Soon he found himself humming the song. A real intelligence officer you are … Toland smiled to himself, thinking just like the people you’re supposed to study … defend our fair native land … za nashu zyemlyu chestnuyu!

“Excuse me, sir?” the toll collector asked.

Toland shook his head. Had he been singing aloud? He handed over the seventy-five cents with a sheepish grin. What would this lady think, an American naval officer singing in Russian?

Those involved with computers were likewise drawn to the Mirror World. When Byte magazine ran a modest piece buried hundreds of pages deep in their November 1984 issue on a Soviet personal computer showing the clear “influence” of the Apple II, it became the second most popular article in the issue according to the magazine’s surveys. Unsurprisingly in light of that reception, similar tantalizing glimpses behind the Iron Curtain became a regular part of the magazine from that point forward. According to the best estimates of the experts, the Soviets remained a solid three years behind the United States in their top-end chip-fabrication capabilities, and much further behind than that in their ability to mass-produce dependable computers that could be sold for a reasonable price. If the rudimentary Soviet computers Byte described had come from anywhere else, in other words, no one would have glanced at them twice. Yet the fact that they came from the Mirror World gave them the attraction that clung to all glimpses into that fabled land. For jaded veterans grown bored with an American computer industry that was converging inexorably from the Wild West that had been its early days toward a few standard, well-defined — read, boring — platforms, Soviet computers were the ultimate exotica.

Before the end of the 1980s, an odd little game of falling blocks would ride this tidal wave of Soviet chic to become by some measures the most popular videogame of all time. An aura of inscrutable otherness clung to Tetris, which the game’s various publishers — its publication history is one of the most confusing in the history of videogames — were smart enough to tie in with the sense of otherness that surrounded the entirety of the Soviet Union, the game’s unlikely country of origin, in so many Western minds. Spectrum Holobyte, the most prominent publisher of the game on computers, wrote the name in Cyrillic script on the box front, subtitled it “the Soviet Challenge,” and commissioned background graphics showing iconic — at least to Western eyes — Soviet imagery, from Cosmonauts in space to the “Red Machine” hockey team on the ice. As usual, Nintendo cut more to the chase with their staggeringly successful Game Boy version: “From Russia with Fun!”

Tetris mania was at its peak as the 1990s began. The walls were coming down between West and East, both figuratively and literally, thanks to Mikhail Gorbachev’s impossibly brave choice to let his empire go — peacefully. Western eyes peered eagerly eastward, motivated now not only by innocent if burning curiosity but by the possibilities for tapping those heretofore untapped markets. Having reached this very point here in this blog’s overarching history of interactive entertainment and matters related, let’s hit pause long enough to join those first Western discoverers now in exploring the real story of computing in the Mirror World.

In the very early days of computing, before computer science was a recognized discipline in which you could get a university degree, the most important thinkers in the nascent field tended to be mathematicians. It was, for instance, the British mathematician Alan Turing who laid much of the groundwork for modern computer science in the 1930s, then went on to give many of his theories practical expression as part of the Allied code-breaking effort that did so much to win World War II. And it was the Mathematical Laboratory of Cambridge University that built the EDSAC in 1949, the first truly practical stored-program computer in the sense that we understand that term today.

The strong interconnection between mathematics and early work with computers should have left the Soviet Union as well-equipped for the dawning age as any nation. Russia had a long, proud tradition of mathematical innovation, dating back through centuries of Czarist rule. The list of major Russian mathematicians included figures like Nikolai Lobachevsky, the pioneer of non-Euclidean geometry, and Sofia Kovalevskaya, who developed equations for the rotation of a solid body around a fixed point. Even Joseph Stalin’s brutal purges of the 1930s, which strove to expunge anyone with the intellectual capacity to articulate a challenge to his rule, failed to kill the Russian mathematical tradition. On the contrary, Leonid Kantorovich in 1939 discovered the technique of linear programming ten years before American mathematicians would do the same, while Andrey Kolmogorov did much fundamental work in probability theory and neural-network modeling over a long career that spanned from the 1920s through the 1980s. Indeed, in the decades following Stalin’s death, Soviet mathematicians in general would continue to solve fundamental problems of theory. And Soviet chess players — the linkage between mathematics and chess is almost as pronounced in history as that between mathematics and computers — would remain the best in the world, at least if the results of international competitions were any guide.

But, ironically in light of all this, it would be an electrical engineer named Sergei Alexeevich Lebedev rather than a mathematician who would pioneer Soviet computing. Lebedev was 46 years old in 1948 when he was transferred from his cushy position at the Lenin State Electrical Institute in Moscow to the relative backwater of Kiev, where he was to take over as head of the Ukraine Academy’s Electrotechnical Institute. There, free from the scrutiny of Moscow bureaucrats who neither understood nor wanted to understand the importance of the latest news of computing coming out of Britain and the United States, Lebedev put together a small team to build a Small Electronic Calculating Machine; in Russian its acronym was MESM. Unlike the team of scientists and engineers who detonated the Soviet Union’s first atomic bomb in 1949, Lebedev developed the MESM without the assistance of espionage; he had access to the published papers of figures like Alan Turing and the émigré Hungarian mathematician John von Neumann, but no access to schematics or inside information about the machines on which they were working.

Lebedev had to build the MESM on a shoestring. Just acquiring the vacuum tubes and magnetic drums he needed in a backwater city of a war-devastated country was a major feat in itself, one that called for the skills of a junk trader as much as it did those of an electrical engineer. Seymour Goodman, one of the more notable historians of Soviet computing, states that “perhaps the most incredible aspect of the MESM was that it was successfully built at all. No electronic computer was ever built under more difficult conditions.” When it powered up for the first time in 1951, the MESM was not only the first stored-program computer in the Soviet Union but the first anywhere in continental Europe, trailing Britain by just two years and the United States by just one — a remarkable achievement by any standard.

Having already shown quite a diverse skill set in getting the MESM made at all, Lebedev proved still more flexible after it was up and running. He became the best advocate for computing inside the Soviet Union, a sort of titan of industry in a country that officially had no room for such figures. Goodman credits him with playing the role that a CEO would have played in the West. He even managed to get a script written for a documentary film to “advertise” his computer’s capabilities throughout the Soviet bureaucracy. In the end, the film never got made, but then it really wasn’t needed. The Soviet space and nuclear-weapons programs, not to mention the conventional military, all had huge need of the fast calculations the MESM could provide. At the time, the nuclear-weapons program was using what they referred to as calculator “brigades,” consisting of 100 or more mostly young girls, who worked eight-hour shifts with mechanical devices to crank out solutions to hugely complicated equations. Already by 1950, an internal report had revealed that the chief obstacle facing Soviet nuclear scientists wasn’t the theoretical physics involved but rather an inability to do the math necessary to bring theory to life fast enough.

Within months of his machine going online, Lebedev was called back to Moscow to become the leader of the Institute for Precision Mechanics and Computing Technology — or ITMVT in the Russian acronym — of the Soviet Academy of Sciences. There Lebedev proceeded to develop a series of machines known as the BESM line, which, unlike the one-off MESM, were suitable for — relatively speaking — production in quantity.

But Lebedev soon had rivals. Contrary to the image the Kremlin liked to project of a unified front — of comrades in communism all moving harmoniously toward the same set of goals — the planned economy of the Soviet Union was riddled with as much in-fighting as any other large bureaucracy. “Despite its totalitarian character,” notes historian Nikolai Krementsov, “the Soviet state had a very complex internal structure, and the numerous agents and agencies involved in the state science-policy apparatus pursued their own, often conflicting policies.” Thus very shortly after the MESM became operational, the second computer to be built in the Soviet Union (and continental Europe as well), a machine called the M-1 which had been designed by one Isaak Semyenovich Bruk, went online. If Lebedev’s achievement in building the MESM was remarkable, Bruk’s achievement in building the M-1, again without access to foreign espionage — or for that matter the jealously guarded secrets of Lebedev’s rival team — was equally so. But Bruk lacked Lebedev’s political skills, and thus his machine proved a singular achievement rather than the basis for a line of computers.

A much more dangerous rival was a computer called Strela, or “Arrow,” the brainchild of one Yuri Yakovlevich Bazilevskii in the Special Design Bureau 245 — abbreviated SKB-245 in Russian — of the Ministry of Machine and Instrument Construction in Moscow. The BESM and Strela projects, funded by vying factions within the Politburo, spent several years in competition with one another, each project straining to monopolize scarce components, both for its own use and, just as importantly, to keep them out of the hands of its rival. It was a high-stakes war that was fought in deadly earnest, and its fallout could be huge. When, for instance, the Strela people managed to buy up the country’s entire supply of cathode-ray tubes for use as memory, the BESM people were forced to use less efficient and reliable mercury delay lines instead. As anecdotes like this attest, Bazilevskii was every bit Lebedev’s equal at the cutthroat game of bureaucratic politicking, even managing to secure from his backers the coveted title of Hero of Socialist Labor a couple of years before Lebedev.

The Strela computer. Although it’s hard to see it here, it was described by its visitors as a “beautiful machine in a beautiful hall,” with hundreds of lights blinking away in impressive fashion. Many bureaucrats likely chose to support the Strela simply because it looked so much like the ideal of high technology in the popular imagination of the 1950s.

During its first official trial in the spring of 1954, the Strela solved in ten hours a series of equations that would have taken a single human calculator about 100,000 days. And the Strela was designed to be a truly mass-produced computer, to be cranked out in the thousands in identical form from factories. But, as so often happened in the Soviet Union, the reality behind the statistics which Pravda trumpeted so uncritically was somewhat less flattering. The Strela “worked very badly” according to one internal report; according to another it “very often failed and did not work properly.” Pushed by scientists and engineers who needed a reliable computer in order to get things done, the government decided in the end to go ahead with the BESM instead of the Strela. Ironically, only seven examples of the first Soviet computer designed for true mass-production were ever actually produced. Sergei Lebedev was now unchallenged as the preeminent voice in Soviet computing, a distinction he would enjoy until his death in 1974.

The first BESM computer. It didn’t look as nice as the Strela, but it would prove far more capable and reliable.

Like so much other Soviet technology, Soviet computers were developed in secrecy, far from the prying eyes of the West. In December of 1955, a handful of American executives and a few journalists on a junket to the Soviet Union became the first to see a Soviet computer in person. A report of the visit appeared in the New York Times of December 11, 1955. It helpfully describes an early BESM computer as an “electronic brain” — the word “computer” was still very new in the popular lexicon — and pronounces it equal to the best American models of same. In truth, the American delegation had fallen for a bit of a dog-and-pony show. Soviet computers were already lagging well behind the American models that were now being churned out in quantities Lebedev could only dream of by companies like IBM.

Sergei Lebedev’s ITMVT. (Sorry for the atrocious quality of these images. Clear pictures of the Mirror World of the 1950s are hard to come by.)

In May of 1959, during one of West and East’s periodic interludes of rapprochement, a delegation of seven American computer experts from business and government was invited to spend two weeks visiting most of the important hubs of computing research in the Soviet Union. They were met at the airport in Moscow by Lebedev himself; the Soviets were every bit as curious about the work of their American guests as said Americans were about theirs. The two most important research centers of all, the American delegation learned, were Lebedev’s ITMVT and the newer Moscow Computing Center of the Soviet Academy of Sciences, which was coming to play a role in software similar to that which the ITMVT played in hardware. The report prepared by the delegation is fascinating for the generalized glimpses it provides into the Soviet Mirror World of the 1950s as much as it is for the technical details it includes. Here, for instance, is its description of the ITMVT’s physical home:

The building itself is reminiscent more of an academic building than an industrial building. It is equipped with the usual offices and laboratory facilities as well as a large lecture hall. Within an office the decor tends to be ornate; the entrance door is frequently padded on both sides with what appeared to be leather, and heavy drapery is usually hung across the doorway and at the windows. The ceiling height was somewhat higher than that of contemporary American construction, but we felt in general that working conditions in the offices and in the laboratories were good. There appeared to be an adequate amount of room and the workers were comfortably supplied with material and equipment. The building was constructed in 1951. Many things testified to the steady and heavy usage it has received. In Russian tradition, the floor is parqueted and of unfinished oak. As in nearly every building, there are two sets of windows for weather protection.

The Moscow Computing Center

And here’s how a Soviet programmer had to work:

Programmers from the outside who come to the [Moscow] Computing Center with a problem apply to the scientific secretary of the Computing Center. He assigns someone from the Computing Center to provide any assistance needed by the outside programmer. In general an operator is provided for each machine, and only programmers with specific permission can operate the machine personally. Normally a programmer can expect only one code check pass per day at a machine; with a very high priority he might get two or three passes.

A programmer is required to submit his manuscript in ink. Examples of manuscripts which we saw indicated that often a manuscript is written in pencil until it is thought to be correct, and then redone in ink. The manuscript is then key-punched twice, and the two decks compared, before being sent to the machine. The output cards are handled on an off-line printer.

Other sections describe the Soviet higher-education system (“Every student is required to take 11 terms of ideological subjects such as Marxism-Leninism, dialectical materialism, history of the Communist Party, political economy, and economics.”); the roles of the various Academies of Sciences (“The All Union Academy of Sciences of the USSR and the 15 Republican Academies of Sciences play a dominant role in the scientific life of the Soviet Union.”); the economics of daily life (“In evaluating typical Russian salaries it must be remembered that the highest income tax in the Soviet Union is 13 percent and that all other taxes are indirect.”); the resources being poured into the new scientific and industrial center of Novosibirsk (“It is a general belief in Russia that the future of the Soviet Union is closely allied with the development of the immense and largely unexplored natural resources of Siberia.”).

But of course there are also plenty of pages devoted to technical discussion. What’s most surprising about these is the lack of the hysteria that had become so typical of Western reports of Soviet technology in the wake of the Sputnik satellite of 1957 and the beginning of the Space Race which it heralded. It was left to a journalist from the New York Times to ask the delegation upon their return the money question: who was really ahead in the field of computers? Willis Ware, a member of the delegation from the Rand Corporation and the primary architect of the final report, replied that the Soviet Union had “a wealth of theoretical knowledge in the field,” but “we didn’t see any hardware that we don’t have here.” Americans had little cause to worry; whatever their capabilities in the fields of aerospace engineering and nuclear-weapons delivery, it was more than clear that the Soviets weren’t likely to rival even IBM alone, much less the American computer industry as a whole, anytime soon. With that worry dispensed with, the American delegation had felt free just to talk shop with their Soviet counterparts in what would prove the greatest meeting of Eastern and Western computing minds prior to the Gorbachev era. The Soviets responded in kind; the visit proved remarkably open and friendly.

One interesting fact gleaned by the Americans during their visit was that, in addition to all the differences born of geography and economy, the research into computers conducted in the East and the West had also heretofore had markedly different theoretical scopes. For all that so much early Western research had been funded by the military for such plebeian tasks as code-breaking and the calculation of artillery trajectories, and for all that so much of that research had been conducted by mathematicians, the potential of computers to change the world had always been understood by the West’s foremost visionaries as encompassing far more than a faster way to do complex calculations. Alan Turing, for example, had first proposed his famous Turing Test of artificial intelligence all the way back in 1950.

But in the Soviet Union, where the utilitarian philosophy of dialectical materialism was the order of the day, such humanistic lines of research were, to say the least, not encouraged. Those involved with Soviet computing had to be, as they themselves would later put it, “cautious” about the work they did and the way they described that work to their superiors. The official view of computers in the Soviet Union during the early and mid-1950s hewed to the most literal definition of the word: they were electronic replacements for those brigades of human calculators cranking out solutions to equations all day long. Computers were, in other words, merely a labor-saving device, not a revolution in the offing; being a state founded on the all-encompassing ideology of communist revolution, the Soviet Union had no use for other, ancillary revolutions. Even when Soviet researchers were allowed to stray outside the realm of pure mathematics, their work was always expected to deliver concrete results that served very practical goals in fairly short order. For example, considerable effort was put into a program for automatically translating texts between languages, thereby to better bind together the diverse peoples of the sprawling Soviet empire and its various vassal states. (Although the translation program was given a prominent place in that first 1955 New York Times report about the Soviets’ “electronic brain,” one has to suspect that, given how difficult a task automated translation is even with modern computers, it never amounted to much more than a showpiece for use under carefully controlled conditions.)

And yet even by the time the American delegation arrived in 1959 all of that was beginning to change, thanks to one of the odder ideological alliances in the history of the twentieth century. In a new spirit of relative openness that was being fostered by Khrushchev, the Soviet intelligentsia was becoming more and more enamored with the ideas of an American named Norbert Wiener, yet another of those wide-ranging mathematicians who were doing so much to shape the future. In 1948, Wiener had described a discipline he called “cybernetics” in a book of the same name. The book bore the less-than-enticing subtitle Control and Communication in the Animal and the Machine, making it sound rather like an engineering text. But if it was engineering Wiener was practicing, it was social engineering, as became more clear in 1950, when he repackaged his ideas into a more accessible book with the title The Human Use of Human Beings.

Coming some 35 years before William Gibson and his coining of the term “cyberspace,” Norbert Wiener marks the true origin point of our modern mania for all things “cyber.” That said, his ideas haven’t been in fashion for many years, a fact which might lead us to dismiss them from our post-millennial perch as just another musty artifact of the twentieth century and move on. In actuality, though, Wiener is well worth revisiting, and with an eye to more than dubious linguistic trends. Cybernetics as a philosophy may be out of fashion, but cybernetics as a reality is with us a little more every day. And, most pertinently for our purposes today, we need to understand a bit of what Wiener was on about if we hope to understand what drove much of Soviet computing for much of its existence.

“Cybernetics” is one of those terms which can seem to have as many definitions as definers. It’s perhaps best described as the use of machines not just to perform labor but to direct labor. Wiener makes much of the increasing numbers of machines even in his time which incorporated a feedback loop — machines, in other words, that were capable of accepting input from the world around them and responding to that input in an autonomous way. An example of such a feedback loop can be something as simple as an automatic door which opens when it senses people ready to step through it, or as complex as the central computer in charge of all of the functions of an automated factory.
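Wiener's feedback loop — sense the world, compare against a goal, act, repeat — can be sketched in a few lines of modern code. The thermostat below is purely illustrative (the names, temperatures, and heating rates are my own assumptions, not anything from Wiener), but it captures the shape of the mechanism he was describing:

```python
def thermostat_step(current_temp, target_temp, heater_on):
    """One pass through the loop: sense, compare against the goal, act.

    A small dead band (0.5 degrees either side of the target) keeps the
    heater from flickering on and off at every reading.
    """
    if current_temp < target_temp - 0.5:
        return True   # too cold: turn the heater on
    if current_temp > target_temp + 0.5:
        return False  # too warm: turn the heater off
    return heater_on  # within the band: leave it as it is

def simulate(start_temp, target_temp, steps):
    """Run the feedback loop; the room warms when heating, cools otherwise."""
    temp, heater = start_temp, False
    for _ in range(steps):
        heater = thermostat_step(temp, target_temp, heater)
        temp += 0.8 if heater else -0.3  # heating gain vs. ambient loss
    return round(temp, 1)
```

Run long enough, the simulated temperature oscillates in a narrow band around the target rather than settling exactly on it — the same hunting behavior real feedback systems exhibit, and the kind of machine autonomy Wiener had in mind.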

At first blush, the idea of giving computers autonomous control over the levers of power inevitably conjures up all sorts of dystopian visions. Yet Wiener himself was anything but a fan of totalitarian or collectivist governments. Invoking in The Human Use of Human Beings the popular metaphor of the collectivist society as an ant colony, he goes on to explore the many ways in which humans and ants are in fact — ideally, at any rate — dissimilar, thus seemingly exploding the “from each according to his ability, to each according to his need” founding principle of communism.

In the ant community, each worker performs its proper functions. There may be a separate caste of soldiers. Certain highly specialized individuals perform the functions of king and queen. If man were to adopt this community as a pattern, he would live in a fascist state, in which ideally each individual is conditioned from birth for his proper occupation: in which rulers are perpetually rulers, soldiers perpetually soldiers, the peasant is never more than a peasant, and the worker is doomed to be a worker.

This aspiration of the fascist for a human state based on the model of the ant results from a profound misapprehension both of the nature of the ant and of the nature of man. I wish to point out that the very physical development of the insect conditions it to be an essentially stupid and unlearning individual, cast in a mold which cannot be modified to any great extent. I also wish to show how these physiological conditions make it into a cheap mass-produced article, of no more individual value than a paper pie plate to be thrown away after it is used. On the other hand, I wish to show that the human individual, capable of vast learning and study, which may occupy almost half his life, is physically equipped, as the ant is not, for this capacity. Variety and possibility are inherent in the human sensorium — and are indeed key to man’s most noble flights — because variety and possibility belong to the very structure of the human organism.

While it is possible to throw away this enormous advantage that we have over the ants, and to organize the fascist ant-state with human material, I certainly believe that this is a degradation of man’s very nature, and economically a waste of the great human values which man possesses.

I am afraid that I am convinced that a community of human beings is a far more useful thing than a community of ants, and that if the human being is condemned and restricted to perform the same functions over and over again, he will not even be a good ant, not to mention a good human being. Those who would organize us according to personal individual functions and permanent individual restrictions condemn the human race to move at much less than half-steam. They throw away nearly all our human possibilities and, by limiting the modes in which we may adapt ourselves to future contingencies, they reduce our chances for a reasonably long existence on this earth.

Wiener’s vision departs markedly from the notion, popular already in science fiction by the time he wrote those words, of computers as evil overlords. In Wiener’s cybernetics, computers will not enslave people but give them freedom; the computers’ “slaves” will themselves be machines. Together computers and the machines they control will take care of all the boring stuff, as it were, allowing people to devote themselves to higher purposes. Wiener welcomes the “automatic age” he sees on the horizon, even as he is far from unaware of the disruptions the period of transition will bring.

What can we expect of its economic and social consequences? In the first place, we can expect an abrupt and final cessation of the demand for the type of factory labor performing purely repetitive tasks. In the long run, the deadly uninteresting nature of the repetitive task may make this a good thing and the source of leisure necessary for man’s full cultural development.

Be that as it may, the intermediate period of the introduction of the new means will lead to an immediate transitional period of disastrous confusion.

In terms of cybernetics, we’re still in this transitional period today, with huge numbers of workers accustomed to “purely repetitive tasks” cast adrift in this dawning automatic age; this explains much about recent political developments over much of the world. But of course our main interest right now isn’t contemporary politics, but rather how a fellow who so explicitly condemned the collectivist state came to be regarded as something of a minor prophet by the Soviet bureaucracy.

Wiener’s eventual acceptance in the Soviet Union is made all the more surprising by the Communist Party’s first reaction to cybernetics. In 1954, a year after Stalin’s death, the Party’s official Brief Philosophical Dictionary still called cybernetics “a reactionary pseudo-science originating in the USA after World War II and spreading widely in other capitalistic countries as well.” It was “in essence aimed against materialistic dialectics” and “against the scientific Marxist understanding of the laws of societal life.” Seemingly plucking words at random from a grab bag of adjectives, the dictionary concluded that “this mechanistic, metaphysical pseudo-science coexists very well with idealism in philosophy, psychology, and sociology” — the word “idealism” being a kiss of death under Soviet dogma.

In 1960, six years after the Soviets condemned cybernetics as an “attempt to transform toilers into mere appendices of the machine, into a tool of production and war,” Norbert Wiener lectured the Leningrad Mathematical Society. A colleague who visited the Soviet Union at the same time said that Wiener was “wined and dined everywhere, even in the privacy of the homes of the Russian scientists.” He died four years later, just as the influence of cybernetics was reaching a peak in the Soviet Union.

Still, when stripped of its more idealistic, humanistic attributes, there was much about cybernetics which held immense natural appeal for Soviet bureaucrats. Throughout its existence, the Soviet Union’s economy had been guided, albeit imperfectly at best, by an endless number of “five-year plans” that attempted to control its every detail. Given this obsession with economic command and control and the dispiriting results it had so far produced, the prospect of information-management systems — namely, computers — capable of aiding decision-making, or perhaps even in time of making the decisions, was a difficult enticement to resist; never mind how deeply antithetical the idea of computerized overlords making the decisions for human laborers was to Norbert Wiener’s original conception of cybernetics. Thus cybernetics went from being a banned bourgeois philosophy during the final years of Stalin’s reign to being a favorite buzzword during the middle years of Khrushchev’s. In December of 1957, the Soviet Academy of Sciences declared their new official position to be that “the use of computers for statistics and planning must have an absolutely exceptional significance in terms of its efficiency. In most cases, such use would make it possible to increase the speed of decision-making by hundreds of times and avoid errors that are currently produced by the unwieldy bureaucratic apparatus involved in these activities.”

In October of 1961, the new Cybernetics Council of the same body published an official guide called Cybernetics in the Service of Communism — essentially Norbert Wiener with the idealism and humanism filed off. Khrushchev may have introduced a modicum of cultural freedom to the Soviet Union, but at heart he was still a staunch collectivist, as he made clear:

In our time, what is needed is clarity, ideal coordination, and organization of all links in the social system both in material production and in spiritual life.

Maybe you think there will be absolute freedom under communism? Those who think so don’t understand what communism is. Communism is an orderly, organized society. In that society, production will be organized on the basis of automation, cybernetics, and assembly lines. If a single screw is not working properly, the entire mechanism will grind to a halt.

Soviet ambitions for cybernetics were huge, and in different circumstances might have led to a Soviet ARPANET going online years before the American version. It was envisioned that each factory and other center of production in the country would be controlled by its own computer, and that each of these computers would in turn be linked together into “complexes” reporting to other computers, all of which would send their data yet further up the chain, culminating in a single “unified automated management system” directing the entire economy. The system would encompass tens of thousands of computers, spanning the width and breadth of the largest country in the world, “from the Pacific to the Carpathian foothills,” as academician Sergei Sobolev put it. Some more wide-eyed prognosticators said that in time the computerized cybernetic society might allow the government to eliminate money from the economy entirely, long a cherished dream of communism. “The creation of an automated management system,” wrote proponent Anatolii Kitov, “would mean a revolutionary leap in the development of our country and would ensure a complete victory of socialism over capitalism.” With the Soviet Union’s industrial output declining every year between 1959 and 1964 while the equivalent Western figures skyrocketed, socialism needed all the help it could get.

In May of 1962, in an experiment trumpeted as the first concrete step toward socialism’s glorious cybernetic future, a computer located in Kiev poured steel in a factory located hundreds of kilometers away in Dniprodzerzhynsk (known today as Kamianske). A newspaper reporter was inspired to wax poetic:

In ancient Greece the man who steered ships was called Kybernetes. This steersman, whose name is given to one of the boldest sciences of the present — cybernetics — lives on in our own time. He steers the spaceships and governs the atomic installations, he takes part in working out the most complicated projects, he helps to heal humans and to decipher the writings of ancient peoples. As of today he has become an experienced metallurgist.

Some Soviet cybernetic thinking is even more astonishing than their plans for binding the country in a web of telecommunications long before “telecommunications” was a word in popular use. Driverless cars and locomotives were seriously discussed, and experiments with the latter were conducted in the Moscow subway system. (“Experiments on the ‘auto-pilot’ are being concluded. This device, provided with a program for guiding a train, automatically decreases and increases speed at corresponding points along its route, continually selecting the most advantageous speed, and stops the train at the required points.”) Serious attention was given to a question that still preoccupies futurists today: that of the role of human beings in a future of widespread artificially intelligent computers. The mathematician Kolmogorov wrote frankly that such computers could and inevitably would “surpass man in his development” in the course of time, and even described a tipping point that we still regard as seminal today: the point when artificial intelligence begins to “breed,” to create its own progeny without the aid of humans. At least some within the Soviet bureaucracy seemed to welcome humanity’s new masters; proposals were batted around to someday replace human teachers and doctors with computers. Sergei Sobolev wrote that “in my view the cybernetic machines are people of the future. These people will probably be much more accomplished than we, the present people.” Soviet thinking had come a long way indeed from the old conception of computers as nothing more than giant calculators.

But the Soviet Union was stuck in a Catch-22 situation: the cybernetic command-and-control network its economy supposedly needed in order to spring to life was made impossible to build by said economy’s current moribund state. Some skeptical planners drew pointed comparisons to the history of another sprawling land: Egypt. While the Pharaohs of ancient Egypt had managed to build the Pyramids, the cybernetics skeptics noted, legend held that they’d neglected everything else so much in the process that a once-fertile land had become a desert. Did it really make sense to be thinking already about building a computer network to span the nation when 40 percent of villages didn’t yet boast a single telephone within their borders? By the same token, perhaps the government should strive for the more tangible goal of placing a human doctor within reach of every citizen before thinking about replacing all the extant human doctors with some sort of robot.

A computer factory in Kiev, circa 1970. Note that all of the assembly work is still apparently done by hand.

The skeptics probably needn’t have worried overmuch about their colleagues’ grandiose dreams. With its computer industry in the shape it was, it was doubtful whether the Soviet Union had any hope of building its cybernetic Pyramids even with all the government will in the world.

In November of 1964, another American delegation was allowed a glimpse into the state of Soviet computing, although the Cuban Missile Crisis and other recent conflicts meant that their visit was much shorter and more restricted than the one of five and a half years earlier. Regardless, the Americans weren’t terribly impressed by the factory they were shown. It was producing computers at the rate of about seven or eight per month, and the visitors estimated its products to be roughly on par with an IBM 704 — a model that IBM had retired four years before. It was going to be damnably hard to realize the Soviet cybernetic dream with this trickle of obsolete machines; estimates were that about 1000 computers were currently operational in the Soviet Union, as compared to 30,000 in the United States. The Soviets were still struggling to complete the changeover from first-generation computer hardware, characterized by its reliance on vacuum tubes, to the transistor-based second generation. The Americans had accomplished this changeover years before; indeed, they were well on their way to an integrated-circuit-based third generation. Examining a Soviet transistor, the delegation judged it roughly equivalent to an American transistor from 1957.

But when the same group visited the academics, they were much more impressed, noting that the Soviets “were doing quite a lot of very good and forward-thinking work.” Thus was encapsulated what would remain the curse of Soviet computer science: plenty of ideas, plenty of abstract know-how, and a dearth of actual hardware to try it all out on. The reports of the Soviet researchers ooze frustration with their lot in life. Their computers break down “each and every day,” reads one, “and information on a tape lasts without any losses no longer than one month.”

Their American visitors were left to wonder just why it was that the Soviet Union struggled so mightily to build a decent computing infrastructure. Clearly the Soviets weren’t complete technological dunces; this was after all the country that had detonated an atomic bomb years before anyone had dreamed it could, that had shocked the world by putting the first satellite and then the first man into space, that was even now giving the United States a run for its money to put a man on the moon.

The best way to address the Americans’ confusion might be to note that exploding atomic bombs and launching things into space encompassed a series of individual efforts responsive to brilliant individual minds, while the mass-production of the standardized computers that would be required to realize the cybernetics dream required a sort of infrastructure-building at which the Soviet system was notoriously poor. The world’s foremost proponent of collectivism was, ironically, not all that good at even the most fundamental long-term collectivist projects. The unstable Soviet power grid was only one example; the builders of many Soviet computer installations had to begin by building their own power plant right outside the computer lab just to get a dependable electrical supply.

The Soviet Union was a weird mixture of backwardness and forwardness in terms of technology, and the endless five-year plans only exacerbated its issues by emphasizing arbitrary quotas rather than results that mattered in the real world. Stories abounded of factories that produced lamp shades in only one color because that was the easiest way to make their quota, or that churned out uselessly long, fat nails because the quota was given in kilograms rather than in numbers of individual pieces. The Soviet computer industry was exposed to all these underlying economic issues. It was hard to make computers to rival those of the West when the most basic electrical components that went into them had failure rates dozens of times higher than their Western equivalents. Whether a planned economy run by computers could have fixed these problems is doubtful in the extreme, but at any rate the Soviet cyberneticists would never get a chance to try. It was the old chicken-or-the-egg conundrum. They thought they needed lots of good computers to build a better economy — but they knew they needed a better economy to build lots of good computers.

As the 1960s became the 1970s, these pressures would lead to a new approach to computer production in the Soviet Union. If they couldn’t beat the West’s computers with their homegrown designs, the Soviets decided, then they would just have to clone them.

(Sources: the academic-journal articles “Soviet Computing and Technology Transfer: An Overview” by S.E. Goodman, “MESM and the Beginning of the Computer Era in the Soviet Union” by Anne Fitzpatrick, Tatiana Kazakova, and Simon Berkovich, “S.A. Lebedev and the Birth of Soviet Computing” by G.D. Crowe and S.E. Goodman, “The Origin of Digital Computing in Europe” by S.E. Goodman, “Strela-1, The First Soviet Computer: Political Success and Technological Failure” by Hiroshi Ichikawa, and “InterNyet: Why the Soviet Union Did Not Build a Nationwide Computer Network” by Slava Gerovitch; studies from the Rand Corporation entitled “Soviet Cybernetics Technology I: Soviet Cybernetics, 1959-1962” and “Soviet Computer Technology — 1959”; the January 1970 issue of Rand Corporation’s Soviet Cybernetics Review; the books Stalinist Science by Nikolai Krementsov, The Human Use of Human Beings by Norbert Wiener, Red Storm Rising by Tom Clancy, and From Newspeak to Cyberspeak: A History of Soviet Cybernetics by Slava Gerovitch; The New York Times of December 11 1955, December 2 1959, and August 28 1966; Scientific American of October 1970; Byte of November 1984, February 1985, and October 1987.)

  1. Driving around Moscow with several other backpackers and a guide we’d scraped together enough money to hire, I noticed some otherwise unmarked cars sporting flashing lights on the roof, of the sort that the driver can reach up through the window to set in place as she drives. There were a lot of these cars, all rushing about purposefully with lights undulating like mad. Was there really that much crime in the city? Or was some sort of emergency in progress? Weird if so, as the cars with the flashing lights by no means all seemed to be headed in the same direction. Curious about all these things, I finally asked our guide. “Oh, most of those cars aren’t actually police,” she said. “If you pay the right person, you can get a light like that for your personal car. Then you don’t have to stop at the traffic lights.” I have no idea if she was telling the truth or putting one over on the naive foreigner, but I do know that this story has always struck me as the quintessential tale of Russia, home of the greatest capitalists the world has ever known. 



Memos from Digital Antiquarian Corporate Headquarters, June 2017 Edition

From the Publications Department:

Those of you who enjoy reading the blog in ebook format will be pleased to hear that Volume 12 in that ongoing series is now available, full of articles centering roughly on the year 1990. As usual, the ebook is entirely the work of Richard Lindner. Thank you, Richard!

From the Security Department:

A few days ago, a reader notified me of an alarming development: he was getting occasional popup advertisements for a shady online betting site when he clicked article links within the site. Oddly enough, the popups were very intermittent; in lots of experimenting, I was only able to get them to appear on one device — an older iPad, for what it’s worth — and even then only every tenth or twelfth time I tapped a link. But investigation showed that there was indeed some rogue JavaScript that was causing them. I’ve cleaned it up and hardened that part of the site a bit more, but I remain a little concerned in that I haven’t identified precisely how someone or something got access to the file that was tampered with in the first place. If anything suspicious happens during your browsing, please do let me know. I don’t take advertisements of any sort, so any that you see on this site are by definition a security breach of some sort. In the meantime, I’ll continue to scan the site daily in healthily paranoid fashion. The last thing I want is a repeat of the Great Handbag Hack of 2012. (Do note, however, that none of your Patreon or PayPal information is stored on the site, and the database containing commenters’ email addresses has remained uncompromised — so nothing to worry too much over.)

From the Scheduling Department:

I’ve had to skip publishing an article more weeks than I wanted to this year. First I got sick after coming home from my research trip to the Strong Museum in Rochester, New York. Then we moved (within Denmark) from Odense to Aarhus, and I’m sure I don’t need to tell most of you what a chaotic process that can be. Most recently, I’ve had to do a lot more research than usual for my next subject; see the next two paragraphs for more on that. In a couple of weeks my wife and I are going to take a little holiday, which means I’m going to have to take one more bye week in June. After that, though, I hope I can settle back into the groove and start pumping out a reliable article every week for a while. Thanks for bearing with me!

From the Long-Term-Planning Department:

I thought I’d share a taste of what I plan to cover in the context of 1991 — i.e., until I write another of these little notices to tell you the next ebook is available. If you prefer that each new article be a complete surprise, you’ll want to skip the next paragraph.

(Spoiler Alert!)

I’ve got a series in the works for the next few weeks covering the history of computing in the Soviet Union, culminating in East finally meeting West in the age of Tetris. I’m already very proud of the articles that are coming together on this subject, and hope you’re going to find this little-known story as fascinating as I do. Staying with the international theme, we’ll then turn our attention to Britain for a while; in that context, I’m planning articles on the great British tradition of open-world action-adventures, on the iconic software house Psygnosis, and finally on Psygnosis’s most enduring game, Lemmings. Then we’ll check in with the Amiga 3000 and CDTV. I’m hoping that Bob Bates and I will be able to put together something rather special on Timequest. Then some coverage of the big commercial online services that predated the modern World Wide Web, along with the early experiments with massively multiplayer games which they fostered. We’ll have some coverage of the amateur text-adventure scene; 1991 was a pretty good year there, with some worthy but largely forgotten games released. I may have more to say about the Eastgate school of hypertext, in the form of Sarah Smith’s King of Space, if I can get the thing working and if it proves worthy of writing about. Be that as it may, we’ll definitely make time for Corey Cole’s edutainment classic The Castle of Dr. Brain and other contemporary doings around Sierra. Then we’ll swing back around to Origin, with a look at the two Worlds of Ultima titles — yes, thanks to your recommendations I’ve decided to give them more coverage than I’d originally planned — and Wing Commander II. We’ll wrap up 1991 with Civilization, a game which offers so much scope for writing that it’s a little terrifying. I’m still mulling over how best to approach that one, but I’m already hugely looking forward to it.

(End Spoilers)

From the Accounting Department:

I’ve seen a nice uptick in Patreon participation in recent months, for which I’m very grateful. Thank you to every reader who’s done this writer the supreme honor of paying for the words I scribble on the (virtual) page, whether you’ve been doing so for years or you just signed up yesterday.

If you’re a regular reader who hasn’t yet taken the plunge, please do think about supporting these serious long-form articles about one of the most important cultural phenomena of our times by signing up as a Patreon subscriber or making a one-time donation via the links to the right. Remember that I can only do this work thanks to the support of people just like you.

See you Friday! Really, I promise this time…


An Independent Interplay Takes on Tolkien

When Brian Fargo made the bold decision in 1988 to turn his company Interplay into a computer-game publisher as well as developer, he was simply steering onto the course that struck him as most likely to ensure Interplay’s survival. Interplay had created one of the more popular computer games of the 1980s in the form of the 400,000-plus-selling CRPG The Bard’s Tale, yet had remained a tiny company living hand-to-mouth while their publisher Electronic Arts sucked up the lion’s share of the profits. And rankling almost as much as that disparity was the fact that Electronic Arts sucked up the lion’s share of the credit as well; very few gamers even recognized the name of Interplay in 1988. If this was what it was like to be an indentured developer immediately after making the best-selling single CRPG of the 1980s, what would it be like when The Bard’s Tale faded into ancient history? The odds of making it as an independent publisher may not have looked great, but from some angles at least they looked better than Interplay’s prospects if the status quo was allowed to continue.

Having taken their leave of Electronic Arts and signed on as an affiliated label with Mediagenic in order to piggyback on the latter’s distribution network, the newly independent Interplay made their public bow by releasing two games simultaneously. One of these was Neuromancer, a formally ambitious, long-in-the-works adaptation of the landmark William Gibson novel. The other was the less formally ambitious Battle Chess, an initially Commodore Amiga-based implementation of chess in which the pieces didn’t just slide into position each time a player made a move but rather walked around the board to do animated battle. Not a patch on hardcore computerized chess games like The Chessmaster 2000 in terms of artificial intelligence — its chess-playing engine actually had its origin in a simple chess implementation released in source-code form by Borland to demonstrate their Turbo Pascal programming language — Battle Chess sold far better than any of them. Owners of Amigas were always eager for opportunities to show off their machines’ spectacular audiovisual capabilities, and Battle Chess delivered on that in spades, becoming one of the Amiga’s iconic games; it even featured prominently in a Computer Chronicles television episode about the Amiga. Today, long after its graphics have lost their power to wow us, it may be a little hard to understand why so many people were so excited about this slow-playing, gussied-up version of chess. Battle Chess, in other words, is unusually of-its-time even by the standards of old computer games. In its time, though, it delivered exactly what Interplay most needed as they stepped out on their own: it joined The Bard’s Tale to become the second major hit of their history, allowing them to firmly establish their footing on this new frontier of software publishing.

The King takes out a Knight in Battle Chess by setting off a bomb. Terrorism being what it is, this would not, needless to say, appear in the game if it was released today.

The remarkable success of Battle Chess notwithstanding, Interplay was hardly ready to abandon CRPGs — not after the huge sales racked up by The Bard’s Tale and the somewhat fewer but still substantial sales enjoyed by The Bard’s Tale II, The Bard’s Tale III, and Wasteland. Unfortunately, leaving Electronic Arts behind had also meant leaving those franchises behind; as was typical of publisher/developer relationships of the time, those trademarks had been registered by Electronic Arts, not Interplay. Faced with this reality, Interplay embarked on the difficult challenge of interesting gamers in an entirely new name on the CRPG front. Which isn’t to say that the new game would have nothing in common with what they’d done before. On the contrary, Interplay’s next CRPG was conceived as a marriage of the fantasy setting of The Bard’s Tale, which remained far more popular with gamers than alternative settings, with the more sophisticated game play of the post-apocalyptic Wasteland, which had dared to go beyond mowing down hordes of monsters as its be-all end-all.

Following a precedent he had established with Wasteland, Fargo hired established veterans of the tabletop-RPG industry to design the new game. But in lieu of Michael Stackpole and Ken St. Andre, this time he went with Steve Peterson and Paul Ryan O’Connor. The former was best known as the designer of 1981’s Champions, one of the first superhero RPGs and by far the most popular prior to TSR entering the fray with the official Marvel Comics license in 1984. The latter was yet another of the old Flying Buffalo crowd who had done so much to create Wasteland; at Flying Buffalo, O’Connor had been best known as the originator of the much-loved and oft-hilarious Grimtooth’s Traps series of supplements. In contrast to Stackpole and St. Andre, the two men didn’t work on Interplay’s latest CRPG simultaneously but rather linearly, with Peterson handing the design off to O’Connor after creating its core mechanics but before fleshing out its plot and setting. Only quite late into O’Connor’s watch did the game, heretofore known only as “Project X,” finally pick up its rather generic-sounding name of Dragon Wars.

At a casual glance, Dragon Wars’s cheesecake cover art looks like that of any number of CRPGs of its day. But for this game, Brian Fargo went straight to the wellspring of cheesecake fantasy art, commissioning Boris Vallejo himself to paint the cover. The end result set Interplay back $4000. You can judge for yourself whether it was money well-spent.

A list of Interplay’s goals for Dragon Wars reads as follows:

  • Deemphasis of “levels” — less of a difference in ability from one level to another.
  • Experience for something besides killing.
  • No random treasure — limit it such as Wasteland’s.
  • Ability to print maps (dump to printer).
  • Do something that has an effect — it does not necessarily have to be done to win (howitzer shell in Wasteland that blows up fast-food joint).
  • Characters should not begin as incompetents — thieves that disarm traps only 7 percent of the time, etc. Fantasy Hero/GURPS levels of competence at the start are more appropriate (50 percent or better success rate to start with).
  • Reduce bewildering array of slightly different spells.
  • If character “classes” are to be used, they should all be distinctive and different from each other and useful. None, however, should be absolutely vital.
  • Less linear puzzles — there should be a number of quests that can be done at any given time.

The finished game hews to these goals fairly well, and to impressive effect. The skill-based — as opposed to class-based — character system of Wasteland is retained, and there are multiple approaches available at every turn. Dragon Wars still runs on 8-bit computers like the Apple II and Commodore 64 in addition to more advanced machines, but it’s obvious throughout that Interplay has taken steps to remedy the shortcomings of their previous CRPGs as much as possible within the limitations of 8-bit technology. For instance, there’s an auto-map system included which, limited though it is, shows that they were indeed trying. As with Wasteland, an obvious priority is to bring more of the tabletop experience to the computer. Another priority, though, is new: to throttle back the pace of character development, thus steering around the “Monty Haul” approach so typical of CRPGs. Characters do gain in levels and thus in power in Dragon Wars, but only very slowly, while the game is notably stingy with the gold and magic that fill most CRPGs from wall to wall. Since leveling up and finding neat loot is such a core part of the joy of CRPGs for so many of us, these choices inevitably lead to a game that’s a bit of an acquired taste. That reality, combined with the fact that the game does no hand-holding whatsoever when it comes to building your characters or anything else — it’s so tough to create a viable party of your own out of the bewildering list of possible skills that contemporary reviewers recommended just playing with the included sample party — makes it a game best suited for hardened old-school CRPG veterans. That said, many in that select group consider Dragon Wars a classic.

Dragon Wars would mark the end of the line for Interplay games on 8-bit home computers. From now on, MS-DOS and consoles would dominate, with an occasional afterthought of an Amiga version.

The game didn’t do very well at retail, but that situation probably had more to do with external than intrinsic factors. It was introduced as yet another new name into a CRPG market that was drowning in more games than even the most hardcore fan could possibly play. And for all Interplay’s determination to advance the state of the art over The Bard’s Tale games and even Wasteland, Dragon Wars was all too obviously an 8-bit CRPG at a time when the 8-bit market was collapsing.

In the wake of Dragon Wars’s underwhelming reception, Interplay was forced to accept that the basic technical approach they had used with such success in all three Bard’s Tale games and Wasteland had to give way to something else, just as the 8-bit machines that had brought them this far had to fall by the wayside. Sometimes called The Bard’s Tale IV by fans — it would doubtless have been given that name had Interplay stayed with Electronic Arts — Dragon Wars was indeed the ultimate evolution of what Interplay had begun with the original Bard’s Tale. It was also, however, the end of that particular evolutionary branch of the CRPG.

Luckily, the ever industrious Brian Fargo had something entirely new in the works in the realm of CRPGs. And, as had become par for the course, that something would involve a veteran of the tabletop world.

Paul Jaquays[2] had discovered Dungeons & Dragons in 1975 in his first year of art college and never looked back. After founding The Dungeoneer, one of the young industry’s most popular early fanzines, he kicked around as a freelance writer, designer, and illustrator, coming to know most of the other tabletop veterans we’ve already met in the context of their work with Interplay. Then he spent the first half of the 1980s working on videogames for Coleco; he was brought on there by none other than Wasteland designer Michael Stackpole. Jaquays, however, remained at Coleco much longer than Stackpole, rising to head their design department. When Coleco gave up on their ColecoVision console and laid off their design staff in 1985, Jaquays went back to freelancing in both tabletop and digital gaming. Thus it came to pass that Brian Fargo signed him up to make Interplay’s next CRPG while Dragon Wars was still in production. The game was to be called Secrets of the Magi. While it was to have run on the 8-bit Commodore 64 among other platforms, it was planned as a fast-paced, real-time affair, in marked contrast to Interplay’s other CRPGs, with free-scrolling movement replacing their grid-based movement and action-oriented combat replacing their turn-based combat. But the commercial disappointment of Dragon Wars and the collapse of the 8-bit market it signified, combined with an entirely new development, changed most of those plans. Jaquays was told one day by one of Magi’s programmers that “we’re not doing this anymore. We’re doing a Lord of the Rings game.”

Fargo’s eyes had been opened to the possibilities for literary adaptations by his friendship with Timothy Leary, which had led directly to Interplay’s adaptation of Neuromancer and, more indirectly but more importantly in this context, taught him something about wheeling and dealing with the established powers of Old Media. At the time, the Tolkien estate, the holders of J.R.R. Tolkien’s literary copyrights, were by tacit agreement with Tolkien Enterprises, holders of the film license, the people to talk to if you wanted to create a computer game based on Tolkien. The only publisher that had yet released such a beast was Australia’s Melbourne House, who over the course of the 1980s had published four text adventures and a grand-strategy game set in Middle-earth. But theirs wasn’t an ongoing licensing arrangement; it had been negotiated anew for each successive game. And they hadn’t managed to make a Tolkien game that became a notable critical or commercial success since their very first one, a text-adventure adaptation of The Hobbit from way back in 1982. In light of all this, there seemed ample reason to believe that the Tolkien estate might be amenable to changing horses. So, Brian Fargo called them up and asked if he could make a pitch.

Fargo told me recently that he believes it was his “passion” for the source material that sealed the deal. Fargo:

I had obsessed over the books when I was little, had the calendar and everything. And inside the front cover of The Fellowship of the Ring was a computer program I’d written down by hand when I was in seventh grade. I brought it to them and showed them: “This was my first computer program, written inside the cover of this book.” I don’t know if that’s what got them to agree, but they did. I think they knew they were dealing with people that were passionate about the license.

One has to suspect that Fargo’s honest desire to make a Lord of the Rings game for all the right reasons was indeed the determining factor. Christopher Tolkien, the prime mover among J.R.R. Tolkien’s heirs, has always approached the question of adaptation with an eye to respecting and preserving the original literary works above all other considerations. And certainly the Tolkien estate must have seen little reason to remain loyal to Melbourne House, whose adaptations had grown increasingly lackluster since the glory days of their first Hobbit text adventure.

A bemused but more than willing Paul Jaquays thus saw his Secrets of the Magi transformed into a game with the long-winded title — licensing deals produce nothing if not long-winded titles — of J.R.R. Tolkien’s The Lord of the Rings, Volume 1. (For some reason known only to the legal staff, the name The Fellowship of the Ring wasn’t used, even though the part of Tolkien’s story covered by the game dovetails almost perfectly with the part covered by that first book in the trilogy.) While some of the ideas that were to have gone into Jaquays’s original plan for Secrets of the Magi were retained, such as the real-time play and free-scrolling movement, the game would now be made for MS-DOS rather than the Commodore 64. Combined with the Tolkien license, which elevated the game at a stroke to the status of the most high-profile ongoing project at Interplay, the switch in platforms led to a dramatic up-scaling in ambition.

Thrilled though everyone had been to acquire the license, making The Lord of the Rings, by far the biggest thing Interplay had ever done in terms of sheer amount of content, turned into a difficult grind that was deeply affected by external events, starting with a certain crisis of identity and ending with a full-blown existential threat.

Like so many American computer-game executives at the time, Brian Fargo found the Nintendo Entertainment System and its tens of millions of active players hard to resist. While one piece of his company was busy making The Lord of the Rings into a game, he therefore set another piece to work churning out Interplay’s first three Nintendo games. Having no deal with the notoriously fickle Nintendo and thus no way to enter their walled garden as a publisher in their own right, Interplay was forced to publish two of these games through Mediagenic’s Activision label, the other through Acclaim Entertainment. Unfortunately, Fargo was also like many other computer-game executives in discovering to his dismay that there was far more artistry to Nintendo hits like Super Mario Bros. than their surface simplicity might imply — and that Nintendo gamers, young though they mostly were, were far from undiscerning. None of Interplay’s Nintendo games did very well at all, which in turn did no favors to Interplay’s bottom line.

Interplay sorely needed another big hit like Battle Chess, but it was proving damnably hard to find. Even the inevitable Battle Chess II: Chinese Chess performed only moderately well. The downside of a zeitgeist-in-a-bottle product like Battle Chess was that it came with a built-in sell-by date. There just wasn’t much to be done to build on the original game’s popularity other than re-skinning it with new graphics, and the brief historical instant when animated chessmen were enough to sell a game was already passing.

Then, in the midst of these other struggles, Interplay was very nearly buried by the collapse of Mediagenic in 1990. I’ve already described the reasons for that collapse and much of the effect it had on Interplay and the rest of the industry in an earlier article, so I won’t retread that ground in detail here. Instead I’ll just reiterate that the effect was devastating for Interplay. With the exception only of the single Nintendo game published through Acclaim and the trickle of royalties still coming in from Electronic Arts for their old titles, Interplay’s entire revenue stream had come through Mediagenic. Now that stream had run dry as the Sahara. In the face of almost no income whatsoever, Brian Fargo struggled to keep Interplay’s doors open, to keep his extant projects on track, and to establish his own distribution channel to replace the one he had been renting from Mediagenic. His company mired in the most serious crisis it had ever faced, Fargo went to his shareholders — Interplay still being privately held, these consisted largely of friends, family, and colleagues — to ask for the money he needed to keep it alive. He managed to raise over $500,000 in short-term notes from them, along with almost $200,000 in bank loans, enough to get Interplay through 1990 and get the Lord of the Rings game finished. The Lord of the Rings game, in other words, had been elevated by the misfortunes of 1990 from an important project to a bet-the-company project. It was to be finished in time for the Christmas of 1990, and if it became a hit then Interplay might just live to make more games. And if not… it didn’t bear thinking about.

The Lord of the Rings’s free-scrolling movement and overhead perspective were very different from what had come before, ironically resembling Origin’s Ultima games more than Interplay’s earlier CRPGs. But the decision to have the interface get out of the way when it wasn’t needed, thus giving more space to the world, was very welcome, especially in comparison to the cluttered Ultima VI engine. Interplay’s approach may well have influenced Ultima VII.

If a certain technical approach to the CRPG — a certain look and feel, if you will — can be seen as having been born with the first Bard’s Tale and died after Dragon Wars, a certain philosophical approach can be seen just as validly as having been born with Wasteland and still being alive and well at Interplay at the time of The Lord of the Rings. The design of the latter would once again emphasize character skills rather than character class, and much of the game play would once again revolve around applying your party’s suite of skills to the situations encountered. Wasteland’s approach to experience and leveling up had been fairly traditional; characters increased in power relatively quickly, especially during the early stages of the game, and could become veritable demigods by the end. Dragon Wars, though, had departed from tradition by slowing this process dramatically, and now The Lord of the Rings would eliminate the concept of character level entirely; skills would still increase with use, but only slowly, and only quietly behind the scenes. These mechanical changes would make the game unlike virtually any CRPG that had come before it, to such an extent that some have argued over whether it quite manages to qualify as a CRPG at all. It radically de-emphasizes the character-building aspect of the genre — you don’t get to make your own characters at all, but start out in the Shire with only Frodo and assemble a party over the course of your travels — and with it the tactical min/maxing that is normally such a big part of old-school CRPGs. As I noted in my previous article, Middle-earth isn’t terribly well-suited to traditional RPG mechanics. The choice Interplay made to focus less on mechanics and more on story and exploration feels like a logical response, an attempt to make a game that does embody Tolkien’s ethos.

In addition to the unique challenges of adapting CRPG mechanics to reflect the spirit of Middle-earth, Interplay’s Lord of the Rings game faced all the more typical challenges of adapting a novel to interactive form. To simply walk the player through the events of the book would be uninteresting and, given the amount of texture and exposition that would be lost in the transition from novel to game, would yield far too short of an experience. Interplay’s solution was to tackle the novel in terms of geography rather than plot. They created seven large maps for you to progress through, covering the stages of Frodo and company’s journey in the novel: the Shire, the Old Forest, Bree, Rivendell, Moria, Lothlórien, and Dol Guldur. (The last reflects the game’s only complete deviation from the novel; for its climax, it replaces the psychological drama of Boromir’s betrayal of the Fellowship with a more ludically conventional climactic assault on the fortress of the Witch-King of Angmar — the Lord of the Nazgûl —  who has abducted Frodo.) Paul Jaquays scattered episodes from the novel over the maps in what seemed the most logical places. Then, he went further, adding all sorts of new content.

Interplay understood that reenacting the plot of the novel wasn’t really what players would find most appealing about a CRPG set in Middle-earth. The real appeal was that of simply wandering about in the most beloved landscapes in all of fantasy fiction. For all that the Fellowship was supposed to be on a desperate journey to rid the world of its greatest threat in many generations, with the forces of evil hot on their trail, it wouldn’t do to overemphasize that aspect of the book. Players would want to stop and smell the roses. Jaquays therefore stuffed each of the maps with content, almost all of it optional; there’s very little that you need to do to finish the game. While a player who takes the premise a bit too literally could presumably rush through the maps in a mere handful of hours, the game clearly wants you to linger over its geography, scouring it from end to end to see what you can turn up.

In crafting the maps, and especially in crafting the new content on them, Jaquays was hugely indebted to Iron Crown Enterprises’s Middle-earth Role Playing tabletop RPG and its many source books, which filled in the corners of Middle-earth in even greater detail than Tolkien had managed in his voluminous notes. For legal reasons — Interplay had bought a Fellowship of the Ring novel license, not a Middle-earth Role Playing game license — care had to be taken not to lift anything too blatantly, but anyone familiar with both Iron Crown’s game and Interplay’s game can’t help but notice the similarities. The latter’s vision of Middle-earth is almost as indebted to the former as it is to Tolkien himself. One might say that it plays like an interactive version of one of those Iron Crown source books.

Conversation takes the Ultima “guess the keyword” approach. Sigh… at least you can usually identify topics by watching for capitalized words in the text.

Interplay finished development on the game in a mad frenzy, with the company in full crisis mode, trying to get it done in time for the Christmas of 1990. But in the end, they were forced to make the painful decision to miss that deadline, allowing the release date to slip to the beginning of 1991. Then, with it shipping at last, they waited to see whether their bet-the-company game would indeed save their skins. Early results were not encouraging.

Once you got beyond the awful, unwieldy name, J.R.R. Tolkien’s The Lord of the Rings, Volume 1 seemingly had everything going for it: a developer with heaps of passion and heaps of experience making CRPGs, a state-of-the-art free-scrolling engine with full-screen graphics, and of course a license for the most universally known and beloved series of books in all of fantasy fiction. It ought to have been a sure thing, a guaranteed hit if ever there was one. All of which makes its reception and subsequent reputation all the more surprising. If it wasn’t quite greeted with a collective shrug, Interplay’s first Tolkien game was treated with far more skepticism than its pedigree might lead one to expect.

Some people were doubtful of the very idea of trying to adapt Tolkien, that most holy name in the field of fantasy, into a game in much the same way that some Christians might be doubtful of making Jesus Christ the star of a game. For those concerned above all else with preserving the integrity of the original novel, Interplay’s approach to the task of adaptation could only be aggravating. Paul Jaquays had many talents, but he wasn’t J.R.R. Tolkien, and the divisions between content drawn from the books and new content were never hard to spot. What right had a bunch of game developers to add on to Middle-earth? It’s a question, of course, with no good answer.

But even those who were more accepting of the idea of The Lord of the Rings in game form found a lot of reasons to complain about this particular implementation of the idea. The most immediately obvious issue was the welter of bugs. Bugs in general were becoming a more and more marked problem in the industry as a whole as developers strained to churn out ever bigger games capable of running on an ever more diverse collection of MS-DOS computing hardware. Still, even in comparison to its peers Interplay’s Lord of the Rings game is an outlier, riddled as it is with quests that can’t be completed, areas that can’t be accessed, and dialog that doesn’t make sense. Its one saving grace is the generosity and flexibility that Jaquays baked into the design, which makes it possible to complete the game even though it can sometimes seem like at least half of it is broken in one way or another. A few more months of development time all too obviously should have been given to the project, even if it was already well behind schedule. Given the state of the game Interplay released in January of 1991, one shudders to think what they had seriously considered rushing to market during the holiday season.

You’ll spend a lot of time playing matchy-matchy with lists of potentially applicable skills. A mechanic directly imported from tabletop RPGs, it isn’t the best fit for a computer game, for reasons I explicated in my article on Wasteland.

Other issues aren’t quite bugs in the traditional sense, but do nevertheless feel like artifacts of the rushed development cycle. The pop-up interface which overlays the full-screen graphics was innovative in its day, but it’s also far more awkward to use than it needs to be, feeling more than a little unfinished. It’s often too difficult to translate actions into the terms of the interface, a problem that’s also present in Wasteland and Dragon Wars but is even more noticeable here. Good, logical responses to many situations — responses which are actually supported by the game — can fall by the wayside because you fail to translate them correctly into the terms of the tortured interface. Throwing some food to a band of wolves to make them go away rather than attack you early in the game, for instance, requires you to divine that you need to “trade” the food to them. Few things are more frustrating than looking up the solution to a problem like this one and learning that you went awry because you “used” food on the wolves instead of “trading” it to them.

But perhaps the most annoying issue is that of simply finding your way around. Each of those seven maps is a big place, and no auto-map facility is provided; Interplay had intended to include such a feature, but dropped it in the name of saving time. The manual does provide a map of the Shire, but after that you’re on your own. With paper-and-pencil mapping made damnably difficult by the free-scrolling movement, which makes it impossible to judge distances accurately, just figuring out where you are, where you’ve been, and where you need to go often turns into the most challenging aspect of the game.

Combat can be kind of excruciating, especially when you’re stuck with nothing but a bunch of hobbits.

It all adds up to something of a noble failure — a game which, despite the best intentions of everyone involved, just isn’t as magical as it ought to have been. The game sold in moderate numbers on the strength of the license, but, its commercial prospects damaged as much by missing the Christmas buying season as by the lukewarm reviews, it never became the major hit Interplay so desperately needed. That disappointment may very well have marked the end of Interplay, if not for a stroke of good fortune from a most unexpected quarter.

Shortly after electing to turn Interplay into an independent publisher, Brian Fargo had begun looking for more games to publish beyond those his small internal team could develop. He’d found some worthwhile titles, albeit titles reflective of the small size and relative lack of clout of his company: a classical chess game designed to appeal to those uninterested in Battle Chess‘s eye-candy approach; a series of typing tutors; a clever word game created by a couple of refugees from the now-defunct Cinemaware; a series of European imports sourced through France’s Delphine Software. None had set the world on fire, but then no one had really expected them to.

That all changed when Interplay agreed to publish a game called Castles, from a group of outside developers who called themselves Quicksilver Software. Drawing from King Edward I of England’s castle-building campaign in Wales for its historical antecedent, Castles at its core was essentially a medieval take on SimCity. Onto this template, however, Quicksilver grafted the traditional game elements some had found lacking in Will Wright’s software toy. The player’s castles would occasionally be attacked by enemy armies, forcing her to defend them in simple tactical battles, and she would also have to deal with the oft-conflicting demands of the clergy, the nobility, and the peasantry in embodied exchanges that gave the game a splash of narrative interest. Not a deathless classic by any means, it was a game that just about everyone could while away a few hours with. Castles was able to attract the building crowd who loved SimCity, the grognard crowd who found its historical scenario appealing, and the adventure and CRPG crowd who liked the idea of playing the role of a castle’s chief steward, finishing the mixture off with a salting of educational appeal. With some of the most striking cover art of any game released that year to serve as the finishing touch, its combination of appeals proved surprisingly potent. In fact, no one was more surprised by the game’s success than Interplay, who, upon releasing Castles just weeks after the Lord of the Rings game, found themselves with an unexpected but well-nigh life-saving hit on their hands. Every time you thought you understood gamers, Brian Fargo was continuing to learn, they’d turn around and surprise you.

So, thanks to this most fortuitous of saviors, Interplay got to live on. Almost in spite of himself, Fargo continued to pull a hit out of his sleeve every two or three years, always just when his company most needed one. He’d done it with The Bard’s Tale, he’d done it with Battle Chess, and now he’d done it with Castles.

Castles had rather stolen The Lord of the Rings’s thunder, but Interplay pressed on with the second game in the trilogy, which was allowed the name The Two Towers to match that of its source novel. Released in August of 1992 after many delays, it’s very similar in form and execution to its predecessor — including, alas, lots more bugs — despite the replacement of Paul Jaquays with a team of designers that this time included Ed Greenwood, one of the more prominent creative figures of the post-Gary Gygax era of TSR. Interplay did try to address some of the complaints about the previous game by improving the interface, by making the discrete maps smaller and thus more manageable, and by including the auto-mapping feature that had been planned for but left out of its predecessor. But it still wasn’t enough. Reviewers were even more unkind to the sequel despite Interplay’s efforts, and it sold even worse. By this point, Interplay had scored another big hit with Star Trek: 25th Anniversary, the first officially licensed Star Trek game to be worthy of the name, and had other projects on the horizon that felt far more in keeping with the direction the industry was going than did yet another sprawling Middle-earth CRPG. Brian Fargo’s passion for Tolkien may have been genuine, but at some point in business passion has to give way to financial logic. Interplay’s vision of The Lord of the Rings was thus quietly abandoned at the two-thirds mark.

In a final bid to eke a bit more out of it, Interplay in 1993 repackaged the first Lord of the Rings game for CD-ROM, adding an orchestral soundtrack and interspersing the action, rather jarringly, with clips from Ralph Bakshi’s 1978 animated Lord of the Rings film, which Fargo had also managed to license. But the most welcome improvement came in the form of a slightly more advanced game engine, including an auto-map. Despite the improvements, sales of this version were so poor that Interplay never bothered to give The Two Towers the CD-ROM treatment. A dire port/re-imagining of the first game for the Super Nintendo was the final nail in the coffin, marking the last gasp of Interplay’s take on Tolkien. Just as Bakshi had left his hobbits stranded on the way to Mordor when he failed to secure the financing to make his second Lord of the Rings movie, Interplay left theirs in limbo only a little closer to the Crack of Doom. The irony of this was by no means lost on so dedicated a Tolkien fan as Brian Fargo.

Unlike Dragon Wars, which despite its initial disappointing commercial performance has gone on to attain a cult-classic status among hardcore CRPG fans, the reputations of the two Interplay Lord of the Rings games have never been rehabilitated. Indeed, to a large extent the games have simply been forgotten, bizarre though that situation reads given their lineage in terms of both license and developer. Being neither truly, comprehensively bad games nor truly good ones, they fall into a middle ground of unmemorable mediocrity. In response to their poor reception by a changing marketplace, Interplay would all but abandon CRPGs for the next several years. The company The Bard’s Tale had built could now make a lot more money in other genres. If there’s one thing the brief marriage of Interplay with Tolkien demonstrates, it’s that a sure thing is never a sure thing.

(Sources: This article is largely drawn from the collection of documents that Brian Fargo donated to the Strong Museum of Play. Also, Questbusters of March 1989, December 1989, January 1991, June 1991, April 1992, and August 1992; Antic of July 1985; Commodore Magazine of October 1988; Creative Computing of September 1981; Computer Gaming World of December 1989 and September 1990. Online sources include a Jennell Jaquays Facebook posting and the Polygon article “There and Back Again: A History of The Lord of the Rings in Video Games.” Finally, my huge thanks to Brian Fargo for taking time from his busy schedule to discuss his memories of Interplay’s early days with me.

Neither of the two Interplay Lord of the Rings games have been available for purchase for a long, long time, a situation that is probably down to the fine print of the licensing deal that was made with the Tolkien estate all those years ago. I hesitate to host them here out of fear of angering either of the parties who signed that deal, but they aren’t hard to find elsewhere online with a little artful Googling.)

  1. The later Apple II and Commodore 64 ports of the game ironically played a much stronger game of chess despite running on much more limited hardware. For them, Interplay licensed a chess engine from Julio Kaplan, an International Chess Master and former World Junior Chess Champion who had written the firmware for a number of custom chess-playing computers and served as an all-purpose computer-chess consultant for years.

  2. Paul Jaquays now lives as Jennell Jaquays. As per my usual editorial policy on these matters, I refer to her as “he” and by her original name only to avoid historical anachronisms and to stay true to the context of the times. 


Posted on May 26, 2017 in Uncategorized


The Many Faces of Middle-earth, 1954-1989

The transformation of J.R.R. Tolkien’s The Lord of the Rings from an off-putting literary trilogy — full of archaic diction, lengthy appendixes, and poetry, for God’s sake — into some of the most bankable blockbuster fodder on the planet must be one of the most unlikely stories in the history of pop culture. Certainly Tolkien himself must be about the most unlikely mass-media mastermind imaginable. During his life, he was known to his peers mostly as a philologist, or historian of languages. The whole Lord of the Rings epic was, he once admitted, “primarily linguistic in inspiration, and was begun in order to provide the necessary background history” for the made-up languages it contained. On another occasion, he called the trilogy “a fundamentally religious and Catholic work.” That doesn’t exactly sound like popcorn-movie material, does it?

So, what would this pipe-smoking, deeply religious old Oxford don have made of our modern takes on his work, of CGI spellcraft and 3D-rendered hobbits mowing down videogame enemies by the dozen? No friend of modernity in any of its aspects, Tolkien would, one has to suspect, have been nonplussed at best, outraged at worst. But perhaps — just perhaps, if he could contort himself sufficiently — he might come to see all this sound and fury as at least as much validation as betrayal of his original vision. In writing The Lord of the Rings, he had explicitly set out to create a living epic in the spirit of Homer, Virgil, Dante, and Malory. For better or for worse, the living epics of our time unspool on screens rather than on the page or in the chanted words of bards, and come with niceties like copyright and trademark attached.

And where those things exist, so exist also the corporations and the lawyers. It would be those entities rather than Tolkien or even any of his descendants who would control how his greatest literary work was adapted to screens large, small, and in between. Because far more people in this modern age of ours play games and watch movies than read books of any stripe — much less daunting doorstops like The Lord of the Rings trilogy — this meant that Middle-earth as most people would come to know it wouldn’t be quite the same land of myth that Tolkien himself had created so laboriously over so many decades in his little tobacco-redolent office. Instead, it would be Big Media’s interpretations and extrapolations therefrom. In the first 48 years of its existence, The Lord of the Rings managed to sell a very impressive 100 million copies in book form. In only the first year of its existence, the first installment of Peter Jackson’s blockbuster film trilogy was seen by 150 million people.

To understand how The Lord of the Rings and its less daunting predecessor The Hobbit were transformed from books authored by a single man into a palimpsest of interpretations, we need to understand how J.R.R. Tolkien lost control of his creations in the first place. And to begin to do that, we need to cast our view back to the years immediately following the trilogy’s first issuance in 1954 and 1955 by George Allen and Unwin, who had already published The Hobbit with considerable success almost twenty years earlier.

During its own early years, The Lord of the Rings didn’t do anywhere near as well as The Hobbit had, but did do far better than its publisher or its author had anticipated. It sold at least 225,000 copies (this and all other sales figures given in this article refer to sales of the trilogy as a whole, not to sales of the individual volumes that made up the trilogy) in its first decade, the vast majority of them in its native Britain, despite being available only in expensive hardcover editions and despite being roundly condemned, when it was noticed at all, by the very intellectual and literary elites that made up its author’s peer group. In the face of their rejection by polite literary society, the books sold mostly to existing fans of fantasy and science fiction, creating some decided incongruities; Tolkien never quite seemed to know how to relate to this less mannered group of readers. In 1957, the trilogy won the only literary prize it would ever be awarded, becoming the last recipient of the short-lived International Fantasy Award, which belied its hopeful name by being a largely British affair. Tolkien, looking alternately bemused and deeply uncomfortable, accepted the award, shook hands and signed autographs for his fans, smiled for the cameras, and got the hell out of there just as quickly as he could.

The books’ early success, such as it was, was centered very much in Britain; the trilogy only sold around 25,000 copies in North America during the entirety of its first decade. It enjoyed its first bloom of popularity there only in the latter half of the 1960s, ironically fueled by two developments deeply antithetical to its author. The first was a legally dubious mass-market paperback edition published in the United States by Ace Books in 1965; the second was the burgeoning hippie counterculture.

Donald Wollheim, senior editor at Ace Books, had discovered what he believed to be a legal loophole giving him the right to publish the trilogy, thanks to the failure of Houghton Mifflin, Tolkien’s American hardcover publisher, to properly register their copyright to it in the United States. Never a man prone to hesitation, he declared that Houghton Mifflin’s negligence had effectively left The Lord of the Rings in the public domain, and proceeded to publish a paperback edition without consulting Tolkien or paying him anything at all. Condemned by the resolutely old-fashioned Tolkien for taking the “degenerate” form of the paperback as much as for the royalties he wasn’t paid, the Ace editions nevertheless sold in the hundreds of thousands in a matter of months. Elizabeth Wollheim, daughter of Donald and herself a noted science-fiction and fantasy editor, has characterized the instant of the appearance of the Ace editions of The Lord of the Rings in October of 1965 as the “Big Bang” that led to the modern cottage industry in doorstop fantasy novels. Along with Frank Herbert’s Dune, which appeared the following year, they obliterated almost at a stroke the longstanding tradition in publishing of genre novels as concise works coming in at under 250 pages.

Even as these cheap Ace editions of Tolkien became a touchstone of what would come to be known as nerd culture, they were also seized on by a very different constituency. With the Summer of Love just around the corner, the counterculture came to see in the industrialized armies of Sauron and Saruman the modern American war machine they were protesting, in the pastoral peace of the Shire the life they saw as their naive ideal. The Lord of the Rings became one of the hippie movement’s literary totems, showing up in the songs of Led Zeppelin and Argent, and, as later memorably described by Peter S. Beagle in the most famous introduction to the trilogy ever written, even scrawled on the walls of New York City’s subways (“Frodo lives!”). Beagle’s final sentiments in that piece could stand in very well for the counterculture’s as a whole: “We are raised to honor all the wrong explorers and discoverers — thieves planting flags, murderers carrying crosses. Let us at last praise the colonizers of dreams.”

If Tolkien had been uncertain how to respond to the earnest young science-fiction fans who had started showing up at his doorstep seeking autographs in the late 1950s, he had no shared frame of reference whatsoever with these latest readers. He was a man at odds with his times if ever there was one. On the rare occasions when contemporary events make an appearance in his correspondence, it always reads as jarring. Tolkien comes across as a little confused by it all, never quite able to get the language right. For example, in a letter from 1964, he writes that “in a house three doors away dwells a member of a group of young men who are evidently aiming to turn themselves into a Beatle Group. On days when it falls to his turn to have a practice session the noise is indescribable.” Whatever the merits of the particular musicians in question, one senses that the “noise” of the “Beatle group” music wouldn’t have suited Tolkien one bit in any scenario. And as for Beagle’s crack about “murderers carrying crosses,” it will perhaps suffice to note that his introduction was published only after Tolkien, the devout Catholic, had died. Like the libertarian conservative Robert Heinlein, whose Stranger in a Strange Land became another of the counterculture’s totems, Tolkien suffered the supreme irony of being embraced as a pseudo-prophet by a group whose sociopolitical worldview was almost the diametrical opposite of his own. As the critic Leonard Jackson has noted, it’s decidedly odd that the hippies, who “lived in communes, were anti-racist, were in favour of Marxist revolution and free love” should choose as their favorite “a book about a largely racial war, favouring feudal politics, jam-full of father figures, and entirely devoid of sex.”

Note the pointed reference to these first Ballantine editions of The Lord of the Rings as the “authorized” editions.

To what extent Tolkien was even truly aware of his works’ status with the counterculture is something of an open question, although he certainly must have noticed the effect it had on his royalty checks after the Ace editions were forced off the market, to be replaced by duly authorized Ballantine paperbacks. In the first two years after issuing the paperbacks, Ballantine sold almost 1 million copies of the series in North America alone.

In October of 1969, smack dab in the midst of all this success, Tolkien, now 77 years old and facing the worry of a substantial tax bill in his declining years, made one of the most retrospectively infamous deals in the history of pop culture. He sold the film rights to The Hobbit and The Lord of the Rings to the Hollywood studio United Artists for £104,602 and a fixed cut of 7.5 percent of any profits that might result from cinematic adaptations. And along with film rights went “merchandising rights.” Specifically, United Artists was given rights to the “manufacture, sale, and distribution of any and all articles of tangible personal property other than novels, paperbacks, and other printed published matter.” All of these rights were granted “in perpetuity.”

What must have seemed fairly straightforward in 1969 would in decades to come turn into a Gordian Knot involving hundreds of lawyers, all trying to resolve once and for all just what part of Tolkien’s legacy he had retained and what part he had sold. In the media landscape of 1969, the merchandising rights to “tangible personal property” which Tolkien and United Artists had envisioned must have been limited to toys, trinkets, and souvenirs, probably associated with any films United Artists should choose to make based on Tolkien’s books. Should the law therefore limit the contract to its signers’ original intent, or should it be read literally? If the law chose the latter course, Tolkien had unknowingly sold off the videogame rights to his work before videogames even existed in anything but the most nascent form. Or did he really? Should videogames, being at their heart intangible code, really be lumped even by the literalists into the rights sold to United Artists? After all, the contract explicitly reserves “the right to utilize and/or dispose of all rights and/or interests not herein specifically granted” to Tolkien. This question of course only gets more fraught in our modern age of digital distribution, when games are often sold with no tangible component at all. And then what of tabletop games? They’re quite clearly neither novels nor paperbacks, but they might be, at least in part, “other printed published matter.” What precisely did that phrase mean? The contract doesn’t stipulate. In the absence of any clear pathways through this legal thicket, the history of Tolkien licensing would become that of a series of uneasy truces occasionally erupting into open legal warfare. About the only things that were clear were that Tolkien — soon, his heirs — owned the rights to the original books and that United Artists — soon, the person who bought the contract from them — owned the rights to make movies out of them. Everything else was up for debate. And debated it would be, at mind-numbing length.

It would, however, be some time before the full ramifications of the document Tolkien had signed started to become clear. In the meantime, United Artists began moving forward with a film adaptation of The Lord of the Rings that was to have been placed in the hands of the director and screenwriter John Boorman. Boorman worked on the script for years, during which Tolkien died and his literary estate passed into the hands of his heirs, most notably his third son and self-appointed steward of his legacy, Christopher Tolkien. The final draft of Boorman’s script compressed the entire trilogy into a single 150-minute film, and radically changed it in terms of theme, character, and plot to suit a Hollywood sensibility. For instance, Boorman added the element of sex that was so conspicuously absent from the books, having Frodo and Galadriel engage in a torrid affair after the Fellowship comes to Lothlórien. (Given the disparity in their sizes, one does have to wonder about the logistics, as it were, of such a thing.) But in the end, United Artists opted, probably for the best, not to let Boorman turn his script into a movie. (Many elements from the script would turn up later in Boorman’s Arthurian epic Excalibur.)

Of course, it’s unlikely that literary purity was foremost in United Artists’ minds when they made their decision. As the 1960s had turned into the 1970s and the Woodstock generation had gotten jobs and started families, Tolkien’s works had lost some of their trendy appeal, retaining their iconic status only among fantasy fandom. Still, the books continued to sell well; they would never lose the status they had acquired almost from the moment the Ace editions had been published of being the bedrock of modern fantasy fiction, something everyone with even a casual interest in the genre had to at least attempt to read. Not being terribly easy books, they defeated plenty of these would-be readers, who went off in search of the more accessible, more contemporary-feeling epic-fantasy fare so many publishers were by now happily providing. Yet even among the readers it rebuffed, The Lord of the Rings retained the status of an aspirational ideal.

In 1975, a maverick animator named Ralph Bakshi, who had heretofore been best known for Fritz the Cat, the first animated film to earn an X rating, came to United Artists with a proposal to adapt The Lord of the Rings into a trio of animated features that would be relatively inexpensive in comparison to Boorman’s plans for a live-action epic. United Artists didn’t bite, but did signify that they might be amenable to selling the rights they had purchased from Tolkien if Bakshi could put together a few million dollars to make it happen. In December of 1976, following a string of proposals and deals too complicated and imperfectly understood to describe here, a hard-driving music and movie mogul named Saul Zaentz wound up owning the whole package of Tolkien rights that had previously belonged to United Artists. He intended to use his purchase first to let Bakshi make his films and thereafter for whatever other opportunities might happen to come down the road.

Saul Zaentz, seated at far left, with Creedence Clearwater Revival.

Saul Zaentz had first come to prominence back in 1967, when he’d put together a group of investors to buy a struggling little jazz label called Fantasy Records. His first signing as the new president of Fantasy was Creedence Clearwater Revival, a rock group he had already been managing. Whether due to Zaentz’s skill as a talent spotter or sheer dumb luck, it was the sort of signing that makes a music mogul rich for life. Creedence promptly unleashed eleven top-ten singles and five top-ten albums over the course of the next three and a half years, the most concentrated run of hits of any 1960s band this side of the Beatles. And Zaentz got his fair share of all that filthy lucre — more than his fair share, his charges eventually came to believe. When the band fell apart in 1972, much of the cause was infighting over matters of business. The other members came to blame Creedence’s lead singer and principal songwriter John Fogerty for convincing them to sign a terrible contract with Zaentz that gave away rights to their songs to him for… well, in perpetuity, actually. And as for Fogerty, he of course blamed Zaentz for all the trouble. Decades of legal back and forth followed the breakup. At one point, Zaentz sued Fogerty on the novel legal theory of “self-plagiarization”: the songs Fogerty was now writing as a solo artist, went the brief, were too similar to the ones he used to write for Creedence, all of whose copyrights Zaentz owned. While his lawyers pleaded his case in court, Fogerty vented his rage via songs like “Zanz Kant Danz,” the story of a pig who, indeed, can’t dance, but will happily “steal your money.”

I trust that this story gives a sufficient impression of just what a ruthless, litigious man now owned adaptation rights to the work of our recently deceased old Oxford don. But whatever else you could say about Saul Zaentz, he did know how to get things done. He secured financing for the first installment of Bakshi’s animated Lord of the Rings, albeit on the condition that he cut the planned three-film series down to two. Relying heavily on rotoscoping to give his cartoon figures an uncannily naturalistic look, Bakshi finished the film for release in November of 1978. Regarded as something of a cult classic among certain sectors of Tolkien fandom today, in its own day the film was greeted with mixed to poor reviews. The financial picture is equally muddled. While it’s been claimed, including by Bakshi himself, that the movie was a solid success, earning some $30 million on a budget of a little over $4 million, the fact remains that Zaentz was unable to secure funding for the sequel, leaving poor Frodo, Sam, and Gollum forever in limbo en route to Mount Doom. It is, needless to say, difficult to reconcile a successful first film with this refusal to back a second. But regardless of the financial particulars, The Lord of the Rings wouldn’t make it back to the big screen for more than twenty years, until the enormous post-millennial Peter Jackson productions that well and truly, once and for all, broke Middle-earth into the mainstream.

Yet, although the Bakshi adaptation was the only Tolkien film to play in theaters during this period, it wasn’t actually the only Tolkien film on offer. In November of 1977, a year before the Bakshi Lord of the Rings made its bow, a decidedly less ambitious animated version of The Hobbit had played on American television. The force behind it was Rankin/Bass Productions, who had previously been known in television broadcasting for holiday specials such as Rudolph the Red-Nosed Reindeer. Their take on Tolkien was authorized not by Saul Zaentz but by the Tolkien estate. Being shot on video rather than film and then broadcast rather than shown in theaters, the Rankin/Bass Hobbit was not, legally speaking, a “movie” under the terms of the 1969 contract. Nor was it a “tangible” product, thus making it fair game for the Tolkien estate to authorize without involving Zaentz. That, anyway, was the legal theory under which the estate was operating. They even authorized a sequel to the Rankin/Bass Hobbit in 1980, which rather oddly took the form of an adaptation of The Return of the King, the last book of The Lord of the Rings. A precedent of dueling licenses, authorizing different versions of what to casual eyes at least often seemed to be the very same things, was thus established.

But these flirtations with mainstream visibility came to an end along with the end of the 1970s. After the Ralph Bakshi and Rankin/Bass productions had all had their moments in the sun, The Lord of the Rings was cast back into its nerdy ghetto, where it remained more iconic than ever. Yet the times were changing in some very important ways. From the moment he had clear ownership of the rights Tolkien had once sold to United Artists, Saul Zaentz had taken to interpreting their compass in the broadest possible way, and had begun sending his lawyers after any real or alleged infringers who grew large enough to come to his attention. This marked a dramatic change from the earliest days of Tolkien fandom, when no one had taken any apparent notice of fannish appropriations of Middle-earth, to such an extent that fans had come to think of all use of Tolkien’s works as fair use. In that spirit, in 1975 a tiny game publisher called TSR, incubator of an inchoate revolution called Dungeons & Dragons, had started selling a non-Dungeons & Dragons strategy game called Battle of the Five Armies that was based on the climax of The Hobbit. In late 1977, Zaentz sent them a cease-and-desist letter demanding that the game be immediately taken off the market. And, far more significantly in the long run, he also demanded that all Tolkien references be excised from Dungeons & Dragons. It wasn’t really clear that Zaentz ought to have standing to sue, given that Battle of the Five Armies and especially Dungeons & Dragons consisted of so much of the “printed published matter” that was supposedly reserved to the Tolkien estate. But, hard charger that he was, Zaentz wasn’t about to let such niceties stop him. He was establishing legal precedent, and thereby cementing his position for the future.

The question of just how much influence Tolkien had on Dungeons & Dragons has been long obscured by this specter of legal action, which gave everyone on the TSR side ample reason to be less than entirely forthcoming. That said, certain elements of Dungeons & Dragons — most obviously the “hobbit” character class found in the original game — undeniably walked straight off the pages of Tolkien and into those of Gary Gygax’s rule books. At the same time, though, the mechanics of Dungeons & Dragons had, as Gygax always strenuously asserted, much more to do with the pulpier fantasy stories of Jack Vance and Robert E. Howard than they did with Tolkien. Ditto the game’s default personality, which hewed more to the “a group of adventurers meet in a bar and head out to bash monsters and collect treasure” modus operandi of the pulps than it did to Tolkien’s deeply serious, deeply moralistic, deeply tragic universe. You could play a more “serious” game of Dungeons & Dragons even in the early days, and some presumably did, but you had to bend the mechanics to make them fit. The more light-hearted tone of The Hobbit might seem better suited, but wound up being a bit too light-hearted, almost as much fairy tale as red-blooded adventure fiction. Some of the book’s episodes, like Bilbo and the dwarves’ antics with the trolls near the beginning of the story, verge on cartoon slapstick, with none of the swashbuckling swagger of Dungeons & Dragons. I love it dearly — far more, truth be told, than I love The Lord of the Rings — but not for nothing was The Hobbit conceived and marketed as a children’s novel.

Gygax’s most detailed description of the influence of Tolkien on Dungeons & Dragons appeared in the March 1985 issue of Dragon magazine. There he explicated the dirty little secret of adapting Tolkien to gaming: that the former just wasn’t all that well-suited for the latter without lots of sweeping changes.

Considered in the light of fantasy action adventure, Tolkien is not dynamic. Gandalf is quite ineffectual, plying a sword at times and casting spells which are quite low-powered (in terms of the D&D game). Obviously, neither he nor his magic had any influence on the games. The Professor drops Tom Bombadil, my personal favorite, like the proverbial hot potato; had he been allowed to enter the action of the books, no fuzzy-footed manling would have needed to undergo the trials and tribulations of the quest to destroy the Ring. Unfortunately, no character of Bombadil’s power can enter the games either — for the selfsame reasons! The wicked Sauron is poorly developed, virtually depersonalized, and at the end blows away in a cloud of evil smoke… poof! Nothing usable there. The mighty Ring is nothing more than a standard ring of invisibility, found in the myths and legends of most cultures (albeit with a nasty curse upon it). No influence here, either…

What Gygax gestures toward here but doesn’t quite touch is that The Lord of the Rings is at bottom a spiritual if not overtly religious tale, Middle-earth a land of ineffable unknowables. It’s impossible to translate that ineffability into the mechanistic system of causes and effects required by a game like Dungeons & Dragons. For all that Gygax is so obviously missing the point of Tolkien’s work in the extract above — rather hilariously so, actually — it’s also true that no Dungeon Master could attempt something like, say, Gandalf’s transformation from Gandalf the Grey to Gandalf the White without facing a justifiable mutiny from the players. Games — at least this kind of game — demand knowable universes.

Gygax claimed that Tolkien was ultimately far more important to the game’s commercial trajectory than he was to its rules. He noted, accurately, that the trilogy’s popularity from 1965 on had created an appetite for more fantasy, in the form of both books and things that weren’t quite books. It was largely out of a desire to ride this bandwagon, Gygax claimed, that Chainmail, the proto-Dungeons & Dragons which Gygax had published in 1971, promised players right there on the cover that they could use it to “refight the epic struggles related by J.R.R. Tolkien, Robert E. Howard, and other fantasy writers.” Gygax said that “the seeming parallels and inspirations are actually the results of a studied effort to capitalize on the then-current ‘craze’ for Tolkien’s literature.” Questionable though it is how “studied” his efforts really were in this respect, it does seem fairly clear that the biggest leg-up Tolkien gave to Gygax and his early design partner Dave Arneson was in giving so many potential players a taste for epic fantasy in the first place.

At any rate, we can say for certain that, beyond prompting a grudge in Gary Gygax against all things Tolkien — which, like most Gygaxian grudges, would last the rest of its holder’s life — Zaentz’s legal threat had a relatively modest effect on the game of Dungeons & Dragons. Hobbits were hastily renamed “halflings,” a handful of other references were scrubbed away or obfuscated, and life went on.

More importantly for Zaentz, the case against TSR and a few other even smaller tabletop-game publishers had now established the precedent that this field was within his licensing purview. In 1982, Tolkien Enterprises, the umbrella corporation Zaentz had created to manage his portfolio, authorized a three-employee publisher called Iron Crown Enterprises, heretofore known for the would-be Dungeons & Dragons competitor Rolemaster, to adapt their system to Middle-earth. Having won the license by simple virtue of being the first publisher to work up the guts to ask for it, Iron Crown went on to create Middle-earth Role Playing. The system rather ran afoul of the problem we’ve just been discussing: that, inspiring though so many found the setting in the broad strokes, the mechanics — or perhaps lack thereof — of Middle-earth just didn’t lend themselves all that well to a game. Unsurprisingly in light of this, Middle-earth Role Playing acquired a reputation as a “game” that was more fun to read, in the form of its many lengthy and lovingly detailed supplements exploring the various corners of Middle-earth, than it was to actually play; some wags took to referring to the line as a whole as Encyclopedia Middle-earthia. Nevertheless, it lasted more than fifteen years, was translated into twelve languages, and sold over 250,000 copies in English alone, thereby becoming one of the most successful tabletop RPGs ever not named Dungeons & Dragons.

But by no means was it all smooth sailing for Iron Crown. During the game’s early years, which were also its most popular, they were very nearly undone by an episode that serves to illustrate just how dangerously confusing the world of Tolkien licensing could become. In 1985, Iron Crown decided to jump on the gamebook bandwagon with a line of paperbacks they initially called Tolkien Quest, but quickly renamed to Middle-earth Quest to tie it more closely to their extant tabletop RPG. Their take on the gamebook was very baroque in comparison to the likes of Choose Your Own Adventure or even Fighting Fantasy; the rules for “reading” their books took up thirty pages on their own, and some of the books included hex maps for plotting your movements around the world, thus rather blurring the line between gamebook and, well, game. Demian Katz, who operates the definitive Internet site devoted to gamebooks, calls the Middle-earth Quest line “among the most complex gamebooks ever published,” and he of all people certainly ought to know. Whether despite their complexity or because of it, the first three volumes in the line were fairly successful for Iron Crown — and then the legal troubles started.

The Tolkien estate decided that Iron Crown had crossed a line with their gamebooks, encroaching on the literary rights to Tolkien which belonged to them. Whether the gamebooks truly were more book or game is an interesting philosophical question to ponder — particularly so given that they were such unusually crunchy iterations on the gamebook concept. Questions of philosophical taxonomy aside, though, they certainly were “printed published matter” that looked for all the world like everyday books. Tolkien Enterprises wasn’t willing to involve themselves in a protracted legal showdown over something as low-stakes as a line of gamebooks. Iron Crown would be on their own in this battle, should they choose to wage it. Deciding the potential rewards weren’t worth the risks of trying to convince a judge who probably wouldn’t know Dungeons & Dragons from Mazes and Monsters that these things which looked like conventional paperback books were actually something quite different, Iron Crown pulled the line off the market and destroyed all copies as part of a settlement agreement. The episode may have cost them as much as $2.5 million. A few years later, the ever-dogged Iron Crown would attempt to resuscitate the line after negotiating a proper license with the Tolkien estate — no mean feat in itself; Christopher Tolkien in particular is famously protective of that portion of his father’s legacy which is his to protect — but by then the commercial moment of the gamebook in general had passed. The whole debacle would continue to haunt Iron Crown for a long, long time. In 2000, when they filed for Chapter 11 bankruptcy, they would state that the debt they had been carrying for almost fifteen years from the original gamebook settlement was a big part of the reason.

By that point, of course, the commercial heyday of the tabletop RPG was also long past. Indeed, already by the time that Iron Crown and Tolkien Enterprises had inked their first licensing deal back in 1982, computer-based fantasies, in the form of games like Zork, Ultima, and Wizardry, were threatening to eclipse the tabletop varieties that had done so much to inspire them. Here, perhaps more so even than in tabletop RPGs, the influence of Tolkien was pervasive. Designers of early computer games often appropriated Middle-earth wholesale, writing what amounted to interactive Tolkien fan fiction. The British text-adventure house Level 9, for example, first made their name with Colossal Adventure, a re-implementation of Will Crowther and Don Woods’s original Adventure with a Middle-earth coda tacked onto the end, thus managing the neat trick of extensively plagiarizing two different works in a single game. There followed two more Level 9 games set in Middle-earth, completing what they were soon proudly advertising, in either ignorance or defiance of the concept of copyright, as their Middle-earth Trilogy.

But the most famous constant devotee and occasional plagiarist of Tolkien among the early computer-game designers was undoubtedly Richard Garriott, who had discovered The Lord of the Rings and Dungeons & Dragons, the two influences destined more than any other to shape the course of his life, within six months of one another during his teenage years. Garriott called his first published game Akalabeth, after Tolkien’s Akallabêth, the name of a chapter in The Silmarillion, a posthumously published book of Middle-earth legends. The word means “downfall” in one of Tolkien’s invented languages, but Garriott chose it simply because he thought it sounded cool; his game otherwise had little to no explicit connection to Middle-earth. Regardless, the computer-game industry wouldn’t remain small enough that folks could get away with this sort of thing for very long. Akalabeth soon fell out of print, superseded by Garriott’s more complex series of Ultima games that followed it, while Level 9 was compelled to scrub the erstwhile Middle-earth Trilogy free of Tolkien and re-release it as the Jewels of Darkness Trilogy.

In the long run, the influence of Tolkien on digital games would prove subtler but also even more pervasive than these earliest forays into blatant plagiarism would imply. Richard Garriott may have dropped the Tolkien nomenclature from his subsequent games, but he remained thoroughly inspired by the example of Tolkien, that ultimate fantasy world-builder, when he built the world of Britannia for his Ultima series. Of course, there were obvious qualitative differences between Middle-earth and Britannia. How could there not be? One was the creation of an erudite Oxford don, steeped in a lifetime's worth of study of classical and medieval literature; the other was the creation of a self-described non-reader barely out of high school. Nowhere is the difference starker than in the area of language, Tolkien's first love. Tolkien invented entire languages from scratch, complete with grammars and pronunciation charts; Garriott substituted a rune for each letter in the English alphabet and seemed to believe he had done something equivalent. Garriott's clumsy mishandling of Elizabethan English, meanwhile, all "thees" and "thous" in places where the formal "you" should be used, is enough to make any philologist turn over in his grave. But his heart was in the right place, and despite its creator's limitations Britannia did take on a life of its own over the course of many Ultima iterations. If there is a parallel in computer gaming to what The Lord of the Rings and Middle-earth came to mean to fantasy literature, it must be Ultima and its world of Britannia.

In addition to the unlicensed knock-offs that were gradually driven off the market during the early 1980s and the more abstracted homages that replaced them, there was also a third category of Tolkien-derived computer games: that of licensed products. The first and only such licensee during the 1980s was Melbourne House, a book publisher turned game maker located in far-off Melbourne, Australia. Whether out of calculation or happenstance, Melbourne House approached the Tolkien estate rather than Tolkien Enterprises in 1982 to ask for a license. They were duly granted the right to make a text-adventure adaptation of The Hobbit, subject to certain conditions, very much in character for Christopher Tolkien, intended to ensure respect for The Hobbit's status as a literary work; most notably, they would be required to include a paperback copy of the novel with the game. In a decision he would later come to regret, Saul Zaentz elected to cede this ground to the Tolkien estate without a fight, apparently deeming a computer game too intangible a thing to be worth quibbling over. Another uneasy, tacit, yet surprisingly enduring precedent was thus set: Tolkien Enterprises would have control of Tolkien tabletop games, while the Tolkien estate would have control of Tolkien videogames. Zaentz's cause for regret would come as he watched the digital-gaming market explode into tens and then hundreds of times the size of the tabletop market.

In fact, that first adaptation of The Hobbit played a role in that very process. The game became a sensation in Europe, where playing it became a rite of passage for a generation of gamers, and a substantial hit in the United States as well. It went on to become almost certainly the best-selling single text adventure ever made, with worldwide sales that may have exceeded half a million units. I've written at length about the Hobbit text adventure earlier, so I'll refer you back to that article rather than describe its bold innovations and weird charm here. Otherwise, suffice to say that The Hobbit's success proved, if anyone still doubted it, that licenses in computer games worked in commercial terms, no matter how much some might carp about the lack of originality they represented.

Still, Melbourne House appears to have had some trepidation about tackling the greater challenge of adapting The Lord of the Rings to the computer. The reasons are understandable: the simple quest narrative that was The Hobbit, a book actually subtitled There and Back Again, read like a veritable blueprint for a text adventure, while the epic tale of spiritual, military, and political struggle that was The Lord of the Rings represented, to say the least, a more substantial challenge for its would-be adapters. Melbourne House's first anointed successor to The Hobbit thus became Sherlock, a text adventure based on another literary property entirely. They didn't finally return to Middle-earth until 1986, four years after The Hobbit, when they made The Fellowship of the Ring into a text adventure. Superficially, the new game played much like The Hobbit, but much of the charm was gone, with quirks that had seemed delightful in the earlier game now just seeming annoying. Even had The Fellowship of the Ring been a better game, by 1986 it was getting late in the day for text adventures, even text adventures like this one with illustrations. Reviews were lukewarm at best. Nevertheless, Melbourne House kept doggedly at the task of completing the story of Frodo and the One Ring, releasing The Shadow of Mordor in 1987 and The Crack of Doom in 1989. All of these games went largely unloved in their day, and remain so in our own.

In a belated attempt to address the formal mismatch between the epic narrative of The Lord of the Rings and the granular approach of the text adventure, Melbourne House released War in Middle-earth in 1988. Partially designed by Mike Singleton, and drawing obvious inspiration from his older classic The Lords of Midnight, it was a strategy game which let the player refight the entirety of the War of the Ring, on the level of both armies and individual heroes. The Lords of Midnight had been largely inspired by Singleton’s desire to capture the sweep and grandeur of The Lord of the Rings in a game, so in a sense this new project had him coming full circle. But, just as Melbourne House’s Lord of the Rings text adventures had lacked the weird fascination of The Hobbit, War in Middle-earth failed to rise to the heights of The Lords of Midnight, despite enjoying the official license the latter had lacked.

As the 1980s came to a close, then, the Tolkien license was beginning to rival the similarly demographically perfect Star Trek license for the title of the most misused and/or underused — take your pick — in computer gaming. Tolkien Enterprises, normally the more commercially savvy and aggressive of the two Tolkien licensors, had ceded that market to the Tolkien estate, who seemed content to let Melbourne House dawdle along with an underwhelming and little-noticed game every year or two. At this point, though, another computer-game developer would pick up the mantle from Melbourne House and see if they could manage to do something less underwhelming with it. We'll continue with that story next time.

Before we get to that, though, we might take a moment to think about how different things might have been had the copyrights to Tolkien's works been allowed to expire with their creator. There is some evidence that Tolkien himself held to this as the fairest course. In the late 1950s, in a letter to one of the first people to approach him about making a movie out of The Lord of the Rings, he expressed his wish that any movie made during his lifetime not deviate too far from the books, citing as an example of what he didn't want to see the 1950 movie of H. Rider Haggard's Victorian adventure novel King Solomon's Mines and the many liberties it took with its source material. "I am not Rider Haggard," he wrote. "I am not comparing myself with that master of Romance, except in this: I am not dead yet. When the film of King Solomon's Mines was made, it had already passed, one might say, into the public property of the imagination. The Lord of the Rings is still the vivid concern of a living person, and is nobody's toy to play with." Can we read into this an implicit assumption that The Lord of the Rings would become part of "the public property of the imagination" after its own creator's death? If so, things turned out a little differently than he thought they would. A "property of the imagination" Middle-earth has most certainly become. It's the "public" part that remains problematic.

(Sources: the books Designers & Dragons Volume 1 and Volume 2 by Shannon Appelcline, Tolkien’s Triumph: The Strange History of The Lord of the Rings by John Lennard, The Frodo Franchise: The Lord of the Rings and Modern Hollywood by Kristin Thompson, Unfiltered: The Complete Ralph Bakshi by John M. Gibson, Playing at the World by Jon Peterson, and Dungeons and Dreamers: The Rise of Computer Game Culture from Geek to Chic by Brad King and John Borland; Dragon Magazine of March 1985; Popular Computing Weekly of December 30 1982; The Times of December 15 2002. Online sources include Janet Brennan Croft’s essay “Three Rings for Hollywood” and The Hollywood Reporter‘s archive of a 2012 court case involving Tolkien’s intellectual property.)
