A Tale of the Mirror World, Part 2: From Mainframes to Micros

The BESM-6

Seen from certain perspectives, Soviet computer hardware as an innovative force of its own peaked as early as 1968, the year the first BESM-6 computer was powered up. The ultimate evolution of the line of machines that had begun with Sergei Lebedev’s original MESM, the BESM-6 was the result of a self-conscious attempt on the part of Lebedev’s team at ITMVT to create a world-class supercomputer. By many measures, they succeeded. Despite still being based on transistors rather than the integrated circuits that were becoming more and more common in the West, the BESM-6’s performance was superior to all but the most powerful of its Western peers. The computers generally acknowledged as the fastest in the world at the time, a line of colossi built by Control Data in the United States, were just a little over twice as fast as the BESM-6, which had nothing whatsoever to fear from the likes of the average IBM mainframe. And in comparison to other Soviet computers, the BESM-6 was truly a monster, ten times as fast as anything the country had managed to produce before. In its way, the BESM-6 was as amazing an achievement on Lebedev’s part as had been the MESM almost two decades earlier. Using all home-grown technology, Lebedev and his people had created a computer almost any Western computer lab would have been proud to install.

At the same time, though, the Soviet computer industry’s greatest achievement to date was, almost paradoxically, symbolic of all its limitations. Sparing no expense nor effort to build the best computer they possibly could, Lebedev’s team had come close to but not exceeded the Western state of the art, which in the meantime continued marching inexorably forward. All the usual inefficiencies of the Soviet economy conspired to prevent the BESM-6 from becoming a true game changer rather than a showpiece. BESM-6s would trickle only slowly out of the factories; only about 350 of them would be built over the course of the next 20 years. They became useful tools for the most well-heeled laboratories and military bases, but there simply weren’t enough of them to implement even a fraction of the cybernetics dream.

A census taken in January of 1970 held that there were just 5500 computers operational in the Soviet Union, as compared with 62,500 in the United States and 24,000 in Western Europe. Even if one granted that the BESM-6 had taken strides toward solving the problem of quality, the problem of quantity had yet to be addressed. Advanced though the BESM-6 was in so many ways, for Soviet computing in general the same old story held sway. A Rand Corporation study from 1970 noted that “the Soviets are known to have designed micro-miniaturized circuits far more advanced than any observed in Soviet computers.” The Soviet theory of computing, in other words, continued to far outstrip the country’s ability to make practical use of it. “In the fundamental design of hardware and software the Russian computer art is as clever as that to be found anywhere in the world,” said an in-depth Scientific American report on the state of Soviet computing from the same year. “It is in the quality of production, not design, that the USSR is lagging.”

One way to build more computers more quickly, the Moscow bureaucrats concluded, was to share the burden among their partners (more accurately known to the rest of the world as their vassal states) in the Warsaw Pact. Several member states — notably East Germany, Czechoslovakia, and Hungary — had fairly advanced electronics industries whose capabilities in many areas exceeded those of the Soviets, not least because their geographical locations left them relatively less isolated from the West. At the first conference of the International Center of Scientific and Technical Information in January of 1970, following at least two years of planning and negotiating, the Soviet Union signed an agreement with East Germany, Czechoslovakia, Bulgaria, Hungary, Poland, and Romania to make the first full-fledged third-generation computer — one based on integrated circuits rather than transistors — to come out of Eastern Europe. The idea of dividing the labor of producing the new computer was taken very literally. In a testimony to the “from each according to his means” tenet of communism, Poland would make certain ancillary processors, tape readers, and printers; East Germany would make other peripherals; Hungary would make magnetic memories and some systems software; Czechoslovakia would make many of the integrated circuits; Romania and Bulgaria, the weakest sisters in terms of electronics, would make various mechanical and structural odds and ends; and the Soviet Union would design the machines, make the central processors, and be the final authority on the whole project, which was dubbed “Ryad,” a word meaning “row” or “series.”

The name was no accident. On the contrary, it was key to the nature of the computer — or, rather, computers — the Soviet Union and its partners were now planning to build. With the BESM-6 having demonstrated that purely home-grown technology could get their countries close to the Western state of the art but not beyond it, they would give up on trying to outdo the West. Instead they would take the West’s best, most proven designs and clone them, hoping to take advantage of the eye toward mass production that had been baked into them from the start. If all went well, 35,000 Ryad computers would be operational across the Warsaw Pact by 1980.

In a sense, the West had made it all too easy for them, given Project Ryad all too tempting a target for cloning. In 1964, in one of the most important developments in the history of computers, IBM had introduced a new line of mainframes called the System/360. The effect it had on the mainframe industry of the time was very similar to the one which the IBM PC would have on the young microcomputer industry 17 years later: it brought order and stability to what had been a confusion of incompatible machines. For the first time with the System/360, IBM created not just a single machine or even line of machines but an entire computing ecosystem built around hardware and software compatibility across a wide swathe of models. The effect this had on computing in the West is difficult to overstate. There was, for one thing, soon a large enough installed base of System/360 machines that companies could make a business out of developing software and selling it to others; this marked the start of the software industry as we’ve come to know it today. Indeed, our modern notion of computing platforms really begins with the System/360. Dag Spicer of the Computer History Museum calls it IBM’s Manhattan Project. Even at the time, IBM’s CEO Thomas Watson Jr. called it the most important product in his company’s already storied history, a distinction which is challenged today only by the IBM PC.

The System/360 ironically presaged the IBM PC in another respect: as a modular platform built around well-documented standards, it was practically crying out to be cloned by companies that might have trailed IBM in terms of blue-sky technical innovation, but who were more than capable of copying IBM’s existing technology and selling it at a cheaper price. Companies like Amdahl — probably the nearest equivalent to IBM’s later arch-antagonist Compaq in this case of parallel narratives — lived very well on mainframes compatible with those of IBM, machines which were often almost as good as IBM’s best but were always cheaper. None too pleased about this, IBM responded with various sometimes shady countermeasures which landed them in many years of court cases over alleged antitrust violations. (Yes, the histories of mainframe computing and PC computing really do run on weirdly similar tracks.)

If the System/360 from the standpoint of would-be Western cloners was an unlocked door waiting to be opened, from the standpoint of the Soviet Union, which had no intellectual-property rules whatsoever that applied to the West, the door was already flung wide. Thus, instead of continuing down the difficult road of designing its high-end computers from scratch, the Soviet Union decided to stroll on through.

An early propaganda shot shows a Ryad machine in action.

There’s much that could be said about what this decision symbolized for Soviet computing and, indeed, for Soviet society in general. For all the continuing economic frustrations lurking below the surface of the latest Pravda headlines, Khrushchev’s rule had been the high-water mark of Soviet achievement, when the likes of the Sputnik satellite and Yuri Gagarin’s flight into space had seemed to prove that communism really could go toe-to-toe with capitalism. But the failure to get to the Moon before the United States among other disappointments had taken much of the shine off that happy thought.[1] In the rule of Leonid Brezhnev, which began with Khrushchev’s unceremonious toppling from power in October of 1964, the Soviet Union gradually descended into a lazy decrepitude that gave only the merest lip service to the old spirit of revolutionary communism. Corruption had always been a problem, but now, taking its cue from its new leader, the country became a blatant oligarchy. While Brezhnev and his cronies collected dachas and cars, their countryfolk at times literally starved. Perhaps the greatest indictment of the system Brezhnev perpetuated was the fact that by the 1970s the Soviet Union, in possession of more arable land than any nation on earth and with one of the sparsest populations of any nation in relation to its land mass, somehow still couldn’t feed itself, being forced to import millions upon millions of tons of wheat and other basic foodstuffs every year. Thus Brezhnev found himself in the painful position, all too familiar to totalitarian leaders, of being in some ways dependent on the good graces of the very nations he denigrated.

In the Soviet Union of Leonid Brezhnev, bold ideas like the dream of cybernetic communism fell decidedly out of fashion in favor of nursing along the status quo. Every five years, the Party Congress reauthorized ongoing research into what had become known as the “Statewide Automated Management System for Collection and Processing of Information for the Accounting, Planning, and Management of the National Economy” (whew!), but virtually nothing got done. The bureaucratic infighting that had always negated the perceived advantages of communism — as perceived optimistically by the Soviets, and with great fear by the West — was more pervasive than ever in these late years. “The Ministry of Metallurgy decides what to produce, and the Ministry of Supplies decides how to distribute it. Neither will yield its power to anyone,” said one official. Another official described each of the ministries as being like a separate government unto itself. Thus there might not be enough steel to make the tractors the country’s farmers needed to feed its people one year; the next, the steel might pile up to rust on railway sidings while the erstwhile tractor factories were busy making something else.

Amidst all the infighting, Project Ryad crept forward, behind schedule but doggedly determined. This new face of computing behind the Iron Curtain made its public bow at last in May of 1973, when six of the seven planned Ryad “Unified System” models were in attendance at the Exposition of Achievements of the National Economy in Moscow. All were largely hardware- and software-compatible with the IBM System/360 line. Even the operating systems that were run on the new machines were lightly modified copies of Western operating systems like IBM’s DOS/360. Project Ryad and its culture of copying would come to dominate Soviet computing during the remainder of the 1970s. A Rand Corporation intelligence report from 1978 noted that “by now almost everything offered by IBM to 360 installations has been acquired” by the Soviet Union.

Project Ryad even copied the white lab coats worn by the IBM “priesthood” (and gleefully scorned by the scruffier hackers who worked on the smaller but often more innovative machines produced by companies like DEC).

During the five years after the Ryad machines first appeared, IBM sold about 35,000 System/360 machines, while the Soviet Union and its partners managed to produce about 5000 Ryad machines. Still, compared to what the situation had been before, 5000 reasonably modern machines was real progress, even if the ongoing inefficiencies of the Eastern Bloc economies kept Project Ryad from ever reaching more than a third of its stated yearly production goals. (A telling sign of the ongoing disparities between West and East was the way that all Western estimates of future computer production tended to vastly underestimate the reality that actually arrived, while Eastern estimates did just the opposite.) If it didn’t exactly allow Eastern Europe to make strides toward any bold cybernetic future — on the contrary, the Warsaw Pact economies continued to limp along in as desultory a fashion as ever — Project Ryad did do much to keep its creator nations from sliding still further into economic dysfunction. Unsurprisingly, a Ryad-2 generation of computers was soon in the works, cloning the System/370, IBM’s anointed successor to the System/360 line. Other projects cloned the DEC PDP line of machines, smaller so-called “minicomputers” suitable for more modest — but, at least in the West, often more interesting and creative — tasks than the hulking mainframes of IBM. Soviet watcher Seymour Goodman summed up the current situation in an article for the journal World Politics in 1979:

The USSR has learned that the development of its national computing capabilities on the scale it desires cannot be achieved without a substantial involvement with the rest of the world’s computing community. Its considerable progress over the last decade has been characterized by a massive transfer of foreign computer technology. The Soviet computing industry is now much less isolated than it was during the 1960s, although its interfaces with the outside world are still narrowly defined. It would appear that the Soviets are reasonably content with the present “closer but still at a distance” relationship.

Reasonable contentment with the status quo would continue to be the Kremlin’s modus operandi in computing, as in most other things. The fiery rhetoric of the past had little relevance to the morally and economically bankrupt Soviet state of the 1970s and 1980s.

Even in this gray-toned atmosphere, however, the old Russian intellectual tradition remained. Many of the people designing and programming the nation’s computers barely paid attention to the constant bureaucratic turf wars. They’d never thought that much about philosophical abstractions like cybernetics, which had always been more a brainchild of the central planners and social theorists than the people making the Soviet Union’s extant computer infrastructure, such as it was, work. Like their counterparts in the West, Soviet hackers were more excited by a clever software algorithm or a neat hardware re-purposing than they were by high-flown social theory. Protected by the fact that the state so desperately needed their skills, they felt free at times to display an open contempt for the supposedly inviolate underpinnings of the Soviet Union. Pressed by his university’s dean to devote more time to the ideological studies that were required of every student, one young hacker said bluntly that “in the modern world, with its super-speedy tempo of life, time is too short to study even more necessary things” than Marxism.

Thus in the realm of pure computing theory, where advancement could still be made without the aid of cutting-edge technology, the Soviet Union occasionally made news on the world stage with work evincing all the originality that Project Ryad and its ilk so conspicuously lacked. In October of 1978, a quiet young researcher at the Moscow Computer Center of the Soviet Academy of Sciences named Leonid Genrikhovich Khachiyan submitted a paper to his superiors with the uninspiring — to non-mathematicians, anyway — title of “Polynomial Algorithms in Linear Programming.” Following its publication in the Soviet journal Reports of the Academy of Sciences, the paper spread like wildfire across the international community of mathematics and computer science, even garnering a write-up in the New York Times in November of 1979. (Such reports were always written in a certain tone of near-disbelief, of amazement that real thinking was going on in the Mirror World.) What Khachiyan’s paper actually said was almost impossible to clearly explain to people not steeped in theoretical mathematics, but the New York Times did state that it had the potential to “dramatically ease the solution of problems involving many variables that up to now have required impossibly large numbers of separate computer calculations,” with potential applications in fields as diverse as economic planning and code-breaking. In other words, Khachiyan’s new algorithms, which have indeed stood the test of time in many and diverse fields of practical application, can be seen as a direct response to the very lack of computing power with which Soviet researchers constantly had to contend. Sometimes less really could be more.
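
For readers who aren’t mathematicians, it may help to see what a linear program actually looks like. It is simply the problem of optimizing a linear objective subject to linear constraints, the kind of resource-allocation arithmetic that Soviet planners faced on an enormous scale. Below is a minimal, purely illustrative sketch in modern Python; it leans on SciPy’s off-the-shelf solver, which uses simplex and interior-point style methods rather than Khachiyan’s ellipsoid algorithm, and every number in it is invented for the example.

```python
# A tiny linear program of the kind Khachiyan's result addresses: maximize
# profit from two products subject to limited machine time and raw material.
# (All figures here are invented purely for illustration.)
from scipy.optimize import linprog

# linprog minimizes, so we negate the objective to maximize 3x + 5y.
c = [-3, -5]

# Constraints in the form A_ub @ [x, y] <= b_ub:
#   2x + 1y <= 100   (machine hours available)
#   1x + 3y <= 90    (tons of raw material available)
A_ub = [[2, 1],
        [1, 3]]
b_ub = [100, 90]

result = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print("optimal plan:", result.x)        # how much of each product to make
print("maximum profit:", -result.fun)   # undo the sign flip
```

Khachiyan’s contribution was not a practical solver like this one but a proof that such problems can always be solved in polynomial time, a theoretical guarantee on which the later interior-point methods would build.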

As Khachiyan’s discoveries were spreading across the world, the computer industries of the West were moving into their most world-shaking phase yet. A fourth generation of computers, defined by the placing of the “brain” of the machine, or central processing unit, all on a single chip, had arrived. Combined with a similar miniaturization of the other components that went into a computer, this advancement meant that people were able for the first time to buy these so-called “microcomputers” to use in their homes — to write letters, to write programs, to play games. Likewise, businesses could now think about placing a computer on every single desk. Still relatively unremarked by devotees of big-iron institutional computing as the 1970s expired, over the course of the 1980s and beyond the PC revolution would transform the face of business and entertainment, empowering millions of people in ways that had heretofore been unimaginable. How was the Soviet Union to respond to this?

Anatoly Alexandrov, the president of the Soviet Academy of Sciences, responded with a rhetorical question: “Have [the Americans] forgotten that problems of no less complexity, such as the creation of the atomic bomb or space-rocket technology… [we] were able to solve ourselves without any help from abroad, and in a short time?” Even leaving aside the fact that the Soviet atomic bomb was itself built largely using stolen Western secrets, such words sounded like they heralded a new emphasis on original computer engineering, a return to the headier days of Khrushchev. In reality, though, the old ways were difficult to shake loose. The first Soviet microprocessor, the KP580BM80A of 1977, had its “inspiration” couched inside its very name: the Intel 8080, which, along with the Motorola 6800, was one of the two chips that had launched the PC revolution in the West in 1974.

Yet in the era of the microchip the Soviet Union ran into problems continuing the old practices. While technical schematics for chips much newer and more advanced than the Intel 8080 were soon readily enough available, they were of limited use in Soviet factories, which lacked the equipment to stamp out the ever more miniaturized microchip designs coming out of Western companies like Intel.

One solution might have been for the Soviets to hold their noses and outright buy the chip-fabricating equipment they needed from the West. In earlier decades, such deals had hardly been unknown, although they tended to be kept quiet by both parties for reasons of pride (on the Eastern side) and public relations (on the Western side). But, unfortunately for the Soviets, the West had finally woken up to the reality that microelectronics were as critical to a modern war machine as missiles and fighter planes. A popular story that circulated around Western intelligence circles for years involved Viktor Belenko, a Soviet pilot who went rogue, flying his state-of-the-art MiG-25 fighter jet to a Japanese airport and defecting there in 1976. When American engineers examined his MiG-25, they found a plane that was indeed a technological marvel in many respects, able to fly faster and higher than any Western fighter. Yet its electronics used unreliable vacuum tubes rather than transistors, much less integrated circuits — a crippling disadvantage on the field of battle. The contrast with the West, which had left the era of the vacuum tube behind almost two decades before, was so extreme that there was some discussion of whether Belenko might be a double agent, his whole defection a Soviet plot to convince the West that they were absurdly far behind in terms of electronics technology. Sadly for the Soviets, the vacuum tubes weren’t the result of any elaborate KGB plot, but rather just a backward electronics industry.

In 1979, the Carter Administration began to take a harder line against the Soviet Union, pushing through Congress as part of the Export Administration Act a long list of restrictions on what sorts of even apparently non-military computer technology could legally be sold to the Eastern Bloc. Ronald Reagan then enforced and extended these restrictions upon becoming president in 1981, working with the rest of the West in what was known as the Coordination Committee on Export Controls, or COCOM — a body that included all of the NATO member nations, plus Japan and Australia — to present a unified front. By this point, with the Cold War heading into its last series of dangerous crises thanks to Reagan’s bellicosity and the Soviet invasion of Afghanistan, the United States in particular was developing a real paranoia about the Soviet Union’s long-standing habits of industrial espionage. The paranoia was reflected in CIA director William Casey’s testimony to Congress in 1982:

The KGB has developed a large, independent, specialized organization which does nothing but work on getting access to Western science and technology. They have been recruiting about 100 young scientists and engineers a year for the last 15 years. They roam the world looking for technology to pick up. Back in Moscow, there are 400 to 500 assessing what they might need and where they might get it — doing their targeting and then assessing what they get. It’s a very sophisticated and far-flung organization.

By the mid-1980s, restrictions on Western computer exports to the East were quite draconian, a sometimes bewildering maze of regulations to be navigated: 8-bit microcomputers could be exported but 16-bit microcomputers couldn’t be; a single-user accounting package could be exported but not a multi-user version; a monochrome monitor could be exported but not a color monitor.

Even as the barriers between East and West were being piled higher than ever, Western fascination with the Mirror World remained stronger than ever. In August of 1983, an American eye surgeon named Leo D. Bores, organizer of the first joint American/Soviet seminar in medicine in Moscow and a computer hobbyist in his spare time, had an opportunity to spend a week with what was billed as the first ever general-purpose Soviet microcomputer. It was called the “Agat” — just a pretty name, being Russian for the mineral agate — and it was largely a copy — in Bores’s words a bad copy — of the Apple II. His report, appearing belatedly in the November 1984 issue of Byte magazine, proved unexpectedly popular among the magazine’s readership.

The Agat computer

The Agat was, first of all, much, much bigger and heavier than a real Apple II; Bores generously referred to it as “robust.” It was made in a factory more accustomed to making cars and trucks, and, indeed, it looked much as one might imagine a computer built in an automotive plant would look. The Soviets had provided software for displaying text in Cyrillic, albeit with some amount of flicker, using the Apple II’s bitmap-graphics modes. The keyboard also offered Cyrillic input, thus solving, after a fashion anyway, a big problem in adapting Western technology to Soviet needs. But that was about the extent to which the Agat impressed. “The debounce circuitry [on the keyboard] is shaky,” noted Bores, “and occasionally a stray character shows up, especially during rapid data entry. The elevation of the keyboard base (about 3.5 centimeters) and the slightly steeper-than-normal board angle would cause rapid fatigue as well as wrist pain after prolonged use.” Inside the case was a “nightmarish wiring maze.” Rather than being built into a single motherboard, the computer’s components were all mounted on separate breadboards cobbled together by all that cabling, the way Western engineers worked only in the very early prototyping stage of hardware development. The Soviet clone of the MOS 6502 chip found at the heart of the Agat was as clumsily put together as the rest of the machine, spanning across several breadboards; thus this “first Soviet microcomputer” arguably wasn’t really a microcomputer at all by the strict definition of the term. The kicker was the price: about $17,000. As that price would imply, the Agat wasn’t available to private citizens at all, being reserved for use in universities and other centers of higher learning.

With the Cold War still going strong, Byte‘s largely American readership was all too happy to jeer at this example of Soviet backwardness, which certainly did show a computer industry lagging years behind the West. That said, the situation wasn’t quite as bad as Bores’s experience would imply. It’s very likely that the machine he used was a pre-production model of the Agat, and that many of the problems he encountered were ironed out in the final incarnation.

For all the engineering challenges, the most important factor impeding truly personal computing in the Soviet Union was more ideological than technical. As so many of the visionaries who had built the first PCs in the West had so well recognized, these were tools of personal empowerment, of personal freedom, the most exciting manifestation yet of Norbert Wiener’s original vision of cybernetics as a tool for the betterment of the human individual. For an Eastern Bloc still tossing and turning restlessly under the blanket of collectivism, this was anathema. Poland’s propaganda ministry made it clear that they at least feared the existence of microcomputers far more than they did their absence: “The tendency in the mass-proliferation of computers is creating a variety of ideological endangerments. Some programmers, under the inspiration of Western centers of ideological subversion, are creating programs that help to form anti-communistic political consciousness.” In countries like Poland and the Soviet Union, information freely exchanged could be a more potent weapon than any bomb or gun. For this reason, photocopiers had been guarded with the same care as military hardware for decades, and even owning a typewriter required a special permit in many Warsaw Pact countries. These restrictions had led to the long tradition of underground defiance known euphemistically simply as “samizdat,” or self-publishing: the passing of “subversive” ideas from hand to hand as one-off typewritten or hand-written texts. Imagine what a home computer with a word processor and a printer could mean for samizdat. The government of Romania was so terrified by the potential of the computer for spreading freedom that it banned the very word for a time. Harry R. Meyer, an American Soviet watcher with links to the Russian expatriate community, made these observations as to the source of such terror:

I can imagine very few things more destructive of government control of information flow than having a million stations equivalent to our Commodore 64 randomly distributed to private citizens, with perhaps a thousand in activist hands. Even a lowly Commodore 1541 disk drive can duplicate a 160-kilocharacter disk in four or five minutes. The liberating effect of not having to individually enter every character every time information is to be shared should dramatically increase the flow of information.

Information distributed in our society is mainly on paper rather than magnetic media for reasons of cost-effectiveness: the message gets to more people per dollar. The bottleneck of samizdat is not money, but time. If computers were available at any cost, it would be more effective to invest the hours now being spent in repetitive typing into earning cash to get a computer, no matter how long it took.

If I were circulating information the government didn’t like in the Soviet Bloc, I would have little interest in a modem — too easily monitored. But there is a brisk underground trade in audio cassettes of Western music. Can you imagine the headaches (literal and figurative) for security agents if text files were transported by overwriting binary onto one channel in the middle of a stereo cassette of heavy-metal music? One would hope it would be less risk to carry such a cassette than a disk, let alone a compromising manuscript.
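
Meyer’s cassette scheme is easier to picture with a sketch. What follows is a purely hypothetical, modern illustration in Python of the general idea he describes: text encoded as simple two-tone (FSK-style) audio and written into one channel of a stereo WAV file, leaving the other channel for music. Nothing like this code existed at the time, and the tone frequencies, data rate, and file format here are arbitrary assumptions chosen only to show the principle.

```python
# Hypothetical sketch of the scheme Meyer describes: text encoded as
# two-tone (FSK-style) audio in one channel of a stereo recording, with
# the other channel left free for music. All parameters are arbitrary.
import math
import struct
import wave

SAMPLE_RATE = 44100   # samples per second
BAUD = 300            # bits per second; slow but robust, like early cassette formats
FREQ_ZERO = 1200      # tone (Hz) representing a 0 bit
FREQ_ONE = 2400       # tone (Hz) representing a 1 bit

def tone(freq, n_samples):
    """Generate n_samples of a sine wave at the given frequency."""
    return [math.sin(2 * math.pi * freq * i / SAMPLE_RATE) for i in range(n_samples)]

def encode_text_as_stereo_wav(text, path):
    samples_per_bit = SAMPLE_RATE // BAUD
    data = []
    for byte in text.encode("ascii"):
        for bit_index in range(8):                      # least significant bit first
            bit = (byte >> bit_index) & 1
            data.extend(tone(FREQ_ONE if bit else FREQ_ZERO, samples_per_bit))

    # The left channel here is silence standing in for the music track.
    left = [0.0] * len(data)

    with wave.open(path, "wb") as wav:
        wav.setnchannels(2)
        wav.setsampwidth(2)                             # 16-bit samples
        wav.setframerate(SAMPLE_RATE)
        frames = bytearray()
        for l, r in zip(left, data):
            frames += struct.pack("<hh", int(l * 32767), int(r * 32767))
        wav.writeframes(bytes(frames))

encode_text_as_stereo_wav("Samizdat text goes here.", "smuggled.wav")
```

The hobbyist tape formats of the era, such as the Kansas City Standard, worked on much the same two-tone principle, which is why smuggling text inside music would have been technically straightforward for anyone with a home computer.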

If we accept Meyer’s arguments, there’s an ironic follow-on argument to be made: that, in working so hard to keep the latest versions of these instruments of freedom out of the hands of the Soviet Union and its vassal states, the COCOM was actually hurting rather than helping the cause of freedom. As many a would-be autocrat has learned to his dismay in the years since, it’s all but impossible to control the free flow of information in a society with widespread access to personal-computing technology. The new dream of personal computing, of millions of empowered individuals making things and communicating, stood in marked contrast to the Soviet cyberneticists’ old dream of perfect, orderly, top-down control implemented via big mainframe computers. For the hard-line communists, the dream of personal computing sounded more like a nightmare. The Soviet Union faced a stark dilemma: embrace the onrushing computer age despite the loss of control it must imply, or accept that it must continue to fall further and further behind the West. A totalitarian state like the Soviet Union couldn’t survive alongside the free exchange of ideas, while a modern economy couldn’t survive without the free exchange of ideas.

Thankfully for everyone involved, a man now stepped onto the stage who was willing to confront the seemingly insoluble contradictions of Soviet society. On March 11, 1985, Mikhail Gorbachev was named General Secretary of the Communist Party, the eighth and, as it would transpire, the last leader of the Soviet Union. He almost immediately signaled a new official position toward computing, as he did toward so many other things. In one of his first major policy speeches just weeks after assuming power, Gorbachev announced a plan to put personal computers into every classroom in the Soviet Union.

Unlike the General Secretaries who had come before him, Gorbachev recognized that the problems of rampant corruption and poor economic performance which had dogged the Soviet Union throughout its existence were not obstacles external to the top-down collectivist state envisioned by Vladimir Lenin but its inevitable results. “Glasnost,” the introduction of unprecedented levels of personal freedom, and “Perestroika,” the gradual replacement of the planned economy with a more market-oriented version permitting a degree of private ownership, were his responses. These changes would snowball in a way that no one — certainly not Gorbachev himself — had quite anticipated, leading to the effective dissolution of the Warsaw Pact and the end of the Cold War before the 1980s were over. Unnerved by it all though he was, Gorbachev, to his everlasting credit, let it happen, rejecting the calls for a crackdown like those that had ended the Hungarian Revolution of 1956 and the Prague Spring of 1968 in such heartbreak and tragedy.

The Elektronika BK-0010

Very early in Gorbachev’s tenure, well before its full import had even started to become clear, it became at least theoretically possible for the first time for individuals in the Soviet Union to buy a private computer of their own for use in the home. Said opportunity came in the form of the Elektronika BK-0010. Costing about one-fifth as much as the Agat, the BK-0010 was a predictably slapdash product in some areas, such as its horrid membrane keyboard. In other ways, though, it impressed far more than anyone had a right to expect. The BK-0010, the very first Soviet microcomputer designed to be a home computer, was a 16-bit machine, placing it in this respect at least ahead of the typical Western Apple II, Commodore 64, or Sinclair Spectrum of the time. The microprocessor inside it was a largely original creation, borrowing the instruction set from the DEC PDP-11 line of minicomputers but borrowing its actual circuitry from no one. The Soviets’ struggles to stamp out the ever denser circuitry of the latest Western CPUs in their obsolete factories were ironically forcing them to be more innovative, to start designing chips of their own which their factories could manage to produce.

Supplies of the BK-0010 were always chronically short and the waiting lists long, but as early as 1985 a few lucky Soviet households could boast real, usable computers. Those who were less lucky might be able to build a bare-bones computer from schematics published in do-it-yourself technology magazines like Tekhnika Molodezhi, the Soviet equivalent to Popular Electronics. Just as had happened in the United States, Britain, and many other Western countries, a vibrant culture of hobbyist computing spread across the Soviet Union and the other Warsaw Pact nations. In time, as the technology advanced in rhythm with Perestroika, these hobbyists would become the founding spirits of a new Soviet computer industry — a capitalist computer industry. “These are people who have felt useless — useless — all their lives!” said American business pundit Esther Dyson after a junket to a changing Eastern Europe. “Do you know what it is like to feel useless all your life? Computers are turning many of these people into entrepreneurs. They are creating the entrepreneurs these countries need.” As one glance at the flourishing underground economy of the Soviet Union of any era had always been enough to prove, Russians had a natural instinct for capitalism. Now, they were getting the chance to exercise it.

In August of 1988, in a surreal sign of these changing times, a delegation including many senior members of the Soviet Academy of Sciences — the most influential theoretical voice in Soviet computing dating back to the early 1950s — arrived in New York City on a mission that would have been unimaginable just a couple of years before. To a packed room of technology journalists — the Mirror World remained as fascinating as ever — they demonstrated a variety of software which they hoped to sell to the West: an equation solver; a database responsive to natural-language input; a project manager; an economic-modelling package. Byte magazine called the presentation “clever, flashy, and unabashedly commercial,” with “lots of colored windows popping up everywhere” and lots of sound effects. The next few years would bring several ventures which served to prove to any doubters from that initial gathering that the Soviets were capable of programming world-class software if given half a chance. In 1991, for instance, Soviet researchers sold a system of handwriting recognition to Apple for use in the pioneering Apple Newton personal digital assistant. Reflecting the odd blend of greed and idealism that marked the era, a Russian programmer wrote to Byte magazine that “I do hope the world software market will be the only battlefield for American and Soviet programmers and that we’ll become friends during this new battle now that we’ve stopped wasting our intellects on the senseless weapons race.”

As it would transpire, though, the greatest Russian weapon in this new era of happy capitalism wasn’t a database, a project manager, or even a handwriting-recognition system. It was instead a game — a piece of software far simpler than any of those aforementioned things but with perhaps more inscrutable genius than all of them put together. Its unlikely story is next.

(Sources: the academic-journal articles “Soviet Computing and Technology Transfer: An Overview” by S.E. Goodman, “InterNyet: Why the Soviet Union Did Not Build a Nationwide Computer Network” by Slava Gerovitch, “The Soviet Bloc’s Unified System of Computers” by N.C. Davis and S.E. Goodman; the January 1970 and May 1972 issues of Rand Corporation’s Soviet Cybernetics Review; The New York Times of August 28 1966, May 7 1973, and November 27 1979; Scientific American of October 1970; Bloomberg Businessweek of November 4 1991; Byte of August 1980, April 1984, November 1984, July 1985, November 1986, February 1987, October 1988, and April 1989; a video recording of the Computer History Museum’s commemoration of the IBM System/360 on April 7 2004. Finally, my huge thanks to Peter Sovietov, who grew up in the Soviet Union of the 1980s and the Russia of the 1990s and has been an invaluable help in sharing his memories and his knowledge and saving me from some embarrassing errors.)

Footnotes

1 Some in the Soviet space program actually laid their failure to get to the Moon, perhaps a bit too conveniently, directly at the feet of the computer technology they were provided, noting that the lack of computers on the ground equal to those employed by NASA — which happened to be System/360s — had been a crippling disadvantage. Meanwhile the computers that went into space with the Soviets were bigger, heavier, and less capable than their American counterparts.

A Tale of the Mirror World, Part 1: Calculators and Cybernetics

Back in my younger days, when the thought of sleeping for nights on end in campground tents and hostel cots awakened a spirit of adventure instead of a premonition of an aching back, I used to save up my vacation time and undertake a big backpacker-style journey every summer. In 2002, this habit took me to Russia.

I must confess that I found St. Petersburg and Moscow a bit of a disappointment. They just struck me as generic big cities of the sort that I’d seen plenty of in my life. While I’m sure they have their unique qualities, much of what I saw there didn’t look all that distinct from what one could expect to see in any of dozens of major European cities. What I was looking for was the Russia — or, better said, the Soviet Union — of my youth, that semi-mythical Mirror World of fascination and nightmare.

I could feel myself coming closer to my goal as soon as I quit Moscow to board the Trans-Siberian Railroad for the long, long journey to Vladivostok. As everyone who lived in Siberia was all too happy to tell me, I was now experiencing the real Russia. In the city of Ulan-Ude, closed to all outsiders until 1991, I found the existential goal I hadn’t consciously known I’d been seeking. From the central square of Ulan-Ude, surrounded on three sides by government offices still bearing faded hammers and sickles on their facades, glowered a massive bust of Vladimir Lenin. I’d later learn that at a weight of 42 tons the bust was the largest such ever built in the Soviet Union, and that it had been constructed in 1971 as one of the last gasps of the old tradition of Stalinist monumentalism. But the numbers didn’t matter on that scorching-hot summer day when I stood in that square, gazing up in awe. In all my earlier travels, I’d never seen a sight so alien to me. This was it, my personal Ground Zero of the Mirror World, where all the values in which I’d been indoctrinated as a kid growing up deep in the heart of Texas were flipped. Lenin was the greatest hero the world had ever known, the United States the nation of imperialist oppression… it was all so wrong, and because of that it was all so right. I’ve never felt so far from home as I did on that day — and this feeling, of course, was exactly the reason I’d come.

I’m a child of the 1980s, the last decade during which the Soviet Union was an extant power in the world. The fascination which I still felt so keenly in 2002 had been a marked feature of my childhood. Nothing, after all, gives rise to more fascination than telling people that something is forbidden to them, as the Kremlin did by closing off their country from the world. Certainly I wasn’t alone in jumping after any glimpse I could get behind the Iron Curtain.

Thus the bleakly alluring version of Moscow found in Martin Cruz Smith’s otherwise workmanlike crime novel Gorky Park turned it into a bestseller, and then a hit film a couple of years later. (I remember the film well because it was the first R-rated movie my parents ever allowed me to see; I remember being intrigued and a little confused by my first glimpse of bare breasts on film — as if the glimpse behind the Iron Curtain wasn’t attraction enough!) And when David Willis, an American journalist who had lived several years in Moscow, purported to tell his countrymen “how Russians really live” in a book called Klass, it too became a bestseller. Even such a strident American patriot as Tom Clancy could understand the temptation of the Mirror World. In Red Storm Rising, his novel of World War III, straitlaced intelligence officer Robert Toland gets a little too caught up in the classic films of Sergei Eisenstein.

The worst part of the drive home was the traffic to the Hampton Roads tunnel, after which things settled down to the usual superhighway ratrace. All the way home, Toland’s mind kept going over the scenes from Eisenstein’s movie. The one that kept coming back was the most horrible of all, a German knight wearing a crusader’s cross tearing a Pskov infant from his mother’s breast and throwing him — her? — into a fire. Who could see that and not be enraged? No wonder the rabble-rousing song “Arise, you Russian People” had been a genuinely popular favorite for years. Some scenes cried out for bloody revenge, the theme for which was Prokofiev’s fiery call to arms. Soon he found himself humming the song. A real intelligence officer you are … Toland smiled to himself, thinking just like the people you’re supposed to study … defend our fair native land … za nashu zyemlyu chestnuyu!

“Excuse me, sir?” the toll collector asked.

Toland shook his head. Had he been singing aloud? He handed over the seventy-five cents with a sheepish grin. What would this lady think, an American naval officer singing in Russian?

Those involved with computers were likewise drawn to the Mirror World. When Byte magazine ran a modest piece buried hundreds of pages deep in their November 1984 issue on a Soviet personal computer showing the clear “influence” of the Apple II, it became the second most popular article in the issue according to the magazine’s surveys. Unsurprisingly in light of that reception, similar tantalizing glimpses behind the Iron Curtain became a regular part of the magazine from that point forward. According to the best estimates of the experts, the Soviets remained a solid three years behind the United States in their top-end chip-fabrication capabilities, and much further behind than that in their ability to mass-produce dependable computers that could be sold for a reasonable price. If the rudimentary Soviet computers Byte described had come from anywhere else, in other words, no one would have glanced at them twice. Yet the fact that they came from the Mirror World gave them the attraction that clung to all glimpses into that fabled land. For jaded veterans grown bored with an American computer industry that was converging inexorably from the Wild West that had been its early days toward a few standard, well-defined — read, boring — platforms, Soviet computers were the ultimate exotica.

Before the end of the 1980s, an odd little game of falling blocks would ride this tidal wave of Soviet chic to become by some measures the most popular videogame of all time. An aura of inscrutable otherness clung to Tetris, which the game’s various publishers — its publication history is one of the most confusing in the history of videogames — were smart enough to tie in with the sense of otherness that surrounded the entirety of the Soviet Union, the game’s unlikely country of origin, in so many Western minds. Spectrum Holobyte, the most prominent publisher of the game on computers, wrote the name in Cyrillic script on the box front, subtitled it “the Soviet Challenge,” and commissioned background graphics showing iconic — at least to Western eyes — Soviet imagery, from Cosmonauts in space to the “Red Machine” hockey team on the ice. As usual, Nintendo cut more to the chase with their staggeringly successful Game Boy version: “From Russia with Fun!”

Tetris mania was at its peak as the 1990s began. The walls were coming down between West and East, both figuratively and literally, thanks to Mikhail Gorbachev’s impossibly brave choice to let his empire go — peacefully. Western eyes peered eagerly eastward, motivated now not only by innocent if burning curiosity but by the possibilities for tapping those heretofore untapped markets. Having reached this very point here in this blog’s overarching history of interactive entertainment and matters related, let’s hit pause long enough to join those first Western discoverers now in exploring the real story of computing in the Mirror World.


 

In the very early days of computing, before computer science was a recognized discipline in which you could get a university degree, the most important thinkers in the nascent field tended to be mathematicians. It was, for instance, the British mathematician Alan Turing who laid much of the groundwork for modern computer science in the 1930s, then went on to give many of his theories practical expression as part of the Allied code-breaking effort that did so much to win World War II. And it was the mathematics department of Cambridge University that built the EDSAC in 1949, the first truly programmable computer in the sense that we understand that term today.

The strong interconnection between mathematics and early work with computers should have left the Soviet Union as well-equipped for the dawning age as any nation. Russia had a long, proud tradition of mathematical innovation, dating back through centuries of Czarist rule. The list of major Russian mathematicians included figures like Nikolai Lobachevsky, the pioneer of non-Euclidean geometry, and Sofia Kovalevskaya, who developed equations for the rotation of a solid body around a fixed point. Even Joseph Stalin’s brutal purges of the 1930s, which strove to expunge anyone with the intellectual capacity to articulate a challenge to his rule, failed to kill the Russian mathematical tradition. On the contrary, Leonid Kantorovich in 1939 discovered the technique of linear programming ten years before American mathematicians would do the same, while Andrey Kolmogorov did much fundamental work in probability theory and neural-network modeling over a long career that spanned from the 1920s through the 1980s. Indeed, in the decades following Stalin’s death, Soviet mathematicians in general would continue to solve fundamental problems of theory. And Soviet chess players — the linkage between mathematics and chess is almost as pronounced in history as that between mathematics and computers — would remain the best in the world, at least if the results of international competitions were any guide.

But, ironically in light of all this, it would be an electrical engineer named Sergei Alexeevich Lebedev rather than a mathematician who would pioneer Soviet computing. Lebedev was 46 years old in 1948 when he was transferred from his cushy position at the Lenin State Electrical Institute in Moscow to the relative backwater of Kiev, where he was to take over as head of the Ukraine Academy’s Electrotechnical Institute. There, free from the scrutiny of Moscow bureaucrats who neither understood nor wanted to understand the importance of the latest news of computing coming out of Britain and the United States, Lebedev put together a small team to build a Small Computing Machine; in Russian its acronym was MESM. Unlike the team of scientists and engineers who detonated the Soviet Union’s first atomic bomb in 1949, Lebedev developed the MESM without the assistance of espionage; he had access to the published papers of figures like Alan Turing and the Hungarian émigré mathematician John von Neumann, but no access to schematics or inside information about the machines on which they were working.

Lebedev had to build the MESM on a shoestring. Just acquiring the vacuum tubes and magnetic drums he needed in a backwater city of a war-devastated country was a major feat in itself, one that called for the skills of a junk trader as much as it did those of an electrical engineer. Seymour Goodman, one of the more notable historians of Soviet computing, states that “perhaps the most incredible aspect of the MESM was that it was successfully built at all. No electronic computer was ever built under more difficult conditions.” When it powered up for the first time in 1951, the MESM was not only the first stored-program computer in the Soviet Union but the first anywhere in continental Europe, trailing Britain by just two years and the United States by just one — a remarkable achievement by any standard.

Having already shown quite a diverse skill set in getting the MESM made at all, Lebedev proved still more flexible after it was up and running. He became the best advocate for computing inside the Soviet Union, a sort of titan of industry in a country that officially had no room for such figures. Goodman credits him with playing the role that a CEO would have played in the West. He even managed to get a script written for a documentary film to “advertise” his computer’s capabilities throughout the Soviet bureaucracy. In the end, the film never got made, but then it really wasn’t needed. The Soviet space and nuclear-weapons programs, not to mention the conventional military, all had huge need of the fast calculations the MESM could provide. At the time, the nuclear-weapons program was using what they referred to as calculator “brigades,” consisting of 100 or more mostly young girls, who worked eight-hour shifts with mechanical devices to crank out solutions to hugely complicated equations. Already by 1950, an internal report had revealed that the chief obstacle facing Soviet nuclear scientists wasn’t the theoretical physics involved but rather an inability to do the math necessary to bring theory to life fast enough.

Within months of his machine going online, Lebedev was called back to Moscow to become the leader of the Institute for Precision Mechanics and Computing Technology — or ITMVT in the Russian acronym — of the Soviet Academy of Sciences. There Lebedev proceeded to develop a series of machines known as the BESM line, which, unlike the one-off MESM, were suitable for — relatively speaking — production in quantity.

But Lebedev soon had rivals. Contrary to the image the Kremlin liked to project of a unified front — of comrades in communism all moving harmoniously toward the same set of goals — the planned economy of the Soviet Union was riddled with as much infighting as any other large bureaucracy. “Despite its totalitarian character,” notes historian Nikolai Krementsov, “the Soviet state had a very complex internal structure, and the numerous agents and agencies involved in the state science-policy apparatus pursued their own, often conflicting policies.” Thus very shortly after the MESM became operational, the second computer to be built in the Soviet Union (and continental Europe as well), a machine called the M-1, which had been designed by one Isaak Semyenovich Bruk, went online. If Lebedev’s achievement in building the MESM was remarkable, Bruk’s achievement in building the M-1, again without access to foreign espionage — or for that matter the jealously guarded secrets of Lebedev’s rival team — was equally so. But Bruk lacked Lebedev’s political skills, and thus his machine proved a singular achievement rather than the basis for a line of computers.

A much more dangerous rival was a computer called Strela, or “Arrow,” the brainchild of one Yuri Yakovlevich Bazilevskii in the Special Design Bureau 245 — abbreviated SKB-245 in Russian — of the Ministry of Machine and Instrument Construction in Moscow. The BESM and Strela projects, funded by vying factions within the Politburo, spent several years in competition with one another, each project straining to monopolize scarce components, both for its own use and, just as importantly, to keep them out of the hands of its rival. It was a high-stakes war that was fought in deadly earnest, and its fallout could be huge. When, for instance, the Strela people managed to buy up the country’s entire supply of cathode-ray tubes for use as memory, the BESM people were forced to use less efficient and reliable mercury delay lines instead. As anecdotes like this attest, Bazilevskii was every bit Lebedev’s equal at the cutthroat game of bureaucratic politicking, even managing to secure from his backers the coveted title of Hero of Socialist Labor a couple of years before Lebedev.

The Strela computer. Although it’s hard to see it here, it was described by its visitors as a “beautiful machine in a beautiful hall,” with hundreds of lights blinking away in impressive fashion. Many bureaucrats likely chose to support the Strela simply because it looked so much like the ideal of high technology in the popular imagination of the 1950s.

During its first official trial in the spring of 1954, the Strela solved in ten hours a series of equations that would have taken a single human calculator about 100,000 days. And the Strela was designed to be a truly mass-produced computer, to be cranked out in the thousands in identical form from factories. But, as so often happened in the Soviet Union, the reality behind the statistics which Pravda trumpeted so uncritically was somewhat less flattering. The Strela “worked very badly” according to one internal report; according to another it “very often failed and did not work properly.” Pushed by scientists and engineers who needed a reliable computer in order to get things done, the government decided in the end to go ahead with the BESM instead of the Strela. Ironically, only seven examples of the first Soviet computer designed for true mass-production were ever actually produced. Sergei Lebedev was now unchallenged as the preeminent voice in Soviet computing, a distinction he would enjoy until his death in 1974.

The first BESM computer. It didn’t look as nice as the Strela, but it would prove far more capable and reliable.

Like so much other Soviet technology, Soviet computers were developed in secrecy, far from the prying eyes of the West. In December of 1955, a handful of American executives and a few journalists on a junket to the Soviet Union became the first to see a Soviet computer in person. A report of the visit appeared in the New York Times of December 11, 1955. It helpfully describes an early BESM computer as an “electronic brain” — the word “computer” was still very new in the popular lexicon — and pronounces it equal to the best American models of same. In truth, the American delegation had fallen for a bit of a dog-and-pony show. Soviet computers were already lagging well behind the American models that were now being churned out in quantities Lebedev could only dream of by companies like IBM.

Sergei Lebedev’s ITMVT. (Sorry for the atrocious quality of these images. Clear pictures of the Mirror World of the 1950s are hard to come by.)

In May of 1959, during one of West and East’s periodic spells of rapprochement, a delegation of seven American computer experts from business and government was invited to spend two weeks visiting most of the important hubs of computing research in the Soviet Union. They were met at the airport in Moscow by Lebedev himself; the Soviets were every bit as curious about the work of their American guests as said Americans were about theirs. The two most important research centers of all, the American delegation learned, were Lebedev’s ITMVT and the newer Moscow Computing Center of the Soviet Academy of Sciences, which was coming to play a role in software similar to that which the ITMVT played in hardware. The report prepared by the delegation is fascinating for the generalized glimpses it provides into the Soviet Mirror World of the 1950s as much as it is for the technical details it includes. Here, for instance, is its description of the ITMVT’s physical home:

The building itself is reminiscent more of an academic building than an industrial building. It is equipped with the usual offices and laboratory facilities as well as a large lecture hall. Within an office the decor tends to be ornate; the entrance door is frequently padded on both sides with what appeared to be leather, and heavy drapery is usually hung across the doorway and at the windows. The ceiling height was somewhat higher than that of contemporary American construction, but we felt in general that working conditions in the offices and in the laboratories were good. There appeared to be an adequate amount of room and the workers were comfortably supplied with material and equipment. The building was constructed in 1951. Many things testified to the steady and heavy usage it has received. In Russian tradition, the floor is parqueted and of unfinished oak. As in nearly every building, there are two sets of windows for weather protection.

The Moscow Computing Center

And here’s how a Soviet programmer had to work:

Programmers from the outside who come to the [Moscow] Computing Center with a problem apply to the scientific secretary of the Computing Center. He assigns someone from the Computing Center to provide any assistance needed by the outside programmer. In general an operator is provided for each machine, and only programmers with specific permission can operate the machine personally. Normally a programmer can expect only one code check pass per day at a machine; with a very high priority he might get two or three passes.

A programmer is required to submit his manuscript in ink. Examples of manuscripts which we saw indicated that often a manuscript is written in pencil until it is thought to be correct, and then redone in ink. The manuscript is then key-punched twice, and the two decks compared, before being sent to the machine. The output cards are handled on an off-line printer.

Other sections describe the Soviet higher-education system (“Every student is required to take 11 terms of ideological subjects such as Marxism-Leninism, dialectical materialism, history of the Communist Party, political economy, and economics.”); the roles of the various Academies of Sciences (“The All Union Academy of Sciences of the USSR and the 15 Republican Academies of Sciences play a dominant role in the scientific life of the Soviet Union.”); the economics of daily life (“In evaluating typical Russian salaries it must be remembered that the highest income tax in the Soviet Union is 13 percent and that all other taxes are indirect.”); the resources being poured into the new scientific and industrial center of Novosibirsk (“It is a general belief in Russia that the future of the Soviet Union is closely allied with the development of the immense and largely unexplored natural resources of Siberia.”).

But of course there are also plenty of pages devoted to technical discussion. What’s most surprising about these is the lack of the hysteria that had become so typical of Western reports of Soviet technology in the wake of the Sputnik satellite of 1957 and the beginning of the Space Race which it heralded. It was left to a journalist from the New York Times to ask the delegation upon their return the money question: who was really ahead in the field of computers? Willis Ware, a member of the delegation from the Rand Corporation and the primary architect of the final report, replied that the Soviet Union had “a wealth of theoretical knowledge in the field,” but “we didn’t see any hardware that we don’t have here.” Americans had little cause to worry; whatever their capabilities in the fields of aerospace engineering and nuclear-weapons delivery, it was more than clear that the Soviets weren’t likely to rival even IBM alone, much less the American computer industry as a whole, anytime soon. With that worry dispensed with, the American delegation had felt free just to talk shop with their Soviet counterparts in what would prove the greatest meeting of Eastern and Western computing minds prior to the Gorbachev era. The Soviets responded in kind; the visit proved remarkably open and friendly.

One interesting fact gleaned by the Americans during their visit was that, in addition to all the differences born of geography and economy, the research into computers conducted in the East and the West had also heretofore had markedly different theoretical scopes. For all that so much early Western research had been funded by the military for such plebeian tasks as code-breaking and the calculation of artillery trajectories, and for all that so much of that research had been conducted by mathematicians, the potential of computers to change the world had always been understood by the West’s foremost visionaries as encompassing far more than a faster way to do complex calculations. Alan Turing, for example, had first proposed his famous Turing Test of artificial intelligence all the way back in 1950.

But in the Soviet Union, where the utilitarian philosophy of dialectical materialism was the order of the day, such humanistic lines of research were, to say the least, not encouraged. Those involved with Soviet computing had to be, as they themselves would later put it, “cautious” about the work they did and the way they described that work to their superiors. The official view of computers in the Soviet Union during the early and mid-1950s hewed to the most literal definition of the word: they were electronic replacements for those brigades of human calculators cranking out solutions to equations all day long. Computers were, in other words, merely a labor-saving device, not a revolution in the offing; being a state founded on the all-encompassing ideology of communist revolution, the Soviet Union had no use for other, ancillary revolutions. Even when Soviet researchers were allowed to stray outside the realm of pure mathematics, their work was always expected to deliver concrete results that served very practical goals in fairly short order. For example, considerable effort was put into a program for automatically translating texts between languages, thereby to better bind together the diverse peoples of the sprawling Soviet empire and its various vassal states. (Although the translation program was given a prominent place in that first 1955 New York Times report about the Soviets’ “electronic brain,” one has to suspect that, given how difficult a task automated translation is even with modern computers, it never amounted to much more than a showpiece for use under carefully controlled conditions.)

And yet even by the time the American delegation arrived in 1959 all of that was beginning to change, thanks to one of the odder ideological alliances in the history of the twentieth century. In a new spirit of relative openness that was being fostered by Khrushchev, the Soviet intelligentsia was becoming more and more enamored with the ideas of an American named Norbert Wiener, yet another of those wide-ranging mathematicians who were doing so much to shape the future. In 1948, Wiener had described a discipline he called “cybernetics” in a book of the same name. The book bore the less-than-enticing subtitle Control and Communication in the Animal and the Machine, making it sound rather like an engineering text. But if it was engineering Wiener was practicing, it was social engineering, as became more clear in 1950, when he repackaged his ideas into a more accessible book with the title The Human Use of Human Beings.

Writing some 35 years before William Gibson coined the term “cyberspace,” Norbert Wiener marks the true origin point of our modern mania for all things “cyber.” That said, his ideas haven’t been in fashion for many years, a fact which might lead us to dismiss them from our post-millennial perch as just another musty artifact of the twentieth century and move on. In actuality, though, Wiener is well worth revisiting, and with an eye to more than dubious linguistic trends. Cybernetics as a philosophy may be out of fashion, but cybernetics as a reality is with us a little more every day. And, most pertinently for our purposes today, we need to understand a bit of what Wiener was on about if we hope to understand what drove much of Soviet computing for much of its existence.

“Cybernetics” is one of those terms which can seem to have as many definitions as definers. It’s perhaps best described as the use of machines not just to perform labor but to direct labor. Wiener makes much of the increasing numbers of machines even in his time which incorporated a feedback loop — machines, in other words, that were capable of accepting input from the world around them and responding to that input in an autonomous way. An example of such a feedback loop can be something as simple as an automatic door which opens when it senses people ready to step through it, or as complex as the central computer in charge of all of the functions of an automated factory.
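
If it helps to see the idea in concrete terms, here is a minimal sketch in Python of the sort of feedback loop Wiener describes: a hypothetical automatic-door controller that senses the world around it and responds on its own. The sensor and motor functions are invented stand-ins for real hardware, included purely for illustration; nothing here comes from Wiener himself.

```python
import random
import time

def read_proximity_sensor() -> bool:
    # Hypothetical stand-in for real hardware: pretend that someone
    # approaches the door on roughly one polling cycle in twenty.
    return random.random() < 0.05

def set_door(open_door: bool) -> None:
    # Hypothetical stand-in for the motor that moves the door.
    print("door open" if open_door else "door closed")

def automatic_door_loop(hold_open_seconds: float = 3.0, cycles: int = 100) -> None:
    """A feedback loop in Wiener's sense: sense the environment, act on it,
    and let the result of that action shape the next sensing cycle."""
    last_seen = float("-inf")
    for _ in range(cycles):
        now = time.monotonic()
        if read_proximity_sensor():            # input from the world outside
            set_door(True)                     # autonomous response
            last_seen = now
        elif now - last_seen > hold_open_seconds:
            set_door(False)                    # nobody around anymore: close
        time.sleep(0.1)                        # poll ten times per second

if __name__ == "__main__":
    automatic_door_loop()
```

A computer running an automated factory is the same loop writ large: more sensors, more actuators, but the same cycle of input, autonomous response, and renewed input.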

At first blush, the idea of giving computers autonomous control over the levers of power inevitably conjures up all sorts of dystopian visions. Yet Wiener himself was anything but a fan of totalitarian or collectivist governments. Invoking in The Human Use of Human Beings the popular metaphor of the collectivist society as an ant colony, he goes on to explore the many ways in which humans and ants are in fact — ideally, at any rate — dissimilar, thus seemingly exploding the “from each according to his ability, to each according to his need” founding principle of communism.

In the ant community, each worker performs its proper functions. There may be a separate caste of soldiers. Certain highly specialized individuals perform the functions of king and queen. If man were to adopt this community as a pattern, he would live in a fascist state, in which ideally each individual is conditioned from birth for his proper occupation: in which rulers are perpetually rulers, soldiers perpetually soldiers, the peasant is never more than a peasant, and the worker is doomed to be a worker.

This aspiration of the fascist for a human state based on the model of the ant results from a profound misapprehension both of the nature of the ant and of the nature of man. I wish to point out that the very physical development of the insect conditions it to be an essentially stupid and unlearning individual, cast in a mold which cannot be modified to any great extent. I also wish to show how these physiological conditions make it into a cheap mass-produced article, of no more individual value than a paper pie plate to be thrown away after it is used. On the other hand, I wish to show that the human individual, capable of vast learning and study, which may occupy almost half his life, is physically equipped, as the ant is not, for this capacity. Variety and possibility are inherent in the human sensorium — and are indeed key to man’s most noble flights — because variety and possibility belong to the very structure of the human organism.

While it is possible to throw away this enormous advantage that we have over the ants, and to organize the fascist ant-state with human material, I certainly believe that this is a degradation of man’s very nature, and economically a waste of the great human values which man possesses.

I am afraid that I am convinced that a community of human beings is a far more useful thing than a community of ants, and that if the human being is condemned and restricted to perform the same functions over and over again, he will not even be a good ant, not to mention a good human being. Those who would organize us according to personal individual functions and permanent individual restrictions condemn the human race to move at much less than half-steam. They throw away nearly all our human possibilities and, by limiting the modes in which we may adapt ourselves to future contingencies, they reduce our chances for a reasonably long existence on this earth.

Wiener’s vision departs markedly from the notion, popular already in science fiction by the time he wrote those words, of computers as evil overlords. In Wiener’s cybernetics, computers will not enslave people but give them freedom; the computers’ “slaves” will themselves be machines. Together computers and the machines they control will take care of all the boring stuff, as it were, allowing people to devote themselves to higher purposes. Wiener welcomes the “automatic age” he sees on the horizon, even as he is far from unaware of the disruptions the period of transition will bring.

What can we expect of its economic and social consequences? In the first place, we can expect an abrupt and final cessation of the demand for the type of factory labor performing purely repetitive tasks. In the long run, the deadly uninteresting nature of the repetitive task may make this a good thing and the source of leisure necessary for man’s full cultural development.

Be that as it may, the intermediate period of the introduction of the new means will lead to an immediate transitional period of disastrous confusion.

In terms of cybernetics, we’re still in this transitional period today, with huge numbers of workers accustomed to “purely repetitive tasks” cast adrift in this dawning automatic age; this explains a great deal about recent political developments across much of the world. But of course our main interest right now isn’t contemporary politics, but rather how a fellow who so explicitly condemned the collectivist state came to be regarded as something of a minor prophet by the Soviet bureaucracy.

Wiener’s eventual acceptance in the Soviet Union is made all the more surprising by the Communist Party’s first reaction to cybernetics. In 1954, a year after Stalin’s death, the Party’s official Brief Philosophical Dictionary still called cybernetics “a reactionary pseudo-science originating in the USA after World War II and spreading widely in other capitalistic countries as well.” It was “in essence aimed against materialistic dialectics” and “against the scientific Marxist understanding of the laws of societal life.” Seemingly plucking words at random from a grab bag of adjectives, the dictionary concluded that “this mechanistic, metaphysical pseudo-science coexists very well with idealism in philosophy, psychology, and sociology” — the word “idealism” being a kiss of death under Soviet dogma.

In 1960, six years after the Soviets condemned cybernetics as an “attempt to transform toilers into mere appendices of the machine, into a tool of production and war,” Norbert Wiener lectures the Leningrad Mathematical Society. A colleague who visited the Soviet Union at the same time said that Wiener was “wined and dined everywhere, even in the privacy of the homes of the Russian scientists.” He died four years later, just as the influence of cybernetics was reaching a peak in the Soviet Union.

Still, when stripped of its more idealistic, humanistic attributes, there was much about cybernetics which held immense natural appeal for Soviet bureaucrats. Throughout its existence, the Soviet Union’s economy had been guided, albeit imperfectly at best, by an endless number of “five-year plans” that attempted to control its every detail. Given this obsession with economic command and control and the dispiriting results it had so far produced, the prospect of information-management systems — namely, computers — capable of aiding decision-making, or perhaps even in time of making the decisions, was a difficult enticement to resist; never mind how deeply antithetical the idea of computerized overlords making the decisions for human laborers was to Norbert Wiener’s original conception of cybernetics. Thus cybernetics went from being a banned bourgeois philosophy during the final years of Stalin’s reign to being a favorite buzzword during the middle years of Khrushchev’s. In December of 1957, the Soviet Academy of Sciences declared their new official position to be that “the use of computers for statistics and planning must have an absolutely exceptional significance in terms of its efficiency. In most cases, such use would make it possible to increase the speed of decision-making by hundreds of times and avoid errors that are currently produced by the unwieldy bureaucratic apparatus involved in these activities.”

In October of 1961, the new Cybernetics Council of the same body published an official guide called Cybernetics in the Service of Communism — essentially Norbert Wiener with the idealism and humanism filed off. Khrushchev may have introduced a modicum of cultural freedom to the Soviet Union, but at heart he was still a staunch collectivist, as he made clear:

In our time, what is needed is clarity, ideal coordination, and organization of all links in the social system both in material production and in spiritual life.

Maybe you think there will be absolute freedom under communism? Those who think so don’t understand what communism is. Communism is an orderly, organized society. In that society, production will be organized on the basis of automation, cybernetics, and assembly lines. If a single screw is not working properly, the entire mechanism will grind to a halt.

Soviet ambitions for cybernetics were huge, and in different circumstances might have led to a Soviet ARPANET going online years before the American version. It was envisioned that each factory and other center of production in the country would be controlled by its own computer, and that each of these computers would in turn be linked together into “complexes” reporting to other computers, all of which would send their data yet further up the chain, culminating in a single “unified automated management system” directing the entire economy. The system would encompass tens of thousands of computers, spanning the width and breadth of the largest country in the world, “from the Pacific to the Carpathian foothills,” as academician Sergei Sobolev put it. Some more wide-eyed prognosticators said that in time the computerized cybernetic society might allow the government to eliminate money from the economy entirely, long a cherished dream of communism. “The creation of an automated management system,” wrote proponent Anatolii Kitov, “would mean a revolutionary leap in the development of our country and would ensure a complete victory of socialism over capitalism.” With the Soviet Union’s industrial output declining every year between 1959 and 1964 while the equivalent Western figures skyrocketed, socialism needed all the help it could get.
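
Described in modern terms, the envisioned architecture was simply a tree of reporting computers, with factories at the leaves and the “unified automated management system” at the root aggregating everything below it. The little Python sketch that follows is purely illustrative, with invented node names and production figures; nothing so tidy was ever actually built.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    """One computer in the envisioned hierarchy: a factory at the leaves,
    a regional complex in the middle, the national system at the root."""
    name: str
    output_units: int = 0                      # production measured locally
    children: List["Node"] = field(default_factory=list)

    def report(self) -> int:
        # Each node reports its own figures plus everything beneath it,
        # so the root ends up holding a picture of the whole economy.
        return self.output_units + sum(child.report() for child in self.children)

# A toy three-level hierarchy with invented numbers, purely for illustration.
factory_a = Node("Factory A", output_units=120)
factory_b = Node("Factory B", output_units=95)
complex_1 = Node("Regional complex", children=[factory_a, factory_b])
root = Node("Unified automated management system", children=[complex_1])

print(root.report())   # 215: the aggregate figure visible at the top
```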

In May of 1962, in an experiment trumpeted as the first concrete step toward socialism’s glorious cybernetic future, a computer in Kiev poured steel in a factory hundreds of kilometers away in Dniprodzerzhynsk (known today as Kamianske). A newspaper reporter was inspired to wax poetic:

In ancient Greece the man who steered ships was called Kybernetes. This steersman, whose name is given to one of the boldest sciences of the present — cybernetics — lives on in our own time. He steers the spaceships and governs the atomic installations, he takes part in working out the most complicated projects, he helps to heal humans and to decipher the writings of ancient peoples. As of today he has become an experienced metallurgist.

Some Soviet cybernetic thinking is even more astonishing than their plans for binding the country in a web of telecommunications long before “telecommunications” was a word in popular use. Driverless cars and locomotives were seriously discussed, and experiments with the latter were conducted in the Moscow subway system. (“Experiments on the ‘auto-pilot’ are being concluded. This device, provided with a program for guiding a train, automatically decreases and increases speed at corresponding points along its route, continually selecting the most advantageous speed, and stops the train at the required points.”) Serious attention was given to a question that still preoccupies futurists today: that of the role of human beings in a future of widespread artificially intelligent computers. The mathematician Kolmogorov wrote frankly that such computers could and inevitably would “surpass man in his development” in the course of time, and even described a tipping point that we still regard as seminal today: the point when artificial intelligence begins to “breed,” to create its own progeny without the aid of humans. At least some within the Soviet bureaucracy seemed to welcome humanity’s new masters; proposals were batted around to someday replace human teachers and doctors with computers. Sergei Sobolev wrote that “in my view the cybernetic machines are people of the future. These people will probably be much more accomplished than we, the present people.” Soviet thinking had come a long way indeed from the old conception of computers as nothing more than giant calculators.

But the Soviet Union was stuck in a Catch-22 situation: the cybernetic command-and-control network its economy supposedly needed in order to spring to life was made impossible to build by said economy’s current moribund state. Some skeptical planners drew pointed comparisons to the history of another sprawling land: Egypt. While the Pharaohs of ancient Egypt had managed to build the Pyramids, the cybernetics skeptics noted, legend held that they’d neglected everything else so much in the process that a once-fertile land had become a desert. Did it really make sense to be thinking already about building a computer network to span the nation when 40 percent of villages didn’t yet boast a single telephone within their borders? By the same token, perhaps the government should strive for the more tangible goal of placing a human doctor within reach of every citizen before thinking about replacing all the extant human doctors with some sort of robot.

A computer factory in Kiev, circa 1970. Note that all of the assembly work is still apparently done by hand.

The skeptics probably needn’t have worried overmuch about their colleagues’ grandiose dreams. With its computer industry in the shape it was, it was doubtful whether the Soviet Union had any hope of building its cybernetic Pyramids even with all the government will in the world.

In November of 1964, another American delegation was allowed a glimpse into the state of Soviet computing, although the Cuban Missile Crisis and other recent conflicts meant that their visit was much shorter and more restricted than the one of five and a half years earlier. Regardless, the Americans weren’t terribly impressed by the factory they were shown. It was producing computers at the rate of about seven or eight per month, and the visitors estimated its products to be roughly on par with an IBM 704 — a model that IBM had retired four years before. It was going to be damnably hard to realize the Soviet cybernetic dream with this trickle of obsolete machines; estimates were that about 1000 computers were currently operational in the Soviet Union, as compared to 30,000 in the United States. The Soviets were still struggling to complete the changeover from first-generation computer hardware, characterized by its reliance on vacuum tubes, to the transistor-based second generation. The Americans had accomplished this changeover years before; indeed, they were well on their way to an integrated-circuit-based third generation.  Looking at a Soviet transistor, the delegation said it was roughly equivalent to an American version of same from 1957.

But when the same group visited the academics, they were much more impressed, noting that the Soviets “were doing quite a lot of very good and forward-thinking work.” Thus was encapsulated what would remain the curse of Soviet computer science: plenty of ideas, plenty of abstract know-how, and a dearth of actual hardware to try it all out on. The reports of the Soviet researchers ooze frustration with their lot in life. Their computers break down “each and every day,” reads one, “and information on a tape lasts without any losses no longer than one month.”

Their American visitors were left to wonder just why it was that the Soviet Union struggled so mightily to build a decent computing infrastructure. Clearly the Soviets weren’t complete technological dunces; this was after all the country that had detonated an atomic bomb years before anyone had dreamed it could, that had shocked the world by putting the first satellite and then the first man into space, that was even now giving the United States a run for its money to put a man on the moon.

The best way to address the Americans’ confusion might be to note that exploding atomic bombs and launching things into space encompassed a series of individual efforts responsive to brilliant individual minds, while the mass-production of the standardized computers that would be required to realize the cybernetics dream required a sort of infrastructure-building at which the Soviet system was notoriously poor. The world’s foremost proponent of collectivism was, ironically, not all that good at even the most fundamental long-term collectivist projects. The unstable Soviet power grid was only one example; the builders of many Soviet computer installations had to begin by building their own power plant right outside the computer lab just to get a dependable electrical supply.

The Soviet Union was a weird mixture of backwardness and forwardness in terms of technology, and the endless five-year plans only exacerbated its issues by emphasizing arbitrary quotas rather than results that mattered in the real world. Stories abounded of factories that produced lamp shades in only one color because that was the easiest way to make their quota, or that churned out uselessly long, fat nails because the quota was given in kilograms rather than in numbers of individual pieces. The Soviet computer industry was exposed to all these underlying economic issues. It was hard to make computers to rival those of the West when the most basic electrical components that went into them had failure rates dozens of times higher than their Western equivalents. Whether a planned economy run by computers could have fixed these problems is doubtful in the extreme, but at any rate the Soviet cyberneticists would never get a chance to try. It was the old chicken-or-the-egg conundrum. They thought they needed lots of good computers to build a better economy — but they knew they needed a better economy to build lots of good computers.

As the 1960s became the 1970s, these pressures would lead to a new approach to computer production in the Soviet Union. If they couldn’t beat the West’s computers with their homegrown designs, the Soviets decided, then they would just have to clone them.

(Sources: the academic-journal articles “Soviet Computing and Technology Transfer: An Overview” by S.E. Goodman, “MESM and the Beginning of the Computer Era in the Soviet Union” by Anne Fitzpatrick, Tatiana Kazakova, and Simon Berkovich, “S.A. Lebedev and the Birth of Soviet Computing” by G.D. Crowe and S.E. Goodman, “The Origin of Digital Computing in Europe” by S.E. Goodman, “Strela-1, The First Soviet Computer: Political Success and Technological Failure” by Hiroshi Ichikawa, and “InterNyet: Why the Soviet Union Did Not Build a Nationwide Computer Network” by Slava Gerovitch; studies from the Rand Corporation entitled “Soviet Cybernetics Technology I: Soviet Cybernetics, 1959-1962” and “Soviet Computer Technology — 1959”; the January 1970 issue of Rand Corporation’s Soviet Cybernetics Review; the books Stalinist Science by Nikolai Krementsov, The Human Use of Human Beings by Norbert Wiener, Red Storm Rising by Tom Clancy, and From Newspeak to Cyberspeak: A History of Soviet Cybernetics by Slava Gerovitch; The New York Times of December 11, 1955, December 2, 1959, and August 28, 1966; Scientific American of October 1970; Byte of November 1984, February 1985, and October 1987.)

 


Memos from Digital Antiquarian Corporate Headquarters, June 2017 Edition

From the Publications Department:

Those of you who enjoy reading the blog in ebook format will be pleased to hear that Volume 12 in that ongoing series is now available, full of articles centering roughly on the year 1990. As usual, the ebook is entirely the work of Richard Lindner. Thank you, Richard!

From the Security Department:

A few days ago, a reader notified me of an alarming development: he was getting occasional popup advertisements for a shady online betting site when he clicked article links within the site. Oddly enough, the popups were very intermittent; in lots of experimenting, I was only able to get them to appear on one device — an older iPad, for what it’s worth — and even then only every tenth or twelfth time I tapped a link. But investigation showed that there was indeed some rogue JavaScript that was causing them. I’ve cleaned it up and hardened that part of the site a bit more, but I remain a little concerned in that I haven’t identified precisely how someone or something got access to the file that was tampered with in the first place. If anything suspicious happens during your browsing, please do let me know. I don’t take advertisements of any sort, so any that you see on this site are by definition a security breach. In the meantime, I’ll continue to scan the site daily in healthily paranoid fashion. The last thing I want is a repeat of the Great Handbag Hack of 2012. (Do note, however, that none of your Patreon or PayPal information is stored on the site, and the database containing commenters’ email addresses has remained uncompromised — so nothing to worry too much over.)

From the Scheduling Department:

I’ve had to skip publishing an article more weeks than I wanted to this year. First I got sick after coming home from my research trip to the Strong Museum in Rochester, New York. Then we moved (within Denmark) from Odense to Aarhus, and I’m sure I don’t need to tell most of you what a chaotic process that can be. Most recently, I’ve had to do a lot more research than usual for my next subject; see the next two paragraphs for more on that. In a couple of weeks my wife and I are going to take a little holiday, which means I’m going to have to take one more bye week in June. After that, though, I hope I can settle back into the groove and start pumping out a reliable article every week for a while. Thanks for bearing with me!

From the Long-Term-Planning Department:

I thought I’d share a taste of what I plan to cover in the context of 1991 — i.e., until I write another of these little notices to tell you the next ebook is available. If you prefer that each new article be a complete surprise, you’ll want to skip the next paragraph.

(Spoiler Alert!)

I’ve got a series in the works for the next few weeks covering the history of computing in the Soviet Union, culminating in East finally meeting West in the age of Tetris. I’m already very proud of the articles that are coming together on this subject, and hope you’re going to find this little-known story as fascinating as I do. Staying with the international theme, we’ll then turn our attention to Britain for a while; in that context, I’m planning articles on the great British tradition of open-world action-adventures, on the iconic software house Psygnosis, and finally on Psygnosis’s most enduring game, Lemmings. Then we’ll check in with the Amiga 3000 and CDTV. I’m hoping that Bob Bates and I will be able to put together something rather special on Timequest. Then some coverage of the big commercial online services that predated the modern World Wide Web, along with the early experiments with massively multiplayer games which they fostered. We’ll have some coverage of the amateur text-adventure scene; 1991 was a pretty good year there, with some worthy but largely forgotten games released. I may have more to say about the Eastgate school of hypertext, in the form of Sarah Smith’s King of Space, if I can get the thing working and if it proves worthy of writing about. Be that as it may, we’ll definitely make time for Corey Cole’s edutainment classic The Castle of Dr. Brain and other contemporary doings around Sierra. Then we’ll swing back around to Origin, with a look at the two Worlds of Ultima titles — yes, thanks to your recommendations I’ve decided to give them more coverage than I’d originally planned — and Wing Commander II. We’ll wrap up 1991 with Civilization, a game which offers so much scope for writing that it’s a little terrifying. I’m still mulling over how best to approach that one, but I’m already hugely looking forward to it.

(End Spoilers)

From the Accounting Department:

I’ve seen a nice uptick in Patreon participation in recent months, for which I’m very grateful. Thank you to every reader who’s done this writer the supreme honor of paying for the words I scribble on the (virtual) page, whether you’ve been doing so for years or you just signed up yesterday.

If you’re a regular reader who hasn’t yet taken the plunge, please do think about supporting these serious long-form articles about one of the most important cultural phenomena of our times by signing up as a Patreon subscriber or making a one-time donation via the links to the right. Remember that I can only do this work thanks to the support of people just like you.

See you Friday! Really, I promise this time…

 

The Many Faces of Middle-earth, 1954-1989

The transformation of J.R.R. Tolkien’s The Lord of the Rings from an off-putting literary trilogy — full of archaic diction, lengthy appendixes, and poetry, for God’s sake — into some of the most bankable blockbuster fodder on the planet must be one of the most unlikely stories in the history of pop culture. Certainly Tolkien himself must be about the most unlikely mass-media mastermind imaginable. During his life, he was known to his peers mostly as a philologist, or historian of languages. The whole Lord of the Rings epic was, he once admitted, “primarily linguistic in inspiration, and was begun in order to provide the necessary background history” for the made-up languages it contained. On another occasion, he called the trilogy “a fundamentally religious and Catholic work.” That doesn’t exactly sound like popcorn-movie material, does it?

So, what would this pipe-smoking, deeply religious old Oxford don have made of our modern takes on his work, of CGI spellcraft and 3D-rendered hobbits mowing down videogame enemies by the dozen? No friend of modernity in any of its aspects, Tolkien would, one has to suspect, have been nonplussed at best, outraged at worst. But perhaps — just perhaps, if he could contort himself sufficiently — he might come to see all this sound and fury as at least as much validation as betrayal of his original vision. In writing The Lord of the Rings, he had explicitly set out to create a living epic in the spirit of Homer, Virgil, Dante, and Malory. For better or for worse, the living epics of our time unspool on screens rather than on the page or in the chanted words of bards, and come with niceties like copyright and trademark attached.

And where those things exist, so exist also the corporations and the lawyers. It would be those entities rather than Tolkien or even any of his descendants who would control how his greatest literary work was adapted to screens large, small, and in between. Because far more people in this modern age of ours play games and watch movies than read books of any stripe  — much less daunting doorstops like The Lord of the Rings trilogy — this meant that Middle-earth as most people would come to know it wouldn’t be quite the same land of myth that Tolkien himself had created so laboriously over so many decades in his little tobacco-redolent office. Instead, it would be Big Media’s interpretations and extrapolations therefrom. In the first 48 years of its existence, The Lord of the Rings managed to sell a very impressive 100 million copies in book form. In only the first year of its existence, the first installment of Peter Jackson’s blockbuster film trilogy was seen by 150 million people.

To understand how The Lord of the Rings and its less daunting predecessor The Hobbit were transformed from books authored by a single man into a palimpsest of interpretations, we need to understand how J.R.R. Tolkien lost control of his creations in the first place. And to begin to do that, we need to cast our view back to the years immediately following the trilogy’s first issuance in 1954 and 1955 by George Allen and Unwin, who had already published The Hobbit with considerable success almost twenty years earlier.

During its own early years, The Lord of the Rings didn’t do anywhere near as well as The Hobbit had, but did do far better than its publisher or its author had anticipated. It sold at least 225,000 copies (this and all other sales figures given in this article refer to sales of the trilogy as a whole, not to sales of the individual volumes that made up the trilogy) in its first decade, the vast majority of them in its native Britain, despite being available only in expensive hardcover editions and despite being roundly condemned, when it was noticed at all, by the very intellectual and literary elites that made up its author’s peer group. In the face of their rejection by polite literary society, the books sold mostly to existing fans of fantasy and science fiction, creating some decided incongruities; Tolkien never quite seemed to know how to relate to this less mannered group of readers. In 1957, the trilogy won the only literary prize it would ever be awarded, becoming the last recipient of the brief-lived International Fantasy Award, which belied its hopeful name by being a largely British affair. Tolkien, looking alternately bemused and uncomfortable, accepted the award, shook hands and signed autographs for his fans, smiled for the cameras, and got the hell out of there just as quickly as he could.

The books’ early success, such as it was, was centered very much in Britain; the trilogy only sold around 25,000 copies in North America during the entirety of its first decade. It enjoyed its first bloom of popularity there only in the latter half of the 1960s, ironically fueled by two developments that its author found thoroughly antithetical. The first was a legally dubious mass-market paperback edition published in the United States by Ace Books in 1965; the second was the burgeoning hippie counterculture.

Donald Wollheim, senior editor at Ace Books, had discovered what he believed to be a legal loophole giving him the right to publish the trilogy, thanks to the failure of Houghton Mifflin, Tolkien’s American hardcover publisher, to properly register their copyright to it in the United States. Never a man prone to hesitation, he declared that Houghton Mifflin’s negligence had effectively left The Lord of the Rings in the public domain, and proceeded to publish a paperback edition without consulting Tolkien or paying him anything at all. Condemned by the resolutely old-fashioned Tolkien for taking the “degenerate” form of the paperback as much as for the royalties he wasn’t paid, the Ace editions nevertheless sold in the hundreds of thousands in a matter of months. Elizabeth Wollheim, daughter of Donald and herself a noted science-fiction and fantasy editor, has characterized the instant of the appearance of the Ace editions of The Lord of the Rings in October of 1965 as the “Big Bang” that led to the modern cottage industry in doorstop fantasy novels. Along with Frank Herbert’s Dune, which appeared the same year, they obliterated almost at a stroke the longstanding tradition in publishing of genre novels as concise works coming in at under 250 pages.

Even as these cheap Ace editions of Tolkien became a touchstone of what would come to be known as nerd culture, they were also seized on by a very different constituency. With the Summer of Love just around the corner, the counterculture came to see in the industrialized armies of Sauron and Saruman the modern American war machine they were protesting, in the pastoral peace of the Shire the life they saw as their naive ideal. The Lord of the Rings became one of the hippie movement’s literary totems, showing up in the songs of Led Zeppelin and Argent, and, as later memorably described by Peter S. Beagle in the most famous introduction to the trilogy ever written, even scrawled on the walls of New York City’s subways (“Frodo lives!”). Beagle’s final sentiments in that piece could stand in very well for the counterculture’s as a whole: “We are raised to honor all the wrong explorers and discoverers — thieves planting flags, murderers carrying crosses. Let us at last praise the colonizers of dreams.”

If Tolkien had been uncertain how to respond to the earnest young science-fiction fans who had started showing up at his doorstep seeking autographs in the late 1950s, he had no shared frame of reference whatsoever with these latest readers. He was a man at odds with his times if ever there was one. On the rare occasions when contemporary events make an appearance in his correspondence, it always reads as jarring. Tolkien comes across a little confused by it all, can’t even get the language quite right. For example, in a letter from 1964, he writes that “in a house three doors away dwells a member of a group of young men who are evidently aiming to turn themselves into a Beatle Group. On days when it falls to his turn to have a practice session the noise is indescribable.” Whatever the merits of the particular musicians in question, one senses that the “noise” of the “Beatle group” music wouldn’t have suited Tolkien one bit in any scenario. And as for Beagle’s crack about “murderers carrying crosses,” it will perhaps suffice to note that his introduction was published only after Tolkien, the devout Catholic, had died. Like the libertarian conservative Robert Heinlein, whose Stranger in a Strange Land became another of the counterculture’s totems, Tolkien suffered the supreme irony of being embraced as a pseudo-prophet by a group whose sociopolitical worldview was almost the diametrical opposite of his own. As the critic Leonard Jackson has noted, it’s decidedly odd that the hippies, who “lived in communes, were anti-racist, were in favour of Marxist revolution and free love” should choose as their favorite “a book about a largely racial war, favouring feudal politics, jam-full of father figures, and entirely devoid of sex.”

Note the pointed reference to these first Ballantine editions of The Lord of the Rings as the “authorized” editions.

To what extent Tolkien was even truly aware of his works’ status with the counterculture is something of an open question, although he certainly must have noticed the effect it had on his royalty checks after the Ace editions were forced off the market, to be replaced by duly authorized Ballantine paperbacks. In the first two years after issuing the paperbacks, Ballantine sold almost 1 million copies of the series in North America alone.

In October of 1969, smack dab in the midst of all this success, Tolkien, now 77 years old and facing the worry of a substantial tax bill in his declining years, made one of the most retrospectively infamous deals in the history of pop culture. He sold the film rights to The Hobbit and Lord of the Rings to the Hollywood studio United Artists for £104,602 and a fixed cut of 7.5 percent of any profits that might result from cinematic adaptations. And along with film rights went “merchandising rights.” Specifically, United Artists was given rights to the “manufacture, sale, and distribution of any and all articles of tangible personal property other than novels, paperbacks, and other printed published matter.” All of these rights were granted “in perpetuity.”

What must have seemed fairly straightforward in 1969 would in decades to come turn into a Gordian Knot involving hundreds of lawyers, all trying to resolve once and for all just what part of Tolkien’s legacy he had retained and what part he had sold. In the media landscape of 1969, the merchandising rights to “tangible personal property” which Tolkien and United Artists had envisioned must have been limited to toys, trinkets, and souvenirs, probably associated with any films United Artists should choose to make based on Tolkien’s books. Should the law therefore limit the contract to its signers’ original intent, or should it be read literally? If the law chose the latter course, Tolkien had unknowingly sold off the videogame rights to his work before videogames even existed in anything but the most nascent form. Or did he really? Should videogames, being at their heart intangible code, really be lumped even by the literalists into the rights sold to United Artists? After all, the contract explicitly reserves “the right to utilize and/or dispose of all rights and/or interests not herein specifically granted” to Tolkien. This question only gets more fraught in our modern age of digital distribution, when games are often sold with no tangible component at all. And then what of tabletop games? They’re quite clearly neither novels nor paperbacks, but they might be, at least in part, “other printed published matter.” What precisely did that phrase mean? The contract doesn’t stipulate. In the absence of any clear pathways through this legal thicket, the history of Tolkien licensing would become that of a series of uneasy truces occasionally  erupting into open legal warfare. About the only things that were clear were that Tolkien — soon, his heirs — owned the rights to the original books and that United Artists — soon, the person who bought the contract from them — owned the rights to make movies out of them. Everything else was up for debate. And debated it would be, at mind-numbing length.

It would, however, be some time before the full ramifications of the document Tolkien had signed started to become clear. In the meantime, United Artists began moving forward with a film adaptation of The Lord of the Rings that was to have been placed in the hands of the director and screenwriter John Boorman. Boorman worked on the script for years, during which Tolkien died and his literary estate passed into the hands of his heirs, most notably his third son and self-appointed steward of his legacy Christopher Tolkien. The final draft of Boorman’s script compressed the entire trilogy into a single 150-minute film, and radically changed it in terms of theme, character, and plot to suit a Hollywood sensibility. For instance, Boorman added the element of sex that was so conspicuously absent from the books, having Frodo and Galadriel engage in a torrid affair after the Fellowship comes to Lothlórien. (Given the disparity in their sizes, one does have to wonder about the logistics, as it were, of such a thing.) But in the end, United Artists opted, probably for the best, not to let Boorman turn his script into a movie. (Many elements from the script would turn up later in Boorman’s Arthurian epic Excalibur.)

Of course, it’s unlikely that literary purity was foremost in United Artists’ minds when they made their decision. As the 1960s had turned into the 1970s and the Woodstock generation had gotten jobs and started families, Tolkien’s works had lost some of their trendy appeal, retaining their iconic status only among fantasy fandom. Still, the books continued to sell well; they would never lose the status, acquired almost from the moment the Ace editions were published, of being the bedrock of modern fantasy fiction, something everyone with even a casual interest in the genre had to at least attempt to read. Not being terribly easy books, they defeated plenty of these would-be readers, who went off in search of the more accessible, more contemporary-feeling epic-fantasy fare so many publishers were by now happily providing. Yet even among the readers it rebuffed, The Lord of the Rings retained the status of an aspirational ideal.

In 1975, a maverick animator named Ralph Bakshi, who had heretofore been best known for Fritz the Cat, the first animated film to earn an X rating, came to United Artists with a proposal to adapt The Lord of the Rings into a trio of animated features that would be relatively inexpensive in comparison to Boorman’s plans for a live-action epic. United Artists didn’t bite, but did signify that they might be amenable to selling the rights they had purchased from Tolkien if Bakshi could put together a few million dollars to make it happen. In December of 1976, following a string of proposals and deals too complicated and imperfectly understood to describe here, a hard-driving music and movie mogul named Saul Zaentz wound up owning the whole package of Tolkien rights that had previously belonged to United Artists. He intended to use his purchase first to let Bakshi make his films and thereafter for whatever other opportunities might happen to come down the road.

Saul Zaentz, seated at far left, with Creedence Clearwater Revival.

Saul Zaentz had first come to prominence back in 1967, when he’d put together a group of investors to buy a struggling little jazz label called Fantasy Records. His first signing as the new president of Fantasy was Creedence Clearwater Revival, a rock group he had already been managing. Whether due to Zaentz’s skill as a talent spotter or sheer dumb luck, it was the sort of signing that makes a music mogul rich for life. Creedence promptly unleashed eleven top-ten singles and five top-ten albums over the course of the next three and a half years, the most concentrated run of hits of any 1960s band this side of the Beatles. And Zaentz got his fair share of all that filthy lucre — more than his fair share, his charges eventually came to believe. When the band fell apart in 1972, much of the cause was infighting over matters of business. The other members came to blame Creedence’s lead singer and principal songwriter John Fogerty for convincing them to sign a terrible contract with Zaentz that gave away rights to their songs to him for… well, in perpetuity, actually. And as for Fogerty, he of course blamed Zaentz for all the trouble. Decades of legal back and forth followed the breakup. At one point, Zaentz sued Fogerty on the novel legal theory of “self-plagiarization”: the songs Fogerty was now writing as a solo artist, went the brief, were too similar to the ones he used to write for Creedence, all of whose copyrights Zaentz owned. While his lawyers pleaded his case in court, Fogerty vented his rage via songs like “Zanz Kant Danz,” the story of a pig who, indeed, can’t dance, but will happily “steal your money.”

I trust that this story gives a sufficient impression of just what a ruthless, litigious man now owned adaptation rights to the work of our recently deceased old Oxford don. But whatever else you could say about Saul Zaentz, he did know how to get things done. He secured financing for the first installment of Bakshi’s animated Lord of the Rings, albeit on the condition that he cut the planned three-film series down to two. Relying heavily on rotoscoping to give his cartoon figures an uncannily naturalistic look, Bakshi finished the film for release in November of 1978. Regarded as something of a cult classic among certain sectors of Tolkien fandom today, in its own day the film was greeted with mixed to poor reviews. The financial picture is equally muddled. While it’s been claimed, including by Bakshi himself, that the movie was a solid success, earning some $30 million on a budget of a little over $4 million, the fact remains that Zaentz was unable to secure funding for the sequel, leaving poor Frodo, Sam, and Gollum forever in limbo en route to Mount Doom. It is, needless to say, difficult to reconcile a successful first film with this refusal to back a second. But regardless of the financial particulars, The Lord of the Rings wouldn’t make it back to the big screen for more than twenty years, until the enormous post-millennial Peter Jackson productions that well and truly, once and for all, broke Middle-earth into the mainstream.

Yet, although the Bakshi adaptation was the only Tolkien film to play in theaters during this period, it wasn’t actually the only Tolkien film on offer. In November of 1977, a year before the Bakshi Lord of the Rings made its bow, a decidedly less ambitious animated version of The Hobbit had played on American television. The force behind it was Rankin/Bass Productions, who had previously been known in television broadcasting for holiday specials such as Rudolph the Red-Nosed Reindeer. Their take on Tolkien was authorized not by Saul Zaentz but by the Tolkien estate. Being shot on video rather than film and then broadcast rather than shown in theaters, the Rankin/Bass Hobbit was not, legally speaking, a “movie” under the terms of the 1969 contract. Nor was it a “tangible” product, thus making it fair game for the Tolkien estate to authorize without involving Zaentz. That, anyway, was the legal theory under which the estate was operating. They even authorized a sequel to the Rankin/Bass Hobbit in 1980, which rather oddly took the form of an adaptation of The Return of the King, the last book of The Lord of the Rings. A precedent of dueling licenses, authorizing different versions of what to casual eyes at least often seemed to be the very same things, was thus established.

But these flirtations with mainstream visibility came to an end along with the end of the 1970s. After the Ralph Bakshi and Rankin/Bass productions had all had their moments in the sun, The Lord of the Rings was cast back into its nerdy ghetto, where it remained more iconic than ever. Yet the times were changing in some very important ways. From the moment he had clear ownership of the rights Tolkien had once sold to United Artists, Saul Zaentz had taken to interpreting their compass in the broadest possible way, and had begun sending his lawyers after any real or alleged infringers who grew large enough to come to his attention. This marked a dramatic change from the earliest days of Tolkien fandom, when no one had taken any apparent notice of fannish appropriations of Middle-earth, to such an extent that fans had come to think of all use of Tolkien’s works as fair use. In that spirit, in 1975 a tiny game publisher called TSR, incubator of an inchoate revolution called Dungeons & Dragons, had started selling a non-Dungeons & Dragons strategy game called Battle of the Five Armies that was based on the climax of The Hobbit. In late 1977, Zaentz sent them a cease-and-desist letter demanding that the game be immediately taken off the market. And, far more significantly in the long run, he also demanded that all Tolkien references be excised from Dungeons & Dragons. It wasn’t really clear that Zaentz ought to have standing to sue, given that Battle of the Five Armies and especially Dungeons & Dragons consisted of so much of the “printed published matter” that was supposedly reserved to the Tolkien estate. But, hard charger that he was, Zaentz wasn’t about to let such niceties stop him. He was establishing legal precedent, and thereby cementing his position for the future.

The question of just how much influence Tolkien had on Dungeons & Dragons has been long obscured by this specter of legal action, which gave everyone on the TSR side ample reason to be less than entirely forthcoming. That said, certain elements of Dungeons & Dragons — most obviously the “hobbit” character class found in the original game — undeniably walked straight off the pages of Tolkien and into those of Gary Gygax’s rule books. At the same time, though, the mechanics of Dungeons & Dragons had, as Gygax always strenuously asserted, much more to do with the pulpier fantasy stories of Jack Vance and Robert E. Howard than they did with Tolkien. Ditto the game’s default personality, which hewed more to the “a group of adventurers meet in a bar and head out to bash monsters and collect treasure” modus operandi of the pulps than it did to Tolkien’s deeply serious, deeply moralistic, deeply tragic universe. You could play a more “serious” game of Dungeons & Dragons even in the early days, and some presumably did, but you had to bend the mechanics to make them fit. The more light-hearted tone of The Hobbit might seem better suited, but wound up being a bit too light-hearted, almost as much fairy tale as red-blooded adventure fiction. Some of the book’s episodes, like Bilbo and the dwarves’ antics with the trolls near the beginning of the story, verge on cartoon slapstick, with none of the swashbuckling swagger of Dungeons & Dragons. I love it dearly — far more, truth be told, than I love The Lord of the Rings — but not for nothing was The Hobbit conceived and marketed as a children’s novel.

Gygax’s most detailed description of the influence of Tolkien on Dungeons & Dragons appeared in the March 1985 issue of Dragon magazine. There he explicated the dirty little secret of adapting Tolkien to gaming: that the former just wasn’t all that well-suited for the latter without lots of sweeping changes.

Considered in the light of fantasy action adventure, Tolkien is not dynamic. Gandalf is quite ineffectual, plying a sword at times and casting spells which are quite low-powered (in terms of the D&D game). Obviously, neither he nor his magic had any influence on the games. The Professor drops Tom Bombadil, my personal favorite, like the proverbial hot potato; had he been allowed to enter the action of the books, no fuzzy-footed manling would have needed to undergo the trials and tribulations of the quest to destroy the Ring. Unfortunately, no character of Bombadil’s power can enter the games either — for the selfsame reasons! The wicked Sauron is poorly developed, virtually depersonalized, and at the end blows away in a cloud of evil smoke… poof! Nothing usable there. The mighty Ring is nothing more than a standard ring of invisibility, found in the myths and legends of most cultures (albeit with a nasty curse upon it). No influence here, either…

What Gygax gestures toward here but doesn’t quite touch is that The Lord of the Rings is at bottom a spiritual if not overtly religious tale, Middle-earth a land of ineffable unknowables. It’s impossible to translate that ineffability into the mechanistic system of causes and effects required by a game like Dungeons & Dragons. For all that Gygax is so obviously missing the point of Tolkien’s work in the extract above — rather hilariously so, actually — it’s also true that no Dungeon Master could attempt something like, say, Gandalf’s transformation from Gandalf the Grey to Gandalf the White without facing a justifiable mutiny from the players. Games — at least this kind of game — demand knowable universes.

Gygax claimed that Tolkien was ultimately far more important to the game’s commercial trajectory than he was to its rules. He noted, accurately, that the trilogy’s popularity from 1965 on had created an appetite for more fantasy, in the form of both books and things that weren’t quite books. It was largely out of a desire to ride this bandwagon, Gygax claimed, that Chainmail, the proto-Dungeons & Dragons which TSR released in 1971, promised players right there on the cover that they could use it to “refight the epic struggles related by J.R.R. Tolkien, Robert E. Howard, and other fantasy writers.” Gygax said that “the seeming parallels and inspirations are actually the results of a studied effort to capitalize on the then-current ‘craze’ for Tolkien’s literature.” Questionable though it is how “studied” his efforts really were in this respect, it does seem fairly clear that the biggest leg-up Tolkien gave to Gygax and his early design partner Dave Arneson was in giving so many potential players a taste for epic fantasy in the first place.

At any rate, we can say for certain that, beyond prompting a grudge in Gary Gygax against all things Tolkien — which, like most Gygaxian grudges, would last the rest of its holder’s life — Zaentz’s legal threat had a relatively modest effect on the game of Dungeons & Dragons. Hobbits were hastily renamed “halflings,” a handful of other references were scrubbed away or obfuscated, and life went on.

More importantly for Zaentz, the case against TSR and a few other even smaller tabletop-game publishers had now established the precedent that this field was within his licensing purview. In 1982, Tolkien Enterprises, the umbrella corporation Zaentz had created to manage his portfolio, authorized a three-employee publisher called Iron Crown Enterprises, heretofore known for the would-be Dungeons & Dragons competitor Rolemaster, to adapt their system to Middle-earth. Having won the license by simple virtue of being the first publisher to work up the guts to ask for it, Iron Crown went on to create Middle-earth Role Playing. The system rather ran afoul of the problem we’ve just been discussing: that, inspiring though so many found the setting in the broad strokes, the mechanics — or perhaps lack thereof — of Middle-earth just didn’t lend themselves all that well to a game. Unsurprisingly in light of this, Middle-earth Role Playing acquired a reputation as a “game” that was more fun to read, in the form of its many lengthy and lovingly detailed supplements exploring the various corners of Middle-earth, than it was to actually play; some wags took to referring to the line as a whole as Encyclopedia Middle-earthia. Nevertheless, it lasted more than fifteen years, was translated into twelve languages, and sold over 250,000 copies in English alone, thereby becoming one of the most successful tabletop RPGs ever not named Dungeons & Dragons.

But by no means was it all smooth sailing for Iron Crown. During the game’s early years, which were also its most popular, they were very nearly undone by an episode that serves to illustrate just how dangerously confusing the world of Tolkien licensing could become. In 1985, Iron Crown decided to jump on the gamebook bandwagon with a line of paperbacks they initially called Tolkien Quest, but quickly renamed to Middle-earth Quest to tie it more closely to their extant tabletop RPG. Their take on the gamebook was very baroque in comparison to the likes of Choose Your Own Adventure or even Fighting Fantasy; the rules for “reading” their books took up thirty pages on their own, and some of the books included hex maps for plotting your movements around the world, thus rather blurring the line between gamebook and, well, game. Demian Katz, who operates the definitive Internet site devoted to gamebooks, calls the Middle-earth Quest line “among the most complex gamebooks ever published,” and he of all people certainly ought to know. Whether despite their complexity or because of it, the first three volumes in the line were fairly successful for Iron Crown — and then the legal troubles started.

The Tolkien estate decided that Iron Crown had crossed a line with their gamebooks, encroaching on the literary rights to Tolkien which belonged to them. Whether the gamebooks truly were more book or game is an interesting philosophical question to ponder — particularly so given that they were such unusually crunchy iterations on the gamebook concept. Questions of philosophical taxonomy aside, though, they certainly were “printed published matter” that looked for all the world like everyday books. Tolkien Enterprises wasn’t willing to involve themselves in a protracted legal showdown over something as low-stakes as a line of gamebooks. Iron Crown would be on their own in this battle, should they choose to wage it. Deciding the potential rewards weren’t worth the risks of trying to convince a judge who probably wouldn’t know Dungeons & Dragons from Mazes and Monsters that these things which looked like conventional paperback books were actually something quite different, Iron Crown pulled the line off the market and destroyed all copies as part of a settlement agreement. The episode may have cost them as much as $2.5 million. A few years later, the ever-dogged Iron Crown would attempt to resuscitate the line after negotiating a proper license with the Tolkien estate — no mean feat in itself; Christopher Tolkien in particular is famously protective of that portion of his father’s legacy which is his to protect — but by then the commercial moment of the gamebook in general had passed. The whole debacle would continue to haunt Iron Crown for a long, long time. In 2000, when they filed for Chapter 11 bankruptcy, they would state that the debt they had been carrying for almost fifteen years from the original gamebook settlement was a big part of the reason.

By that point, the commercial heyday of the tabletop RPG was also long past. Indeed, already by the time that Iron Crown and Tolkien Enterprises had inked their first licensing deal back in 1982, computer-based fantasies, in the form of games like Zork, Ultima, and Wizardry, were threatening to eclipse the tabletop varieties that had done so much to inspire them. Here, perhaps more so even than in tabletop RPGs, the influence of Tolkien was pervasive. Designers of early computer games often appropriated Middle-earth wholesale, writing what amounted to interactive Tolkien fan fiction. The British text-adventure house Level 9, for example, first made their name with Colossal Adventure, a re-implementation of Will Crowther and Don Woods’s original Adventure with a Middle-earth coda tacked onto the end, thus managing the neat trick of extensively plagiarizing two different works in a single game. There followed two more Level 9 games set in Middle-earth, completing what they were soon proudly advertising, in either ignorance or defiance of the concept of copyright, as their Middle-earth Trilogy.

But the most famous constant devotee and occasional plagiarist of Tolkien among the early computer-game designers was undoubtedly Richard Garriott, who had discovered The Lord of the Rings and Dungeons & Dragons, the two influences destined more than any other to shape the course of his life, within six months of one another during his teenage years. Garriott called his first published game Akalabeth, after Tolkien’s Akallabêth, the name of a chapter in The Silmarillion, a posthumously published book of Middle-earth legends. The word means “downfall” in one of Tolkien’s invented languages, but Garriott chose it simply because he thought it sounded cool; his game otherwise had little to no explicit connection to Middle-earth. Regardless, the computer-game industry wouldn’t remain small enough that folks could get away with this sort of thing for very long. Akalabeth soon fell out of print, superseded by Garriott’s more complex series of Ultima games that followed it, while Level 9 was compelled to scrub the erstwhile Middle-earth Trilogy free of Tolkien and re-release it as the Jewels of Darkness Trilogy.

In the long run, the influence of Tolkien on digital games would prove subtler but also even more pervasive than these earliest forays into blatant plagiarism would imply. Richard Garriott may have dropped the Tolkien nomenclature from his subsequent games, but he remained thoroughly inspired by the example of Tolkien, that ultimate fantasy world-builder, when he built the world of Britannia for his Ultima series. Of course, there were obvious qualitative differences between Middle-earth and Britannia. How could there not be? One was the creation of an erudite Oxford don, steeped in a lifetime’s worth of study of classical and medieval literature; the other was the creation of a self-described non-reader barely out of high school. Nowhere is the difference starker than in the area of language, Tolkien’s first love. Tolkien invented entire languages from scratch, complete with grammars and pronunciation charts; Garriott substituted a rune for each letter in the English alphabet and seemed to believe he had done something equivalent. Garriott’s clumsy mishandling of Elizabethan English, meanwhile, all “thees” and “thous” in places where the formal “you” should be used, is enough to make any philologist roll over in his grave. But his heart was in the right place, and despite its creator’s limitations Britannia did take on a life of its own over the course of many Ultima iterations. If there is a parallel in computer gaming to what The Lord of the Rings and Middle-earth came to mean to fantasy literature, it must be Ultima and its world of Britannia.

In addition to the unlicensed knock-offs that were gradually driven off the market during the early 1980s and the more abstracted homages that replaced them, there was also a third category of Tolkien-derived computer games: that of licensed products. The first and only such licensee during the 1980s was Melbourne House, a book publisher turned game maker located in far-off Melbourne, Australia. Whether out of calculation or happenstance, Melbourne House approached the Tolkien estate rather than Tolkien Enterprises in 1982 to ask for a license. They were duly granted the right to make a text-adventure adaptation of The Hobbit, under certain conditions, very much in character for Christopher Tolkien, that were intended to ensure respect for The Hobbit’s status as a literary work; most notably, they would be required to include a paperback copy of the novel with the game. In a decision he would later come to regret, Saul Zaentz elected to cede this ground to the Tolkien estate without a fight, apparently deeming a computer game intangible enough to make quibbling over it dangerous. Another uneasy, tacit, yet surprisingly enduring precedent was thus set: Tolkien Enterprises would have control of Tolkien tabletop games, while the Tolkien estate would have control of Tolkien videogames. Zaentz’s cause for regret would come as he watched the digital-gaming market explode into tens and then hundreds of times the size of the tabletop market.

In fact, that first adaptation of The Hobbit played a role in that very process. The game became a sensation in Europe — playing it became a rite of passage for a generation of gamers there — and a substantial hit in the United States as well. It went on to become almost certainly the best-selling single text adventure ever made, with worldwide sales that may have exceeded half a million units. I’ve written at length about the Hobbit text adventure earlier, so I’ll refer you back to that article rather than describe its bold innovations and weird charm here. Otherwise, suffice to say that The Hobbit’s success proved, if anyone was doubting, that licenses in computer games worked in commercial terms, no matter how much some might carp about the lack of originality they represented.

Still, Melbourne House appears to have had some trepidation about tackling the greater challenge of adapting The Lord of the Rings to the computer. The reasons are understandable: the simple quest narrative that was The Hobbit — the book is actually subtitled There and Back Again — read like a veritable blueprint for a text adventure, while the epic tale of spiritual, military, and political struggle that was The Lord of the Rings represented, to say the least, a more substantial challenge for its would-be adapters. Melbourne House’s first anointed successor to The Hobbit thus became Sherlock, a text adventure based on another literary property entirely. They didn’t finally return to Middle-earth until 1986, four years after The Hobbit, when they made The Fellowship of the Ring into a text adventure. Superficially, the new game played much like The Hobbit, but much of the charm was gone, with quirks that had seemed delightful in the earlier game now just seeming annoying. Even had The Fellowship of the Ring been a better game, by 1986 it was getting late in the day for text adventures — even text adventures like this one with illustrations. Reviews were lukewarm at best. Nevertheless, Melbourne House kept doggedly at the task of completing the story of Frodo and the One Ring, releasing The Shadows of Mordor in 1987 and The Crack of Doom in 1989. All of these games went largely unloved in their day, and remain so in our own.

In a belated attempt to address the formal mismatch between the epic narrative of The Lord of the Rings and the granular approach of the text adventure, Melbourne House released War in Middle-earth in 1988. Partially designed by Mike Singleton, and drawing obvious inspiration from his older classic The Lords of Midnight, it was a strategy game which let the player refight the entirety of the War of the Ring, on the level of both armies and individual heroes. The Lords of Midnight had been largely inspired by Singleton’s desire to capture the sweep and grandeur of The Lord of the Rings in a game, so in a sense this new project had him coming full circle. But, just as Melbourne House’s Lord of the Rings text adventures had lacked the weird fascination of The Hobbit, War in Middle-earth failed to rise to the heights of The Lords of Midnight, despite enjoying the official license the latter had lacked.

As the 1980s came to a close, then, the Tolkien license was beginning to rival the similarly demographically perfect Star Trek license for the title of the most misused and/or underused — take your pick — in computer gaming. Tolkien Enterprises, normally the more commercially savvy and aggressive of the two Tolkien licensers, had ceded that market to the Tolkien estate, who seemed content to let Melbourne House dawdle along with an underwhelming and little-noticed game every year or two. At this point, though, another computer-game developer would pick up the mantle from Melbourne House and see if they could manage to do something less underwhelming with it. We’ll continue with that story next time.

Before we get to that, though, we might take a moment to think about how different things might have been had the copyrights to Tolkien’s works been allowed to expire with their creator. There is some evidence that Tolkien himself held to this as the fairest course. In the late 1950s, in a letter to one of the first people to approach him about making a movie out of The Lord of the Rings, he expressed his wish that any movie made during his lifetime not deviate too far from the books, citing as an example of what he didn’t want to see the 1950 movie of H. Rider Haggard’s Victorian adventure novel King Solomon’s Mines and the many liberties it took with its source material. “I am not Rider Haggard,” he wrote. “I am not comparing myself with that master of Romance, except in this: I am not dead yet. When the film of King Solomon’s Mines was made, it had already passed, one might say, into the public property of the imagination. The Lord of the Rings is still the vivid concern of a living person, and is nobody’s toy to play with.” Can we read into this an implicit assumption that The Lord of the Rings would become part of “the public property of the imagination” after its own creator’s death? If so, things turned out a little differently than he thought they would. A “property of the imagination” Middle-earth has most certainly become. It’s the “public” part that remains problematic.

(Sources: the books Designers & Dragons Volume 1 and Volume 2 by Shannon Appelcline, Tolkien’s Triumph: The Strange History of The Lord of the Rings by John Lennard, The Frodo Franchise: The Lord of the Rings and Modern Hollywood by Kristin Thompson, Unfiltered: The Complete Ralph Bakshi by John M. Gibson, Playing at the World by Jon Peterson, and Dungeons and Dreamers: The Rise of Computer Game Culture from Geek to Chic by Brad King and John Borland; Dragon Magazine of March 1985; Popular Computing Weekly of December 30 1982; The Times of December 15 2002. Online sources include Janet Brennan Croft’s essay “Three Rings for Hollywood” and The Hollywood Reporter‘s archive of a 2012 court case involving Tolkien’s intellectual property.)

 


The View from the Trenches (or, Some Deadly Sins of CRPG Design)

From the beginning of this project, I’ve worked to remove the nostalgia factor from my writing about old games, to evaluate each game strictly on its own merits and demerits. I like to think that this approach has made my blog a uniquely enlightening window into gaming history. Still, one thing my years as a digital antiquarian have taught me is that you tread on people’s nostalgia at your peril. Some of what I’ve written here over the years has certainly generated its share of heat as well as light, not so much among those of you who are regular readers and commenters — you remain the most polite, thoughtful, insightful, and just plain nice readers any writer could hope to have — as among the ones who fire off nasty emails from anonymous addresses, who post screeds on less polite sites to which I’m occasionally pointed, or who offer up their drive-by comments right here every once in a while.

A common theme of these responses is that I’m not worthy of writing about this stuff, whether because I wasn’t there at the time — actually, I was, but whatever — or because I’m just not man enough to take my lumps and power through the really evil, unfair games. This rhetoric of inclusion and exclusion is all too symptomatic of the uglier sides of gaming culture. Just why so many angry, intolerant personalities are so attracted to computer games is a fascinating question, but must remain a question for another day. For today I will just say that, even aside from their ugliness, I find such sentiments strange. As far as I know, there’s zero street cred to be gained in the wider culture from being good at playing weird old videogames — or for that matter from being good at playing videogames of any stripe. What an odd thing to construct a public persona around. I’ve made a job out of analyzing old games, and even I sometimes want to say, “Dude, they’re just old games! Really, truly, they’re not worth getting so worked up over.”

That said, there do remain some rays of light amidst all this heat. It’s true that my experience of these games today — of playing them in a window on this giant monitor screen of mine, or playing them on the go on a laptop — must be in some fairly fundamental ways different from the way the same games were experienced all those years ago. One thing that gets obviously lost is the tactile, analog side of the vintage experience: handling the physical maps and manuals and packages (I now reference that stuff as PDF files, which isn’t quite the same); drawing maps and taking notes using real pen and paper (I now keep programs open in separate windows on that aforementioned giant monitor for those purposes); listening to the chuck-a-chunk of disk drives loading in the next bit of text or scenery (replacing the joy of anticipation is the instant response of my modern supercomputer). When I allow myself to put on my own nostalgia hat, just for a little while, I recognize that all these things are intimately bound up with my own memories of playing games back in the day.

And I also recognize that the discrepancies between the way I play now and the way I played back then go even further. Some of the most treasured of vintage games weren’t so much single works to be played and completed as veritable lifestyle choices. Ultima IV, to name a classic example, was huge enough and complicated enough that a kid who got it for Christmas in 1985 might very well still be playing it by the time Ultima V arrived in 1988; rinse and repeat for the next few entries in the series. From my jaded perspective, I wouldn’t brand any of these massive CRPGs as overly well-designed in the sense of being a reasonably soluble game to be completed in a reasonable amount of time, but then that wasn’t quite what most of the people who played them way back when were looking for in them. Actually solving the games became almost irrelevant for a kid who wanted to live in the world of Britannia.

I get that. I really do. No matter how deep a traveler in virtual time delves into the details of any era of history, there are some things he can never truly recapture. Were I to try, I would have to go away to spend a year or two disconnected from the Web and playing no other game — or at least no other CRPG — than the Ultima I planned to write about next. That, as I hope you can all appreciate, wouldn’t be a very good model for a blog like this one.

When I think in the abstract about this journey through gaming history I’ve been on for so long now, I realize that I’ve been trying to tell at least three intertwining stories.

One story is a critical design history of games. When I come to a game I judge worthy of taking the time to write about in depth — a judgment call that only becomes harder with every passing year, let me tell you — I play it and offer you my thoughts on it, trying to judge it not only in the context of our times but also in the context of its own times, and in the context of its peers.

A second story is that of the people who made these games, and how they went about doing so — the inevitable postmortems, as it were.

Doing these first two things is relatively easy. What’s harder is the third leg of the stool: what was it like to be a player of computer games all those years ago? Sometimes I stumble upon great anecdotes in this area. For instance, did you know about Clancy Shaffer?

In impersonal terms, Shaffer was one of the slightly dimmer stars among the constellation of adventure-game superfans — think Roe Adams III, Shay Addams, Computer Gaming World‘s indomitable Scorpia — who parlayed their love of the genre and their talent for solving games quickly into profitable sidelines if not full-on careers as columnists, commentators, play-testers, occasionally even design consultants; for his part, Shaffer contributed his long experience as a player to the much-loved Sir-Tech title Jagged Alliance.

Most of the many people who talked with Shaffer via post, via email, or via telephone assumed he was pretty much like them, an enthusiastic gamer and technology geek in his twenties or thirties. One of these folks, Rich Heimlich, has told of a time when a phone conversation turned to the future of computer technology in the longer view. “Frankly,” said Shaffer, “I’m not sure I’ll even be here to see it.” He was, he explained to his stunned interlocutor, 84 years old. He credited his hobby for the mental dexterity that caused so many to assume he was in his thirties at the oldest. Shaffer believed he had stayed mentally sharp by puzzling his way through so many games, and he needed only glance at the schedule of upcoming releases in a magazine to have something to look forward to in life. Many of his friends who, like him, had retired twenty years before were dead or senile, a situation Shaffer blamed on their having failed to find anything to do with themselves after leaving the working world behind.

Shaffer died in 2010 at age 99. Only after his passing, after reading his obituary, did Heimlich and other old computer-game buddies realize what an extraordinary life Shaffer had actually led, encompassing an education from Harvard University, a long career in construction and building management, 18 patents in construction engineering, an active leadership role in the Republican party, a Golden Gloves championship in heavyweight boxing, and a long and successful run as a yacht racer and sailor of the world’s oceans. And yes, he had also loved to play computer games, parlaying that passion into more than 500 published articles.

But great anecdotes like this one from the consumption side of the gaming equation are the exception rather than the rule, not because they aren’t out there in spades in theory — I’m sure there have been plenty of other fascinating characters like Clancy Shaffer who have also made a passion for games a part of their lives — but because they rarely get publicized. The story of the players of vintage computer games is that of a huge, diffuse mass of millions of people whose individual stories almost never stretch beyond their immediate families and friends.

The situation becomes especially fraught when we try to zero in on the nitty-gritty details of how games were played and judged in their day. Am I as completely out of line as some have accused me of being in harping so relentlessly on the real or alleged design problems of so many games that others consider to be classics? Or did people back in the day, at least some of them, also get frustrated and downright angry at betrayals of their trust in the form of illogical puzzles and boring busywork? I know that I certainly did, but I’m only one data point.

One would think that the magazines, that primary link between the people who made games and those who played them, would be the best way of finding out what players were really thinking. In truth, though, the magazines rarely provided skeptical coverage of the games industry. The companies whose games they were reviewing were of course the very same companies that were helping to pay their bills by buying advertising — an obvious conflict of interest if ever there was one. More abstractly but no less significantly, there was a sense among those who worked for the magazines and those who worked for the game publishers that they were all in this together, living as they all were off the same hobby. Criticizing individual games too harshly, much less entire genres, could damage that hobby, ultimately damaging the magazines as much as the publishers. Thus when the latest heavily hyped King’s Quest came down the pipe, littered with that series’s usual design flaws, there was little incentive for the magazines to note that this monarch had no clothes.

So, we must look elsewhere to find out what average players were really thinking. But where? Most of the day-to-day discussions among gamers back in the day took place over the telephone, on school playgrounds, on computer bulletin boards, or on the early commercial online services that preceded the World Wide Web. While Jason Scott has done great work snarfing up a tiny piece of the online world of the 1980s and early 1990s, most of it is lost, presumably forever. (In this sense at least, historians of later eras of gaming history will have an easier time of it, thanks to archive.org and the relative permanence of the Internet.) The problem of capturing gaming as gamers knew it thus remains one without a comprehensive solution. I must confess that this is one reason I’m always happy when you, my readers, share your experiences with this or that game in the comments section — even, or perhaps especially, when you disagree with my own judgments on a game.

Still, relying exclusively on first-hand accounts from decades later to capture what it was like to be a gamer in the old days can be problematic in the same way that it can be problematic to rely exclusively on interviews with game developers to capture how and why games were made all those years ago: memories can fade, personal agendas can intrude, and those rose-colored glasses of nostalgia can be hard to take off. Pretty soon we’re calling every game from our adolescence a masterpiece and dumping on the brain-dead games played by all those stupid kids today — and get off my lawn while you’re at it. The golden age of gaming, like the golden age of science fiction, will always be twelve or somewhere thereabouts. All that’s fine for hoisting a beer with the other old-timers, but it can be worse than useless for doing serious history.

Thankfully, every once in a while I stumble upon another sort of cracked window into this aspect of gaming’s past. As many of you know, I’ve spent a couple of weeks over the last couple of years trolling through the voluminous (and growing) game-history archives of the Strong Museum of Play. Most of this material, hugely valuable to me though it’s been and will doubtless continue to be, focuses on the game-making side of the equation. Some of the archives, though, contain letters from actual players, giving that unvarnished glimpse into their world that I so crave. Indeed, these letters are among my favorite things in the archives. They are, first of all, great fun. The ones from the youngsters are often absurdly cute; it’s amazing how many liked to draw pictures to accompany their missives.

But it’s when I turn to the letters from older writers that I’m gratified and, yes, made to feel a little validated when I read that people were in fact noticing that games weren’t always playing fair with them. I’d like to share a couple of the more interesting letters of this type with you today.

We’ll begin with a letter from one Wes Irby of Plano, Texas, describing what he does and especially what he doesn’t enjoy in CRPGs. At the time he sent it to the Questbusters adventure-game newsletter in October of 1988, Irby was a self-described “grizzled computer adventurer” of age 43. Shay Addams, Questbusters’s editor, found the letter worthy enough to spread around among publishers of CRPGs. (Perhaps tellingly, he didn’t choose to publish it in his newsletter.)

Irby titles his missive “Things I Hate in a Fantasy-Role-Playing Game.” Taken on its own, it serves very well as a companion piece to a similar article I once wrote about graphic adventures. But because I just can’t shut up, and because I can’t resist taking the opportunity to point out places where Irby is unusually prescient or insightful, I’ve inserted my own comments into the piece; they appear in italics in the text that follows. Otherwise, I’ve only cleaned up the punctuation and spelling a bit here and there. The rest is Irby’s original letter from 1988.


I hate rat killing!!! In Shard of Spring, I had to kill dozens of rats, snakes, kobolds, and bats before I could get back to the tower after a Wind Walk to safety. In Wizardry, the rats were Murphy’s ghosts, which I pummeled for hours when developing a new character. Ultima IV was perhaps the ultimate rat-killing game of all time; hour upon hour was spent in tedious little battles that I could not possibly lose and that offered little reward for victory. Give me a good battle to test my mettle, but don’t sentence me to rat killing!

Amen. The CRPG genre became the victim of an expectation which took hold early on that the games needed to be really, really long, needed to consume dozens if not hundreds of hours, in order for players to get their money’s worth. With disk space precious and memory space even more so on the computers of the era, developers had to pad out their games with a constant stream of cheap low-stakes random encounters to reach that goal. Amidst the other Interplay materials hosted at the Strong archive are several mentions of a version of Wasteland, prepared specially for testers in a hurry, in which the random encounters were left out entirely. That’s the version of Wasteland I’d like to play.

I hate being stuck!!! I enjoy the puzzles, riddles, and quests as a way to give some story line to the real heart of the game, which is killing bad guys. Just don’t give me any puzzles I can’t solve in a couple of hours. I solved Rubik’s Cube in about thirty hours, and that was nothing compared to some of the puzzles in The Destiny Knight. The last riddle in Knight of Diamonds delayed my completion (and purchase of the sequel) for nearly six months, until I made a call to Sir-Tech.

I haven’t discussed the issue of bad puzzle design in CRPGs to the same extent as I have the same issue in adventure games, but suffice to say that just about everything I’ve written in the one context applies equally in the other. Certainly riddles remain among the laziest forms of puzzle in either genre (they require almost no programming effort to implement) and among the most problematic (they rely by definition on intuition and external cultural knowledge). Riddles aren’t puzzles at all really; the answer either pops into your head right away or it doesn’t, meaning the riddle turns into either a triviality or a brick wall. A good puzzle, by contrast, is one you can experiment with on your way to the correct solution. And as for the puzzles in The Bard’s Tale II: The Destiny Knight… much more on them a little later.

Perhaps the worst aspect of being stuck is the clue-book dilemma. Buying a clue book is demeaning. In addition, buying clue books could encourage impossible puzzles to boost the aftermarket for clue books. I am a reformed game pirate (that is how I got hooked), and I feel it is just as unfair for a company to charge me to finish the game I bought as it was for me to play the games (years ago) without paying for them. Multiple solutions, a la Might and Magic, are very nice. That game also had the desirable feature of allowing you to work on several things simultaneously so that being stuck on one didn’t bring the whole game to a standstill.

Here Irby brings up an idea I’ve also touched on once or twice: that the very worst examples of bad design can be read as not just good-faith disappointments but actual ethical lapses on the part of developers and publishers. Does selling consumers a game with puzzles that are insoluble except through hacking or the most tedious sort of brute-force approaches equate to breaching good faith by knowingly selling them a defective product? I tend to feel that it does.

As part of the same debate, the omnipresent clue books became a locus of much dark speculation and conspiracy theorizing back in the day. Did publishers, as Irby suggests, intentionally release games that couldn’t be solved without buying the clue book, thereby to pick up additional sales? The profit margins on clue books, not incidentally, tended to be much higher than those enjoyed by the games themselves. Still, the answer is more complicated than it may first appear. Based on my research into the industry of the time, I don’t believe that any publishers or developers made insoluble games with the articulated motive of driving clue-book sales. To the extent that there was an ulterior motive surrounding the subject of clue books, it was that the clue books would allow them to make money off some of the people who pirated their games. (Rumors — almost certainly false, but telling by their very presence — occasionally swirled around the industry about this or that popular title whose clue-book sales had allegedly outstripped the number of copies of the actual game which had been sold.) Yet the fact does remain that even the hope of using clue books as a way of getting money out of pirates required games that would be difficult enough to cause many pirates to go out and buy the book. The human mind is a funny place, and the clue-book business likely did create certain almost unconscious pressures on game designers to design less soluble games.

I hate no-fault life insurance! If there is no penalty, there is no risk, there is no fear — translate that to no excitement. The adrenaline actually surged a few times during play of the Wizardry series when I encountered a group of monsters that might defeat me. In Bard’s Tale II, death was so painless that I committed suicide several times because it was the most expedient way to return to the Adventurer’s Guild.

When you take the risk of loss out of the game, it might as well be a crossword puzzle. The loss of possessions in Ultima IV and the loss of constitution in Might and Magic were tolerable compromises. The undead status in Phantasie was very nice. Your character was unharmed except for the fact that no further advancement was possible. Penalties can be too severe, of course. In Shard of Spring, loss of one battle means all characters are permanently lost. Too tough.

Here Irby hits on one of the most fraught debates in CRPG design, stretching from the days of the original Wizardry to today: what should be the penalty for failure? There’s no question that the inability to save inside the dungeon was one of the defining aspects of Wizardry, the game that did more than any other to popularize the budding genre in the very early 1980s. Exultant stories of escaping the dreaded Total Party Loss by the skin of one’s teeth come up again and again when you read about the game. Andrew Greenberg and Bob Woodhead, the designers of Wizardry, took a hard-line stance on the issue, insisting that the lack of an in-dungeon save function was fundamental to an experience they had carefully crafted. They went so far as to issue legal threats against third-party utilities designed to mitigate the danger.

Over time, though, the mainstream CRPG industry moved toward the save-often, save-anywhere model, leaving Wizardry’s approach only to a hardcore sub-genre known as roguelikes. It seems clear that the change had some negative effects on encounter design; designers, assuming that players were indeed saving often and saving everywhere, felt they could afford to worry less about hitting players with impossible fights. Yet it also seems clear that many or most players, given the choice, would prefer to forgo the exhilaration of escaping near-disasters in Wizardry in favor of avoiding the consequences of the disasters they couldn’t escape. The best solution, it seems to me, is to make limited or unlimited saving a player-selectable option. Failing that, it strikes me as better to err on the side of generosity; after all, hardcore players can still capture the exhilaration and anguish of an iron-man mode by simply imposing their own rules for when they allow themselves to save. All that said, the debate will doubtless continue to rage.

I hate being victimized. Loss of life, liberty, etc., in a situation I could have avoided through skillful play is quite different from a capricious, unavoidable loss. The Amulet of Skill in Knight of Diamonds was one such situation. It was not reasonable to expect me to fail to try the artifacts I found — a fact I soon remedied with my backup disk!!! The surprise attacks of the mages in Wizardry was another such example. Each of the Wizardry series seems to have one of these, but the worst was the teleportation trap on the top level of Wizardry III, which permanently encased my best party in stone.

Beyond rather putting the lie to some of Greenberg and Woodhead’s claims of having exhaustively balanced the Wizardry games, these criticisms again echo those I’ve made in the context of adventure games. Irby’s examples are the CRPG equivalents of the dreaded adventure-game Room of Sudden Death — except that in CRPGs like Wizardry with perma-death, their consequences are much more dire than just having to go back to your last save.

I hate extraordinary characters! If everyone is extraordinary then extraordinary becomes extra (extremely) ordinary and uninteresting. The characters in Ultima III and IV and Bard’s Tale I and II all had the maximum ratings for all stats before the end of the game. They lose their personalities that way.

This is one of Irby’s subtler complaints, but also I think one of his most insightful. Characters in CRPGs are made interesting, as he points out, through a combination of strengths and weaknesses. I spent considerable time in a recent article describing how the design standards of SSI’s “Gold Box” series of licensed Dungeons & Dragons CRPGs declined over time, but couldn’t find a place for the example of Pools of Darkness, the fourth and last game in the series that began with Pool of Radiance. Most of the fights in Pools of Darkness are effectively unwinnable if you don’t have “extraordinary” characters, in that they come down to quick-draw contests to find out whether your party or the monsters can fire off devastating area-effect magic first. Your entire party needs to have a maxed-out dexterity score of 18 to hope to consistently survive these battles. Pools of Darkness thus rewards cheaters and punishes honest players; it represents a cruel betrayal of players who had played through the entire series honestly to that point, without availing themselves of character editors or the like. CRPGs should strive not to make the extraordinary ordinary, and they should certainly not demand extraordinary characters that the player can only come by through cheating.

There are several more features which I find undesirable, but are not sufficiently irritating to put them in the “I hate” category. One such feature is the inability to save the game in certain places or situations. It is miserable to find yourself in a spot you can’t get out of (or don’t want to leave because of the difficulty in returning) at midnight (real time). I have continued through the wee hours on occasion, much to my regret the next day. At other times it has gotten so bad I have dozed off at the keyboard. The trek from the surface to the final set of riddles in Ultima IV takes nearly four hours. Without the ability to save along the way, this doesn’t make for good after-dinner entertainment. Some of the forays in the Phantasie series are also long and difficult, with no provision to save. This problem is compounded when you have an old machine like mine that locks up periodically. Depending on the weather and the phase of the moon, sometimes I can’t rely on sessions that average over half an hour.

There’s an interesting conflict here, which I sense the usually insightful Irby may not have fully grasped, between his demand that death have consequences in CRPGs and his belief that he should be able to save anywhere. At the same time, though, it’s not an irreconcilable conflict. Roguelikes have traditionally made it possible to save anywhere by quitting the game, but they immediately delete the save when you resume playing, thus making it impossible to use later on as a fallback position.
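For the curious, here’s a minimal sketch in Python of that traditional roguelike scheme. The file name and the game-state structure are hypothetical, and real roguelikes are far more careful about things like crash recovery, but the essential trick is simply that the save file is consumed the moment it’s reloaded.

```python
import os
import pickle

SAVE_PATH = "savegame.dat"  # hypothetical file name

def save_and_quit(game_state):
    # Writing the state to disk and exiting is the only way to "save."
    with open(SAVE_PATH, "wb") as f:
        pickle.dump(game_state, f)
    raise SystemExit

def resume_or_start_new():
    # On launch, any existing save is loaded and then deleted immediately,
    # so it can never serve as a fallback position after a disaster.
    if os.path.exists(SAVE_PATH):
        with open(SAVE_PATH, "rb") as f:
            state = pickle.load(f)
        os.remove(SAVE_PATH)
        return state
    return {"hp": 20, "depth": 1, "inventory": []}  # a fresh character
```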

Still, it should always raise a red flag when a given game’s designers claim that something which just happened to be the easier choice from a technical perspective was actually a considered design choice. This skepticism should definitely be applied to Wizardry. Were the no-save dungeons that were such an integral part of the Wizardry experience really a considered design choice or a (happy?) accident arising from technical affordances? It’s very difficult to say this many years on. What is clear is that saving state in any sort of comprehensive way was a daunting challenge for 8-bit CRPGs spread over multiple disk sides. Wizardry and The Bard’s Tale didn’t really even bother to try; literally the only persistent data in these games and many others like them is the state of your characters, meaning not only that the dungeons are completely reset every time you enter them but that it’s possible to “win” them over and over again by killing the miraculously resurrected big baddie again and again. The 8-bit Ultima games did a little better, saving the state of the world map but not that of the cities or the dungeons. (I’ve nitpicked the extreme cruelty of Ultima IV’s ending, which Irby also references, enough on earlier occasions that I won’t belabor it any more here.) Only quite late in the day for the 8-bit CRPG did games like Wasteland work out ways to create truly, comprehensively persistent environments — in the case of Wasteland, by rewriting all of the data on each disk side on the fly as the player travels around the world (a very slow process, particularly in the case of the Commodore 64 and its legendarily slow disk drive).
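To make the distinction concrete, here’s a rough Python sketch of the character-only persistence model described above. The file name and level data are invented for illustration, and of course no actual 8-bit engine worked in anything like Python, but the shape of the problem is the same: if only the party roster ever touches the disk, the world must be rebuilt from its static definition on every visit.

```python
import json

ROSTER_PATH = "characters.json"  # hypothetical; the party is the only persistent data

# A static level definition, standing in for data read off the game disk.
LEVEL_ONE = {"boss_alive": True, "treasure_chests": 4}

def save_party(party):
    # The character roster is the only thing ever written back to disk.
    with open(ROSTER_PATH, "w") as f:
        json.dump(party, f)

def load_party():
    with open(ROSTER_PATH) as f:
        return json.load(f)

def enter_dungeon():
    # The dungeon is rebuilt fresh from its static definition on every visit,
    # so the chests refill and the big baddie is miraculously resurrected.
    return dict(LEVEL_ONE)
```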

Tedium is a killer. In Bard’s Tale there was one battle with 297 berserkers that always took fifteen or twenty minutes with the same results (this wasn’t rat-killing because the reward was significant and I could lose, maybe). The process of healing the party in the dungeon in Wizardry and the process of identifying discovered items in Shard of Spring are laborious. How boring it was in Ultima IV to stand around waiting for a pirate ship to happen along so I could capture it. The same can be said of sitting there holding down a key in Wasteland or Wrath of Denethenor while waiting for healing to occur. At least give me a wait command so I can read a book until something interesting happens.

I’m sort of ambivalent toward most aspects of mapping. A good map is satisfying and a good way to be sure nothing has been missed. Sometimes my son will use my maps (he hates mapping) in a game and find he is ready to go to the next level before his characters are. Mapping is a useful way to pace the game. The one irritating aspect of mapping is running off the edge of the paper. In Realms of Darkness mapping was very difficult because there was no “locater” or “direction” spell. More bothersome to me, though, was the fact that I never knew where to start on my paper. I had the same problem with Shard of Spring, but in retrospect that game didn’t require mapping.

Mapping is another area where the technical affordances of the earliest games had a major effect on their designs. The dungeon levels in most 8-bit CRPGs were laid out on grids of a consistent number of squares across and down; such a template minimized memory usage and simplified the programmer’s task enormously. Unrealistic though it was, it was also a blessing for mappers. Wizardry, a game that was oddly adept at turning its technical limitations into player positives, even included sheets of graph paper of exactly the right size in the box. Later games like Dungeon Master, whose levels sprawl everywhere, run badly afoul of the problem Irby describes above — that of maps “running off the edge of the paper.” In the case of Dungeon Master, it’s the one glaring flaw in what could otherwise serve as a masterclass in designing a challenging yet playable dungeon crawl.
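As a very loose illustration of why such fixed grids were so friendly to both programmers and mappers, here’s a Python sketch of a level stored as a small, constant-size array of wall flags. The grid size and flag values are invented, and no real 8-bit engine stored its maps in quite this way, but the memory arithmetic is the point: the whole level fits in a few hundred bytes and maps one-to-one onto a sheet of graph paper.

```python
GRID_SIZE = 16  # an assumed size; actual games varied

# One bit per wall direction in each cell.
WALL_N, WALL_E, WALL_S, WALL_W = 1, 2, 4, 8

def empty_level():
    # The whole level is GRID_SIZE x GRID_SIZE cells, one byte's worth of flags each.
    return [[0 for _ in range(GRID_SIZE)] for _ in range(GRID_SIZE)]

def add_wall(level, x, y, flag):
    level[y][x] |= flag

def has_wall(level, x, y, flag):
    return bool(level[y][x] & flag)
```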

I don’t like it when a program doesn’t take advantage of my second disk drive, and I would feel that way about my printer if I had one. I don’t like junk magic (spells you never use), and I don’t like being stuck forever with the names I pick on the spur of the moment. A name that struck my fancy one day may not on another.

Another problem similar to “junk magic” that only really began to surface around the time that Irby was writing this letter is junk skills. Wasteland is loaded with skills that are rarely or never useful, along with others that are essential, and there’s no way for the new player to identify which are which. It’s a more significant problem than junk magic usually is because you invest precious points into learning and advancing your skills; there’s a well-nigh irreversible opportunity cost to your choices. All of what we might call the second generation of Interplay CRPGs, which began with Wasteland, suffer at least somewhat from this syndrome. Like the sprawling dungeon levels in Dungeon Master, it’s an example of the higher ambitions and more sophisticated programming of later games impacting the end result in ways that are, at best, mixed in terms of playability.

I suppose you are wondering why I play these stupid games if there is so much about them I don’t like. Actually, there are more things I do like, particularly when compared to watching Gilligan’s Island or whatever the current TV fare is. I suppose it would be appropriate to mention a few of the things I do like.

In discussing the unavoidably anachronistic experience we have of old games today, we often note how many other games are at our fingertips — a luxury a kid who might hope to get one new game every birthday and Christmas most definitely didn’t enjoy. What we perhaps don’t address as much as we should is how much the entertainment landscape in general has changed. It can be a little tough even for those of us who lived through the 1980s to remember what a desert television was back then. I remember a television commercial — and from the following decade at that — in which a man checked into a hotel of the future, and was told that every movie ever made was available for viewing at the click of a remote control. Back then, this was outlandish science fiction. Today, it’s reality.

I like variety and surprises. Give me a cast of thousands over a fixed party anytime. Of course, the game designer has to force the need for multiple parties on me, or I will stick with the same group throughout because that is the best way to “win” the game. The Minotaur Temple in Phantasie I and the problems men had in Portsmouth in Might and Magic and the evil and good areas of Wizardry III were nice. More attractive are party changes for strategic reasons. What good are magic users in no-magic areas or a bard in a silent room? A rescue mission doesn’t need a thief and repetitive battles with many small opponents don’t require a fighter that deals heavy damage to one bad guy.

I like variety and surprises in the items found, the map, the specials encountered, in short in every aspect of the game. I like figuring out what things are and how they work. What a delight the thief’s dagger in Wizardry was! The maps in Wasteland are wonderful because any map may contain a map. The countryside contains towns and villages, the towns contain buildings, some buildings contain floors or secret passages. What fun!!!

I like missions and quests to pursue as I proceed. Some of these games are so large that intermediate goals are necessary to keep you on track. Might and Magic, Phantasie, and Bard’s Tale do a good job of creating a path with the “missions.” I like self-contained clues about the puzzles. In The Return of Heracles the sage was always there to provide an assist (for money, of course)  if you got stuck. The multiple solutions or sources of vital information in Might and Magic greatly enhanced the probability of completing the missions and kept the game moving.

I like the idea of recruiting new characters, as opposed to starting over from scratch. In Galactic Adventures your crew could be augmented by recruiting survivors of a battle, provided they were less experienced than your leader. Charisma (little used in most games) could impact recruiting. Wasteland provides for recruiting of certain predetermined characters you encounter. These NPCs can be controlled almost like your characters and will advance with experience. Destiny Knight allows you to recruit (with a magic spell) any of the monsters you encounter, and requires that some specific characters be recruited to solve some of the puzzles, but these NPCs can’t be controlled and will not advance in level, so they are temporary members. They will occasionally turn on you, an interesting twist!!!

I like various skills, improved by practice or training for various characters. This makes the characters unique individuals, adding to the variety. This was implemented nicely in both Galactic Adventures and Wasteland.

Eternal growth for my characters makes every session a little different and intriguing. If the characters “top out” too soon that aspect of the game loses its fascination. Wizardry was the best at providing continual growth opportunities because of the opportunity to change class and retain some of the abilities of the previous class. The Phantasie series seemed nicely balanced, with the end of the quest coming just before/as my characters topped out.

Speaking of eternal, I have never in all of my various adventures had a character retire because of age. Wizardry tried, but it never came into play because it was cheaper to heal at the foot of the stairs while identifying loot (same trip or short run to the dungeon for that purpose). Phantasie kept up with age, but it never affected play. I thought Might and Magic might, but I found the Fountain of Youth. The only FRPG I have played where you had to beat the clock is Tunnels of Doom, a simple hack-and-slash on my TI 99/4A that takes about ten hours for a game. Of course, it is quite different to spend ten hours and fail because the king died than it is to spend three months and fail by a few minutes. I like for time to be a factor to prevent me from being too conservative.

This matter of time affecting play really doesn’t fit into the “like” or the “don’t like” because I’ve never seen it effectively implemented. There are a couple of other items like that on my wish list. For example, training of new characters by older characters should take the place of slugging it out with Murphy’s ghost while the newcomers watch from the safety of the back row.

The placing of time limits on a game sounds to me like a very dangerous proposal. It was tried in 1989, the year after Irby wrote this letter, by The Magic Candle, a game that I haven’t played but that is quite well-regarded by the CRPG cognoscenti. That game was, however, kind enough to offer three difficulty levels, each with its own time limit, and the easiest level was generous enough that most players report that time never became a major factor. I don’t know of any game, even from this much crueler era of game design in general, that was cruel enough to let you play 100 hours or more and then tell you you’d lost because the evil wizard had finished conquering the world, thank you very much. Such an approach might have been more realistic than the alternative, where the evil wizard cackles and threatens occasionally but doesn’t seem to actually do much, but, as Sid Meier puts it, fun ought to trump realism every time in game design.

A very useful feature would be the ability to create my own macro consisting of a dozen or so keystrokes. Set up Control-1 through Control-9 and give me a simple way to specify the keystrokes to be executed when one is pressed.

Interestingly, this exact feature showed up in Interplay’s CRPGs very shortly after Irby wrote this letter, beginning with the MS-DOS version of Wasteland in March of 1989. And we do know that Interplay was one of the companies to which Shay Addams sent the letter. Is this a case of a single gamer’s correspondence being responsible for a significant feature in later games? The answer is likely lost forever to the vagaries of time and the inexactitude of memory.
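For what it’s worth, the idea is simple enough to sketch in a few lines of Python. The key names and the way keystrokes get fed to the game are hypothetical stand-ins rather than Interplay’s actual implementation, but they show the shape of what Irby was asking for.

```python
macros = {}  # e.g. {"ctrl-1": ["a", "a", "f", "ENTER"]}

def define_macro(slot, keystrokes):
    # Bind a short sequence of keystrokes to one of the Ctrl-number slots.
    macros[slot] = list(keystrokes)

def play_macro(slot, send_key):
    # Replay the recorded keystrokes through the game's input handler.
    for key in macros.get(slot, []):
        send_key(key)

# A trivial usage example: "attack, attack, confirm" bound to Ctrl-1.
define_macro("ctrl-1", ["a", "a", "ENTER"])
play_macro("ctrl-1", send_key=print)
```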

A record of sorts of what has happened during the game would be nice. The chevron in Wizardry and the origin in Phantasie are the most I’ve ever seen done with this. How about a screen that told me I had played 93 sessions, used 4 divine interventions (restore backup), completed 12 quests, raised characters from the dead 47 times, and killed 23,472 monsters? Cute, huh?

Another crazily prescient proposal. These sorts of meta-textual status screens would become commonplace in CRPGs in later years. In this case, though, “later years” means much later. Thus, rather than speculating on whether he actively drove the genre’s future innovations, we can credit Irby this time merely with predicting them.

One last suggestion for the manufacturers: if you want back that little card you put in each box, offer me something I want. For example, give me a list of all the other nuts in my area code who have purchased this game and returned their little cards.

Enough of this; Wasteland is waiting.


With some exceptions — the last suggestion, for instance, would be a privacy violation that would make even the NSA raise an eyebrow — I agree with most of Irby’s positive suggestions, just as I do with his complaints. It strikes me as I read through his letter that my own personal favorite among 8-bit CRPGs, Pool of Radiance, manages to avoid most of Irby’s pitfalls while implementing much from his list of desirable features — further confirmation of just what remarkable pieces of work that game and, to an only slightly lesser extent, its sequel Curse of the Azure Bonds really were. I hope Wes Irby got a chance to play them.

I have less to say about the second letter I’d like to share with you, and will thus present it without in-line commentary. This undated letter was sent directly to Interplay by its writer: Thomas G. Gutheil, an associate professor at the Harvard Medical School Department of Psychiatry, on whose letterhead it’s written. Its topic is The Bard’s Tale II: The Destiny Knight, a game I’ve written about only in passing but one with some serious design problems in the form of well-nigh insoluble puzzles. Self-serving though it may be, I present Gutheil’s letter to you today as one more proof that players did notice the things that were wrong with games back in the day — and that my perspective on them today therefore isn’t an entirely anachronistic one. More importantly, Gutheil’s speculations are still some of the most cogent I’ve ever seen on how bad puzzles make their way into games in the first place. For this reason alone, it’s eminently worthy of being preserved for posterity.


I am writing you a combination fan letter and critique in regard to the two volumes of The Bard’s Tale, of which I am a regular and fanatic user.

First, the good news: this is a TERRIFIC game, and I play it with addictive intensity, approximately an hour almost every day. The richness of the graphics, the cute depictions of the various characters, monsters, etc., and the rich complexity and color of the mazes, tasks, and issues, as well as the dry wit that pervades the program, make it a superb piece and probably the best maze-type adventure product on the market today. I congratulate you on this achievement.

Now, the bad news: the one thing I feel represents a defect in your program (and I only take your time to comment on it because it is so central), and perhaps the only area where the Wizardry series (of which I am also an avid player and expert) is superior, is the notion of the so-called puzzles, a problem which becomes particularly noticeable in the “snares of death” in the second scenario. In all candor, speaking as an old puzzle taker and as a four-time grand master of the Boston Phoenix Puzzle Contest, I must say that these puzzles are simply too personal and idiosyncratic to be fair to the player. I would imagine you are doing a booming business in clue books, since without them many of the puzzles simply cannot be accomplished short of hours of frustrating work, most of it highly speculative.

Permit me to try to clarify this point, since I am aware of the sensitive nature of these comments, given that I would imagine you regard the puzzles as being the “high art” of the game design. There should be an organic connection between the clues and the puzzles. For example, in Wizardry (sorry to plug the competition), there is a symbolic connection between the clue and its function. As one example, at the simplest level a bear statuette gets you through a gate guarded by a bear, a key opens a particular door, and a ship-in-a-bottle item gets you across an open expanse of water.

Let me try to contrast this with some of the situations in your scenarios. You may recall that in one of the scenarios the presence of a “winged one” in the party was necessary to get across a particular chasm. The Winged One introduces himself to the party as one of almost a thousand individual wandering creatures that come and offer to join the party, to be attacked, or to be left in peace. This level of dilution and the failure to separate out the Winged One in some way make him practically impossible to recall much later on when you need him, particularly since there are several levels of dungeon (and in real life perhaps many interposing days and weeks) between the time you meet the Winged One (who does not stand out among the other wandering characters in any particular way) and the time you actually need him. Even if (as I do) you keep notes, there would be no particular reason to record this creature out of all the others. Moreover, to have this added character stuck in your party for long periods of time, when you could instead have the many-times more effective demons, Kringles, salamanders, etc., would seem strategically self-defeating and therefore counter-intuitive for the normal strategy of game play AS IT IS ACTUALLY PLAYED.

This is my point: in many ways the puzzles in your scenarios seem to have been designed by someone who is not playing the game in the usual sequence, but rather, as it were, from the viewpoint of the programmer, who looks at the scenario “from above” — that is, with omniscient knowledge. In many situations the maze fails to take into account the fact that parties will not necessarily explore it in the predictable, direct sequence you have imagined. The flow of doors and corridors does not appropriately guide a player so that they will take the puzzles in a meaningful sequence. Thus, when one gets a second clue before a first clue, only confusion results, and it is rarely resolved as the play advances.

Every once in a while you do catch on, as when something like the rock-scissors-paper game is invoked in your second scenario. That’s generally playing fair, although not everyone has played that game or would recognize it in the somewhat cryptic form in which it is presented. In that case the player does not gain the satisfaction of using intellect in problem solving; instead, it’s the frustration of playing “guess what I’m thinking” with the author.

Despite all of the above criticism, the excitement and the challenge of playing the game still make it uniquely attractive; as you have no doubt caught on, I write because I care. I have had to actively fight the temptation to simply hack my way through the “snares of death” by direct cribbing from the clue books, so that I could get on to the real interest of the game, which is working one’s way through the dungeons and encountering the different items, monsters, and challenges. I believe that this impatience with the idiosyncratic (thus fundamentally unfair) design of these puzzles represents an impediment, and I would be interested to know if others have commented on this. Note that it doesn’t take any more work for the programmer, but merely a shift of viewpoint to make the puzzles relevant and fair to the reader and also proof against being taken “out of order,” which largely confuses the meaning. A puzzle that is challenging and tricky is fair; a puzzle that is idiosyncratically cryptic may not be.

Thank you for your attention to this somewhat long-winded letter; it was important to me to write. Given how much I care for this game and how devoted I am to playing it and to awaiting future scenarios, I wanted to call your attention to this issue. You need not respond personally, but I would of course be interested in any of your thoughts on this.


I conclude this article as a whole by echoing Gutheil’s closing sentiments; your feedback is the best part of writing this blog. I hope you didn’t find my musings on the process of doing history too digressive, and most of all I hope you found Wes Irby’s and Thomas Gutheil’s all-too-rare views from the trenches as fascinating as I did.

 
