
The Commodore 64

As I described in my last article, many people were beginning to feel that change was in the air as they observed the field of videogame consoles and the emerging market for home computers during the middle part of 1982. If a full-fledged computer was to take the place of the Atari VCS in the hearts of America’s youth, which of the plethora of available machines would it be? IBM had confidently expected theirs to become the one platform to rule them all, but the IBM PC was not gaining the same traction in the home that it was enjoying in business, thanks to an extremely high price and lackluster graphics. Apple was still the media darling, but the only logical contender they could offer for the segment, the Apple II Plus, was looking increasingly aged. Its graphics capabilities, so remarkable for existing at all back in 1977, had barely been upgraded since, and weren’t really up to the sort of colorful action games the kids demanded. Nor was its relatively high price doing it any favors. Another contender was the Atari 400/800 line. Although introduced back in late 1979, these machines still had amongst the best graphics and sound capabilities on the market. On the other hand, the 400 model, with its horrid membrane keyboard, was cost-reduced almost to the point of unusability, while the 800 was, once again, just a tad on the expensive side. And Atari itself, still riding the tidal wave that was the VCS, showed little obvious interest in improving or promoting this tiny chunk of its business. Then of course there was Radio Shack, but no one — including them — seemed to know just what they were trying to accomplish with a pile of incompatible machines of wildly different specifications and prices all labeled “TRS-80.” And there was the Commodore VIC-20, which had validated for many people the whole category of home computer in the first place. Its price was certainly right, but it was just too limited to have long legs.

The TI-99/4A. Note the prominent port for “Solid State Software” to the right of the keyboard.

The most obvious contender came from an unexpected quarter. Back in early 1980, the electronics giant Texas Instruments had released a microcomputer called the TI-99/4. Built around a CPU of TI’s own design, it was actually the first 16-bit machine to hit the market. It had a lot of potential, but also a lot of flaws and oddities to go with its high price, and went nowhere. Over a year later, in June of 1981, TI tried again with an updated version, the TI-99/4A. The new model had just 16 K of RAM, but TI claimed more was not necessary. Instead of using cassettes or floppy disks, they sold software on cartridges, a technique they called “Solid State Software.” Since the programs would reside in the ROM of the cartridge, they didn’t need to be loaded into RAM; that needed to be used only for the data the programs manipulated. The idea had some real advantages. Programs loaded instantly and reliably, something that couldn’t be said for many other storage techniques, and left the user to fiddle with fragile tapes or disks only to load and save her data files. To many people this just felt more like the way a consumer-electronics device ought to work — no typing arcane commands and then waiting and hoping, just pop a cartridge in and turn the thing on. The TI-99/4A also had spectacularly good graphics, featuring sprites: little objects that were independent of the rest of the screen and could be moved about with very little effort on the part of the computer or its programmer. They were ideal for implementing action games; in a game of Pac-Man, for instance, the title character and each of the ghosts would be implemented as a sprite. Of the other contenders, only the Atari 400 and 800 offered sprites — as did, tellingly, all of the game consoles. Indeed, sprites were considered something of a necessity for a really first-rate gaming system. With these virtues plus a list price of just $525, the TI-99/4A was a major hit right out of the gate, selling in numbers to rival the even cheaper but much less capable VIC-20. It would peak at the end of 1982 with a rather extraordinary (if short-lived) 35 percent market share, and would eventually sell in the neighborhood of 2.5 million units.

With the TI-99/4A so hot that summer of 1982, the one wildcard — the one obstacle to anointing it the king of home computers — was a new machine just about to ship from Commodore. It was called the Commodore 64, and it would change everything. Its story had begun the previous year with a pair of chips.

In January of 1981 some of the engineers at Commodore’s chipmaking subsidiary, MOS Technologies, found themselves without a whole lot to do. The PET line had no major advancements in the immediate offing, and the VIC-20’s design was complete (and already released in Japan, for that matter). Ideally they would have been working on a 16-bit replacement for the 6502, but Jack Tramiel was uninterested in funding such an expensive and complicated project, a choice that stands as amongst the stupidest of a veritable encyclopedia of stupidity written by Commodore management over the company’s chaotic life. With that idea a nonstarter, the engineers hit upon a more modest project: to design a new set of graphics and sound chips that would dramatically exceed the capabilities of the VIC-20 and (ideally) anything else on the market. Al Charpentier would make a graphics chip to be called the VIC-II, the successor to the VIC chip that gave the VIC-20 its name. Bob Yannes would make a sound synthesizer on a chip, the Sound Interface Device (SID). They took the idea to Tramiel, who gave them permission to go ahead, as long as they didn’t spend too much.

In deciding what the VIC-II should be, Charpentier looked at the graphics capabilities of all of the computers and game machines currently available, settling on three as the most impressive, and thus the ones critical to meet or exceed: the Atari 400 and 800, the Mattel Intellivision console, and the soon-to-be-released TI-99/4A. Like all of these machines, the VIC-II chip would have to have sprites. In fact, Charpentier spent the bulk of his time on them, coming up with a very impressive design that allowed up to eight onscreen sprites in multiple colors. (Actually, as with so many features of the VIC-II and the SID, this was only the beginning. Clever programmers would quickly come up with ways to reuse the same sprite objects, thus getting even more moving objects on the screen.) For the display behind the sprites, Charpentier created a variety of character-based and bitmapped modes, with palettes of up to 16 colors at resolutions of up to 320 × 200. On balance, the final design did indeed exceed or at least match the aggregate capabilities of anything else on the market. It offered fewer colors than the Atari’s 128, for example, but a much better sprite system; fewer total sprites (without trickery) than the TI-99/4A’s 32, but bigger and more colorful ones, and with about the same background display capabilities.
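Commodore’s own answer to these machines was still a year away at this point, but it’s worth pausing to see just how little effort sprites actually demand of the programmer. Here’s a minimal sketch in the BASIC of the eventual Commodore 64 — assuming the standard VIC-II register base of 53248 ($D000) and the machine’s default memory layout — that defines a sprite and then slides it across the screen by poking a single position register:

10 V=53248 : REM BASE ADDRESS OF THE VIC-II'S REGISTERS ($D000)
20 FOR I=0 TO 62:POKE 832+I,255:NEXT : REM FILL A 63-BYTE BLOCK WITH A SOLID SHAPE
30 POKE 2040,13 : REM POINT SPRITE 0 AT BLOCK 13 (13*64 = 832, THE CASSETTE BUFFER)
40 POKE V+39,1 : REM SPRITE 0 COLOR: WHITE
50 POKE V,100:POKE V+1,100 : REM SPRITE 0 STARTING X AND Y POSITION
60 POKE V+21,1 : REM TURN SPRITE 0 ON
70 FOR X=0 TO 255:POKE V,X:NEXT : REM GLIDE IT ACROSS THE SCREEN, NO REDRAWING NEEDED

The chip composites the sprite over the background entirely on its own; the CPU never touches screen memory at all, which is exactly why sprites were considered a necessity for a first-rate gaming system.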

If the VIC-II was an evolutionary step for Commodore, the SID was a revolution in PC and videogame sound. Bob Yannes, just 24 years old, had been fascinated by electronic sound for much of his life, devouring early electronica records like those by Kraftwerk and building simple analog synthesizers from kits in his garage. Hired by MOS right out of university in 1978, he felt like he had been waiting his entire working life for just this project. An amateur musician himself, he was appalled by the sound chips that other engineers thought exceptional, like that in the Atari 400 and 800. From a 1985 IEEE Spectrum article on the making of the Commodore 64:

The major differences between his chip and the typical videogame sound chips, Yannes explained, were its more precise frequency control and its independent envelope generators for shaping the intensity of a sound. “With most of the sound effects in games, there is either full volume or no volume at all. That really makes music impossible. There’s no way to simulate the sound of any instrument even vaguely with that kind of envelope, except maybe an organ.”

Although it is theoretically possible to use the volume controls on other sound chips to shape the envelope of a sound, very few programmers had ever tackled such a complex task. To make sound shaping easy, Yannes put the envelope controls in hardware: one register for each voice to determine how quickly a sound builds up; two to determine the level at which the note is sustained and how fast it reaches that level; and one to determine how fast the note dies away. “It took a long time for people to understand this,” he conceded.

But programmers would come to understand it in the end, and the result would be a whole new dimension to games and computer art. The SID was indeed nothing short of a full-fledged synthesizer on a chip. With three independent voices at its disposal, its capabilities in skilled hands are amazing; the best SID compositions still sound great today. Games had beeped and exploded and occasionally even talked for years. Now, however, the emotional palette game designers had to paint on would expand dramatically. The SID would let them express deep emotions through sound and (especially) music, from stately glory to the pangs of romantic love, from joy to grief.
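To make Yannes’s envelope generators concrete: every note on the SID passes through an attack, decay, sustain, and release phase, programmed through a pair of registers per voice. Below is a minimal sketch in Commodore 64 BASIC, assuming the standard SID register base of 54272 ($D400); the frequency values are approximations, and differ slightly between PAL and NTSC machines:

10 S=54272 : REM BASE ADDRESS OF THE SID'S REGISTERS ($D400)
20 FOR I=0 TO 24:POKE S+I,0:NEXT : REM CLEAR THE CHIP TO A KNOWN STATE
30 POKE S+24,15 : REM MASTER VOLUME TO MAXIMUM
40 POKE S+5,9 : REM VOICE 1 ENVELOPE: SHORT ATTACK, MEDIUM DECAY
50 POKE S+6,90 : REM VOICE 1 ENVELOPE: MID SUSTAIN LEVEL, LONG RELEASE
60 POKE S,69:POKE S+1,29 : REM 16-BIT FREQUENCY VALUE, ROUGHLY CONCERT A
70 POKE S+4,33 : REM GATE ON WITH A SAWTOOTH WAVE: THE ATTACK BEGINS
80 FOR T=1 TO 500:NEXT : REM HOLD THE NOTE AT ITS SUSTAIN LEVEL
90 POKE S+4,32 : REM GATE OFF: THE NOTE DIES AWAY AT THE RELEASE RATE

That rise-and-fall shaping is precisely what the “full volume or no volume at all” chips Yannes complained about could not do.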

In November of 1981 the MOS engineers brought their two chips, completed at last, to Tramiel to find out what he’d like to do with them. He decided that they should put them into a successor to the VIC-20, to be tentatively titled the VIC-40. In the midst of this discussion, it emerged that the MOS engineers had one more trick up their sleeves: a new variant of the 6502 called the 6510 which offered an easy way to build an 8-bit computer with more than 48 K of RAM by using a technique called bank switching.

Let’s stop here for just a moment to consider why this should have been an issue at all. Both the Zilog Z80 and the MOS 6502 CPUs that predominated among early PCs are 8-bit chips with 16-bit address buses. The latter number is the one that concerns us right now; it means that the CPU is capable of addressing up to 64 K of memory (2 to the 16th power being 65,536 bytes). So why the 48 K restriction? you might be asking. Well, you have to remember that a computer does not only address RAM; there is also the need for ROM. In the 8-bit machines, the ROM usually contains a BASIC-based operating environment along with a few other essentials like the glyphs used to form characters on the screen. All of this usually consumes about 16 K, leaving 48 K of the CPU’s address space to be mapped to RAM. With the arrival of the 48 K Apple II Plus in 1979, the industry largely settled on this as both the practical limit for a Z80- or 6502-based machine and the configuration that marked a really serious, capable PC. There were some outliers, such as Apple’s Language Card that let a II Plus be expanded to 64 K of RAM by dumping BASIC entirely in favor of a Pascal environment loaded from disk, but the 48 K limit was largely accepted as just a fact of life for most applications.

With the 6510, however, the MOS engineers added some circuitry to the 6502 to make it easy to swap pieces of the address space between two (or more) alternatives. Below is an illustration of the memory of the eventual Commodore 64.

Commodore 64 memory map

Ignoring the I/O block as out of scope for this little exercise, let’s walk through this. First we have 1 K of RAM used as a working space to hold temporary values and the like (i.e., the program stack). Then 1 K is devoted to storing the current contents of the screen. Next comes the biggest chunk, 38 K for actual BASIC programs. Then 8 K of ROM, which stores the BASIC language itself. Then comes another 4 K of “high RAM” that’s gotten trapped behind the BASIC ROM; this is normally inaccessible to the BASIC programmer unless she knows some advanced techniques to get at it. Then 4 K of ROM to hold the glyphs for the standard onscreen character set. Finally, 8 K of kernel, storing routines for essential functions like reading the keyboard or interacting with cassette or disk drives. All of this would seem to add up to a 44 K RAM system, with only 40 K of it easily accessible. But notice that each piece of ROM has RAM “underneath” it. Thanks to the special circuitry on the 6510, a programmer can swap RAM for ROM if she likes. Programming in assembly language rather than BASIC? Swap out the BASIC ROM, and get another 8 K of RAM, plus easy, contiguous access to that high block of another 4 K. Working with graphics instead of words, or would prefer to define your own font? Swap out the character ROM. Taking over the machine entirely, and thus not making so much use of the built-in kernel routines? Swap the kernel for another 8 K of RAM, and maybe just swap it back in from time to time when you want to actually use something there.
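As a concrete if minimal illustration — in the BASIC of the finished machine, and assuming the 6510’s processor port at location 1 with its documented LORAM/HIRAM/CHAREN control bits — here is the classic trick for reading the character ROM from a running program. The system’s interrupt has to be paused first, because the ROM gets swapped in at the very addresses where the I/O chips normally live:

10 POKE 56334,PEEK(56334) AND 254 : REM PAUSE THE TIMER DRIVING THE SYSTEM INTERRUPT
20 POKE 1,PEEK(1) AND 251 : REM CLEAR CHAREN: CHARACTER ROM REPLACES I/O AT $D000
30 FOR I=0 TO 7:POKE 12288+I,PEEK(53248+I):NEXT : REM COPY THE GLYPH FOR "@" INTO RAM
40 POKE 1,PEEK(1) OR 4 : REM SET CHAREN AGAIN: I/O CHIPS BACK IN VIEW
50 POKE 56334,PEEK(56334) OR 1 : REM RESUME THE INTERRUPT

Machine-language programmers did the same dance with the LORAM and HIRAM bits to claim the RAM hiding under the BASIC and kernel ROMs.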

Commodore 64 startup screen

The above will hopefully answer the most common first question of a new Commodore 64 user, past or present: Why does my “64 K RAM system” say it has only 38 K free for BASIC? The rest of the memory is there, but only for those who know how to get at it and who are willing to forgo the conveniences of BASIC. I should emphasize here that the concept of bank switching was hardly an invention of the MOS engineers; it’s a fairly obvious approach, after all. Apple had already used the technique to pack a full 128 K of RAM into a 6502-based computer of their own, the failed Apple III (about which more in the very near future). The Apple III, however, was an expensive machine targeted at businesses and professionals. The Commodore 64 was the first to bring the technique to the ordinary consumer market. Soon it would be everywhere, giving the venerable 6502 and Z80 new leases on life.

Jack Tramiel wasn’t a terribly technical fellow, and likely didn’t entirely understand what an extra 16 K of memory would be good for in the first place. But he knew a marketing coup when he saw one. Thus the specifications of the new machine were set: a 64 K system built around MOS’s three recent innovations — the 6510, the VIC-II, and the SID. The result should be cheap enough to produce that Commodore could sell it for less than $600. Oh, and please have a prototype ready for the January 1982 Winter CES show, less than two months away.

With so little time and such harsh restrictions on production costs, Charpentier, Yannes, and the rest of their team put together the most minimalist design they could to bind those essential components together. They even managed to get enough of it done to have something to show at Winter CES, where the “VIC-40” was greeted with excitement on the show floor but polite skepticism in the press. Commodore, you see, had a well-earned reputation, dating from the days when the PET was the first of the trinity of 1977 to be announced and shown but the last to actually ship, for over-promising at events like these and delivering late or not at all. Yet when Commodore showed the machine again in June at the Summer CES — much more polished, renamed the Commodore 64 to emphasize what Tramiel and Commodore’s marketing department saw as its trump card, and still promised for less than $600 — the press had to start paying major attention. Within a couple of months it started shipping. The new machine was virtually indistinguishable from the VIC-20 in external appearance because Commodore hadn’t been willing to spend the time or money to design a new case.

The Commodore 64

Inside it was one hell of a machine for the money, although not without its share of flaws that a little more time, money, and attention to detail during the design process could have easily corrected.

The BASIC housed in its ROM (“BASIC 2.0”) was painfully antiquated. It was actually the same BASIC that Tramiel had bought from Microsoft for the original PET back in 1977. Bill Gates, in a rare display of naivete, sold him the software outright for a flat fee of $10,000, figuring Commodore would have to come back soon for another, better version. He obviously didn’t know Jack Tramiel very well. Ironically, Commodore did have on hand a better BASIC 4.0 they had used in some of the later PET models, but Tramiel nixed using it in the Commodore 64 because it would require 16 K rather than 8 K of ROM chips to house, at greater expense. People were already getting a lot for their money, he reasoned. Why should they expect a decent BASIC as well? The Commodore 64’s BASIC was not only primitive, but completely lacked commands to actually harness the machine’s groundbreaking audiovisual capabilities; graphics and sound could be accomplished in BASIC only by using “peek” and “poke” commands to access registers and memory locations directly, an extremely awkward, inefficient, and ugly way of programming. If the memory restrictions on BASIC weren’t enough to convince would-be game programmers to learn assembly language, this certainly did. The Commodore 64’s horrendous BASIC likely accelerated an already ongoing flight from the language amongst commercial game developers. For the rest of the 1980s, game development and assembly language would go hand in hand.

Due to a whole combination of factors — including miscommunication among marketing, engineering, and manufacturing, an ultimately pointless desire to be hardware compatible with the VIC-20, component problems, cost-cutting, and the sheer rush of putting a product together in such a limited time frame — the Commodore 64 ended up saddled with a disk system that would become, even more than the primitive BASIC, the albatross around the platform’s neck. It’s easily the slowest floppy-disk system ever sold commercially, on the order of thirty times slower than Steve Wozniak’s masterpiece, the Apple II’s Disk II system. Interacting with disks from BASIC 2.0, which was written before disk drives existed on PCs, requires almost as much patience as does waiting for a program to load. For instance, you have to type LOAD"$",8 followed by LIST just to get a directory listing. As an added bonus, doing so wipes out any BASIC program you might have happened to have in memory.

The disk system’s flaws frustrate because they dissipate a lot of potential strengths. Commodore had had a unique approach to disk drives ever since producing their first for the PET line circa 1979. A Commodore disk drive is a smart device, containing its own 6502 CPU as well as ROM and 2 K of RAM. The DOS used on other computers like the Apple II to tell the computer how to control the drive, manage the filesystem, etc., is unnecessary on a Commodore machine. The drive can control itself very well, thank you very much; it already knows all about that stuff. This brings some notable advantages. No separate DOS has to be loaded into the computer’s RAM, eating precious memory. DOS 3.3, for example, the standard on the Apple II Plus at the time of the Commodore 64’s introduction, eats up more than 10 K of that machine’s precious 48 K of RAM. Thus the Commodore 64’s memory edge was in practical terms even more significant than it appeared on paper. Because it’s possible to write small programs and load them into the drive’s RAM for the drive’s own CPU to execute, the whole system was a delight for hackers. One favorite trick was to load a disk-copying program into a pair of drives, then physically disconnect them from the computer. They would continue happily copying disks on their own, as long as the user kept putting more disks in. More practically for average users, it was often possible for games to play music or display animated graphics while simultaneously loading from the drive. Other computers’ CPUs were usually too busy controlling the drive to manage this. Of course, this was a very good feature for this particular computer, because Commodore 64 users would be spending a whole lot more time than users of other computers waiting for their disk drives to load their programs.
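A small sketch of what that smartness looks like in practice, again in Commodore 64 BASIC (the filename here is just a hypothetical stand-in): the computer never manipulates the disk itself, it simply passes a command string to the drive’s onboard DOS over the command channel and reads the drive’s status back in reply.

10 OPEN 15,8,15 : REM CHANNEL 15 IS THE DRIVE'S COMMAND/ERROR CHANNEL
20 PRINT#15,"S0:OLDFILE" : REM ASK THE DRIVE'S OWN DOS TO SCRATCH (DELETE) A FILE
30 INPUT#15,EN,EM$,ET,ES : REM READ STATUS: ERROR NUMBER, MESSAGE, TRACK, SECTOR
40 PRINT EN;EM$;ET;ES
50 CLOSE 15

Everything after the PRINT#15 happens inside the drive; the 6502 in the computer is free to go about its business in the meantime.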

Quality-control issues plagued the entire Commodore 64 line, especially in the first couple of years. One early reviewer had to return two machines before Commodore shipped him one that worked; some early shipments to stores were allegedly 80 percent dead on arrival. To go with all of their other problems, the disk drives were particularly unreliable. In one early issue, Compute!’s Gazette magazine stated that four of the seven drives in their offices were currently dead. The poor BASIC and unfriendly operating environment, the atrocious disk system, and the quality-control issues, combined with no option for getting the 80-column display considered essential for word processing and much other business software, kept the Commodore 64 from being considered seriously by most businesses as an alternative to the Apple II or IBM PC. Third-party solutions did address many of the problems. Various improved BASICs were released as plug-in cartridges, and various companies rewrote the systems software to improve transfer speeds by a factor of six or more. But businesses wanted machines that just worked for them out of the box, which Apple and IBM largely gave them while Commodore did not.

None of that mattered much to Commodore, at least for now, because they were soon selling all of the Commodore 64s they could make for use in homes. No, it wasn’t a perfect machine, not even with its low price (one that dropped still further virtually by the month), its luxurious 64 K of memory, its versatile graphics, and its marvelous SID chip. But, like the Sinclair Spectrum that was debuting almost simultaneously in Britain, it was the perfect machine for this historical moment. Also like the Spectrum, it heralded a new era in its home country, where people would play — and make — games in numbers that dwarfed what had come before. For a few brief years, the premier mainstream gaming platform in the United States would be a full-fledged computer rather than a console — the only time, before or since, that that has happened. We’ll talk more about the process that led there next time.

(As you might expect, much of this article is drawn from Brian Bagnall’s essential history of Commodore. The IEEE Spectrum article referenced above was also a gold mine.)

Summer Camp is Over

It’s difficult to exaggerate just what a phenomenon Atari and their VCS console were in the United States of the very early 1980s. The raw figures are astounding; nothing else I’ve written about on this blog holds a candle to Atari’s mainstream cultural impact. By the beginning of 1982 the rest of the business of their parent company, the longstanding media conglomerate Warner Communications, looked almost trivial in comparison. Atari reaped six times as much profit as Warner’s entire music division; five times as much as the film division. By the middle of the year 17 percent of American households owned a videogame console, up from 9 percent at the same time of the previous year. Atari all but owned this exploding market, to the tune of an 80 percent share. The company’s very name had become a synonym for videogames, like Kleenex is for tissue. People didn’t ask whether you played videogames; they asked whether you played Atari. As the company ramped up for the big Christmas season with their home version of the huge arcade hit Pac-Man as well as a licensed adaptation of the blockbuster movie of the year, E.T., they confidently predicted sales increases of 50 percent over the previous Christmas. But then, on December 7, they shocked the business world by revising those estimates radically downward, to perhaps a 10 or 15 percent increase. Granted, plenty of businesses would still love to have growth like that, but the fact remained that Atari for the first time had underperformed. Change was in the air, and everyone could sense it.

Those who had been watching closely and thoughtfully could feel the winds of change already the previous summer, when Atari’s infamously substandard version of Pac-Man sold in massive numbers, but not in quite such massive numbers as the company and their boosters had predicted; when sales on the Mattel Intellivision and the brand new ColecoVision soared, presumably at the expense of Atari’s aged VCS; when Commodore continued to aggressively market their low-cost home computers as a better alternative to a games console, and continued to be rewarded with huge sales. The big question became what form the post-VCS future of gaming would take, assuming it didn’t just fade away like the Hula Hoop fad to which videogames were so often compared. There were two broad schools of thought, each of which would prove to be right and wrong in its own way. Some thought that the so-called “second generation” consoles, like the ColecoVision, would pick up Atari’s slack and the console videogame industry would continue as strong as ever. Others, however, looked to the PC industry, which VisiCalc and the IBM PC had legitimized even as Commodore was proving that people would buy computers for the home in huge numbers if the price was right. The VIC-20 may have been only modestly more capable than the Atari VCS, but as a proof of concept of sorts it certainly got people’s attention. With prices dropping and new, much more capable machines on the horizon, many analysts cast their lot with the home computer as the real fruition of the craze that the Atari VCS had started. Full-fledged computers could offer so much better, richer experiences than the consoles thanks to their larger memories, their ability to display text, their keyboards, their disk-based storage. The newest computers had much better graphics and sound than their console counterparts to boot. And of course you could do more than play games with a computer, like write letters or help Junior learn BASIC as a leg-up for a computer world soon to come.

An increasing appreciation of the potential of home computers and computer games by the likes of Wall Street meant big changes for the pioneers I’ve been writing about on this blog. Although most of the signs of these changes would not be readily visible to consumers until the following year, 1982 was the year that Big Capital started flowing into the computer-game (as opposed to the console-centric videogame) industry. Slick new companies like Electronic Arts were founded, and old-media corporations started forming software divisions of their own. The old guard of pioneers would have to adapt to the new professionalism or die, a test many — like The Software Exchange, Adventure International, California Pacific, Muse, and Edu-Ware, among dozens of others — would fail. The minority that survived — like On-Line Systems (about to be rechristened Sierra On-Line), Brøderbund, Automated Simulations (about to be rechristened Epyx), Penguin, and Infocom — did so by giving their scruffy hacker bona fides a shave and a haircut, hiring accountants and MBAs and PR firms (thus the name changes), and generally starting to behave like real companies. Thanks to John Williams, who once again was generous enough to share his memories with me, I can write about how this process worked within On-Line Systems in some detail. The story of their transformative 1982, the year that summer camp ended, begins with a venture-capital firm.

TA Associates was founded in 1968 as one of the first of the new breed of VC firms. From the beginning, they were also one of the most savvy, often seeing huge returns on their investments while building a reputation for investing in emerging technologies like recombinant DNA and gene splicing at just the right moment. They were one of the first VC firms to develop an interest in the young PC industry, thanks largely to Jacqueline Morby, a hungry associate who came to TA (and to a career in business) only at age 40 in 1978, after raising her children. While many of her peers rushed to invest in hardware manufacturers like industry darling Apple, Morby stepped back and decided that software was where the action was really going to be. It’s perhaps difficult today to fully appreciate what a brave decision that was. Software was still an uncertain, vaguely (at best) understood idea among businesspeople at the time, as opposed to the concrete world of hardware. “Because it was something you couldn’t see, you couldn’t touch, you couldn’t hold,” she later said to InfoWorld, “it was a frightening thing to many investors.” For her first big software investment, in 1980, Morby backed what would ultimately prove to be the wrong horse: she invested in Digital Research, makers of CP/M, rather than Microsoft. Her record after that, however, would be much better, as she and TA maintained a reputation throughout the 1980s as one of (if not the) major players in software VC. She described her approach in a recent interview:

If you talk to enough entrepreneurs, you quickly figure out which of their ventures are the most promising. First, I would consider the year they were formed. If a company was three years old and employed 100 people, that meant something was going right. Then, after researching what their products did, I’d call them — cold. In those days, nobody called on the presidents of companies to say, “Hi, I’m an investor and I’m interested in you. Might I come out to visit and introduce myself?” But most of the companies said, “Come on out. There’s no harm in talking.” My calling companies led to many, many investments throughout the years.

When you look at the potential of a company, the most important questions to consider are, “How big is its market and how fast is it growing?” If the market is only $100 million, it’s not worth investing. The company can’t get very big. Many engineers never ask these questions. They just like the toys that they’re inventing. So you find lots of companies that are going to grow to $5 million or so in sales, but never more, because the market for their products is not big enough.

By 1982, Morby, now a partner with TA thanks to her earlier software success, had become interested in investing in an entertainment-software company. If computer games were indeed to succeed console games once people grew tired of the limitations of the Atari VCS and its peers, the potential market was going to be absolutely huge. After kicking tires around the industry, including at Brøderbund, she settled on On-Line Systems as just the company for her — unique enough to stand out with its scenic location and California attitude but eager to embrace the latest technologies, crank out hits, and generally take things to the next level.

When someone offers you millions of dollars virtually out of the blue, you’re likely to think that this is all too good to be true. And indeed, venture capital is always a two-edged sword, as many entrepreneurs have learned to their chagrin. TA’s money would come only with a host of strings attached: TA themselves would receive a 24 percent stake in On-Line Systems; Morby and some of her colleagues would sit on the board and have a significant say in the company’s strategic direction. Most of all, everyone would have to clean up their act and start acting like professionals, starting with the man at the top. Steven Levy described Ken Williams in his natural habitat in Hackers:

Ken’s new office was just about buried in junk. One new employee later reported that on first seeing the room, he assumed that someone had neglected to take out a huge, grungy pile of trash. Then he saw Ken at work, and understood. The twenty-eight-year-old executive, wearing his usual faded blue Apple Computer T-shirt and weather-beaten jeans with a hole in the knee, would sit behind the desk and carry on a conversation with employees or people on the phone while going through papers. The T-shirt would ride over Ken’s protruding belly, which was experiencing growth almost as dramatic as his company’s sales figures. Proceeding at lightning pace, he would glance at important contracts and casually throw them in the pile. Authors and suppliers would be on the phone constantly, wondering what had happened to their contracts. Major projects were in motion at On-Line for which contracts hadn’t been signed at all. No one seemed to know which programmer was doing what; in one case two programmers in different parts of the country were working on identical game conversions. Master disks, some without backups, some of them top secret IBM disks, were piled on the floor of Ken’s house, where one of his kids might pick it up or his dog piss on it. No, Ken Williams was not a detail person.

If Ken was not detail-oriented, he did possess a more valuable and unusual trait: the ability to see his own weaknesses. He therefore acceded wholeheartedly to TA’s demands that he hire a squad of polished young managers with suits, resumes, and business degrees. He even let TA field most of the candidates. He hired as president Dick Sunderland, a fellow he had worked for before the birth of On-Line — a man loathed by the hackers under him as too pedantic, too predictable, too controlling, too boring. To Ken (and TA) this sounded like just the sober medicine On-Line would need to compete in the changing industry.

Which is not to say that all of this new professionalism didn’t also come with its attendant dangers. John Williams states frankly today that “some of those new managers came in with the idea that they would run the business after they pushed Ken to the side or out.” (It wasn’t clear to the Williamses whether they came up with that idea on their own or TA subtly conveyed it to them during the hiring process.) Ken also clashed constantly with his own hire Sunderland; the latter would be gone again within a year. Ken was walking a difficult line, trying to instill the structure his company needed to grow and compete and be generally taken seriously by the business community without entirely losing his original vision of a bunch of software artisans creating together in the woods. As org charts started getting stapled to walls, file cabinets started turning up locked, and executive secretaries started appearing as gatekeepers outside the Williamses’ offices, many of the old guard saw that vision as already dying. Some of them left. Needless to say, Ken no longer looked for their replacements in the local liquor store.

Ken proved amazingly adept at taking the good advice his new managers had to offer while remaining firmly in charge. After a while, most accepted that he wasn’t going anywhere and rewarded him with a grudging respect. Much of their advice involved the face that On-Line presented to the outer world. For a long time now everyone had agreed that the name “On-Line Systems,” chosen by Ken back when he had envisioned a systems software company selling a version of FORTRAN for microcomputers, was pretty awful — “generic as could be and dull as dishwater” in John Williams’s words. They decided on the new name of “Sierra On-Line.” The former part conveyed the unique (and carefully cultivated) aura of backwoods artisans that still clung to the company even in these more businesslike days, while the latter served as a bridge to the past as well as providing an appropriate high-tech flourish (in those times “On-Line” still sounded high-tech). They had a snazzy logo featuring a scenic mountain backdrop drawn up, and revised and slicked-up their packaging. The old Hi-Res Adventure line was now SierraVenture; the action games SierraVision.

Sierra hired Barbara Hendra, a prominent New York PR person, to work further on their image. Surprisingly, the erstwhile retiring housewife Roberta was a big force behind this move; her success as a game designer had revealed an unexpected competitive streak and a flair for business of her own. Hendra nagged Roberta and especially Ken — he of the faded, paunch-revealing tee-shirt and the holey jeans — about their dress and mannerisms, teaching them how to interact with the movers and shakers in business and media. She arranged a string of phone interviews and in-person visits from inside and outside the trade press, including a major segment on the prime-time news program NBC Magazine. Ken was good with these junkets, but Roberta — pretty, chic, and charming — was the real star, Sierra’s PR ace in the hole, the antithesis of the nerds so many people still associated with computer games. When someone like Roberta said that computer games were going to be the mass-market entertainment of the future, it somehow sounded more believable than it did coming from a guy like Ken.

In the midst of all this, another windfall all but fell into Sierra’s lap. Christopher Cerf, a longtime associate of The Children’s Television Workshop of Sesame Street fame, approached them with some vague ideas about expanding CTW into software. From there discussions moved in the direction of a new movie being developed by another CTW stalwart: Jim Henson, creator of the Muppets. For Ken, who had been frantically reading up on entertainment and media in order to keep up with the changes happening around his company, the idea of working with Henson was nothing short of flabbergasting, and not just because the Muppets were near the apogee of their popularity on the heels of two hit movies, a long-running television series, and a classic Christmas special with John Denver. John Williams:

Ken developed a kind of worship for two men as he began to study up on entertainment. One was Walt Disney and the second was Jim Henson. Both were men who were enablers — not known as much for their own artistry so much as their ability to bring artists and business together to make really big things happen — and that was what Ken strived for. Walt was already gone of course, but Henson was still alive.

Ken Williams (right) hobnobbing with Jim Henson

The almost-completed movie was called The Dark Crystal. In the works on and off for five years, it marked a major departure for Henson and his associates. Although it was populated with the expected cast of puppets and costumed figures (and not a single recognizable human), there were no Muppets to be found in it. It was rather a serious — even dark — fantasy tale set in a richly organic landscape of the fantastic conceived by Henson’s creative partner on the project, designer and illustrator Brian Froud. In an early example of convergence culture, Henson and friends were eager to expand the world of the movie beyond the screen. They already planned a glossy hardcover book, a board and a card game, and a traveling art exhibit. Now an adventure game, to be designed by Roberta, sounded like a good idea. Such a major media partnership was a first for a computer-game publisher, although Atari had been doing licensed games for some time now for the VCS. Anyone looking for a sign that computer games were hitting the big time needed to look no further.

The Dark Crystal

For the Williamses, the changes that the venture capitalists had brought were nothing compared to this. Suddenly they were swept into the Hollywood/Manhattan media maelstrom, moving in circles so rarefied they’d barely realized existed outside of their televisions. John Williams again:

I remember this time very well. Let me put it in a very personal perspective. I’m like 22 or 23. A guy who grew up in Wheaton, Illinois (which is just down the street from absolutely nowhere) and currently living in a town of like 5000 people 50 miles from the nearest traffic light. Now imagine this young wet-behind-the-ears punk walking through the subways and streets of Manhattan with Jim Henson, getting interviewed on WNBC talk radio while wearing his first real tailored suit. Eating at “21” with Chris Cerf, and taking limos to meet with publishing companies on Times Square. That was me – and I was just along for the ride. For Ken and Roberta, it was on a whole other level.

Much of the Williamses’ vision for computerized entertainment, of games as the next great storytelling medium to compete with Hollywood, was forged during this period. If they had ever doubted their own vision for Sierra, hobnobbing with the media elite convinced them that this stuff was going to get huge. Years before the technology would become practical, they started toying with the idea of hiring voice actors and considering how Screen Actors Guild contracts would translate to computer games.

But for here and now there was still The Dark Crystal, in the form of both movie and game. Both ended up a bit underwhelming as actual works when set against what they represented to Sierra and the computer-game industry.

The movie is in some ways an extraordinary achievement, a living alien world built from Styrofoam, animatronics, and puppets. It’s at its most compelling when the camera simply lingers over the landscape and its strange inhabitants. Unfortunately, having created this world, Henson and company don’t seem quite sure what to do with it. The story is an unengaging quest narrative which pits an impossibly, blandly good “chosen one,” the Gelfling Jen, against the impossibly evil race of the Skeksis. It’s all rather painfully derivative of The Lord of the Rings: two small protagonists carry an object of great power into danger, with even a Gollum stand-in to dog their steps. Nor do the endless melodramatic voiceovers or the hammy voice acting do the film any favors. It’s a mystery to whom this film was really meant to appeal — too dark and disturbing for children, too hokey and simplistic for adults, and with none of the wit and joy that marked Henson’s Muppets. There have been attempts in recent years to cast the movie, a relative commercial disappointment in its time, as a misunderstood masterpiece. I’m not buying it. The Dark Crystal is no Blade Runner.

The game is similarly difficult to recommend. Like The Hobbit, The Dark Crystal‘s quest narrative maps unusually well to an adventure game, but Roberta showed none of the technical ambition that Veronika Megler displayed in making a game of her source material. The Dark Crystal suffers from the same technical and design flaws that mark all of the Hi-Res Adventure line: absurd puzzles, bad parser, barely-there world model, you’ve heard the litany before from me. In the midst of the problems, however, there are more nods toward story than we’re used to seeing in our old-school adventure games, even if they sometimes smack more of the necessities born of doing a movie adaptation than a genuine striving to advance the medium. Early on we get by far the longest chunk of expository text to make it into any of the Hi-Res Adventure line.

The Dark Crystal

Unusually, the game is played in the third person, with you guiding the actions of the movie’s hero Jen and, later, both Jen and his eventual sidekick/tentative love interest, Kira. The duality of this is just odd; you never quite know who will respond to your commands. The third-person perspective extends to the graphics, which show Jen and Kira as part of each scene.

The Dark Crystal

As Carl Muckenhoupt mentions in his (highly recommended) posts about the game, it’s tempting to see the graphics as a transitional step between the first-person perspective of Roberta’s earlier Hi-Res Adventure games and the fully animated adventure games that she would make next — games that would have you guiding your onscreen avatar about an animated world in real time. It’s also very possible that working with the fleshed-out story and world of someone else inspired Roberta to push her own future original works further in the direction of real storytelling. Notably, before The Dark Crystal none of her games bothered to define their protagonists or even give them names; after it, all of them did.

Whatever influence it had on Roberta’s design approach, the fact remains that she seemed less passionate about The Dark Crystal itself than she had been about her previous games. With the licensing deal having been finalized as the movie was all but ready for release, The Dark Crystal was what John Williams euphemistically calls a “compressed timeline” game. Roberta spent only a month or so on the design while dealing with all of the distractions of her new life in the spotlight, then turned the whole thing over to Sierra’s team of in-house programmers and artists. It all feels a bit rote. John:

The simple truth is that the whole of the Dark Crystal project was, in the end, a business decision and not really driven by our developers or our creative people. I think that’s really why this is one of the least cared about and least remembered products in the Sierra stable. Look back at that game and there’s really none of Roberta’s imagination in there – and the programmers, artists, etc., involved were basically mimicking someone else’s work and creating someone else’s vision. The lack of passion shows.

The player must not so much do what seems correct for the characters in any given situation as try to recreate the events of the film. If she succeeds, she’s rewarded with… exactly what she already saw in the movie.

The Dark Crystal

Adapting a linear story to an interactive medium is much more difficult than it seems. This is certainly one of the least satisfying ways to approach it. The one nod toward the dynamism that marks The Hobbit is a pair of minions sent by the Skeksis to hunt you down: an intelligent bat and a Garthim, a giant, armored, crab-like creature with fearsome pincers. If you are spotted in the open by the bat, you have a limited amount of time to get under cover — trees, a cave, or the like — before a Garthim comes to do you in. That’s kind of impressive given the aging game engine, and it does help with the mimesis that so many of the game’s other elements actively work against. But alas, it’s just not enough.

Even with the rushed development schedule, the game didn’t arrive in stores until more than a month after the movie’s December 7, 1982, premiere. After, in other words, the big Christmas buying season. That, along with the movie’s lukewarm critical reception and somewhat disappointing performance at the box office, likely contributed to The Dark Crystal not becoming the hit that Sierra had expected. Its sales were disappointing enough to sour Sierra on similar licensing deals for years to come. Ken developed a new motto: “I don’t play hits, I make them.”

Of course, it also would have been unwise to blame The Dark Crystal‘s underperformance entirely on timing or on its being tied to the fate of the movie. The old Hi-Res Adventure engine, which had been so amazing in the heyday of The Wizard and the Princess, was getting creaky with age, and had long since gone past the point of diminishing commercial returns; not only The Dark Crystal but also its immediate predecessor, the epic Time Zone, had failed to meet sales expectations. This seventh Hi-Res Adventure would therefore be the last. Clearly it was time to try something new if Sierra intended to keep their hand in adventure games. That something would prove to be as revolutionary a step as had been Mystery House. The Dark Crystal, meanwhile, sneaked away into history largely unloved and unremembered, one of the first of a long industry tradition of underwhelming, uninspired movie cash-ins. The fact that computer games had reached a stage where such cash-ins could exist is ultimately the most important thing about it.

If you’d like to try The Dark Crystal for yourself despite my criticisms, here are the Apple II disk images and the manual.

(As always, thanks to John Williams for his invaluable memories and insights on these days of yore. In addition to the links embedded in the text, Steven Levy’s Hackers and the old Atari history Zap! were also wonderful sources. Speaking of Atari histories: I look forward to diving into Marty Goldberg and Curt Vendel’s new one.)

The Hobbit Redux

Sometimes I get things wrong. Usually it’s minor errors that come down to a careless moment or something that got wedged between the teeth of my rusting steel trap of a mind. Luckily, you folks who read what I write almost always come through to correct me when I make mistakes or even when I overreach. Something like that happened with the most recent article I’ve written, but its causes were a little bit more complicated than one of my usual attacks of boneheadedness.

Virtually all of the articles published about Melbourne House and The Hobbit — of which, unsurprisingly given the game’s immense popularity, there were quite a few — describe it as largely the work of Philip Mitchell, who wrote it with the aid of Veronika Megler and Stuart Richie. These are the sources which I relied upon to write my story of the game’s development. Shortly after I published my article, however, Veronika Megler contacted me to tell me that the contemporary sources are, simply put, false. She told me that hers was the primary mind behind the game, that Mitchell developed only the parser and handled the porting to the Spectrum and the addition of the pictures after she had left Melbourne House. Richie’s work, meanwhile, was theoretical rather than technical and played little actual role in the finished game.

I was of course quite nonplussed to hear this, but Veronika’s descriptions of the game’s development and the role played by everyone were so precise that I immediately tended toward believing her. That belief only strengthened as I talked to her more. Today I believe that the official story found in the magazines is a distortion (at best) of the facts.

It’s not difficult to understand how this could have happened. The story of The Hobbit‘s development started to be widely disseminated in the computer press during the lead-up to publication of Philip Mitchell and Melbourne House’s next big adventure game, Sherlock. Thus the pieces in question functioned not only as retrospectives but also — more importantly, at least in the eyes of Melbourne House — as promotions for what was coming next. It sounds much better to speak of “the next game by the architect of the hit adventure The Hobbit” than “the next game by the guy who assisted the architect of the hit adventure The Hobbit.” In the process Mitchell’s role was vastly overstated, and Megler’s correspondingly reduced; in effect the two swapped roles, with Mitchell becoming the architect and Megler his assistant. As readers like me took those original articles at face value, this version of events passed down into history.

That’s unfortunate, and I understand Veronika’s frustration at having been effectively robbed of credit that is due to her. However, I can also understand how the pressures of promoting the follow-up to such a gargantuan hit could have led Alfred Milgrom and Mitchell down the path they took. I will also note for the record that Veronika feels strongly that sexism played a role in the downplaying of her contribution, although I’m not prepared to level that accusation myself without knowing the people involved better or having more evidence.

Whatever the reasons behind the changing of the record, I’m convinced at this point that Veronika was indeed the major force behind the form The Hobbit took, as well as its major technical architect. I’ve revised the original article accordingly to reflect the true contributions of everyone involved. If you’ve already read it, I’d encourage you to give the new version a quick skim again, or at the least to know that much of what I credited to Philip Mitchell in the original should rightfully have been credited to Veronika Megler. Sometimes, alas, getting to historical truth is a process. I thank Veronika for taking the time to work with me to document what really happened.

I’m actually on holiday as I write this, back in the United States again. So, it will be a couple of weeks before I’ll have more material for you. But keep me in your RSS readers, because we’ll next be rounding the corner into 1983 at last, and things just keep getting more and more interesting.

In the meantime, happy Thanksgiving to my American readers, and to everyone thanks for reading!

The Hobbit

In 1977 Alfred Milgrom started Melbourne House, a book publisher, with “four and a half” employees and offices in London and his native Melbourne, Australia. Over the next several years they made a modest go of it. In addition to a stable of young Australian authors, they established something of a specialty as a publisher of mid-list American authors who lacked contracts with the larger British and Australian houses. They signed quite a variety of them: novelist Gerald Green, just coming off a career peak as the screenwriter of the high-profile American television miniseries Holocaust; nonfiction man’s man extraordinaire Robin Moore, most famous for his 1965 book The Green Berets, which spawned one of the most unexpected hit songs ever as well as an infamously jingoistic John Wayne movie; Lin Farley, one of the first to write about sexual harassment in the workplace; and Raymond Andrews, author of a trilogy of novels about a black sharecropper family in the mid-century South.

And then came 1980, and with it the Sinclair ZX80. With a PhD in “chemistry, maths, and physics” from Melbourne University, Milgrom had somewhat atypical interests for a publisher; he had “always been interested in computers.” He quickly bought a ZX80 of his own. That August, Melbourne House published the hastily put together 30 Programs for the Sinclair ZX80, an unadorned collection of short, simple BASIC listings that could fit within the unexpanded machine’s 1 K of memory, including even a very stripped-down Eliza-like conversation simulator. The programs were credited to an alias computer gamers would soon come to recognize almost as quickly as the name “Melbourne House” itself: Beam Software, a contraction of Milgrom’s initials and the last name of another at Melbourne House who worked with him on the book, Naomi Besen.

In barely a year’s time WH Smith would be selling Sinclairs out of their High Street shops, but at this time no one in the bookseller’s trade knew what to make of the book Milgrom was now trying to sell. So he started taking out advertisements in the enthusiast magazines instead for what was likely the first book ever published about a Sinclair computer. It turned into a “runaway success,” the company’s immediate bestseller. Milgrom followed it up with more hastily produced technical books, written both in-house and by others. Melbourne House would remain one of the most prolific of British computer-book publishers for much of the 1980s. With so much opportunity in this area, their interest in publishing other types of books gradually fell by the wayside.

What with their publishing so many program listings in book form, it seemed an obvious move to begin offering some of them on tape for those who didn’t feel like doing so much typing. Accordingly, their first program on cassette, yet another clone of Space Invaders, appeared in February of 1981, the beginning of a slow transformation in primary vocation from book publisher to software publisher. In a sometimes confusing dichotomy, Melbourne House would remain the publishing arm of Milgrom’s organization, while the wholly owned subsidiary Beam Software served as their in-house development group. Melbourne House would also sometimes publish programs created by outside developers, but for all practical purposes Melbourne House and Beam Software were one and the same entity.

Milgrom had been aware of Adventure and its derivatives for years, some of which were by mid-1981 just beginning to sneak into the British software market in the form of early, primitive efforts by Artic Computing. Realizing that the form was soon likely to be huge in Britain, as it already was in the United States, he decided to commit Melbourne House to creating one bigger and better than anything currently available for British microcomputers. Knowing he lacked the time and the technical skills to implement such an ambitious project, he posted an advertisement back in Australia at his alma mater, the University of Melbourne, looking for computer-science students interested in working part time on a game-development project. (It was the beginning of August, and the Australian spring semester was just getting underway.) The first to respond was Veronika Megler, a student about to begin her final year as an undergraduate with a particular interest in database design. Milgrom gave her a very simple brief: “Make the best adventure game ever. Period.”

Luckily, Megler had plenty of ideas about how to approach that rather vague objective. She had played just one adventure game in her life — typically enough, the original Adventure by Crowther and Woods. Yet she already felt she knew enough about the form to say what she didn’t like about it, what she wanted her game to do differently. She hated the threadbare world and static nature of Adventure, the way that most of the possible interactions were pre-scripted so that certain verbs only worked in certain places and many perfectly sensible actions were completely unprovided for. Most of all, she hated the way the other characters in the world had nothing to do, no possibility of reacting to the player’s actions. In place of solitary, static puzzle-solving, she imagined a dynamic environment filled with other characters moving about the world and pursuing agendas of their own — something that might actually feel like living inside a real story. Both Megler and Milgrom also very much wanted to get beyond primitive two-word parsers, something only Infocom had so far managed.

Megler recruited a partner to work with her on the game, Philip Mitchell, a fellow senior with whom she had already worked on a number of group projects and whom she knew to be both easy to get on with and a skilled programmer. Milgrom himself added a third member to the team specifically to help them with the parser: Stuart Richie, who was doing a dual degree in English linguistics and computer science, with a special interest in combining the two fields.

At first, the game was planned as a generic fantasy adventure. However, none of the people involved had any experience as writers of fiction. At some point during the early stages of development, someone (it’s unclear exactly who) suggested that it might be possible to adapt J.R.R. Tolkien’s The Hobbit. Once named, it seemed the obvious candidate for a story. Bilbo Baggins’s quest to kill the dragon Smaug and return safely with his treasure, overcoming trials and tribulations along the way, was not just suitable for an adventure game but practically identical in the broad strokes to the structure of most of them. And The Hobbit was very popular — probably the most-read fantasy novel of all time, in fact — which would guarantee the game an eager audience. (I’m going to assume from here on that you’ve read the book, which I think is probably true of most everybody reading this blog. If you haven’t, you should. It’s a consistent delight, with none of the reactionary nostalgia for an older, class-bound Britain that sometimes bothers me about The Lord of the Rings.)

Unlike more naive characters like the Austin brothers, Milgrom knew that he needed to work something out with the Tolkien estate before releasing a commercial game based on the novel. About six months in, with some demonstrations ready to show them, he made contact. As Milgrom put it in later interviews, he had “contingency plans” if the Tolkien people should turn him down — presumably, filing the proverbial serial numbers off and releasing the game as a generic fantasy adventure after all. But luckily they were very receptive. As I write this, we’re awash in hype over the imminent release of the first of Peter Jackson’s Hobbit movies. It’s amazing to consider that thirty years ago the Tolkien estate was willing to entrust the property to a tiny publisher like Melbourne House, employing a few kids working part-time when not at university. Tolkien was then, as he is now, the premier fantasy writer. It’s just that the position of fantasy fiction within popular culture has changed incalculably, in no small part due to trends whose roots I’ve been chronicling on this blog.

Even with the novel to provide a world and the outline of a plot, the team had an insanely ambitious brief, one that obviously was not going to fly on the current Sinclair machines. Nor had Sinclairs made their way into the Australian market in any great numbers anyway. The most popular PC there at the moment was a Hong Kong-built clone of the TRS-80 sold through the local Dick Smith chain of electronic stores: the dubiously legal Dick Smith System 80. These machines shipped with only 4 K or 16 K of memory, but with a bit of ingenuity could be expanded up to 48 K. They also used the Z80 processor found in many machines, including the Sinclairs. Milgrom and his team decided to make their game on their hacked 48 K System 80s, under the assumption that by the time it was finished other, more consumer-friendly machines with the necessary attributes would be available to which they could port it without too much hassle. This practice of targeting tomorrow’s hardware today is now common in AAA game development; The Hobbit was perhaps the first example of it.

Of course, with 48 K and no disk drive to work with for virtual memory (Australia, like Britain, was still firmly cassette-bound), they still had one hell of a task in front of them. Megler remained the linchpin of the project, developing a whole adventuring system that would be at least theoretically reusable in future games. She also went through the book to develop a plan for the game, mapped the major events and characters to locations in the world, and added them to the engine’s database. Mitchell worked on a full-sentence parser that would allow the player to talk to the other characters in the world and even order them about. He called his system “Inglish.” Together, the code for the engine and the parser was eventually squeezed down to about 17 K, leaving the rest of the memory for Megler’s database. Richie, who was employed by Melbourne House for only a few months, contributed no code, and his ideas ultimately had little influence on the system. Milgrom’s idea of hiring a linguistics expert to develop a parser is one of those ideas that sound better in theory than they work in practice. As countless other programmers have learned, developing a good adventure-game parser has more to do with common sense and careful diligence than elaborate theories about linguistics or natural-language processing.

The Hobbit’s development had some similarities to a student project, including a certain abstract naiveté that sometimes threatened to send the team wandering hopelessly off course. They were having great fun — perhaps sometimes too much fun — just playing in this world they were building. Thanks to all of its random dynamism, it constantly surprised even them. Megler sometimes played the system like an early version of a god game such as The Sims, injecting new NPCs just to see what would happen and what kind of chaos they would cause with their fellow actors and the player: “I’d written in an angry dwarf that kept trying to kill you, and if you did something (I don’t remember what) it became a randy dwarf, and kept following you around and propositioning you. But Fred and Phil decided that was a little too much, and made me take it out again.”

And then it was the summer of 1982, the semester was over, and — in a demonstration of just what a part-time, semi-amateur project this was — Megler, the primary architect of all this, was suddenly gone: “I was bored with full-time programming and debugging, and eager to get on with a ‘real career’ (which gaming wasn’t, back then).” Only Mitchell stayed behind, to be hired by Milgrom as a regular, full-time employee. By this time The Hobbit was in a relatively finished form, a bit rough around the edges but basically a playable game on the TRS-80/System 80. Now the ideal platform on which to actually release it had come around, just as they had hoped it would: the first Sinclair Spectrums were just reaching consumers back in Britain. What with Melbourne House’s distribution network in that country and the tiny size of the domestic Australian market, the Spectrum and Britain were the obvious target platform and market respectively for their game. Luckily, the Spectrum used the same Z80 chip as their development platform, and had the same 48 K of memory. Porting Megler’s engine to the Speccy should be relatively simple.

The Speccy did also have one important feature that their development machines had lacked: color bitmapped graphics. Milgrom decided that illustrations could be the cherry on top of his next-generation adventure. He commissioned an artist, Kent Rees, to create — on paper, as was the norm at the time — pictures for about 30 of the game’s 80-odd locations. Mitchell then developed a system to trace these images and import them into the computer, using the vector-drawing techniques pioneered by Ken Williams for Mystery House. (You can see clear evidence of this in the finished game; the computer draws each line and fill one by one before your eyes, like an artist recreating the picture each time.) The illustrations are by no means stunning, but they were certainly novel in their time, and sometimes do manage to add a little something to the atmosphere.
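For the curious, the general line-and-fill technique can be sketched in a few dozen lines of modern Python. To be clear, this is an illustration of the idea only, not Mitchell’s actual Z80 routines or data format, and the sample “picture” is invented. The essential trick is that an image is stored not as pixels but as a short list of drawing commands, replayed every time it is shown, which is exactly why you watch each line and fill appear in sequence.

```python
# Sketch of a display-list picture: a handful of drawing commands
# (a few bytes each) in place of a multi-kilobyte bitmap. All of
# the coordinates here are invented for illustration.

W, H = 256, 96                        # hypothetical picture area
screen = [[0] * W for _ in range(H)]  # 0 = paper, 1 = ink

def line(x0, y0, x1, y1):
    """Bresenham's algorithm: plot one ink pixel per step."""
    dx, sx = abs(x1 - x0), 1 if x0 < x1 else -1
    dy, sy = -abs(y1 - y0), 1 if y0 < y1 else -1
    err = dx + dy
    while True:
        screen[y0][x0] = 1
        if (x0, y0) == (x1, y1):
            break
        e2 = 2 * err
        if e2 >= dy:
            err += dy
            x0 += sx
        if e2 <= dx:
            err += dx
            y0 += sy

def fill(x, y):
    """Stack-based flood fill outward from a seed point."""
    stack = [(x, y)]
    while stack:
        cx, cy = stack.pop()
        if 0 <= cx < W and 0 <= cy < H and screen[cy][cx] == 0:
            screen[cy][cx] = 1
            stack += [(cx + 1, cy), (cx - 1, cy),
                      (cx, cy + 1), (cx, cy - 1)]

# The stored "picture" is just this command list, replayed on demand:
picture = [
    ("line", 10, 80, 120, 20),    # left slope of a mountain
    ("line", 120, 20, 240, 80),   # right slope
    ("line", 10, 80, 240, 80),    # baseline closing the shape
    ("fill", 120, 60),            # shade the interior
]
for op, *args in picture:
    {"line": line, "fill": fill}[op](*args)
```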

Interestingly, Mitchell continued to do most of this work on the System 80, a much more pleasant machine to work with thanks to its real keyboard. He only moved the finished product to the Spectrum when it came time to test his handiwork. (To add to the irony, the TRS-80 would be one of the few platforms on which The Hobbit would never get an official release.) Thanks to some very efficient drawing algorithms as well as smart text-compression routines that rivaled those of Level 9, Mitchell was able to pack the entire game, with illustrations, into the 48 K Spectrum, a remarkable feat indeed when one considers that he had no recourse to external storage — 48 K was literally all he had to work with for code, text, data, and pictures.
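As for those text-compression routines, the dictionary-token approach common in the era can be illustrated with a short sketch. Again, this is the general idea rather than Mitchell’s or Level 9’s actual code, and the token table is invented: frequent words and fragments are replaced by single bytes above the ASCII range, typically squeezing English prose by a third or more.

```python
# Dictionary-token compression in miniature. A real table would hold
# a couple hundred of the text's most frequent words and fragments.
TOKENS = [" the ", " you ", "ing ", " and ", "Thorin", " gold"]  # invented

def compress(text):
    out, i = bytearray(), 0
    while i < len(text):
        for t, tok in enumerate(TOKENS):
            if text.startswith(tok, i):
                out.append(128 + t)      # one byte stands in for the fragment
                i += len(tok)
                break
        else:
            out.append(ord(text[i]))     # plain ASCII passes through as-is
            i += 1
    return bytes(out)

def decompress(data):
    return "".join(TOKENS[b - 128] if b >= 128 else chr(b) for b in data)

msg = "Thorin sits down and starts singing about gold."
packed = compress(msg)
assert decompress(packed) == msg
print(f"{len(msg)} characters -> {len(packed)} bytes")
```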

As summer passed into fall, the game was settling into its final form. But now a persistent problem threatened to derail everything: a multitude of tiny glitches and bugs whose cumulative weight came to overwhelm a session the longer it continued. Rather than crafting interactions by hand, Megler had striven to make The Hobbit a dynamic simulation. Monsters and other characters move about and act differently in every session, guided by random chance as well as their disposition toward the player (attacking Gandalf, Elrond, or Thorin tends to get you on their bad side); every object has a weight, size, and strength that determine its interactions with every other; each character, CRPG-style, has a certain numerical defensive and offensive strength as well as a health level for determining the results of combat. This could all lead to fascinating examples of what we would now call emergent behavior or even emergent storytelling, but it could also lead to a welter of bugs and general weirdness. Tracking these down turned into a nightmare, as the randomization and dynamism of the world meant that many were impossible to reproduce consistently. This had presented a huge challenge even when Megler was still on the project:

The Hobbit was a tough game to test. Unlike the other games of the time, it was written in assembler, not BASIC, and we would find bugs in the assembly and linking programs. Also, it was not deterministic, and the game played differently every time you played it, as a result of Philip doing a lot of work to develop a “perfect” randomizing routine. Literally, the player had a turn, then each animal had a turn, and the animals just “played” the game themselves according to their character profile, which included interacting with each other. In essence, the animals would do to each other anything that they could do to or with you. So we would constantly have animals interacting in ways that had never been programmed or envisioned. The game would crash because of something that happened in another part of the game that you as the user (or person testing the game!) didn’t see, because the game only showed you what was happening in your location. For a while, we had terrible trouble with all the animals showing up in one location and then killing each other before you got there, before I got the character profiles better adjusted!
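The turn structure Megler describes maps onto something like the sketch below. Every name and number in it is invented for illustration (none of this comes from the game’s actual database); the point is how naturally off-screen chaos emerges once each character acts on its own profile every turn.

```python
# Player moves, then every "animal" takes a turn of its own, visible
# or not. Profiles, stats, and rooms here are all hypothetical.
import random

ROOMS = ["hobbit hole", "trolls' camp", "Rivendell", "Mirkwood"]

class Character:
    def __init__(self, name, room, attack, defense, hostile_to=()):
        self.name, self.room = name, room
        self.attack, self.defense = attack, defense
        self.health = 10
        self.hostile_to = set(hostile_to)

    def take_turn(self, world):
        if self.health <= 0:
            return                        # the dead take no turns
        targets = [c for c in world
                   if c is not self and c.room == self.room
                   and c.name in self.hostile_to and c.health > 0]
        if targets:                       # fight whatever the profile hates
            victim = random.choice(targets)
            victim.health -= max(0, self.attack - victim.defense
                                 + random.randint(-2, 2))
        else:                             # otherwise wander at random
            self.room = random.choice(ROOMS)

world = [
    Character("Thorin", "hobbit hole", attack=6, defense=4),
    Character("warg", "Mirkwood", attack=7, defense=2,
              hostile_to={"Thorin", "wood elf"}),
    Character("wood elf", "Mirkwood", attack=5, defense=3,
              hostile_to={"warg"}),
]

for turn in range(20):    # the player would move here; then the world acts
    for actor in world:
        actor.take_turn(world)

print("still standing:", [c.name for c in world if c.health > 0])
```

Run it a few times and the wood elf may already be warg food by turn five, exactly the class of off-screen surprise the testers kept chasing.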

Melbourne House struggled with these problems for a time, but eventually, as development stretched toward the eighteen-month mark, seems to have just declared it good enough and pushed it out the door. A telling disclaimer in the manual indicates that they were aware that it wasn’t quite in the state it probably should have been: “Due to the immense size and complexity of this game it is impossible to guarantee that it will ever be completely error-free.” And indeed, the first release of the game in particular is riddled with inexplicabilities. Swords break on spider webs; Bilbo can carry the strapping warrior Bard about for hours; Gandalf and Thorin can walk through walls; garbled text and status messages obviously meant for the development team pop up from time to time. Melbourne House released a version 1.1 shortly thereafter, which fixed some of this but — oops! — also broke another critical interaction, rendering the game unwinnable. Version 1.2 soon followed, but throughout the game’s long published history Melbourne House seemed to remain stuck in the same perpetual game of whack-a-mole. Today it’s still remembered for its bugs almost as much as anything else.

The parser is beset by problems of its own. It does understand a lot, including, for the first time anywhere to my knowledge, adverbs. It’s possible, for instance, to “viciously attack the mean goblin,” although I’d be shocked to learn that it doesn’t just throw away the adverb as it does articles. Yet in other ways, especially in early releases, it’s very frustrating to work with. It’s possible to “climb into the boat,” but not to “enter” or “get in” it; possible to ask Thorin to “carry me,” but not to ask him to “take me” (talk of randy dwarfs aside, no double entendre intended); possible to “look across the river,” but not to “look over” it. When I recently played the game I had at least two occasions where I knew what to do but just could not express it to the game no matter how hard I tried, and finally had to get the answer from a walkthrough. Coming from someone who’s played as many text adventures as I have, that’s a condemnation indeed.
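That failure pattern is characteristic of any vocabulary-driven parser: words missing from its tables are either silently discarded or kill the whole command. The toy sketch below (emphatically not Inglish itself; all the word lists are invented) shows how the adverb-swallowing and the missing-synonym frustrations fall out of the same design.

```python
# A toy vocabulary-driven parser with deliberately tiny, invented
# tables. Inglish's real tables were far larger, but these failure
# modes are structural rather than a matter of table size.
VERBS = {"attack": "attack", "kill": "attack", "climb": "climb"}
NOUNS = {"goblin", "boat", "sword", "river"}
NOISE = {"the", "a", "an", "mean", "into", "in"}   # dropped silently
ADVERBS = {"viciously", "carefully", "quickly"}    # likewise dropped

def parse(command):
    words = [w for w in command.lower().split()
             if w not in NOISE and w not in ADVERBS]
    if not words or words[0] not in VERBS:
        return "You can't do that."      # unknown verb: dead end
    verb = VERBS[words[0]]               # synonyms collapse to one action
    objects = [w for w in words[1:] if w in NOUNS]
    return (verb, objects[0] if objects else None)

print(parse("viciously attack the mean goblin"))   # ('attack', 'goblin')
print(parse("climb into the boat"))                # ('climb', 'boat')
print(parse("enter the boat"))                     # You can't do that.
```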

Playing The Hobbit can be, as stated in the perfect title of its one MobyGames review, “strange.” In spite of the grand ambitions, expecting even a shadow of the richness of Tolkien’s world (not to mention his prose) in a 48 K adventure game is expecting too much. There was no real possibility of presenting the temporal element that is so important to stories. Instead, the plot of the novel is mapped to the game’s geography: moving further eastward gets you further and further into the story, from the beginning in Bilbo’s hobbit hole to the climax at Smaug’s lair. (The Battle of the Five Armies, like all of the dwarfs except Thorin, is left out as just too complicated to deal with.) This has the disconcerting side effect that you can travel back in time whenever you wish: the trolls’ camp is just two moves east of Bilbo’s house, and one move west of Rivendell. Needless to say given such a compressed geography, the sense of embarking on a grand journey that the book conveys so well is largely absent. That it works as well as it does is a testament to the book’s almost uniquely adventure-game-suitable quest narrative. Few other temporal landscapes could be mapped even this neatly to the geographical.

The experience feels rather like wandering through a series of stage sets depicting the major scenes from the book — stage sets which are also being wandered by a bunch of other characters just smart enough to be profoundly, infuriatingly stupid. Your companions on the quest, Thorin and Gandalf, are both singularly useless (or worse) when left to their own devices. Never one to let circumstances get in the way of avarice, Thorin will “sit down to sing about gold” in the midst of a goblin, warg, or dragon attack. Gandalf, meanwhile, is also attracted to shiny objects; he constantly plucks random items off your person (“What’s this?”), then tosses them on the ground and wanders off when his one-turn attention span expires. A critical element of the game is the player’s ability — and occasional requirement — to give orders to other (friendly) characters, to have them do things beyond the abilities of a four-foot-tall hobbit. Sometimes they do what you ask, but sometimes they’re feeling petulant. Perhaps the seminal Hobbit moment comes when you scream at Bard to kill the dragon that’s about to engulf you both in flames, and he answers, “No.” After spending some time with this collection of half-wits, even the most patient player is guaranteed to start poking at them with her sword at some point.

And actually, therein sort of lies the secret to enjoying the game, and the root of its appeal in its time. It can be kind of fascinating to run around these stage sets with all of these other crazy characters just to see what can happen — and what you can make happen. Literally, no two games of The Hobbit are the same. I can see what Megler was striving toward: a truly living, dynamic story where anything can happen and where you have to deal with circumstances as they come, on the fly. It’s a staggeringly ambitious, visionary thing to be attempting. Infocom had already moved somewhat in that direction with Deadline, but (probably wisely) had hung the more dynamic elements from a scaffolding of pre-scripted set-piece events — and even at that it was easy, in early releases in particular, to break through the simulation’s sense of realism.

Needless to say, the idea doesn’t entirely or even mostly work in The Hobbit either. There are still enough traditional puzzles that it’s too easy to lock yourself out of victory and have your living fantasy become a Beckett tragicomedy. Then there’s the wonky physics, the way that entirely random developments can ruin your game, and of course all of those bugs that often leave you wondering whether some crazy thing you’re seeing is an expected part of the general surreality that surrounds you or just something gone haywire. (At a certain point, it kind of ceases to matter anymore; you just go with it.) To say that the game’s reach exceeds its grasp hardly begins to state the case; the thing the game is reaching for is somewhere in orbit above its firmly earthbound self, an experience that huge teams of developers still haven’t entirely succeeded in delivering today. But still, The Hobbit plays like no adventure before it. In my recent game, a warg somehow got into the wood elves’ compound long before I got there. I arrived to find him prancing atop the corpse of the one who should have captured me and thrown me in a cell. Suddenly my problem was not how to escape from the elves but how to get past the warg, a very tough customer — not exactly how it played out in the book, but an exciting experience nevertheless. Sometimes, when it works, The Hobbit can be kind of amazing. It stands today as the direction that was largely not taken in text adventures, and at its best it can make you wonder why.

Expensive American imports aside, The Hobbit marked a whole string of firsts for the British adventure scene: first full-sentence parser; first illustrated game; first title licensed from a book (this would have been a first in the American market as well); not to mention first crazy experiment in emergent text-adventure storytelling. And it arrived just as Spectrums were finally getting to consumers in big numbers, and as said consumers were eager for flashy new experiences to enjoy on their new machines. The Hobbit, in short, became huge. It was a hit out of the gate, and just kept selling and selling as months on the market turned into years. Melbourne House made ports for virtually all of the other viable computing platforms of the time, as well as enhanced versions for disk-drive-equipped machines that improved the graphics, added atmospheric music, and offered a little bit smarter companions, a little bit better parsing, and a little bit more to do. It was in this form that the game finally reached American shores in 1985, through an arrangement with Addison-Wesley. The game promptly became a big hit there as well.

Indeed, The Hobbit seemed adaptable to any market or promotional scheme. In its original British incarnation, it was minimally packaged in a rather garish box typical of the young scene there. In the United States, it was beautifully packaged in a classy fold-out box with a lovely, understated cover illustration drawn by Tolkien himself — one of the best of the golden age of computer-game packaging. The American version of the game even came complete with a copy of the book included.

Exactly how many copies the game eventually sold on both sides of the Atlantic is a matter for some speculation. In High Score!, Rusel DeMaria and Johnny L. Wilson state that it sold more than a million copies, but even given its undoubtedly phenomenal popularity I tend to be leery of such a figure given what I know of sales figures for other games of the era. An article in the British magazine Computer and Video Games dated March 1985 guesses that it may have sold up to 200,000 copies by that point. With its entry into the American market (where it was a hit, but not the phenomenon it was in Britain) and continued popularity in Britain, it’s very possible that the game ended up selling half a million copies in total, but it’s hard for me to see my way to much more than that barring better evidence. Still, even the lower figure makes it an excellent candidate for the bestselling text adventure of all time, challenged, if at all, only by Infocom’s Zork I. (The most played text adventure, of course, is and will likely always remain the original Adventure.) The Hobbit made Melbourne House as a major software publisher. And it largely made the British adventure game as its own unique thing, ready to go its own way and try its own things rather than remain beholden to the American approach.

As I write about The Hobbit, “strange” is a word that comes up again and again; everything about it seems anomalous. It’s strange that the game that made the British adventure game should have come from half a world away. It’s strange that a game with such an atypical approach to the form should be the best candidate for the bestselling example of said form of all time. It’s strange that the first publisher to license a book should have been tiny Melbourne House, not one of the more established American publishers. It’s strange that what is, in all honesty, something of a bug-ridden mess should also have such a compelling quality to it. It’s strange that a game based on a novel should be all about emergence rather than attempting to recreate the story of the book. It’s strange that the woman who came up with this new vision of how an adventure game could work left Melbourne House and the burgeoning industry before The Hobbit was even complete, never to return. The Hobbit is most interesting because so much about it is so unlikely.

If you’d like to try it in its original form for yourself, here’s a copy of the Spectrum tape image and the manual. There are lots of Spectrum emulators out there; I use Fuse. Of course, you can also find heaps of other versions for heaps of platforms, including the enhanced, disk-based versions that feel more fleshed-out than the original. But never fear: all retain at least a light dusting of the bugs and oddities that are so critical to the Hobbit experience.

(Sources for this article include the web links in the post itself as well as interviews, articles, and profiles in Computer and Video Games #27, Computer and Video Games #41, Crash #3, Popular Computing Weekly Vol. 1 No. 36, Popular Computing Weekly Vol. 2 No. 43, ZX Computing Vol. 1 No. 6, and Home Computing Weekly #5. And Veronika Megler herself was an invaluable source for this latest, revised version.)

 

Posted on November 16, 2012 in Digital Antiquaria, Interactive Fiction

 


The Speccy

To put it mildly, Clive Sinclair was not pleased by the BBC’s decision to make the Acorn Proton the standard bearer of British computing. He had some legitimate complaints to levy. The process of selecting the machine, for one thing, could under no interpretation be considered fair and above-board. The BBC had simply handed the contract to the government-controlled Newbury, then, when that fell through, slipped it to Acorn as the most viable remaining manufacturer that was not run by Sinclair; his company and others never had a chance. The price of the Acorn machine also seemed at odds with the BBC’s original intentions for the program. The £400 BBC Micro Model B (the really practical and desirable model) was simply too expensive to become the fixture in everyday British homes that the BBC had imagined. And then there was the question of whether a government agency had any right or business to interfere with the workings of private industry in the first place. (Admittedly, this was a question that Sinclair, like so many businessmen, tended to answer differently depending on whether he was the one directly benefiting from the government’s largesse.)

In addition to practical complaints, however, Sinclair was personally wounded by the BBC’s choice. They had chosen Chris Curry, the erstwhile junior partner he had mentored, over him; anointed Curry’s company the Great Hope for British computing rather than his own. In response, he lashed out. His interviews from the period — and he did plenty of them — read like the reactions of a jealous lover, strewn with invective toward the BBC and peppered with occasional expressions of dismay that they didn’t pick him. From the August 1981 Your Computer:

“When you have a company like ours, which is easily dominating the whole of Europe in personal computers, we believe we have done a very important job in popularizing computers. It is a real disappointment to have your own national broadcasting corporation completely ignore you.”

“What the BBC is doing, it is doing badly and it is damaging the whole progress of computers in this country. We have put a new version of BASIC into our machines. It has been highly praised in the UK and abroad, because of its editing facilities. We developed into it features such as single-keyword entry. None of that is in the BBC version.”

On the perceived slight of not being consulted in the government’s planning for IT Year ’82:

“The government has it so wrong. Frankly, they are so bad at it, it would be better if they left it alone. Fine, they should be doing things for the computer market, but this recent Department of Industry scheme is so peculiar. We were not even talked to.”

But Sinclair reserved his most strident scorn for the rival whom he perceived to have gone behind his back to make the deal with the BBC. All pretense of civility fell away from his relationship with Curry, whom the BBC “for some strange reasons allowed to stick [their] logo on his machines,” which would otherwise not have a chance in the marketplace. No shrinking violets themselves, Curry and his partner Hermann Hauser responded in kind with shots at Sinclair’s “duct-tape” brand of engineering that made his machines an impossibility for a serious organization like the BBC. Even Paul Kriwaczek, the producer of The Computer Programme, got in on the act to defend the BBC’s choice, calling Sinclair’s machines “throwaway” products. It was quite a petty display all around, if also a very entertaining one.

Sinclair largely won this public-relations war. The British computer industry was full of other, smaller companies feeling equally spurned by the BBC deal, equally convinced that their own latest designs could have fit the bill much better than anything Acorn had to offer. Sharing a common enemy with Sinclair made them, at least in this sphere, his friend. With virtually the entire industry standing on one side and just Acorn and the BBC on the other, it was easy to conclude that the BBC Micro must be the bad idea and/or botched execution they said it was. On Sinclair’s side most of all, though, was his popular image as the benevolent “Uncle Clive,” largely the creation of his advertising agency, Primary Contact.

Sinclair certainly looked the part of the slightly eccentric but ultimately cuddly boffin, and Primary Contact played the image up for everything they were worth. As Ian Adamson and Richard Kennedy wrote in Sinclair and the “Sunrise” Technology, even long before the controversy over the BBC Micro, “Sinclair was marketed as the maverick doyen of hi-tech, the lone entrepreneur with the vision to take on the Americans and the Japanese. The implication was that by supporting Sinclair the consumer was advancing the cause of British innovation in the face of the brute strength of foreign marketing might.” Now the entrenched bureaucrats of the BBC could be added to the forces he defied. The Clive Sinclair of the popular imagination spent his time puttering away in a basement laboratory somewhere before emerging with designs that were both simpler and better than the competition thanks to his grounded British know-how. He then, unmaterialistic boffin that he was, sold them for a ridiculously low price and just blinked bemusedly when praised for it. Spreading the joy of computing and helping his country were their own rewards. His anger at the BBC was the righteous anger of the honest, practical man confounded by sycophants and politicians.

The reality was very different. Sinclair certainly had electrical know-how, but he was no computer engineer. His machines were designed by others to his specifications, which always began and ended with the price he intended to sell them for and the profit margin he needed to preserve even at that price. Except for these absolutes, all else — including not only features but fundamental quality controls — was negotiable. Characteristically, Sinclair got most involved with the actual engineering of his “creations” when hunting down which component parts he could source for the cheapest prices. Personally, he was domineering, uninterested in the opinions of others and possessed of a deadly combination of overweening arrogance and a deeply buried insecurity that occasionally flashed to the surface when his decisions were questioned. His contempt extended to his customers, whom he regarded as sheep waiting for the Man of Genius (i.e., him) to tell them what they wanted. Surprisingly, Sinclair didn’t even believe that what they wanted would necessarily be computers in the long term. He had come into this field largely to finance the quixotic further development of two absurd products he had been dreaming about for years: a miniature, portable television and an electric car. Computers were just a means to that end, a way to capitalize on a passing fancy of the fickle everyman.

Of course, as French philosophers have taught us, the perceptions engendered by mass media are often as real in practical terms as anything else. The British computer industry needed a company hell-bent on selling its machines so cheaply that almost anyone could afford one, even if that meant cutting some corners. And the British public needed an Uncle Clive persona to put a friendly, comfortably British face on all of this disruptive new technology and tell them they had nothing to fear from it. Sinclair was such a terrible businessman that this ride couldn’t possibly last very long. But it would be fun while it did; whatever else you can say about Clive Sinclair, he’s never been boring.

When not sniping at the BBC in the press, Sinclair spent late 1981 and early 1982 pushing hard on his own new computer that would show them how wrong they had been to choose Acorn over him. The successor to the ZX80 and ZX81, it would borrow much of its internal and external engineering from those earlier machines, remaining a tiny thing that looked more like a desk calculator than a computer. The big change, from which it derived its name — the ZX Spectrum — was the addition of color and graphics capabilities. It would deliver these in a package costing just £125 for 16 K or, in another coup, £175 for a full 48 K, well under half the price of the 32 K BBC Micro Model B.

In many ways the Spectrum was a typical Sinclair product, the result of brutal cost-cutting. The keys were made of squishy rubber, further adding to the calculator-like impression, and were only marginally more comfortable than the membrane keyboards of the ZX80 and ZX81. Many choices seemed a product of the echo chamber inside Sinclair, owing little to any sort of practical real-world considerations. When designing the ZX80, the company had developed something they called “one-touch” BASIC programming, which matched each BASIC command word to a key on the keyboard. The idea was that the user need only type a single key for each command instead of the whole word, thus cutting back on typos and limiting the interaction she had to do with the ZX80’s atrocious keyboard. As they added more commands to their BASIC with each successive model, however, the idea became increasingly ridiculous. By the time the Spectrum emerged the user had to memorize arcane sequences for many commands that required holding down multiple shift and control keys and some octopus-like finger dexterity. The sequences were both more difficult to remember and more difficult to enter than just typing in the words would have been; some commands actually required as many keystrokes as there were letters in the word, or more. How this absurd system could have made it out the door is a mystery — or perhaps a tribute to the dominance of Clive Sinclair, who had decided that “one-touch” entry was a key to his company’s success and wasn’t interested in hearing otherwise.
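In outline, the scheme worked something like the toy sketch below. The letter-to-keyword pairs in the first table follow the Spectrum’s real keyboard, but the “extended” table is purely illustrative and the single flag is a drastic simplification: the real machine cycled among several cursor modes, each with its own shift combinations, and that mode-juggling is precisely where the unwieldiness crept in.

```python
# A toy model of one-touch keyword entry, simplified to a single flag.
KEYWORD_MODE = {     # these five pairs match the real Spectrum layout
    "p": "PRINT", "r": "RUN", "f": "FOR", "n": "NEXT", "g": "GO TO",
}
EXTENDED_MODE = {    # purely illustrative stand-ins, not the real table
    "a": "READ", "s": "RESTORE",
}

def keypress(key, extended=False):
    """At the start of a statement, one key yields a whole keyword."""
    table = EXTENDED_MODE if extended else KEYWORD_MODE
    return table.get(key, key.upper())

print(keypress("p"))                 # one press gives PRINT: the happy case
print(keypress("a", extended=True))  # but which mode held READ, again?
```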

In other ways, though, the Spectrum was a Sinclair like no other. Sinclair wasn’t exactly a company one might have expected to deliver cutting-edge aesthetics, but here they delivered a shock. The machine’s externals, by industrial designer Rick Dickinson, have been enshrined — and for good reason — as a design classic. The svelte ebony case with its flash of color stands out from all of the other computer models of its era, with their chunky, lumpy frames and acres of bland beige plastic.

In practical terms, the Spectrum finally answered a question that had been uncomfortably nagging at the backs of the minds of many people ever since this computer thing got rolling in the press. Everyone understood that computers were the wave of the future and all that. But, if you weren’t running a shop and needing to keep inventory or something, what could you really do with one? Sure, a small minority might spend hours every night tinkering with the vagaries of BASIC, but that was of little practical value and destined to remain a niche interest at best. For everyone else, most notably children and teenagers, the Spectrum finally provided a better answer: you could play games. Its graphics hardware could display fifteen colors at a resolution of 256×192, with the very significant restriction that each 8×8 block of the screen could use only two of them. Still, combined with the 48 K of memory that allowed a decent scope for complexity in game designs, it was good enough. Its tiny size even meant that you could stuff it into a trench-coat pocket to cart to your mate’s house after school. For all of the caveats and limitations that came with it, the Spectrum was the right machine at the right time at the right price to launch computer games into the mainstream in Britain.
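Concretely, that restriction stems from the Spectrum’s attribute memory: the 256×192 bitmap is overlaid by a 32×24 grid of cells, each storing a single “ink” and “paper” color for all 64 of its pixels. The sketch below illustrates the scheme and the resulting “color clash”; it follows the real layout in spirit but is not emulator-accurate code.

```python
# The attribute grid in miniature: one (ink, paper) pair per 8x8 cell.
INK, PAPER = 6, 1            # say, yellow ink on blue paper

def attr_cell(x, y):
    """Which attribute cell governs pixel (x, y)?"""
    return (x // 8, y // 8)

# 32 x 24 = 768 attribute entries cover the whole 256 x 192 screen:
attributes = {(cx, cy): (INK, PAPER)
              for cx in range(32) for cy in range(24)}

def plot(x, y, color):
    """Color one pixel: its whole 8x8 block changes ink with it,
    the root of the Spectrum's famous color clash."""
    cx, cy = attr_cell(x, y)
    _, paper = attributes[(cx, cy)]
    attributes[(cx, cy)] = (color, paper)

plot(100, 50, color=2)                 # one "red" pixel...
print(attributes[attr_cell(103, 55)])  # ...and its neighbors turn red: (2, 1)
```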

In retrospect one of the most bemusing things about the feud between Sinclair and Acorn is that, as Curry and Hauser, at least, remarked in their more lucid moments, the BBC Micro and Sinclair Spectrum were barely competitors at all. Sinclair largely created a new home-computer market in Britain, just as Commodore did with the VIC-20 in the United States. (The VIC-20 was also sold in Britain, but didn’t have quite the same impact.) The BBC Micro, meanwhile, put Acorn in a position similar to that of Apple in the U.S., making more expensive, better supported machines for a more “professional” (or, at least, well-heeled) consumer.

The Spectrum was officially launched on April 23, 1982, but it’s pretty safe to say that no consumer received a machine before June. Sinclair, who often trumpeted in interviews that he “never made the same mistake twice,” was nevertheless “utterly astonished” by the demand for the new machine, as he had been for each model that preceded it. The company’s practice of advertising that consumers who ordered directly from them would receive their computers within 28 days prompted much ire as waiting periods stretched to three months and beyond. This in turn prompted a sternly worded warning from the Advertising Standards Authority, a ritual that played out with every Sinclair product launch. It was 1983 before Sinclair cleared its backlog, at which time the company started the whole shortage over again when they reduced prices by more than 25%. In the end none of the angst mattered that much. Where else, Sinclair asked, were customers going to find a 48 K micro with 15-color graphics capabilities for his prices? Most people would wait, he figured — and he was right.

So, IT Year was a success beyond architect Kenneth Baker’s wildest dreams. At its end there were three times as many computers in British homes as there had been at its beginning. One could certainly argue how much of this explosion was really due to the government’s efforts; one suspects that Clive Sinclair would have some strong opinions on that subject. But, however we apportion the credit, things would never be the same; computing in Britain went mainstream with the IT Year and, crucially, the Spectrum. A generation of British children went to school to learn about computers and BASIC and many other subjects on the sturdy BBC Micros. The same kids came home to hang out with friends and play games in front of their “Speccys.” Can you guess which machine is more fondly remembered by Britons today? The poor BBC doesn’t have a chance.

The Spectrum also spawned a huge games industry to feed this eager market. Speccy programmers came to love the machine, as much because of as in spite of its limitations. With its crazy BASIC entry system and dry error messages like “Nonsense in BASIC,” the Speccy felt like theirs — quirky, slightly off-kilter, and somehow distinctly British in its sensibility. Many of the games they produced had a sensibility to match, very different from that of their American cousins. It’s mostly the innovative action games like Manic Miner and Jet Set Willy that are remembered today, but the Speccy also had adventures. Oh, boy, did it have adventures — thousands of them. We’ll look at one of the earliest and most important of them next time.

 

Posted on November 12, 2012 in Digital Antiquaria, Interactive Fiction

 
