Monthly Archives: May 2012

The IBM PC, Part 2

Having been so favorably impressed with Bill Gates and Microsoft, Jack Sams returned to them almost as soon as IBM officially gave Project Chess the green light — on August 21, 1980. After having Gates sign yet another NDA, he was ready to move beyond the theoretical and talk turkey. He explained that IBM was planning to make its own PC, something that surprised no one in the room. In keeping with the philosophy of building a machine that could be configured to do anything, he planned to offer the user a choice of using a ROM-hosted BASIC environment similar to that of the Apple II, PET, and TRS-80, or of booting into the disk-oriented operating system CP/M, hugely popular among business users. Microsoft, the premier provider of microcomputer BASICs, was the obvious place to go for the first of these. They had also recently branched out into other, compiled languages like FORTRAN, and Sams wouldn’t mind having him some of those either. Robert X. Cringely and others make much of IBM’s turning to an outside vendor like Microsoft for its software (more of the “slapdash” trope), but this was really not at all unusual. Apple, Commodore, and Radio Shack amongst many others had in fact all done the same, sourcing their BASICs from Microsoft.

Sams was, however, very confused about something else. That spring Microsoft had introduced its first hardware product, the Z80 SoftCard. It was a Z80 CPU on a card which plugged into one of the Apple II’s expansion slots. Once the card was installed, the user could elect whether to give control of her machine to its standard 6502 CPU or to the Z80; the card contained circuitry to allow the Z80 to use the Apple II’s standard memory and other peripherals. Developed in partnership with Seattle Computer Products, a small hardware company with which Microsoft had quite close relations at this time, it really was a marvelous little hack. Because CP/M ran only on Intel 8080-family processors like the Z80, Apple II users had hitherto been cut off from the universe of CP/M business software. Now they had the best of both worlds: all of the fun and educational software that took advantage of the Apple II’s graphics capabilities (not to mention VisiCalc), and all of the text-oriented, businesslike CP/M applications. The SoftCard became a huge success, second only to VisiCalc itself in making the Apple II the only 6502-based machine to be significantly adopted by American business; an Apple II with SoftCard soon became the single most popular CP/M hardware configuration. Because the SoftCard shipped with a copy of CP/M, Sams assumed that Microsoft owned the operating system. Now Gates explained that this was not the case: Microsoft had only licensed it from its real owner, a company called Digital Research.

Gates and Gary Kildall, the head of Digital and original programmer of CP/M, had known each other for years, and had developed a mutual respect and sort of partnership. When a new machine came out, Microsoft did the languages and Digital did the operating system. Steve Wood, an early Microsoft programmer:

“When we were talking to another OEM, a hardware customer who wanted to run BASIC or any of our products, we got to a point by 1977 or ’78 where we were always trying to get them to go to Digital first and get CP/M running because it made our job a whole lot easier. When we were doing custom things like the General Electric version or NCR version, it got to be a real headache. It made our lives a lot easier if someone would just go license CP/M and get that up on their machines and then our stuff would pretty much run as is. And Gary would do likewise. If someone went to him to license CP/M and they were looking for languages, he would refer people to Microsoft. It was a very synergistic kind of thing.”

Gates and Kildall had even discussed merging their companies at one point. As it was, there was a sort of unwritten understanding that Microsoft would stay out of operating systems and Digital would stay out of languages. In late 1979, however, Digital began distributing a non-Microsoft BASIC with some of their CP/M packages, a development Gates and others at Microsoft viewed as a betrayal of that trust.

Still, Gates dutifully called Kildall right there in Sams’s presence to set up a meeting for Sams and his team for the very next day, telling Kildall that these were very important customers, “so treat them right.” For his part, Sams was not thrilled. He had been so very impressed with Gates and Microsoft, and “we really only wanted to deal with one person” for all of the systems software. Yet he didn’t see a choice. CP/M, you’ll remember, ran on the 8080 and Z80 CPUs. Sams therefore needed much more than to just purchase a license from Digital; he needed them to agree to port the operating system to the newer 8088 architecture, and to do it on his schedule. The next morning he and his team were on an airplane bound for Pacific Grove, California, home of Digital Research.

This is where the story gets famously unclear. Both Sams and Kildall were asked many times in later years about the events of August 22, 1980. Their stories are so factually disparate that it seems impossible to attribute their differences to mere shading or interpretation. Someone (or perhaps both), it seems, was simply not telling the truth.

Sams claims that he and his team arrived at the Victorian house that served as Digital’s headquarters right on time, only to be told that Kildall had decided to take advantage of a beautiful day by blowing off the meeting and going flying in his private plane. Sams and company were left in the hands of Digital’s business manager, Kildall’s wife Dorothy. Shocked but stalwart, Sams pulled out his NDA as a prelude to getting down to business. Now, on the face of it, this was an intimidating and unfair agreement, saying essentially that the other party could be sued if they revealed any of IBM’s secrets, but that IBM had complete immunity from legal action for the reverse. Gates had had, in his own words, “faith,” and signed right away. Dorothy, however, said no, that she would have to consult with her lawyer first. While Sams fidgeted impatiently in the lobby, she and the lawyer, Gerry Davis, dithered until three o’clock in the afternoon, when she finally signed. With most of the day gone and with the technical mastermind who would need to actually do the port not even present, negotiations didn’t really get anywhere. Sams left Digital, frustrated and annoyed, without even the beginning of an agreement, and immediately started casting about for an alternative to dealing with these people.

For his part, Kildall (who died in 1994 under very strange circumstances) admitted that he was out flying when Sams arrived for his meeting. He claimed, however, that, far from joyriding (joyflying?), he was flying himself home from a business trip. He said it was perfectly okay for the IBM team to have been left in the hands of Dorothy at the beginning of the meeting, as she was much more involved in all business negotiations than he. He nevertheless said that he was back by the afternoon, and that it was in fact he who convinced Dorothy and Davis to just sign the NDA and get on with it. After that, negotiations proceeded quickly, and IBM and Digital had a “handshake agreement” by the time the day was over. Further, Kildall claimed that he and Dorothy flew out that night (via commercial airliner this time) to begin a vacation in Florida, and that the IBM group happened to be on the same flight. There they all talked about their plans some more.

Sams says that he did not even fly to Florida immediately after the meeting, but rather back to Seattle to continue talking with Microsoft, admitting only that perhaps one or two members of the group might have gone directly back to Boca Raton. For years he also adamantly maintained that he never met Kildall at all that day, “unless he was there pretending to be someone else.” Only in recent years has he softened that stance somewhat, saying it’s “possible” Kildall was there, although he “doesn’t remember it.” He also recently said, “We spun it, Kildall spun it, and Microsoft spun it.” This might be read as the last refuge of a man who hasn’t always been entirely truthful, but who really knows? There are witnesses who partially corroborate each version of events. A Digital executive and friend of Kildall named Tom Rolander says he was on the business trip with Kildall, and that they did indeed meet with Sams that afternoon. Meanwhile Davis, Digital’s lawyer, says that he is certain no handshake deal was reached that day, and other IBM staffers do recall Sams saying immediately after the expedition that Kildall never showed up for the meeting.

So, what to make of all this? We might start by looking at Kildall’s personality in contrast to Gates’s. Popular accounts of these events often boil Gates and Kildall down to caricatures, the maniacally driven East Coast businessman versus the laid-back California hippie. They’re actually not awful as caricatures go. Both were wonderful hackers, but they could otherwise have hardly been more different. Gates was determined to prove himself and to win, over and over. When a bigger fish like IBM came calling, he was perfectly willing to humble himself, even to the point of obsequiousness, as long as he needed them as a steppingstone to the next level. (Once he didn’t need them anymore, of course, all bets were off.) It may not have been grounded in the most admirable of traits, but Gates’s ambition made Microsoft beloved by many of their partners. Not only had Gates assembled a very talented team, but they reflected their boss’s personality in being willing to work like dogs and walk through walls to get the job done and outdo their competitors. Kildall, meanwhile, often didn’t even seem certain he wanted to be running a business in the first place:

In one of the darkest of those moments in the late ’70s, Gary passed the parking lot by on his way in to work, and continued around the block, realizing that he just couldn’t bring himself to go in the door. He circled the block three times before he could force himself to confront another day at DRI.

One can’t imagine a remotely similar moment of doubt plaguing Gates.

The joy of hacking was what was important to Kildall. Users just needed to be patient. While he would be happy to work with IBM, they needed to get in line like everyone else. Certainly he wasn’t interested in groveling to them. Digital’s vice president in 1980, Gordon Eubanks, says, “Gary cared a lot more about partying than running a business.” In addition to partying, Kildall cared about software. Gates cared about the software business. Eubanks:

The differences between Bill and Gary were just striking. Bill saw an opportunity, he would drive, he’d commit, he’d probably over commit, no problem. Gary was like, “I don’t care, I’m Digital Research. You deal with me, and you deal with me on my terms.”

And then of course there’s the personality of Sams, or rather of his corporate parent. IBM was the big dog in computers, and they expected to be treated like it. If they condescended to visit the likes of Microsoft or Digital, they should be treated like the VIPs they were, shown that the company in question really wanted their business. When Digital failed to demonstrate their respect and thankfulness to the same degree as did Microsoft — and whatever else happened that day, it does seem pretty clear that this at least was the case; Eubanks describes Dorothy as constantly “bitchy” to everyone, including potential customers — Sams was angry. “Don’t these people know who I am?” he must have wondered. Further, it’s pretty clear that Sams was unhappy about having to deal with Digital in lieu of Gates before he ever boarded that flight for California. As our mothers always told us, going into something with a bad attitude usually yields a bad result.

What is certain is that, handshake or no handshake and regardless of what impression Kildall might have been under, Sams was not pleased with his experience at Digital. He asked Gates, who had by this time signed an official consulting deal, whether he might find him an alternative to CP/M. Gates said he would see what he could do. In the meantime Sams claims he continued to try to work out something with Digital, but couldn’t get a commitment to develop an 8088 CP/M on the strict timetable he needed. Eubanks says that Kildall just didn’t find the project all that “interesting,” in spite of the obvious, pressing business need for it, and thus worked on it only halfheartedly.

And then Gates came back with QDOS.

 


The IBM PC, Part 1

What with the arrival of the category-defining Commodore VIC-20 and the dramatic growth of the British PC market, 1981 has provided us with no shortage of new machines and other technical developments to talk about. Yet I’ve saved the biggest event of all for last: the introduction of the IBM PC, the debut of an architecture that is still with us over 30 years later. As such a pivotal event in the history of computing, there’s been plenty written about it already, and no small amount of folklore of dubious veracity has also clustered around it. Still, it’s not something we can ignore here, for the introduction of the IBM PC in late 1981 marks the end of the first era of PCs as consumer products as surely as the arrival of the trinity of 1977 spelled the end of the Altair era of home-built systems. So, I’ll tell the tale here again. Along the way, I’ll try to knock down some pervasive myths.

One could claim that the IBM PC was not really IBM’s first PC at all. In September of 1975 the company introduced the IBM 5100, their first “portable” computer. (“Portable” meant that it weighed just 55 pounds and you could buy a special travel case to lug it around in.)

The 5100 was not technically a microcomputer; it used a processor IBM had developed in-house called the PALM, which was spread over an entire circuit board rather than being housed in a single microchip. From the end user’s standpoint, however, that made little difference; certainly it would seem to qualify as a personal computer if not a microcomputer. It was a self-contained, Turing-complete, programmable machine no larger than a suitcase, with a tape drive for loading and saving programs, a keyboard, and a 5-inch screen all built right in along with 16 K or more of RAM. What made the 5100 feel different from the first wave of PCs were its price and its promoted purpose. The former started at around $10,000 and could quickly climb into the $20,000 range. As for the latter: IBM pushed the machine as a serious tool for field engineers and the like in remote locations where they couldn’t access IBM’s big machines, not as anything for fun, education, hacking, or even office work. The last of these at least changed with two later iterations of the concept, the 5110 and 5120, which were advertised as systems suitable for the office, with accounting, database, and even word processing applications available. Still, the prices remained very high, and actually outfitting one for this sort of office work would entail connecting it to a free-standing disk array that was larger than the machine itself, making the system look and feel more like a minicomputer and less like a PC. It’s nevertheless telling that, although it was almost never referred to by this name, the IBM PC when it finally arrived had the official designation of (with apologies to Van Halen) the IBM 5150, a continuation of the 5100 line of portable computers rather than an entirely new thing — this even though it shared none of the architecture of its older siblings.

In February of 1978 IBM began working on its first microcomputer — and it still wasn’t the IBM PC. It was a machine called the System/23 Datamaster.

Designed once again for an office environment, the Datamaster was built around an Intel 8085 microprocessor. It was large and heavy (95 pounds), and still cost in the $10,000 range, which, combined with its very business-oriented, buttoned-down personality, continued to make it feel qualitatively different from machines like the Apple II. Yet it was technically a microcomputer. IBM was a huge company with a legendarily labyrinthine bureaucracy, meaning that projects could sometimes take an inordinately long time to complete. Despite the Datamaster project predating the PC project by two years, the former didn’t actually come out until July of 1981, just in time to have its thunder stolen by the announcement of the IBM PC the following month. Still, if the question of IBM’s first microcomputer ever comes up in a trivia game, there’s your answer.

The story of the machine that would become the real IBM PC begins, of all places, at Atari. Apparently feeling their oats in the wake of the Atari VCS’s sudden Space Invaders-driven explosion in popularity and the release of their own first PCs, the Atari 400 and 800, they made a proposal to IBM’s chairman Frank Cary in July of 1980: if IBM wished to have a PC of their own, Atari would deign to build it for them. Far from being the hidebound mainframer that he’s often portrayed as, Cary was actually something of a champion of small systems — even if “small systems” in the context of IBM often meant something quite different from what it meant to the outside world. Cary turned the proposal over to IBM’s Director of Entry Systems, Bill Lowe, based out of Boca Raton, Florida. Lowe in turn took it to IBM’s management committee, who pronounced it “the dumbest thing we’ve ever heard of.” (Indeed, IBM and Atari make about the oddest couple imaginable.) But at the same time, everyone knew that Lowe was acting at the personal behest of the chairman, not something to be dismissed lightly if they cared at all about their careers. So they told Lowe to assemble a team to put together a detailed proposal for how IBM could build a PC themselves — and to please come back with it in just one month.

Lowe assembled a team of twelve or thirteen (sources vary) to draft the proposal. In defiance of all IBM tradition, he deliberately kept the team small and the management structure informal, hoping to capture some of the hacker magic that had spawned PCs in the first place. His day-to-day project manager, Don Estridge, said, “If you’re competing against people who started in a garage, you have to start in a garage.” One might have expected IBM, the Goliath of the computer industry, to bludgeon their way into the PC market. Indeed, even as they congratulated themselves for having built this new market using daring, creativity, and flexibility that stolid IBM could not hope to match, many PC players lived in a sort of unvoiced dread of exactly this development. IBM, however, effectively decided to be a good citizen, to look at what was already out there and talk to those who had built the PC market to find out what was needed and where a theoretical IBM PC might fit. In that spirit, Jack Sams, head of software development, recommended that they talk to Microsoft. Sams was unusually aware of the PC world for an IBMer; he had actually strongly pressed for IBM to buy the BASIC for the Datamaster from Microsoft, but had been overruled in favor of an in-house effort. “It just took longer and cost us more,” he later said. Sams called Bill Gates on July 21, 1980, asking if he (Sams) could drop by their Seattle office the next day for a friendly chat about PCs. “Don’t get too excited, and don’t think anything big is about to happen,” he said.

Gates and Steve Ballmer, his right-hand man and the only one in this company of hackers with a business education, nevertheless both realized that this could be very big indeed. When Sams arrived with two corporate types in tow to function largely as “witnesses,” Gates came out personally to meet them. (Sams initially assumed that Gates, who still had the face, physique, and voice of a twelve-year-old, was the office boy.) Sams immediately whipped out the non-disclosure agreement that was standard operating procedure for IBM. Gates: “IBM didn’t make it easy. You had to sign all these funny agreements that sort of said IBM could do whatever they wanted, whenever they wanted, and use your secrets however they felt. So it took a little bit of faith.” Nevertheless, he signed it immediately. Sams wanted to get a general sense of the PC market from Gates, a man who was as intimately familiar with it as anyone. In this respect, Gates was merely one of a number of prominent figures he spoke with. However, he also had an ulterior motive: to see just what kind of shop Gates was running, to try to get a sense of whether Microsoft might be a resource his team could use. He was very impressed.

After consulting with Gates and others, Lowe presented a proposal on August 8 for the machine that IBM should build. Many popular histories, such as the old PBS documentary Triumph of the Nerds, give the impression that the IBM PC was just sort of slapped together in a mad rush. Actually, a lot of thought went into the design. Two aspects are particularly interesting.

At that time, almost all PCs used one of two CPUs: the MOS 6502 or the Zilog Z80. Each was the product of a relatively small, upstart company, and each “borrowed” its basic instruction set and much of its design from another, more expensive CPU produced by a larger company — the Motorola 6800 and the Intel 8080 respectively. (To add to the ethical questions, both were largely designed by engineers who had also been involved with the creation of their “inspirations.”) Of more immediate import, both were 8-bit chips capable of addressing only 64 K of memory. This was already becoming a problem. The Apple II, for example, was limited at this time to 48 K of RAM due to the need to also address 16 K of ROM. We’ve already seen the hoops that Apple and the UCSD team had to jump through to get UCSD Pascal running on the machine. Even where these CPUs’ limitations weren’t yet a problem, it was clear they soon would be. The team therefore decided to go with a next-generation CPU that would make such constraints a thing of the past. IBM had a long history of working with Intel, and so it chose the Intel 8088, a hybrid 8-bit/16-bit design that could be clocked at up to 5 MHz (far faster than the 6502 or Z80) and, best of all, could address a full 1 MB of memory. The IBM PC would have room to grow that its predecessors lacked.
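The arithmetic behind those limits is simply powers of two: a CPU with n address lines can distinguish 2^n memory locations. A quick sketch of my own (not anything from the period) makes the gulf between the generations plain:

```python
# A CPU with n address lines can address 2**n distinct memory locations.
# The 6502 and Z80 expose 16 address lines; the Intel 8088 exposes 20.
for name, address_lines in [("6502 / Z80 (8-bit era)", 16),
                            ("Intel 8088", 20)]:
    total = 2 ** address_lines
    print(f"{name}: {total:,} bytes ({total // 1024} K)")
# 6502 / Z80 (8-bit era): 65,536 bytes (64 K)
# Intel 8088: 1,048,576 bytes (1024 K)
```

Those four extra address lines bought a sixteen-fold increase in the memory ceiling.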

The other interesting aspect was this much-vaunted idea of an “open architecture.” In Accidental Empires and even more so in Triumph of the Nerds Robert X. Cringely makes it out to be a choice born of necessity, just another symptom of the machine as a whole’s slapdash origins: “An IBM product in a year! Ridiculous! To save time, instead of building a computer from scratch, they would buy components off the shelf and assemble them — what in IBM speak was called ‘open architecture.'” Well, for starters “open architecture” is hardly “IBM speak”; it’s a term used to describe the IBM PC almost everywhere — and probably least of all within IBM. (In his meticulous, technically detailed Byte magazine article “The Creation of the IBM PC,” for example, team-member David J. Bradley doesn’t use it once.) But what do people mean when they talk about an “open architecture”? Unfortunately for flip technology journalists, the “openness” or “closedness” of an architecture is not an either/or proposition but rather, like so much else in life, a continuum. The Apple II, for example, was also a relatively open system in having all those slots Steve Wozniak had battled so hard for (just about the only battle the poor fellow ever won over Steve Jobs), slots which let people take the machine to places its creators had never anticipated and which bear a big part of the responsibility for its remarkable longevity. Like IBM, Apple also published detailed schematics for the Apple II. The CP/M machines that were very common in business were even more open, being based on a common, well-documented design specification, the S-100 bus, and having plenty of slots themselves. This let them share both hardware and software.

Rather than talking of an open architecture, we might do better to talk of a modular architecture. The IBM PC would be a sort of computer erector set, a set of interchangeable components that the purchaser could snap together in whatever combination suited her needs and her pocketbook. Right from launch she could choose between a color video card that could do some graphics and play games, or a monochrome card that could display 80 columns of text. She could choose anywhere from 16 K to 256 K of onboard memory; choose one or two floppy drives, or just a cassette drive; etc. Eventually, as third-party companies got into the game and IBM expanded its product line, she would be all but drowned in choices. Most of the individual components were indeed sourced from other companies, and this greatly sped development. Yet using proven, well-understood components has other advantages too, advantages from which the IBM PC’s reputation for stolid reliability would derive.

While sourcing so much equipment from outside vendors was a major departure for IBM, in other ways the IBM PC was a continuation of the company’s normal design philosophy. There was no single, one-size-fits-all IBM mainframe. When you called to say you were interested in buying one of these monsters, IBM sent a rep or two out to your business to discuss your needs, your finances, and your available space with you. Then together you designed the system that would best suit you, deciding how much disk storage, how much memory, how many and what kind of tape drives, what printers and terminals and punched-card readers, etc. In this light, the IBM PC was just a continuation of business as usual in miniature. Most other PCs of course offered some of this flexibility. It is nevertheless significant that IBM decided to go all-in for modularity, expandability, or, if we must, openness. Like the CPU choice, it gave the machine room to grow, as hard drives, better video cards, and eventually sound cards became available. It’s the key reason that the architecture designed all those years ago remains with us today — in much modified form, of course.

The committee gave Lowe the go-ahead to build the computer. IBM, recognizing that its own bureaucracy was an impediment to anyone really, you know, getting anything done, had recently come up with a concept it called the Independent Business Unit. The idea was that an IBU would work as a semi-independent entity, freed from the normal bureaucracy, with IBM acting essentially as the venture capitalist. Fortune magazine called the IBU “How to start your own company without leaving IBM.” Chairman Cary, in a quote that has often been garbled and misattributed, called the IBU IBM’s answer to the question, “How do you make an elephant [IBM] tap dance?” Lowe’s IBU would be code-named Project Chess, and the machine they would create would be code-named the Acorn. (Apparently no one was aware of the British computer company of the same name.) They were given essentially free rein, with one stipulation: the Acorn must be ready to go in just one year.

 


Micro Men

For practical purposes, the British PC industry lagged about three years behind the American. It wasn’t that it was impossible to buy a modern American machine. Commodore alone sold some 45,000 PET systems in Britain in that platform’s first three years of availability, and, while they were less common, you could certainly buy imported TRS-80s, Apple IIs, and Atari 400s and 800s if you had the money. But it’s that last part that’s key here. At a time when the pound was worth around $2.50, even the most bare-bones PET system would set you back at least £650, while an Apple II system of the type that was pretty much the expected standard in America by 1981 — a II Plus with 48 K, a color monitor, two floppy drives, perhaps a printer — would quickly climb to around the £2000 mark. To grasp just how far out of reach these prices put computers for the average Briton, you have to understand something about life there in the late 1970s and early 1980s.

The British economy hadn’t really been good for quite some years, suffering along with the rest of the country from a sort of general post-empire malaise punctuated by occasional embarrassing shocks like the Three-Day Week (1974), when chronic energy shortages forced the government to mandate that businesses could open only three days in the week, and the Winter of Discontent (1978-79), when strikes across a whole range of industries brought the economy and, indeed, daily life to a virtual standstill. The latter events were sufficient to ensure the election as Prime Minister of perhaps the most polarizing figure in postwar British political history, Margaret Thatcher, on a platform that promised to drag Britain into the modern age, if necessary kicking and screaming, by rolling back most of the welfare state that had been erected in the aftermath of World War II. Yet nothing got better in the immediate wake of Thatcher’s election. In fact, as the government imposed harsh austerity measures and much of the country’s remaining industrial base collapsed, things just continued to get worse. By 1981 unemployment was at 12.5%, entire cities were reduced to industrial wasteland, riots were becoming a daily reality, and Thatcher was beset by howling mobs virtually everywhere she went. It felt like something more than just a serious recession; it felt dangerous. That summer The Specials summed up the mood of the country in the apocalyptic, chart-topping “Ghost Town.” Things would get slowly, painfully better after that low point, but it would be nearly a decade before unemployment shrank to reasonable levels and the modern economy Thatcher had promised really took hold with the beginning of the era of “cool Britannia.”

Suffice it to say, then, that most Britons would not have been able to afford American computers even if they had been priced in line with what Americans paid for them. While PETs were sold to businesses and TRS-80s and Apple IIs to the handful of wealthy eccentrics who could afford them, a parallel domestic industry arose to serve everyday users at prices they could afford. It began in 1978, three years after the Altair in North America, with a handful of do-it-yourself kits that let hobbyists solder together contraptions of toggle switches and blinking lights. The British equivalent of the trinity of 1977 then arrived, right on schedule, in 1980.

So many characters from the early PC era are larger than life, and their photos seem to say it all about them. You’ve got, for example, Steve Jobs, the glib, handsome charmer whom you wouldn’t quite trust with your daughter.

You’ve got Jack Tramiel, who (Jewishness aside) looks like he should be sitting behind a mound of spaghetti mumbling about breaking kneecaps.

And you’ve got the man history remembers as the first to bring affordable computers to the British public, Sir Clive Sinclair. He looks like a mad genius inventor who should be making gadgets for James Bond — or maybe Maxwell Smart. If you left him alone at your house you’d probably return to find the cat on fire and the daughter’s hair turned blue.

Despite having absolutely no formal training, Sinclair graduated from gigs writing for electronics magazines in 1961 to found Sinclair Radionics, a firm with the perfect name for a mad scientist’s workshop. After years spent selling kits for making radios, amplifiers, test equipment, and the like to hobbyists, Sinclair Radionics started a consumer-electronics line, for which, as (once again) befitted any proper mad scientist, they produced groundbreaking gadgets with absurd design flaws and about the worst quality control imaginable. There was the Sinclair Executive, one of the first calculators small enough to fit in a pocket, but which had an unfortunate tendency to explode (!) when left on too long. And there was the Microvision, a portable television. Unfortunately, Sinclair had neglected to ask just who the hell really wanted to watch TV on a 2″ black-and-white screen, and it was a commercial flop.

But the stereotypical — or satirical — Sinclair product was the Black Watch.

On the plus side, it was one of the first digital wristwatches. On the negative side — gee, where to start? The Black Watch was chronically unreliable in actually, you know, keeping time, never a good feature in a watch; it was apparently very susceptible to climate changes, running at different speeds in different seasons. Batteries lasted for a solid ten days if you were lucky, and were almost as hard to replace as the watch had been to assemble in the first place. (Like many Sinclair products, it was available as a do-it-yourself kit as well as in pre-assembled form.) It had a tendency to literally fall to pieces all at once as the clips that held it together fatigued. But even that wasn’t the worst possible failure. In what was becoming a Sinclair trademark, the Black Watch was also known to explode without warning.

Released in late 1975, the Black Watch was a fiasco which, combined with the onslaught of cheap calculators from Japan, marked the beginning of the end of Sinclair Radionics. Britain’s National Enterprise Board bought a majority interest in 1977, but quickly found Clive to be all but impossible to deal with, and found the hoped-for turnaround a tough nut to crack. The NEB finally pulled the plug on the company in the wake of Thatcher’s election; this sort of meddling in private business was of course exactly what the government should not be doing under Thatcher’s new paradigm. By that time Clive had already started another company on the sly to wriggle free of government interference with his management decisions. He named it Science of Cambridge to keep its guiding hand at least somewhat under wraps. This was the company that would start the PC boom in Britain.

For an exaggerated but entertaining picture of Clive Sinclair the man, I’ll point you to the show whose title I stole for this post, the BBC one-off Micro Men. He was a genuinely talented inventor with a flair for the art of the possible and a determination to bring out products at prices that ordinary people could afford — a populist in the best sense of the word. He was also stupefyingly stubborn and arrogant, one of those supremely tedious people who love to talk about their IQ scores. (He was chairman of British Mensa for almost two decades.) In a typical interview for Your Computer magazine in 1981, he said, “I make mistakes, everyone does, but I never make them twice.” Someone of more average intelligence — like for instance your humble blogger here — might counter that his history of exploding products points to a man who kept making the same mistakes over and over, thinking he could avoid the perspiration of polishing and perfecting through the inspiration of his initial brilliant idea. But what do I know?

Sinclair had been involved with some of those blinking-box computer kits I mentioned earlier, but he first entered the computer market in a big way with the release of the ZX80 in early 1980, the £100 machine I mentioned in an earlier post as Jack Tramiel’s inspiration for the Commodore VIC-20. Indeed, there are some similarities between the two men, both egocentric executives who were forced out of the calculator market by the cheaper Japanese competition. Yet we shouldn’t push the comparison too far. Sinclair was, to use the British term, a thoroughgoing boffin, filled with childlike enthusiasm for gadgets and for technology’s social potential. Tramiel, however, was all businessman; he would, to paraphrase one of Steve Jobs’s most famous pitches, have been perfectly happy to sell sugared water for his entire life if that gave him the competition he craved.

The ZX80 was, once again, available as either a semi-assembled kit or, for somewhat more, a completed product ready to plug in and use. With its tiny case and its membrane keyboard, it looked more like a large calculator than a computer. Indeed, its 1 K of standard RAM meant that it wasn’t good for much more than adding numbers until the user sprang for an expansion. Its standard BASIC environment was bizarre and seemed almost willfully unfriendly, and it was beset by the usual Sinclair reliability problems, with overheating a particular concern. (At least there were no reports of exploding ZX80s…) The design was so minimal that it didn’t even have a video chip, but rather relied on the CPU to generate a video signal entirely in software. From this stemmed its most notorious “feature”: because the CPU could only generate video when it was not doing something else, the screen went blank whenever a program was actually running, even momentarily every time the user hit a key. But it was a real computer, the first really within reach for the majority of Britons. Sinclair sold 100,000 of them in less than eighteen months.
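To make the tradeoff concrete, here’s a loose conceptual sketch in Python, with invented names and print statements standing in for what the real machine did in hand-tuned Z80 code: one processor, two jobs, and only ever time for one of them.

```python
# Conceptual sketch only -- not actual ZX80 firmware. With no video chip,
# the single CPU must either run the user's program or generate the video
# signal in software; it can never do both at once.
def zx80_timeline(cpu_busy_per_frame):
    """cpu_busy_per_frame: True for frames where the user's program
    (or even just a keypress handler) needs the CPU."""
    for frame, busy in enumerate(cpu_busy_per_frame):
        if busy:
            print(f"frame {frame}: CPU computing -> screen blank")
        else:
            print(f"frame {frame}: CPU idle -> CPU paints the picture")

# Even a single keypress steals the CPU for a moment, and the display blinks:
zx80_timeline([False, False, True, False])
```

The ZX81’s “slow” mode, which we’ll get to shortly, amounted to guaranteeing the drawing step every frame no matter what, at the cost of dramatically slower computation.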

Science of Cambridge was not the only British company to make a splash in the burgeoning home-computer market in 1980. Another young company, Acorn Computers, released its own machine, the Acorn Atom, later that year.

The Atom cost about 50% more than the ZX80, but still vastly less than any of the American machines. The extra money bought you a much more usable computer, with a proper keyboard, twice the RAM (even if 2 K was still sadly inadequate for actually doing much of anything), a display that didn’t flicker on and off, and a less, shall we say, idiosyncratic interpretation of BASIC. The competition between Sinclair and Acorn was personal. The head of Acorn, Chris Curry, had been for some twelve years Clive Sinclair’s right-hand man. The two had parted ways in late 1978, ironically because Curry wanted to produce a new microcomputer that Sinclair did not (yet) see the potential of. Curry went on to form Acorn with a partner, Hermann Hauser, and barely a year later — Sinclair having suddenly gotten the microcomputer religion — was going toe to toe with his erstwhile boss and mentor.

The following year, 1981, would prove a pivotal one. Sinclair, who changed the name of his company that year to Sinclair Research in the wake of Sinclair Radionics’s dissolution, introduced the ZX81 in March, an evolution of the ZX80 design that further reduced the price to just £50 in kit form, £70 fully assembled.

Amongst other modest improvements, the ZX81 could run in “slow” mode, in which enough CPU time was always reserved to update the display, eliminating the screen blanking at the cost of dramatically slower CPU throughput. And it could handle floating-point numbers, an impossibility on the ZX80. Of course, it was also a Sinclair product, with everything that entailed. The 16 K RAM expansion didn’t quite fit into its socket correctly; it would occasionally fall out of place with disastrous results. Actually, most of the connections had similar if less acute problems, forcing one to tiptoe gingerly around the machine. (Presumably those living near train tracks were just out of luck.)

The Commodore VIC-20 also arrived that year, at an initial price of about £180. Very much the lowest of the low-end machines in North America, the VIC-20 with its 5 K of RAM and color graphics capabilities was considerably more capable than either the unexpanded Sinclair or Acorn machines; thus the comparatively high price.

In North America, we saw the emergence of a commercial software market in 1978, as hobbyists like Scott Adams began packaging their programs on cassette tapes in Ziploc baggies and selling them. True to the three-year rule, a domestic British software market began to emerge in 1981, with a similar do-it-yourself personality of hand-copied cassettes and improvised packaging. (One could hear the creators’ children playing and similar background noises on some of these “data” tapes.) Software of course largely meant games, and text adventures made up a big part of those.

A very good candidate for the first homegrown British example of the form is Planet of Death, a game for the ZX80 and ZX81 released around June of 1981 by Artic Software, a company formed by two university students, Richard Turner and Chris Thornton, the year before. Unlike the earliest American text-adventure coders, Turner and Thornton had plenty of examples to follow, thanks to their Video Genie computer, a Hong Kong-manufactured clone of the TRS-80 Model 1 that became more popular than the real thing in Britain. (In fact, they did their coding on the Genie, which shared the Sinclair machines’ Zilog Z80 processor, and transferred their work to the more primitive Sinclairs.) The Artic adventure line, of which Planet of Death was the first, shows a marked Scott Adams influence, from the instructions insert that calls the player’s avatar her “puppet” to Artic’s system of numbering its adventures to help the devoted assemble a complete collection. (One difference: Artic used letters instead of numbers. Thus Planet of Death is Adventure A.)

Planet of Death doesn’t cut a very inspiring figure as the first example of British ludic narrative. Mostly it makes you appreciate its inspiration; whatever his other failings, Scott Adams always finished his games before he released them. Planet of Death plays like something you might find sloshing around the bottom of one of the modern IF Competitions, albeit without the built-in technical competency modern IF languages like Inform bring to the table. It’s as if Turner and Thornton ran out of memory and simply stopped where they were — which, come to think of it, is likely exactly what happened. You’ve got bugs galore, a maze that’s doubly frustrating because it ultimately leads nowhere, red herrings and half-finished puzzles, all wired up to an unusually obtuse two-word parser that thinks “with” is a verb. Yet, just as the ZX80 and ZX81 were real computers, however limited an implementation thereof, Planet of Death was a real adventure game, the first most of the British public had seen, and it sold well enough to spawn a whole line from Artic. It stands at the origin of an adventure-game scene that would become if anything even more vital and prolific than that in the U.S. — one we’ll be following in later posts.
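For the curious, here’s roughly the shape of such a two-word parser, sketched in Python; the verb table and wording are my own invention, not Artic’s actual code. Everything past the first two words is simply discarded, and a careless word list is exactly how something like “with” ends up being treated as a verb:

```python
# A hypothetical verb-noun parser of the Planet of Death era.
# Not Artic's code -- just the shape of the technique.
VERBS = {"go", "get", "drop", "open", "look", "with"}  # "with" has snuck in!

def parse(command):
    words = command.lower().split()
    if not words or words[0] not in VERBS:
        return "I don't understand."
    verb = words[0]
    noun = words[1] if len(words) > 1 else ""
    return f"verb={verb!r}, noun={noun!r}"  # any further words are ignored

print(parse("open door with key"))  # -> verb='open', noun='door'; "with key" lost
print(parse("with sword"))          # -> happily accepted: "with" is a "verb"
```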

In an important signifier of the growing acceptance of PCs in Britain, the omnipresent High Street newsstand chain WH Smith began selling the ZX81 in its stores with the arrival of the 1981 holiday season, billing it as “your first step into personal computing.” Just as the arrival of the VIC-20 in K-Mart stores in North America signaled a similar paradigm shift there, mainstream British stores would soon be stocking not just Sinclairs but also Acorns and Commodores. Within a few years British computer sales would surpass those in the U.S. on a per capita basis, as Britain became the most computer-mad nation on Earth. We’ll get back to that. For next time, though, we’ll return to the U.S. to look at the last major computer introduction of 1981, and the most long-lived and important of all.

 
