
The IBM PC, Part 1

07 May

What with the arrival of the category-defining Commodore VIC-20 and the dramatic growth of the British PC market, 1981 has provided us with no shortage of new machines and other technical developments to talk about. Yet I’ve saved the biggest event of all for last: the introduction of the IBM PC, the debut of an architecture that is still with us over 30 years later. As such a pivotal event in the history of computing, it’s already had plenty written about it, and no small amount of folklore of dubious veracity has also clustered around it. Still, it’s not something we can ignore here, for the introduction of the IBM PC in late 1981 marks the end of the first era of PCs as consumer products as surely as the arrival of the trinity of 1977 spelled the end of the Altair era of home-built systems. So, I’ll tell the tale here again. Along the way, I’ll try to knock down some pervasive myths.

One could claim that the IBM PC was not really IBM’s first PC at all. In September of 1975 the company introduced the IBM 5100, their first “portable” computer. (“Portable” meant that it weighed just 55 pounds and you could buy a special travel case to lug it around in.)

The 5100 was not technically a microcomputer; it used a processor IBM had developed in-house called the PALM which was spread over an entire circuit board rather than being housed in a single microchip. From the end user’s standpoint, however, that made little difference; certainly it would seem to qualify as a personal computer if not a microcomputer. It was a self-contained, Turing complete, programmable machine no larger than a suitcase, with a tape drive for loading and saving programs, a keyboard, and a 5-inch screen all built right in along with 16 K or more of RAM. What made the 5100 feel different from the first wave of PCs were its price and its promoted purpose. The former started at around $10,000 and could quickly climb to the $20,000 range. As for the latter: IBM pushed the machine as a serious tool for field engineers and the like in remote locations where they couldn’t access IBM’s big machines, not as anything for fun, education, hacking, or even office work. The last of these at least changed with two later iterations of the concept, the 5110 and 5120, which were advertised as systems suitable for the office, with accounting, database, and even word processing applications available. Still, the prices remained very high, and actually outfitting one for this sort of office work would entail connecting it to a free-standing disk array that was larger than the machine itself, making the system look and feel more like a minicomputer and less like a PC. It’s nevertheless telling that, although it was almost never referred to by this name, the IBM PC when it finally arrived had the official designation of (with apologies to Van Halen) the IBM 5150, a continuation of the 5100 line of portable computers rather than an entirely new thing — this even though it shared none of the architecture of its older siblings.

In February of 1978 IBM began working on its first microcomputer — and it still wasn’t the IBM PC. It was a machine called the System/23 Datamaster.

Designed once again for an office environment, the Datamaster was built around an Intel 8085 microprocessor. It was large and heavy (95 pounds), and still cost in the $10,000 range, which combined with its very business-oriented, buttoned-down personality continued to make it feel qualitatively different from machines like the Apple II. Yet it was technically a microcomputer. IBM was a huge company with a legendarily labyrinthine bureaucracy, meaning that projects could sometimes take an inordinately long time to complete. Despite the Datamaster project predating the PC project by two years, the former didn’t actually come out until July of 1981, just in time to have its thunder stolen by the announcement of the IBM PC the following month. Still, if the question of IBM’s first microcomputer ever comes up in a trivia game, there’s your answer.

The story of the real IBM PC begins, of all places, at Atari. Apparently feeling their oats in the wake of the Atari VCS’s sudden Space Invaders-driven explosion in popularity and the release of their own first PCs, the Atari 400 and 800, they made a proposal to IBM’s chairman Frank Cary in July of 1980: if IBM wished to have a PC of their own, Atari would deign to build it for them. Far from being the hidebound mainframer that he’s often portrayed as, Cary was actually something of a champion of small systems — even if “small systems” in the context of IBM often meant something quite different from what it meant to the outside world. Cary turned the proposal over to IBM’s Director of Entry Systems, Bill Lowe, based out of Boca Raton, Florida. Lowe in turn took it to IBM’s management committee, who pronounced it “the dumbest thing we’ve ever heard of.” (Indeed, IBM and Atari make about the oddest couple imaginable.) But at the same time, everyone knew that Lowe was acting at the personal behest of the chairman, not something to be dismissed lightly if they cared at all about their careers. So they told Lowe to assemble a team to put together a detailed proposal for how IBM could build a PC themselves — and to please come back with it in just one month.

Lowe assembled a team of twelve or thirteen (sources vary) to draft the proposal. In defiance of all IBM tradition, he deliberately kept the team small, the management structure informal, hoping to capture some of the hacker magic that had spawned PCs in the first place. His day-to-day project manager, Don Estridge, said, “If you’re competing against people who started in a garage, you have to start in a garage.” One might have expected IBM, the Goliath of the computer industry, to bludgeon their way into the PC market. Indeed, and even as they congratulated themselves for having built this new market using daring, creativity, and flexibility stolid IBM could not hope to match, many PC players lived in a sort of unvoiced dread of exactly this development. IBM, however, effectively decided to be a good citizen, to look at what was already out there and talk to those who had built the PC market to find out what was needed, where a theoretical IBM PC might fit. In that spirit, Jack Sams, head of software development, recommended that they talk to Microsoft. Sams was unusually aware of the PC world for an IBMer; he had actually strongly pressed for IBM to buy the BASIC for the Datamaster from Microsoft, but had been overruled in favor of an in-house effort. “It just took longer and cost us more,” he later said. Sams called Bill Gates on July 21, 1980, asking if he (Sams) could drop by their Seattle office the next day for a friendly chat about PCs. “Don’t get too excited, and don’t think anything big is about to happen,” he said.

Gates and Steve Ballmer, his right-hand man and the only one in this company of hackers with a business education, nevertheless both realized that this could be very big indeed. When Sams arrived with two corporate types in tow to function largely as “witnesses,” Gates came out personally to meet them. (Sams initially assumed that Gates, who still had the face, physique, and voice of a twelve-year-old, was the office boy.) Sams immediately whipped out the non-disclosure agreement that was standard operating procedure for IBM. Gates: “IBM didn’t make it easy. You had to sign all these funny agreements that sort of said IBM could do whatever they wanted, whenever they wanted, and use your secrets however they felt. So it took a little bit of faith.” Nevertheless, he signed it immediately. Sams wanted to get a general sense of the PC market from Gates, a man who was as intimately familiar with it as anyone. In this respect, Gates was merely one of a number of prominent figures he spoke with. However, he also had an ulterior motive: to see just what kind of shop Gates was running, to try to get a sense of whether Microsoft might be a resource his team could use. He was very impressed.

On August 8, after consulting with Gates and others, Lowe presented a proposal for the machine that IBM should build. Many popular histories, such as the old PBS documentary Triumph of the Nerds, give the impression that the IBM PC was just sort of slapped together in a mad rush. Actually, a lot of thought went into the design, and two aspects of it are particularly interesting.

At that time, almost all PCs used one of two CPUs: the MOS 6502 or the Zilog Z80. Each was the product of a relatively small, upstart company, and each “borrowed” its basic instruction set and much of its design from another, more expensive CPU produced by a larger company — the Motorola 6800 and the Intel 8080 respectively. (To add to the ethical questions, both were largely designed by engineers who had also been involved with the creation of their “inspirations.”) Of more immediate import, both were 8-bit chips capable of addressing only 64 K of memory. This was already becoming a problem. The Apple II, for example, was at this time limited to 48 K of RAM because it also needed to address 16 K of ROM. We’ve already seen the hoops this forced Apple and the UCSD team to jump through to get UCSD Pascal running on the machine. Even where these CPUs’ limitations weren’t yet a problem, it was clear they were going to be soon. The team therefore decided to go with a next-generation CPU that would make such constraints a thing of the past. IBM had a long history of working with Intel, and so it chose the Intel 8088, a hybrid 8-bit / 16-bit design that could be clocked at up to 5 MHz (far faster than the 6502 or Z80) and, best of all, could address a full 1 MB of memory. The IBM PC would have room to grow that its predecessors lacked.
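
To make that gap concrete, here is a quick back-of-the-envelope illustration (mine, not anything from IBM’s own design documents): the 6502 and Z80 expose sixteen address lines, the 8088 exposes twenty, and every additional line doubles the amount of memory a processor can reach. In Python:

    # Addressable memory doubles with each additional address line.
    # The 6502 and Z80 expose 16 address lines; the Intel 8088 exposes 20.
    def addressable_bytes(address_lines):
        return 2 ** address_lines

    print(addressable_bytes(16))   # 65,536 bytes   = 64 K  (6502, Z80)
    print(addressable_bytes(20))   # 1,048,576 bytes = 1 MB  (8088)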

The other interesting aspect was this much-vaunted idea of an “open architecture.” In Accidental Empires and even more so in Triumph of the Nerds Robert X. Cringely makes it out to be a choice born of necessity, just another symptom of the machine as a whole’s slapdash origins: “An IBM product in a year! Ridiculous! To save time, instead of building a computer from scratch, they would buy components off the shelf and assemble them — what in IBM speak was called ‘open architecture.’” Well, for starters “open architecture” is hardly “IBM speak”; it’s a term used to describe the IBM PC almost everywhere — and probably least of all within IBM. (In his meticulous, technically detailed Byte magazine article “The Creation of the IBM PC,” for example, team member David J. Bradley doesn’t use it once.) But what do people mean when they talk about an “open architecture”? Unfortunately for flip technology journalists, the “openness” or “closedness” of an architecture is not an either/or proposition, but rather, like so much else in life, a continuum. The Apple II, for example, was also a relatively open system in having all those slots Steve Wozniak had battled so hard for (just about the only battle the poor fellow ever won over Steve Jobs), slots which let people take the machine to places its creators had never anticipated and which bear a big part of the responsibility for its remarkable longevity. Like IBM, Apple also published detailed schematics for the Apple II, so that third parties could design their own hardware and software for it. The CP/M machines that were very common in business were even more open, many of them being built around a common, well-documented hardware specification, the S-100 bus, and having plenty of slots themselves. This let them share both hardware and software.

Rather than talking of an open architecture, we might do better to talk of a modular architecture. The IBM PC would be a sort of computer erector set, a set of interchangeable components that the purchaser could snap together in whatever combination suited her needs and her pocketbook. Right from launch she could choose between a color video card that could do some graphics and play games, or a monochrome card that could display 80 columns of text. She could choose anywhere from 16 K to 256 K of onboard memory; choose one or two floppy drives, or just a cassette drive; etc. Eventually, as third-party companies got into the game and IBM expanded its product line, she would be all but drowned in choices. Most of the individual components were indeed sourced from other companies, and this greatly sped development. Yet using proven, well-understood components had other advantages too, advantages from which would derive the IBM PC’s reputation for stolid reliability.

While sourcing so much equipment from outside vendors was a major departure for IBM, in other ways the IBM PC was a continuation of the company’s normal design philosophy. There was no single, one-size-fits-all IBM mainframe. When you called to say you were interested in buying one of these monsters, IBM sent a rep or two out to your business to discuss your needs, your finances, and your available space with you. Then together you designed the system that would best suit you, deciding how much disk storage, how much memory, how many and what kind of tape drives, what printers and terminals and punched-card readers, etc. In this light, the IBM PC was just a continuation of business as usual in miniature. Most other PCs of course offered some of this flexibility. It is nevertheless significant that IBM decided to go all-in for modularity, expandability, or, if we must, openness. Like the CPU choice, it gave the machine room to grow, as hard drives, better video cards, and eventually sound cards became available. It’s the key reason that the architecture designed all those years ago remains with us today — in much modified form, of course.

The committee gave Lowe the go-ahead to build the computer. IBM, recognizing that its bureaucracy was an impediment to anyone really, you know, getting anything done, had recently come up with a concept it called the Independent Business Unit. The idea was that an IBU would work as a semi-independent entity, freed from the normal bureaucracy, with IBM acting essentially as the venture capitalist. Fortune magazine called the IBU “How to start your own company without leaving IBM.” Chairman Cary, in a quote that has often been garbled and misattributed, called the IBU IBM’s answer to the question, “How do you make an elephant [IBM] tap dance?” Lowe’s IBU would be code-named Project Chess, and the machine they would create would be code-named the Acorn. (Apparently no one was aware of the British computer company of the same name.) They were given essentially free rein, with one stipulation: the Acorn must be ready to go in just one year.

 


26 Responses to The IBM PC, Part 1

  1. Eric Fischer

    May 7, 2012 at 10:51 pm

    Great history as always, and interesting that the Datamaster keyboard so closely resembles the PC’s.

    I think you misstate the responsible department, though: “Entry Systems Division,” as in data entry, not as in entry-level.

     
    • Jimmy Maher

      May 7, 2012 at 11:02 pm

      Mm, I have a few sources that call it the “Entry-Level Systems” division. For instance, from Hard Drive by James Wallace:

      “The company’s efforts to produce a low-end commercial product were centered at its plant in Boca Raton, Florida, known as the Entry Level Systems unit. There, several projects were underway. An engineering team headed by Bill Sydnes was working on the System 23 Datamaster.”

       
      • Andrew Dalke

        May 9, 2012 at 3:46 pm

        The Palm Beach Post – Mar 17, 1984 says “Despite well-publicized misgivings about Florida’s unitary tax, International Business Machines announced plans this week to double spending on a Boca Raton-based division. The Entry Systems division will get $500 million…”

        The Sun Sentinel – June 15, 1986 says “IBM, which is based in Armonk, N.Y., plays down its influence on South Florida. Its lead division in Boca Raton, Entry Systems Division, is the leading producer of personal computers in the world.” March 13, 1985 says “He has worked as president of the Boca Raton-based Entry Systems Division, which is responsible for developing and manufacturing the company’s line of personal computers.” January 28, 1985: “freeing its Boca Raton-based personal computer group — known as the Entry Systems Division”

        IBM’s biography for Philip D. Estridge says “The following is the text of a December 1984 IBM biography. ‘Mr. Philip D. Estridge, IBM vice president and president, Entry Systems Division, International Business Machines Corporation, Boca Raton, Florida.'”

        There are a number of journal articles from the time, like “Association of Visually Coded Functions with an Alternate Key” and “Human Factors Testing of Icons for PC JR’S Word Processor Application” which list an affiliation of “International Business Machines Entry Systems Division Boca Raton, Florida”

         
        • Jimmy Maher

          May 9, 2012 at 10:57 pm

          Well, I guess that’s definitive enough. Teach me not to rely on secondary sources. :)

           
          • Michael Burke

            June 5, 2017 at 5:03 am

            Initially the IBM PC was only sold through the IBM store. That was a major headache for larger companies especially.

            Not being part of the Data Processing Division, the PC was not on the salesman’s catalog and there was no SE support.

            Adding to that problem, it wasn’t even part of the Office Systems Division. The new IBM retail stores were supposed to be the answer.

             
          • Alvin

            July 14, 2017 at 5:59 pm

            It may be definitive, but wrong. Initially, IBM, in the form of Harold Sparky Sparks, had a deal with Ed Farber to sell the PC in his stores. I think they were called Computerland.

             
    • Keith Palmer

      May 8, 2012 at 1:19 am

      “Great history as always, and interesting that the Datamaster keyboard so closely resembles the PC’s.”

      I saw a letter in a late issue of “Creative Computing” suggesting the “infamous” location of some of the keys on the original IBM PC keyboard came from the need to add keys to the Selectric typewriter keyboard, and that the design sort of carried forward. This does seem to back that up.

       
  2. ZUrlocker

    May 8, 2012 at 12:33 am

    Good story, as usual. I bought a bare bones IBM PC in 1982 about a year after it came out and equipped it with third party floppy drives, video card, Amdek monitor, 256k memory. I think the whole thing must have cost somewhere around $2,000. I was still an undergrad in school, so it was a lot of money, but this was a great upgrade over my Apple II. The PC was a heavy duty machine, with much better programming tools like Turbo Pascal. Unfortunately, games pretty much sucked using CGA graphics. A couple of years later, while in grad school, I upgraded again to a “fat Mac” 512K.
    –Zack

     
  3. Felix Pleșoianu

    May 8, 2012 at 5:41 am

    Honestly, the 4-color limitation of CGA didn’t really matter when all you had was a green screen…

     
  4. Malcolm

    February 12, 2016 at 11:11 pm

    The first computer I touched (at age 9) was an IBM 5100 at the Duke University computer access centre. As far as I could tell, the centre was open to all comers (although my dad was on sabbatical at Duke at the time).

    These 5100s had the toggle switch on the front panel to choose BASIC or APL, and the APL keyboards (who knows what the reality looks like where I set the toggle switch to APL).

    For a princely sum you could buy a second-hand QIC, and a friendly staff member would copy some programs to it. In particular, I remember HUNT THE WUMPUS.

    I also remember typing in programs from one of Ahl’s books. And my parents banning my trips to the centre until I had mastered my multiplication tables (I think I managed that in about two days ;)

    For me the 5100 was the beginning of a lifelong obsession. Back in Sydney, getting access to a PC was harder to manage, and I muddled by on others’ (including a lot of time with a borrowed Sharp PC1500 pocket computer) until I convinced my parents to buy a Mac in 1985… but that’s a whole different story!

     
  5. DZ-Jay

    February 11, 2017 at 7:08 pm

    I think you misunderstood the accounts from Cringely. Nothing you state contradicts what he said, except that you are using “open systems” to mean “expandable,” when the I dusty uses it to mean “freely accessible by third parties.”

    That’s the point of Cringely’s commentary in Triumph Of The Nerds: not that IBM cobbled together the machine out of spare parts and without thought, but that the recommendation report considered that, in order to accelerate time to market and have a chance at succeeding, they couldn’t spend a few years designing a proprietary architecture and building the components themselves (a “closed” architecture), but had to go with off-the-shelf parts and third-party components — an “open” system.

    You admit yourself that this is very un-IBM-like, which is the precise point Cringely makes.

    dZ.

     
    • DZ-Jay

      February 11, 2017 at 7:09 pm

      Auto-correct error: “when the I dusty uses…” Is supposed to be “when Cringely uses…” Sorry.

       
    • DZ-Jay

      February 12, 2017 at 11:43 am

      By the way, in the mainframe industry, an “open architecture” does not refer to expandability, but to the fact that there is an open market for components rather than a single vendor with a “closed” proprietary supply.

       
  6. Funai Dèssien

    June 5, 2017 at 4:06 pm

    >pound
    This is a totally illogical feudal measure.
    Use kg.

     
  7. Aula

    January 22, 2018 at 6:32 pm

    This article still contains an error that was fixed in the “Binning the Trash-80” article: CP/M did *not* need the S-100 bus (nor did S-100 need CP/M). CP/M is a strong contender for the most hardware-agnostic operating system ever created, since the higher-level parts (BDOS and CCP) abstracted all hardware access to BIOS calls, meaning only the BIOS had to be aware of the actual hardware. Of course, many applications needed more than the very primitive BIOS interface could offer, so either they had to be configured (often tediously) or they didn’t work very well (or at all).

     
    • whomever

      January 22, 2018 at 7:38 pm

      I think Jimmy’s point was more that most of the CP/M machines at the time did use the S-100 bus? You are of course correct; in fact the IBM PC itself could run a port of CP/M (CP/M-86), though it never caught on.

       
  8. Will Moczarski

    January 22, 2020 at 10:28 am

    release of the their own first PCs
    -> release of their

     
    • Jimmy Maher

      January 22, 2020 at 8:20 pm

      Thanks!

       
  9. Ben

    June 12, 2020 at 7:56 pm

    that’s he -> that he’s

    limitation -> limitations

     
    • Jimmy Maher

      June 15, 2020 at 8:59 am

      Thanks!

       
  10. Jeff Nyman

    July 18, 2021 at 8:13 pm

    “Rather than talking of an open architecture, we might do better to talk of a modular architecture.”

    We probably wouldn’t, though, because that term doesn’t fit the context being discussed. You can have something modular that isn’t technically an “open architecture.” Open architecture systems prioritize specific qualities in their designs: adaptability, modularity, portability, scalability, and interoperability.

    Thus a modular aspect is only one part of that. There are many systems that are very modular but quite closed. There are also what are called “selectively open” architectures. And all of this can be orthogonal to a technology infrastructure with specifications that are public as opposed to proprietary, which is often the limited form that people speak of in regards to “open architecture.”

    Regarding Cringely and his quote, if you are looking at things historically, you have to look at how widespread (or not) the term open architecture was back then. For example, consider an article by Michael Schrage from 2 November 1983, where he said: “IBM has an ‘open architecture’ for the system so that other companies can write software and build hardware for it.”

    Note how “open architecture” was put in quotes, indicating that this seemed to be some specific term that IBM was using in particular. Other articles from that time show that “open architecture” was a term used more with IBM than with others and was often associated with them in particular.

     
    • Jeff Nyman

      August 8, 2021 at 9:24 pm

      On reflection, I almost wonder if Cringely is talking about different stages of IBM as it evolved very rapidly. Cringely is quoted as saying:

      “To save time, instead of building a computer from scratch, they would buy components off the shelf and assemble them — what in IBM speak was called ‘open architecture.’”

      It’s almost sarcastic; as in “Here — if you can believe it — is what IBM thought ‘open architecture’ meant.”

      And initially that is all that IBM did: they went to outside vendors for most of the parts they needed. But eventually it seems (and if the history written by the players at the time is anything to go by), the independent business unit evolved and what happened was a true open architecture concept: IBM essentially built a modular base but then, crucially, also published the specifications. Their plan was to encourage other manufacturers to create the pieces that plugged into their core base system.

      The Triumph of the Nerds documentary referenced here makes this clear: “The key decisions were to go with an open architecture, non IBM technology, non IBM software, non IBM sales and non IBM service. And we probably spent a full half of the presentation carrying the corporate management committee into this concept. Because this was a new concept for IBM at that point.”

      But also: “As the frenzied 80’s came to a close IBM reached a watershed – they had created an open PC architecture that anyone could copy.”

      So there was an evolution of how IBM conceived of the open architecture.

      Upon rereading this, I also realize this quote from the post is also a bit inaccurate:

      “While sourcing so much equipment from outside vendors was a major departure for IBM, in other ways the IBM PC was a continuation of the company’s normal design philosophy.”

      The IBM PC was very much NOT in their normal design philosophy. (Although you could argue that their work on the IBM 5520 or the Displaywriter was priming them for this.) So much was this not their “normal design philosophy” that they got it completely wrong, failing to foresee a key problem that’s also referenced in Triumph of the Nerds: “IBM always thought their inside track would keep them ahead – wrong. IBM’s glacial pace and high overhead put them at a disadvantage to the leaner clone makers.”

      So what they had was a new design philosophy, but one being enacted in their old ways of doing business.

       
