Of Game Consoles, Home Computers, and Personal Computers

When I first started writing the historical narrative that’s ended up consuming this blog, I should probably have stated clearly that I was writing about the history of computer games, not videogames or game consoles. The terms “computer game” and “videogame” have little or no separation today, but in the late 1970s and early 1980s the two were regarded as very distinct things. In Zap!, his history of Atari written just as that company was imploding in 1983, Scott Cohen takes the division as a given. He states, “Perhaps Atari’s most significant contribution is that it paved the way for the personal computer.” In predicting the future of the two categories, he is right about one and spectacularly wrong about the other. The PC, he says, will continue up a steadily inclining growth curve, becoming more and more an expected household fixture as the years go by. The game console, however, will be dismissed in future years as a “fad,” the early 1980s version of the Hula Hoop.

If we trace back far enough we can inevitably find some common origins, but the PC and game console were generally products of different folks with very different technical orientations and goals. Occasional collisions like Steve Jobs’s brief sojourn with Atari were more the exception than the rule. Certainly the scales of the two industries were completely out of proportion with one another. We’ve met plenty of folks on this blog who built businesses and careers and, yes, made lots of money from the first wave of PCs. Yet everything I’ve discussed is a drop in the bucket compared to the Atari-dominated videogame industry. A few figures should make this clear.

Apple, the star of the young PC industry, grew at an enviable rate in its early years. For example, sales more than doubled from 1979 to 1980, from 35,000 units to 78,000. Yet the Atari VCS console also doubled its sales over the same period: from 1 million in 1979 to 2 million in 1980. By the time the Apple II in 1983 crossed the magical threshold of 1 million total units sold, the VCS was knocking at the door of 20 million. Even the Intellivision, Mattel’s distant-second-place competitor to the VCS, sold 200,000 units in 1980 alone. In mid-1982, the height of the videogame craze, games consoles could already be found in an estimated 17% of U.S. households. Market penetration like that would be years in coming to the PC world.

In software the story is similar. In 1980, a PC publisher with a hit game might dream of moving 15,000 units. Atari at that time already had two cartridges, Space Invaders and Asteroids, that had sold over 1 million copies. Activision, an upstart VCS-game-maker formed by disgruntled Atari programmers, debuted in 1980 with sales of $67 million on its $25 game cartridges. By way of comparison, Apple managed sales of $200 million on its $1500 (or more) computer systems. The VCS version of Pac-Man, the big hit of 1981, sold over 2 million copies that year alone. Again, it would be a decade or more before PC publishers would begin to see numbers like that for their biggest titles.

So, we have two very different worlds here: that of the mass-market, inexpensive game console and that of the PC, which remained the province of only the most affluent, technology-savvy consumers. But then a new category began to emerge to slot itself right into the middle of this divide: the “home computer.” The first company to dip a toe into these waters was Atari itself.

During his brief association with Atari, Steve Jobs brought a proposal for what would become the Apple II to Atari’s then-head Nolan Bushnell. With Atari already heavily committed to both arcade machines and the project that would become the VCS, Bushnell declined. (Bushnell did, however, get Jobs a meeting with potential investor Don Valentine, who in turn connected him with Mike Markkula. Markkula became the third employee at Apple, put up most of the cash the company used to get started in earnest, and played a key role in early marketing efforts. Many regard him as the unsung hero of Apple’s unlikely rise.) Only later on, after the success of the Apple II and TRS-80 proved the PC a viable bet, did Atari begin to develop a full-fledged computer of its own.

The Atari 400 and 800, released in late 1979, were odd ducks in comparison to other microcomputers. The internals were largely the work of three brilliant engineers, Steven Mayer, Joe Decuir, and Jay Miner, all of whom had also worked on the Atari VCS. Their design was unprecedented. Although they had at their heart the same MOS 6502 found in the Atari VCS and the Apple II, the 400 and 800 were built around a set of semi-intelligent custom chips that relieved the CPU of many of its housekeeping burdens, increasing its overall processing potential considerably. These chips also brought graphics capabilities that were nothing short of stunning. Up to 128 colors could be displayed at resolutions of up to 352 X 240 pixels, and the machines also included sprites, small graphics blocks that could be overlaid on the background and moved quickly about; think of the ghosts in Pac-Man for a classic example. By comparison, the previous state of the art in PC graphics had been the Apple II’s hi-res mode: 280 X 160 pixels with 6 possible colors, no sprites, and the color-transition limitations that result in all that ugly color fringing. In addition, the Atari machines featured four-voice sound-synthesis circuitry. Their competitors offered either no sound at all, or, as in the case of the Apple II, little more than beeps and squeaks. As an audiovisual experience, the new Atari line was almost revolutionary.

Still, the Apple II looked and was equipped (not to mention priced) like a machine of serious intent. The Ataris lacked the Apple’s flexible array of expansion slots as well as Steve Wozniak’s fast and reliable floppy-disk system. They shipped with just 8 K of memory. Their BASIC implementation, one of the few not sourced from Microsoft, was slow and generally kind of crummy. The low-end model, the 400, didn’t even have a proper keyboard, just an awkward membrane setup. Nor was it all a story of missing features. When you inspected the machines more closely, you found something unexpected: a console-style port for game cartridges. The machines seemed like Frankensteins, stuck somewhere between the worlds of the game console and the PC. Enter the home computer — a full-fledged computer, but one plainly more interested in playing games and doing “fun” things than “serious” work. The Atari logo on the cases, of course, also contributed to the impression that, whatever else they were, these machines weren’t quite the same thing as, say, the Apple II.

Alas, Atari screwed the pooch with the 400 and 800 pretty badly. From the beginning it priced them too high for their obvious market; the 800 was initially only slightly less expensive than the Apple II. And, caught up like the rest of the country in VCS-fever, it put little effort into promotion. Many in management hardly seemed aware that the machines existed at all. In spite of this, their capabilities combined with the Atari name were enough to make them modest sales successes. They also attracted considerable software support. On-Line Systems, for instance, made them its second focus of software development, behind only the Apple II, during its first year or two in business. Still, they never quite lived up to their hardware’s potential, never became the mass-market success they might (should?) have been.

The next company to make a feint toward the emerging idea of a home computer was Radio Shack, who released the TRS-80 Color Computer in 1980. (By the end of that year Radio Shack had four separate machines on the market under the TRS-80 moniker, all semi- or completely incompatible with one another. I haven’t a clue why no one could come up with another name.) Like so much else from Radio Shack, the CoCo didn’t seem to know quite what it wanted to be. Radio Shack did get the price about right for a home computer: $400. And they provided a cartridge port for instant access to games. Problem was, those games couldn’t be all that great, because the video hardware, while it did indeed allow color, wasn’t a patch on the Atari machines. Rather than spend money on such niceties, Tandy built the machine around a Motorola 6809, one of the most advanced 8-bit CPUs ever created. That attracted a small but devoted base of hardcore hackers who did things like install OS-9, the first microcomputer operating system capable of multitasking. Meanwhile the kids and families the machine was presumably meant to attract shrugged their shoulders at the unimpressive graphics and went back to their Atari VCSs. Another missed opportunity.

The company that finally hit the jackpot in the heretofore semi-mythical home-computer market was also responsible for the member of the trinity of 1977 that I’ve talked about the least: Commodore, creator of the PET. I’ll try to make up for some of that inattention next time.

 
 


Pascal and the P-Machine

Working with a small team of assistants, Niklaus Wirth designed Pascal between 1968 and 1970 at the Swiss Federal Institute of Technology in Zürich. His specification was implemented for the first time on the university’s CDC Cyber mainframe in mid-1970, and the system was finally considered complete and robust enough to introduce in beginning programming classes there in 1972. With his language essentially complete and with a working proof of concept in daily use, Wirth now shifted roles, from design and implementation to the equally daunting task of convincing computer-science departments around the world to give up their old languages and give his new one a shot. Like the PC industry of a decade later, the world of institutional computing was full of incompatible systems that often had trouble even exchanging data, much less programs. And yet Pascal needed to be available on all or most of these machines — or at least the ones commonly chosen by computer-science departments for pedagogical use — to have a chance of realizing Wirth’s goal of Pascal serving as an antidote to the deadly virus of BASIC. Porting the compiler by hand to all of those disparate architectures looked to be a formidable task indeed.

Wirth’s next epiphany should sound familiar if you read my earlier posts about Infocom: working closely with a graduate student, Urs Ammann, he created a virtual machine, named the P-Machine, that could be hosted on all of these physical machines. They rewrote the Pascal compiler to output P-Code that could run under the P-Machine, just as Infocom later did in designing ZIL and the Z-Machine. (That’s of course no big surprise, as the P-Machine was the inspiration for the Z-Machine. If you’ve been reading these posts chronologically, I’m afraid we’ve rather put the cart before the horse.) Wirth, however, went one step further: he rewrote the Pascal compiler and other development tools themselves in P-Code, thus completing the circle. Once a P-Machine interpreter was written for any given platform, that platform could run not only the whole universe of already extant Pascal software but also the compiler itself, allowing users to create new software that would run not just on that platform but on every other for which a P-Machine interpreter had been written. Similarly, updates to Pascal could be made instantly available on every platform hosting the language. Neat trick, no?
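
To make the idea concrete, here is a minimal sketch of a bytecode interpreter in (appropriately enough) Pascal. The opcodes, encoding, and stack layout are invented for illustration and bear no resemblance to Wirth’s actual P-Code specification; the point is simply that the program table at the top is the portable part, while only the little fetch-and-execute loop at the bottom has to be rewritten for each new physical machine.

program TinyVM;
{ Illustrative only: a toy stack machine in the spirit of the P-Machine. }
{ The opcodes and their encoding are invented for this sketch and do not }
{ match the real P-Code instruction set.                                 }
type
  OpCode = (opPush, opAdd, opMul, opPrint, opHalt);
  Instruction = record
    op  : OpCode;
    arg : Integer;   { only meaningful for opPush }
  end;

const
  { "P-Code" for: print (2 + 3) * 4 -- this table runs unchanged anywhere }
  Prog : array[0..6] of Instruction = (
    (op: opPush;  arg: 2),
    (op: opPush;  arg: 3),
    (op: opAdd;   arg: 0),
    (op: opPush;  arg: 4),
    (op: opMul;   arg: 0),
    (op: opPrint; arg: 0),
    (op: opHalt;  arg: 0));

var
  stack  : array[0..15] of Integer;
  sp, pc : Integer;

begin
  sp := -1;   { empty stack }
  pc := 0;
  { The fetch-decode-execute loop: the only piece that must be rewritten }
  { (in assembly or whatever else is at hand) for each new machine.      }
  while Prog[pc].op <> opHalt do
  begin
    case Prog[pc].op of
      opPush  : begin sp := sp + 1; stack[sp] := Prog[pc].arg end;
      opAdd   : begin stack[sp - 1] := stack[sp - 1] + stack[sp]; sp := sp - 1 end;
      opMul   : begin stack[sp - 1] := stack[sp - 1] * stack[sp]; sp := sp - 1 end;
      opPrint : writeln(stack[sp]);
    end;
    pc := pc + 1
  end
end.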

Beginning in 1973, Wirth offered a “P-Kit” to anyone who wanted one. It consisted of the P-Code Pascal compiler and the source code, itself written in Pascal, for a P-Machine interpreter. The recipient need only (?) translate this source into a program runnable on their platform, working in assembly or whatever other language was available, to get a complete Pascal environment up and running. To further encourage as many implementations as possible, Wirth published the specifications for the P-Machine in his book Algorithms + Data Structures = Programs, which appeared in German in 1975 and in English the following year. The P-Machine did its job. By the mid-1970s universities were increasingly adopting Pascal as their standard beginning pedagogical language in lieu of comparative dinosaurs like BASIC and FORTRAN.

Meanwhile, the PC revolution was beginning, a development of which Wirth remained virtually unaware. He was after all firmly entrenched in the established institutional computing culture, and, further, he was working from Europe, where microcomputer technology was oddly slow in arriving. It would therefore be someone else, Ken Bowles of the University of California San Diego, who would spearhead a drive to bring Pascal and the P-Machine to microcomputers.

Bowles was an angry, frustrated man when he received his P-Kit in 1974. A devotee of interactive, time-shared computing over the old batch-processing model, Bowles had become director of UCSD’s computer center in 1968. One of his first actions had been to replace the mainframe at the core of the center, an aged, batch-processing-bound Control Data system, with a state-of-the-art Burroughs capable of timesharing. Incredibly, however, while on a lecturing stint in Oxford, England, in mid-1974, Bowles got word that the university’s administrators had decided, without even consulting him, to replace the Burroughs system with another big, traditional, batch-processing IBM mainframe. Even better, the news came not from the university but from contacts at Burroughs, who called him to ask why UCSD was pulling its contract. Bowles resigned his position as director in protest, going back to being just an ordinary professor, but could only watch helplessly as the trucks arrived to cart away the Burroughs system that had been essential to so much of his and his students’ research. Worse, his programming classes would now have to be taught in the old way once again: instead of being able to write a program, compile it, and instantly see the result, students would have to type it out onto punched cards, deliver it to the computer center, then return the next day — if they were lucky — to see if it had actually worked. And rinse and repeat, ad nauseam.

Bowles saw the P-Kit as a possible solution to his woes, a chance to get a proper development environment back into the hands of his students. He would let the administrators have their mainframe, and try to get Pascal running on smaller, cheaper machines. Unlike his colleague in Switzerland, Bowles could even in 1974 see where the new generation of microchip technology was leading; he realized that desktop computers were on the horizon. While he would initially implement his P-Machine on a PDP-11 minicomputer, he could already envision the day when every student would have her own private computer to program. Thus the portability of the P-Machine was key to his project.

By mid-1976, Bowles and a small group of students had already come a long way, with a working PDP-11 Pascal environment that they had begun using to teach introduction-to-programming classes. (It replaced, not without controversy from traditionalists, the older FORTRAN-based curriculum.) And they had not just created a clone of Wirth’s compiler but had gone far beyond it. They had expanded greatly upon Wirth’s relatively stripped-down language, adding everyday conveniences such as better string handling and easier file access. Around it they had built what amounted to an entire Pascal operating system, all running in virtualized P-Code, similar to the interactive BASIC environments of the time but better; the text editor, for instance, was something of a marvel for its time. When UCSD Pascal began to spread, their tinkering with Pascal raised a fair amount of ire from some quarters, not least from Wirth himself, a pedantic sort who regarded the language in its original form as perfect, with everything it needed and nothing it didn’t. Still, UCSD Pascal would soon supersede Wirth’s own implementation as the standard, most notably inspiring what became the commercial juggernaut Turbo Pascal. And whatever his misgivings at the time, Wirth has since come to acknowledge the enormous role UCSD Pascal played in popularizing his design in the PC world.

In July of 1976, Bowles and his students brought their Pascal up for the first time on a microcomputer, a Z80-based system built from a kit. He describes this moment as a “revelation”; all of the software his team had created for the PDP-11 version just worked, immediately, with no changes whatsoever.

Bowles had begun his project to provide a better tool for his students, but it was soon obvious that UCSD Pascal had commercial potential outside the university. The first partnership was with a tiny startup called Terak, who had developed a workstation called the 8510/a that was basically a stripped-down, semi-compatible clone of the PDP-11 minicomputer with added bitmapped graphics capabilities that were stunning for their time. Having been first implemented on a PDP-11, UCSD Pascal was of course a natural fit there. Bowles went on the road with Terak to demonstrate the system, where the programming environment combined with the machine’s display capabilities inspired “gasps of amazement.” Terak machines soon became the standard platforms for running UCSD Pascal at UCSD itself.

The greenest pastures, however, beckoned from the burgeoning PC market. Microcomputer users and programmers were already as early as 1977 trying to reckon with the incompatible machines on the market: the TRS-80, Apple II, and Commodore PET, not to mention the dozens of kit and boutique computers, were all incompatible with one another, fragmenting an already tiny software market. Yes, these machines all ran BASIC, but each hosted a subtly different version of the language, crafted in response to the hardware’s capabilities and the whims of the machine’s manufacturer, enough to guarantee that all but the simplest BASIC programs would need some translation to move from platform to platform.

Every programmer had to deal with this reality, whether by coding in BASIC and translating as necessary (as did the general-purpose magazines, who often published type-in listings footnoted with the changes needed to run the program on platforms X, Y, and Z), developing some sort of portable game engine (as did Scott Adams, Automated Simulations, and Infocom), or just focusing on a single platform and hoping it was enough to sustain a business (as did the Apple II-specific supercoders I mentioned in my last post). The UCSD system offered another solution. Beginning in 1978, Bowles and his students started a quasi-business, selling $15 copies of the system for S-100-bus PCs to anyone who asked. Those machines, descendants of the original Altair and generally either built from kits or provided by boutique manufacturers, inhabited a somewhat different ecosystem than the friendlier, more mass-market trinity of 1977, being the domain of the hardcore technical set that made up the core of Byte magazine’s readership and, increasingly, business users. (Tellingly, games, which dominated early software on the trinity of 1977, were few and far between on these machines.) For all that, however, there were quite a lot of them out there, and quite a lot of their owners were eager to experiment with UCSD Pascal in lieu of their normal operating system of choice, Digital Research’s CP/M.

Bowles first met Steve Jobs and Steve Wozniak at the very West Coast Computer Faire at which they unveiled the Apple II. Jobs was already eyeing the education market, eager to forge “respectable” ties for Apple and to bring professional-level software to the platform, and so the two men remained in intermittent contact. The relationship was given a boost the following year when Bill Atkinson, a UCSD alum, came to work for Apple. Atkinson, a computer engineer whose word held a great deal of sway with the non-technical Jobs, was greatly enamored of UCSD Pascal, convinced it would be a great boost for the Apple II. Still, that remained a problematic proposition at this point. Although UCSD Pascal had been designed to run on tiny machines in comparison to its inspiration, there were inevitable limits. The system was designed for a machine with at least 64 K of memory. By contrast, the first Apple IIs could be purchased with as little as 4 K, and seldom exceeded 16 K. It was an obvious nonstarter. And so the relationship between Apple and UCSD remained just talk for the moment.

In mid-1979 Apple introduced the dramatically improved Apple II Plus, which generally sold with what was taken at the time as the machine’s maximum possible memory of 48 K; the 6502 CPU used in the Apple II can only address 64 K at one time, of which 16 K was used by the ROM memory that hosted the machine’s BASIC-based operating system. They were getting close, but an Apple II version of UCSD Pascal still seemed out of reach. As it turned out, however, they were close enough that some clever hacking could get the job done.

The UCSD system was designed to take over the machine completely, which meant that the 16 K of BASIC ROM would be superfluous whenever the new operating system was running. Apple therefore came up with a new expansion card (reason to bless Woz’s insistence on having all those slots again!) containing 16 K of RAM. The user could choose whether the CPU addressed this RAM (for running UCSD Pascal) or the standard 16 K of ROM (for running other software). Just like that, they had their 64 K machine.
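
Here is a rough model of that bank-switching trick in Pascal, following the simplified 48 K + 16 K picture above. It is purely illustrative: the real Language Card is controlled by memory-mapped soft switches, and its actual bank layout is more intricate than a single Boolean flag.

program LanguageCardModel;
{ A toy model of the Language Card idea: the 6502 sees a single 64 K     }
{ address space; the bottom 48 K is always RAM, while reads in the top   }
{ 16 K are served either by the BASIC ROM or by the RAM on the card,     }
{ depending on a "soft switch." Everything here is simplified for        }
{ illustration; the real hardware's switches and banking differ.         }
const
  BankStart = 49152;                       { 48 K: where the switched bank begins }

var
  mainRAM    : array[0..49151] of Byte;    { the 48 K always present       }
  cardRAM    : array[0..16383] of Byte;    { the 16 K on the Language Card }
  basicROM   : array[0..16383] of Byte;    { the BASIC/monitor ROM         }
  useCardRAM : Boolean;                    { the "soft switch"             }

function ReadByte(address: LongInt): Byte;
begin
  if address < BankStart then
    ReadByte := mainRAM[address]
  else if useCardRAM then
    ReadByte := cardRAM[address - BankStart]    { Pascal system sees RAM }
  else
    ReadByte := basicROM[address - BankStart];  { BASIC sees its ROM     }
end;

begin
  basicROM[0] := 76;                       { pretend this is ROM content     }
  cardRAM[0]  := 42;                       { ...and this is Pascal's RAM     }

  useCardRAM := False;                     { power-on state: ROM visible     }
  writeln('ROM bank : ', ReadByte(BankStart));

  useCardRAM := True;                      { booting Apple Pascal flips the switch }
  writeln('Card bank: ', ReadByte(BankStart))
end.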

The UCSD Pascal software, renamed Apple Pascal, was sold together with this “Language Card” as a single package for about $500, beginning shortly after the arrival of the Apple II Plus. It transformed just about everything about the Apple II; even its disks used their own format, unreadable under the normal Apple II environment. It would not be an exaggeration to say that an Apple II equipped with Apple Pascal was a completely new and different machine from Woz’s original creation, with a personality all its own. The inability to exchange programs and data with users who hadn’t purchased the system was, undeniably, a drawback. On the plus side, however, the user got easily the most advanced development environment available on any microcomputer of this era. Not only did she have access to the Pascal language in lieu of BASIC, but Apple and UCSD worked in quite a lot of extensions to take advantage of the Apple II’s unique bitmapped graphics capabilities, borrowing from the older Terak implementation. I’ll come back to that a couple of posts from now, when I demonstrate a concrete example of Apple Pascal in action. And we’ll start on the story that will lead to that next time.

 
 


A Tale of Three Languages

If I had to name one winner amongst the thousands of programming languages that have been created over the last 60 years, the obvious choice would be C. Developed by Dennis Ritchie in the early 1970s as the foundation of the Unix operating system, C remains one of the most commonly used languages even today; the Linux kernel, for example, is implemented in C. Yet that only tells part of the story. Dozens of other languages have borrowed the basic syntax of C while adding bells and whistles of their own. This group includes the most commonly used languages in computing, such as Java, C++, and Perl; quickly growing upstarts like C# and Objective-C; and plenty of more esoteric domain-specific languages, like the interactive-fiction development system TADS 3. For a whole generation of programmers, C’s syntax, so cryptic and off-putting to newcomers with its parentheses, curly braces, and general preference for mathematical symbols in lieu of words, has become a sort of comfort food. “This new language can’t be that bad,” we think. “After all, it’s really just C with…” (“these new things called classes that hold functions as well as variables”; “a bunch of libraries to make text-adventure development easy”; etc.). For obvious reasons, “C-like syntax” always seems to be near the top of the feature list of new languages that have it. (And for those that don’t: congratulations on sticking to your aesthetic guns, but you’ve chosen a much harder road to acceptance. Good luck!)

When we jump back 30 years, however, we find in this domain of computing, as in so many others, a very different situation. At that time C was the standard language of the fast-growing institutional operating system Unix, but it had yet to really escape the Unix ghetto and join the top tier of languages in the computing world at large. Microcomputers boasted only a few experimental and/or stripped-down C compilers, and the language was seldom even granted a mention when magazines like Byte did one of their periodic surveys of the state of programming. The biggest buzz in Byte went instead to Niklaus Wirth’s Pascal, named after the 17th-century scientist, inventor, and philosopher who invented an early mechanical calculating machine. Even after C arrived on PCs in strength, Pascal, pushed along by Borland’s magnificent Turbo Pascal development environment, would compete with and often even overshadow it as the language of choice for serious programmers. Only in the mid-1990s did C finally and definitively win the war and become the inescapable standard we all know today.

While I was researching this post I came across an article by Chip Weems of Oregon State University. I found it kind of fascinating, so much so that I’m going to quote from it at some length.

In the early days of the computer industry, the most expensive part of owning a computer was the machine itself. Of all the components in such a machine, the memory was the most costly because of the number of parts it contained. Early computer memories were thus small: 16 K was considered large and 64 K could only be found in supercomputers. All of this meant that programs had to take advantage of what little space was available.

On the other hand, programs had to be written to run as quickly as possible in order to make the most efficient use of the large computers. Of course these two goals almost always contradicted each other, which led to the concept of the speed versus space tradeoff. Programmers were prized for the ability to write tricky, efficient code which took advantage of special idiosyncrasies in the machine. Supercoders were in vogue.

Fortunately, hardware evolved and became less expensive. Large memories and high speed became common features of most systems. Suddenly people discovered that speed and space were no longer important. In fact roles had reversed and hardware had become the least expensive part of owning a computer.

The costliest part of owning a computer today is programming it. With the advent of less expensive hardware, the emphasis has shifted from speed versus space to a new tradeoff: programmer cost versus machine cost. The new goal is to make the most efficient use of a programmer’s time, and program efficiency has become less important — it’s easier to add more hardware.

If you know something about the history of the PC, you’re probably nodding along right now, as we’re seemingly on very familiar ground. If you’re a crotchety old timer, you may even be mulling over a rant about programmers today who solve all their problems just by throwing more hardware at them. (When old programmers talk about the metaphorical equivalent of having to walk both ways uphill in the snow to school every morning, they’re actually pretty much telling the truth…) Early Apple II magazines featured fawning profiles of fast-graphics programming maestros like Nasir Gebelli (so famous everyone just knew him by his first name), Bill Budge, and Ken Williams, the very picture of Weems’s “supercoders” who wrote “tricky, efficient code which took advantage of special idiosyncrasies in the machine.” If no one, including themselves after a few weeks, could quite understand how their programs did their magic, well, so be it. It certainly added to the mystique.

Yet here’s the surprising thing: Weems is not describing PC history at all. In fact, the article predates the fame of the aforementioned three wizards. It appeared in the August 1978 issue of Byte, and is describing the evolution of programming to that point on the big institutional systems. Which leads us to the realization that the history of the PC is in many ways a repeat of the history of institutional computing. The earliest PCs being far too primitive to support the relatively sophisticated programming languages and operating systems of the institutional world, early microcomputer aficionados were thrown back into a much earlier era, the same that Weems is bidding a not-very-fond farewell to above. Like the punk-rock movement that was exploding just as the trinity of 1977 hit the market, they ripped it up and started again, only here by necessity rather than choice. This explains the reaction, somewhere between bemused contempt and horror, that so many in the institutional world had to the tiny new machines. (Remember the unofficial motto of MIT’s Dynamic Modeling Group: “We hate micros!”) It also explains the fact that I’m constantly forced to go delving into the history of computing on the big machines to explain developments there that belatedly made it to PCs. In fact, I’m going to do that again, and just very quickly look at how institutional programming got to the relatively sophisticated place at which it had arrived by the time the PC entered the scene.

The processor at the heart of any computer can ultimately understand only the most simplistic of instructions. Said instructions, known as “opcodes,” do such things as moving a single number from memory into a register of the processor; or adding a number already stored in a register to another; or putting the result from an operation back into memory. Each opcode is identified by a unique sequence of bits, or on/off switches. Thus the first programmers were literally bit flippers, laboriously entering long sequences of 1s and 0s by hand. (If they were lucky, that is; some early machines could only be programmed by physically rewiring their internals.) Assemblers were soon developed, which allowed programmers to replace 1s and 0s with unique textual identifiers: “STO” to store a number in memory, “ADD” to do the obvious, etc. After writing her program using this system of mnemonics, the programmer just had to pass it through the assembler to generate the 1s and 0s the computer needed. That was certainly an improvement, but still, programming a computer at the processor level is very time consuming. Sure, it’s efficient in that the computer does what you tell it to and only what you tell it to, but it’s also extremely tedious. It’s very difficult to write a program of real complexity from so far down in the weeds, hard to keep track of the forest of what you’re trying to accomplish when surrounded by trees made up of endless low-level STOs and ADDs. And even if you’re a supercoder who’s up to the task, good luck figuring out what you’ve done after you’ve slept on it. And as for others figuring it out… forget about it.
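
To make the tedium concrete, here is a trivial computation with the low-level housekeeping it implies spelled out alongside it. The mnemonics in the comments are generic illustrations rather than any real processor’s instruction set; the single statement at the bottom is the kind of thing the high-level languages described next were invented to let programmers write.

program AbstractionGap;
{ Illustrative only: the mnemonics in the comments below are generic, }
{ not those of any particular CPU or assembler.                        }
var
  price, tax, total : Integer;
begin
  price := 120;
  tax   := 30;

  { At the opcode level, the next line dissolves into housekeeping:   }
  {    LOD price    ; fetch 'price' from memory into a register       }
  {    ADD tax      ; add 'tax' to the register                       }
  {    STO total    ; store the register's contents back to memory    }
  { ...with the programmer left to remember which address is which.   }
  total := price + tax;

  writeln('Total: ', total)
end.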

And so people started to develop high-level languages that would let them program at a much greater level of abstraction from the hardware, to focus more on the logic of what they were trying to achieve and less on which byte they’d stuck where 2000 opcodes ago. The first really complete example of such a language arrived in 1954. We’ve actually met it before on this blog: FORTRAN, the language Will Crowther chose to code the original Adventure more than 20 years later. LISP, the ancestor of MIT’s MDL and Infocom’s ZIL, arrived in 1958. COBOL, language of a million dull-but-necessary IBM mainframe business programs, appeared in 1959. And they just kept coming from there, right up until the present.

As the 1960s wore on, increasing numbers of people who were not engineers or programmers were beginning to make use of computers, often logging on to timesharing systems where they could work interactively in lieu of the older batch-processing model, in which the computer was fed some data, did its magic, and output some result at the other end without ever interacting with the user in between. While they certainly represented a huge step above assembly language, the early high-level languages were still somewhat difficult for the novice to pick up. In addition, they were compiled languages, meaning that the programmer wrote and saved them as plain text files, then passed them through another program called a compiler which, much like an assembler, turned them into native code. That was all well and good for the professionals, but what about the students and other amateurs who also deserved a chance to experience the wonder of having a machine do their bidding? For them, a group of computer scientists at Dartmouth College led by John Kemeny and Thomas Kurtz developed the Beginner’s All-Purpose Symbolic Instruction Code: BASIC. It first appeared on Dartmouth’s systems in 1964.

As its name would imply, BASIC was designed to be easy for the beginner to pick up. Another aspect, somewhat less recognized, is that it was designed for the new generation of time-sharing systems: BASIC was interactive. In fact, it wasn’t just a standalone language, but rather a complete computing environment which the would-be programmer logged into. Within this environment, there was no separation between statements used to accomplish something immediately, like LISTing a program or LOADing one, and those used within the program itself. Entering PRINT "JIMMY" prints JIMMY to the screen immediately; put a line number in front of it (10 PRINT "JIMMY") and it’s part of a program. BASIC gave the programmer a chance to play. Rather than having to type in and save a complete program, then run it through a compiler hoping she hadn’t made any typos, and finally run the result, she could tinker with a line or two, run her program to see what happened, ad infinitum. Heck, if she wasn’t sure how a given statement worked or whether it was valid, she could just type it in by itself and see what happened. Because BASIC programs were interpreted at run-time rather than compiled beforehand into native code, they necessarily ran much, much slower than programs written in other languages. But still, for the simple experiments BASIC was designed to facilitate that wasn’t really so awful. It’s not like anyone was going to try to program anything all that elaborate in BASIC… was it?

Well, here’s where it all starts to get problematic. For very simple programs, BASIC is pretty straightforward and readable, easy to understand and fun to just play with. Take everybody’s first program:

10 PRINT "JIMMY RULES!"
20 GOTO 10

It’s pretty obvious even to someone who’s never seen a line of code before what that does, it took me about 15 seconds to type it in and run it, and in response I get to watch it fill the screen with my propaganda for as long as I care to look at it. Compared to any other contemporary language, the effort-to-reward ratio is off the charts. The trouble only starts if we try to implement something really substantial. By way of example, let’s jump to a much later time and have a look at the first few lines of the dungeon-delving program in Richard Garriott’s Ultima:

0 ONERR GOTO 9900
10 POKE 105, PEEK (30720): POKE 106, PEEK (30721): POKE 107, PEEK (30722): POKE 108, PEEK (30723): POKE 109, PEEK (30724): POKE 110, PEEK (30725): POKE 111, PEEK (30726): POKE 112, PEEK (30727)
20 PRINT "BLOAD SET"; INT (IN / 2 + .6)
30 T1 = 0:T2 = 0:T3 = 0:T4 = 0:T5 = 0:T6 = 0:T7 = 0:T8 = 0:T9 = 0: POKE - 16301,0: POKE - 16297,0: POKE - 16300,0: POKE - 16304,0: SCALE= 1: ROT= 0: HCOLOR= 3: DEF FN PN(RA) = DNG%(PX + DX * RA,PY + DY * RA)
152 DEF FN MX(MN) = DN%(MX(MN) + XX,MY(MN)): DEF FN MY(MN) = DN%(MX(MN),MY(MN) + YY): DEF FN L(RA) = DNG%(PX + DX * RA + DY,PY + DY * RA - DX) - INT (DN%(PX + DX * RA + DY,PY + DY * RA - DX) / 100) * 100: DEF FN R(RA) = DNG%(PX + DX * RA - DY,PY + DY * RA + DX) - INT (DN%(PX + DX * RA - DY,PY + DY * RA + DX) / 100) * 100
190 IF PX = 0 OR PY = 0 THEN PX = 1:PY = 1:DX = 0:DY = 1:HP = 0: GOSUB 500
195 GOSUB 600: GOSUB 300: GOTO 1000
300 HGR :DIS = 0: HCOLOR= 3

Yes, given the entire program so that you could figure out where all those line-number references actually lead, you could theoretically find the relatively simple logic veiled behind all this tangled syntax, but would you really want to? It’s not much fun trying to sort out where all those GOTOs and GOSUBs actually get you, nor what all those cryptic one- and two-letter variables refer to. And because BASIC is interpreted, comments use precious memory, meaning that a program of real complexity like the one above will probably have to dispense with even this aid. (Granted, Garriott was also likely not interested in advertising to his competition how his program’s logic worked…)

Now, everyone can probably agree that BASIC was often stretched by programmers like Garriott beyond its ostensible purpose, resulting in near gibberish like the above. When you have a choice between BASIC and assembly language, and you don’t know assembly language, necessity becomes the mother of invention. Yet even if we take BASIC at its word and assume it was intended as a beginner’s language, to let a student play around with this programming thing and get an idea of how it works and whether it’s for her, opinions are divided about its worth. One school of thought says that, yes, BASIC’s deficiencies for more complex programming tasks are obvious, but if used as a primer or taster of sorts for programming it has its place. Another is not only not convinced by that argument but downright outraged by BASIC, seeing it as an incubator of generations of awful programmers.

Niklaus Wirth was an early member of the latter group. Indeed, it was largely in reaction to BASIC’s deficiencies that he developed Pascal between 1968 and 1970. He never mentions BASIC by name, but his justification for Pascal in the Pascal User Manual and Report makes it pretty obvious of which language he’s thinking.

The desire for a new language for the purpose of teaching programming is due to my dissatisfaction with the presently used major languages whose features and constructs too often cannot be explained logically and convincingly and which too often defy systematic reasoning. Along with this dissatisfaction goes my conviction that the language in which the student is taught to express his ideas profoundly influences his habits of thought and invention, and that the disorder governing these languages imposes itself into the programming style of the students.

There is of course plenty of reason to be cautious with the introduction of yet another programming language, and the objection against teaching programming in a language which is not widely used and accepted has undoubtedly some justification, at least based on short-term commercial reasoning. However, the choice of a language for teaching based on its widespread acceptance and availability, together with the fact that the language most taught is thereafter going to be the one most widely used, forms the safest recipe for stagnation in a subject of such profound pedagogical influence. I consider it therefore well worthwhile to make an effort to break this vicious cycle.

If BASIC, at least once a program gets beyond a certain level of complexity, seems to actively resist every effort to make one’s code readable and maintainable, Pascal swings hard in the opposite direction. “You’re going to structure your code properly,” it tells the programmer, “or I’m just not going to let you compile it at all.” (Yes, Pascal, unlike BASIC, is generally a compiled language.) Okay, that’s not quite true; it’s possible to write ugly code in any language, just as it’s at least theoretically possible to write well-structured BASIC. But certainly Pascal works hard to enforce what Wirth sees as proper programming habits. The opinions of others on Wirth’s approach have, inevitably, varied, some seeing Pascal and its descendants as to this day the only really elegant programming languages ever created and others seeing them as straitjackets that enforce a certain inflexible structural vision that just isn’t appropriate for every program or programmer.

For my part, I don’t agree with Wirth and so many others that BASIC automatically ruins every programmer who comes into contact with it; people are more flexible than that, I think. And I see a bit of both sides of the Pascal argument, finding myself alternately awed by its structural rigorousness and infuriated by it every time I’ve dabbled in the language. Since I seem to be fond of music analogies today: Pascal will let you write a beautiful programming symphony, but it won’t let you swing or improvise. Still, when compared to a typical BASIC listing or, God forbid, an assembly-language program, Pascal’s clarity is enchanting. Considering the alternatives, which mostly consisted of BASIC, assembly, and (on some platforms) creaky old FORTRAN, it’s not hard to see why Byte and many others in the early PC world saw it as the next big thing, a possible successor to BASIC as the lingua franca of the microcomputer world. Here’s the heart of a roulette game implemented in Pascal, taken from another article in that August 1978 issue:

begin
     askhowmany (players);
     for player := 1 to players do
          getname (player, playerlist);
     askif (yes);
     if yes then printinstructions;
     playersleft := true;
     while playersleft do
          begin
          for player := 1 to players do
          repeat
               getbet (player, playerlist);
               scanbet (player, playerlist);
               checkbet (player, playerlist, valid);
          until valid;
          determine (winningnumber);
          for player := 1 to players do
               begin
               if quit (player, playerlist)
                    then processquit (player, playerlist, players, playersleft);
               if pass (player, playerlist)
                    then processpass (player, playerlist);
               if bet (player, playerlist)
                    then processbet (player, playerlist, winningnumber)
               end
          end
end.

The ideal of Wirth was to create a programming language capable of supporting self-commenting code: code so clean and readable that comments became superfluous, that the code itself was little more difficult to follow than a simple textual description of the program’s logic. He perhaps didn’t quite get there, but the program above is nevertheless surprisingly understandable even if you’ve never seen Pascal before. Just to make it clear, here’s the pseudocode summary which the code extract above used as its model:

Begin program.
     Ask how many players.
     For as many players as there are,
          Get each player's name.
     Ask if instructions are needed.
     If yes, output the instructions.
     While there are still any players left,
          For as many players as there are,
               Repeat until a valid bet is obtained:
                    Get the player's bet.
                    Scan the bet.
                    Check bet for validity.
          Determine the winning number.
          For as many players as there are,
               If player quit, process the quit.
               If player passed, process the pass.
               If player bet,
                    Determine whether player won or lost.
                    Process this accordingly.
End program.

Yet Pascal’s readability and by extension maintainability was only part of the reason that Byte was so excited. We’ll look at the other next time… and yes, this tangent will eventually lead us back to games.

 
 


Robot War

If you want to understand how different the computer world of 1981 was from that of today, a good place to look is the reception of Silas Warner’s programming game, Robot War. It received big, splashy feature articles in Softalk, the early flagship of the Apple II community, as well as in the premiere issue of Computer Gaming World, one of the first two computer magazines unabashedly dedicated just to games. (Softline, a spinoff of Softalk, edged it out by just a hair for the prize of first.) By the only metric that ultimately matters to a publisher, it even bounced on and off of Softalk’s monthly lists of the top 30 Apple II software bestsellers for a year or so. All this for a “game” that involved a text editor, a compiler, and a debugger — a game that sounds suspiciously like work to modern ears. But in 1981 the computer world was still a comparatively tiny one, and virtually everyone involved knew at least a little bit of programming as a prerequisite to getting anything at all done; most home computers booted directly into BASIC, after all. More abstractly, even the hardcore gamers (not that that term had yet been invented) were as fascinated with the technology used to facilitate their obsession as they were with games as entities unto themselves. In this milieu, a programming game didn’t sound like quite such an oxymoron.

Robot War was by far the most ambitious game Silas had yet created for Muse, a dramatic departure from simple BASIC excursions like Escape! Not coincidentally, it was also the first he created after finally agreeing to come to Muse Software full time in 1980. He did already have a leg up on it to start, for Robot War on the Apple II is basically the same game as the version he had programmed for the PLATO system a few years before. It does, however, offer some enhancements, most notably the ability for up to five robots to battle one another at one time in a huge free for all; the original had offered only one-on-one matches.

While they didn’t approach software development as systematically as did Infocom, Muse had developed some unusually sophisticated tools by this stage to make assembly-language coding a less arduous task. At a time when other shops seemed to accept perpetual reinventing of wheels as a way of life, Muse had also gotten quite good at reusing its code wherever possible. Large chunks of Robot War, for instance, are lifted straight out of Super-Text, the company’s word processor. One edits one’s source code in a streamlined version of Super-Text itself. Employing one of the strangest criteria for recommending a game ever, Softalk noted that playing Robot War makes “learning the real Super-Text a snap.”

The other way that Super-Text helped beget Robot War is more surprising, and gives me the opportunity to deliver one of my little lessons in technology — specifically, computer display technology.

The screen on which you’re reading this is almost certainly a bitmapped display. This means that it is seen by the computer as just a grid of colored pixels. The text you’re reading is mapped onto that grid in software, “drawn” there like an unusually intricate picture. This is a cool thing for many reasons. For one, it allows you to customize things like the size, shape, and style of the default font to suit your own preferences. For another, it allows writers like me to play with different typefaces to get our message across. It’s a particularly nice thing for word processing, where a document on the screen can be rendered as a mirror image of what will appear when you click “Print.” (We call this what-you-see-is-what-you-get, or WYSIWYG.) It’s also got some disadvantages, however: rendering all of that text letter by letter and pixel by pixel consumes a lot of processing power, and storing that huge grid of pixels consumes a lot of memory. The screen on which I’m writing this is 1920 X 1200 pixels. At the 4 bytes per pixel needed to display all the colors a modern computer offers — another, separate issue — that amounts to about 9 MB. That number is fairly negligible on a machine with 4 GB of memory like this one, but on one with just 48 K like the Apple II, even accounting for the need to store vastly fewer colors and a vastly lower resolution, it can be a problem. So, the standard, default display mode of the Apple II is a textual screen, stored not as a grid of individual pixels but as a set of cells, into each of which can be inserted a single letter or a graphical glyph — essentially a “letter” showing a little picture which can be combined with others to draw frames, diagrams, or simple images. Rendering these characters to the screen is then handled by the display hardware rather than involving any software at all. This approach has plenty of disadvantages: one is limited to a single font; said font must be mono- rather than variable-spaced; changing the font’s size or style is right out; etc. On the plus side, it’s fast and it doesn’t use too much memory. In fact, the Apple II was unique among the trinity of 1977 in offering a bitmapped graphics mode at all; the TRS-80 and PET offered only character-oriented displays. The Apple II’s Hi-Res mode is much of the reason it stood out so amongst its peers as the Cadillac of early microcomputers.
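
The arithmetic is easy to check. Here is a quick sketch; the modern-display numbers are the ones from the paragraph above, and the Apple II figure of roughly 1 K for its 40 X 24 text page is a well-documented property of the machine.

program DisplayMemory;
{ The arithmetic behind the bitmapped-versus-character-cell comparison. }
const
  ModernWidth   = 1920;
  ModernHeight  = 1200;
  BytesPerPixel = 4;          { 32-bit color }

  TextColumns   = 40;         { Apple II text mode: 40 x 24 cells, }
  TextRows      = 24;         { one byte per cell                  }
var
  bitmapBytes, textBytes : LongInt;
begin
  bitmapBytes := LongInt(ModernWidth) * ModernHeight * BytesPerPixel;
  textBytes   := TextColumns * TextRows;

  writeln('Bitmapped 1920 x 1200 at 4 bytes/pixel: ', bitmapBytes, ' bytes (roughly 9 MB)');
  writeln('Character-cell 40 x 24 at 1 byte/cell : ', textBytes, ' bytes (about 1 K with the unused "screen holes")')
end.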

One would naturally expect a word processor — about the most text-oriented application imaginable — to work in the Apple II’s text mode. As Ed Zaron of Muse was developing Super-Text, however, he had to confront a problem familiar to makers and users of much early Apple II application software. The Apple II’s text mode could display just 40 big, blocky characters per line. Amongst other reasons, this design decision had been made because the machine’s standard video feed was just an everyday, fairly low-quality analog television signal. Trying to display more, smaller characters, especially on the television many users chose in lieu of a proper monitor, would just result in a bleeding, unreadable mess. The problem for word processing and other business applications was that a standard typewritten page has 80 characters to a line. Thus, even though the word processor was never going to be anything close to WYSIWYG given the other limitations of the Apple II’s display, it was even harder than it needed to be for the user to visualize what a document would look like in hard copy, what with each hardcopy line spread over two lines onscreen. Zaron therefore considered whether he might be able to use Hi-Res mode to display 80 characters of text, at least for those whose displays were good enough to make it readable.

The problem with that idea, however, was that the Apple II has no built-in ability to render text to the Hi-Res screen. One can paint individual pixels, even draw lines and simple shapes, but there is no facility to tell the machine to, say, draw the letter “A” at position 100 X 100. Zaron therefore spent considerable time developing a Hi-Res character generator of his own — a program that could essentially render little pictures representing each glyph to the screen on command, just as your display works today. Zaron and Muse ultimately decided the idea just wasn’t viable for Super-Text. Even with a good monitor it was just too ugly to work with for long periods of time given the color idiosyncrasies of Hi-Res mode, and it was unacceptably slow for entering and editing text. Besides, by that time something called the Sup’R’Terminal was available from a company called M&R Enterprises. This was a card which plugged into one of the Apple II’s internal slots (bless Woz’s foresight!) and solved the problem by adding an entirely new, alternate display system that could render 80 columns of text quickly and cleanly. It also solved another problem for word processors in being able to render lower-case as well as upper-case text (the original Super-Text had had to distinguish upper case from lower case by highlighting the former in reverse video). Soon enough an array of similar products would be available, eventually including some from Apple itself. So, Zaron’s character generator went on the shelf…

…to be picked up by Silas Warner and incorporated into Robot War. While plenty of games made use of the Apple II’s split-screen mode which allowed a few lines of conventional text to appear at the bottom of a Hi-Res display, the screenshot above is one of the few examples in early Apple II software of dynamically updated text being incorporated directly into a Hi-Res display, thanks to Zaron’s aborted Super-Text character generator. Sometimes software development works in crazy ways.

Even if you aren’t a programmer, the idea of Robot War — of programming your own custom robot, then sending him off to do battle with others while you watch — is just, well, neat. That neatness is a big reason that I can’t resist taking some time to talk about it here, where we’re usually all about the ludic narrative. Of course, given the technological constraints Silas was working with there are inevitable limits to the concept. You don’t get to design your robot in the physical sense; each is identical in size, in the damage it can absorb, in acceleration and braking, and in having a single rotatable radar dish it can use to “see” and a single rotatable gun it can use to shoot. The programming language you work with is extremely primitive even by the standards of BASIC, with just a bare few commands. Actual operation of the robot is accomplished by reading from and writing to a handful of registers. That can seem an odd way to program today — it took me a while to wrap my mind around it again after spending recent months up to my eyebrows in Java — but in 1981, when much microcomputer programming involved PEEKing and POKEing memory locations and hardware registers directly, it probably felt more immediately familiar.

Here’s a quick example, one of the five simple robots that come with the game.

;SAMPLE ROBOT 'RANDOM'

] 250 TO RANDOM ;INITIALIZE RANDOM — 250 MAXIMUM
]
]START
] DAMAGE TO D ;SAVE CURRENT DAMAGE
]
]SCAN
] IF DAMAGE # D GOTO MOVE ;TEST — MOVE IF HURT
] AIM+17 TO AIM ;CHANGE AIM IF OK
]
]SPOT
] AIM TO RADAR ;LINE RADAR WITH LAUNCHER
] IF RADAR>0 GOTO SCAN ;CONTINUE SCAN IF NO ROBOT
] 0-RADAR TO SHOT ;CONVERT RADAR READING TO DISTANCE AND FIRE
] GOTO SPOT ;CHECK IF ROBOT STILL THERE
]
]MOVE
] RANDOM TO H
] RANDOM TO V ;PICK RANDOM PLACE TO GO
]
]MOVEX
] H-X*100 TO SPEEDX ;TRAVEL TO NEW X POSITION
] IF H-X>10 GOTO MOVEX ;TEST X POSITION
] IF H-X<-10 GOTO MOVEX
] 0 TO SPEEDX ;STOP HORIZONTAL MOVEMENT
]
]MOVEY
] V-Y*100 TO SPEEDY ;TRAVEL TO NEW Y POSITION
] IF V-Y>10 GOTO MOVEY ;TEST Y POSITION
] IF V-Y<-10 GOTO MOVEY
] 0 TO SPEEDY ;STOP VERTICAL MOVEMENT
] GOTO START ;START SCANNING AGAIN
]

Let’s just step through this quickly. We begin by plugging 250 into the RANDOM register, which tells the robot we will expect any random numbers we request to be in the range of 0 to 249. We store the value currently in the DAMAGE register (the amount of damage the robot has received) into a variable, D, for safekeeping. Immediately after, we test the DAMAGE register against the value we just stored; if the former is now less than the latter, we know we are taking fire. Let’s assume for the moment this is not the case. We therefore add 17 to the AIM register, which has the effect of rotating our gun 17 degrees around a 360-degree axis. We send a pulse out from our radar dish in the same direction that the gun is now facing. If the radar spots another robot, it will place a number representing the negation of its distance from us into the RADAR register; otherwise it places a 0 or a positive number there. (Yes, this seems needlessly unintuitive; Silas presumably had a good technical reason for doing it this way.) If we do find a robot, we fire the gun by placing the absolute value of the number stored in RADAR into the SHOT register. This fires a shell set to explode that distance away. We continue to shoot as long as the robot remains there. When it is there no longer, we go back to scanning the battlefield for targets.

Should we start taking fire, we need to move away. In accordance with our name, we decide this by storing random numbers from 0 to 249 — the battlefield is a grid of 256 X 256 — into two variables representing our desired new horizontal and vertical positions, H and V. What follows gets a little bit more tricky. The SPEEDX and SPEEDY registers represent horizontal and vertical movement respectively, with negative numbers representing movement to the left or upward and positive numbers to the right or downward. For an added wrinkle, we can only accelerate or decelerate 40 units per second, regardless of what we place in these registers. So, we’re figuring out the relative distance and direction of our goal from our current position, which we find by reading registers X and Y, then moving that way by manipulating SPEEDX and SPEEDY. Because this is not a terribly sophisticated robot, we move into position on each axis individually rather than trying to move on a diagonal. Once we have reached our (approximate) goal, we settle down to scan and shoot once more.

So, what you’re really doing here is writing an AI routine of the sort that someone making a game from scratch might program. If nothing else, that makes it a great training tool for a prospective game programmer. Although one can have some fun playing against the robots that come with it, Robot War is really meant to be a multiplayer game, where one places one’s creations up against those of others. It begs for some sort of tournament, and in fact that’s exactly what happened; Computer Gaming World was so enamored with Robot War that they sponsored a couple of tournaments in partnership with Muse. For each, several Apple IIs spent weeks in the basement of Muse’s office/store crunching through battles to determine an eventual champion. I was intrigued enough by the idea to consider proposing a tournament here with you, my gentle readers, but upon spending some time with the actual software I tend to think it’s just too crusty and awkward for modern sensibilities to garner enough interest. If you think I’m wrong, though, tell me about it in the comments or by email; if there’s real interest I’m happy to reconsider. Regardless, here’s the Apple II disk image and the manual for you to have a look at.

In common with another Silas Warner game of 1981, Robot War had a cultural impact far beyond what its sales figures might suggest. It was common enough even in 1981 for computer programs to model the real world, in the form of flight simulators, war games, etc. With Robot War, however, the influence eventually flowed in the opposite direction: hobbyists began building real fighting robots and pitting them against one another, beginning with an event called the “Critter Crunch” held in Denver in 1987. Today real-world robot combat leagues are kind of a big deal, with their matches often televised and given exposure that any number of human sports would kill to have. I can’t say all of this wouldn’t have started without Silas Warner’s game, but it’s perhaps more than just coincidence that two of the first sustained robot-combat leagues were called Robot Wars, as were a couple of the robot-combat television series (one of which, ironically, turned back into a videogame series). Even more definitive is the influence Robot War exerted on the programming games that followed it. The most obvious direct homage is Robot Battle, but there’s plenty of Robot War DNA in more mainstream efforts like MindRover, not to mention plenty of free hacker-oriented programming games which may or may not involve actual robots. And to think that Robot War was just Silas Warner’s second most influential game of a prodigious 1981…

We’ll get to that other game, which actually bears more directly on this blog’s usual obsessions, soon. First, though, I want to grab one of these other balls I’ve got in the air and check in with one of our old friends.

 
 


Silas Warner and Muse Software

Silas Warner was born in Chicago on August 18, 1949, the first and only child of Forrest and Ann Warner. Their family situation was fraught, with Ann and Silas allegedly suffering physically and mentally at the hands of Forrest. It’s a measure of how bad the situation was that, although they could never prove it, both believed Forrest had attempted to kill them by tampering with the brakes on Ann’s car when Silas was five. Shortly after, they fled Chicago to return to Ann’s home town of Bloomington, Indiana. With the support of her family, Ann earned a degree in education from Indiana University and began teaching. Silas had no contact with his father for the rest of his life.

Ann never remarried, but rather built her emotional world around Silas. She could happily talk for hours about her son, who she devoutly believed was “special,” destined for great things. As evidence, she claimed that he had already begun reading at the age of two. Later she would brag about his alleged perfect score on his SAT test, or his scholarship offers. She encouraged him to immerse himself in books and intellectual pursuits even as he physically grew up to be a veritable giant, almost seven feet in height and well over 300 pounds in weight. The portrait that emerges on a site offering reminiscences is of an intellectually prodigious and essentially good-hearted but — to put it mildly — socially challenged person. He often struck others as just a little bit sad. A cousin writes about playing on visits with the elaborate train set he’d constructed, but also says that “it was really hard to talk to him. He didn’t seem to know how to carry on a conversation or even really how to ‘play.’ I have to say I just felt sorry for him.” His mother didn’t help the situation by actively discouraging him from having much contact with even his cousins, whom she judged “not up to his caliber of intelligence.” With his social ineptitude, his weight, and the clothes that Ann made for him because she couldn’t purchase any big enough, Silas had a predictably rough time of it in high school. Even a flirtation with football only left him with an injury that would bother him for the rest of his life. On the other hand, his size was intimidating, and he could display a vicious temper when sufficiently roused; he knocked at least one bully unconscious.

Silas entered Indiana University’s physics program in 1966. (It’s a funny thing that so many hackers — Will Crowther and Ken Williams also among them — first entered university as physics majors in the days when computer-science programs and computer access in general weren’t so common. It must have something to do with being attracted to complex systems.) At university Silas continued his eccentric ways. A fellow student speaks of him “walking campus in his long black trench coat reading advanced chemistry and physics textbooks only inches from his face.” More surprisingly, he became “a reporter for the campus radio station, toting his portable reel-to-reel tape recorder gathering stories.”

He also discovered computers at Indiana University. In fact, he found a job working with them before he even graduated, dividing his senior year between his studies and a contract programming job developing accident-analysis software in COBOL for an IBM mainframe. After finishing his degree in 1970, he stayed at the university as an “undergraduate assistant,” an interface of sorts between the student body and the arcane world of the university’s computer systems. That put him in an ideal position when PLATO came to Indiana University.

I’ve had occasion to mention the PLATO system before on this blog when I described the earliest computerized adaptations of Dungeons and Dragons that were hosted there. I’ve also mentioned Control Data Corporation, who built the mainframe and custom graphical terminals that ran PLATO in addition to giving a young Ken Williams his entree into the computer industry. What I haven’t done, however, is describe the link between the two.

CDC’s co-founder and CEO through its rise, glory years, and eventual downfall in the 1980s at the hands of the new microcomputers was a man named Bill Norris, who refused to accept the then-fashionable business dogma that a corporation’s only duty to society was to maximize profits and shareholder value. An odd combination of shrewd businessman and dreamy idealist, he attempted to use CDC as a force for social good by opening factories in economically depressed areas and funding experimental wind farms amongst a multitude of other projects. Even the Control Data Institute that gave Ken Williams his start was something of a do-gooder project of Norris’s, founded to give bright kids without university credentials a chance to build a career in the computer industry as well as to provide a pool of inexpensive workers for CDC. At a time when even most of his fellow computer-industry executives saw the machines primarily as tools of business, he believed that they could also be a source of social good. He therefore signed CDC on to be the technological and industrial partner of the PLATO system in 1963, just three years after Donald Bitzer had produced the first proofs of concept at the University of Illinois. With steady funding from the National Science Foundation, PLATO grew rapidly from there, with much of its development taking place at a new independent entity, the Computer-Based Education Research Laboratory (CERL), which stood halfway between the business pole of the program (CDC) and the academic pole (the University of Illinois). It would be silly to claim that CDC had no legitimate business interest in PLATO; CERL and PLATO delivered a steady stream of innovative new technologies and ideas to the company. Still, the relationship also reflected Norris’s unique approach to business with a social conscience.

As I wrote in that earlier post, PLATO really came of age with the PLATO IV iteration in 1972, which brought graphical display terminals out of Illinois for the first time to hundreds of institutions spread around the country and, eventually, the world. One of the first of those institutions was Indiana University, where Silas helped to set up the first terminals. Soon he was not just administering the system but contributing major pieces of courseware and other software. For instance, he authored “HELP,” a standard tutorial and introduction to the system for new users, and a “massive lesson menu system named IUDEMO.”

PLATO programs — optimistically called “lessons” — were programmed in a language called TUTOR that was accessible to every user. This relatively easy-to-use language enabled much of the creativity of the PLATO community. It allowed educators and students with no knowledge of the vagaries of bits and bytes to design serviceable programs while also being powerful enough to create some surprisingly elaborate games, from dungeon crawls to flight simulators, board-game adaptations to shoot-em-ups. Many if not most of these games were multiplayer; you simply navigated to a “big board” of eager players, found a partner (or two, or more; some could support more than 50 simultaneous players, amounting to virtual worlds in their own right as well as games), and dived in. In addition to his more legitimate activities, Silas became deeply involved with this generally tolerated-if-not-encouraged side of PLATO. He helped John Daleske get started developing Empire, an early — possibly the first — multiplayer action game. Later, he developed his own variant of Empire, which he called Conquest. Another project was possibly the world’s first multiplayer flight simulator, called Air Race. On the theory that guns make everything more fun, Brand Fortner built from Air Race the multiplayer air-combat simulation Air Fight, which became one of PLATO’s biggest hits as well as one of its administrators’ biggest scourges; 50 or 60 active Air Fight players could bring PLATO’s million-dollar CDC mainframe to its knees.

CERL and CDC sometimes hired particularly promising PLATO programmers to work for them. That’s how Silas came to leave Indiana University at last in 1976, moving to Baltimore to work for Commercial Credit, a consumer lending company that was, oddly enough, wholly owned by CDC. Silas came in to develop various in-house training programs on PLATO, such as “Sales-Call Simulator,” an “educational adventure.” While he was about it, he also created his first hit game, Robot War. Each player would program the AI routines for her own robot, using a language Silas devised for the purpose that was essentially a subset of the TUTOR language that virtually every serious PLATO user already had at least some familiarity with. Then the robots would go at it, while the players watched and hoped. Robot War was the first of its kind, the first of a whole genre of programming games that remain a beloved if obscure preoccupation of some hackers to this day. (I’ll have much more to say about Robot War soon).

Silas became particular friends with two other Commercial Credit employees: Ed Zaron, a programmer in the credit scoring department; and Jim Black, an accountant in the billing department. Zaron describes his introduction to the always eccentric Silas:

Silas is one of a kind. I’ll never forget first meeting him. Silas is a big guy, maybe 6’8″ and say 320lbs. Here’s the picture, he was walking down mainstreet in downtown Baltimore wearing a huge, sagging sports coat. He had a car battery (yes, car battery!) in one pocket, a CB radio in the other pocket and a whip antenna stuck down the back of his jacket. He was occasionally talking on the CB as he held two magazines open in one hand. One of Silas’s favorite things was to read two mags simultaneously, kinda one inside the other, flipping back and forth.

This was just about the time that the microcomputer trinity of 1977 arrived. Silas, Zaron, and Black all became very early Apple II adopters; Silas, for instance, ended up with serial number 234. Like Scott Adams and others with the programming skills to make the machines do something at least ostensibly fun or useful, the three decided to form a company — Muse Software. Their first products were, like most early Apple II software, programmed in BASIC.

Muse debuted with two games. There was Zaron’s Tank Wars, a multiplayer arcade-style game similar to the Atari 2600’s original Combat. And there was a maze game by Silas, which presented its world to the player via a first-person, three-dimensional rendering, possibly the first such ever crafted for a microcomputer. The concept was, however, old hat on PLATO, where similar so-called “maze runners” were a popular genre. Indeed, Muse’s PLATO experiences would prove to be a fecund source of inspiration, as they continued to adapt ideas born of that system’s flourishing games community for the little micros. Within a few months Silas had expanded his maze game to create Escape!, the game which inspired Richard Garriott to make 3D dungeons a part of Akalabeth and, by extension, the Ultimas. Escape! killed productivity inside Apple itself, as described by David Gordon, the man responsible for introducing it there:

On one of my first trips to Apple Computer in 1978 I took with me a simple maze game called Escape by a fledgling company called Muse. Apple had 50 or 60 employees at the time and I created a work loss of approximately 60 man weeks because everyone at Apple was playing that game instead of working. They were charting out the mazes and trying to solve the puzzle.

Muse’s simple programs, which they pumped out at a prodigious rate and packaged themselves using art provided by Black’s girlfriend, proved to be surprisingly popular. Weary of juggling day jobs with evenings spent copying cassettes and weekends spent touring the East Coast trade-show circuit, Zaron and Black soon quit Commercial Credit to make a real, entrepreneurial go of it, although a more cautious Silas stayed on there until 1980. With public-relations skills like this, maybe it was for the best that Silas didn’t have so much time for the shows:

I remember in the early days of MUSE, I attended a “Computer Show” in Philadelphia with my dad and Silas. He had just written that Voice/Music program for the Apple II, which attracted a pretty big crowd. The big thing then was selling and trading programs recorded on cassette tapes. Hilarious! Anyway, it was great to see Silas pitching the programs and working with people. You really got to see what they were made of when he would stop talking, reach into his nose and pull out a gigantic booger, and then wipe it on the underside of the nearest table or chair, and continue with the demonstration. He was really great.

Muse’s early catalogs contained a shambolic line of programs typical of other early software houses like Adventure International and On-Line Systems. In addition to the games, there were drawing programs, programming utilities, educational drills, text editors. By 1980, however, disks and the spacious 48 K of memory that came in the Apple II Plus were becoming the accepted standard, and customers were beginning to expect more of their software. Muse created a development system of its own that allowed them to write fast assembly-language programs while still having access to some of the conveniences and structure of higher-level languages. With Silas on board full time at last, they also moved from their first office, a cramped space above a gun store, to lease a two-story building for themselves in downtown Baltimore. The top floor housed the business and software-development arms, which now consisted of half a dozen employees, while the lower floor became the “Muse Computer Center,” a retail computer store selling Muse’s products as well as those of others. One non-obvious advantage of operating a store was that it allowed Muse to order products at dealer prices, making it easy to keep up with the competition’s latest releases in the fast-moving game of one-upmanship that the Apple II software market was becoming.

In that spirit: Muse’s two major products of 1980 both advanced the state of the art. Zaron’s Super-Text was the most powerful and usable of the early Apple II word processors. And Silas’s The Voice let the user, incredibly, record her own voice and play it back, after a fashion, on the Apple II’s primitive sound hardware. This was absolutely unprecedented stuff. Both programs would play a big role in Silas’s two landmark games of the following year, about which more in my next post.

 

Posted on January 25, 2012 in Digital Antiquaria, Interactive Fiction

 
