
Sequels in Strategy Gaming, Part 3: Heroes of Might and Magic II

New World Computing’s Heroes of Might and Magic II: The Succession Wars is different from the strategy-game sequels we’ve previously examined in this series in a couple of important ways. For one thing, it followed much more quickly on the heels of its predecessor: the first Heroes shipped in September of 1995, this follow-up just over one year later. This means that it doesn’t represent as dramatic a purely technological leap as do Civilization II and Master of Orion II; Heroes I as well was able to take advantage of SVGA graphics, CD-ROM, and all the other transformations the average home computer underwent during the first half of the 1990s. But the Heroes series as a whole is also conceptually different from the likes of Civilization and Master of Orion. It’s a smaller-scale affair, built around human-crafted rather than procedurally-generated maps, with more overt, pre-scripted narrative elements. All of these factors cause Heroes II to blur the boundaries between the fiction-driven and the systems-driven sequel. Its campaign — which is, as we’ll see, only one part of what it has to offer — is presented as a direct continuation of the story, such as it was, of Heroes I. At the same time, though, it strikes me as safe to say that no one bought the sequel out of a burning desire to find out what happens to the sons of Lord Morglin Ironfist, the star of the first game’s sketchy campaign. They rather bought it because they wanted a game that did what Heroes I had done, only even better. And fortunately for them, this is exactly what they got.

Heroes II doesn’t revamp its predecessor to the point of feeling like a different game entirely, as Master of Orion II arguably does. The scant amount of time separating it from its inspiration wouldn’t have allowed for that even had its creators wished it. It would surely not have appeared so quickly — if, indeed, it would ever have appeared at all — absent the new trend of strategy-game sequels. But as it was, Jon Van Caneghem, the founder of New World Computing and the mastermind of Heroes I and II, approached it as he had his earlier Might and Magic CRPG series, which had seen five installments by the time he (temporarily) shifted his focus to strategy gaming. “We weren’t making a sequel for the first time,” says the game’s executive producer Mark Caldwell. “So we did as we always did. Take ideas we couldn’t use or didn’t have time to implement in the previous game and work them into the next one. Designing a computer game, at least at [New World], was always about iterating. Just start, get something working, then get feedback and iterate.” In the case of Heroes II, it was a matter of capitalizing on the first game’s strengths — some of which hadn’t been entirely clear to its own makers until it was released and gamers everywhere had fallen in love with it — and punching up its relatively few weaknesses.

Ironically, the aspects of Heroes I that people seemed to appreciate most of all were those that caused it to most resemble the Might and Magic CRPGs, whose name it had borrowed more for marketing purposes than out of any earnest belief that it was some sort of continuation of that line. Strategy designers at this stage were still in the process of learning how the inclusion of individuals with CRPG-like names and statistics, plus a CRPG-like opportunity to level them up as they gained experience, could allow an often impersonal-feeling style of game to forge a closer emotional connection with its players. The premier examples before Heroes I were X-COM, which had the uncanny ability to make the player’s squad of grizzled alien-fighting soldiers feel like family, and Master of Magic, whose own fantasy heroes proved to be so memorable that they almost stole the show, much to the surprise of that game’s designer. Likewise, Jon Van Caneghem had never intended for the heroes, of whom you can recruit up to eight to your cause in Heroes of Might and Magic, to fill as big a place in players’ hearts as they did. He originally thought he was making “a pure strategy game that was meant to play and feel like chess.” But a rudimentary leveling system along with names and character portraits for the heroes, all borrowed to some extent from his even earlier strategy game King’s Bounty, sneaked in anyway, and gamers loved it. The wise course was clearly to double down on the heroes in Heroes II.

Thus we get here a much more fleshed-out system for building up the capabilities of our fantasy subordinates, and building up our emotional bond with them in the process. The spells they can cast, in combat and elsewhere, constitute the single most extensively revamped part of the game. Not only are there many more of them, but they’ve been slotted into a magic system complex enough for a full-fledged CRPG, with spell books and spell points and all the other trimmings. And there are now fourteen secondary skills outside the magic system for the heroes to learn and improve as they level up, from Archery (increases the damage a hero’s minions do in ranged combat) to Wisdom (allows the hero to learn higher-level spells), from Ballistics (increases the damage done by the hero’s catapults during town sieges) to Scouting (lets the hero see farther when moving around the map).

Another thing that CRPGs had and strategy games usually lacked was a strong element of story. Heroes I did little more than gesture in that direction; its campaign was a set of generic scenarios that were tied together only by the few sentences of almost equally generic text in a dialog box that introduced each of them. The campaign in Heroes II, on the other hand, goes much further. It’s the story of a war between the brothers Roland and Archibald, good and evil respectively, the sons of the protagonist of Heroes I. You can choose whose side you wish to fight on at the outset, and even have the opportunity to switch sides midstream, among other meta-level decisions. Some effects and artifacts carry over from scenario to scenario during the campaign, giving the whole experience that much more of a sense of continuity. And the interstices between the scenarios are filled with illustrations and voice-over narration. The campaign isn’t The Lord of the Rings by any means — if you’re like me, you’ll have forgotten everything about the story roughly one hour after finishing it — but it more than serves its purpose while you’re playing.

Instead of just a few lines of text, each scenario in the campaign game is introduced this time by some lovely pixel art and a well-acted voice-over.

These are the major, obvious improvements, but they’re joined by a host of smaller ones that do just as much in the aggregate to make Heroes II an even better, richer game. When you go into battle, the field of action has been made bigger — or, put another way, the hexes on which the combatants stand have been made smaller, giving space for twice as many of them on the screen. This results in engagements that feel less cramped, both physically and tactically; ranged weapons especially really come into their own when given more room to roam, as it were. The strategic maps too can be larger, up to four times so — or for that matter smaller, again up to four times so. This creates the potential for scenarios with wildly different personalities, from vast open-world epics to claustrophobic cage matches.

The tactical battlefields are larger now, with a richer variety of spells to employ in ingenious combinations.

And then, inevitably, there’s simply more stuff everywhere you turn. There are two new factions to play as or against, making a total of six in all; more types of humans, demi-humans, and monsters to recruit and fight against; more types of locations to visit on the maps; more buildings to construct in your towns; way more cool weapons and artifacts to discover and give to your heroes; more and more varying standalone scenarios to play in addition to the campaign.

One of the two new factions is the necromancers, who make the returning warlocks seem cute and cuddly by comparison. Necromancer characters start with, appropriately enough, the “necromancy” skill, which gives them the potential to raise huge armies of skeletons from their opponents’ corpses after they’ve vanquished them. This has been called unbalancing, and it probably is in at least some situations, but it’s also a heck of a lot of fun, not to mention the key to beating a few of the most difficult scenarios.

The other new faction is the wizards. They can eventually recruit lightning-flinging titans, who are, along with the warlocks’ black dragons, the most potent single units in the game.

After Heroes II was released, New World delegated the task of making an expansion pack to Cyberlore Studios, an outfit with an uncanny knack for playing well with others’ intellectual property. (At the time, Cyberlore had just created a well-received expansion pack for Blizzard’s Warcraft II.) Heroes of Might and Magic II: The Price of Loyalty, the result of Cyberlore’s efforts, comes complete with not one but four new campaigns, each presented with the same lavishness as the chronicles of Roland and Archibald, along with still more new creatures, locations, artifacts, and standalone scenarios. All are welcome.

There are even riddles. Sigh… you can’t win them all, I guess. I feel about riddles in games the way Indiana Jones does about snakes in archaeological sites.

But wait, I can hear you saying: didn’t you just complain in those articles about Civilization II and Master of Orion II that just adding more stuff doesn’t automatically or even usually make a game better? I did indeed, and I’ve been thinking about why my reaction to Heroes II is so different. Some of it undoubtedly comes down to purely personal preferences. The hard truth is that I’ve always been more attracted to Civilization as an idea than as an actual game; impressive as Civilization I was in the context of its time, I’ll go to my grave insisting that there are even tighter, even more playable designs than that one in the Sid Meier canon, such as Pirates! and Railroad Tycoon. I’m less hesitant to proclaim Master of Orion I a near-perfect strategy masterpiece, but my extreme admiration for it only makes me unhappy with the sequel, which removed some of the things I liked best about the original in favor of new complexities that I find less innovative and less compelling. I genuinely love Heroes I as well — and thus love the sequel even more for not trying to reinvent this particular wheel, for trying only to make it glide along that much more smoothly.

I do think I can put my finger on some objective reasons why Heroes II manages to add so much to its predecessor’s template without adding any more tedium. It’s starting from a much sparser base, for one thing; Heroes I is a pretty darn simple beast as computer strategy games of the 1990s go. The places where Heroes II really slathers on the new features — in the realms of character development, narrative, and to some extent tactical combat — are precisely those where its predecessor feels most under-developed. The rest of the new stuff, for all its quantity, adds variety more so than mechanical complexity or playing time. A complete game of Heroes II doesn’t take significantly longer to play than a complete game of Heroes I (unless you’re playing on one of those new epic-sized maps, of course). That’s because you won’t even see most of the new stuff in any given scenario. Heroes II gives its scenario designers a toolbox with many more bits and pieces to choose from, so that the small subset of them you see as a player each time out is always fresh and surprising. You have to play an awful lot of scenarios to exhaust all this game has to offer. It, on the other hand, will never exhaust you with fiddly details.

Anyway, suffice to say that I love Heroes II dearly. I’ve without a doubt spent more hours with it than any other game I’ve written about on this site to date. One reason for that is that my wife, who would never be caught dead playing a game like Civilization II or Master of Orion II, likes this one almost as much as I do. We’ve whiled away many a winter evening in multiplayer games, sitting side by side on the sofa with our laptops. (If that isn’t a portrait of the modern condition, I don’t know what is…) Sure, Heroes II is a bit slow to play this way by contemporary standards, being turn-based, and with consecutive rather than simultaneous turns at that, but that’s what good tunes on the stereo are for, isn’t it? We don’t like to fight each other, so we prefer the scenarios that let us cooperate — another welcome new feature. Or, failing that, we just agree to play until we’re the only two factions left standing.

What makes Heroes II such a great game in the eyes of both of us, and such a superb example of an iterative sequel done well? Simply put, everything that was fun in the first game is even more fun in the sequel. It still combines military strategy with the twin joys of exploration and character development in a way that I’ve never seen bettered. (My wife, bless her heart, is more interested in poking her head into every nook and cranny of a map and accessorizing her heroes like they’ve won a gift certificate to Lord & Taylor than she is in actually taking out the enemy factions, which means that’s usually down to me…) The strengthened narrative elements, not only between but within scenarios — a system of triggers now allows the scenario designer to advance the story even as you play — only make the stew that much richer. Meanwhile the whole game is exquisitely polished, showing in its interface’s every nuance the hours and hours of testing and iterating that went into it before its release.

In this respect and many others, the strengths of Heroes II are the same as those of Heroes I. Both, for example, manage to dodge some of the usual problems of grand-strategy games by setting their sights somewhat lower than the 4X likes of Civilization and Master of Orion. There is no research tree here, meaning that the place where 4X strategizing has a tendency to become most rote is neatly sidestepped. Very, very little is rote about Heroes; many of its human-designed maps are consciously crafted to force you to abandon your conventional thinking. (The downside of this is a certain puzzle-like quality to some of the most difficult scenarios — a One True Way to Win that you must discover through repeated attempts and repeated failures — but even here, the thrill of figuring them out outweighs the pain if you ask me.) Although the problem of the long, anticlimactic ending — that stretch of time after you know you’re going to win — isn’t entirely absent, some scenarios do have alternative victory conditions, and the fact that most of them can be played in a few hours at most from start to finish helps as well. The game never gets overly bogged down by tedious micromanagement, thanks to some sagacious limits that have been put in place, most notably the maximum of eight heroes you’re allowed to recruit, meaning that you can never have more than eight armies in the field. (It’s notable and commendable that New World resisted the temptation that must surely have existed to raise this limit in Heroes II.) The artificial intelligence of your computer opponents isn’t great by any means, but somehow even that doesn’t feel so annoying here; if the more difficult scenarios must still become that way by pitting your human cleverness against silicon-controlled hordes that vastly outnumber you, there are at least always stated reasons to hand for the disparity in a narrative-driven game like this one.

Also like its predecessor, Heroes II is an outlier among the hit games of the late 1990s, being turn-based rather than real-time, and relying on 2D pixel art rather than 3D graphics. Jon Van Caneghem has revealed in interviews that he actually did come alarmingly close to chasing both of those trends, but gave up on them for reasons having more to do with budgetary and time constraints than any sort of purist design ideology. For my part, I can only thank the heavens that such practicalities forced him to keep it old-school in the end. Heroes II still looks great today, which would probably not be the case if it were presented in jaggy 1990s 3D. New World’s artists had a distinct style, one that also marked the Might and Magic CRPG series: light, colorful, whimsical, and unabashedly cartoon-like, in contrast to the darker-hued, ultra-violent aesthetic that pervaded so much of the industry in the post-DOOM era. Heroes II is perhaps slightly murkier in tone and tint than the first game, but it remains a warming ray of sunshine when stood up next to its contemporaries, who always seem to be trying too hard to be an epic saga, man. Its more whimsical touches never lose their charm: the vampires who go Blahhh! like Count Chocula when they make an attack, the medusae who slink around the battlefield like supermodels on the catwalk. Whatever else you can say about it, you can never accuse Heroes of Might and Magic of taking itself too seriously.

But there is one more way that Heroes II improves on Heroes I, and it is in some senses the most important of them all. The sequel includes a scenario construction kit, the very same tool that was used to build the official maps; the only thing missing is a way to make the cut scenes that separate the campaign scenarios. It came at the perfect time to give Heroes II a vastly longer life than it otherwise would have enjoyed, even with all of its other merits.

The idea of gaming construction kits was already a venerable one by the time of Heroes II‘s release. Electronic Arts made it something of their claim to fame in their early years, with products like Pinball Construction Set, Adventure Construction Set, and Racing Destruction Set. Meanwhile EA’s affiliated label Strategic Simulations had a Wargame Construction Set and Unlimited Adventures (the latter being a way of making new scenarios for the company’s beloved Gold Box CRPG engine). But all of these products were hampered somewhat by the problem of what you the buyer were really to do with a new creation into which you had poured your imagination, talent, and time. You could share it among your immediate circle of friends, assuming they had all bought (or pirated) the same game you had, maybe even upload it to a bulletin board or two or to a commercial online service like CompuServe, but doing so only let you reach a tiny cross-section of the people who might be able and willing to play it. And this in turn led to you asking yourself an unavoidable question: is this thing I want to make really worth the effort if it will hardly get played?

The World Wide Web changed all that at a stroke, as it did so much else in computing and gaming. The rise of a free and open, easily navigable Internet meant that you could now share your creation with everyone who owned the same base game you had. And so gaming construction kits of all stripes suddenly became much more appealing, and were at last allowed to begin fulfilling their potential. Heroes of Might and Magic II is a prime case in point.

A bustling community of amateur Heroes II designers sprang up on the Internet after the game’s release, to stretch it in all sorts of delightful ways that New World had never anticipated. The best of the thousands of scenarios they produced are so boldly innovative as to make the official ones seem a bit dull and workmanlike by comparison. For example, “Colossal Cavern” lives up to its classic text-adventure namesake by re-imagining Heroes II as a game of dungeon delving and puzzle solving rather than strategic conquest. “Go Ask Alice,” by contrast, turns it into a game of chess with living pieces, like in Alice in Wonderland. “The Road Home” is a desperate chase across a sprawling map with enemy armies that outnumber you by an order of magnitude hot on your heels. And “Agent of Heaven” is a full campaign — one of a surprising number created by enterprising fans — that lets you live out ancient Chinese history, from the age of Confucius through the rise of the Qin and Han dynasties; it’s spread over seven scenarios, with lengthy journal entries to read between and within them as you go along.

The scenario editor has its limits as a vehicle for storytelling, but it goes farther than you might expect. Text boxes like these feature in many scenarios, and not only as a way of introducing them. The designer can set them to appear when certain conditions are fulfilled, such as a location visited for the first time by the player or a given number of days gone by. In practice, the most narratively ambitious scenarios tend to be brittle and to go off the rails from a storytelling perspective as soon as you do something in the wrong order, but one can’t help but be impressed by the lengths to which some fans went. Call it the triumph of hope over experience…

As the size and creative enthusiasm of its fan community will attest, Heroes II was hugely successful in commercial terms, leaving marketers everywhere shaking their heads at its ability to be so whilst bucking the trends toward real-time gameplay and 3D graphics. I can give you no hard numbers on its sales, but anecdotal and circumstantial evidence alone would place it not too far outside the ballpark of Civilization II‘s sales of 3 million copies. Certainly its critical reception was nothing short of rapturous; Computer Gaming World magazine pronounced it “nearly perfect,” “a five-star package that will suck any strategy gamer into [a] black hole of addictive fun.” The expansion too garnered heaps of justified praise and stellar sales when it arrived some nine months after the base game. The only loser in the equation was Heroes I, a charming little game in its own right that was rendered instantly superfluous by the superior sequel in the eyes of most gamers.

Personally, though, I’m still tempted to recommend that you start with Heroes I and take the long way home, through the whole of one of the best series in the history of gaming. Then again, time is not infinite, and mileages do vary. The fact is that this series tickles my sweet spots with uncanny precision. Old man that I’m fast becoming, I prefer its leisurely turn-based gameplay to the frenetic pace of real-time strategy. At the same time, though, I do appreciate that it plays quickly in comparison to a 4X game. I love its use of human-crafted scenarios, which I almost always prefer to procedurally-generated content, regardless of context. And of course, as a dyed-in-the-wool narratological gamer, I love the elements of story and character-building that it incorporates so well.

So, come to think of it, this might not be such a bad place to start with Heroes of Might and Magic after all. Or to finish, for that matter — if only it weren’t for Heroes III. Now there’s a story in itself…

(Sources: Retro Gamer 239; Computer Gaming World of February 1997 and September 1997; XRDS: The ACM Magazine for Students of Summer 2017. Online sources include Matt Barton’s interview with Jon Van Caneghem.

Heroes of Might and Magic II is available for digital purchase on GOG.com, in a “Gold” edition that includes the expansion pack.

And here’s a special treat for those of you who’ve made it all the way down here to read the fine print. I’ve put together a zip file of all of the Heroes II scenarios from a “Millennium” edition of the first three Heroes games that was released in 1999. It includes a generous selection of fan-made scenarios, curated for quality. You’ll also find the “Agent of Heaven” campaign mentioned above, which, unlike the three other fan-made scenarios mentioned above, wasn’t a part of the Millennium edition. To access the new scenarios, rename the folder “MAPS” in your Heroes II installation directory to something else for safekeeping, then unzip the downloaded archive into the installation directory. The next time you start Heroes II, you should find all of the new scenarios available through the standard “New Game” menu. Note that some of the more narratively ambitious new scenarios feature supplemental materials, found in the “campaigns” and “Journals” folders. Have fun!)
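For anyone who would rather script those two steps than do them by hand, here is a minimal Python sketch of the same swap. The installation path and the archive’s filename below are placeholders of my own invention, so adjust them to match wherever your copy of Heroes II actually lives and whatever you named the download.

```python
# A minimal sketch of the scenario swap described above.
# GAME_DIR and ARCHIVE are hypothetical placeholders, not the real names;
# point them at your own Heroes II installation and downloaded zip file.
from pathlib import Path
import zipfile

GAME_DIR = Path("C:/GAMES/HEROES2")            # hypothetical install directory
ARCHIVE = Path("heroes2_fan_scenarios.zip")    # hypothetical archive name

def swap_in_fan_scenarios(game_dir: Path, archive: Path) -> None:
    maps_dir = game_dir / "MAPS"
    backup_dir = game_dir / "MAPS_ORIGINAL"
    # Rename the stock MAPS folder for safekeeping, as suggested above.
    if maps_dir.exists() and not backup_dir.exists():
        maps_dir.rename(backup_dir)
    # Unzip the archive into the installation directory; it is assumed to
    # contain its own MAPS folder, plus the "campaigns" and "Journals" extras.
    with zipfile.ZipFile(archive) as zf:
        zf.extractall(game_dir)

swap_in_fan_scenarios(GAME_DIR, ARCHIVE)
```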

 

Sequels in Strategy Gaming, Part 2: Master of Orion II

MicroProse had just published Master of Magic, the second grand-strategy game from the Austin, Texas-based studio SimTex, when SimCity 2000 made the world safe for numbered strategy sequels. After a quick palate cleanser in the form of a computerized version of the Avalon Hill board game 1830: Railroads & Robber Barons, Steve Barcia and the rest of the SimTex crew turned their attention to a sequel to Master of Orion, their 1993 space opera that was already widely revered as one of the finest ever examples of its breed.

Originally announced as a product for the Christmas of 1995, it took the sequel one full year longer than that to actually appear. And this was, it must be said, all for the better. Master of Magic had been a rather brilliant piece of game design whose commercial prospects had been all but destroyed by its premature release in a woefully buggy state. To their credit, SimTex patched it, patched it, and then patched it some more in the months that followed, until it had realized most of its immense potential as a game. But by then the damage had been done, and what might have been an era-defining strategy game like Civilization — or, indeed, the first Master of Orion — had been consigned to the status of a cult classic. On the bright side, MicroProse did at least learn a lesson from this debacle: Master of Orion II: Battle at Antares was given the time it needed to become its best self. The game that shipped just in time for the Christmas of 1996 was polished on a surface level, whilst being relatively well-balanced and mostly bug-free under the hood.

Gamers’ expectations had changed in some very significant ways in the three years since its predecessor’s release, and not generally to said predecessor’s benefit. The industry had now completed the transition from VGA graphics, usually running at a resolution of 320 × 200, to SVGA, with its resolutions of 640 × 480 or even more. The qualitative difference is even bigger than the numbers alone suggest. Seen from the perspective of today, the jump to SVGA strikes me as the moment when game graphics stop looking undeniably old, when they can, in the best cases at any rate, look perfectly attractive and even contemporary. Unfortunately, Master of Orion I was caught on the wrong side of this dividing line; a 1993 game like it tended to look far uglier in 1996 than, say, a 1996 game would in 1999.

So, the first and most obvious upgrade in Master of Orion II was a thoroughgoing SVGA facelift. The contrast is truly night and day when you stand the two games up side by side; the older one looks painfully pixelated and blurry, the newer one crisp and sharp, so much so that it’s hard to believe that only three years separate them. But the differences at the interface level are more than just cosmetic. Master of Orion II‘s presentation also reflects the faster processor and larger memory of the typical 1996 computer, as well as an emerging belief in this post-Windows 95 era that the interface of even a complex strategy game aimed at the hardcore ought to be welcoming, intuitive, and to whatever extent possible self-explanatory. The one we see here is a little marvel, perfectly laid out, with everything in what one intuitively feels to be its right place, with a helpful explanation never any farther away than a right click on whatever you have a question about. It takes advantage of all of the types of manipulation that are possible with a mouse — in particular, it sports some of the cleverest use of drag-and-drop yet seen in a game to this point. In short, everything just works the way you think it ought to work, which is just about the finest compliment you can give to a user interface. Master of Orion I, for all that it did the best it could with the tools at its disposal in 1993, feels slow, jerky, and clumsy by comparison — not to mention ugly.

The home screen of Master of Orion I

…and its equivalent in Master of Orion II. One of the many benefits of a higher resolution is that even the “Huge” galaxy I’ve chosen to play in here now fits onto a single screen.

If Master of Orion II had attempted to be nothing other than a more attractive, playable version of its antecedent, plenty of the original game’s fans would doubtless have welcomed it on that basis alone. In fact, one is initially tempted to believe that this is where its ambitions end. When we go to set up a new game, what we find is pretty much what we would imagine seeing in just such a workmanlike upgrade. Once again, we’re off to conquer a procedurally generated galaxy of whatever size we like, from Small to Huge, while anywhere from two to eight other alien races are attempting to do the same. Sure, there are a few more races to play as or against this time, a new option to play as a custom race with strengths and weaknesses of our own choosing, and a few other new wrinkles here and there, but nothing really astonishing. For example, we do have the option of playing against other real people over a network now, but that was becoming par for the course in this post-DOOM era, when just about every game was expected to offer some sort of networked multiplayer support, and could expect to be dinged by the critics if it didn’t. So, we feel ourselves to be in thoroughly familiar territory when the game proper begins, greeting us with that familiar field of stars, representing yet another galaxy waiting to be explored and conquered.

Master of Orion II‘s complete disconnection from the real world can be an advantage: it can stereotype like crazy when it comes to the different races, thereby making each of them very distinct and memorable. None of us have to feel guilty for hating the Darloks for the gang of low-down, backstabbing, spying blackguards they are. If Civilization tried to paint its nationalities with such a broad brush, it would be… problematic.

But when we click on our home star, we get our first shock: we see that each star now has multiple planets instead of the single one we’re used to being presented with in the name of abstraction and simplicity. Then we realize that the simple slider bars governing each planetary colony’s output have been replaced by a much more elaborate management screen, where we decide what proportion of our population will work on food production (a commodity we never even had to worry about before), on industrial production, and on research. And we soon learn that now we have to construct each individual upgrade we wish our colony to take advantage of by slotting it into a build queue that owes more to Master of Magic — and by extension to that game’s strong influence Civilization — than it does to Master of Orion I.

By the middle and late game, your options for building stuff can begin to overwhelm; by now you’re managing dozens (or more) of individual colonies, each with its own screen like this. The game does offer an “auto-build” option, but it rarely makes smart choices; you can kiss your chances of winning goodbye if you use it on any but the easiest couple of difficulty levels. It would be wonderful if you could set up default build queues of your own and drag and drop them onto colonies, but the game’s interest in automation doesn’t extend this far.

This theme of superficial similarities obscuring much greater complexity will remain the dominant one. The mechanics of Master of Orion II are actually derived as much from Master of Magic and Civilization as from Master of Orion I. It is, that is to say, nowhere near such a straightforward extension of its forerunner as Civilization II is. It’s rather a whole new game, with whole new approaches in several places. Whereas the original Master of Orion was completely comfortable with high-level abstraction, the sequel’s natural instinct is to drill down into the details of everything it can. Does this make it better? Let’s table that question for just a moment, and look at some of the other ways in which the game has changed and stayed the same.

The old research system, which allowed you to make progress in six different fields at once by manipulating a set of proportional sliders, has been replaced by one where you can research just one technology at a time, like in Civilization. It’s one of the few places where the second game is less self-consciously “realistic” than the first; the scientific establishment of most real space-faring societies will presumably be able to walk and chew gum at the same time. But, in harking back so clearly to Civilization rather than to its own predecessor, it says much about where Steve Barcia’s head was at as he was putting this game together.

Master of Orion I injected some entropy into its systems by giving you the opportunity to research only a randomized subset of the full technology tree, forcing you to think on your feet and play the hand you were given. The sequel divides the full ladder of Progress into groupings of one to three technologies that are always the same, and lets you choose one technology from each group — and only one — for yourself, rather than making the choice for you. You still can’t research everything, in other words, but now it’s you who decides what does get researched. (This assumes that you aren’t playing a race with the “Creative” ability, which lets you gain access to all available technologies each step of the way, badly unbalancing the game in the process.)

The research screen in a game that’s pretty far along. We can choose to research in just one of the eight categories at a time, and must choose just one technology within that category. The others are lost to us, unless we can trade for or steal them from another race.

We’re on more familiar ground when it comes to our spaceships and all that involves them. Once again, we can design our own ships using all of the fancy technologies our scientists have recently invented, and once again we can command them ourselves in tactical battles that don’t depart all that much from what we saw in the first game. That said, even here there are some fresh complications. There’s a new “command point” system that makes the number of fleets we can field dependent on the communications infrastructure we’ve built in our empire, while now we also need to build “freighters” to move food from our bread-basket planets to those focused more on industry or research. Another new wrinkle here is the addition of “leaders,” individuals who come along to offer us their services from time to time. They’re the equivalent of Master of Magic‘s heroes, to the extent that they even level up CRPG-style over time, although they wind up being vastly less consequential and memorable than they were in that game.

Leaders for hire show up from time to time, but you never develop the bonds with them that you do with Master of Magic‘s heroes. That’s a pity; done differently, leaders might have added some emotional interest to a game that can feel a bit dry.

The last major facet of the game after colony, research, and ship management is your relationship with the other aliens you eventually encounter. Here again, we’re on fairly familiar ground, with trade treaties, declarations of war and peace and alliance, and spying for purposes of information theft or sabotage all being possible and, on the more advanced difficulty levels, necessary. We have three ways of winning the game, which is one more than in Master of Orion I. As before, we can simply exterminate all of the other empires, or we can win enough of them over through friendship or intimidation that they vote to make us the supreme leader of a Galactic Council. But we can now also travel to a different dimension and defeat a mysterious alien race called the Antarans that live there, whereupon all of the races back in our home dimension will recognize us as the superior beings we’ve just proved ourselves to be. Here there are more echoes of Master of Magic — specifically, of that game’s two planes of Arcanus and Myrror and the dimensional gates that link them together.

The workings of the Galactic Council vote are virtually unchanged from Master of Orion I.

What to make of this motley blend, which I would call approximately 50 percent Master of Orion I, 25 percent Civilization, and 25 percent Master of Magic? First, let me tell you what most fans of grand strategy think. Then, I’ll give you my own contrarian take on it.

The verdict of the masses is clear: Master of Orion II is one of the most beloved and influential strategy games of all time. As popular in the latter 1990s as any grand-strategy game not called Civilization, it’s still widely played today — much more so, I would reckon, than the likes of its contemporary Civilization II. (Certainly Master of Orion II looks far less dated today by virtue of not running under Windows and thus not using the Windows 3 widgets — to say nothing of those oh-so-1990s live-action video clips Civilization II featured.) It’s often described as the archetypal strategic space opera, the Platonic ideal which every new space-based grand-strategy game must either imitate or kick against (or a little of both). And why not? Having received several patches back in the day to correct the few issues in its first release, it’s finely balanced (that “Creative” ability aside — and even it has been made more expensive than it used to be), rich in content, and reasonably attractive to look at even today. And on top of all that there’s a gob-smackingly good interface that hardly seems dated at all. What’s not to like?

Well… a few things, in this humble writer’s opinion. For me, the acid test for additional complexity in a game is partially whether it leads to more “interesting choices,” as Sid Meier would put it, but even more whether it makes the fiction come more alive. (I am, after all, very much an experiential player, very much in tune with Meier’s description of the ideal game of Civilization as “an epic story.”) Without one or preferably both of these qualities, added complexity just leads to added tedium in my book. In the beginning, when I’m developing only one or two planets, I can make a solid case for Master of Orion II‘s hands-on approach to colony management using these criteria. But when one or two colonies become one or two dozen, then eventually one or two hundred, the negatives rather outweigh the positives for me. Any benefits you get out of dragging all those little colonists around manually live only at the margins, as it were. For the reality is that you’ll quickly come up with a standard, rote approach to building up each new planet, and see it through as thoughtlessly as you put your shirt on each morning. At most, you might have just a few default approaches, depending on whether you want the colony to focus on agriculture, industry, or research. Only in a rare crisis, or maybe in the rare case of a truly exceptional planet, will you mix it up all that much.

Master of Orion II strikes me as emblematic of a very specific era in strategy gaming, when advances in computing hardware weren’t redounding entirely to the benefit of game design. During the 1980s and early 1990s, designs were brutally constrained by slow processors and small memories; games like the first Master of Orion (as well as such earlier space operas as the 1983 SSG classic Reach for the Stars) were forced by their circumstance to boil things down to their essentials. By 1996, however, with processor speeds starting to be measured in the hundreds of megahertz and memory in the tens of megabytes, there was much more space for bells, whistles, and finicky knob-twiddling. We can see this in Civilization II, and we can see it even more in Master of Orion II. The problem, I want to say, was that computing technology had fallen into a sort of uncanny valley: the latest hardware could support a lot more mechanical, quantitative complexity, but wasn’t yet sufficient to implement more fundamental, qualitative changes, such as automation that allows the human player to intervene only where and when she will and improved artificial intelligence for the computer players. Tellingly, this last is the place where Master of Orion II has changed least. You still have the same tiny set of rudimentary diplomatic options, and the computer players remain as simple-minded and manipulable as ever. As with so many games of this era, the higher difficulty levels don’t make the computer players smarter; they only let them cheat more egregiously, giving them ever greater bonuses to all of the relevant numbers.

There are tantalizing hints that Steve Barcia had more revolutionary ambitions for Master of Orion II at one point in time. Alan Emrich, the Computer Gaming World scribe who coined the term “4X” (“Explore, Expand, Exploit, Exterminate”) for the first game and did so much to shape it as an early play-tester that a co-designer credit might not have been out of order, was still in touch with SimTex while they worked on the second. He states that Barcia originally “envisioned a ‘layered’ design approach so that people could focus on what they wanted to play. Unfortunately, that goal wasn’t reached.” Perhaps the team fell back on what was relatively easy to do when these ambitions proved too hard to realize, or perhaps at least part of the explanation lies in another event: fairly early in the game’s development, Barcia sold his studio to his publisher MicroProse, and accepted a more hands-off executive role at the parent company. From then on, the day-to-day design work on Master of Orion II largely fell to one Ken Burd, previously the lead programmer.

For whatever reason, Master of Orion II not only fails to advance the conceptual state of the art in grand strategy, but actually backpedals on some of the important innovations of its predecessor, which had already addressed some of the gameplay problems of the then-nascent 4X genre. I lament most of all the replacement of the first game’s unique approach to research with something much more typical of the genre. By giving you the possibility of researching only a limited subset of technologies, and not allowing you to dictate what that subset consists of, Master of Orion I forced you to improvise, to build your strategy around what your scientific establishment happened to be good at. (No beam-weapon technologies? Better learn to use missiles! Weak on spaceship-range-extending technologies to colonize faraway star systems? Better wring every last bit of potential out of those closer to home!) In doing so, it ensured that every single game you played was different. Master of Orion II, by contrast, strikes me as too amenable to rote, static strategizing that can be written up almost like an adventure-game walkthrough: set up your race like this, research this, this, and this, and then you have this, which will let you do this… every single time. Once you’ve come up with a set of standard operating procedures that works for you, you’ve done so forever. After that point, “it’s hard to lose Master of Orion II,” as the well-known game critic Tom Chick admitted in an otherwise glowing 2000 retrospective.

In the end, then, the sequel is a peculiar mix of craft and complacency. By no means can one call it just a re-skinning; it does depart significantly from its antecedent. And yet it does so in ways that actually make it stand out less rather than more from other grand-strategy games of its era, thanks to the anxiety of influence.

For influence, you see, can be a funny thing. Most creative pursuits should be and are a sort of dialog. Games especially have always built upon one another, with each worthy innovation — grandly conceptual or strictly granular, it really doesn’t matter — finding its way into other games that follow, quite possibly in a more evolved form; much of what I’ve written on this very site over the past decade and change constitutes an extended attempt to illustrate that process in action. Yet influence can prove a double-edged sword when it hardens into a stultifying conventional wisdom about how games ought to be. Back in 1973, the literary critic Harold Bloom coined the term “anxiety of influence” in reference to the gravitational pull that the great works of the past can exert on later writers, convincing them to cast aside their precious idiosyncrasies out of a perceived need to conform to the way things ought to be done in the world of letters. I would argue that Civilization‘s set of approaches has cast a similar pall over grand-strategy-game design. The first Master of Orion escaped its long shadow, having been well along already by the time Sid Meier’s own landmark game was released. But it’s just about the last grand-strategy game about which that can be said. Master of Orion II reverts to what had by 1996 become the mean: a predictable set of bits and bobs for the player to busy herself with, arranged in a comfortably predictable way.

When I think back to games of Master of Orion I, I remember the big events, the lightning invasions and deft diplomatic coups and unexpected discoveries. When I think back to games of Master of Orion II, I just picture a sea of data. When there are too many decisions, it’s hard to call any of them interesting. Then again, maybe it’s just me. I know that there are players who love complexity for its own sake, who see games as big, fascinating systems to tweak and fiddle with — the more complicated the better. My problem, if problem it be, is that I tend to see games as experiences — as stories.

Ah, well. Horses for courses. If you’re one of those who love Master of Orion II — and I’m sure that category includes many of you reading this — rest assured that there’s absolutely nothing wrong with that. As for me, all this time spent with the sequel has only given me the itch to fire up the first one again…



Although I’ve never seen any hard sales numbers, all indications are that Master of Orion II was about as commercially successful as a game this time-consuming, slow-paced, and cerebral — and not named Civilization — could possibly be, most likely selling well into the hundreds of thousands of units. Yet its success didn’t lead to an especially bright future for SimTex — or MicroProse Austin, as it had now become known. In fact, the studio never managed to finish another game after it. Its last years were consumed by an expensive boondoggle known as Guardians: Agents of Justice, another brainchild of Steve Barcia: an “X-COM in tights,” with superheroes and supervillains instead of soldiers and aliens. That sounds like a pretty fantastic idea to me. But sadly, a turn-based tactical-combat game was at odds with all of the prevailing trends in an industry increasingly dominated by first-person shooters and real-time strategy; one frustrated MicroProse executive complained loudly that Barcia’s game was “slow as a pig.” It was accordingly forced through redesign after redesign, without ever arriving at anything that both satisfied the real or perceived needs of the marketers and was still fun to play. At last, in mid-1998, MicroProse pulled the plug on the project, shutting down the entirety of its short-lived Austin-based subsidiary at the same time. And so that was that for SimTex; Master of Orion III, when it came, would be the work of a completely different group of people.

Guardians: Agents of Justice was widely hyped over the years. MicroProse plugged it enthusiastically at each of the first four E3 trade shows, and a preview was the cover story of Computer Games Strategy Plus‘s December 1997 issue. “At least Agents never graced a CGW cover,” joshed Terry Coleman of the rival Computer Gaming World just after Guardians‘s definitive cancellation.

Steve Barcia never took up the design reins of another game after conceiving Guardians: Agents of Justice, focusing instead on his new career in management, which took him to the very different milieu of the Nintendo-exclusive action-games house Retro Studios after his tenure at MicroProse ended. Some might consider this an odd, perchance even vaguely tragic fate for the designer of three of the most respected and beloved grand-strategy games of all time. On the other hand, maybe he’d just said all he had to say in game design, and saw no need to risk tarnishing his stellar reputation. Either way, his creative legacy is more than secure.

(Sources: the book The Anxiety of Influence: A Theory of Poetry by Harold Bloom; Computer Gaming World of October 1995, December 1996, March 1997, June 1997, July 1997, and October 1998; Computer Games Strategy Plus of December 1997. Online sources include Alan Emrich’s retrospective on the old Master of Orion III site and Tom Chick’s piece on Master of Orion II for IGN.

Master of Orion I and II are available as a package from GOG.com. So, you can compare and contrast, and decide for yourself whether I’m justified in favoring the original.)

 
 


Sequels in Strategy Gaming, Part 1: Civilization II

How do you make a sequel to a game that covers all of human history?

— Brian Reynolds

At the risk of making a niche website still more niche, allow me to wax philosophical for a moment on the subject of those Roman numerals that have been appearing just after the names of so many digital games almost from the very beginning. It seems to me that game sequels can be divided into two broad categories: the fiction-driven and the systems-driven.

Like so much else during gaming’s formative years, fiction-driven sequels were built off the example of Hollywood, which had already discovered that no happily ever after need ever be permanent if there was more money to be made by getting the old gang of heroes back together and confronting them with some new threat. Game sequels likewise promised their players a continuation of an existing story, or a new one that took place in a familiar setting with familiar characters. Some of the most iconic names in 1980s and early 1990s gaming operated in this mode: Zork, Ultima, Wizardry, King’s Quest, Carmen Sandiego, Leisure Suit Larry, Wing Commander. As anyone who has observed the progress of those series will readily attest, their technology did advance dramatically over the years. And yet this was only a part of the reason people stayed loyal to them. Gamers also wanted to get the next bit of story out of them, wanted to do something new in their comfortingly recognizable worlds. Unsurprisingly, the fiction-driven sequel was most dominant among games that foregrounded their fictions — namely the narrative-heavy genres of the adventure game and the CRPG.

But there was another type of sequel, which functioned less like a blockbuster Hollywood franchise and more like the version numbers found at the end of other types of computer software. It was the domain of games that were less interested in their fictions. These sequels rather promised to do and be essentially the same thing as their forerunner(s), only to do and be it even better, taking full advantage of the latest advances in hardware. Throughout the 1980s and well into the 1990s, the technology- or systems-driven sequel was largely confined to the field of vehicular simulations, a seemingly fussily specific pursuit that was actually the source in some years of no less than 25 percent of the industry’s total revenues. The poster child for the category is Microsoft’s Flight Simulator series, the most venerable in the entire history of computer gaming, being still alive and well as I write these words today, almost 43 years after it debuted on the 16 K Radio Shack TRS-80 under the imprint of its original publisher subLogic. If you were to follow this franchise’s evolution through each and every installment, from that monochrome, character-graphic-based first specimen to today’s photo-realistic feast for the senses, you’d wind up with a pretty good appreciation of the extraordinary advances personal computing has undergone over the past four decades and change. Each new Flight Simulator didn’t so much promise a new experience as the same old one perfected, with better graphics, better sound, a better frame rate, better flight modeling,  etc. When you bought the latest Flight Simulator — or F-15 Strike Eagle, or Gunship, or Falcon — you did so hoping it would take you one or two steps closer to that Platonic ideal of flying the real thing. (The fact that each installment was so clearly merely a step down that road arguably explains why these types of games have tended to age more poorly than others, and why you don’t find nearly as many bloggers and YouTubers rhapsodizing about old simulations today as you do games in most other genres.)

For a long time, the conventional wisdom in the industry held that strategy games were a poor fit with both of these modes of sequel-making. After all, they didn’t foreground narrative in the same way as adventures and CRPGs, but neither were they so forthrightly tech-centric as simulations. As a result, strategy games — even the really successful ones — were almost always standalone affairs.

But all that changed in a big way in 1993, when Maxis Software released SimCity 2000, a sequel to its landmark city-builder of four years earlier. SimCity 2000 was a systems-driven sequel in the purest sense. It didn’t attempt to be anything other than what its predecessor had been; it just tried to be a better incarnation of that thing. Designer Will Wright had done his level best to incorporate every bit of feedback he had received from players of his original game, whilst also taking full advantage of the latest hardware to improve the graphics, sound, and interface. “Is SimCity 2000 a better program than the original SimCity?” asked Computer Gaming World magazine rhetorically. “It is without question a superior program. Is it more fun than the original SimCity? It is.” Wright was rewarded for his willingness to revisit his past with another huge hit, even bigger than his last one.

Other publishers greeted SimCity 2000‘s success as something of a revelation. At a stroke, they realized that the would-be city planners and generals among their customers were as willing as the would-be pilots and submarine captains to buy a sequel that enhanced a game they had already bought before, by sprucing up the graphics, addressing exploits, incongruities, and other weaknesses, and giving them some additional complexity to sink their teeth into. For better or for worse, the industry’s mania for franchises and sequels thus came to encompass strategy games as well.

In the next few articles, I’d like to examine a few of the more interesting results of this revelation — not SimCity 2000, a game about which I have oddly little to say, but another trio that would probably never have come to be without it to serve as a commercial proof of concept. All of the games I’ll write about are widely regarded as strategy classics, but I must confess that I can find unreserved love in my heart for only one of them. As for which one that is, and the reasons for my slight skepticism about the others… well, you’ll just have to read on and see, won’t you?


Civilization, Sid Meier’s colossally ambitious and yet compulsively playable strategy game of everything, was first released by MicroProse Software just in time to miss the bulk of the Christmas 1991 buying season. That would have been the death knell of many a game, but not this one. Instead Civilization became the most celebrated computer game since SimCity in terms of mainstream-media coverage, even as it also became a great favorite with the hardcore gamers. Journalists writing for newspapers and glossy lifestyle magazines were intrigued by it for much the same reason they had been attracted to SimCity, because its sweeping, optimistic view of human Progress writ large down through the ages marked it in their eyes as something uniquely high-toned, inspiring, and even educational in a cultural ghetto whose abiding interest in dwarfs, elves, and magic spells left outsiders like them and their readers nonplussed. The gamers loved it, of course, simply because it could be so ridiculously fun to play. Never a chart-topping hit, Civilization became a much rarer and more precious treasure: a perennial strong seller over months and then years, until long after it had begun to look downright crude in comparison to all of the slick multimedia extravaganzas surrounding it on store shelves. It eventually sold 850,000 copies in this low-key way.

Yet neither MicroProse nor Sid Meier himself did anything to capitalize on its success for some years. The former turned to other games inside and outside of the grand-strategy tent, while the latter turned his attention to C.P.U. Bach, a quirky passion project in computer-generated music that wasn’t even a game at all and didn’t even run on conventional computers. (Its home was the 3DO multimedia console.) The closest thing to a Civilization sequel or expansion in the three years after the original game’s release was Colonization, a MicroProse game from designer Brian Reynolds that borrowed some of Civilization‘s systems and applied them to the more historically grounded scenario of the European colonization of the New World. The Colonization box sported a blurb declaring that “the tradition of Civilization continues,” while Sid Meier’s name became a possessive prefix before the new game’s title. (Reynolds’s own name, by contrast, was nowhere to be found on the box.) Both of these were signs that MicroProse’s restless marketing department felt that the legacy of Civilization ought to be worth something, even if it wasn’t yet sure how best to make use of it.

Colonization hit the scene in 1994, one year after SimCity 2000 had been accorded such a positive reception, and proceeded to sell an impressive 300,000 copies. These two success stories together altered MicroProse’s perception of Civilization forever, transforming what had started as just an opportunistic bit of marketing on Colonization‘s box into an earnest attempt to build a franchise. Not one but two new Civilization games were quickly authorized. The one called CivNet was rather a stopgap project, which transplanted the original game from MS-DOS to Windows and added networked or hot-seat multiplayer capabilities to the equation. The other Civilization project was also to run under Windows, but was to be a far more extensive revamping of the original, making it bigger, prettier, and better balanced than before. Its working title of Civilization 2000 made clear its inspiration. Only at the last minute would MicroProse think better of making SimCity 2000‘s influence quite so explicit, and rename it simply Civilization II.

Unfortunately for MicroProse’s peace of mind, Sid Meier, a designer who always followed his own muse, said that he had no interest whatsoever in repeating himself at this point in time. Thus the project devolved to Brian Reynolds as the logical second choice: he had acquitted himself pretty well with Colonization, and Meier liked him a lot and would at least be willing to serve as his advisor, as he had for Reynolds’s first strategy game. “They pitched it to me as if [they thought] I was probably going to be really upset,” laughs Reynolds. “I guess they thought I had my heart set on inventing another weird idea like Colonization. ‘Okay, will he be too mad if we tell him that we want him to do Civilization 2000?’ Which of course to me was the ultimate dream job. You couldn’t have asked me to do something I wanted to do more than make a version of Civilization.”

Like his mentor Meier, Reynolds was an accomplished programmer as well as game designer. This allowed him to do the initial work of hammering out a prototype on his own — from, of all locations, Yorkshire, England, where he had moved to be with his wife, an academic who was there on a one-year Fulbright scholarship. While she went off to teach and be taught every day, he sat in their little flat putting together the game that would transform Civilization from a one-off success into the archetypal strategy franchise.

Brian Reynolds

As Reynolds would be the first to admit, Civilization II is more of a nuts-and-bolts iteration on what came before than any wild flight of fresh creativity. He approached his task as a sacred trust. Reynolds:

My core vision for Civ II was not to be the guy that broke Civilization. How can I make each thing a little bit better without breaking any of it? I wanted to make the AI better. I wanted to make it harder. I wanted to add detail. I wanted to pee in all the corners. I didn’t have the idea that we were going to change one thing and everything else would stay the same. I wanted to make everything a little bit better. So, I both totally respected [Civilization I] as an amazing game, and thought, I can totally do a better job at every part of this game. It was a strange combination of humility and arrogance.

Reynolds knew all too well that Civilization I could get pretty wonky pretty quickly when you drilled down into the details. He made it his mission to fix as many of these incongruities as possible — both the ones that could be actively exploited by clever players and the ones that were just kind of weird to think about.

At the top of his list was the game’s combat system, the source of much hilarity over the years, what with the way it made it possible — not exactly likely, mind you, but possible — for a militia of ancient spearmen to attack and wipe out a modern tank platoon. This was a result of the game’s simplistic “one hit and done” approach to combat. Let’s consider our case of a militia attacking tanks. A militia has an attack strength of one, a tank platoon a defense strength of five. The outcome of the confrontation is determined by adding these numbers together, then taking each individual unit’s strength as its chance of destroying the other unit rather than being destroyed itself. In this case, then, our doughty militia men have a one-in-six chance of annihilating the tanks rather than vice versa — not great odds, to be sure, but undoubtedly better than those they would enjoy in any real showdown.
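To make the arithmetic concrete, here is a minimal sketch of that one-round rule in JavaScript — my own illustration of the odds, not a reconstruction of anything in the game’s actual code, with the function and variable names invented for the purpose:

// One round of "one hit and done" combat as described above:
// the attacker's chance of winning is its attack strength divided
// by the sum of its attack strength and the defender's defense strength.
function oneHitBattle(attackStrength, defenseStrength) {
  return Math.random() < attackStrength / (attackStrength + defenseStrength);
}

// A militia (attack 1) assaulting a tank platoon (defense 5)
// should prevail roughly one time in six over many trials.
let militiaWins = 0;
const trials = 100000;
for (let i = 0; i < trials; i++) {
  if (oneHitBattle(1, 5)) militiaWins++;
}
console.log(`Militia victories: ${militiaWins} of ${trials}`);

Run enough trials and the militia’s success rate settles right around seventeen percent — one in six, exactly as described.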

It was economic factors that made this state of affairs truly unbalancing. A very viable strategy for winning Civilization every single time was the “barbarian hordes” approach: forgo virtually all technological and social development, flood the map with small, primitive cities, then use those cities to pump out huge numbers of primitive units. A computer opponent diligently climbing the tech tree and developing its society over a broader front would in time be able to create vastly superior units like tanks, but would never come close to matching your armies in quantity. So, you could play the law of averages: you might have to attack a given tank platoon five times or more with different militias, but you knew that you would eventually destroy it, as you would the rest of your opponent’s fancy high-tech military with your staggering numbers of bottom feeders. The barbarian-horde strategy made for an unfun way to play once the joy of that initial eureka moment of discovering it faded, yet many players found the allure of near-certain victory on even the highest difficulty levels hard to resist. Part of a game designer’s job is to save players like this from themselves.

This was in fact the one area of Civilization II that Sid Meier himself dived into with some enthusiasm. He’d been playing a lot of Master of Magic, yet another MicroProse game that betrayed an undeniable Civilization influence, although unlike Colonization it was never marketed on the basis of those similarities. When two units met on the world map in Master of Magic, a separate tactical-battle screen opened up for you to manage the fight. Meier went so far as prototyping such a system for Civilization II, but gave up on it in the end as a poor fit with the game’s core identity. “Being king is the heart of Civilization,” he says. “Slumming as a lowly general puts the player in an entirely different story (not to mention violates the Covert Action rule). Win-or-lose battles are not the only interesting choice on the path to good game design, but they’re the only choice that leads to Civ.”

With his mentor having thus come up empty, Brian Reynolds addressed the problem via a more circumspect complication of the first game’s battle mechanics. He added a third and fourth statistic to each unit: firepower and hit points. Now, instead of being one-and-done, each successful “hit” would merely subtract the one unit’s firepower from the other’s total hit points, and then the battle would continue until one or the other reached zero hit points. The surviving unit would quite possibly exit the battle “wounded” and would need some time to recuperate, adding another dimension to military strategy. It was still just barely possible that a wildly inferior unit could defeat its better — especially if the latter came into a battle already at less than its maximum hit points — but such occurrences became the vanishingly rare miracles they ought to be. Consider: Civilization II‘s equivalent of a militia — renamed now to “warriors” — has ones across the board for all four statistics; a tank platoon, by contrast, has an attack strength of ten, a defense strength of five, a firepower of one, and three hit points when undamaged. This means that a group of ancient warriors needs to roll the same lucky number three times in a row on a simulated six-sided die in order to attack an undamaged tank platoon and win. A one-in-six chance has become one chance in 216 — odds that we can just about imagine applying in the real world, where freak happenstances really do occur from time to time.
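Again purely as an illustration — the same toy model as before, now with the firepower and hit-point wrinkle bolted on, and again not meant to mirror the game’s real internals:

// Each round is decided as before, by the ratio of attack to defense.
// But now the loser of a round merely subtracts the winner's firepower
// from its hit points; the battle continues until one side reaches zero.
function attritionBattle(attacker, defender) {
  let attackerHP = attacker.hitPoints;
  let defenderHP = defender.hitPoints;
  while (attackerHP > 0 && defenderHP > 0) {
    if (Math.random() < attacker.attack / (attacker.attack + defender.defense)) {
      defenderHP -= attacker.firepower;
    } else {
      attackerHP -= defender.firepower;
    }
  }
  return defenderHP <= 0; // true if the attacker prevailed
}

const warriors = { attack: 1, defense: 1, firepower: 1, hitPoints: 1 };
const tanks = { attack: 10, defense: 5, firepower: 1, hitPoints: 3 };

// The warriors must now win three one-in-six rounds in a row
// before losing a single one: about one victory in every 216 attempts.
let warriorWins = 0;
const attempts = 1000000;
for (let i = 0; i < attempts; i++) {
  if (attritionBattle(warriors, tanks)) warriorWins++;
}
console.log(`Warrior victories: ${warriorWins} of ${attempts}`);

Left running for a while, the warriors’ success rate hovers just under half a percent, in line with the one-in-216 figure above.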

This change was of a piece with those Reynolds introduced at every level of the game — pragmatic and judicious, evolutionary rather than revolutionary in spirit. I won’t enumerate them exhaustively here, but will just note that they were all very defensible if not always essential in this author’s opinion.

Civilization II was written for Windows 3, and uses that operating system’s standard interface conventions.

The layers of the program that were not immediately visible to the player got an equally judicious sprucing up — especially diplomacy and artificial intelligence, areas where the original had been particularly lacking. The computer players became less erratic in their interactions with you and with one another; no longer would Mahatma Gandhi go to bed one night a peacenik and wake up a nuke-spewing madman. Combined with other systemic changes, such as a rule making it impossible for players to park their military units inside the city boundaries of their alleged allies, these improvements made it much less frustrating to pursue a peaceful, diplomatic path to victory — made it less likely, that is to say, that the other players would annoy you into opening a can of Gandhi-style whoop-ass on them just to get them out of your hair.

In addition to the complications that were introduced to address specific weaknesses of the first game, Civilization II got a whole lot more stuff for the sake of it: more nationalities to play and play against (21 instead of 14); more advances to research (89 instead of 71); more types of units to move around the map (51 instead of 28); a bewildering variety of new geological, biological, and ecological parameters to manipulate to ensure that the game built for you just the sort of random world that you desired to play in; even a new, ultra-hard “Deity” difficulty level to address Reynolds’s complaint that Meier’s Civilization was just too easy. There was also a new style of government added to the original five: “Fundamentalism” continued the tradition of mixing political, economic, and now religious ideologies indiscriminately, with all of them seen through a late-twentieth-century American triumphalist lens that might have been offensive if it wasn’t so endearingly naïve in its conviction that the great debates down through history about how human society can be most justly organized had all been definitively resolved in favor of American-style democracy and capitalism. And then the game got seven new Wonders of the World to add to the existing 21. Like their returning stablemates, they were a peculiar mix of the abstract and the concrete, from Adam Smith’s Trading Company (there’s that triumphalism again!) in the realm of the former to the Eiffel Tower in that of the latter.

Reynolds’s most generous move of all was to crack open the black box of the game for its players, turning it into a toolkit that let them try their own hands at strategy-game design. Most of the text and vital statistics were stored in plain-text files that anyone could open up in an editor and tinker with. Names could be changed, graphics and sounds could be replaced, and almost every number in the game could be altered at will. MicroProse encouraged players to incorporate their most ambitious “mods” into set-piece scenarios, which replaced the usual randomized map and millennia-spanning timeline with a more focused premise. Scenarios dealing with Rome during the time of transition from Republic to Empire and World War II in Europe were included with the game to get the juices flowing. In shrinking the timeline so dramatically and focusing on smaller goals, scenarios did tend to bleed away some of Civilization‘s high-concept magic and turn it into more of a typical strategic war game, but that didn’t stop the hardcore fans from embracing them. They delivered scenarios of their own about everything from Egyptian, Greek, and Norse mythology to the recent Gulf War against Iraq, from a version of Conway’s Game of Life to a cut-throat competition among Santa’s elves to become the dominant toy makers.

The ultimate expression of Brian Reynolds’s toolkit approach can be seen right there on the menu every time you start a new game of Civilization II, under the heading of simply “Cheat.” You can use it to change anything you want any time you want, at the expense of not having your high score recorded, should you earn one. At a click of the mouse, you can banish an opposing player from the game, research any advance instantly, give yourself infinite money… you name it. More importantly in the long run, the Cheat menu lets you peek behind the curtain to find out exactly what is going on at any given moment, almost like a programmer sitting in front of a debugging console. Sid Meier was shocked the first time he saw it.

Cheating was an inherent part of the game now, right on the main screen? This was not good. Like all storytelling, gaming is about the journey, and if you’re actively finding ways to jump to the end, then we haven’t made the fantasy compelling enough. A gripping novel would never start with an insert labeled, “Here’s the Last Page, in Case You Want to Read It Now.” Players who feel so inclined will instinctively find their own ways to cheat, and we shouldn’t have to help them out. I could not be convinced this was a good idea.

But Reynolds stuck to his guns, and finally Meier let him have it his way. It was, he now acknowledges, the right decision. The Cheat menu let players rummage around under the hood of the game as it was running, until some of them came to understand it practically as well as Reynolds himself. This was a whole new grade of catnip for the types of mind that tend to be attracted by big, complex strategy games like this one. Meanwhile the loss of a high score to boast about was enough to ensure that gamers weren’t unduly tempted to use the Cheat menu when playing for keeps, as it were.

Of course, the finished Civilization II is not solely a creation of Brian Reynolds. After he returned from Britain with his prototype in hand, two other MicroProse designers named Doug Kaufman and Jeff Briggs joined him for the hard work of polishing, refining, and balancing. Ditto a team of artists and even a film crew.

Yes, a film crew: the aspect of Civilization II that most indelibly dates it to the mid-1990s — even more so than its Windows 3 interface — must surely be your “High Council,” who pop up from time to time to offer their wildly divergent input on the subject of what you should be doing next. They’re played by real actors, hamming it up gleefully in video clips, changing from togas to armor to military uniforms to business suits as the centuries go by. Most bizarre of all is the entertainment advisor, played by… an Elvis Presley impersonator. What can one say? This sort of thing was widely expected to be the future of gaming, and MicroProse didn’t want to be left completely in the cold when the much-mooted merger of Silicon Valley and Hollywood finally became a reality.


Civilization II was released in the spring of 1996 to glowing reviews. Computer Gaming World gave it five stars out of five, calling it “a spectacularly addictive and time-consuming sequel.” Everything I’ve said in this article and earlier ones about the appeal, success, and staying power of Civilization I applies threefold to Civilization II. It sold 3 million copies over the five years after its release, staying on store shelves right up to the time that the inevitable Civilization III arrived to replace it. Having now thoroughly internalized the lesson that strategy games could become franchises too, MicroProse sustained interest in the interim with two scenario packs, a “Multiplayer Gold Edition” that did for Civilization II what CivNet had done for Civilization I, and another reworking called Civilization II: Test of Time that extended the timeline of the game into the distant future. Civilization as a whole thus became one of gaming’s most inescapable franchises, the one name in the field of grand strategy that even most non-gamers know.

Given all of this, and given the obvious amount of care and even love that was lavished on Civilization II, I feel a bit guilty to admit that I struggled to get into it when I played it in preparation for this article. Some of my lack of enthusiasm may be down to purely proximate causes. I played a lot of Civilization I in preparation for the long series of articles I wrote about it and the Progress-focused, deeply American worldview it embodies, and the sequel is just more of the same from this perspective. If I’d come to Civilization II cold, as did the majority of those 3 million people who bought it, I might well have had a very different experience with it.

Still, I do think there’s a bit more to my sense of vague dissatisfaction than just a jaded player’s ennui. I find myself missing a bold leap or two in Civilization II to go along with all of the incrementalist tinkering. Its designers made no real effort to address the big issues that dog games of this ilk: the predictable tech tree that lends itself to rote strategies, the ever more crushing burden of micromanagement as your empire expands, and an anticlimactic endgame that can go on for hours after you already know you’re going to win. How funny to think that Master of Orion, another game published by MicroProse, had already done a very credible job of addressing all of these problems three years before Civilization II came to be!

Then, too, Civilization II may be less wonky than its predecessor, but I find that I actually miss the older game’s cock-eyed jeu d’esprit, of which those ancient militias beating up on tanks were part and parcel. Civilization II‘s presentation, using the stock Windows 3 menus and widgets, is crisper and cleaner, but only adds to the slight sense of sterility that dogs the whole production. Playing it can feel rather like working a spreadsheet at times — always a danger in these kinds of big, data-driven strategy games. Those cheesy High Council videos serve as a welcome relief from the austerity of it all; if you ask me, the game could have used some more of that sort of thing.

I do appreciate the effort that went into all the new nationalities, advances, units, and starting parameters. In the end, though, Civilization II only provides further proof for me — as if I needed it — that shoehorning more stuff into a game doesn’t always or even usually make it better, just slower and more ponderous. In this sense too, I prefer its faster playing, more lovably gonzo predecessor. It strikes me that Civilization II is more of a gamer’s game, emphasizing min-maxing and efficient play above all else, at the expense of the original’s desire to become a flight of the imagination, letting you literally write your own history of a world. Sid Meier liked to call his game first and foremost “an epic story.” I haven’t heard any similar choice of words from Brian Reynolds, and I’ve definitely never felt when playing Civilization I that it needed to be harder, as he did.

I hasten to emphasize, however, that mine is very much a minority opinion. Civilization II was taken up as a veritable way of life by huge numbers of strategy gamers, some of whom have refused to abandon it to this day, delivering verdicts on the later installments in the series every bit as mixed as my opinions about this one. Good for them, I say; there are no rights or wrongs in matters like these, only preferences.


Postscript: The Eternal War

In 2012, a fan with the online handle of Lycerius struck a chord with media outlets all over the world when he went public with a single game of Civilization II which he had been playing on and off for ten years of real time. His description of it is… well, chilling may not be too strong a word.

The world is a hellish nightmare of suffering and devastation. There are three remaining super nations in AD 3991, each competing for the scant resources left on the planet after dozens of nuclear wars have rendered vast swaths of the world uninhabitable wastelands.

The ice caps have melted over 20 times, due primarily to the many nuclear wars. As a result, every inch of land in the world that isn’t a mountain is inundated swampland, useless to farming. Most of which is irradiated anyway.

As a result, big cities are a thing of the distant past. Roughly 90 percent of the world’s population has died either from nuclear annihilation or famine caused by the global warming that has left absolutely zero arable land to farm. Engineers are busy continuously building roads so that new armies can reach the front lines. Roads that are destroyed the very next turn. So, there isn’t any time to clear swamps or clean up the nuclear fallout.

Only three massive nations are left: the Celts (me), the Vikings, and the Americans. Between the three of us, we have conquered all the other nations that have ever existed and assimilated them into our respective empires.

You’ve heard of the 100 Year War? Try the 1700 Year War. The three remaining nations have been locked in an eternal death struggle for almost 2000 years. Peace seems to be impossible. Every time a ceasefire is signed, the Vikings will surprise-attack me or the Americans the very next turn, often with nuclear weapons. So, I can only assume that peace will come only when they’re wiped out. It is this that perpetuates the war ad infinitum.

Because of SDI, ICBMs are usually only used against armies outside of cities. Instead, cities are constantly attacked by spies who plant nuclear devices which then detonate. Usually the downside to this is that every nation in the world declares war on you. But this is already the case, so it’s no longer a deterrent to anyone, myself included.

The only governments left are two theocracies and myself, a communist state. I wanted to stay a democracy, but the Senate would always overrule me when I wanted to declare war before the Vikings did. This would delay my attack and render my turn and often my plans useless. And of course the Vikings would then break the ceasefire like clockwork the very next turn. I was forced to do away with democracy roughly a thousand years ago because it was endangering my empire. But of course the people hate me now, and every few years since then, there are massive guerrilla uprisings in the heart of my empire that I have to deal with, which saps resources from the war effort.

The military stalemate is airtight, perfectly balanced because all remaining nations already have all the technologies, so there is no advantage. And there are so many units at once on the map that you could lose twenty tank units and not have your lines dented because you have a constant stream moving to the front. This also means that cities are not only tiny towns full of starving people, but that you can never improve the city. “So you want a granary so you can eat? Sorry! I have to build another tank instead. Maybe next time.”

My goal for the next few years is to try to end the war and use the engineers to clear swamps and fallout so that farming may resume. I want to rebuild the world. But I’m not sure how.

One can’t help but think about George Orwell’s Oceania, Eurasia, and Eastasia when reading of Lycerius’s three perpetually warring empires. Like Nineteen Eighty-Four, his after-action report has the uncanny feel of a dispatch from one of our own world’s disturbingly possible futures. Many people today would surely say that recent events have made his dystopia seem even more probable than ten years ago.

But never fear: legions of fans downloaded the saved game of the “Eternal War” which Lycerius posted and started looking for a way to end the post-apocalyptic paralysis. A practical soul who called himself “stumpster” soon figured out how to do so: “I opted for a page out of MacArthur’s book and performed my own Incheon landing.” In the game of Civilization, there is always a way. Let us hope the same holds true in reality.

(Sources: the book Sid Meier’s Memoir! by Sid Meier; Computer Gaming World of April/May 1985, November 1987, March 1993, June 1996, July 1996, and August 1996; Retro Gamer 86, 112, and 219. Online sources include Soren Johnson’s interviews with Sid Meier and Brian Reynolds, PC Gamer‘s “Complete History of Civilization,” and Huffington Post‘s coverage of Lycerius’s game of Civilization and stumpster’s resolution of the stalemate. The original text of Lycerius’s Reddit message is posted on the Civilization II wiki.

Civilization II is not currently available for online purchase. You can, however, find it readily enough on any number of abandonware archives; some are dodgier than others, so be cautious. I recommend that you avoid the Multiplayer Gold Edition in favor of the original unless you really, really want to play with your mates. For, in a rather shocking oversight, MicroProse released the Gold Edition with bugged artificial intelligence that makes all of the computer-controlled players ridiculously aggressive and will keep you more or less constantly at war with everyone. If perpetual war is your thing, on the other hand, go for it…

Update: See Blake’s comment below for information on how to get the Multiplayer Gold Edition running with the original artificial intelligence, thereby getting the best of both worlds!

Once you’ve managed to acquire it, there’s a surprisingly easy way to run Civilization II on modern versions of Windows. You just need to install a little tool called WineVDM, and then the game should install and run transparently, right from the Windows desktop. It’s probably possible to get it running on Linux and MacOS using the standard Wine layer, but I haven’t tested this personally.)

In a feat of robust programming of which its makers deserve to be proud, Civilization II is capable of scaling to seemingly any size of screen. Here it is running on my Windows 10 desktop at a resolution of 3440 X 1440 — numbers that might as well have been a billion by a million back in 1996.

 
 


Normality

Sometimes these articles come from the strangest places. When I was writing a little while back about The Pandora Directive, the second of the Tex Murphy interactive movies, I lavished praise on its use of a free-roaming first-person 3D perspective, claiming in the process that “first-person 3D otherwise existed only in the form of action-oriented shooters and static, node-based, pre-rendered Myst clones.” Such a blanket statement is just begging to be contradicted, and you folks didn’t disappoint. Our tireless fact-checker Aula rightly noted that I’d forgotten a whole family of action-CRPGs which followed in the wake of Ultima Underworld (another game I’d earlier lavished with praise, as it happened). And, more pertinently for our subject of today, Sarah Walker informed me that “Gremlin’s Normality was an early 1996 point-and-clicker using a DOOM-style engine.”

I must confess that I’d never even heard of Normality at that point, but Sarah’s description of it made me very interested in checking it out. What I found upon doing so was an amiable little game that feels somehow less earthshaking than its innovative technical approach might lead one to expect, but that I nevertheless enjoyed very much. So, I decided to write about it today, as both an example of a road largely not taken in traditional adventure games and as one of those hidden gems that can still surprise even me, a man who dares to don the mantle of an expert in the niche field of interactive narratives of the past.



The story of Normality‘s creation is only a tiny part of the larger story of Gremlin Interactive, the British company responsible for it, which was founded under the name of Gremlin Graphics in 1984 by Ian Stewart and Kevin Norburn, the proprietors of a Sheffield software shop. Impressed by the coding talents of the teenagers who flocked around their store’s demo machines every afternoon and weekend, one-upping one another with ever more audacious feats of programming derring-do, Stewart and Norburn conceived Gremlin as a vehicle for bringing these lads’ inventions to the world. The company’s name became iconic among European owners of Sinclair Spectrums and Commodore 64s, thanks to colorfully cartoony and deviously clever platformers and other types of action games: the Monty Mole series, Thing on a Spring, Bounder, Switchblade, just to name a few. When the 1980s came to an end and the 8-bit machines gave way to the Commodore Amiga, MS-DOS, and the new 16-bit consoles, Gremlin navigated the transition reasonably well, keeping their old aesthetic alive through games like Zool whilst also branching out in new directions, such as a groundbreaking line of 3D sports simulations that began with Actua Soccer. Through it all, Gremlin was an institution unto itself in British game development, a rite of passage for countless artists, designers, and programmers, some of whom went on to found companies of their own. (The most famous of Gremlin’s spinoffs is Core Design, which struck international gold in 1996 with Tomb Raider.)

The more specific story of Normality begins with a fellow named Tony Crowther. While still a teenager in the 1980s, he was one of the elite upper echelon of early British game programmers, who were feted in the gaming magazines like rock stars. A Sheffield lad himself, Crowther was already famous before the founding of Gremlin, but his path converged with the company’s on a number of occasions afterward. Unlike many of his rock-star peers, he was able to sustain his career if not his personal name recognition into the 1990s, when lone-wolf programmers were replaced by teams and project budgets and timelines increased exponentially. He remembers his first sight of id Software’s DOOM as a watershed moment in his professional life: “This was the first game I had seen with 3D graphics, and with what appeared to be a free-roaming camera in the world.” It was, in short, the game that would change everything. Crowther immediately started working on a DOOM-style 3D engine of his own.

He brought the engine, which he called True3D, with him to Gremlin Interactive when he accepted the title of Technical Consultant there in early 1994. “I proposed two game scenarios” for using it, he says. “Gremlin went with the devil theme; the other was a generic monster game.”

The “devil theme” would become Realms of the Haunting, a crazily ambitious and expensive project that would take well over two years to bring to fruition, that would wind up filling four CDs with DOOM-style carnage, adventure-style dialogs and puzzle solving, a complicated storyline involving a globe-spanning occult conspiracy of evil (yes, yet another one), and 90 minutes of video footage of human actors (this was the mid-1990s, after all). We’ll have a closer look at this shaggy beast in a later article.

Today’s more modest subject of inquiry was born in the head of one Adrian Carless, a long-serving designer, artist, writer, and general jack-of-all-trades at Gremlin. He simply “thought it would be cool to make an adventure game in a DOOM-style engine. Realms of the Haunting was already underway, so why not make two games with the same engine?” And so Normality, Realms of the Haunting‘s irreverent little brother, was born. A small team of about half a dozen made it their labor of love for some eighteen months, shepherding it to a European release in the spring of 1996. It saw a North American release, under the auspices of the publisher Interplay, several months later.



To the extent that it’s remembered at all, Normality is known first and foremost today for its free-roaming first-person 3D engine — an approach that had long since become ubiquitous in the realm of action games, where “DOOM clones” were a dime a dozen by 1996, but was known to adventure gamers only thanks to Access Software’s Tex Murphy games. Given this, it might be wise for us to review the general state of adventure-game visuals circa 1996.

By this point, graphical adventures had bifurcated into two distinct groups whose Venn diagram of fans overlapped somewhat, but perhaps not as much as one might expect. The older approach was the third-person point-and-click game, which had evolved out of the 1980s efforts of Sierra and LucasArts. Each location in one of these games was built from a background of hand-drawn pixel art, with the player character, non-player characters, and other interactive objects superimposed upon it as sprites. Because drawing each bespoke location was so intensive in terms of human labor, there tended to be relatively few of them to visit in any given game. But by way of compensation, these games usually offered fairly rich storylines and a fair degree of dynamism in terms of their worlds and the characters that inhabited them. Puzzles tended to be of the object-oriented sort — i.e., a matter of using this thing from your inventory on this other thing.

The alternative approach was pioneered and eternally defined by Myst, a game from the tiny studio Cyan Productions that first appeared on the Macintosh in late 1993 and went on to sell over 6 million copies across a range of platforms. Like DOOM and its ilk, Myst and its many imitators presented a virtual world to their players from a first-person perspective, and relied on 3D graphics rendered by a computer using mathematical algorithms rather than hand-drawn pixel art. In all other ways, however, they were DOOM‘s polar opposite. Rather than corridors teeming with monsters to shoot, they offered up deserted, often deliberately surreal — some would say “sterile” — worlds for their players to explore. And rather than letting players roam freely through said worlds, they presented them as a set of discrete nodes that they could hop between.

Why did they choose this slightly awkward approach? As happens so often in game development, the answer has everything to do with technological tradeoffs. Both DOOM and Myst were 3D-rendered; their differences came down to where and when that rendering took place. DOOM created its visuals on the fly, which meant that the player could go anywhere in the world but which limited the environment’s visual fidelity to what an ordinary consumer-grade computer of the time could render at a decent frame rate. Myst, on the other hand, was built from pre-rendered scenes: scenes that had been rendered beforehand on a high-end computer, then saved to disk as ordinary graphics files — effectively converted into pixel art. This work stream let studios turn out far more images far more quickly than even an army of human pixel-artists could have managed, but forced them to construct their worlds as a network of arbitrarily fixed nodes and views which many players — myself among them — can find confusing to navigate. Further, these views were not easy to alter in any sort of way after they had been rendered, which sharply limited the dynamism of Myst clones in comparison to traditional third-person adventure games. Thus the deserted quality that became for good or ill one of their trademarks, and their tendency to rely on set-piece puzzles such as slider and button combinations rather than more flexible styles of gameplay. (Myst itself didn’t have a player inventory of any sort — a far cry from the veritable pawn shop’s worth of seemingly random junk one could expect to be toting around by the middle stages of the typical Sierra or LucasArts game.)

By no means did Normality lift the set of technical constraints I’ve just described. Yet it did serve as a test bed for a different set of tradeoffs from the ones that adventure developers had been accepting before this point. It asked the question of whether you could make an otherwise completely conventional adventure game — unlike its big brother Realms of the Haunting, Normality has no action elements whatsoever — using a DOOM-style engine, accepting that the end result would not be as beautiful as Myst but hoping that the world would feel a lot more natural to move around in. And the answer turned out to be — in this critic’s opinion, at any rate — a pretty emphatic yes.

Tony Crowther may have chosen to call his engine True3D, but it is in reality no such thing. Like the DOOM engine which inspired it, it uses an array of tricks and shortcuts to minimize rendering times whilst creating a reasonably convincing subjective experience of inhabiting a 3D space. That said, it does boast some improvements over DOOM: most notably, it lets you look up and down, an essential capability for an old-school adventure game in which the player is expected to scour every inch of her environment for useful thingamabobs. It thus proved in the context of adventure games a thesis that DOOM had already proved for action games: that gains in interactivity can often more than offset losses in visual fidelity. Just being able to, say, look down from a trapdoor above a piece of furniture and see a crucial detail that had been hidden from floor level was something of a revelation for adventure gamers.

You move freely around Normality‘s world using the arrow keys, just as you do in DOOM. (The “WASD” key combination, much less mouse-look, hadn’t yet become commonplace in 1996.) You interact with the things you see on the screen by clicking on them with the mouse. It feels perfectly natural in no time — more natural, I must say, than any Myst clone has ever felt for me. And you won’t feel bored or lonely in Normality, as so many tend to do in that other style of game; its environment changes constantly and it has plenty of characters to talk to. In this respect as in many others, it’s more Sierra and LucasArts than Myst.

The main character of Normality is a fellow named Kent Knutson, who, some people who worked at Gremlin have strongly implied, was rather a chip off the old block of Adrian Carless himself. He’s an unrepentant slacker who just wants to rock out to his tunes, chow down on pizza, and, one has to suspect based on the rest of his persona, toke up until he’s baked to the perfection of a Toll House cookie. Unfortunately, he’s living in a dictatorial dystopia of the near future, in which conformity to the lowest common denominator — the titular Normality — has been elevated to the highest social value, to be ruthlessly enforced by any and all means necessary. When we first meet Kent, he’s just been released from a stint in jail, his punishment for walking down the street humming a non-sanctioned song. Now he’s to spend some more time in house arrest inside his grotty apartment, with a robot guard just outside the door making sure he keeps his television on 24 hours per day, thereby to properly absorb the propaganda of the Dear Leader, a thoroughly unpleasant fellow named Paul Mystalux. With your help, Kent will find a way to bust out of his confinement. Then he’ll meet the most ineffectual group of resistance fighters in history, prove himself worthy to join their dubious ranks, and finally find a way to bring back to his aptly named city of Neutropolis the freedom to let your freak flag fly.

Adrian Carless. It seems that the apple named Kent didn’t fall far from the tree named Adrian…

There’s a core of something serious here, as I know all too well; I’ve been researching and writing of late about Chairman Mao Zedong’s Cultural Revolution in China, whose own excesses in the name of groupthink were every bit as absurd in their way as the ones that take place in Neutropolis. In practice, though, the game is content to play its premise for laughs. As the creators of Normality put it, “It’s possible to draw parallels between Paul [Mystalux] and many of the truly evil dictators in history — Hitler, Mussolini, Stalin — but we won’t do that now because this is supposed to be light-hearted and fun.” It’s far from the worst way in the world to neutralize tyranny; few things are as deflating to the dictators and would-be dictators among us as being laughed at for the pathetic personal insecurities that make them want to commit such terrible crimes against humanity.

This game is the very definition of laddish humor, as unsubtle as a jab in the noggin, as rarefied as a molehill, as erudite as that sports fan who always seems to be sitting next to you at the bar of a Saturday night. And yet it never fails to be likeable. It always has its heart in the right place, always punches up rather than down. What can I say? I’m a simple man, and this game makes me laugh. My favorite line comes when, true adventure gamer that you are, you try to get Kent to sift through a public trashcan for valuable items: “I have enough trash in my apartment already!”

Normality‘s visual aesthetic is in keeping with its humor aesthetic (not to mention Kent’s taste in music): loud, a little crude, even a trifle obnoxious, but hard to hate for all that. The animations were created by motion-capturing real people, but budget and time constraints meant that it didn’t quite work out. “Feet would float and swim, hands wouldn’t meet, and overall things could look rather strange,” admits artist Ricki Martin. “For sure the end results would have been better if it had been hand-animated.” I must respectfully disagree. To my mind, the shambolic animation only adds to the delightfully low-rent feel of the whole — like an old 1980s Dinosaur Jr. record where the tape hiss and distortion are an essential part of the final impression. (In fact, the whole vibe of the game strikes me as more in line with 1980s underground music than the 1990s grunge that was promised in some of its advertising, much less the Britpop that was sweeping its home country at the time.)

But for all its tossed-off-seeming qualities, Normality has its head screwed on tight where it’s important: it proves to be a meticulously designed adventure game, something neither its overall vibe nor its creators’ lack of experience with the genre would lead one to expect. Thankfully, they learned from the best; all of the principals recall the heavy influence that LucasArts had on them — so much so that they even tried to duplicate the onscreen font found in classics like The Secret of Monkey Island, Day of the Tentacle, and Sam and Max Hit the Road. The puzzles are often bizarre — they do take place in a bizarre setting, after all — but they always have an identifiable cartoon logic to them, and there are absolutely no dead ends to ruin your day. As a piece of design, Normality thus acquits itself much better than many another game from more established adventure developers. You can solve this one on your own, folks; its worst design sin is an inordinate number of red herrings, which I’m not sure really constitutes a sin at all. It’s wonderful to discover an adventure game that defies the skepticism with which I always approach obscure titles in the genre from unseasoned studios.


The game begins in Kent’s hovel of a flat.

The game’s verb menu is capable of frightening small children — or my wife, who declared it the single ugliest thing I’ve ever subjected her to while playing these weird old games in our living room.

Sometimes Normality‘s humor is sly. These rooms with painted-on furniture are a riff on the tendency of some early 3D engines to appear, shall we say, less than full-bodied.

Other times the humor is just dumb — but it still makes me laugh.

The game ends in a noisy concert that’s absolutely off the hook, which is absolutely perfect.



Normality was released with considerable fanfare in Europe, including a fifteen-page promotional spread in the popular British magazine PC Zone, engineered to look like a creation of the magazine’s editorial staff rather than an advertisement. (Journalistic ethics? Schmethics!) Here and elsewhere, Gremlin plugged the game as a well-nigh revolutionary adventure, thanks to its 3D engine. But the public was less than impressed; the game never caught fire.

In the United States, Interplay tried to inject a bit of star power into the equation by hiring the former teen idol Corey Feldman to re-record all of Kent’s lines; mileages will vary here, but personally I prefer original actor Tom Hill’s more laconic approach to Feldman’s trademark amped-up surfer-dude diction. Regardless, the change in casting did nothing to help Normality‘s fortunes in the United States, where it sank without a trace — as is amply testified by the fact that this lifelong adventure fan never even knew it existed until recently. Few of the magazines bothered to review it at all, and those that did took strangely scant notice of its formal and technical innovations. Scorpia, Computer Gaming World‘s influential adventure columnist, utterly buried the lede, mentioning the 3D interface only in nonchalant passing halfway into her review. Her conclusion? “Normality isn’t bad.” Another reviewer pronounced it “mildly fun and entertaining.” With faint praise like that, who needs criticism?

Those who made Normality have since mused that Gremlin and Interplay’s marketing folks might have leaned a bit too heavily on the game’s innovative presentation at the expense of its humorous premise and characters, and there’s probably something to this. Then again, its idiosyncratic vibe resisted easy encapsulation, and was perhaps of only niche appeal anyway — a mistake, if mistake it be, that LucasArts generally didn’t make. Normality was “‘out there,’ making it hard to put a genre on it,” says Graeme Ing, another artist who worked on the game — “unlike Monkey Island being ‘pirates’ and [Day of the] Tentacle being ‘time travel.'” Yet he admits that “I loved the game for the same reasons. Totally unique, not just a copy of another hit.”

I concur. Despite its innovations, Normality is not a major game in any sense of the word, but sometimes being “major” is overrated. To paraphrase Neil Young, traveling in the middle of the road all the time can become a bore. Therefore this site will always have time for gaming’s ditches — more time than ever, I suspect, as we move deeper into the latter half of the 1990s, an era when gaming’s mainstream was becoming ever more homogenized. My thanks go to Sarah Walker for turning me onto this scruffy outsider, which I’m happy to induct into my own intensely idiosyncratic Hall of Fame.

(Sources: the book A Gremlin in the Works by Mark James Hardisty, which, with its digital supplement included, gives you some 800 pages on the history of Gremlin Interactive, thus nicely remedying this site’s complete silence on that subject prior to now. It comes highly recommended! Also Computer Gaming World of November 1996, Next Generation of November 1996, PC Zone of May 1996, PC World of September 1996, Retro Gamer 11, 61, and 75.

Normality is available for digital purchase at GOG.com, in a version with the original voice acting. Two tips: remember that you can look up and down using the Page Up and Page Down keys, and know that you can access the map view to move around the city at any time by pressing “M.” Don’t do what I did: spend more than an hour searching in vain for the exit to a trash silo I thought I was trapped inside — even if that does seem a very Kent thing to do…)

 
 


Doing Windows, Part 12: David and Goliath

Microsoft, intent on its mission to destroy Netscape, rolled out across the industry with all the subtlety and attendant goodwill of Germany invading Poland…

— Merrill R. Chapman

No one reacted more excitedly to the talk of Java as the dawn of a whole new way of computing than did the folks at Netscape. Marc Andreessen, whose head had swollen exactly as much as the average 24-year-old’s would upon being repeatedly called a great engineer, businessman, and social visionary all rolled into one, was soon proclaiming Netscape Navigator to be far more than just a Web browser: it was general-purpose computing’s next standard platform, possibly the last one it would ever need. Java, he said, generously sharing the credit for this development, was “as revolutionary as the Web itself.” As for Microsoft Windows, it was merely “a poorly debugged set of device drivers.” Many even inside Netscape wondered whether he was wise to poke the bear from Redmond so, but he was every inch a young man feeling his oats.

Just two weeks before the release of Windows 95, the United States Justice Department had ended a lengthy antitrust investigation of Microsoft’s business practices with a decision not to bring any charges. Bill Gates and his colleagues took this to mean it was open season on Netscape.

Thus, just a few weeks after the bravura Windows 95 launch, a war that would dominate the business and computing press for the next three years began. The opening salvo from Microsoft came in a weirdly innocuous package: something called the “Windows Plus Pack,” which consisted mostly of slightly frivolous odds and ends that hadn’t made it into the main Windows 95 distribution — desktop themes, screensavers, sound effects, etc. But it also included the very first release of Microsoft’s own Internet Explorer browser, the fruit of the deal with Spyglass. After you put the Plus! CD into the drive and let the package install itself, it was as hard to get rid of Internet Explorer as it was a virus. For unlike all other applications, there appeared no handy “uninstall” option for Internet Explorer. Once it had its hooks in your computer, it wasn’t letting go for anything. And its preeminent mission in life there seemed to be to run roughshod over Netscape Navigator. It inserted itself in place of its arch-enemy in your file associations and everywhere else, so that it kept turning up like a bad penny every time you clicked a link. If you insisted on bringing up Netscape Navigator in its stead, you were greeted with the pointed “suggestion” that Internet Explorer was the better, more stable option.

Microsoft’s biggest problem at this juncture was that that assertion didn’t hold water; Internet Explorer 1.0 was only a modest improvement over the old NCSA Mosaic browser on whose code it was based. Meanwhile Netscape was pushing aggressively forward with its vision of the browser as a platform, a home for active content of all descriptions. Netscape Navigator 2.0, whose first beta release appeared almost simultaneously with Internet Explorer 1.0, doubled down on that vision by including an email and Usenet client. More importantly, it supported not only Java but a second programming language for creating active content on the Web — a language that would prove much more important to the evolution of the Web in the long run.

Even at this early stage — still four months before Sun would deign to grant Java its own 1.0 release — some of the issues with using it on the Web were becoming clear: namely, the weight of the virtual machine that had to be loaded and started before a Java applet could run, and said applet’s inability to communicate easily with the webpage that had spawned it. Netscape therefore decided to create something that lay between the static simplicity of vanilla HTML and the dynamic complexity of Java. The language called JavaScript would share much of its big brother’s syntax, but it would be interpreted rather than compiled, and would live in the same environment as the HTML that made up a webpage rather than in a sandbox of its own. In fact, it would be able to manipulate that HTML directly and effortlessly, changing the page’s appearance on the fly in response to the user’s actions. The idea was that programmers would use JavaScript for very simple forms of active content — like, say, a popup photo gallery or a scrolling stock ticker — and use Java for full-fledged in-browser software applications — i.e., your word processors and the like.

In contrast to Java, a compiled language walled off inside its own virtual machine, JavaScript is embedded directly into the HTML that makes up a webpage, using the handy “<script>” tag.
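For anyone who has never peeked at a page’s source, here is a minimal, self-contained example of the principle — written in today’s idiom rather than period-accurate 1995 style, with element and function names of my own invention — showing a few lines of script living inside the page’s HTML and rewriting that HTML in response to a click:

<!DOCTYPE html>
<html>
  <body>
    <p id="greeting">Welcome to the World Wide Web.</p>
    <button onclick="updateGreeting()">Click me</button>

    <!-- The script shares the page with the HTML it manipulates:
         no virtual machine to load, no applet sandbox to talk across. -->
    <script>
      function updateGreeting() {
        document.getElementById("greeting").textContent =
          "The page just changed itself, with no round trip to the server.";
      }
    </script>
  </body>
</html>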

There’s really no way to say this kindly: JavaScript was (and is) a pretty horrible programming language by any objective standard. Unlike Java, which was the product of years of thought, discussion, and experimentation, JavaScript was the very definition of “quick and dirty” in a computer-science context. Even its principal architect Brendan Eich doesn’t speak of it like an especially proud parent; he calls it “Java’s dumb little brother” and “a rush job.” Which it most certainly was: he designed and implemented JavaScript from scratch in a matter of bare weeks.

What he ended up with would revolutionize the Web not because it was good, but because it was good enough, filling a craving that turned out to be much more pressing and much more satisfiable in the here and now than the likes of in-browser word processing. The lightweight JavaScript could be used to bring the Web alive, to make it a responsive and interactive place, more quickly and organically than the heavyweight Java. Once JavaScript had reached a critical mass in that role, it just kept on rolling with all the relentlessness of a Microsoft operating system. Today an astonishing 98 percent of all webpages contain at least a little bit of JavaScript in addition to HTML, and a cottage industry has sprung up to modify and extend the language — and attempt to fix the many infelicities that haunt the sleep of computer-science professors all over the world. JavaScript has become, in other words, the modern world’s nearest equivalent to what BASIC was in the 1980s, a language whose ease of use, accessibility, and populist appeal make up for what it lacks in elegance. These days we even do online word processing in JavaScript. If you had told Brendan Eich that that would someday be the case back in 1995, he would have laughed as loud and long at you as anyone.

Although no one could know it at the time, JavaScript also represents the last major building block to the modern Web for which Marc Andreessen can take a substantial share of the credit, following on from the “image” tag for displaying inline graphics, the secure sockets layer (SSL) for online encryption (an essential for any form of e-commerce), and to a lesser extent the Java language. Microsoft, by contrast, was still very much playing catch-up.

Nevertheless, on December 7, 1995 — the symbolism of this anniversary of the United States’s entry into World War II was lost on no one — Bill Gates gave a major address to the Microsoft faithful and assembled press, in which he made it clear that Microsoft was in the browser war to win it. In addition to announcing that his company too would bite the bullet and license Java for Internet Explorer, he said that the latter browser would no longer be a Windows 95 exclusive, but would soon be made available for Windows 3 and even MacOS as well. And everywhere it appeared, it would continue to sport the very un-Microsoft price tag of free, proof that this old dog was learning some decidedly new tricks for achieving market penetration in this new era of online software distribution. “When we say the browser’s free, we’re saying something different from other people,” said Gates, in a barbed allusion to Netscape’s shareware distribution model. “We’re not saying, ‘You can use it for 90 days,’ or, ‘You can use it and then maybe next year we’ll charge you a bunch of money.'” Netscape, whose whole business revolved around its browser, couldn’t afford to give Navigator away, a fact of which Gates was only too well aware. (Some pundits couldn’t resist contrasting this stance with Gates’s famous 1976 “Open Letter To Hobbyists,” in which he had asked, “Who can afford to do professional work for nothing?” Obviously Microsoft now could…)

Netscape’s stock price dropped by $28.75 that day. For Microsoft’s research budget alone was five times the size of Netscape’s total annual revenues, while the bigger company now had more than 800 people — twice Netscape’s total headcount — working on Internet Explorer alone. Marc Andreessen could offer only vague Silicon Valley aphorisms when queried about these disparities: “In a fight between a bear and an alligator, what determines the victor is the terrain” — and Microsoft, he claimed, had now moved “onto our terrain.” The less abstractly philosophical Larry Ellison, head of the database giant Oracle and a man who had had more than his share of run-ins with Bill Gates in the past, joked darkly about the “four stages” of Microsoft stealing someone else’s innovation. Stage 1: to “ridicule” it. Stage 2: to admit that, “yeah, there are a few interesting ideas here.” Stage 3: to make its own version. Stage 4: to make the world forget that the non-Microsoft version had ever existed.

Yet for the time being the Netscape tail continued to wag the Microsoft dog. A more interactive and participatory vision of the Web, enabled by the magic of JavaScript, was spreading like wildfire by the middle of 1996. You still needed Netscape Navigator to experience this first taste of what would eventually be labelled Web 2.0, a World Wide Web that blurred the lines between readers and writers, between content consumers and content creators. For if you visited one of these cutting-edge sites with Internet Explorer, it simply wouldn’t work. Despite all of Microsoft’s efforts, Netscape in June of 1996 could still boast of a browser market share of 85 percent. Marc Andreessen’s Sun Tzu-lite philosophy appeared to have some merit to it after all; his company was by all indications still winning the browser war handily. Even in its 2.0 incarnation, which had been released at about the same time as Gates’s Pearl Harbor speech, Internet Explorer remained something of a joke among Windows users, the annoying mother-in-law you could never seem to get rid of once she showed up.

But then, grizzled veterans like Larry Ellison had seen this movie before; they knew that it was far too early to count Microsoft out. That August, both Netscape and Microsoft released 3.0 versions of their browsers. Netscape’s was a solid evolution of what had come before, but contained no game changers like JavaScript. Microsoft’s, however, was a dramatic leap forward. In addition to Java support, it introduced JScript, a lightweight scripting language that just so happened to have the same syntax as JavaScript. At a stroke, all of those sites which hadn’t worked with earlier versions of Internet Explorer now displayed perfectly well in either browser.

With his browser itself more or less on a par with Netscape’s, Bill Gates decided it was time to roll out his not-so-secret weapon. In October of 1996, Microsoft began shipping Windows 95’s “OEM Service Release 2,” the second substantial revision of the operating system since its launch. Along with a host of other improvements, it included Internet Explorer. From now on, the browser would ship with every single copy of Windows 95 and be installed automatically as part of the operating system, whether the user wanted it or not. New Windows users would have to make an active choice and then an active effort to go to Netscape’s site — using Internet Explorer, naturally! — and download the “alternative” browser. Microsoft was counting on the majority of these users not knowing anything about the browser war and/or just not wanting to be bothered.

Microsoft employed a variety of carrots and sticks to pressure other companies throughout the computing ecosystem to give their customers Internet Explorer, or at the bare minimum to recommend it, in lieu of Netscape Navigator. It wasn’t above making the favorable Windows licensing deals it signed with big consumer-computer manufacturers like Compaq dependent on precisely this. But the most surprising pact by far was the one Microsoft made with America Online (AOL).

Relations between the face of the everyday computing desktop and the face of the Internet in the eyes of millions of ordinary Americans had been anything but cordial in recent years. Bill Gates had reportedly told Steve Case, his opposite number at AOL, that he would “bury” him with his own Microsoft Network (MSN). Meanwhile Case had complained long and loud about Microsoft’s bullying tactics to the press, to the point of mooting a comparison between Gates and Adolf Hitler on at least one occasion. Now, though, Gates was willing to eat crow and embrace AOL, even at the expense of his own MSN, if he could stick it to Netscape in the process.

For its part, AOL had come as far as it could with its Booklink browser. The Web was evolving too rapidly for the little development team it had inherited with that acquisition to keep up. Case grudgingly accepted that he needed to offer his customers one of the Big Two browsers. All of his natural inclinations bent toward Netscape. And indeed, he signed a deal with Netscape to make Navigator the browser that shipped with AOL’s turnkey software suite — or so Netscape believed. It turned out that Netscape’s lawyers had overlooked one crucial detail: they had never stipulated exclusivity in the contract. This oversight wasn’t lost on the interested bystander Microsoft, which swooped in immediately to take advantage of it. AOL soon announced another deal, to provide its customers with Internet Explorer as well. Even worse for Netscape, this deal promised Microsoft not only availability but priority: Internet Explorer would be AOL’s recommended, default browser, Netscape Navigator merely an alternative for iconoclastic techies (of which there were, needless to say, very few in AOL’s subscriber base).

What did AOL get in return for getting into bed with Adolf Hitler and “jilting Netscape at the altar,” as the company’s own lead negotiator would later put it? An offer that was impossible for a man with Steve Case’s ambitions to refuse, as it happened. Microsoft would put an AOL icon on the desktop of every new Windows 95 installation, where the hundreds of thousands of Americans who were buying a computer every month in order to check out this Internet thing would see it sitting there front and center, and know, thanks to AOL’s nonstop advertising blitz, that the wonders of the Web were just one click on it away. It was a stunning concession on Microsoft’s part, not least because it came at the direct cost of MSN, the very online network Bill Gates had originally conceived as his method of “burying” AOL. Now, though, no price was too high to pay in his quest to destroy Netscape.

Which raises the question of why he was so obsessed, given that Microsoft was making literally no money from Internet Explorer. The answer is rooted in all that rhetoric that was flying around at the time about the browser as a computing platform — about the Web effectively turning into a giant computer in its own right, floating up there somewhere in the heavens, ready to give a little piece of itself to anyone with a minimalist machine running Netscape Navigator. Such a new world order would have no need for a Microsoft Windows — perish the thought! But if, on the other hand, Microsoft could wrest the title of leading browser developer out of the hands of Netscape, it could control the future evolution of this dangerously unruly beast known as the World Wide Web, and ensure that it didn’t encroach on its other businesses.

That the predictions which prompted Microsoft’s downright unhinged frenzy to destroy Netscape were themselves wildly overblown is ironic but not material. As tech journalist Merrill R. Chapman has put it, “The prediction that anyone was going to use Navigator or any other browser anytime soon to write documents, lay out publications, build budgets, store files, and design presentations was a fantasy. The people who made these breathless predictions apparently never tried to perform any of these tasks in a browser.” And yet in an odd sort of way this reality check didn’t matter. Perception can create its own reality, and Bill Gates’s perception of Netscape Navigator as an existential threat to the software empire he had spent the last two decades building was enough to make the browser war feel like a truly existential clash for both parties, even if the only one whose existence actually was threatened — urgently threatened! — was Netscape. Jim Clark, Marc Andreessen’s partner in founding Netscape, makes the eyebrow-raising claim that he “knew we were dead” in the long run well before the end of 1996, when the Department of Justice declined to respond to an urgent plea on Netscape’s part to take another look at Microsoft’s business practices.

Perhaps the most surprising aspect of the conflict is just how long Netscape’s long run proved to be. It was in most respects David versus Goliath: Netscape in 1996 had $300 million in annual revenues to Microsoft’s nearly $9 billion. But whatever the disparities of size, Netscape had built up a considerable reservoir of goodwill as the vehicle through which so many millions had experienced the Web for the first time. Microsoft found this soft power oddly tough to overcome, even with a browser of its own that was largely identical in functional terms. A remarkable number of people continued to make the active choice to use Netscape Navigator instead of the passive one to use Internet Explorer. By October of 1997, one year after Microsoft brought out the big gun and bundled Internet Explorer right into Windows 95, its browser’s market share had risen as high as 39 percent — but it was Netscape that still led the way at 51 percent.

Yet Netscape wasn’t using those advantages it did possess all that effectively. It was not a happy or harmonious company: there were escalating personality clashes between Jim Clark and Marc Andreessen, and also between Andreessen and his programmers, who thought their leader had become a glory hound, too busy playing the role of the young dot.com millionaire to pay attention to the vital details of software development. Perhaps as a result, Netscape’s drive to improve its browser in paradigm-shifting ways seemed to slowly dissipate after the landmark Navigator 2.0 release.

Netscape, so recently the darling of the dot.com age, was now finding it hard to make a valid case for itself merely as a viable business. The company’s most successful quarter in financial terms was the third of 1996 — just before Internet Explorer became an official part of Windows 95 — when it brought in $100 million in revenue. Receipts fell precipitously after that point, all the way down to just $18.5 million in the last quarter of 1997. By so aggressively promoting Internet Explorer as entirely and perpetually free, Bill Gates had, whether intentionally or inadvertently, instilled in the general public an impression that all browsers were or ought to be free, due to some unstated reason inherent in their nature. (This impression has never been overturned, as the failure of otherwise worthy commercial browsers like Opera to capture much market share has testified over the years.) Thus even the vast majority of those who did choose Netscape’s browser no longer seemed to feel any ethical obligation to pay for it. Netscape was left in a position all too familiar to Web firms of the past and present alike: that of having immense name recognition and soft power, but no equally impressive revenue stream to accompany them. It tried frantically to pivot into back-end server architecture and corporate intranet solutions, but its efforts there were, as its bottom line would attest, not especially successful. It launched a Web portal and search engine known as Netcenter, but struggled to gain traction against Yahoo!, the leader in that space. Both Jim Clark and Marc Andreessen sold off large quantities of their personal stock, never a good sign in Silicon Valley.

Netscape Navigator was renamed Netscape Communicator for its 4.0 release in June of 1997. As the name would imply, Communicator was far more than just a browser, or even just a browser with an integrated email client and Usenet reader, as Navigator had been since version 2.0. Now it also sported an integrated editor for making your own websites from scratch, a real-time chat system, a conference caller, an appointment calendar, and a client for “pushing” usually unwanted content to your screen. It was all much, much too much, weighted down with features most people would never touch, big and bloated and slow and disturbingly crash-prone; small wonder that even many Netscape loyalists chose to stay with Navigator 3 after the release of Communicator. Microsoft had not heretofore been known for making particularly svelte software, but Internet Explorer, which did nothing but browse the Web, was a lean ballerina by comparison with the lumbering Sumo wrestler that was Netscape Communicator. The original Netscape Navigator had sprung from the hacker culture of institutional computing, but the company had apparently now forgotten one of that culture’s key dictums in its desire to make its browser a platform unto itself: the best programs are those that do only one thing, but do that one thing very, very well, leaving all of the other things to other programs.

Netscape Communicator. I’m told that there’s an actual Web browser buried somewhere in this pile. Probably a kitchen sink too, if you look hard enough.

Luckily for Netscape, Internet Explorer 4.0, which arrived three months after Communicator, violated the same dictum in an even more inept way. It introduced what Microsoft called the “Active Desktop,” which let it bury its hooks deeper than ever into Windows itself. The Active Desktop was — or tried to be — Bill Gates’s nightmare come to life: a Web that was impossible to separate from one’s local computer, but with Microsoft’s own logo on it. Ironically, it blurred the distinction between the local computer and the Internet more thoroughly than anything the likes of Sun or Netscape had produced to date; local files and applications became virtually indistinguishable from those that lived on the Internet in the new version of the Windows desktop it installed in place of the old. The end result served mainly to illustrate how half-baked all of the prognostications about a new era of computing exclusively in the cloud really were. The Active Desktop was slow and clumsy and confusing, and absolutely everyone who was exposed to it seemed to hate it and rush to find a way to turn it off. Fortunately for Microsoft, it was possible to do so without removing the Internet Explorer 4 browser itself.

The dreaded Active Desktop. Surprisingly, it was partially defended on philosophical grounds by Tim Berners-Lee, not normally a fan of Microsoft. “It was ridiculous for a person to have two separate interfaces, one for local information (the desktop for their own computer) and one for remote information (a browser to reach other computers),” he writes. “Why did we need an entire desktop for our own computer, but only get little windows through which to view the rest of the planet? Why, for that matter, should we have folders on our desktop but not on the Web? The Web was supposed to be the universe of all accessible information, which included, especially, information that happened to be stored locally. I argued that the entire topic of where information was physically stored should be made invisible to the user.” For better or for worse, though, the public didn’t agree. And even he had to allow that “this did not have to imply that the operating system and browser should be the same program.”

The Active Desktop damaged Internet Explorer’s reputation, but arguably not as badly as Netscape’s had been damaged by the bloated Communicator. For once you turned off all that nonsense, Internet Explorer 4 proved to be pretty good at doing the rest of its job. But there was no similar method for trimming the fat from Netscape Communicator.

While Microsoft and Netscape, those two for-profit corporations, had been vying with one another for supremacy on the Web, another, quieter party had been looking on with great concern. Before the Web had become the hottest topic of the business pages, it had been an idea in the head of the mild-mannered British computer scientist Tim Berners-Lee. He had built the Web on the open Internet, using a new set of open standards; his inclination had never been to control his creation personally. It was to be a meeting place, a library, a forum, perhaps a marketplace if you liked — but always a public commons. When Berners-Lee formed the non-profit World Wide Web Consortium (W3C) in October of 1994 in the hope of guiding an orderly evolution of the Web that kept it independent of the moneyed interests rushing to join the party, it struck many as a quaint endeavor at best. Key technologies like Java and JavaScript appeared and exploded in popularity without giving the W3C a chance to say anything about them. (Tellingly, the word “JavaScript” never even appears in Berners-Lee’s 1999 book about his history with and vision for the Web, despite the scripting language’s almost incalculable importance to making it the dynamic and diverse place it had become by that point.)

From the days when he had been a mere University of Illinois student making a browser on the side, Marc Andreessen had blazed his own trail without giving much thought to formal standards. When the things he unilaterally introduced proved useful, others rushed to copy them, and they became de-facto standards. This was as true of JavaScript as it was of anything else. As we’ve seen, it began as a Netscape-exclusive feature, but was so obviously transformative to what the Web could do and be that Microsoft had no choice but to copy it, to incorporate its own implementation of it into Internet Explorer.

But JavaScript was just about the last completely new feature to be rolled out and widely adopted in this ad-hoc fashion. As the Web reached a critical mass, with Netscape Navigator and Internet Explorer both powering users’ experiences of it in substantial numbers, site designers had a compelling reason not to use any technology that only worked on the one or the other; they wanted to reach as many people as possible, after all. This brought an uneasy sort of equilibrium to the Web.
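It is worth illustrating what that equilibrium meant in practice for the people actually building sites. Once the 4.0 browsers arrived with their mutually incompatible flavors of “dynamic HTML,” a designer who wanted to reach everyone typically probed for whatever object model was present instead of assuming one vendor’s. The following is only a hypothetical sketch in that spirit; the setStatusText function and the “status” element it writes into are invented for illustration:

    <SCRIPT LANGUAGE="JavaScript">
    // Write a message into a named region of the page using whichever
    // object model the visitor's browser happens to supply.
    function setStatusText(text) {
      if (document.all) {                   // Internet Explorer 4's collection
        document.all["status"].innerHTML = text;
      } else if (document.layers) {         // Netscape Navigator 4's layers
        var layer = document.layers["status"];
        layer.document.write(text);
        layer.document.close();
      } else {                              // older browsers: use the status bar
        window.status = text;
      }
    }
    </SCRIPT>

Scripts full of branches like these were the price of writing for two de-facto standards at once, and they hint at why designers were so wary of any feature that worked in only one of the two browsers.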

Nevertheless, the first instinct of both Netscape and Microsoft remained to control rather than to share the Web. Both companies’ histories amply demonstrated that open standards meant little to them; they preferred to be the standard. What would happen if and when one company won the browser war, as Microsoft seemed slowly to be doing by 1997, what with the trend lines all going in its favor and Netscape in veritable financial free fall? Once 90 percent or more of the people browsing the Web were doing so with Internet Explorer, Microsoft would be free to give its instinct for dominance free rein. With an army of lawyers at its beck and call, it would be able to graft onto the Web proprietary, patented technologies that no upstart competitor would be able to reverse-engineer and copy, and pragmatic website designers would no longer have any reason not to use them, if they could make their sites better. And once many or most websites depended on these features that were available only in Internet Explorer, that would be that for the open Web. Despite its late start, Microsoft would have managed to embrace, extend, and in a very real sense destroy Tim Berners-Lee’s original vision of a World Wide Web. The public commons would have become a Microsoft-branded theme park.

These worries were being bandied about with ever-increasing urgency in January of 1998, when Netscape made what may just have been the most audacious move of the entire dot.com boom. Like most such moves, it was born of sheer desperation, but that shouldn’t blind us to its importance and even bravery. First of all, Netscape made its browser free as in beer, finally giving up on even asking people to pay for the thing. Admittedly, though, this in itself was little more than an acceptance of the reality on the ground, as it were. It was the other part of the move that really shocked the tech world: Netscape also made its browser free as in freedom — it opened up its source code to all and sundry. “This was radical in its day,” remembers Mitchell Baker, one of the prime drivers of the initiative at Netscape. “Open source is mainstream now; it was not then. Open source was deep, deep, deep in the technical community. It never surfaced in a product. [This] was a very radical move.”

Netscape spun off a not-for-profit organization, led by Baker and called Mozilla, after a cartoon dinosaur that had been the company’s office mascot almost from day one. Coming well before the Linux operating system began conquering large swaths of corporate America, this was to be open source’s first trial by fire in the real world. Mozilla was to concentrate on the core code required for rendering webpages — the engine room of a browser, if you will. Then others — not least among them the for-profit arm of Netscape — would build the superstructures of finished applications around that sturdy core.

Alas, Netscape the for-profit company was already beyond saving. If anything, this move only hastened the end; Netscape had chosen to give away the one product it had that some tiny number of people were still willing to pay for. Some pundits talked it up as a dying warrior’s last defiant attempt to pass the sword to others, to continue the fight against Microsoft and Internet Explorer: “From the depths of Hell, I spit at thee!” Or, as Tim Berners-Lee put it more soberly: “Microsoft was bigger than Netscape, but Netscape was hoping the Web community was bigger than Microsoft.” And there may very well be something to these points of view. But regardless of the motivations behind it, the decision to open up Netscape’s browser proved both a landmark in the history of open-source software and a potent weapon in the fight to keep the Web itself open and free. Mozilla has had its ups and downs over the years since, but it remains with us to this day, still providing an alternative to the corporate-dominated browsers almost a quarter-century on, having outlived the more conventional corporation that spawned it by a factor of six.

Mozilla’s story is an important one, but we’ll have to leave the details of it for another day. For now, we return to the other players in today’s drama.

While Microsoft and Netscape were battling one another, AOL was soaring into the stratosphere, the happy beneficiary of Microsoft’s decision to give it an icon on the Windows 95 desktop in the name of vanquishing Netscape. In 1997, in a move fraught with symbolic significance, AOL bought CompuServe, its last remaining competitor from the pre-Web era of closed, proprietary online services. By the time Netscape open-sourced its browser, AOL had 12 million subscribers and annual profits — profits, mind you, not revenues — of over $500 million, thanks not only to subscription fees but to the new frontier of online advertising, where revenues and profits were almost one and the same. At not quite 40 years old, Steve Case had become a billionaire.

“AOL is the Internet blue chip,” wrote the respected stock analyst Henry Blodget. And indeed, for all of its association with new and shiny technology, there was something comfortingly stolid — even old-fashioned — about the company. Unlike so many of his dot.com compatriots, Steve Case had found a way to combine name recognition and a desirable product with customers who actually paid for said product. He liked to compare AOL with a cable-television provider; this was a comparison that even the most hidebound investors could easily understand. Real, honest-to-God checks rolled into AOL’s headquarters every month from real, honest-to-God people who signed up for real, honest-to-God paid subscriptions. So what if the tech intelligentsia laughed and mocked, called AOL “the cockroach of cyberspace,” and took an “@AOL.com” suffix on someone’s email address as a sign that they were too stupid to be worth talking to? Case and his shareholders knew that money from the unwashed masses spent just as well as money from the tech elites.

Microsoft could finally declare victory in the browser war in the summer of 1998, when the two browsers’ trend lines crossed one another. At long last, Internet Explorer’s popularity equaled and then rapidly eclipsed that of Netscape Navigator/Communicator. It hadn’t been clean or pretty, but Microsoft had bludgeoned its way to the market share it craved.

A few months later, AOL acquired Netscape through a stock swap that involved no cash, but was worth a cool $9.8 billion on paper — an almost comical sum in relation to the amount of actual revenue the purchased company had brought in during its lifetime. Jim Clark and Marc Andreessen walked away very, very rich men. Just as Netscape’s big IPO had been the first of its breed, the herald of the dot.com boom, Netscape now became the first exemplar of the boom’s unique style of accounting, which allowed people to get rich without ever having run a profitable business.

Even at the time, it was hard to figure out just what it was about Netscape that AOL thought was worth so much money. The deal is probably best understood as a product of Steve Case’s fear of a Microsoft-dominated Web; despite that AOL icon on the Windows desktop, he still didn’t trust Bill Gates any farther than he could throw him. In the end, however, AOL got almost nothing for its billions. Netscape Communicator was renamed AOL Communicator and offered to the service’s subscribers, but even most of them, technically unsophisticated though they tended to be, could see that Internet Explorer was the cleaner and faster and just plain better choice at this juncture. (The open-source coders working with Mozilla belatedly realized the same; they would wind up spending years writing a brand-new browser engine from scratch after deciding that Netscape’s just wasn’t up to snuff.)

Most of Netscape’s remaining engineers walked soon after the deal was made. They tended to describe the company’s meteoric rise and fall in terms befitting a Shakespearean tragedy. “At least the old-timers among us came to Netscape to change the world,” lamented one. “Getting killed by the Evil Empire, being gobbled up by a big corporation — it’s incredibly sad.” If that’s painting with rather too broad a brush — one should always run away screaming when a Silicon Valley denizen starts talking about “changing the world” — it can’t be denied that Netscape at no time enjoyed a level playing field in its war against Microsoft.

But times do change, as Microsoft was about to learn to its cost. In May of 1998, the Department of Justice filed suit against Microsoft for illegally exploiting its Windows monopoly in order to crush Netscape. The suit came too late to save the latter, but it was all over the news even as the first copies of Windows 98, the hotly anticipated successor to Windows 95, were reaching store shelves. Bill Gates had gotten his wish; Internet Explorer and Windows were now indissolubly bound together. Soon he would have cause to wish that he had not striven for that outcome quite so vigorously.

(Sources: the books Overdrive: Bill Gates and the Race to Control Cyberspace by James Wallace, The Silicon Boys by David A. Kaplan, Architects of the Web by Robert H. Reid, Competing on Internet Time: Lessons from Netscape and Its Battle with Microsoft by Michael Cusumano and David B. Yoffie, dot.con: The Greatest Story Ever Sold by John Cassidy, Stealing Time: Steve Case, Jerry Levin, and the Collapse of AOL Time Warner by Alec Klein, Fools Rush In: Steve Case, Jerry Levin, and the Unmaking of AOL Time Warner by Nina Munk, There Must Be a Pony in Here Somewhere: The AOL Time Warner Debacle by Kara Swisher, In Search of Stupidity: Over Twenty Years of High-Tech Marketing Disasters by Merrill R. Chapman, Coders at Work: Reflections on the Craft of Programming by Peter Seibel, and Weaving the Web by Tim Berners-Lee. Online sources include “1995: The Birth of JavaScript” at Web Development History, the New York Times timeline of AOL’s history, and Mitchell Baker’s talk about the history of Mozilla, which is available on Wikipedia.)

 

 
