
Darklands

Darklands may well have been the most original single CRPG of the 1990s, but its box art was planted firmly in the tacky CRPG tradition. I’m not sure that anyone in Medieval Germany really looked much like these two…

Throughout the 1980s and well into the 1990s, the genres of the adventure game and the CRPG tended to blend together, in magazine columns as well as in the minds of ordinary gamers. I thus considered it an early point of order for this history project to attempt to identify the precise differences between the genres. Rather than addressing typical surface attributes — a CRPG, many a gamer has said over the years, is an adventure game where you also have to kill monsters — I tried to peek under the hood and identify what really makes the two genres tick. At bottom, I decided, the difference was one of design philosophy. The adventure game focuses on set-piece, handcrafted puzzles and other unique interactions, simulating the world that houses them only to the degree that is absolutely necessary. (This latter is especially true of the point-and-click graphic adventures that came to dominate the field after the 1980s; indeed, throughout gaming history, the trend in adventure games has been to become less rather than more ambitious in terms of simulation.) The CRPG, meanwhile, goes in much more for simulation, to a large degree replacing set-piece behaviors with systems of rules which give scope for truly emergent experiences that were never hard-coded into the design.

Another clear difference between the two genres, however, is in the scope of their fictions’ ambitions. Since the earliest days of Crowther and Woods and Scott Adams, adventure games have roamed widely across the spectrum of storytelling; Infocom alone during the 1980s hit on most of the viable modern literary genres, from the obvious (fantasy, science fiction) to the slightly less obvious (mysteries, thrillers) to the downright surprising (romance novels, social satires). CRPGs, on the other hand, have been plowing more or less the same small plot of fictional territory for decades. How many times now have groups of stalwart men and ladies set forth to conquer the evil wizard? While we do get the occasional foray into science fiction — usually awkwardly hammered into a frame of gameplay conventions more naturally suited to heroic fantasy — it’s for the most part been J.R.R. Tolkien and Dungeons & Dragons, over and over and over again.

This seeming lack of adventurousness (excuse the pun!) among CRPG designers raises some interesting questions. Can the simulation-oriented approach only be made to work within a strictly circumscribed subset of possible virtual worlds? Or is the lack of variety in CRPGs down to a simple lack of trying? A case for answering the latter question in the affirmative might be made by Origin Systems’s two rather wonderful Worlds of Ultima games of the early 1990s, which retained the game engine from the more traditional fantasy CRPG Ultima VI but moved it into settings inspired by the classic adventure tales of Arthur Conan Doyle and H.G. Wells. Sadly, though, Origin’s customers seemed not to know what to make of Ultima games not taking place in a Renaissance Faire world, and both were dismal commercial failures — thus providing CRPG makers with a strong external motivation to stick with high fantasy, whatever the abstract limits of the applicability of the CRPG formula to fiction might be.

Our subject for today — Darklands, the first CRPG ever released by MicroProse Software — might be described as the rebuttal to the case made by the Worlds of Ultima games, in that its failings point to some of the intrinsic limits of the simulation-oriented approach. Then again, maybe not; today, perhaps even more so than when it was new, this is a game with a hardcore fan base who love it with a passion, even as other players, like the one who happens to be writing this article, see it as rather collapsing under the weight of its ambition and complexity. Whatever your final verdict on it, it’s undeniable that Darklands is overflowing with original ideas for a genre which, even by the game’s release year of 1992, had long since settled into a set of established expectations. By upending so many of them, it became one of the most intriguing CRPGs ever made.



Darklands was the brainchild of Arnold Hendrick, a veteran board-game, wargame, tabletop-RPG, and console-videogame designer who joined MicroProse in 1985, when it was still known strictly as a maker of military simulations. As the first MicroProse employee hired only for a design role — he had no programming or other technical experience whatsoever — he began to place his stamp on the company’s products immediately. It was Hendrick who first had the germ of an idea that Sid Meier, MicroProse’s star programmer/designer, turned into Pirates!, the first MicroProse game to depart notably from the company’s established formula. In addition to Pirates!, for which he continued to serve as a scenario designer and historical consultant even after turning the lead-designer reins over to Meier, Hendrick worked on other games whose feet were more firmly planted in MicroProse’s wheelhouse: titles like Gunship, Project Stealth Fighter, Red Storm Rising, M1 Tank Platoon, and Silent Service II.

“Wild” Bill Stealey, the flamboyant head of MicroProse, had no interest whatsoever in any game that wasn’t a military flight simulator. Still, he liked making money even more than he liked flying virtual aircraft, and by 1990 he wasn’t sure how much more he could grow his company if it continued to make almost nothing but military simulations and the occasional strategic wargame. Meanwhile he had Pirates! and Railroad Tycoon, the latter being Sid Meier’s latest departure from military games, to look at as examples of how successful non-traditional MicroProse games could be. Not knowing enough about other game genres to know what else might be a good bet for his company, he threw the question up to his creative and technical staff: “Okay, programmers, give me what you want to do, and tell me how much money you want to spend. We’ll find a way to sell it.”

And so Hendrick came forward with a proposal to make a CRPG called Darklands, to be set in the Germany of the 15th century, a time and place of dark forests and musty monasteries, Walpurgis Night and witch covens. It could become, Hendrick said, the first of a whole new series of historical CRPGs that, even as they provided MicroProse with an entrée into one of the most popular genres out there, would also leverage the company’s reputation for making games with roots in the real world.

The typical CRPG, then as now, took place in a version of Medieval times that had only ever existed in the imagination of a modern person raised on Tolkien and Dungeons & Dragons. It ignored how appallingly miserable and dull life was for the vast majority of people who lived through the historical reality of the Middle Ages, with its plagues, wars, filth, hard labor, and nearly universal illiteracy. Although he was a dedicated student of history, with a university degree in the field, Hendrick too was smart enough to realize that there wasn’t much of a game to be had by hewing overly close to this mundane historical reality. But what if, instead of portraying a Medieval world as his own contemporaries liked to imagine it to have been, he conjured up the world of the Middle Ages as the people who had lived in it had imagined it to be? God and his many saints would take an active role in everyday affairs, monsters and devils would roam the forests, alchemy would really work, and those suspicious-looking folks who lived in the next village really would be enacting unspeakable rituals in the name of Satan every night. “This is an era before logic or science,” Hendrick wrote, “a time when anything is possible. In short, if Medieval Germans believed something to be true, in Darklands it might actually be true.”

He wanted to incorporate an interwoven tapestry of Medieval imagination and reality into Darklands: a magic system based on Medieval theories about alchemy; a pantheon of real saints to pray to, each able to grant her own special favors; a complete, lovingly detailed map of 15th-century Germany and lands adjacent, over which you could wander at will; hundreds of little textual vignettes oozing with the flavor of the Middle Ages. To make it all go, he devised a set of systems the likes of which had never been seen in a CRPG, beginning with a real-time combat engine that you could pause at any time to issue orders; its degree of simulation would be so deep that it would include penetration values for various weapons against various materials (thus ensuring that a vagabond with a rusty knife could never, ever kill a full-fledged knight in shining armor). The character-creation system would be so detailed as to practically become a little game in itself, asking you not so much to roll up each character as live out the life story that brought her to this point: bloodline, occupations, education (such as it was for most in the Middle Ages), etc.

Character creation in Darklands is really, really complicated. And throughout the game, the spidery font superimposed on brown-sauce backgrounds will make your eyes bleed.

All told, it was one heck of a proposition for a company that had never made a CRPG before. Had Stealey been interested enough in CRPGs to realize just how unique the idea was, he might have realized as well how doubtful its commercial prospects were in a market that seemed to have little appetite for any CRPG that didn’t hew more or less slavishly to the Dungeons & Dragons archetype. But Stealey didn’t realize, and so Darklands got the green light in mid-1990. What followed was a tortuous odyssey; it became the most protracted and expensive development project MicroProse had ever funded.

We’ve seen in some of my other recent articles how companies like Sierra and Origin, taking stock of escalating complexity in gameplay and audiovisuals and their inevitable companion of escalating budgets, began to systematize the process of game development around this time. And we’ve at least glimpsed as well how such systematization could be a double-edged sword, leading to creatively unsatisfied team members and final products with something of a cookie-cutter feel.

MicroProse, suffice to say, didn’t go that route. Stealey took a hands-off approach to all projects apart from his beloved flight simulators, allowing his people to freelance their way through them. For all the drawbacks of rigid hierarchies and strict methodologies, the Darklands project could have used an injection of exactly those things. It was plagued by poor communication and outright confusion from beginning to end, as Arnold Hendrick and his colleagues improvised like mad in the process of making a game that was like nothing any of them had ever tried to make before.

Hendrick today forthrightly acknowledges that his own performance as project leader was “terrible.” Too often, the right hand didn’t know what the left was doing. An example cited by Hendrick involves Jim Synoski, the team’s first and most important programmer. For some months at the beginning of the project, he believed he was making essentially a real-time fighting game; while that was in fact some of what Darklands was about, it was far from the sum total of the experience. Once made aware at last that his combat code would need to interact with many other modules, he managed to hack the whole mess together, but it certainly wasn’t pretty. It seems there wasn’t so much as a design document for the team to work from — just a bunch of ideas in Hendrick’s head, imperfectly conveyed to everyone else.

The first advertisement for Darklands appeared in the March 1991 issue of Computer Gaming World. The actual product wouldn’t materialize until eighteen months later.

It’s small wonder, then, that Darklands went so spectacularly over time and over budget; the fact that MicroProse never cancelled it likely owes as much to the sunk-cost fallacy as anything else. Hendrick claims that the game cost as much as $3 million to make in the end — a flabbergasting number that, if correct, would easily give it the crown of most expensive computer game ever made at the time of its release. Indeed, even a $2 million price tag, the figure typically cited by Stealey, would qualify it for that honor. (By way of perspective, consider that Origin Systems’s epic CRPG Ultima VII shipped the same year as Darklands with an estimated price tag of $1 million.)

All of this was happening at the worst possible time for MicroProse. Another of Stealey’s efforts to expand the company’s market share had been an ill-advised standup-arcade version of F-15 Strike Eagle, MicroProse’s first big hit. The result, full of expensive state-of-the-art graphics hardware, was far too complex for the quarter-eater market; it flopped dismally, costing MicroProse a bundle. Even as that investment was going up in smoke, Stealey, acting again purely on the basis of his creative staff’s fondest wishes, agreed to challenge the likes of Sierra by making a line of point-and-click graphic adventures. Those products too would go dramatically over time and over budget.

Stealey tried to finance these latest products by floating an initial public offering in October of 1991. By June of 1992, on the heels of an announcement that not just Darklands but three other major releases as well would not be released that quarter — more fruit of Stealey’s laissez-faire philosophy of game development — the stock tumbled to almost 25 percent below its initial price. A stench of doom was beginning to surround the company, despite such recent successes as Civilization.

Games, like most creative productions, generally mirror the circumstances of their creation. This fact doesn’t bode well for Darklands, a project which started in chaos and ended, two years later, in a panicked save-the-company scramble.


Pirates!

Darklands

If you squint hard enough at Darklands, you can see its roots in Pirates!, the first classic Arnold Hendrick helped to create at MicroProse. As in that game, Darklands juxtaposes menu-driven in-town activities, written in an embodied narrative style, with more free-form wanderings over the territories that lie between the towns. But, in place of the straightforward menu of six choices in Pirates!, your time in the towns of Darklands becomes a veritable maze of twisty little passages; you start the game in an inn, but from there can visit a side street or a main street, which in turn can lead you to the wharves or the market, dark alleys or a park, all with yet more things to see and do. Because all of these options are constantly looping back upon one another — it’s seldom clear if the side street from this menu is the same side street you just visited from that other menu — just trying to buy some gear for your party can be a baffling undertaking for the beginner.

Thus, in spite of the superficial interface similarities, we see two radically opposing approaches to game design in Pirates! and Darklands. The older game emphasizes simplicity and accessibility, being only as complex as it needs to be to support the fictional experience it wants to deliver. But Darklands, for its part, piles on layer after layer of baroque detail with gleeful abandon. One might say that here the complexity is the challenge; learning to play the entirety of Darklands at all requires at least as much time and effort as getting really, truly good at a game like Pirates!.

The design dialog we see taking place here has been with us for a long time. Dave Arneson and Gary Gygax, the co-creators of the first incarnation of tabletop Dungeons & Dragons, parted ways not long afterward thanks largely to a philosophical disagreement about how their creation should evolve. Arneson saw the game as a fairly minimalist framework to enable a shared storytelling session, while Gygax saw it as something more akin to the complex wargames on which he’d cut his teeth. Gygax, who would go on to write hundreds of pages of fiddly rules for Advanced Dungeons & Dragons, his magnum opus, was happily cataloging and quantifying every variant of pole arm used in Medieval times when an exasperated Arneson finally lost his cool: “It’s a pointy thing on the end of a stick!” Your appreciation for Darklands must hinge on whether you are a Gary Gygax or a Dave Arneson in spirit. I know to which camp I belong; while there is a subset of gamers who truly enjoy Darklands‘s type of complexity — and more power to them for it — I must confess that I’m not among them.

In an interview conducted many years after the release of Darklands, Arnold Hendrick himself put his finger on what I consider to be its core problem: “Back then, game systems were often overly complicated, and attention to gameplay was often woefully lacking. These days, there’s a much better balance between gameplay and the human psychology of game players and the game systems underlying that gameplay.” Simply put, there are an awful lot of ideas in Darklands which foster complexity, but don’t foster what ought to be the ultimate arbiter in game design: Fun. Modern designers often talk about an elusive sense of “flow” — a sense by the player that all of a game’s parts merge into a harmonious whole which makes playing for hours on end all too tempting. For this player at least, Darklands is the polar opposite of this ideal. Not only is it about as off-putting a game as I’ve ever seen at initial startup, but it continues always, even after a certain understanding has begun to dawn, to be a game of disparate parts: a character-generation game, a combat game, a Choose Your Own Adventure-style narrative, a game of alchemical crafting. There are enough original ideas here for ten games, but it never becomes clear why they absolutely, positively all need to be in this one. Darklands, in other words, is kind of a muddle.

Your motivation for adventuring in Medieval Germany in the first place is one of Darklands‘s original ideas in CRPG design. Once again inviting comparison with Pirates!, Darklands dispenses with any sort of overarching plot as a motivating force. Instead, like your intrepid corsair of the earlier game, your party of four has decided simply “to bring everlasting honor and glory to your names.” If you play for long enough, something of a larger plot will eventually begin to emerge, involving a Satan-worshiping cult and a citadel dedicated to the demon Baphomet, but even after rooting out the cult and destroying the citadel the game doesn’t end.

In place of an overarching plot, Darklands relies on incidents and anecdotes, from a wandering knight challenging you to a duel to a sinkhole that swallows up half your party. While these are the products of a human writer (presumably Arnold Hendrick for the most part), their placements in the world are randomized. To improve your party’s reputation and earn money, you undertake a variety of quests of the “take item A to person B” or “go kill monster C” variety. All of this too is procedurally generated. Indeed, you begin a new game of Darklands by choosing the menu option “Create a New World.” Although the geography of Medieval Germany won’t change from game to game, most of what you’ll find in and around the towns is unique to your particular created world. It all adds up to a game that could literally, as MicroProse’s marketers didn’t hesitate to declare, go on forever.

But, as all too commonly happens with these things, it’s a little less compelling in practice than it sounds in theory. I’ve gone on record a number of times now with my practical objections to generative narratives. Darklands too often falls prey to the problems that are so typical of the approach. The quests you pick up, lacking as they do any larger relationship to a plot or to the world, are the very definition of FedEx quests, bereft of any interest beyond the reputation and money they earn for you. And, while it can sometimes surprise you with an unexpectedly appropriate and evocative textual vignette, the game more commonly hews to the predictable here as well. Worse, it has a dismaying tendency to show you the same multiple-choice vignettes again and again, pulling you right out of the fiction.

And yet the vignettes are actually the most narratively interesting parts of the game; it will be some time before you begin to see them at all. As in so many other vintage CRPGs, the bulk of your time at the beginning of Darklands is spent doing boring things in the name of earning the right to eventually do less boring things. In this case, you’ll likely have to spend several hours roaming the vacant back streets of whatever town you happen to begin in, seeking out and killing anonymous bands of robbers, just to build up your party enough to leave the starting town.

The open-ended structure works for Pirates! because that game dispenses with this puritanical philosophy of design. It manages to be great fun from the first instant by keeping the pace fast and the details minimal, even as it puts a definite time limit on your career, thus tempting you to play again and again in order to improve on your best final score. Darklands, by contrast, doesn’t necessarily end even when your party is too old to adventure anymore (aging becomes a factor after about age thirty); you can just make new characters and continue where the old ones left off, in the same world with the same equipment, quests, and reputation. Darklands, then, ends only when you get tired of it. Just when that exact point arrives will doubtless differ markedly from player to player, but it’s guaranteed to be anticlimactic.

The ostensible point of Darklands‘s enormously complex systems of character creation, alchemy, religion, and combat is to evoke its chosen time and place as richly as possible. One might even say the same about its lack of an overarching epic plot; such a thing doesn’t exist in the books of history and legend to which the game is so determined to be so faithful. Yet I can’t help but feel that this approach — that of trying to convey the sense of a time and place through sheer detail — is fundamentally misguided. Michael Bate, a designer of several games for Accolade during the 1980s, coined the term “aesthetic simulations” for historical games that try to capture the spirit of their subject matter rather than every piddling detail. Pirates! is, yet again, a fine example of this approach, as is the graceful, period-infused but not period-heavy-handed writing of the 1992 adventure game The Lost Files of Sherlock Holmes.

The writing in Darklands falls somewhat below that standard. It isn’t terrible, but it is a bit graceless, trying to make up for in concrete detail what it isn’t quite able to conjure in atmosphere. So, we get money that is laboriously explicated in terms of individual pfennige, groschen, and florins, times of day described in terms that a Medieval monk would understand (Matins, Lauds, Prime, etc.), and lots of off-putting-to-native-English-speakers German names, but little real sense of being in Medieval Germany.

Graphically as well, the game is… challenged. Having devoted most of their development efforts to 3D vehicular simulators during the 1980s, MicroProse’s art department plainly struggled to adapt to the demands of other genres. Even an unimpeachable classic like Sid Meier’s Civilization achieves its classic status despite rather than because of its art; visually, it’s a little garish compared to what other studios were putting out by this time. But Darklands is much more of a visual disaster, a conflicting mishmash of styles that sometimes manage to look okay in isolation, such as in the watercolor-style backgrounds to many of the textual vignettes. Just as often, though, it verges on the hideous; the opening movie is so absurdly amateurish that, according to industry legend, some people actually returned the game after seeing it, thinking they must have gotten a defective disk or had an incompatible video card.

One of Darklands‘s more evocative vignettes, with one of its better illustrations as a backdrop. Unfortunately, you’re likely to see this same vignette and illustration several times, with a decided sense of diminishing returns.

But undoubtedly the game’s biggest single problem, at the time of its release and to some extent still today, was all of the bugs. Even by the standards of an industry at large which was clearly struggling to come to terms with the process of making far more elaborate games than had been seen in the previous decade, Darklands stood out upon its belated release in August of 1992 for its woefully under-baked state. Whether this was despite or because of its extended development cycle remains a question for debate. What isn’t debatable, however, is that it was literally impossible to complete Darklands in its initial released state, and that, even more damningly, a financially pressured MicroProse knew this and released it anyway. To their credit, the Darklands team kept trying to fix the game after its release, with patch after patch to its rickety code base. The patches eventually numbered at least nine in all, a huge quantity for long-suffering gamers to acquire at a time when they could only be distributed on physical floppy disks or via pricey commercial online services like CompuServe. After about a year, the team managed to get the game into a state where it only occasionally did flaky things, although even today it remains far from completely bug-free.

By the time the game reached this reasonably stable state, however, the damage had been done. It sold fairly well in its first month or two, but then came a slew of negative reviews and an avalanche of returns that actually exceeded new sales for some time; Darklands thus managed the neat trick of continuing to be a drain on MicroProse’s precarious day-to-day finances even after it had finally been released. Hendrick had once imagined a whole line of similar historical CRPGs; needless to say, that didn’t happen.

Combined with the only slightly less disastrous failure of the new point-and-click graphic-adventure line, Darklands was directly responsible for the end of MicroProse as an independent entity. In December of 1993, with the company’s stock now at well under half of its IPO price and the creditors clamoring, a venture-capital firm arranged a deal whereby MicroProse was acquired by Spectrum Holobyte, known virtually exclusively for a truly odd pairing of products: the home-computer version of the casual game Tetris and the ultra-hardcore flight simulator Falcon. The topsy-turvy world of corporate finance being what it was, this happened despite the fact that MicroProse’s total annual sales were still several times that of Spectrum Holobyte.

Stealey, finding life unpleasant in a merged company where he was no longer top dog, quit six months later. His evaluation of the reasons for MicroProse’s collapse was incisive enough in its fashion:

You have to be known for something. We were known for two things [military simulators and grand-strategy games], but we tried to do more. I think that was a big mistake. I should have been smarter than that. I should have stuck with what we were good at.



I’ve been pretty hard on Darklands in this article, a stance for which I don’t quite feel a need to apologize; I consider it a part of my duty as your humble scribe to call ’em like I see ’em. Yet there is far more to Darklands‘s legacy than a disappointing game which bankrupted a company. Given how rare its spirit of innovation has been in CRPG design, plenty of players in the years since its commercial vanishing act have been willing to cut it a lot of slack, to work hard to enjoy it on its own terms. For reasons I’ve described at some length now, I can’t manage to join this group, but neither can I begrudge them their passion.

But then, Darklands has been polarizing its players from the very beginning. Shortly after the game’s release, Scorpia, Computer Gaming World magazine’s famously opinionated adventure-game columnist, wrote a notably harsh review of it, concluding that it “might have been one of the great ones” but instead “turns out to be a game more to be avoided than anything else.” Johnny L. Wilson, the magazine’s editor-in-chief, was so bothered by her verdict that he took the unusual step of publishing a sidebar response of his own. It became something of a template for future Darklands apologies by acknowledging the game’s obvious flaws yet insisting that its sheer uniqueness nevertheless made it worthwhile. (“The game is as repetitive as Scorpia and some of the game’s online critics have noted. One comes across some of the same encounters over and over. Yet only occasionally did I find this disconcerting.”) He noted as well that he personally hadn’t seen many of the bugs and random crashes which Scorpia had described in her review. Perhaps, he mused, his computer was just an “immaculate contraption” — or perhaps Scorpia’s was the opposite. In response to the sidebar, Wilson was castigated by his magazine’s readership, who apparently agreed with Scorpia much more than with him and considered him to have undermined his own acknowledged reviewer.

The reader response wasn’t the only interesting postscript to this episode. Wilson:

Later, after 72 hours of playing around with minor quests and avoiding the main plot line of Darklands, I decided it was time to finish the game. I had seven complete system crashes in less than an hour and a half once I decided to jump in and finish the game. I didn’t really have an immaculate contraption, I just hadn’t encountered the worst crashes because I hadn’t filled my upper memory with the system-critical details of the endgame. Scorpia hadn’t overreacted to the crashes. I just hadn’t seen how bad it was because I was fooling around with the game instead of trying to win. Since most players would be trying to win, Scorpia’s review was more valid than my sidebar. Ah, well, that probably isn’t the worst thing I’ve ever done when I thought I was being fair.

This anecdote reveals what may be a deciding factor — in addition to a tolerance for complexity for its own sake — as to whether one can enjoy Darklands or not. Wilson had been willing to simply inhabit its world, while the more goal-oriented Scorpia approached it as she would any other CRPG — i.e., as a game that she wanted to win. As a rather plot-focused, goal-oriented player myself, I naturally sympathize more with her point of view.

In the end, then, the question of where the point of failure lies in Darklands is one for the individual player to answer. Is Darklands as a whole a very specific sort of failure, a good idea that just wasn’t executed as well as it might have been? Or does the failure lie with the CRPG format itself, which this game stretched beyond the breaking point? Or does the real failure lie with the game’s first players, who weren’t willing to look past the bugs and other occasional infelicities to appreciate what could have been a whole new type of CRPG? I know where I stand, but my word is hardly the final one.

Given the game’s connection to the real world and its real cultures, so unusual for the CRPG genre, perhaps the most interesting question of all raised by Darklands is that of the appropriate limits of gamification. A decade before Darklands‘s release, the Dungeons & Dragons tabletop RPG was embroiled in a controversy engendered by God-fearing parents who believed it to be an instrument of Satanic indoctrination. In actuality, the creators of the game had been wise enough to steer well clear of any living Western belief system. (The Deities & Demigods source book did include living Native American, Chinese, Indian, and Japanese religions, which raises some troublesome questions of its own about cultural appropriation and respect, but wasn’t quite the same thing as what the angry Christian contingent was complaining about.)

It’s ironic to note that much of the content which Evangelical Christians believed to be present in Dungeons & Dragons actually is present in Darklands, including the Christian God and Satan and worshipers of both. Had Darklands become successful enough to attract the attention of the same groups who objected so strongly to Dungeons & Dragons, there would have been hell to pay. Arnold Hendrick had lived through the earlier controversy from an uncomfortably close vantage point, having been a working member of the tabletop-game industry at the time it all went down. In his designer’s notes in Darklands‘s manual, he thus went to great pains to praise the modern “vigorous, healthy, and far more spiritual [Catholic] Church whose quiet role around the globe is more altruistic and beneficial than many imagine.” Likewise, he attempted to separate modern conceptions of Satanism and witchcraft from those of Medieval times. Still, the attempt to build a wall between the Christianity of the 15th century and that of today cannot be entirely successful; at the end of the day, we are dealing with the same religion, albeit in two very different historical contexts.

Opinions vary as to whether the universe in which we live is entirely mechanistic, reducible to the interactions of concrete, understandable, computable physical laws. But it is clear that a computer simulation of a world must be exactly such a thing. In short, a simulation leaves no room for the ineffable. And yet Darklands chooses to grapple, to an extent unrivaled by almost any other game I’m aware of, with those parts of human culture that depend upon a belief in the ineffable. By bringing Christianity into its world, it goes to a place virtually no other game has dared approach. Its vending-machine saints reduce a religion — a real, living human faith — to a game mechanic. Is this okay? Or are there areas of the human experience which ought not to be turned into banal computer code? The answer must be in the eye — and perhaps the faith — of the beholder.

Darklands‘s real-time-with-pause combat system. The interface here is something of a disaster, and the visuals too leave much to be desired, but the core idea is sound.

By my lights, Darklands is more of a collection of bold ideas than a coherent game, more of an experiment in the limits of CRPG design than a classic example of same. Still, in a genre which is so often in thrall to the tried and true, its willingness to experiment can only be applauded.

For sometimes experiments yield rich rewards, as the most obvious historical legacy of this poor-selling, obscure, bug-ridden game testifies. Ray Muzyka and Greg Zeschuk, the joint CEOs of BioWare at the time that studio made the Baldur’s Gate series of CRPGs, have acknowledged lifting the real-time-with-pause combat systems in those huge-selling and much-loved games directly out of Darklands. Since the Baldur’s Gate series’s heyday around the turn of the millennium, dozens if not hundreds of other CRPGs have borrowed the same system second-hand from BioWare. Such is the way that innovation diffuses itself through the culture of game design. So, the next time you fire up a Steam-hosted extravaganza like Pillars of Eternity, know that part of the game you’re playing owes its existence to Darklands. Lumpy and imperfect though it is in so many ways, Darklands shows a spirit of bold innovation that we could use more of today — in CRPG design and, indeed, across the entire landscape of interactive entertainment.

(Sources: the book Gamers at Work: Stories Behind the Games People Play by Morgan Ramsay; Computer Gaming World of March 1991, February 1992, May 1992, September 1992, December 1992, January 1993, and June 1994; Commodore Magazine of September 1987; Questbusters of November 1992; Compute! of October 1993; PC Zone of September 2001; Origin Systems’s internal newsletter Point of Origin of January 17 1992; New York Times of June 13 1993. Online sources include Matt Barton’s interview with Arnold Hendrick, Just Adventure‘s interview with Johnny L. Wilson, and Arnold Hendrick’s discussion of Darklands in the Steam forum.

Darklands is available for purchase on GOG.com.)

 
 


The Designer’s Designer

Dan Bunten delivers the keynote at the 1990 Game Developers Conference.

Dan Bunten and his little company Ozark Softscape could look back on a tremendous 1984 as that year came to an end. Seven Cities of Gold had been a huge success, Electronic Arts’s biggest game of the year, doing much to keep the struggling publisher out of bankruptcy court by selling well over 100,000 copies. Bunten himself had become one of the most sought-after interviewees in the industry. Everyone who got the chance to speak with him seemed to agree that Seven Cities of Gold was only the beginning, that he was destined for even greater success.

As it turned out, though, 1984 would be the high-water mark for Bunten, at least in terms of that grubbiest but most implacable metric of success in games: quantity of units shifted. The years that followed would be frustrating as often as they would be inspiring, as Bunten pursued a vision that seemed at odds with every trend in the industry, all the while trying to thread the needle between artistic fulfillment and commercial considerations.


In the wake of Seven Cities of Gold‘s success, EA badly wanted a follow-up with a similar theme, so much so that they offered Bunten a personal bonus of $5000 to make it Ozark’s next project. The result was Heart of Africa, a game which at first glance looks like precisely the sequel EA was asking for but that actually plays quite differently. Instead of exploring the Americas as Hernán Cortés during the 1500s, it has you exploring Africa as an intrepid Victorian adventurer (“Livingstone, I presume?”). In keeping with the changed time and location, your goal isn’t to conquer the land for your country — Africa had, for better or for worse, already been thoroughly partitioned among the European nations by 1890, the year in which the game takes place — but simply to discover and to map. In the best tradition of Victorian adventure novels like King Solomon’s Mines, your ultimate goal is to find the tomb of a mythical Egyptian pharaoh. Bunten later admitted that the differences from Heart of Africa‘s predecessor weren’t so much a product of original design intent as improvisation after he had bumbled into an historical context that just wouldn’t work as a more faithful sequel.

Indeed, Bunten in later years dismissed Heart of Africa, his most adventure-like game ever and his last ever that was single-player only, as nothing more than “a game done to please EA”: “I honestly didn’t want to do the project.” Its biggest problem hinges on the fact that its environment is randomly generated each time you start a new game, itself an attempt to remedy the most obvious failing of adventure games as a commercial proposition: their lack of replayability. Yet the random maps can never live up to what a hand-crafted map, designed for challenge and dramatic effect, might have been; the “story” in Heart of Africa is all too clearly just a bunch of shifting interchangeable parts. Bunten later acknowledged that “the attempt to make a replayable adventure game made for a shallow product (which seems true in every other case designers have tried it as well). I guess that if elements are such that they can be randomly shifted then they [aren’t] substantive enough to make for a compelling game. So, even though I don’t like linear games, they seem necessary to have the depth a good story needs.”

Heart of Africa did quite well for EA upon its release in 1985 — well enough, in fact, to become Bunten’s third most successful game of all time. Yet the whole experience left a bad taste in his mouth. He came away from the project determined to return to the guiding vision behind his first game for EA, the commercially unsuccessful but absolutely brilliant M.U.L.E.: a vision of computer games that people played together rather than alone. In the future, he would continue to compromise at times on the style and subject matter of his games in order to sell them to his publishers, but he would never again back away from his one great principle. All of his games henceforward would be multiplayer — first, foremost, and in one case exclusively. In fact, that one case would be his very next game.

The success of his previous two games having opened something of a window of opportunity with EA, Bunten charged ahead on what he would later describe as his single “most experimental game.” Robot Rascals is a multiplayer scavenger hunt in which two physical decks of cards are integral to the game. Each player controls a robot, and must use it to collect the four items shown on the cards in her hand and return with them to home base in order to win. The game lives on the razor’s edge of pure chaos, the product both of random events generated by the computer and of a second deck of cards — the “specials” — which among other things can force players to draw new item cards, trash their old cards, or trade cards among one another; thus everyone’s goals are shifting almost constantly. As always in a Dan Bunten game, there are lots of thoughtful features here, from ways to handicap the game for players of different ages or skill levels to three selectable levels of overall complexity. He designed it to be “a game that anyone could play” rather than one limited to “special-interest groups like role-playing people or history buffs.” It can be a lot of fun, even if it’s not quite on the level of M.U.L.E. (then again, what is, right?). But this latest bid to make computer games acceptable family entertainment wound up selling hardly at all upon its release in 1986, ending Bunten’s two-game commercial hot streak.

By this point in Bunten’s career, changes in his personal life were beginning to have a major impact on the games he made. In 1985, while still working on Heart of Africa, he had divorced his second wife and married his third, with all the painful complications such disruptions entail when one is leaving children behind with the former spouse. In 1986, he and his new wife moved from Little Rock, Arkansas, to Hattiesburg, Mississippi, so she could complete a PhD. This event marked the effective end of Ozark Softscape as anything but a euphemism for Dan Bunten himself and whatever programmers and artists he happened to contract work out to. The happy little communal house/office where Dan and Bill Bunten, Jim Rushing, and Alan Watson had created games, with a neighborhood full of eager testers constantly streaming through the living room, was no more; only Watson continued to work on Bunten’s games from Robot Rascals on, and then more as just another hired programmer than a valued design voice. Even after moving back to Little Rock in 1988, Bunten would never be able to recapture the communal alchemy of 1982 to 1985.

Coupled with these changes were other, still more ominous ones in Dan Bunten himself. Those who knew him during these years generally refer only vaguely to his “problems,” and this discretion of course does them credit; I too have no desire to psychoanalyze the man. What does seem clear, however, is that he was growing increasingly unhappy as time wore on. He became more demanding of his colleagues, difficult enough to work with that many of them decided it just wasn’t worth it, even as he became more erratic in his own habits, perhaps due to an alcohol intake that struck many as alarming.

Yet Bunten was nothing if not an enigmatic personality. At the same time that close friends were worrying about his moodiness and his drinking, he could show up someplace like The Computer Game Developers Conference and electrify the attendees with his energy and ideas. Certainly his eyes could still light up when he talked about the games he was making and wanted to make. The worrisome questions were how much longer he would be allowed to make those games in light of their often meager sales, and, even more pressingly, why his eyes didn’t seem to light up about much else in his life anymore.

But, to return to the firmer ground of the actual games he was continuing to make: Modem Wars, his next one, marked the beginning of a new chapter in his tireless quest to get people playing computer games together. “We’ve failed at gathering people around the computer,” Bunten said before starting work on it. “We’re going to have to connect them out of the back by connecting their computers to each other.” He would make, in other words, a game played by two people on two separate computers, connected via modem.

Modem Wars was known as Sport of War until just prior to its release by EA in 1988, and in many ways that was a better title. Its premise is a new version of Bunten’s favorite sport of football, played not by individual athletes but by infantry, artillery, and even aircraft, if you can imagine such a thing. One might call it a mashup between two of his early designs for SSI: the strategic football simulator Computer Quarterback and the proto-real-time-strategy game Cytron Masters.

It’s the latter aspect that makes Modem Wars especially visionary. The game was nothing less than an online real-time-strategy death match years before the world had heard of such a thing. While a rudimentary artificial intelligence was provided for single-player play, it was made clear by the game’s very title that this was strictly a tool for learning to play rather than the real point of the endeavor. Daniel Hockman’s review of Modem Wars for Computer Gaming World ironically describes the qualities of online real-time strategy as a potential “problem” and “marketing weakness” — the very same qualities which a later generation would take as the genre’s main attractions:

A sizable number of gamers are not used to thinking in real-time situations. They can spend hours ordering tens of thousands of men into mortal combat, but they wimp out when they have to think under fire. They want to play chess instead of speed chess. They want to analyze instead of act. As the enemy drones zero in on their comcen, they throw up their hands in frustration when it’s knocked out before they can extract themselves from the maelstrom of fire that has engulfed them.

Whether because gamers really were daunted by this need to think on their feet or, more likely, because of the relative dearth of fast modems and stable online connections in 1988, Modem Wars became another crushing commercial disappointment for Bunten. EA declared themselves “hesitant” to keep pursuing this direction in the wake of the game’s failure. Rather than causing Bunten to turn away from multiplayer gaming, this loss of faith caused him to turn away from EA.

In the summer of 1989, MicroProse Software announced that they had signed a five-year agreement with Bunten, giving them first rights to all of the games he made during that period. The great hidden driver behind the agreement was MicroProse’s own star designer Sid Meier, who had never hidden his enormous admiration for Bunten’s work. Bunten doubtless hoped that a new, more supportive publisher would mark the beginning of a new, more commercially successful era in his career. And in the beginning at least, such optimism would, for once, prove well-founded.

Known at first simply as War!, then as War Room, and finally as Command H.Q., Bunten’s first game for MicroProse was aptly described by its designer as being akin to an abstract, casual board game of military strategy, like Risk or Axis & Allies. The big wrinkle was that this beer-and-pretzels game was to be played in real time rather than turns. But, perhaps in response to complaints about his previous game like those voiced by Daniel Hockman above, the pace is generally far less frenetic this time around. Not only can the player select an overall speed, but the program itself actually takes charge to speed up the action when not much is happening and slow it down when things heat up. Although a computer opponent is provided, the designer’s real focus was once more on modem-to-modem play.

But, whatever its designer’s preferences, MicroProse notably de-emphasized the multiplayer component in their advertising upon Command H.Q.‘s release in 1990, and this, combined with a more credible artificial intelligence for the computer opponent, gave it more appeal to the traditional wargame crowd than Modem Wars had ever enjoyed. Ditto a fair measure of evangelizing done by Computer Gaming World, with whom Bunten had always had a warm relationship, having even authored a regular column there for a few years in the mid-1980s. The magazine’s lengthy review concluded by saying, “This is the game we’ve all been waiting for”; they went on to publish two more lengthy articles on Command H.Q. strategy, and made it their “Wargame of the Year” for 1990. For all these reasons, Command H.Q. sold considerably better than had Bunten’s last couple of games; one report places its total sales at around 75,000 units, enough to make it his second most successful game ever.

With that to buoy his spirits, Bunten made big plans for his next game, Global Conquest. “Think of it as Command H.Q. meets Seven Cities of Gold meets M.U.L.E.,” he said. Drawing heavily from Command H.Q. in particular, as well as the old grand-strategy classic Empire, he aimed to make a globe-spanning strategy game where economics would be as important as military maneuvers. He put together a large and vocal group of play testers on CompuServe, and tried to incorporate as many of their suggestions as possible, via a huge options panel that allowed players to customize virtually every aspect of the game, from the rules themselves to the geography and topography of the planet they were fighting over, all the way down to the look of the icons representing the individual units. This time, up to four humans could play against one another in a variety of ways: they could all play together by taking turns on one computer, or they could each play on their own computer via a local-area network, or four players could share two computers that were connected via modem. The game was turn-based, but with an interesting twist designed to eliminate analysis paralysis: when the first player mashed the “next turn” button, everyone else had just twenty seconds to finish up their own turns before the execution phase began.

In later years, Dan Bunten himself had little good to say about what would turn out to be his last boxed game. In fact, he called it his absolute “worst game” of all the ones he had made. While play-testing in general is a wonderful thing, and every designer should do as much of it as possible, a designer also needs to keep his own vision for what kind of game he wants to make at the forefront. In the face of prominent-in-their-own-right, opinionated testers like Computer Gaming World‘s longtime wargame scribe Alan Emrich, Bunten failed to do this, and wound up creating not so much a single coherent strategy game as a sort of strategy-game construction set that baffled more than it delighted. “This game was a hodgepodge rather than an integration,” he admitted several years later. “It was just the opposite of the KISS doctrine. It was a kitchen-sink design. It had everything. Build your own game by struggling through several options menus.” He acknowledged as well that the mounting unhappiness in his personal life, which had now led to a divorce from his third wife, was making it harder and harder to do good work.

Released in 1992, Global Conquest under-performed commercially as well. In addition to the game’s intrinsic failings, it didn’t help matters that MicroProse had just five months prior released Sid Meier’s Civilization, another exercise in turn-based grand strategy on a global scale, also heavily influenced by Empire, that managed to be far more thematically and texturally ambitious while remaining more focused and playable as a game — albeit without the multiplayer element that was so important to Bunten.

But of course, there’s more to a game than whether it’s played by one person or more than one, and it strikes me as reasonable to question whether Bunten was beginning to lose his way as a designer in other respects even as he stuck so obstinately to his multiplayer guns. Setting aside their individual strengths and failings, the final three boxed games of Bunten’s career, with their focus on “wars” and “command” and “conquest,” can feel a little disheartening when compared to what came before. Games like M.U.L.E., Robot Rascals, and to some extent even Seven Cities of Gold and Heart of Africa had a different, friendlier, more welcoming personality. This last, more militaristic trio feels like a compromise, the product of a Dan Bunten who said that, if he couldn’t bring multiplayer gaming to the masses, he would settle for the grognard crowd, indulging their love for guns and tanks and bombs. So be it. Now, though, he was about to give that same crowd the shock of their lives.

In November of 1992, just months after completing the supremely masculine wargame Global Conquest, Dan Bunten had sexual-reassignment surgery, becoming the woman Danielle “Dani” Bunten Berry. (For continuity’s sake, I’ll generally continue to refer to her by the shorthand of “Bunten” rather than “Berry” for the remainder of this article.) It’s not for us to speculate about the personal trauma that must have accompanied such a momentous decision. What we can and should take note of, however, is that it was an unbelievably brave decision. For all that we still have a long way to go today when it comes to giving transsexuals the rights and respect they deserve, the early 1990s were a far less enlightened time than even our own on this issue. And it wasn’t as if Bunten could take comfort in the anything-goes anonymity of a New York City or San Francisco. Dan Bunten had lived, and Dani Bunten now continued to live, in the intensely conservative small-town atmosphere of Little Rock, Arkansas. Many of those closest to her disowned her, including her mother and her ex-wives, making it heartbreakingly difficult for her to maintain a relationship with her children. She had remained in Little Rock all these years, at no small cost to her career prospects, largely because of these ties of blood, which she had believed to be indissoluble. This rejection, then, must have felt like the bitterest of betrayals.

Dan Bunten with his beverage of choice.

The games industry as well, with its big-breasted damsels in distress and its machine-gun-toting male heroes, wasn’t exactly notable for its enlightened attitudes toward sex and gender. Many of Bunten’s old friends and colleagues would see her for the first time after her surgery and convalescence at the Game Developers Conference scheduled for April of 1993, and they looked forward to that event with almost as much trepidation as Bunten herself must have felt. It was all just so very unexpected. To whatever extent they had carried around a mental image of a man who would choose to become a woman, Dan Bunten didn’t fit the profile at all. He had been the games industry’s own Ozark Mountains boy, a true son of the South, always ready with his “folksy mountain humor” (read, “dirty jokes”). His rangy frame stood six feet two inches tall. He loved nothing more than a rough-and-tumble game of back-lot football, unless it be beer and poker afterward. As his three ex-wives and three children attested, he had certainly seemed to like women, but no one had ever imagined that he liked them enough to want to be one. What were they supposed to say to him — er, to her — now?

They needn’t have worried. Dani Bunten handled her coming-out party with the same low-key grace and humor she would display for the rest of her life as a woman. She said that she had made the switch to do her part to redress the gender imbalance inside the industry, and to help improve the aesthetics of game designers to match the improving aesthetics of their games. The tension dissipated, and soon everyone got into the spirit of the thing. A straw poll named Dani Bunten the game designer most likely to appear on the Oprah Winfrey Show. A designer named Gordon Walton had a typical experience: “I was put off when she made the change to become Dani, until the minute I spoke to her. It was clear to me she was much happier as Dani, and if anything an even more incredible person.” Another GDC regular remembered the “unhappy man” from the 1992 event, “sitting on the hallway floor drinking and smoking,” and contrasted him with the “happy woman” he now saw.

No one with any interest in the inner workings of those strangest of creatures, their fellow humans, could fail to be fascinated by Bunten’s dispatches from both sides of the gender divide. “Aren’t there things you’ve always wanted to know about women but were afraid to ask?” she said. “Well, now’s your chance!”

I had to learn a lot to actually “count” as a woman! I had to learn how to walk, speak, dress as a woman. Those little things which are necessary so that other people don’t [feel] alienated. There’s a little summary someone gave me to make clear what being a woman means: as a woman you have to sing when you speak, dance when you walk, and you have to open your heart… I know how stereotypical that sounds, but it is true! Speech for a man is something completely different: the melody of speech is fast, monotone, and decreases at the end of a sentence. Sometimes, this still happens to me, and people are always irritated. Female speech is a little bit like song – we have a lot more melody and different speech patterns. Walking is really a bit like dancing: slower and connected, with a lot of subtle movements. I enjoyed it at once.

She had few filters when talking about the nitty-gritty details:

One of the saddest changes I had to deal with after my operation was the fact that I couldn’t aim anymore when urinating. Boys — I have two little sons and a daughter — simply love to aim.

Bunten said that, in keeping with her new identity, she didn’t feel much desire to design any more wargames; this led to the end of her arrangement with MicroProse. By way of compensation, Electronic Arts that year released a nicely done “commemorative edition” of Seven Cities of Gold, complete with dramatically upgraded graphics and sound to suit the times. Bunten had little to nothing to do with the project, but it sold fairly well, and perhaps helped to remind her of her roots.

In the same spirit, Bunten’s first real project after her transformation became a new version of M.U.L.E. EA’s founder Trip Hawkins had always named that game as one of his all-time favorites, and had frequently stated how disappointed he was that it had never gotten the attention it deserved. Now, Hawkins had left his day-to-day management role at EA to run 3DO, a spin-off company peddling a multimedia set-top box for the living room. Hawkins thought M.U.L.E. would be perfect for the platform, and recruited Bunten to make it happen. It was a dream project; showing excellent taste, she still regarded M.U.L.E. as the best thing she had ever done. But the dream quickly began to sour.

3DO first requested that, instead of taking turns managing their properties on the map, players all be allowed to do so simultaneously. Bunten somewhat reluctantly agreed. And then:

As soon as I added the simultaneity, it instantly put into their heads, “Why can’t we shoot at each other?” And I said, “No guns.” And they said, “What about bombs? Can we drop a bomb in front of you? It won’t hurt you. It will be a cartoon thing, it will just slow you down.” And I said, “You don’t get it. It’s changing the whole notion of how this thing works!”

[3DO is] staking its future on the idea of a new generation of hardware and therefore, you’d assume, a new generation of software, but they said, “No, our market is still 18 to 35, male. We need something with action, something with intensity.” Chrome and sizzle. Ugh.

In the end, Bunten walked out, disappointed enough that she seriously considered getting out of games altogether, going so far as to apply for jobs as the industrial engineer Dan Bunten had once been before his first personal computer came along.

Instead she found a role with a new company called Mpath as a design and strategy consultant. The goal of that venture was to bring multiplayer gaming to the new frontier of the World Wide Web, and its founders included her fellow game designer Brian Moriarty, of Infocom and LucasArts fame. She also studied the elusive concept of “games for girls” in association with a think tank set up by Microsoft co-founder Paul Allen; some of her proposals would later come to market as the products of Purple Moon, Brenda Laurel’s short-lived but important publisher of games for girls aged 8 to 14.

Offers to do conventional boxed games as sole designer, however, weren’t forthcoming; how much that was down to lingering personal prejudices against her for her changed sex and how much to the fact that the games she wanted to make just weren’t considered commercially viable must always be open for debate. Refusing as usual to be a victim, Bunten said that her “priorities had shifted” since her change anyway: “I don’t identify myself with the job as strongly as before.” Deciding that, for her, heaven was other people after a life spent programming computers, she devoured anthropology texts and riffed on Carl Jung’s theory of the collective unconscious. “Literature, anthropology, and even dance,” she noted, “have a good deal more to teach designers about human drives and abilities than the technologists of either end of California, who know silicon and celluloid but not much else.” So, she bided her time as a designer, waiting for a more inclusive ludic future to arrive. At the 1997 GDC, she described a prescient vision of “small creative shops” freed from the inherent conservatism of the “distribution trap” by the magic of the Internet.

That future would indeed come to pass — but, sadly, not in time for Dani Bunten Berry to see it. Shortly after delivering that speech, she went to see her doctor about a persistent cough, whereupon she was diagnosed with an advanced case of lung cancer. In one of those cruel ironies which always seem to dog the lives of us poor mortals, she had finally kicked a lifelong habit of heavy smoking just a few months before.

She appeared in public for the last time in May of 1998. The occasion was, once again, the Game Developers Conference, where she had always shone so. She struggled audibly for breath as she gave the last presentation of her life, entitled “Do Online Games Still Suck?,” but her passion carried her through. At the end of the conference, at a special ceremony held aboard the Queen Mary in Long Beach Harbor, she was presented with the first ever GDC Lifetime Achievement Award. The master of ceremonies for that evening was her friend and colleague Brian Moriarty, who knew, like everyone else in attendance, that the end was near. He closed his heartfelt tribute thus:

It is no exaggeration to characterize tonight’s honoree as the world’s foremost authority on multiplayer computer games. Nobody has worked harder to demonstrate how technology can be used to realize one of the noblest of human endeavors: bringing people together. Historians of electronic gaming will find in these eleven boxes the prototypes of the defining art form of the 21st century.

As one of those historians, I can only heartily concur with his assessment.

It would be nice to say that Dani Bunten passed peacefully to her rest. But, as anyone with any experience with lung cancer will recognize, that just isn’t how the disease works. Throughout her life, she had done nothing the easy way, and her death — ugly, painful, and slow — was no exception. On the brighter side, she did reconcile to some extent with her mother and other family members and friends who had rejected her. The end came on July 3, 1998. Rather incredibly in light of the prodigious, multifaceted life she had lived, she was just 49 years old.

It’s a life which resists pigeonholing or sloganeering. Bunten herself explicitly rejected the role of transgender advocate, inside or outside of the games industry. Near the end of her life, she expressed regret for her decision to change her physical sex, saying she could have found ways to live in a more gender-fluid way without taking such a drastic step. Whether this was a reasoned evaluation or a product of the pain and trauma of terminal illness must remain, like so much else about her, an enigma.

What is clear, however, is that Bunten, through the grace and humor with which she handled her transition and through her refusal to go away and hide thereafter as some might have wished, taught others in the games industry who were struggling with similar issues of identity that a new gender need not mean a decisive break with every aspect of one’s past — that a prior life in games could continue to be a life in games even with a different pronoun attached. She did this in a quieter way than the speechifying some might have wished for from her, but, nevertheless, do it she did. Jessica Mulligan, who transitioned from male to female a few years after her, remembers meeting Bunten shortly before her own sexual-reassignment surgery, hoping to hear some “profound words on The Transition”: “While I was looking for spiritual guidance, she was telling me where to shop for shoes. Talk about keeping someone honest! Every change in our personal lives is profound to us. You still have to pay attention to the nuts and bolts or the change is meaningless.”

Danielle Bunten Berry does her makeup.

For some, of course — even for some with generally good intentions — Danielle Bunten Berry’s transgenderism will always be the defining aspect of her life, her career in games a mere footnote to that other part of her story. But that’s not how she would have wanted it. She regarded her games as her greatest legacy after her children, and would doubtless want to be remembered as a game designer above all else.

Back in 1989, after Modem Wars had failed in the marketplace, Electronic Arts decided that the lack of “a network of people to play” was a big reason for its failure. The great what-if question pertaining to Bunten’s career is what she might have done in partnership with an online network like CompuServe, which could have provided stable connectivity along with an eager group of players and all the matchmaking and social intrigue anyone could ask for. She finally began to explore this direction late in her life, through her work with Mpath. But what might have happened if she had made the right connections — forgive the pun! — earlier? We can only speculate.

As it is, though, it’s true that, in terms of units shifted and profits generated, there have been far more impressive careers. She suffered the curse of any pioneer who gets too far out in front of the culture. All of her eleven games combined probably sold no more than 400,000 copies at the outside, a figure some prominent designers’ new games can easily better in their first week today. Certainly her commercial disappointments far outnumber her successes. But then, sales aren’t the only metric by which to measure success.

Dani Bunten, one might say, is the designer’s designer. Greg Costikyan once told what happened when he offered to introduce Warren Spector — one of those designers who can sell more games in a week than Bunten did in a lifetime — to her back in the day: “He regretfully refused; he had loved M.U.L.E. so much he was afraid he wouldn’t know what to say. He would sound like a blithering fanboy and be embarrassed.” Chris Crawford calls the same title simply “the best computer-game design of all time.” Brenda Laurel dedicated Purple Moon’s output to Bunten. Sid Meier was so taken with Seven Cities of Gold that Pirates!, Railroad Tycoon, and Civilization, his trilogy of masterpieces, can all be described as extensions in one way or another of what Bunten first wrought. And Seven Cities of Gold was only Meier’s second favorite Bunten game: he loved M.U.L.E. so much that he was afraid to even try to improve on it.

Ironically, the very multiplayer affordances that Bunten so steadfastly refused to give up on, much to the detriment of her income, continue to make it difficult for her games to be seen at their best today. M.U.L.E. can be played as its designer really intended it only on an Atari 8-bit computer — real or emulated — with four vintage joysticks plugged in and four players holding onto them in a single living room; that is, needless to say, not a trivial thing to arrange in this day and age. Likewise, the need to have the exceedingly rare physical cards to hand has made it impossible for most people to even try out Robot Rascals today. (It took me months to track down a pricey German edition on eBay.) And Bunten’s final run of boxed games, reliant on ancient modem hookups as they are, are even more difficult to play with others today than they were in their own time.

Dani Bunten didn’t have an easy life, internally or externally. She remained always an enigma — the life of the party who goes home alone, the proverbial stranger among her best friends. One person who knew her after she became a woman claimed she still had a “shadowed, slightly haunted look, even when she was smiling.” Given the complicated emotions that are still stirred up in so many of us by transgenderism, that may have been projection. On the other hand, though, it may have been perception. Even Bunten’s childhood had been haunted by the specter of familial discord and possibly abuse, to such an extent that she refused to talk much about it. But she did once tell Greg Costikyan that she grew up loving games mainly because it was only when playing them that her family wasn’t “totally dysfunctional.”

I think that for Dani Bunten games were most of all a means of communication, a way of punching through that bubble of ego and identity that isolates all of us to one degree or another, and that perhaps isolated her more so than most. Thus her guiding vision became, as Sid Meier puts it, “the family gathered around the computer.” After all, it’s a small step to go from communicating to connecting, from connecting to loving. She openly stated that she had made Robot Rascals for her own family most of all: “They’ve never played my games. I think they found them too esoteric or complex. I wanted something that I could enjoy with them, that they’d all be able to relate to.” The tragedy for her — perhaps a key to the essential sadness many felt at Bunten’s core, whether she was living as a man or a woman — is that reality never quite lived up to that Norman Rockwell dream of the happy family gathered around a computer; her daughter, the duly appointed caretaker of her legacy, still calls M.U.L.E. “boring and tedious” today. But the dream remains, and her games have given those of us privileged to discover them great joy and comfort in the midst of lives that have admittedly — hopefully! — been far easier than that of their creator. And so I’ll close, in predictable but unavoidable fashion, with Danielle Bunten Berry’s most famous quote — a quote predictable precisely because it so perfectly sums up her career: “No one on their death bed ever said, ‘I wish I had spent more time alone with my computer!'” Words to live by, my fellow gamers. Words to live by.

Danielle Bunten Berry, 1949-1998.

(Sources: Compute! of March 1989, December 1989, April 1990, January 1992, and December 1993; Questbusters of May 1986; Commodore Power Play of June/July 1986; Commodore Magazine of July 1987, October 1988, and June 1989; Ahoy! of March 1987; Computer Gaming World of January/February 1987, May 1988, February 1989, February 1990, December 1990, February 1991, March 1991, May 1991, April 1992, June 1992, August 1992, June 1993, August 1993, July 1994, September 1995, and October 1998; Family Computing of January 1987; Compute!’s Gazette of August 1989; The One of April 1991; Game Players PC Entertainment of September 1992; Game Developer of February/March 1995, July 1998, September 1998, and October 1998; Electronic Arts’s newsletter Farther of Winter 1986; Power Play of January 1995; Arkansas Times of February 8 2012. Online sources include the archived contents of the old World of Mule site, the archived contents of a Danielle Bunten Berry tribute site, the Salon article “Get Behind the M.U.L.E.”, and Bunten’s interview at Halcyon Days.)

 

The Game of Everything, Part 10: Civilization and the Limits of Progress

To judge by what Sid Meier says about his most famous achievement today, writing all of these articles on Civilization has been like doing a deep reading of an episode of The Big Bang Theory; there just isn’t a whole lot of there there. Meier claims that the game presents at best a children’s-book view of history, that the only real considerations that went into it were what would be fun and what wouldn’t. I don’t want to criticize him for that stance here, any more than I want to minimize the huge place that fun or the lack thereof really did fill in the decisions that he and his partner Bruce Shelley made about Civilization. I understand why he says what he says: he’s a commercial game designer, not a political pundit, and he has no desire to wade into controversy — and possibly shrink his customer base — by taking public positions on the sorts of fractious topics I’ve been addressing over the course of these articles. If he should need further encouragement to stay well away from those topics, he can find it in the many dogmatic academic critiques of Civilization which accuse it of being little more than triumphalist propaganda. He’d rather spend his time talking about game design, which strikes me as perfectly reasonable.

Having said all that, it’s also abundantly clear to me that Civilization reflects a much deeper and more earnest engagement with the processes of history than Meier is willing to admit these days. This is, after all, a game which cribs a fair amount of its online Civilopedia directly from Will Durant, author of the eleven-volume The Story of Civilization, the most ambitious attempt to tell the full story of human history to date. And it casually name-drops the great British historian Arnold J. Toynbee, author of the twelve-volume A Study of History, perhaps the most exhaustive — and certainly the most lengthy — attempt ever to construct a grand unified theory of history. These are not, needless to say, books which are widely read by children. There truly is a real theory of history to be found in Civilization as well, one which, if less thoroughly worked-out than what the likes of Toynbee have presented in book form, is nevertheless worth examining and questioning at some length.

The heart of Civilization‘s theory of history is of course the narrative of progress. In fact, the latter is so central to the game that it has served as the second of our lodestars throughout this series of articles. And so, as we come to the end of the series, it seems appropriate to look at what the game and the narrative of progress have to say about one another one last time, this time in the context of a modern society like the ones in which we live today. Surprisingly, given how optimistic the game’s take on history generally is, it doesn’t entirely ignore the costs that have all too clearly been shown to be associated with progress in this modern era of ours.

Meier and Shelley were already working on Civilization when the first international Earth Day was held on April 22, 1990, marking the most important single event in the history of the environmental movement since the publication of Rachel Carson’s Silent Spring back in 1962. Through concerts, radio and television programs, demonstrations, and shrewd publicity stunts like a Mount Everest “Peace Climb” including American, Soviet, and Chinese climbers roped together in symbolic co-dependence, Earth Day catapulted the subject of global warming among other environmental concerns into the mass media, in some cases for the first time.

Whether influenced by this landmark event or not, Civilization too manifests a serious concern for the environment in the later, post-Industrial Revolution stages of the game. Coal- and oil-fired power plants increase the productivity of your factories dramatically, but also spew pollution into the air which you must struggle to clean up. Nuclear power plants, while the cheapest, cleanest, and most plentiful sources of energy most of the time, can occasionally melt down with devastating consequences to your civilization. Large cities generate pollution of their own even absent factories and power plants, presumably as a result of populations that have discovered the joy of automobiles. Too much pollution left uncleaned will eventually lead not only to sharply diminished productivity for your civilization but also to global warming, making Civilization one of the first works of popular entertainment to acknowledge a phenomenon that was already of growing concern among scientists in the early 1990s.

In fighting your rearguard action against these less desirable fellow travelers on the narrative of progress, you have various tools at your disposal. To clean up pollution that’s already occurred, you can build settler units and deploy them to the affected areas. To prevent some pollution from occurring at all, you can invest in hydroelectric plants in general and/or the Wonder of the World that is the Hoover Dam, build mass-transit systems to wean your people away from their precious cars, and build recycling centers to keep some of their trash from winding up in landfills.
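
Purely by way of illustration — this is a toy model with made-up numbers and names, not MicroProse’s actual formula — the tug-of-war the game sets up between output and pollution looks something like this:

```python
# A toy model of the pollution tug-of-war described above -- NOT the game's
# actual formula, just an illustration of the trade-offs the player faces.
# All names and numbers here are hypothetical.

def city_pollution(production, population, power_plant=False,
                   hydro_or_hoover=False, mass_transit=False,
                   recycling_center=False):
    """Rough, hypothetical pollution output of one city for one turn."""
    industrial = production * (2 if power_plant else 1)  # plants boost output...
    if hydro_or_hoover:
        industrial //= 2       # ...while cleaner power cuts the smokestack share
    if recycling_center:
        industrial //= 3

    civic = population         # big cities pollute even without factories
    if mass_transit:
        civic = 0              # fewer automobiles on the road

    return industrial + civic  # left uncleaned by settlers, this accumulates
                               # and raises the odds of global warming

# A smokestack city versus a "green" one with the same raw output:
print(city_pollution(20, 12, power_plant=True))                   # 52
print(city_pollution(20, 12, power_plant=True, hydro_or_hoover=True,
                     mass_transit=True, recycling_center=True))   # 6
```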

Interestingly, the original Civilization addresses the issues of environment and ecology that accompany the narrative of progress with far more earnestness than any of its sequels — another fact that rather gives the lie to Meier’s assertion that the game has little to do with the real world. Although even the first game’s implementation of pollution is far from unmanageable by the careful player, it’s something that most players just never found to be all that much fun, and this feedback caused the designers who worked on the sequels to gradually scale back its effects.

In the real world as well, pollution and the threat of global warming aren’t much fun to talk or think about — so much so that plenty of people, including an alarming number of those in positions of power, have chosen to stick their heads in the sand and pretend they don’t exist. None of us enjoy having our worldviews questioned in the uncomfortable ways that discussions of these and other potential limits of progress — progress as defined in terms of Francis Fukuyama’s explicit and Civilization‘s implicit ideals of liberal, capitalistic democracy — tend to engender.

As Adam Smith wrote in the pivotal year of 1776 and the subsequent centuries of history quite definitively proved, competitive free markets do some things extraordinarily well. The laws of supply and demand conspire to ensure that a society’s resources are allocated to those things its people actually need and want, while the profit motive drives innovation in a way no other economic system has ever come close to equaling. The developed West’s enormous material prosperity — a prosperity unparalleled in human history — is thanks to capitalism and its kissing cousin, democracy.

Yet unfettered capitalism, that Platonic ideal of libertarian economists, has a tendency to go off the rails if not monitored and periodically corrected by entities who are not enslaved by the profit motive. The first great crisis of American capitalism could be said to have taken place as early as the late 1800s, during the “robber baron” era of monopolists who discovered a way to cheat the law of supply and demand by cornering entire sectors of the market to themselves. Meanwhile the burgeoning era of mass production and international corporations, so dramatically different from Adam Smith’s world of shopkeepers and village craftsmen, led to the mass exploitation of labor. The response from government was an ever-widening net of regulations to keep corporations honest, while the response from workers was to unionize for the same purpose. Under these new, more restrictive conditions, capitalism continued to hum along, managing to endure another, still greater crisis of confidence in the form of the Great Depression, which led to the idea of a taxpayer-funded social safety net for the weak and the unlucky members of society.

The things that pure capitalism doesn’t do well, like providing for the aforementioned weak and unlucky who lack the means to pay for goods and services, tend to fall under the category that economists call “externalities”: benefits and harms that aren’t encompassed by Adam Smith’s supposedly all-encompassing law of supply and demand. In Smith’s world of shopkeepers, what was best for the individual supplier was almost always best for the public at large: if I sold you a fine plow horse for a reasonable price, I profited right then and there, and also knew that you were likely to tell your friends about it and to come back yourself next year when you needed another. If I sold you a lame horse, on the other hand, I’d soon be out of business. But if I’m running a multinational oil conglomerate in the modern world, that simple logic of capitalism begins to break down in the face of a much more complicated web of competing concerns. In this circumstance, the best thing for me to do in order to maximize my profits is to deny that global warming exists and do everything I can to fight the passage of laws that will hurt my business of selling people viscous black gunk to burn inside dirty engines. This, needless to say, is not in the public’s long-term interest; it’s an externality that could quite literally spell the end of human civilization. So, government must step in — hopefully! — to curb the burning of dirty fuels and address the effects of those fossil fuels that have already been burned.

But externalities are absolutely everywhere in our modern, interconnected, globalized world of free markets. Just as there’s no direct financial benefit in an unfettered free market for a doctor to provide years’ or decades’ worth of healthcare to a chronically sick person who lacks the means to pay for it, there’s no direct financial harm entailed in a factory dumping its toxic effluent into the nearest lake. There is, of course, harm in the abstract, but that harm is incurred by the people unlucky enough to live by the lake rather than by the owners of the factory. The trend throughout the capitalist era has therefore been for government to step in more and more; every successful capitalist economy in the world today is really a mixed economy, to a degree that would doubtless have horrified Adam Smith. As externalities continue to grow in size and scope, governments are forced to shoulder a bigger and bigger burden in addressing them. At what point does that burden become unbearable?

One other internal contradiction of modern capitalism, noticed by Karl Marx as early as the nineteenth century, has come to feel more real and immediate than ever before in the years since the release of Civilization. The logic of modern finance demands yearly growth — ever greater production, ever greater profits. Just holding steady isn’t good enough; if you doubt my word, consider what your pension fund will look like come retirement time if the corporations in which you’ve invested it are content to merely hold steady. Up to this point, capitalism’s efficiency as an economic system has allowed it to deliver this growth on a decade-by-decade if not always year-by-year basis. But the earth’s resources are not unlimited. At some point, constant growth — the constant demand for more, more, more — must become unsustainable. What happens to capitalism then?

Exactly the future that believers in liberal democracy and capitalism claim to be the best one possible — that the less-developed world remakes itself in the mold of North America and Western Europe — would appear to be literally impossible in reality. The United States alone, home to 6 percent of the world’s population, consumes roughly 35 percent of its resources. One doesn’t need to be a statistician or an ecologist to understand that the rest of the world simply cannot become like the United States without ruining a global ecosystem that already threatens to collapse under the weight of 7.5 billion souls — roughly twice the number of just fifty years ago. Humans are now the most common mammal on the planet, outnumbering even the ubiquitous mice and rats. Two-thirds of the world’s farmland is already rated as “somewhat” or “strongly” degraded by the Organization for Economic Cooperation and Development. Three-quarters of the world’s biodiversity has been lost since 1900, and 50 percent of all remaining plant and animal species are expected to go extinct before 2100. And hovering over it all is the specter of climate change; the polar ice caps have melted more in the last 20 years than they had in the previous 12,000 years, going back to the end of the last ice age.

There’s no doubt about it: these are indeed uncomfortable conversations to have. Well before the likes of Brexit and President Donald Trump, even before the events of September 11, 2001, Western society was losing the sense of triumphalism that had marked the time of the original Civilization, replacing it with a jittery sense that humanity was packed too closely together on an overcrowded and overheating little planet, that the narrative of progress was rushing out of control toward some natural limit point that was difficult to discern or describe. The first clear harbinger of the generalized skittishness to come was perhaps the worldwide angst that accompanied the turn of the millennium — better known as “Y2K,” a fashionable brand name for disaster that smacked of Hollywood, thereby capturing the strange mixture of gloom and mass-media banality that would come to characterize much of post-millennial life. The historian of public perception David Lowenthal, writing in 2015:

Events spawned media persistently catastrophic in theme and tone, warning of the end of history, the end of humanity, the end of nature, the end of everything. Millennial prospects in 2000 were lacklustre and downbeat; Y2K seemed a portent of worse to come. Not even post-Hiroshima omens of nuclear annihilation unleashed such a pervasive glum foreboding. Today’s angst reflects unexampled loss of faith in progress: fears that our children will be worse off than ourselves, doubts that neither government nor industry, science nor technology, can set things right.

The turn of the millennium had the feeling of an end time, yet none of history’s more cherished eschatologies seemed to be coming true: not Christianity’s Rapture, not Karl Marx’s communist world order, not Georg Wilhelm Friedrich Hegel or Francis Fukuyama’s liberal-democratic end of history, certainly not Sid Meier and Bruce Shelley’s trip to Alpha Centauri. Techno-progressives began to talk more and more of a new secular eschatology in the form of the so-called Singularity, the point where, depending on the teller, artificial intelligence would either merge with human intelligence to create a new super-species fundamentally different from the humans of prior ages, or our computers would simply take over the world, wiping out their erstwhile masters or relegating them to the status of pets. And that was one of the more positive endgames for humanity that came to be batted around. Others nursed apocalyptic visions of a world ruined by global warming and the rising sea levels associated with it — a secular version of the Biblical Flood — or completely overrun by Islamic Jihadists, those latest barbarians at the gates of civilization heralding the next Dark Ages. Our television and movies turned increasingly dystopic, with anti-heroes and planet-encompassing disasters coming to rule our prime-time entertainment.

The last few years in particular haven’t been terribly good ones for believers in the narrative of progress and the liberal-democratic world order it has done so much to foster. The Arab Spring, touted for a time as a backward region’s belated awakening to progress, collapsed without achieving much of anything at all. Britain is leaving the European Union; the United States elected Donald Trump; Russia is back to relishing the role of the Evil Empire, prime antagonist to the liberal-democratic West; China has gone a long way toward consummating a marriage once thought impossible: the merging of an autocratic, human-rights-violating government with an economy capable of competing with the best that democratic capitalism can muster. Our politicians issue mealy-mouthed homages to “realism” and “transactional diplomacy,” ignoring the better angels of our nature. Everywhere nativism and racism seem to be on the rise. Even in the country where I live now, the supposed progressive paradise of Denmark, the Danish People’s Party has won considerable power in the government by sloganeering that “Denmark is not a multicultural society,” by drawing lines between “real” Danes and those of other colors and other religions. In my native land of the United States, one side of the political discourse, finding itself unable to win a single good-faith argument on the merits, has elected to simply lie about the underlying facts, leading some to make the rather chilling assertion that we now live in a “post-truth” world. (How ironic that the American right, long the staunchest critic of postmodernism, should have been the ones to turn its lessons about the untenability of objective truth into an electoral strategy!)

And then there’s the incoming fire being taken by the most sacred of all of progress’s sacred cows, as The Economist‘s latest Democracy Index announces that it “continues its disturbing retreat.” In an event redolent with symbolism, the same index in 2016 changed the classification of the United States, that beacon of democracy throughout its history, from a “Full Democracy” to a “Flawed Democracy.” Functioning as both cause and symptom of this retreat is the old skepticism about whether democracy is just too chaotic to efficiently run a country, whether people who can so easily be duped by Facebook propaganda and email chain letters can really be trusted to decide their countries’ futures.

Looming over such discussions of democracy and its efficacy is the specter of China. When Mao Zedong’s Communist Party seized power there in 1949, the average Chinese citizen earned just $448 per year in inflation-adjusted terms, making it one of the poorest countries in the world. Mao’s quarter-century of orthodox communist totalitarianism, encompassing the horrors of the Great Leap Forward and the Cultural Revolution, managed to improve that figure only relatively slowly; average income had increased to $978 by 1978. But, following Mao’s death, his de-facto successor Deng Xiaoping began to depart from communist orthodoxy, turning from a centrally-managed economy to the seemingly oxymoronic notion of “market-oriented communism” — effectively a combination of authoritarianism with capitalism. Many historians and economists — not least among them Francis Fukuyama — have always insisted that a non-democracy simply cannot compete with a democracy on economic terms over a long span of time. Yet the economy of post-Mao China has seemingly grown at a far more impressive rate than their theories allow to be possible, with average income reaching $6,048 by 2006, then $16,624 by 2017. China today would seem to be a compelling rebuttal to all those theories about the magic conjunction of personal freedoms and free markets.

But is it really? We should be careful not to join some of our more excitable pundits in getting ahead of the real facts of the case. China’s economic transformation, remarkable as it’s been, has only elevated it to the 79th position among all the world’s nations in terms of GDP per capita. Its considerable economic clout in the contemporary world, in other words, has a huge amount to do with the fact that it’s the most populous country in the world. Further, the heart of its economy is manufacturing, as is proved by all of those “Made in China” tags on hard goods of every description that are sold all over the world. China is still a long, long way from joining the vanguard of post-industrial knowledge economies. To a large extent, economic innovation still comes from the latter; China then does the grunt work of manufacturing the products that the innovators design.

Of course, authoritarianism does have its advantages. China’s government, which doesn’t need to concern itself with elections every set number of years, can set large national projects in motion, such as a green energy grid spanning the entire country or even a manned trip to Mars, and see them methodically through over the course of decades if need be. But can China under its current system of government produce a truly transformative, never-seen-or-imagined-anything-like-it product like the Apple iPhone and iPad, the World Wide Web, or the Sony Walkman? It isn’t yet clear to me that it can move beyond being an implementer of brilliant ideas — thanks to all those cheap and efficient factories — to become an originator of same. So, personally, I’m not quite ready to declare the death of the notion that a country requires democracy to join the truly top-tier economies of the world. The next few decades should be very interesting in one way or another — whether because China does definitively disprove that notion, because its growth tops out, or, most desirably, because a rising standard of living there and the demands of a restive middle class bring an end at last to China’s authoritarian government.

Still, none of these answers to The China Puzzle will do anything to help us with the fundamental limit point of the capitalistic world order: the demand for infinite economic growth in a world of decidedly finite resources. Indeed, the Chinese outcome I just named as the most desirable — that of a democratic, dynamic China free of the yoke of its misnamed Communist Party — only causes our poor, suffering global ecosystem to suffer that much more under the yoke of capitalism. For this reason, economists today have begun to speak more and more of a “crisis of capitalism,” to question whether Adam Smith’s brilliant brainchild is now entering its declining years. For a short time, the “Great Recession” of 2007 and 2008, when some of the most traditionally rock-solid banks and corporations in the world teetered on the verge of collapse, seemed like it might be the worldwide shock that signaled the beginning of the end. Desperate interventions by governments all over the world managed to save the capitalists from themselves at the last, but even today, when the economies of most Western nations are apparently doing quite well, the sense of unease that was engendered by that near-apocalypse of a decade ago has never fully disappeared. The feeling remains widespread that something has to give sooner or later, and that that something might be capitalism as we know it today.

But what would a post-capitalist world look like? Aye, there’s the rub. Communism, capitalism’s only serious challenger over the course of the last century, would seem to have crashed and burned a long time ago as a practical way of ordering an economy. Nor, based on the horrid environmental record of the old Soviet bloc, is it at all clear that it would have proved any better a caretaker of our planet than capitalism even had it survived.

One vision for the future, favored by the anarchist activists whom we briefly met in an earlier article, entails a deliberate winding down of the narrative of progress before some catastrophe or series of catastrophes does it for us. It’s claimed that we need to abandon globalization and re-embrace localized, self-sustaining ways of life; it’s thus perhaps not so much a complete rejection of capitalism as a conscious return to Adam Smith’s era of shopkeepers and craftsmen. The prominent American anarchist Murray Bookchin dreams of a return to “community, decentralization, self-sufficiency, mutual aid, and face-to-face democracy” — “a serious challenge to [globalized] society with its vast, hierarchical, sexist, class-ruled state apparatus and militaristic history.” Globalization, he and other anarchists note, often isn’t nearly as efficient as its proselytizers claim. In fact, the extended international supply chains it fosters for even the most basic foodstuffs are often absurdly wasteful in terms of energy and other resources, and brittle to boot, vulnerable to the slightest shock to the globalized system. Why should potatoes which can be grown in almost any back garden in the world need to be shipped in via huge, fuel-guzzling jet airplanes and forty-ton semis? Locally grown agriculture, anarchists point out, can provide eight units of food energy for every one unit of fossil-fuel energy needed to bring it to market, while in many cases exactly the opposite ratio holds true for internationally harvested produce.

But there’s much more going on here philosophically than a concern with the foodstuff supply chain. Modern anarchist thought reflects a deep discomfort with consumer culture, a strand of philosophy we’ve met before in the person of Jean-Jacques Rousseau and his “noble savage.” In truth, Rousseau noted, the only things a person really, absolutely needs to survive are food and shelter. All else is, to paraphrase the Bible, vanity, and all too often brings only dissatisfaction. Back in the eighteenth century, Rousseau could already describe the collector who is never satisfied by the collection he’s assembled, only dissatisfied by its gaps.

What would he make of our times? Today’s world is one of constant beeping enticements — cars, televisions, stereos, computers, phones, game consoles — that bring only the most ephemeral bursts of happiness before we start craving the upgraded model. The anarchist activist Peter Harper:

People aspire to greater convenience and comfort, more personal space, easy mobility, a sense of expanding possibilities. This is the modern consumerist project: what modern societies are all about. It is a central feature of mainstream politics and economics that consumerist aspirations are not seriously challenged. On the contrary, the implied official message is “Hang on in there: we will deliver.” The central slogan is brutally simple: MORE!

Harper claims that, as the rest of the world continues to try and fail to find happiness in the latest shiny objects, anarchists will win them over to their cause by example. For those who reject materialist culture “will quite visibly be having a good time: comfortable, with varied lives and less stress, healthy and fit, having rediscovered the elementary virtues of restraint and balance.”

Doubtless we could all use a measure of restraint and balance in our lives, but the full anarchist project for happiness and sustainability through a deliberate deconstruction of the fruits of progress is so radical — entailing as it does the complete dissolution of nation-states and a return to decentralized communal living — that it’s difficult to fully envision how it could happen absent the sort of monumental precipitating global catastrophe that no one can wish for. While human nature will always be tempted to cast a wistful eye back to an imagined simpler, more elemental past, another, perhaps nobler part of our nature will always look forward with an ambitious eye to a bolder, more exciting future. The oft-idealized life of a tradesman prior to the Industrial Revolution, writes Francis Fukuyama, “involved no glory, dynamism, innovation, or mastery; you just plied the same traditional markets or crafts as your father and grandfather.” For many or most people that may be a fine life, and more power to them. But what of those with bigger dreams, who would spur humanity on to bigger and better things? That is to say, what of the authors of the narrative of progress of the past, present, and future, who aren’t willing to write the whole thing off as fun while it lasted and return to the land? The builders among us will never be satisfied with a return to some agrarian idyll.

The world’s current crisis of faith in progress and in the liberal-democratic principles that are so inextricably bound up with it isn’t the first or the worst of its kind. Not that terribly long ago, Nazi Germany and Imperial Japan posed a far more immediate and tangible threat to liberal democracy all over the world than anything we face today; the American Nazi party was once strong enough to rent and fill Madison Square Garden, a fact which does much to put the recent disconcerting events in Charlottesville in perspective. And yet liberal democracy got through that era all right in the end.

Even in 1983, when the Soviet Union was already teetering on the verge of economic collapse, an unknowing Jean-François Revel could write that “democracy may, after all, turn out to have been an historical accident, a brief parenthesis that is closing before our eyes.” The liberal West’s periods of self-doubt have always seemed to outnumber and outlast its periods of triumphalism, and yet progress has continued its march. During the height of the fascist era, voting rights in many democratic countries were being expanded to include all of their citizens at long last; amidst the gloominess about the future that has marked so much of post-millennial life, longstanding prejudices toward gay and lesbian people have fallen away so fast in the developed West that it’s left even many of our ostensibly progressive politicians scrambling to keep up.

Of course, the fact still remains that our planet’s current wounds are real, and global warming may in the long run prove to be the most dangerous antagonist humanity has ever faced. If we’re unwilling to accept giving up the fruits of progress in the name of healing our planet, where do we go from here? One thing that is clear is that we will have to find different, more sustainable ways of ordering our economies if progress is to continue its march. Capitalism is often praised for its ability to sublimate what Francis Fukuyama calls the megalothymia of the most driven souls among us — the craving for success, achievement, recognition, victory — into the field of business rather than the field of battle. Would other megalothymia sublimators, such as sport, be sufficient in a post-capitalist world? What would a government/economy look like that respects people’s individual freedoms but avoids the environment-damaging, resource-draining externalities of capitalism? No one — certainly not I! — can offer entirely clear answers to these questions today. This is not so much a tribute to anything unique about our current times as it is a tribute to the nature of history itself. Who anticipated Christianity? Who anticipated that we would use the atomic bomb only twice? Who, for that matter, anticipated a President Donald Trump?

One possibility, at least in the short term, is to rejigger the rules of capitalism to bring its most problematic externalities back under the umbrella of the competitive marketplace. Experiments in cap-and-trade, which cap total carbon emissions and turn the right to emit into a scarce commodity that corporations can trade among themselves, have shown promising results.

But in the longer term, still more than just our economics will have to change. Because the problems of ecology and environment are global problems of a scope we’ve never faced before, we will need to think of ourselves more and more as a global society in order to solve them. In time, the nation-states in which we still invest so much patriotic fervor today may need to go the way of the scattered, self-sufficient settlements of a few dozen or a few hundred people that marked the earliest stages of the earliest civilizations. In time, the seeds that were planted with the United Nations in the aftermath of the bloodiest of all our stupid, pointless wars may flower into a single truly global civilization.

Really, though, I can’t possibly predict how humanity will progress its way out of its current set of predicaments. I can only have faith in the smarts and drive that have brought us this far. The best we can hope for is probably to muddle through by the skin of our teeth — but then, isn’t that what we’ve always been doing? The first civilizations began as improvised solutions to the problem of a changing climate, and we’ve been making it up as we go along ever since. So, maybe the first truly global civilization will also arise as, you guessed it, an improvised solution to the problem of a changing climate. Even if we’ve met our match with our latest nemesis of human-caused climate change, perhaps it really is better to burn out than to fade away. Perhaps it’s better to go down swinging than to survive at the cost of the grand dream of an eventual trip to Alpha Centauri.

The game which has the fulfillment of that dream as its most soul-stirring potential climax has been oft-chided for promoting a naive view of history — for being Western- and American-centric, for ignoring the plights of the vast majority of the people who have ever inhabited this planet of ours, for ignoring the dangers of the progress it celebrates. It is unquestionably guilty of all these things in whole or in part, and guilty of many more sins against history besides. But I haven’t chosen to emphasize overmuch its many problems in this series of articles because I find its guiding vision of a human race capable of improving itself down through the millennia so compelling and inspiring. Human civilization needs its critics, but it needs its optimists perhaps even more. So, may the optimistic outlook of the narrative of progress last as long as our species, and may we always have to go along with it the optimism of the game of Civilization — or of a Civilization VI, Civilization XVI, or Civilization CXVI — to exhort us to keep on keeping on.

(Sources: the books Civilization, or Rome on 640K a Day by Johnny L. Wilson and Alan Emrich, The End of History and the Last Man by Francis Fukuyama, Democracy: A Very Short Introduction by Bernard Crick, Anarchism: A Very Short Introduction by Colin Ward, Environmental Economics: A Very Short Introduction by Stephen Smith, Globalization: A Very Short Introduction by Manfred B. Steger, Economics: A Very Short Introduction by Partha Dasgupta, Global Economic History: A Very Short Introduction by Robert C. Allen, Capital by Karl Marx, The Social Contract by Jean-Jacques Rousseau, The Genealogy of Morals by Friedrich Nietzsche, Lectures on the Philosophy of History by Georg Wilhelm Friedrich Hegel, The Wealth of Nations by Adam Smith, How Democracies Perish by Jean-François Revel, and The Past is a Foreign Country by David Lowenthal.)

 


The Game of Everything, Part 9: Civilization and Economics

If the tailor goes to war against the baker, he must henceforth bake his own bread.

— Ludwig von Mises

There’s always the danger that an analysis of a game spills into over-analysis. Some aspects of Civilization reflect conscious attempts by its designers to model the processes of history, while some reflect unconscious assumptions about history; some aspects represent concessions to the fact that it first and foremost needs to work as a playable and fun strategy game, while some represent sheer random accidents. It’s important to be able to pull these things apart, lest the would-be analyzer wander into untenable terrain.

Any time I’m tempted to dismiss that prospect, I need only turn to Johnny L. Wilson and Alan Emrich’s ostensible “strategy guide” Civilization: or Rome on 640K a Day, which is actually far more interesting as the sort of distant forefather of this series of articles — as the very first attempt ever to explore the positions and assumptions embedded in the game. Especially given that it is such an early attempt — the book was published just a few months after the game, being largely based on beta versions of same that MicroProse had shared with the authors — Wilson and Emrich do a very credible job overall. Yet they do sometimes fall into the trap of seeing what their political beliefs make them wish to see, rather than what actually existed in the minds of the designers. The book doesn’t explicitly credit which of the authors wrote what, but one quickly learns to distinguish their points of view. And it turns out that Emrich, whose arch-conservative worldview is on the whole more at odds with that of the game than Wilson’s liberal-progressive view, is particularly prone to projection. Among the most egregious and amusing examples of him using the game as a Rorschach test is his assertion that the economy-management layer of Civilization models a rather dubious collection of ideas that have more to do with the American political scene in 1991 than they do with any proven theories of economics.

We know we’re in trouble as soon as the buzzword “supply-side economics” turns up prominently in Emrich’s writing. It burst onto the stage in a big way in the United States in 1980 with the election of Ronald Reagan as president, and has remained to this day one of his Republican Party’s main talking points on the subject of economics in general. Its central, counter-intuitive claim is that tax revenues can often be increased by cutting rather than raising tax rates. Lower taxes, goes the logic, provide such a stimulus to the economy as a whole that people wind up making a lot more money. And this in turn means that the government, even though it takes a smaller cut of each dollar, ends up bringing in more tax revenue in the aggregate.

In seeing what he wanted to see in Civilization, Alan Emrich decided that it hewed to contemporary Republican orthodoxy not only on supply-side economics but also on another subject that was constantly in the news during the 1980s and early 1990s: the national debt. The Republican position at the time was that government deficits were always bad; government should be run like a business in all circumstances, went their argument, with an orderly bottom line.

But in the real world, supply-side economics and a zero-tolerance policy on deficits tend to be, shall we say, incompatible with one another. Since the era of Ronald Reagan, Republicans have balanced these oil-and-water positions against one another by prioritizing tax cuts when in power and wringing their hands over the deficit — lamenting the other party’s supposedly out-of-control spending on priorities other than their own — when out of power. Emrich, however, sees in Civilization‘s model of an economy the grand unifying theory of his dreams.

Let’s quickly review the game’s extremely simplistic handling of the economic aspects of civilization-building before we turn to his specific arguments, such as they are. The overall economic potential of your cities is expressed as a quantity of “trade arrows.” As leader, you can devote the percentage of trade arrows you choose to taxes, which add money to your treasury for spending on things like the maintenance costs of your buildings and military units and tributes to other civilizations; research, which lets you acquire new advances; and, usually later in the game, luxuries, which help to keep your citizens content. There’s no concept of deficit spending in the game; if ever you don’t have enough money in the treasury to maintain all of your buildings and units at the end of a turn, some get automatically destroyed. This, then, leads Emrich to conclude that the game supports his philosophy on the subject of deficits in general.
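
For readers who want to see the shape of the thing in something more concrete than prose, here is a minimal sketch of the allocation scheme just described — illustrative pseudologic only, not MicroProse’s actual code; the function, its parameters, and its numbers are hypothetical.

```python
# Minimal sketch of the economic allocation just described -- illustrative only,
# not Civilization's actual code. Trade arrows are split among taxes, luxuries,
# and research by the player's chosen percentages; there is no deficit spending.

def end_of_turn(trade_arrows, tax_rate, luxury_rate, treasury, upkeep):
    """One turn of the hypothetical model: collect taxes, then pay maintenance."""
    taxes = int(trade_arrows * tax_rate)
    luxuries = int(trade_arrows * luxury_rate)
    research = trade_arrows - taxes - luxuries   # the remainder goes to science

    treasury += taxes
    lost = []                                    # no deficits allowed...
    for asset, cost in upkeep.items():
        if treasury >= cost:
            treasury -= cost
        else:
            lost.append(asset)                   # ...so unpaid assets are destroyed

    return treasury, research, luxuries, lost

# Early game: no upkeep yet, so a 10% tax / 90% science split costs you nothing.
print(end_of_turn(trade_arrows=8, tax_rate=0.1, luxury_rate=0.0,
                  treasury=0, upkeep={}))        # (0, 8, 0, [])
```

With no infrastructure yet to maintain, the low-tax opening costs the player nothing at all — which is precisely the situation Emrich reads so much into below.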

But the more entertaining of Emrich’s arguments are the ones he deploys to justify supply-side economics. At the beginning of a game of Civilization, you have no infrastructure to support, and thus you have no maintenance costs at all — and, depending on which difficulty level you’ve chosen to play at, you may even start with a little bit of money already in the treasury. Thus it’s become standard practice among players to reduce taxes sharply from their default starting rate of 50 percent, devoting the bulk of their civilization’s economy early on to research on basic but vital advances like Pottery, Bronze Working, and The Wheel. With that in mind, let’s try to follow Emrich’s thought process:

To maximize a civilization’s potential for scientific and technological advancement, the authors recommend the following exercise in supply-side economics. Immediately after founding a civilization’s initial city, pull down the Game Menu and select “Tax Rate.” Reduce the tax rate from its default 50% to 10% (90% Science). This reduced rate will allow the civilization to continue to maintain its current rate of expenditure while increasing the rate at which scientific advancements occur. These advancements, in turn, will accelerate the wealth and well-being of the civilization as a whole.

In this way, the game mechanics mirror life. The theory behind tax reduction as a spur to economic growth is built on two principles: the multiplier and the accelerator. The multiplier effect is abstracted out of Sid Meier’s Civilization because it is a function of consumer spending.

The multiplier effect says that each tax dollar cut from a consumer’s tax burden and actually spent on consumer goods will net an additional 50 cents at a second stage of consumer spending, an additional 25 cents at a third stage, an additional 12.5 cents at a fourth stage, etc. Hence, economists claim that the full progression nets a total of two dollars for each extra consumer dollar spent as a result of a tax cut.

The multiplier effect cannot be observed in the game because it is only presented indirectly. Additional consumer spending causes a flash point where additional investment takes place to increase, streamline, and advance production capacity and inventory to meet the demands of the increased consumption. Production increases and advances, in turn, have an additional multiplier effect beyond the initial consumer spending. When the scientific advancements occur more rapidly in Sid Meier’s Civilization, they reflect that flash point of additional investment and allow civilizations to prosper at an ever accelerating rate.

Wow. As tends to happen a lot after I’ve just quoted Mr. Emrich, I’m not quite sure where to start. But let’s begin with his third paragraph, in particular with a phrase which is all too easy to overlook: that for this to work, the dollar cut must “actually be spent on consumer goods.” When tax rates for the wealthy are cut, the lucky beneficiaries don’t tend to go right out and spend their extra money on consumer goods. The most direct way to spur the economy through tax cuts thus isn’t to slash the top tax bracket, as Republicans have tended to do; it’s to cut the middle and lower tax brackets, which puts more money in the pockets of those who don’t already have all of the luxuries they could desire, and thus will be more inclined to go right out and spend their windfall.
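For what it’s worth, the two-dollar figure in Emrich’s arithmetic is nothing more exotic than the sum of a geometric series, under the textbook assumption that half of each extra dollar received gets spent again at the next stage:

\[
1 + \frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \cdots \;=\; \sum_{k=0}^{\infty} \left(\frac{1}{2}\right)^{k} \;=\; \frac{1}{1 - \frac{1}{2}} \;=\; 2
\]

More generally, with a marginal propensity to consume of c, the multiplier comes out to 1/(1 − c), which is exactly why it matters so much to the theory that the cut go to people who will actually spend it.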

But, to give credit where it’s due, Emrich does at least include that little phrase about the importance of spending on consumer goods, even if he does rather bury the lede. His last paragraph is far less defensible. To appreciate its absurdity, we first have to remember that he’s talking about “consumer spending” in a Stone Age economy of 4000 BC. What are these consumers spending on? Particularly shiny pieces of quartz? And for that matter, what are they spending, considering that your civilization hasn’t yet developed currency? And how on earth can any of this be said to justify supply-side economics over the long term? You can’t possibly maintain your tax rate of 10 percent forever; as you build up your cities and military strength, your maintenance costs steadily increase, forcing you back toward that starting default rate of 50 percent. To the extent that Civilization can be said to send any message at all on taxes, said message must be that a maturing civilization will need to steadily increase its tax rate as it advances toward modernity. And indeed, as we learned in an earlier article in this series, this is exactly what has happened over the long arc of real human history. Your economic situation at the beginning of a game of Civilization isn’t some elaborate testimony to supply-side economics; it just reflects the fact that one of the happier results of a lack of civilization is the lack of a need to tax anyone to maintain it.

In reality, then, the taxation model in the game is a fine example of something implemented without much regard for real-world economics, simply because it works in the context of a strategy game like this one. Even the idea of such a centralized system of rigid taxation for a civilization as a whole is a deeply anachronistic one in the context of most societies prior to the Enlightenment, for whose people local government was far more important than some far-off despot or monarch. Taxes, especially at the national level, tended to come and go prior to AD 1700, depending on the immediate needs of the government, and lands and goods were more commonly taxed than income, which in the era before professionalized accounting was hard for the taxpayer to calculate and even harder for the tax collector to verify. In fact, a fixed national income tax of the sort on which the game’s concept of a “tax rate” seems to be vaguely modeled didn’t come to the United States until 1913. Many ancient societies — including ones as advanced as Egypt during its Old Kingdom and Middle Kingdom epochs — never even developed currency at all. Even in the game, Currency is an advance which you need to research; the cognitive dissonance inherent in earning coins for your treasury when your civilization lacks the concept of money is best just not thought about.

Let’s take a moment now to see if we can make a more worthwhile connection between real economic history and luxuries, that third category toward which you can devote your civilization’s economic resources. You’ll likely have to begin doing so only if and when your cities start to grow to truly enormous sizes, something that’s likely to happen only under the supercharged economy of a democracy. When all of the usual bread and circuses fail, putting resources into luxuries can maintain the delicate morale of your civilization, keeping your cities from lapsing into revolt. There’s an historical correspondence that actually does seem perceptive here; the economies of modern Western democracies, by far the most potent the world has ever known, are indeed driven almost entirely by a robust consumer market in houses and cars, computers and clothing. Yet it’s hard to know where to really go with Civilization‘s approach to luxuries beyond that abstract statement. At most, you might put 20 or 30 percent of your resources into them, leaving the rest to taxes and research, whereas in a modern developed democracy like the United States those proportions tend to be reversed.

Ironically, the real-world economic system to which Civilization‘s overall model hews closest is actually a centrally-planned communist economy, where all of a society’s resources are the property of the state — i.e., you — which decides how much to allocate to what. But Sid Meier and Bruce Shelley would presumably have run screaming from any such association — not to mention our friend Mr. Emrich, who would probably have had a conniption. It seems safe to say, then, that what we can learn from the Civilization economic model is indeed sharply limited, that most of it is there simply as a way of making a playable game.

Still, we might usefully ask whether there’s anything in the game that does seem like a clear-cut result of its designers’ attitudes toward real-world economics. We actually have seen some examples of that already in the economic effects that various systems of government have on your civilization, from the terrible performance of despotism to the supercharging effect of democracy. And there is one other area where Civilization stakes out some clear philosophical territory: in its attitude toward trade between civilizations, a subject that’s been much in the news in recent years in the West.

In the game, your civilization can reap tangible benefits from its contact with other civilizations in two ways. For one, you can use special units called caravans, which become available after you’ve researched the advance of Trade, to set up “trade routes” between your cities and those of other civilizations. Both then receive a direct boost to their economies, the magnitude of which depends on their distance from one another — farther is better — and their respective sizes. A single city can set up such mutually beneficial arrangements with up to five other cities, and see them continue as long as the cities in question remain in existence.
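The article above describes the payoff only in qualitative terms, so here is a deliberately toy version of the kind of function involved, in which bigger partner cities and longer distances mean a bigger bonus. The formula and its scaling constant are my own invention for illustration, not the game’s actual arithmetic.

```python
# A made-up illustration of a trade-route payoff that grows with the partner
# cities' sizes and with the distance between them; not Civilization's real formula.

def trade_route_bonus(size_a: int, size_b: int, distance: int) -> int:
    """Per-turn trade bonus for two cities joined by a caravan route."""
    return round((size_a + size_b) * distance / 10)

# A route between two size-8 cities on opposite ends of the map beats
# one between next-door neighbours.
print(trade_route_bonus(8, 8, distance=40))  # prints 64
print(trade_route_bonus(8, 8, distance=5))   # prints 8
```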

In addition to these arrangements, you can horse-trade advances directly with the leaders of other civilizations, giving your counterpart one of your advances in exchange for one you haven’t yet acquired. It’s also possible to take advances from other civilizations by conquering their cities or demanding tribute, but such hostile approaches have obvious limits to which a symbiotic trading relationship isn’t subject; fighting wars is expensive in terms of blood and treasure alike, and you’ll eventually run out of enemy cities to conquer. If, on the other hand, you can set up warm relationships with four or five other civilizations, you can positively rocket up the Advances Chart.

The game’s answer to the longstanding debate between free trade and protectionism — between, to put a broader framing on it, a welcoming versus an isolationist attitude toward the outside world — is thus clear: those civilizations which engage economically with the world around them benefit enormously and get to Alpha Centauri much faster. Such a position is very much in line with the liberal-democratic theories of history that were being espoused by thinkers like Francis Fukuyama at the time Meier and Shelley were making the game — thinkers whose point of view Civilization adopts, whether knowingly or not.

As has become par for the course by now, I believe that the position Civilization and Fukuyama alike take on this issue is quite well-supported by the evidence of history. To see proof, one doesn’t have to do much more than look at where the most fruitful early civilizations in history were born: near oceans, seas, and rivers. Egypt was, as the ancient historian Herodotus so famously put it, “the gift of the Nile”; Athens was born on the shores of the Mediterranean; Rome on the east bank of the wide and deep Tiber river. In ancient times, when overland travel was slow and difficult, waterways were the superhighways of their era, facilitating the exchange of goods, services, and — just as importantly — ideas over long distances. It’s thus impossible to imagine these ancient civilizations reaching the heights they did without this access to the outside world. Even today port cities are often microcosms of the sort of dynamic cultural churn that spurs civilizations to new heights. Not for nothing does every player of the game of Civilization want to found her first city next to the ocean or a river — or, if possible, next to both.

To better understand how these things work in practice, let’s return one final time to the dawn of history for a narrative of progress involving one of the greatest of all civilizations in terms of sheer longevity.

Egypt was far from the first civilization to spring up in the Fertile Crescent, that so-called “cradle of civilization.” The changing climate that forced the hunter-gatherers of the Tigris and Euphrates river valleys to begin to settle down and farm as early as 10,000 BC may not have forced the peoples roaming the lands near the Nile to do the same until as late as 4000 BC. Yet Egyptian civilization, once it took root, grew at a crazy pace, going from primitive hunter-gatherers to a culture that eclipsed all of its rivals in grandeur and sophistication in less than 1500 years. How did Egypt manage to advance so quickly? Well, there’s strong evidence that it did so largely by borrowing from the older, initially wiser civilizations to its east.

Writing is among the most pivotal advances for any young civilization; it allows the tallying of taxes and levies, the inventorying of goods, the efficient dissemination of decrees, the beginning of contracts and laws and census-taking. It was if anything even more important in Egypt than in other places, for it facilitated a system of strong central government that was extremely unusual in the world prior to the Enlightenment of many millennia later. (Ancient Egypt at its height was, in other words, a marked exception to the rule about local government being more important than national prior to the modern age.) Yet there’s a funny thing about Egypt’s famous system of hieroglyphs.

In nearby Sumer, almost certainly the very first civilization to develop writing, archaeologists have traced the gradual evolution of cuneiform writing by fits and starts over a period of many centuries. But in Egypt, by contrast, writing just kind of appears in the archaeological record, fully-formed and out of the blue, around 3000 BC. Now, it’s true that Egypt didn’t simply take the Sumerian writing system; the two use completely different sets of symbols. Yet many archaeologists believe that Egypt did take the idea of writing from Sumer, with whom they were actively trading by 3000 BC. With the example of a fully-formed vocabulary and grammar, all translated into a set of symbols, the actual implementation of the idea in the context of the Egyptian language was, one might say, just details.

How long might it have taken Egypt to make the conceptual leap that led to writing without the Sumerian example? Too long, one suspects, for the Pyramids of Giza to have risen by 2500 BC. Further, we see other diverse systems of writing spring up all over the Mediterranean and Middle East at roughly the same time. Writing was an idea whose time had come, thanks to trading contacts. Trade meant that every new civilization wasn’t forced to reinvent every wheel for itself. It’s since become an axiom of history that an outward-facing civilization is synonymous with youth and innovation and vigorous growth, an inward-turning civilization synonymous with age and decadence and decrepit decline. It happened in Egypt; it happened in Greece; it happened in Rome.

But, you might say, the world has changed a lot since the heyday of Rome. Can this reality that ancient civilizations benefited from contact and trade with one another really be applied to something like the modern debate over free trade and globalization? It’s a fair point. To address it, let’s look at the progress of global free trade in times closer to our own.

In the game of Civilization, you won’t be able to set up a truly long-distance, globalized trading network with other continents until you’ve acquired the advance of Navigation, which brings with it the first ships that are capable of transporting your caravan units across large tracts of ocean. In real history, the first civilizations to acquire such things were those of Europe, in the late fifteenth century AD. Economists have come to call this period “The First Globalization.”

And, tellingly, they also call this period “The Great Divergence.” Prior to the arrival of ships capable of spanning the Atlantic and Pacific Oceans, several regions of the world had been on a rough par with Europe in terms of wealth and economic development. In fact, at least one great non-European civilization — that of China — was actually ahead; roughly one-third of the entire world’s economic output came from China alone, outdistancing Europe by a considerable margin. But, once an outward-oriented Europe began to establish itself in the many less-developed regions of the world, all of that changed, as Europe surged forward to the leading role it would enjoy for the next several centuries.

How did the First Globalization lead to the Great Divergence? Consider: when the Portuguese explorer Vasco da Gama reached India in 1498, he found he could buy pepper there, where it was commonplace, for a song. He could then sell it back in Europe, where it was still something of a delicacy, for roughly 25 times what he had paid for it, all while still managing to undercut the domestic competition. Over the course of thousands of similar trading arrangements, much of the rest of the world came to supply Europe with the cheap raw materials which were eventually used to fuel the Industrial Revolution and to kick the narrative of progress into overdrive, making even tiny European nations like Portugal into deliriously rich and powerful entities on the world stage.

And what of the great competing civilization of China? As it happens, it might easily have been China instead of Europe that touched off the First Globalization and thereby separated itself from the pack of competing civilizations. By the early 1400s, Chinese shipbuilding had advanced enough that its ships were regularly crisscrossing the Indian Ocean between established trading outposts on the east coast of Africa. If the arts of Chinese shipbuilding and navigation had continued to advance apace, it couldn’t have been much longer until its ships crossed the Pacific to discover the Americas. How much different would world history have been if they had? Unfortunately for China, the empire’s leaders, wary of supposedly corrupting outside influences, made a decision around 1450 to adopt an isolationist posture. Existing trans-oceanic trade routes were abandoned, and China retreated behind its Great Wall, leaving Europe to reap the benefits of global trade. By 1913, China’s share of the world’s economy had dropped to 4 percent. The most populous country in the world had become a stagnant backwater in economic terms. Europe’s decision to face outward at this critical juncture, just as China turned inward, thus became one of the great difference-makers in world history.

We can already see in the events of the late fifteenth century the seeds of the great debate over globalization that rages as hotly as ever today. While it’s clear that the developed countries of Europe got a lot out of their trading relationships, it’s far less clear that the less-developed regions of the world benefited to anything like the same extent — or, for that matter, that they benefited at all.

This first era of globalization was the era of colonialism, when developed Europe freely exploited the non-developed world by toppling or co-opting whatever forms of government already existed among its new trading “partners.” The period brought a resurgence of the unholy practice of slavery, along with forced religious conversions, massacres, and the theft of entire continents’ worth of territory. Much later, over the course of the twentieth century, Europe gradually gave up most of its colonies, allowing the peoples of its former overseas possessions their ostensible freedom to build their own nations. Yet the fundamental power imbalances that characterized the colonial period have never gone away. Today the developing world of poor nations trades with the developed world of rich nations under the guise of being equal sovereign entities, but the former still feeds raw materials to the industrial economies of the latter — or, increasingly, developing industrial economies feed finished goods to the post-industrial knowledge economies of the ultra-developed West. Proponents of economic globalization argue that all of this is good for everyone concerned, that it lets each country do what it does best, and that the resulting rising economic tide lifts all their boats. And they argue persuasively that the economic interconnections globalization has brought to the world have been a major contributing factor to the unprecedented so-called “Long Peace” of the last three quarters of a century, in which wars between developed nations have not occurred at all and war in general has become much less frequent.

But skeptics of economic globalism have considerable data of their own to point to. In 1820, the richest country in the world on a per-capita basis was the Netherlands, with an inflation-adjusted average yearly income of $1838, while the poorest region of the world was Africa, with an average income of $415. In 2017, the Netherlands had an average income of $53,582, while the poorest country in the world for which data exists was in, you guessed it, Africa: it was the Central African Republic, with an average income of $681. The richest countries, in other words, have seen exponential economic growth over the last two centuries, while some of the poorest have barely moved at all. This pattern is by no means entirely consistent; some countries of Asia in particular, such as Taiwan, South Korea, Singapore, and Japan, have done well enough for themselves to join the upper echelon of highly-developed post-industrial economies. Yet it does seem clear that the club of rich nations has grown to depend on at least a certain quantity of nations remaining poor in order to keep down the prices of the raw materials and manufactured goods they buy from them. If the rising tide lifted these nations’ boats to equality with those of the rich, the asymmetries on which the whole world economic order runs today wouldn’t exist anymore. The very stated benefits of globalization carry within them the logic for keeping the poor nations’ boats from rising too high: if everyone has a rich, post-industrial economy, who’s going to do the world’s grunt work? This debate first really came to the fore in the 1990s, slightly after the game of Civilization, as anti-globalization became a rallying cry of much of the political left in the developed world, who pointed out the seemingly inherent contradictions in the idea of economic globalization as a universal force for good.

Do note that I referred to “economic globalization” there. We should do what we can to separate it from the related concepts of political globalization and cultural globalization, even as the trio can often seem hopelessly entangled in the real world. Still, political globalization, in the form of international bodies like the United Nations and the International Court of Justice, is usually if not always supported by leftist critics of economic globalization.

But cultural globalization is decried to almost an equal degree, being sometimes described as the “McDonaldization” of the world. Once-vibrant local cultures all over the world, goes the claim, are being buried under the weight of an homogenized global culture of consumption being driven largely from the United States. Kids in Africa who have never seen a baseball game rush out to buy the Yankees caps worn by the American rap stars they worship, while gangsters kill one another over Nike sneakers in the streets of China. Developing countries, the anti-globalists say, first get exploited to produce all this crap, then get the privilege of having it sold back to them in ways that further eviscerate their cultural pride.

And yet, as always with globalization, there’s also a flip side. A counter-argument might point out that at the end of the day people have a right to like what they like (personally, I have no idea why anyone would eat a McDonald’s hamburger, but tastes evidently vary), and that cultures have blended with and assimilated one another from the days when ancient Egypt traded with ancient Sumer. Young people in particular in the world of today have become crazily adept at juggling multiple cultures: getting married in a traditional Hindu ceremony on Sunday and then going to work in a smart Western business suit on Monday, listening to Beyoncé on their phone as they bike their way to sitar lessons. Further, the emergence of new forms of global culture, assisted by the magic of the Internet, has already fostered the sorts of global dialogs and global understandings that can help prevent wars; it’s very hard to demonize a culture which has produced some of your friends, or even just creative expressions you admire. As the younger generations who have grown up as members of a sort of global Internet-enabled youth culture take over the levers of power, perhaps they will become the vanguard of a more peaceful, post-nationalist world.

The debate about economic globalization, meanwhile, has shifted in some surprising ways in recent years. Once a cause associated primarily with the academic left, cosseted in their ivory towers, the anti-globalization impulse has now become a populist movement that has spread across the political spectrum in many developed countries of the West. Even more surprisingly, the populist debate has come to center not on globalization’s effect on the poor nations on the wrong side of the power equation but on those rich nations who would seem to be its clear-cut beneficiaries. In just the last couple of years as of this writing, blue-collar workers who feel bewildered and displaced by the sheer pace of an ever-accelerating narrative of progress in an ever more multicultural world were a driving force behind the Brexit vote in Britain and the election of Donald Trump to the presidency of the United States. The understanding of globalization which drove both events was simplistic and confused — trade deficits are no more always a bad thing for any given country than is a national budget deficit — but the visceral anger behind them was powerful enough to shake the established Western world order more than any event since the World Trade Center attack of 2001. It should become clearer in the next decade or so whether, as I suspect, these movements represent a reactionary last gasp of the older generation before the next, more multicultural and internationalist younger generation takes over, or whether they really do herald a more fundamental shift in geopolitics.

As for the game of Civilization: to attempt to glean much more from its simple trading mechanisms than we already have would be to fall into the same trap that ensnared Alan Emrich. A skeptic of globalization might note that the game is written from the perspective of the developed world, and thus assumes that your civilization is among the privileged ranks for whom globalization on the whole has been — sorry, Brexiters and Trump voters! — a clear benefit. This is true even if the name of the civilization you happen to be playing is the Aztecs or the Zulus, peoples for whom globalization in the real world meant the literal end of their civilizations. As such examples prove, the real world is far more complicated than the game makes it appear. Perhaps the best lesson to take away — from the game as well as from the winners and arguable losers of globalization in our own history — is that it really does behoove a civilization to actively engage with the world. Because if it doesn’t, at some point the world will decide to engage with it.

(Sources: the books Civilization, or Rome on 640K a Day by Johnny L. Wilson and Alan Emrich, The End of History and the Last Man by Francis Fukuyama, Economics by Paul Samuelson, The Rise and Fall of Ancient Egypt by Toby Wilkinson, Enlightenment Now: The Case for Reason, Science, Humanism, and Progress by Steven Pinker, Global Economic History: A Very Short Introduction by Robert C. Allen, Globalization: A Very Short Introduction by Manfred B. Steger, Taxation: A Very Short Introduction by Stephen Smith, and Guns, Germs, and Steel: The Fates of Human Societies by Jared Diamond.)



The Game of Everything, Part 8: Civilization and Government II (Democracy, Communism, and Anarchy)

Democracy is like a raft. It never sinks, but, damn it, your feet are always in the water.

— Fisher Ames

What can we say about democracy, truly one of the most important ideas in human history? Well, we can say, for starters, that it’s yet another Greek word, a combination of “demos” — meaning the people or, less favorably, the mob — with “kratos,” meaning rule. Rule by the people, rule by the mob… the preferred translations have varied with the opinion of the translator.

The idea of democracy originated, as you might expect given the word’s etymology, in ancient Greece, where Plato detested it, Aristotle was ambivalent about it, and the citizens of Athens were intrigued enough to actually try it out for a while in its purest form: that of a government in which every significant decision is made through a direct vote of the people. Yet on the whole it was regarded as little more than an impractical ideal for many, many centuries, even as some countries, such as England, developed some mechanisms for sharing power between the monarch and elected or appointed representatives of other societal interests. It wasn’t until 1776 that a new country-to-be called the United States declared its intention to make a go of it as a full-blown representative democracy, thereby touching off the modern era of government, in which democracy has increasingly come to be seen as the only truly legitimate form of government in the world.

Like the Christianity that had done so much to lay the groundwork for its acceptance, democracy was a meme with such immediate, obvious mass appeal that it was well-nigh impossible to control once the world had a concrete example of it to look at in the form of the United States. Over the course of the nineteenth century, responding to the demands of their restive populations, remembering soberly what had happened to Louis XVI in France when he had tried to resist the democratic wave, many of the hidebound old monarchies of Europe found ways to democratize in part if not in total; in Britain, for example, about 40 percent of adult males were allowed to vote by 1884. When the drift toward democracy failed to prevent the carnage of World War I, and when that war was followed by a reactionary wave of despotic fascism, many questioned whether democracy was really all it had been cracked up to be. Yet even as the pundits doubted, the slow march of democracy continued; by 1930, almost all adult citizens of Britain, including women, were allowed to vote. By the time the game of Civilization was made near the end of the twentieth century, any doubts about democracy’s ethical supremacy and practical efficacy had been cast aside, at least in the developed West. In missives like Francis Fukuyama’s The End of History, it was once again being full-throatedly hailed as the natural endpoint of the whole history of human governance.

We may not wish to go as far as calling democracy the end of history, but there’s certainly plenty of historical data in its favor. There’s been an undeniable trend line from the end of the eighteenth century to today, in which more and more countries have become more and more democratic. And, equally importantly, over the last century or so virtually all of the most successful countries in terms of per-capita economic performance have been democracies. A few interrelated factors likely explain why this should be the case.

One of them is the reality that as societies and economies develop they inevitably become more and more complex, a confusing mosaic of competing and cooperating interests which seemingly only democracy is equipped to navigate. “Democracies permit participation and therefore feedback,” writes Francis Fukuyama.

Another factor is the way that democracies manage to subsume within them the seemingly competing virtues of stability and renewal. As anyone who’s observed the worldwide stock markets after one of President Donald Trump’s more unhinged tweets can attest, business in particular loves stability and hates the uncertainty that’s born of political change. Yet often change truly is necessary, and often an aged, rigid-thinking despot or monarch is the very last person equipped to push it through. An election every fixed number of years provides a country with the ability to put new blood in power whenever it’s needed, without the chaos of revolution.

The final factor is another reality disliked by despots everywhere: the reality that education and democracy go hand in hand. A successful economy requires an educated workforce, but an educated workforce has a disconcerting tendency to demand a greater role in civic life. Francis Fukuyama:

Economic development demonstrates to the slave the concept of mastery, as he discovers he can master nature through technology, and master himself as well through the discipline of work and education. As societies become better educated, slaves have the opportunity to become more conscious of the fact that they are slaves and would like to be masters, and to absorb the ideas of other slaves who have reflected on their condition of servitude. Education teaches them that they are human beings with dignity, and that they ought to struggle to have that dignity recognized.

When making the game of Civilization, Sid Meier and Bruce Shelley clearly understood the longstanding relationship between a stable democracy and a strong economy — a relationship which is engendered by all of the factors I’ve just described. Switching your government to democracy in the game thus supercharges your civilization’s economic performance, dramatically increasing the number of “trade” units your cities collect.

But the game isn’t always so clear-sighted; the Civilopedia describes democracy as “fragile” in comparison to other forms of government. I would argue that in many ways just the opposite is the case. It’s true that democracies can be incredibly difficult to start in a country with little tradition of same, as the multiple false starts that we’ve seen in places like Russia and much of sub-Saharan Africa will attest. Yet once they’ve taken root they can be extremely difficult if not impossible to dislodge. Having, as we’ve already seen, the means of self-correction baked into them in a way that no other form of government does, mature democracies are surprisingly robust things. In fact, examples of mature, stable democracies falling back into autocracy simply don’t exist in history to date. (The collapsed democracies of places like Venezuela and Sri Lanka, which managed on paper to survive several decades before their downfall, could never be described as mature or stable, having been plagued throughout those decades with constant coup attempts and endemic corruption. Ditto Turkey, which has sadly embraced Putin-style sham democracy in the last few years after almost a century of intermittent crises, including earlier coups or military interventions in civilian government in 1960, 1971, 1980, and 1997.) Of course, we have to be wary of straying into the logical fallacy of simply defining any democracy which collapses as never having been stable to begin with. Still, I think the evidence, at least as of this writing, justifies the claim that a mature, stable democracy has never yet collapsed back into blatant authoritarianism. History would seem to indicate that, if a new democracy can survive thirty or forty years without coups or civil wars — long enough, one might say, for democracy to put down roots and become an inviolate cultural tradition — it can survive for the foreseeable future.

Ironically, Civilization portrays its dubious assertion of democratic “fragility” using methods that actually do feel true to history. The ease with which democracies can fall into unrest means that you must pay much closer attention to public opinion — taking the form of your population’s proportion of “unhappy” to “happy” citizens — than under any other system of government. Any democratic politician in the real world, forced to live and die by periodic opinion polls that take the form of elections, would no doubt sympathize with your plight. It’s particularly difficult in the game to prosecute a foreign war as a democracy, both because sending military units abroad sends your population’s morale into the toilet and because the game forces you to always accept peace overtures from your enemies as a matter of public policy.

In light of this last aspect of the game, the intersection of democracy and war in the real world merits digging into a bit further. Earlier in this series of articles, I wrote about the so-called “Long Peace” in which we’ve been living since the end of World War II, in which the great powers of the world have ceased to fight one another directly even when they find themselves at odds politically, and in which war in general has been on a marked decline in the world. I introduced theories about why that might be, such as the fear of nuclear annihilation and the emergence of global peacekeeping institutions like the United Nations. Well, another strong theory comes down to the advance of democracy. It’s long been an accepted rule among historians that mature, stable democracies simply don’t go to war with one another. Thus, as democracies multiply in the world, the possibilities for war decrease in proportion, thanks to the incontrovertible logic of statistics. For this reason, some historians prefer to call the Long Peace the “Democratic Peace.”

Civilization reflects this democratic aversion to war through the draconian disadvantages that make its version of democracy, although the best government you can have in peacetime, the absolute worst you can have during war. As demonstrated not least by the United States’s many and varied military interventions since 1945, the game if anything overstates the case for democracy as a force for peace. Yet, as I also noted in that earlier article, the crippling need the United States military now feels to make its wars, covered as they are by legions of journalists and shown every night on television, into such clean affairs says much about its citizens’ unwillingness to accept the full, ugly toll of the country’s voluntary “police actions” and “liberations.”

But what of wars that have bigger stakes? Civilization‘s mechanics actually vastly understate the case for democracy here. They fail to account for the fact that, once the people of a democracy have firmly committed themselves to fighting an all-out war, history gives us little reason to believe that they can’t prosecute that war as well as they could under any other form of government. In reality, the strong economies that usually accompany democracies are an immense military advantage; the staggering economic might of the United States is undoubtedly the primary reason the Allied Powers were able to reverse the tide of Nazi Germany and Imperial Japan and win World War II in, all things considered, fairly short order.

There’s one final element of the game of Civilization‘s take on democracy that merits discussion: its complete elimination of corruption. Under other forms of government, the corruption mechanic causes cities other than your capital to lose a portion of their economic potential to this most insidious of social forces, with how much they lose depending on their distance from your capital. You can combat it only by building courthouses in some of your non-capital cities; they’re fairly expensive in both purchase and maintenance costs, but reduce corruption within their sphere of influence. Or you can eliminate all corruption at a stroke by making your civilization a democracy.
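As a rough sketch of how a mechanic like that might be wired together (an invented illustration again, not the game’s actual numbers or code), corruption scales with distance from the capital, a courthouse blunts it, and democracy switches it off entirely:

```python
# An invented illustration of the corruption mechanic described above;
# the shape matches the prose, but the numbers are not Civilization's.

def corruption_loss(trade: int, distance_from_capital: int,
                    has_courthouse: bool, government: str) -> int:
    """Trade arrows a non-capital city loses to corruption each turn."""
    if government == "democracy":
        return 0  # democracy wipes corruption out entirely
    loss = trade * distance_from_capital // 20
    if has_courthouse:
        loss //= 2  # a courthouse blunts, but does not end, the rot
    return min(loss, trade)

print(corruption_loss(trade=12, distance_from_capital=15,
                      has_courthouse=False, government="monarchy"))   # prints 9
print(corruption_loss(trade=12, distance_from_capital=15,
                      has_courthouse=True, government="monarchy"))    # prints 4
print(corruption_loss(trade=12, distance_from_capital=15,
                      has_courthouse=False, government="democracy"))  # prints 0
```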

At first blush, this sounds both hilarious and breathtakingly naive. It would seem to indicate, as Johnny L. Wilson and Alan Emrich note in Civilization: or Rome on 640K a Day, that Meier and Shelley’s research into the history of democracy neglected such icons of its American version as Tammany Hall and Teapot Dome, not to mention Watergate. Yet when we really stop to consider, we find that this seemingly naive mechanic may actually be one of the most historically and sociologically perceptive in the whole game.

If you’ve ever traveled independently in a non-democratic, less-developed country, you’ve likely seen a culture of corruption first-hand. Personal business there is done through wads of cash passed from pocket to pocket, and every good and service tends to have a price that fluctuates from customer to customer, based on a reading of what that particular market will bear. Most obviously from your foreigner’s perspective, there are tourist prices and native prices.

The asymmetries that lead to the rampant “cheating” of foreign customers aren’t hard to understand. You can pay twenty times the going rate for that bottle of soda and never think about it again, while your shopkeeper can use the extra money to put some meat on his family’s table tonight; the money is far more important to him than it is to you because you are rich and he is poor. This reality will probably cause you to give up quibbling about petty (to you) sums in fairly short order. But the mindset behind it is deadly to a country’s economic prospects — not least to its tax base, which could otherwise be used to institute the programs of education and infrastructure that can lead a country out of the cycle of poverty. High levels of corruption are comprehensively devastating to a country’s economy — witness, to take my favorite whipping boy again, Vladimir Putin’s thoroughly corrupt Russia with its economy 7 percent the size of the United States’s — while a relative lack of corruption allows it to thrive.

As it happens, corruption levels across government, business, and personal life in the real world correlate incredibly well with the presence or absence of democracy. When we look at the ten least-corrupt countries in the world according to the Corruption Perceptions Index for 2017, we find that nine of them are among the nineteen countries that are given the gold star of Full Democracy by The Economist’s latest Democracy Index. (Singapore, the sole exception among the top ten, is classified as a Hybrid System.) Meanwhile none of the ten most-corrupt countries qualify as Full or even Flawed Democracies, with seven of the ten classified as full-on authoritarian states. When we further consider that levels of corruption are inversely correlated with a country’s overall economic performance, we add to our emerging picture of just why so much wealth and power have accrued to the democratic West since the beginning of the great American experiment back in 1776.

And there may be yet another, more subtle inverse linkage between democracy and corruption. As I noted at the beginning of this pair of articles on Civilization‘s systems of government, I’ve tried to arrange them in an order that reflects the relative stress they place on the individual leader versus the institutions of leadership. Thus the despotic state and the monarchy are so defined by their leaders as to be almost indistinguishable as entities apart from them, while the republic and the democracy mark the emergence of the concept of the state as a sovereign entity unto itself, with its individual leaders mere stewards of a legacy greater than themselves. I don’t believe that this shift in thinking is reflected only in a country’s leadership; it rather extends right through its society. A culture of corruption emphasizes personal, transactional relationships, while its opposite places faith in strong, stable institutions with a lifespan that will hopefully transcend that of the people who staff them at any given time.

So, let’s turn back now to the game’s once-laughable assertion that democracy eliminates corruption, which now seems at least somewhat less laughable. It is, of course, an abstraction at best; a country can no more eliminate corruption than it can eliminate poverty or terrorism (to name a couple of other non-proper nouns on which our politicians like to declare war). Yet a country can sharply de-incentivize it by bringing it to light when it does appear, and by shaming and punishing those who engage in it.

Given the times in which I’m writing this article, I do understand how strange it may sound to argue that Civilization‘s optimistic take on corruption in democracy is at bottom a correct one. Just a couple of years ago in the Full Democracy of Germany, the twelfth least-corrupt country on the planet according to the Corruption Perceptions Index, executives in the biggest of the country’s auto manufacturers were shown to have concocted a despicable scheme to cheat emissions standards worldwide in the name of profit, ignoring the environmental consequences to future generations. And as I write these words the Trump administration in the Flawed Democracy of the United States, sixteenth least-corrupt country on the planet, has so many ongoing scandals that the newspapers literally don’t have enough reporters to cover them all. But the fact that we know about these scandals — that we’re reading about them and arguing about them and in some cases taking to the streets to protest them — is proof that liberal democracy is still working rather than the opposite. Compare the anger and outrage manifested by opponents and defenders alike of Donald Trump with the sullen, defeated acceptance of an oligarchical culture of corruption that’s so prevalent in Russia.

Which isn’t to say that democracy is without its disadvantages. From the moment the idea of rule by the people was first broached in ancient Athens, it’s had fierce critics who have regarded it as inherently dangerous. Setting aside the despots and monarchs who have a vested interest in other philosophies of government, thoughtful criticisms of democracy have almost always boiled down to the single question of whether the great unwashed masses can really be trusted to rule.

Plato was the first of the great democratic skeptics, describing democracy as the victory of opinion over knowledge. Many of the great figures of American history have ironically taken his point of view to heart, showing considerable ambivalence toward this supposedly greatest of American virtues. The framers of the Constitution twisted themselves into knots over a potential tyranny of the ignorant over the educated, and built into it mechanisms to hopefully prevent such a scenario — mechanisms that still determine the direction of American politics to this day. (The electoral college which has awarded the presidency twice in the course of the last five elections to someone who didn’t win the popular vote was among the results of the Founding Fathers’ terror of the masses; in amplifying the votes of the country’s generally less-educated rural areas in recent years, it has arguably had exactly the opposite of its intended effect.) Even the great progressive justice Oliver Wendell Holmes could disparage democracy as merely “what the crowd wants.”

In the cottage industry of American political punditry as well, there’s a long tradition of lamenting the failure of the working class to vote their own self-interest on economic matters, of earnest hand-wringing over the way they supposedly fall prey instead to demagogic appeals to cultural identity and religion. One of the best-selling American nonfiction books of 2011 was The Myth of the Rational Voter, which deployed reams of sociological data to reveal that (gasp!) the ballot-box choices of most people have more to do with emotion, prejudice, and rigid ideology than rationality or enlightened self-interest. Recently, such concerns have been given new urgency among the intellectual elite all over the West by events like the election of Donald Trump in the United States, the Brexit vote in Britain, and the wave of populist political victories and near-victories across Europe — all movements that found the bulk of their support among the less educated, a fact that was lost on said elite not at all.

Back in 1872, the British journalist Walter Bagehot wrote of the dangers of rampant democracy in the midst of another conflicted time in British history, as the voting franchise was being gradually expanded through a series of controversial so-called “Reform Bills.” His writing rings in eerie accord with the similar commentaries from our own time, warning as it does of “the supremacy of ignorance over instruction and of numbers [of voters] over knowledge”:

In plain English, what I fear is that both our political parties will bid for the support of the working man; that both of them will promise to do as he likes if he will only tell them what it is. I can conceive of nothing more corrupting or worse for a set of poor ignorant people than that two combinations of well-taught and rich men should constantly defer to their decision, and compete for the office of executing it. “Vox populi” [“the voice of the people”] will be “Vox diaboli” [“the voice of the devil”] if it is worked in that manner.

Consider again my etymology of the word “democracy” from the beginning of this article. “Demos” in the Greek can variously mean, as I explained, the people or the mob. It’s the latter of these that is instinctively feared, by no means entirely without justification, by democratic skeptics like the ones whose views I’ve just been describing. In The Origins of Totalitarianism, Hannah Arendt defines the People as a constructive force, citizens acting in good faith to improve their country’s society, while the Mob is a destructive force, citizens acting out of hate and fear against rather than for the society from which they feel themselves excluded. We often hear it suggested today that we may have reached the tipping point where the People become a Mob in many places in the West. We hear frequently that the typical Brexit or Trump voter feels so disenfranchised and excluded that she just doesn’t care anymore, that she wants to throw Molotov cocktails into the middle of the elites’ most sacred institutions and watch them burn — that she wants to blow up the entire postwar world order that progressives like me believe have kept us safe and prosperous for all these decades.

I can’t deny that the sentiment exists, sometimes even with good reason; modern democracies all remain to a greater or lesser degree flawed creations in terms of equality, opportunity, and inclusivity. I will, however, offer three counter-arguments to the Mob theory of democracy — one drawing from history, one from practicality, and one from a thing that seems in short supply these days, good old idealistic humanism.

My historical argument is that democracies are often messy, chaotic things, but, once again — and this really can’t be emphasized enough — a mature, stable democracy has never, ever collapsed back into a more retrograde system of government. If it were to happen to a democracy as mature and stable as the United States, as is so often suggested by alarmists in the Age of Trump, it would be one of the more shockingly unprecedented events in all of history. As things stand today, there’s little reason to believe that the institutions of democracy won’t survive President Donald Trump, as they have 44 other good, bad, and indifferent presidents before him. Ditto with respect to many of the other reactionary populist waves in other developed democracies.

My practical argument is the fact that, while democracies sometimes go down spectacularly misguided paths at the behest of their citizenry, they’re also far better at self-correcting than any other form of government. The media in the United States has made much of the people who were able to justify voting for Donald Trump in 2016 after having voted for Barack Obama in 2008 and 2012. It’s become fashionable on this basis to question whether the ebbing of racial animus the latter’s election had seemed to represent was all an illusion. Yet there’s grounds for hope as well as dismay there for the current president’s opponents — grounds for hope in the knowledge that the pendulum can swing back in the other direction just as quickly. The anonymity of the voting booth means that people have the luxury of changing their minds entirely with the flick of a pen, without having to justify their choice to anyone, without losing face or appearing weak. Many an autocratic despot or monarch has doubtless dreamed of the same luxury. This unique self-correcting quality of democracy does much to explain why this form of government that the Civilopedia describes as so “fragile” is actually so amazingly resilient.

Finally, my argument from principle comes from the same idealistic place as those famous opening paragraphs of the American Declaration of Independence (“We hold these truths to be self-evident…”). The Enlightenment philosophy that led to that document said, for the first time in the history of the world, that every man was or ought to be master of his own life. If we believe wholeheartedly in these metaphysical principles, we must believe as well that even a profoundly misguided democracy is superior to Plato’s beloved autocracy — even an autocracy under a “philosopher king” who benevolently makes all the best choices for the good of his country’s benighted citizens. For rule by the people is itself the greatest good, and one which no philosopher king can ever provide. Perhaps the best way to convert a Mob back into a People is to let them have their demagogues. When it doesn’t work out, they can just vote them out again come next election and try something else. What other form of government can make that claim?

Most people in the West during most of the second half of the twentieth century would agree that the overarching historical question of their times was whether the world’s future lay with democracy or communism. This was, after all, the question over which the Cold War was being fought (or, if you prefer, not being fought).

For someone studying the period from afar, however, the whole formulation is confusing from the get-go. Democracy has always been seen as a system of government, while communism, in theory anyway, has more to do with economics. In fact, the notion of a “communist democracy,” oxymoronic as it may sound to Western sensibilities, is by no means incompatible with communist theory as articulated by Karl Marx. Plenty of communist states once claimed to be exactly that, such as the German Democratic Republic — better known as East Germany. It’s for this reason that, while people in the West spoke of a Cold War between the supposed political ideologies of communism and democracy, people in the Soviet sphere preferred to talk of a conflict between the economic ideologies of communism and capitalism. And yet accepting the latter’s way of framing the conflict is giving twentieth-century communism far too much credit — as is, needless to say, accepting communism’s claim to have fostered democracies. By the time the Cold War got going in earnest, communism in practice was already a cynical lie.

This divide between communism as it exists in the works of Karl Marx and communism as it has existed in the real world haunts every discussion of the subject. We’ll try to pull theory and practice apart by looking first at Marx’s rosy nineteenth-century vision of a classless society of the future, then turning to the ugly reality of communism in the twentieth century.

One thing that makes communism unique among the systems of government we’ve examined thus far is how very recent it is. While it has roots in Enlightenment thinkers like Henri de Saint-Simon and Charles Fourier, in its complete form it’s thoroughly a product of the Industrial Revolution of the nineteenth century. Observing the world around him, Karl Marx divided society in the new industrial age into two groups. There were the “bourgeoisie,” a French word meaning literally “those who live in a borough,” or more simply “city dwellers”; these people owned the means of industrial production. And then there were the “proletariat,” a Latin word meaning literally “without property”; these people worked the means of production. Casting his eye further back, Marx articulated a view of all of human history as marked by similar dualities of class; during the Middle Ages, for instance, the fundamental divide was between the aristocrats who owned the land which was that era’s wellspring of wealth and the peasants who worked it. “The history of all hitherto existing societies,” he wrote, “is the history of class struggles.” As I mentioned in a previous article, his economic theory of history divided it into six phases: pre-civilized “primitive communism,” the “slave society” (i.e., despotism), feudalism (i.e., monarchy), pure laissez-faire capitalism (the phase the richest and most developed countries were in at the time he wrote), socialism (a mixed economy, not all that different from most of the developed democracies of today), and mature communism. Importantly, Marx believed that the world had to work through these phases in order, each one laying the groundwork for what would follow.

But, falling victim perhaps to a tendency that has dogged many great theorists of history, Marx saw his own times’ capitalist phase as different from all those that had come before in one important respect. Previously, class conflicts had been between the old elite and a new would-be elite that sought to wrest power from them — most recently, the landed gentry versus the new capitalist class of factory owners. But now, with the industrial age in full swing, he believed the next big struggles would be between the bourgeois elites and the proletarian masses as a whole. The proletariat would eventually win those struggles, resulting in a new era of true equality and majority rule. (Here, the eagerness of so many of the later communist states to label themselves democracies starts to become more clear.)

In light of what would follow in the name of Karl Marx, it’s all too easy to overlook the fact that he didn’t see himself as the agent who would bring about this new era; his communism was a description of what would happen in the future rather than a prescription for what should happen. Many of the direct calls to action in 1848’s The Communist Manifesto, by far his most rabble-rousing document, would ironically be universally embraced by the liberal democracies which would become the ideological enemy of communism in the century to come: things such as a progressive income tax, the abolition of child labor, and a basic taxpayer-funded education for everyone. The literary project he considered his most important life’s work, the famously dense three volumes of Capital, is, as the name would indicate, almost entirely concerned with capitalism and its discontents as Marx understood them to already exist, saying almost nothing about the communist future. Written later in his life and thus reflecting a more mature form of his philosophy, Capital shies away from even such calls to action as are found in The Communist Manifesto, saying again and again that the contradictions inherent in capitalism itself will inevitably bring it down when the time is right.

By this point in his life, Marx had become a thoroughgoing historical determinist, and was deeply wary of those who would use his theories to justify premature revolutions of the proletariat. Even The Communist Manifesto’s calls to action had been intended not to force the onset of the last phase of history — communism — but to prod the world toward the penultimate phase of socialism. True communism, Marx believed, was still a long, long way off. Not least because he wrote so many more words about capitalism than he did about communism, Marx’s vision of the latter can be surprisingly vague for what would later become the ostensible blueprint for dozens upon dozens of governments, including those of two of the three biggest nations on the planet.

With this very basic grounding in Marxist theory, we can begin to understand the intellectual rot that lay at the heart of communism as it was actually implemented in the twentieth century. Russia in 1917 hadn’t even made it to Marx’s fourth phase of industrialized capitalism; as an agrarian economy, more feudal than capitalist, it was still mired in the third phase of history. Yet Vladimir Lenin proposed to leapfrog both of the intervening phases and take it straight to communism — something Marx had explicitly stated was not possible. Similarly ignoring Marx’s description of the transition to communism as a popular revolution of the masses, Lenin hearkened back to Plato’s philosopher kings, declaring that he and his secretive circle of cronies were the only ones qualified to manage the transition. “It is an irony of history,” remarks historian Leslie Holmes dryly, “that parties committed to the eventual emergence of highly egalitarian societies were in many ways among the most elitist in the world.”

When Lenin ordered the cold-blooded murder of Czar Nicholas II and his entire family, he sketched the blueprint of communism’s practical future as little more than amoral despotism hiding behind a facade of Marxist rhetoric. And when capitalist systems all over the world didn’t collapse in the wake of the Russian Revolution, as he had so confidently predicted, there was never a question of saying, “Well, that’s that then!” and moving on. One of the most repressive governments in history was now firmly entrenched, and it wouldn’t give up power easily. “Socialism in One Country” became Josef Stalin’s slogan, as nationalism became an essential component of the new communism, again in direct contradiction to Marx’s theory of a new world order of classless equality. The guns and tanks parading through Red Square every May Day were a yearly affront to everything Marx had written.

Still, communist governments did manage some impressive achievements. Universal free healthcare, still a pipe dream throughout the developed West at the time, was achieved in the new Soviet Union in the 1920s. Right through the end of the Cold War, average life expectancy and infant-mortality rates weren’t notably worse in most communist countries than they were in Western democracies. Their educational systems as well were often competitive with those in the West, if sometimes emphasizing rote learning over critical thinking to a disturbing degree. Illiteracy was virtually nonexistent behind the Iron Curtain, and fluency in multiple languages was at least as commonplace as in Western Europe. Women were not just encouraged but expected to join the workforce, and were given a degree of equality that many of their counterparts in the West could only envy. The first decade after the transition to communism, or in some cases even the first several decades, often brought an economic boom, as women entered the workforce for the first time and aged infrastructures were wrenched toward modernity, arguably at a much faster pace than could have been managed under a government more concerned with the individual rights of its citizens. Under these centrally planned economies, unemployment and the pain it can cause were literally unknown, as was homelessness. In countries where cars were still a luxury reserved for the more equal among the equal, public transport too was often surprisingly modern and effective.

In time, however, economic stagnation inevitably set in. Corruption in the planning departments — the root of the oligarchical system that still holds sway in the Russia of today — caused some industries to be favored over others with no regard to actual needs; the growing complexity of a modernizing economy overwhelmed the planners; a lack of personal incentive led to a paucity of innovation; prices and demand seemed to have no relation to one another, distorting the economy from top to bottom; the quality of consumer goods remained notoriously terrible. By the late 1970s, the Soviet Union, possessed of some of the richest farmland in the world, was struggling and failing just to feed itself, relying on annual imports of millions of tons of wheat and other raw foodstuffs. The very idea of the shambling monstrosity that was the Soviet economy competing with the emerging post-industrial knowledge economies of the West, which placed a premium on the sort of rampant innovation that can only be born of free press, free speech, and free markets, had become laughable. Francis Fukuyama:

The failure of central planning in the final analysis is related to the problem of technological innovation. Scientific inquiry proceeds best in an atmosphere of freedom, where people are permitted to think and communicate freely, and more importantly where they are rewarded for innovation. The Soviet Union and China both promoted scientific inquiry, particularly in “safe” areas of basic or theoretical research, and created material incentives to stimulate innovation in certain sectors like aerospace and weapons design. But modern economies must innovate across the board, not only in hi-tech fields but in more prosaic areas like the marketing of hamburgers and the creation of new types of insurance. While the Soviet state could pamper its nuclear physicists, it didn’t have much left over for the designers of television sets, which exploded with some regularity, or for those who might aspire to market new products to new consumers, a completely non-existent field in the USSR and China.

Marx had dreamed of a world where everyone worked just four hours per day to contribute her share of the necessities of life to the collective, leaving the rest of her time free to pursue hobbies and creative endeavors. Communism in practice did manage to get half of that equation right; few people put in more than four honest hours of labor per day. (As a popular joke said, “they pretend to pay me and I pretend to work.”) But these sad, ugly gray societies hardly encouraged a fulfilling personal life, given that the tools for hobbies were almost impossible to come by and so many forms of creative expression could land you in jail.

If there’s one adjective I associate more than any other with the communist experiments of the twentieth century, it’s “corrupt.” Born of a self-serving corruption of Marx’s already questionable theories, these states ran economies that functioned so badly that corruption both low and high, in forms both small and large, was the only way they could muddle through at all. Just as the various national communist parties were vipers’ nests of intrigue and backstabbing in the name of very non-communist personal ambitions, so too did ordinary citizens have to rely on an extensive black market that lived outside the planned economy simply in order to survive.

So, in examining the game of Civilization’s take on communism, one first has to ask which version of same is being modeled, the idealistic theory or the corrupt reality. It turns out pretty obviously to be the reality of communism as it was actually practiced in the twentieth century. In another of their crazily insightful translations of history to code, Meier and Shelley made communism’s effect on the game’s mechanic of corruption its defining attribute. A communist economy in the game performs up to the same mediocre baseline standard as a monarchy — which is probably being generous, on the whole. Yet it has the one important difference that economy-draining corruption, rather than increasing in cities located further from your capital, is uniform across the entirety of your civilization. While the utility of this is highly debatable in game terms, it’s rather brilliant and kind of hilarious as a reflection of the way that corruption and communism have always been so inseparable from one another — essential to one another, one might even say — in the real world. After all, when your economy runs on corruption, you need to make sure you have it everywhere.
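Just to make the shape of that rule concrete, here is a minimal sketch in Python of the general idea, assuming a simplified model rather than the game’s actual formula; the function name, the multiplier, the cap, and the flat “virtual distance” assigned under communism are all invented purely for illustration.

```python
# A toy model only: the multiplier, cap, and flat distance below are invented,
# not taken from Civilization's real rules.

def corruption_loss(trade, distance_from_capital, government):
    """Trade lost to corruption in one city under this hypothetical model."""
    if government == "democracy":
        return 0  # democracy is modeled here as corruption-free
    if government == "communism":
        distance_from_capital = 8  # every city is treated as equally distant
    rate = min(distance_from_capital * 0.03, 0.75)  # farther = worse, capped
    return round(trade * rate)

# Compare a city near the capital with one far away:
for gov in ("monarchy", "communism"):
    near = corruption_loss(20, 4, gov)
    far = corruption_loss(20, 24, gov)
    print(f"{gov}: near city loses {near}, far city loses {far}")
```

The only point of the toy is the communism branch: the nearby and the distant city lose exactly the same amount of trade, whereas under a distance-based government the far-flung city suffers far more.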

History since the original Civilization was made has had plenty of troubling aspects, but it hasn’t seen any resurgence of communism; even Russia hasn’t regressed quite that far. The new China, while still ruled by a cabal who label themselves the Communist Party, gives no more than occasional lip service to Chairman Mao, having long since become something new to history: a joining of authoritarianism and capitalism that’s more interested in doing business with the West than fomenting revolutions there, and has been far more successful at it than anyone could have expected, enough to challenge some of the conventional wisdom that democracy is required to manage a truly thriving economy. (I’ll turn back to the situation in China and ask what it might mean in the last article of this series.) Meanwhile, the last remaining hard-line communist states are creaky old relics from another era, just waiting to take their place in hipster living rooms between vinyl record albums and lava lamps; a place like North Korea would almost be kitschy if its chubby man-child of a leader weren’t killing and torturing so many of his own people and threatening the world with nuclear war.

When those last remaining old-school communist regimes finally collapse in one way or another, will that be that for Karl Marx as well? Probably not. There are still Marxists among us, many of whom say that the real, deterministic communist revolution is still ahead of us, and that the communism of the twentieth century was all a misguided and tragic false start, an attempt to force upon history what history was not yet ready for. They find grist for their mill in the fact that so many of the most progressive democracies in the world have embraced socialism, providing for their citizens much of what Marx asked for in The Communist Manifesto. If this vanguard has thus reached the fifth phase of history, can the sixth and final phase be far behind? We shall see. In the meantime, though, liberal democracy continues to provide something communism has never yet been able to: a practical, functional framework for a healthy economy and a healthy government right here and now, in the world in which we actually live.

I couldn’t conclude this survey without saying something about anarchy, Civilization’s least desirable system of government — or, in this case, system of non-government. You fall into it only as a transitional phase between two other forms of government, or if you let your population under a democracy get too unhappy. Anarchy is, as the Civilopedia says, “a breakdown in government” that brings “panic, disruption, waste, and destruction.” It’s comprehensively devastating to your economy; you want to spend as little time in anarchy as you possibly can. And that, it would seem, is just about all there is to say about it.

Or is it? It’s worth noting that the related word “anarchism” in the context of government has another meaning that isn’t acknowledged by the game, one born from many of the same patterns of thought that spawned Karl Marx’s communism. Anarchism’s version of Marx could be said to be one Pierre-Joseph Proudhon, who in 1840 applied what had hitherto been a pejorative term to a new, positive vision of social organization characterized not by yet another new system of government but by government’s absence. Community norms, working in tandem with the natural human desire to be accepted and respected, could according to the anarchists replace government entirely. By 1905, they had earned themselves an entry in the Encyclopædia Britannica:

[Anarchism is] the name given to a principle or theory of life and conduct under which society is conceived without government — harmony in such a society being obtained, not by submission to law, or by obedience to any authority, but by free agreements, concluded between the various groups, territorial and professional, freely constituted for the sake of production and consumption, as also for the satisfaction of the infinite variety of needs and aspirations of a civilised being.

As a radical ideology advocating a classless society, anarchism has often seemed to walk hand in hand with communism. As an ideology advocating the absolute supremacy of individual freedom, it’s sometimes seemed most at home in right-wing libertarian circles. Yet its proponents insist that it is dramatically different from either of these philosophies, as the American anarchist activist and journalist Dwight Macdonald described in 1957:

The revolutionary alternative to the status quo today is not collectivised property administered by a “workers’ state,” whatever that means, but some kind of anarchist decentralisation that will break up mass society into small communities where individuals can live together as variegated human beings instead of as impersonal units in the mass sum. The shallowness of the New Deal and the British Labour Party’s postwar regime is shown by their failure to improve any of the important things in people’s lives — the actual relationships on the job, the way they spend their leisure, and child-rearing and sex and art. It is mass living that vitiates all these today, and the State that holds together the status quo. Marxism glorifies “the masses” and endorses the State [the latter is not quite true in terms of Marx’s original theories, as we’ve seen]. Anarchism leads back to the individual and the community, which is “impractical” but necessary — that is to say, it is revolutionary.

As Macdonald tacitly admits, it’s always been difficult to fully grasp how anarchism would work in theory, much less in practice; if you’ve always felt that communism is too practical a political ideology, anarchism is the radical politics for you. Its history has been one of constant defeat — or rather of never even getting started — but it never seems to entirely go away. Like Rousseau’s vision of the “noble savage,” it will always have a certain attraction in a world that only continues to get more complicated, in societies that continue to remove themselves further and further from what feels to some like their natural wellspring. For this reason, we’ll have occasion to revisit some anarchist ideas again in the last article of this series.


 

What, then, should we say in conclusion about Civilization and government? The game has often been criticized for pointing you toward one type of government — democracy — as by far the best for developing your civilization all the way to Alpha Centauri. That bias is certainly present in the game, but it’s hard for me to get as exercised about it as some simply because I’m not at all sure it isn’t also present in history. At least if we define progress in the same terms as Civilization, democracy has proved itself to be more than just an airy-fairy ideal; it’s the most effective means for organizing a society which we’ve yet come up with.

Appeals to principle aside, the most compelling argument for democracy has long been the simple fact that it works, that it’s better than any other form of government at creating prosperous, peaceful countries where, as our old friend Georg Wilhelm Friedrich Hegel would put it, the most people have the most chance to fulfill their individual thymos. Tellingly, many of the most convincing paeans to democracy tend to come in the form of backhanded compliments. “Democracy is the worst form of government,” famously said Winston Churchill, “except for all those other forms that have been tried from time to time.” Or, as the theologian Reinhold Niebuhr wrote, “Man’s capacity for justice makes democracy possible, but man’s inclination to injustice makes democracy necessary.” Make no mistake: democracy is a messy business. But history tells us that it really does work.

None of this is to say that you should be sanguine about your democracy’s future, assuming you’re lucky enough to live in one. Like videogames, democracy is an interactive medium. Protests and bitter arguments are a sign that it’s working, not the opposite. So, go protest and argue and all the rest, but remember as you do so that this too — whatever this happens to be — shall pass. And, at least if history is any guide, democracy shall live on after it does.

(Sources: the books Civilization, or Rome on 640K a Day by Johnny L. Wilson and Alan Emrich, The End of History and the Last Man by Francis Fukuyama, The Republic by Plato, Politics by Aristotle, Plough, Sword, and Book: The Structure of Human History by Ernest Gellner, Aristocracy: A Very Short Introduction by William Doyle, Democracy: A Very Short Introduction by Bernard Crick, Plato: A Very Short Introduction by Julia Annas, Political Philosophy: A Very Short Introduction by David Miller, The Myth of the Rational Voter by Bryan Caplan, Anarchism: A Very Short Introduction by Colin Ward, Communism: A Very Short Introduction by Leslie Holmes, Corruption: A Very Short Introduction by Leslie Holmes, The Communist Manifesto by Karl Marx and Friedrich Engels, Capital by Karl Marx, The Better Angels of Our Nature: Why Violence Has Declined by Steven Pinker, What’s the Matter with Kansas? How Conservatives Won the Heart of America by Thomas Frank, and The Origins of Totalitarianism by Hannah Arendt.)

Footnotes
1 The collapsed democracies of places like Venezuela and Sri Lanka, which managed on paper to survive several decades before their downfall, could never be described as mature or stable, having been plagued throughout those decades with constant coup attempts and endemic corruption. Ditto Turkey, which has sadly embraced Putin-style sham democracy in the last few years after almost a century of intermittent crises, including earlier coups or military interventions in civilian government in 1960, 1971, 1980, and 1997. Of course, we have to be wary of straying into the logical fallacy of simply defining any democracy which collapses as never having been stable to begin with. Still, I think the evidence, at least as of this writing, justifies the claim that a mature, stable democracy has never yet collapsed back into blatant authoritarianism.
 
