
Putting the “J” in the RPG, Part 2: PlayStation for the Win

From the Seven Hills of Rome to the Seven Sages of China’s Bamboo Grove, from the Seven Wonders of the Ancient World to the Seven Heavens of Islam, from the Seven Final Sayings of Jesus to Snow White and the Seven Dwarfs, the number seven has always struck us as a special one. Hironobu Sakaguchi and his crew at Square, the people behind the Final Fantasy series, were no exception. In the mid-1990s, when the time came to think about what the seventh entry in the series ought to be, they instinctively felt that this one had to be bigger and better than any that had come before. It had to double down on all of the series’s traditional strengths and tropes to become the ultimate Final Fantasy. Sakaguchi and company would achieve these goals; the seventh Final Fantasy game has remained to this day the best-selling, most iconic of them all. But the road to that seventh heaven was not an entirely smooth one.

The mid-1990s were a transformative period, both for Square as a studio and for the industry of which it was a part. For the former, it was “a perfect storm, when Square still acted like a small company but had the resources of a big one,” as Matt Leone of Polygon writes. Meanwhile, the videogames industry at large was feeling the ground shift under its feet, as the technologies that went into making and playing console-based games were undergoing their most dramatic transformation since the Atari VCS had first turned the idea of a machine for playing games on the family television into a popular reality. CD-ROM drives were already available for Sega’s consoles, with a storage capacity two orders of magnitude greater than that of the most capacious cartridges. And 3D graphics hardware was on the horizon as well, promising to replace pixel graphics with embodied, immersive experiences in sprawling virtual worlds. Final Fantasy VII charged headlong into these changes like a starving man at a feast, sending great greasy globs of excitement — and also controversy — flying everywhere.

The controversy came in the form of one of the most shocking platform switches in the history of videogames. To fully appreciate the impact of Square’s announcement on January 12, 1996, that Final Fantasy VII would run on the new Sony PlayStation rather than Nintendo’s next-generation console, we need to look a little closer at the state of the console landscape in the years immediately preceding it.


Through the first half of the 1990s, Nintendo was still the king of console gaming, but it was no longer the unchallenged supreme despot it had been during the 1980s. Nintendo had always been conservative in terms of hardware, placing its faith, like Apple Computer in an adjacent marketplace, in a holistic customer experience rather than raw performance statistics. As part and parcel of this approach, every game that Nintendo agreed to allow into its walled garden was tuned and polished to a fine sheen, having any jagged edges that might cause anyone any sort of offense whatsoever painstakingly sanded away. An upstart known as Sega had learned to live in the gaps this business philosophy opened up, deploying edgier games on more cutting-edge hardware. As early as December of 1991, Sega began offering its Japanese customers a CD-drive add-on for its current console, the Mega Drive (known as the Sega Genesis in North America, which received the CD add-on the following October). Although the three-year-old Mega Drive’s intrinsic limitations made this early experiment in multimedia gaming for the living room a somewhat underwhelming affair — there was only so much you could do with 61 colors at a resolution of 320 × 240 — it perfectly illustrated the differences in the two companies’ approaches. While Sega threw whatever it had to hand at the wall just to see what stuck, Nintendo held back like a Dana Carvey impression of George Herbert Walker Bush: “Wouldn’t be prudent at this juncture…”

Sony was all too well-acquainted with Nintendo’s innate caution. As the co-creator of the CD storage format, it had signed an agreement with Nintendo back in 1988 to make a CD drive for the upcoming Super Famicom console (which was to be known as the Super Nintendo Entertainment System in the West) as soon as the technology had matured enough for it to be cost-effective. By the time the Super Famicom was released in 1990, Sony was hard at work on the project. But on May 29, 1991, just three days before a joint Nintendo/Sony “Play Station” was to have been demonstrated to the world at the Summer Consumer Electronics Show in Chicago, Nintendo suddenly backed out of the deal, announcing that it would instead be working on CD-ROM technology with the Dutch electronics giant Philips — ironically, Sony’s partner in the creation of the original CD standard.

This prototype of the Sony “Play Station” surfaced in 2015.

Nintendo’s reason for pulling out seems to have come down to the terms of the planned business relationship. Nintendo, whose instinct for micro-management and tough deal-making was legendary, had uncharacteristically promised Sony a veritable free hand, allowing it to publish whatever CD-based software it wanted without asking Nintendo’s permission or paying it any royalty whatsoever. In fact, given that a contract to that effect had already been signed long before the Consumer Electronics Show, Sony was, legally speaking, still free to continue with the Play Station on its own, piggybacking on the success of Nintendo’s console. And initially it seemed inclined to do just that. “Sony will throw open its doors to software makers to produce software using music and movie assets,” it announced at the show, promising games based on its wide range of media properties, from the music catalog of Michael Jackson to the upcoming blockbuster movie Hook. Even worse from Nintendo’s perspective, “in order to promote the Super Disc format, Sony intends to broadly license it to the software industry.” Nintendo’s walled garden, in other words, looked about to be trampled by a horde of unwashed, unvetted, unmonetized intruders charging through the gate Sony was ready and willing to open to them. The prospect must have sent the control freaks inside Nintendo’s executive wing into conniptions.

It was a strange situation any way you looked at it. The Super Famicom might soon become the host of not one but two competing CD-ROM solutions, an authorized one from Philips and an unauthorized one from Sony, each using different file formats for a different library of games and other software. (Want to play Super Mario on CD? Buy the Philips drive! Want Michael Jackson? Buy the Play Station!)

In the end, though, neither of the two came to be. Philips decided it wasn’t worth distracting consumers from its own stand-alone CD-based “multimedia box” for the home, the CD-i. (Philips wasn’t, however, above exploiting the letter of its contract with Nintendo to make a Mario game and three substandard Legend of Zelda games available for the CD-i.) Sony likewise began to wonder in the aftermath of its defiant trade-show announcement whether it was really in its long-term interest to become an unwanted squatter on Nintendo’s real estate.

Still, the episode had given some at Sony a serious case of videogame jealousy. It was clear by now that this new industry wasn’t a fad. Why shouldn’t Sony be a part of it, just as it was an integral part of the music, movie, and television industries? On June 24, 1992, the company held an unusually long and heated senior-management debate. After much back and forth, CEO Norio Ohga pronounced his conclusion: Sony would turn the Play Station into the PlayStation, a standalone CD-based videogame console of its own, both a weapon with which to bludgeon Nintendo for its breach of trust and — ultimately more importantly — an entrée to the fastest-growing entertainment sector in the world.

The project was handed to one Ken Kutaragi, who had also been in charge of the aborted Super Famicom CD add-on. He knew precisely what he wanted Sony’s first games console to be: a fusion of CD-ROM with another cutting-edge technology, hardware-enabled 3D graphics. “From the mid-1980s, I dreamed of the day when 3D computer graphics could be enjoyed at home,” he says. “What kind of graphics could we create if we combined a real-time, 3D computer-graphics engine with CD-ROM? Surely this would develop into a new form of entertainment.”

It took him and his engineers a little over two years to complete the PlayStation, which in addition to a CD drive and a 3D-graphics system sported a 32-bit MIPS microprocessor running at 34 MHz, 3 MB of memory (of which 1 MB was dedicated to graphics alone), audiophile-quality sound hardware, and a slot for 128 K memory cards that could be used for saving game state between sessions, ensuring that long-form games like JRPGs would no longer need to rely on tedious manual-entry codes or balky, unreliable cartridge-mounted battery packs for the purpose.


In contrast to the consoles of Nintendo, which seemed almost self-consciously crafted to look like toys, and those of Sega, which had a boy-racer quality about them, the Sony PlayStation looked stylish and adult — but not too adult. (The stylishness came through despite the occasionally mooted comparisons to a toilet.)

The first Sony PlayStations went on sale in Tokyo’s famed Akihabara electronics district on December 3, 1994. Thousands camped out in line in front of the shops the night before. “It’s so utterly different from traditional game machines that I didn’t even think about the price,” said one starry-eyed young man to a reporter on the scene. Most of the shops were sold out before noon. Norio Ohga was mobbed by family and friends in the days that followed, all begging him to secure them a PlayStation for their children before Christmas. It was only when that happened, he would later say, that he fully realized what a game changer (pun intended) his company had on its hands. Just like that, the fight between Nintendo and Sega — the latter had a new 32-bit CD-based console of its own, the Saturn, while the former was taking it slowly and cautiously, as usual — became a three-way battle royal.

The PlayStation was an impressive piece of kit for the price, but it was, as always, the games themselves that really sold it. Ken Kutaragi had made the rounds of Japanese and foreign studios, and found to his gratification that many of them were tired of being under the heavy thumb of Nintendo. Sony’s garden was to be walled just like Nintendo’s — you had to pay it a fee to sell games for its console as well — but it made a point of treating those who made games for its system as valued partners rather than pestering supplicants: the financial terms were better, the hardware was better, the development tools were better, the technical support was better, the overall vibe was better. Nintendo had its own home-grown line of games for its consoles to which it always gave priority in every sense of the word, a conflict of interest from which Sony was blessedly free. (Sony did purchase the venerable British game developer and publisher Psygnosis well before its console’s launch to help prime the pump with some quality games, but it largely left it to manage its own affairs on the other side of the world.) Game cartridges were complicated and expensive to produce, and the factories that made them for Nintendo’s consoles were all controlled by that company. Nintendo was notoriously slow to approve new production runs of any but its own games, leaving many studios convinced that their smashing success had been throttled down to a mere qualified one by a shortage of actual games in stores at the critical instant. CDs, on the other hand, were quick and cheap to churn out from any of dozens of pressing plants all over the world.
Citing advantages like these, Kutaragi found it was possible to tempt even as longstanding a Nintendo partner as Namco — the creator of the hallowed arcade classics Galaxian and Pac-Man — into committing itself “100 percent to the PlayStation.” The first fruit of this defection was Ridge Racer, a port of a stand-up arcade game that became the new console’s breakout early hit.

Square was also among the software houses that Ken Kutaragi approached, but he made no initial inroads there. For all the annoyances of dealing with Nintendo, it still owned the biggest player base in the world, one that had treated Final Fantasy very well indeed, to the tune of more than 9 million games sold to date in Japan alone. This was not a partner that one abandoned lightly — especially not with the Nintendo 64, said partner’s own next-generation console, due at last in 1996. It promised to be every bit as audiovisually capable as the Sony PlayStation or Sega Saturn, even as it was based around a 64-bit processor in place of the 32-bit units of the competition.

Indeed, in many ways the relationship between Nintendo and Square seemed closer than ever in the wake of the PlayStation’s launch. When Yoshihiro Maruyama joined Square in September of 1995 to run its North American operations, he was told that “Square will always be with Nintendo. As long as you work for us, it’s basically the same as working for Nintendo.” Which in a sense he literally was, given that Nintendo by now owned a substantial chunk of Square’s stock. In November of 1995, Nintendo’s president Hiroshi Yamauchi cited the Final Fantasy series as one of his consoles’ unsurpassed crown jewels — eat your heart out, Sony! — at Shoshinkai, Nintendo’s annual press shindig and trade show. As its farewell to the Super Famicom, Square had agreed to make Super Mario RPG: Legend of the Seven Stars, dropping Nintendo’s Italian plumber into a style of game completely different from his usual fare. Released in March of 1996, it was a predictably huge hit in Japan, while also, encouragingly, leveraging the little guy’s Stateside popularity to become the most successful JRPG in those harsh foreign climes since Final Fantasy I.

But Super Mario RPG wound up marking the end of an era in more ways than Nintendo had imagined: it was not just Square’s last Super Famicom RPG but its last major RPG for a Nintendo console, full stop. For just as it was in its last stages of development, there came the earthshaking announcement of January 12, 1996, that Final Fantasy was switching platforms to the PlayStation. Et tu, Square? “I was kind of shocked,” Yoshihiro Maruyama admits. As was everyone else.

The Nintendo 64, which looked like a toy — and an anachronistic one at that — next to the PlayStation.

Square’s decision was prompted by what seemed to have become an almost reactionary intransigence on the part of Nintendo when it came to the subject of CD-ROM. After the two abortive attempts to bring CDs to the Super Famicom, everyone had assumed as a matter of course that they would be the storage medium of the Nintendo 64. It was thus nothing short of baffling when the first prototypes of the console were unveiled in November of 1995 with no CD drive built-in and not even any option on the horizon for adding one. Nintendo’s latest and greatest was instead to live or die with old-school cartridges, which had a capacity of just 64 MB, one-tenth that of a CD.

Why did Nintendo make such a counterintuitive choice? The one compelling technical argument for sticking with cartridges was the loading time of CDs, a mechanical storage medium rather than a solid-state one. Nintendo’s ethos of user-friendly accessibility had always insisted that a game come up instantly when you turned the console on and play without interruption thereafter. Nintendo believed, with considerable justification, that this quality had been the not-so-secret weapon in its first-generation console’s victorious battle against floppy-disk-based 8-bit American microcomputers that otherwise boasted similar audiovisual and processing capabilities, such as the Commodore 64. The PlayStation CD drive, which could transfer 300 K per second into memory, was many, many times faster than the Commodore 64’s infamously slow disk drive, but it wasn’t instant. A cartridge, on the other hand, for all practical purposes was.

Fair enough, as far as it went. Yet there were other, darker insinuations swirling around the games industry which had their own ring of truth. Nintendo, it was said, was loath to give up its stranglehold on the means of production of cartridges and embrace commodity CD-stamping facilities. Most of all, many sensed, the decision to stay with cartridges was bound up with Nintendo’s congenital need to be different, and to assert its idiosyncratic hegemony by making everyone else dance to its tune while it was at it. The question now was whether it had taken this arrogance too far and was about to dance itself into irrelevance while the makers of third-party games moved on to other, equally viable alternative platforms.

Exhibit Number One of same was the PlayStation, which seemed tailor-made for the kind of big, epic game that every Final Fantasy to date had strained to be. It was far easier to churn out huge quantities of 3D graphics than of hand-drawn pixel art, while the staggering storage capacity of CD-ROM gave Square someplace to keep it all — with, it should not be forgotten, the possibility of finding even more space by the simple expedient of shipping a game on multiple CDs, another affordance that cartridges did not allow. And then there were those handy little memory cards for saving state. Those benefits were surely worth trading a little bit of loading time for.

But there was something else about the PlayStation as well that made it an ideal match for Hironobu Sakaguchi’s vision of gaming. Especially after the console arrived in North America and Europe in September of 1995, it fomented a sweeping change in the way the gaming hobby was perceived. “The legacy of the original Playstation is that it took gaming from a pastime that was for young people or maybe slightly geeky people,” says longtime Sony executive Jim Ryan, “and it turned it into a highly credible form of mass entertainment, really comparable with the music business and the movie business.” Veteran game designer Cliff Bleszinski concurs: “The PlayStation shifted the console from having an almost toy-like quality into consumer electronics that are just as desired by twelve-year-olds as they are by 35-year-olds.”

Rather than duking it out with Nintendo and Sega for the eight-to-seventeen age demographic, Sony shifted its marketing attention to young adults, positioning PlayStation gaming as something to be done before or after a night out at the clubs — or while actually at the clubs, for that matter: Sony paid to install the console in trendy nightspots all over the world, so that their patrons could enjoy a round or two of WipEout between trips to the dance floor. In effect, Sony told the people who had grown up with Nintendo and Sega that it was okay to keep on gaming, as long as they did it on a PlayStation from now on. Sony’s marketers understood that, if they could conquer this demographic, that success would automatically spill down into the high-school set that had previously been Sega’s bread and butter, since kids of that age are always aspiring to do whatever the university set is up to. Their logic was impeccable; the Sony PlayStation would destroy the Sega Saturn in due course.

For decades now, the hipster stoner gamer, slumped on the couch with controller in one hand and a bong in the other, has been a pop-culture staple. Sony created that stereotype in the space of a year or two in the 1990s. Whatever else you can say about it, it plays better with the masses than the older one of a pencil-necked nerd sitting bolt upright on his neatly made bed. David James, star goalkeeper for the Premier League football team Liverpool F.C., admitted that he had gotten “carried away” playing PlayStation the night before by way of explaining the three goals that he conceded in a match against Newcastle. It was hard to imagine substituting “Nintendo” or “Saturn” for “PlayStation” in that statement. In May of 1998, Sony would be able to announce triumphantly that, according to its latest survey, the average age of a PlayStation gamer was a positively grizzled 22. It had hit the demographic it was aiming for spot-on, with a spillover that reached both younger and older folks. David Ranyard, a member of Generation PlayStation who has had a varied and successful career in games since the millennium:

At the time of its launch, I was a student, and I’d always been into videogames, from the early days of arcades. I would hang around playing Space Invaders and Galaxian, and until the PlayStation came out, that kind of thing made me a geek. But this console changed all that. Suddenly videogames were cool — not just acceptable, but actually club-culture cool. With a soundtrack from the coolest techno and dance DJs, videogames became a part of [that] subculture. And it led to more mainstream acceptance of consoles in general.

The new PlayStation gamer stereotype dovetailed beautifully with the moody, angsty heroes that had been featuring prominently in Final Fantasy for quite some installments by now. Small wonder that Sakaguchi was more and more smitten with Sony.

Still, it was one hell of a bridge to burn; everyone at Square knew that there would be no going back if they signed on with Sony. Well aware of how high the stakes were for all parties, Sony declared its willingness to accept an extremely low per-unit royalty and to foot the bill for a lot of the next Final Fantasy game’s marketing, promising to work like the dickens to break it in the West. In the end, Sakaguchi allowed himself to be convinced. He had long run Final Fantasy as his own fiefdom at Square, and this didn’t change now: upper management rubber-stamped his decision to make Final Fantasy VII for the Sony PlayStation.

The announcement struck Japan’s games industry with all the force of one of Sakaguchi’s trademark Final Fantasy plot twists. For all the waves Sony had been making recently, nobody had seen this one coming. For its part, Nintendo had watched quite a number of studios defect to Sony already, but this one clearly hurt more than any of the others. It sold off all of its shares in Square and refused to take its calls for the next five years.

The raised stakes only gave Sakaguchi that much more motivation to make Final Fantasy VII amazing — so amazing that even the most stalwart Nintendo loyalists among the gaming population would be tempted to jump ship to the PlayStation in order to experience it. There had already been an unusually long delay after Final Fantasy VI, during which Square had made Super Mario RPG and another, earlier high-profile JRPG called Chrono Trigger, the fruit of a partnership between Hironobu Sakaguchi and Yuji Horii of Dragon Quest fame. (This was roughly equivalent in the context of 1990s Western pop culture to Oasis and Blur making an album together.) Now the rush was on to get Final Fantasy VII out the door within a year, while the franchise and its new platform the PlayStation were still smoking hot.

In defiance of the wisdom found in The Mythical Man-Month, Sakaguchi decided to both make the game quickly and make it amazing by throwing lots and lots of personnel at the problem: 150 people in all, three times as many as had worked on Final Fantasy VI. Cost was no object, especially wherever yen could be traded for time. Square spent the equivalent of $40 million on Final Fantasy VII in the course of just one year, blowing up all preconceptions of how much it could cost to make a computer or console game. (The most expensive earlier game that I’m aware of is the 1996 American “interactive movie” Wing Commander IV, which its developer Origin Systems claimed to have cost $12 million.) By one Square executive’s estimate, almost half of Final Fantasy VII‘s budget went for the hundreds of high-end Silicon Graphics workstations that were purchased, tools for the unprecedented number of 3D artists and animators who attacked the game from all directions at once. Their output came to fill not just one PlayStation CD but three of them — almost two gigabytes of raw data in all, or 30 Nintendo 64 cartridges.

Somehow or other, it all came together. Square finished Final Fantasy VII on schedule, shipping it in Japan on January 31, 1997. It went on to sell over 3 million copies there, bettering Final Fantasy VI‘s numbers by about half a million and selling a goodly number of PlayStations in the process. But, as that fairly modest increase indicates, the Japanese domestic market was becoming saturated; there were only so many games you could sell in a country of 125 million people, most of them too old or too young or lacking the means or the willingness to acquire a PlayStation. There was only one condition in which it had ever made sense to spend $40 million on Final Fantasy VII: if it could finally break the Western market wide open. Encouraged by the relative success of Final Fantasy VI and Super Mario RPG in the United States, excited by the aura of hipster cool that clung to the PlayStation, Square — and also Sony, which lived up to its promise to go all-in on the game — were determined to make that happen, once again at almost any cost. After renumbering the earlier games in the series in the United States to conform with its habit of only releasing every other Final Fantasy title there, Square elected to call this game Final Fantasy VII all over the world. For the number seven was an auspicious one, and this was nothing if not an auspicious game.

Final Fantasy VII shipped on a suitably auspicious date in the United States: September 7, 1997. It sold its millionth unit that December.

In November of 1997, it came to Europe, which had never seen any of the previous six mainline Final Fantasy games before and therefore processed the title as even more of a non sequitur. No matter. Wherever the game went, the title and the marketing worked — worked not only for the game itself, but for the PlayStation. Coming hot on the heels of the hip mega-hit Tomb Raider, it sealed the deal for the console, relegating the Sega Saturn to oblivion and the Nintendo 64 to the status of a disappointing also-ran. Paul Davies was the editor-in-chief of Britain’s Computer and Video Games magazine at the time. He was a committed Sega loyalist, he says, but

I came to my senses when Square announced Final Fantasy VII as a PlayStation exclusive. We received sheets of concept artwork and screenshots at our editorial office, sketches and stills from the incredible cut scenes. I was smitten. I tried and failed to rally. This was a runaway train. [The] PlayStation took up residence in all walks of life, moved from bedrooms to front rooms. It gained — by hook or by crook — the kind of social standing that I’d always wanted for games. Sony stomped on my soul and broke my heart, but my God, that console was a phenomenon.

Final Fantasy VII wound up selling well over 10 million units in all, as many as all six previous entries in the series combined, divided this time almost equally between Japan, North America, and Europe. Along the way, it exploded millions of people’s notions of what games could do and be — people who weren’t among the technological elite who invested thousands of dollars into high-end rigs to play the latest computer games, who just wanted to sit down in front of their televisions after a busy day with a plug-it-in-and-go console and be entertained.

Of course, not everyone who bought the game was equally enamored. Retailers reported record numbers of returns to go along with the record sales, as some people found all the walking around and reading to be not at all what they were looking for in a videogame.

In a way, I share their pain. Despite all its exceptional qualities, Final Fantasy VII fell victim rather comprehensively to the standard Achilles heel of the JRPG in the West: the problem of translation. Its English version was completed in just a couple of months at Square’s American branch, reportedly by a single employee working without supervision, then sent out into the world without a second glance. I’m afraid there’s no way to say this kindly: it’s almost unbelievably terrible, full of sentences that literally make no sense, punctuated by annoying ellipses that are supposed to represent… I don’t know what. Pauses… for… dramatic… effect, perhaps? To say it’s on the level of a fan translation would be to insult the many fans of Japanese videogames in the West, who more often than not do an extraordinary job when they tackle such a project. That a game so self-consciously pitched as the moment when console-based videogames would come into their own as a storytelling medium and as a form of mass-market entertainment to rival movies could have been allowed out the door with writing like this boggles the mind. It speaks to what a crossroads moment this truly was for games, when the old ways were still in the process of going over to the new. Although the novelty of the rest of the game was enough to keep the poor translation from damaging its commercial prospects overmuch, the backlash did serve as a much-needed wake-up call for Square. Going forward, it would take the details of “localization,” as such matters are called in industry speak, much more seriously.

Oh, my…

Writerly sort that I am, I’ll be unable to keep myself from harping further on the putrid translation in the third and final article in this series, when I’ll dive into the game itself. Right now, though, I’d like to return to the subject of what Final Fantasy VII meant for gaming writ large. In case I haven’t made it clear already, let me state it outright now: its arrival and reception in the West in particular marked one of the watershed moments in the entire history of gaming.

It cemented, first of all, the PlayStation’s status as the overwhelming victor in the late-1990s edition of the eternal Console Wars, as it did the PlayStation’s claim to being the third socially revolutionary games console in history, after the Atari VCS and the original Nintendo Famicom. In the process of changing forevermore the way the world viewed videogames and the people who played them, the PlayStation eventually sold more than 100 million units, making it the best-selling games console of the twentieth century, dwarfing the numbers of the Sega Saturn (9 million units) and even the Nintendo 64 (33 million units), the latter of which was relegated to the status of the “kiddie console” on the playgrounds of the world. The underperformance of the Saturn followed by that of its successor the Dreamcast (again, just 9 million units sold) led Sega to abandon the console-hardware business entirely. Even more importantly, the PlayStation shattered the aura of remorseless, monopolistic inevitability that had clung to Nintendo since the mid-1980s; Nintendo would be for long stretches of the decades to come an also-ran in the very industry it had almost single-handedly resurrected. If the PlayStation was conceived partially as revenge for Nintendo’s jilting of Sony back in 1991, it was certainly a dish served cold — in fact, one that Nintendo is to some extent still eating to this day.

Then, too, it almost goes without saying that the JRPG, a sub-genre that had hitherto been a niche occupation of American gamers and virtually unknown to European ones, had its profile raised incalculably by Final Fantasy VII. The JRPG became almost overnight one of the hottest of all styles of game, as millions who had never imagined that a game could offer a compelling long-form narrative experience like this started looking for more of the same to play just as soon as its closing credits had rolled. Suddenly Western gamers were awaiting the latest JRPG releases with just as much impatience as Japanese gamers — releases not only in the Final Fantasy series but in many, many others as well. Their names, which tended to sound strange and awkward to English ears, were nevertheless unspeakably alluring to those who had caught the JRPG fever: Xenogears, Parasite Eve, Suikoden, Lunar, Star Ocean, Thousand Arms, Chrono Cross, Valkyrie Profile, Legend of Mana, Saiyuki. The whole landscape of console gaming changed; nowhere in the West in 1996, these games were everywhere in 1998 and 1999. It required a dedicated PlayStation gamer indeed just to keep up with the glut. At the risk of belaboring a point, I must note here that there were relatively few such games on the Nintendo 64, due to the limited storage capacity of its cartridges. Gamers go where the games they want to play are, and, for gamers in their preteens or older at least, those games were on the PlayStation.

From the computer-centric perspective that is this site’s usual stock in trade, perhaps the most important outcome of Final Fantasy VII was the dawning convergence it heralded between what had prior to this point been two separate worlds of gaming. Shortly before its Western release on the PlayStation, Square’s American subsidiary had asked the parent company for permission to port Final Fantasy VII to Windows-based desktop computers, under the logic that, if American console gamers did still turn out to be nonplussed by the idea of a hundred-hour videogame despite marketing’s best efforts, American computer gamers would surely not be.

Square Japan agreed, but that was only the beginning of the challenge of getting Final Fantasy VII onto computer-software shelves. Square’s American arm called dozens of established computer publishers, including heavy hitters like Electronic Arts. Rather incredibly, they couldn’t drum up any interest whatsoever in a game that was by now selling millions of copies on the most popular console in the world. At long last, they got a bite from the British publisher Eidos, whose Tomb Raider had been 1996’s PlayStation game of the year whilst also — and unusually for the time — selling in big numbers on computers.

That example of cross-platform convergence notwithstanding, everyone involved remained a bit tentative about the Final Fantasy VII Windows port, regarding it more as a cautious experiment than the blockbuster-in-the-offing that the PlayStation version had always been treated as. Judged purely as a piece of Windows software, the end result left something to be desired, being faithful to the console game to a fault, to the extent of couching its saved states in separate fifteen-slot “files” that stood in for PlayStation memory cards.

The Windows version of Final Fantasy VII came out a year after the PlayStation version. “If you’re open to new experiences and perspectives in role-playing and can put up with idiosyncrasies from console-game design, then take a chance and experience some of the best storytelling ever found in an RPG,” concluded Computer Gaming World in its review, stamping the game “recommended, with caution.” Despite that less than rousing endorsement, it did reasonably well, selling somewhere between 500,000 and 1 million units by most reports.

They were baby steps to be sure, but Tomb Raider and Final Fantasy VII between them marked the start of a significant shift, albeit one that would take another half-decade or so to become obvious to everyone. The storage capacity of console CDs, the power of the latest console hardware, and the consoles’ newfound ability to easily save state from session to session had begun to elide if not yet erase the traditional barriers between “computer games” and “videogames.” Today the distinction is all but eliminated, as cross-platform development tools and the addition of networking capabilities to the consoles make it possible for everyone to play the same sorts of games at least, if not always precisely the same titles. This has been, it seems to me, greatly to the benefit of gaming in general: games on computers have become more friendly and approachable, even as games on consoles have become deeper and more ambitious.

So, that’s another of the trends we’ll need to keep an eye out for as we continue our journey down through the years. Next, though, it will be time to ask a more immediately relevant question: what is it like to actually play Final Fantasy VII, the game that changed so much for so many?





Sources: the books Pure Invention: How Japan Made the Modern World by Matt Alt, Power-Up: How Japanese Video Games Gave the World an Extra Life by Chris Kohler, Fight, Magic, Items: The History of Final Fantasy, Dragon Quest, and the Rise of Japanese RPGs in the West by Aidan Moher, Atari to Zelda: Japan’s Videogames in Global Contexts by Mia Consalvo, Revolutionaries at Sony: The Making of the Sony PlayStation by Reiji Asakura, and Game Over: How Nintendo Conquered the World by David Sheff. Retro Gamer 69, 96, 108, 137, 170, and 188; Computer Gaming World of September 1997, October 1997, May 1998, and November 1998.

Online sources include Polygon‘s authoritative “Final Fantasy 7: An Oral History”, “The History of Final Fantasy VII” at Nintendojo, “The Weird History of the Super NES CD-ROM, Nintendo’s Most Notorious Vaporware” by Chris Kohler at Kotaku, and “The History of PlayStation Was Almost Very Different” by Blake Hester at Polygon.

Footnotes
1 Philips wasn’t, however, above exploiting the letter of its contract with Nintendo to make a Mario game and three substandard Legend of Zelda games available for the CD-i.
2 Sony did purchase the venerable British game developer and publisher Psygnosis well before its console’s launch to help prime the pump with some quality games, but it largely left it to manage its own affairs on the other side of the world.
 
 


Putting the “J” in the RPG, Part 1: Dorakue!


Fair warning: this article includes some plot spoilers of Final Fantasy I through VI.

The videogame industry has always run on hype, but the amount of it that surrounded Final Fantasy VII in 1997 was unparalleled in its time. This new game for the Sony PlayStation console was simply inescapable. The American marketing teams of Sony and Square Corporation, the game’s Japanese developer and publisher, had been given $30 million with which to elevate Final Fantasy VII to the same status as the Super Marios of the world. They plastered Cloud, Aerith, Tifa, Sephiroth, and the game’s other soon-to-be-iconic characters onto urban billboards, onto the sides of buses, and into the pages of glossy magazines like Rolling Stone, Playboy, and Spin. Commercials for the game aired round the clock on MTV, during NFL games and Saturday Night Live, even on giant cinema screens in lieu of more traditional coming-attractions trailers. “They said it couldn’t be done in a major motion picture,” the stentorian announcer intoned. “They were right!” Even if you didn’t care a whit about videogames, you couldn’t avoid knowing that something pretty big was going down in that space.

And if you did care… oh, boy. The staffs of the videogame magazines, hardly known for their sober-mindedness in normal times, worked themselves up to positively orgasmic heights under Square’s not-so-gentle prodding. GameFan told its readers that Final Fantasy VII would be “unquestionably the greatest entertainment product ever created.”

The game is ridiculously beautiful. Analyze five minutes of gameplay in Final Fantasy VII and witness more artistic prowess than most entire games have. The level of detail is absolutely astounding. These graphics are impossible to describe; no words are great enough. Both map and battle graphics are rendered to a level of detail completely unprecedented in the videogame world. Before Final Fantasy VII, I couldn’t have imagined a game looking like this for many years, and that’s no exaggeration. One look at a cut scene or call spell should handily convince you. Final Fantasy VII looks so consistently great that you’ll quickly become numb to the power. Only upon playing another game will you once again realize just how fantastic it is.

But graphics weren’t all that the game had going for it. In fact, they weren’t even the aspect that would come to most indelibly define it for most of its players. No… that thing was, for the very first time in a mainstream console-based videogame with serious aspirations of becoming the toppermost of the poppermost, the story.

I don’t have any room to go into the details, but rest assured that Final Fantasy VII possesses the deepest, most involved story line ever in an RPG. There’s few games that have literally caused my jaw to drop at plot revelations, and I’m most pleased to say that Final Fantasy VII doles out these shocking, unguessable twists with regularity. You are constantly motivated to solve the latest mystery.

So, the hype rolled downhill, from Square at the top to the mass media, then on to the hardcore gamer magazines to ordinary owners of PlayStations. You would have to have been an iconoclastic PlayStation owner indeed not to be shivering with anticipation as the weeks counted down toward the game’s September 7 release. (Owners of other consoles could eat their hearts out; Final Fantasy VII was a PlayStation exclusive.)

Just last year, a member of an Internet gaming forum still fondly recalled how

the lead-up for the US launch of this game was absolutely insane, and, speaking personally, it is the most excited about a game I think I had ever been in my life, and nothing has come close since then. I was only fifteen at the time, and this game totally overtook all my thoughts and imagination. I had never even played a Final Fantasy game before, and I didn’t even like RPGs, yet I would spend hours reading and rereading all the articles from all the gaming magazines I had, inspecting all the screenshots and being absolutely blown away at the visual fidelity I was witnessing. I spent multiple days/hours with my Sony Discman listening to music and drawing the same artwork that was in all the mags. It was literally a genre- and generation-defining game.

Those who preferred to do their gaming on personal computers rather than consoles might be excused for scoffing at all these breathless commentators who seemed to presume that Final Fantasy VII was doing something that had never been done before. If you spent your days playing Quake, Final Fantasy VII‘s battle graphics probably weren’t going to impress you overmuch; if you knew, say, Toonstruck, even the cut scenes might strike you as pretty crude. And then, too, computer-based adventure games and RPGs had been delivering well-developed long-form interactive narratives for many years by 1997, most recently with a decidedly cinematic bent more often than not, with voice actors in place of Final Fantasy VII‘s endless text boxes. Wasn’t Final Fantasy VII just a case of console gamers belatedly catching on to something computer gamers had known all along, and being forced to do so in a technically inferior fashion at that?

Well, yes and no. It’s abundantly true that much of what struck so many as so revelatory about Final Fantasy VII really wasn’t anywhere near as novel as they thought it was. At the same time, though, the aesthetic and design philosophies which it applied to the abstract idea of the RPG truly were dramatically different from the set of approaches favored by Western studios. They were so different, in fact, that the RPG genre in general would be forever bifurcated in gamers’ minds going forward, as the notion of the “JRPG” — the Japanese RPG — entered the gaming lexicon. In time, the label would be applied to games that didn’t actually come from Japan at all, but that evinced the set of styles and approaches so irrevocably cemented in the Western consciousness under the label of “Japanese” by Final Fantasy VII.

We might draw a parallel with what happened in music in the 1960s. The Beatles, the Rolling Stones, and all the other Limey bands who mounted the so-called “British Invasion” of their former Colonies in 1964 had all spent their adolescence steeped in American rock and roll. They took those influences, applied their own British twist to them, then sold them back to American teenagers, who screamed and fainted in the concert halls like Final Fantasy VII fans later would in the pages of the gaming magazines, convinced that the rapture they were feeling was brought on by something genuinely new under the sun — which in the aggregate it was, of course. It took the Japanese to teach Americans how thrilling and accessible — even how emotionally moving — the gaming genre they had invented could truly be.



The roots of the JRPG can be traced back not just to the United States but to a very specific place and time there: to the American Midwest in the early 1970s, where and when Gary Gygax and Dave Arneson, a pair of stolid grognards who would have been utterly nonplussed by the emotional histrionics of a Final Fantasy VII, created a “single-unit wargame” called Dungeons & Dragons. I wrote quite some years ago on this site that their game’s “impact on the culture at large has been, for better or for worse, greater than that of any single novel, film, or piece of music to appear during its lifetime.” I almost want to dismiss those words now as the naïve hyperbole of a younger self. But the thing is, I can’t; I have no choice but to stand by them. Dungeons & Dragons really was that earthshaking, not only in the obvious ways — it’s hard to imagine the post-millennial craze for fantasy in mass media, from the Lord of the Rings films to Game of Thrones, ever taking hold without it — but also in subtler yet ultimately more important ones, in the way it changed the role we play in our entertainments from that of passive spectators to active co-creators, making interactivity the watchword of an entire age of media.

The early popularity of Dungeons & Dragons coincided with the rise of accessible computing, and this proved a potent combination. Fans of the game with access to PLATO, a groundbreaking online community rooted in American universities, moved it as best they could onto computers, yielding the world’s first recognizable CRPGs. Then a couple of PLATO users named Robert Woodhead and Andrew Greenberg made a game of this type for the Apple II personal computer in 1981, calling it Wizardry. Meanwhile Richard Garriott was making Ultima, a different take on the same broad concept of “Dungeons & Dragons on a personal computer.”

By the time Final Fantasy VII stormed the gates of the American market so triumphantly in 1997, the cultures of gaming in the United States and Japan had diverged so markedly that one could almost believe they had never had much of anything to do with one another. Yet in these earliest days of digital gaming — long before the likes of the Nintendo Entertainment System, when Japanese games meant only coin-op arcade hits like Space Invaders, Pac-Man, and Donkey Kong in the minds of most Americans — there was in fact considerable cross-pollination. For Japan was the second place in the world after North America where reasonably usable, pre-assembled, consumer-grade personal computers could be readily purchased; the Japanese Sharp MZ80K and Hitachi MB-6880 trailed the American Trinity of 1977 — the Radio Shack TRS-80, Apple II, and Commodore PET — by less than a year. If these two formative cultures of computing didn’t talk to one another, whom else could they talk to?

Thus pioneering American games publishers like Sierra On-Line and Brøderbund forged links with counterparts in Japan. A Japanese company known as Starcraft became the world’s first gaming localizer, specializing in porting American games to Japanese computers and translating their text into Japanese for the domestic market. As late as the summer of 1985, Roe R. Adams III could write in Computer Gaming World that Sierra’s sprawling twelve-disk-side adventure game Time Zone, long since written off at home as a misbegotten white elephant, “is still high on the charts after three years” in Japan. Brøderbund’s platformer Lode Runner was even bigger, having swum like a salmon upstream in Japan, being ported from home computers to coin-op arcade machines rather than the usual reverse. It had even spawned the world’s first e-sports league, whose matches were shown on Japanese television.

At that time, the first Wizardry game and the second and third Ultima had only recently been translated and released in Japan. And yet if Adams was to be believed (he was not an entirely disinterested observer, being already at work with Robert Woodhead on Wizardry IV, and having in fact accompanied him to Japan in that capacity), both games already

have huge followings. The computer magazines cover Lord British [Richard Garriott’s nom de plume] like our National Inquirer would cover a television star. When Robert Woodhead of Wizardry fame was recently in Japan, he was practically mobbed by autograph seekers. Just introducing himself in a computer store would start a near-stampede as people would run outside to shout that he was inside.

Robert Woodhead with Japanese Wizardry fans.

The Wizardry and Ultima pump had been primed in Japan by a game called The Black Onyx, created the year before in their image for the Japanese market by an American named Henk Rogers. (A man with an international perspective if ever there was one, Rogers would later go on to fame and fortune as the man who brought Tetris out of the Soviet Union.) But his game was quickly eclipsed by the real deals that came directly out of the United States.

Wizardry in particular became a smashing success in Japan, even as a rather lackadaisical attitude toward formal and audiovisual innovation on the part of its masterminds was already condemning it to also-ran status against Ultima and its ilk in the United States. It undoubtedly helped that Wizardry was published in Japan by ASCII Corporation, that country’s nearest equivalent to Microsoft, with heaps of marketing clout and distributional muscle to bring to bear on any challenge. So, while the Wizardry series that American gamers knew petered out in somewhat anticlimactic fashion in the early 1990s after seven games (it would be briefly revived for one final game, the appropriately named Wizardry 8, in 2001), it spawned close to a dozen Japanese-exclusive titles later in that decade alone, plus many more after the millennium, such that the franchise remains to this day far better known by everyday gamers in Japan than it is in the United States. Robert Woodhead himself spent two years in Japan in the early 1990s working on what would have been a Wizardry MMORPG, if it hadn’t proved to be just too big a mouthful for the hardware and telecommunications infrastructure at his disposal.

Box art helps to demonstrate Wizardry‘s uncanny legacy in Japan. Here we see the original 1981 American release of the first game.

And here we have a Japan-only Wizardry from a decade later, self-consciously echoing a foreboding, austere aesthetic that had become more iconic in Japan than it had ever been in its home country. (American Wizardry boxes from the period look nothing like this, being illustrated in a more conventional, colorful epic-fantasy style.)

Much of the story of such cultural exchanges inevitably becomes a tale of translation. In its original incarnation, the first Wizardry game had had the merest wisp of a plot. In this as in all other respects it was a classic hack-and-slash dungeon crawler: work your way down through ten dungeon levels and kill the evil wizard, finito. What background context there was tended to be tongue-in-cheek, more Piers Anthony than J.R.R. Tolkien; the most desirable sword in the game was called the “Blade of Cuisinart,” for Pete’s sake. Wizardry‘s Japanese translators, however, took it all in with wide-eyed earnestness, missing the winking and nodding entirely. They saw a rather grim, austere milieu a million miles away from the game that Americans knew — a place where a Cuisinart wasn’t a stainless-steel food processor but a portentous ancient warrior clan.

When the Japanese started to make their own Wizardry games, they continued in this direction, to almost hilarious effect if one knew the source material behind their efforts; it rather smacks of the post-apocalyptic monks in A Canticle for Leibowitz making a theology for themselves out of the ephemeral advertising copy of their pre-apocalyptic forebears. A franchise that had in its first several American releases aspired to be about nothing more than killing monsters for loot — and many of them aggressively silly monsters at that — gave birth to audio CDs full of po-faced stories and lore, anime films and manga books, a sprawling line of toys and miniature figures, even a complete tabletop RPG system. But, lest we Westerners begin to feel too smug about all this, know that the same process would eventually come to work in reverse in the JRPG field, with nuanced Japanese writing being flattened out and flat-out misunderstood by clueless American translators.

The history of Wizardry in Japan is fascinating by dint of its sheer unlikeliness, but the game’s importance on the global stage actually stems more from the Japanese games it influenced than from the ones that bore the Wizardry name right there on the box. For Wizardry, along with the early Ultima games, happened to catch the attention of Koichi Nakamura and Yuji Horii, a software-development duo who had already made several games together for a Japanese publisher called Enix. “Horii-san was really into Ultima, and I was really into Wizardry,” remembers Nakamura. This made sense. Nakamura was the programmer of the pair, naturally attracted to Wizardry‘s emphasis on tactics and systems. Horii, on the other hand, was the storytelling type, who wrote for manga magazines in addition to games, and was thus drawn to Ultima‘s quirkier, more sprawling world and its spirit of open-ended exploration. The pair decided to make their own RPG for the Japanese market, combining what they each saw as the best parts of Wizardry and Ultima.

Yuji Horii in the 1980s. Little known outside his home country, he is a celebrity inside its borders. In his book on Japanese videogame culture, Chris Kohler calls him a Steven Spielberg-like figure there, in terms both of name recognition and the style of entertainment he represents.

This was interesting, but not revolutionary in itself; you’ll remember that Henk Rogers had already done essentially the same thing in Japan with The Black Onyx before Wizardry and Ultima ever officially arrived there. Nevertheless, the choices Nakamura and Horii made as they set about their task give them a better claim to the title of revolutionaries on this front than Rogers enjoys. They decided that making a game that combined the best of Wizardry and Ultima really did mean just that: it did not mean, that is to say, throwing together every feature of each which they could pack in and calling it a day, as many a Western developer might have. They decided to make a game that was simpler than either of its inspirations, much less the two of them together.

Their reasons for doing so were artistic, commercial, and technical. In the realm of the first, Horii in particular just didn’t like overly complicated games; he was the kind of player who would prefer never to have to glance at a manual, whose ideal game intuitively communicated to you everything you needed to know in order to play it. In the realm of the second, the pair was sure that the average Japanese person, like the average person in most countries, felt the same as Horii; even in the United States, Ultima and Wizardry were niche products, and Nakamura and Horii had mass-market ambitions. And in the realm of the third, they were sharply limited in how much they could put into their RPG anyway, because they intended it for the Nintendo Famicom console, where their entire game — code, data, graphics, and sound — would have to fit onto a 64 K cartridge in lieu of floppy disks and would have to be steerable using an eight-button controller in lieu of a keyboard. Luckily, Nakamura and Horii already had experience with just this sort of simplification. Their most recent output had been inspired by the adventure games of American companies like Sierra and Infocom, but had replaced those games’ text parsers with controller-friendly multiple-choice menus.

In deciding to put American RPGs through the same wringer, they established one of the core attributes of the JRPG sub-genre: generally speaking, these games were and would remain simpler than their Western counterparts, which sometimes seemed to positively revel in their complexity as a badge of honor. Another attribute emerged fully-formed from the writerly heart of Yuji Horii. He crafted an unusually rich, largely linear plot for the game. Rather than being a disadvantage, he thought linearity would make this new style of console game “more accessible to consumers”: “We really focused on ensuring people would be able to experience the fun of the story.”

He called upon his friends at the manga magazines to help him illustrate his tale with large, colorful figures in that distinctly Japanese style that has become so immediately recognizable all over the world. At this stage, it was perhaps more prevalent on the box than in the game itself, the Famicom’s graphical fidelity being what it was. Nonetheless, another precedent that has held true in JRPGs right down to the present day was set by the overall visual aesthetic of this, the canonical first example of the breed. Ditto its audio aesthetic, which took the form of a memorable, melodic, eminently hummable chip-tune soundtrack. “From the very beginning, we wanted to create a warm, inviting world,” says Horii.

Dragon Quest. Ultima veterans will almost expect to meet Lord British on his throne somewhere. With its overhead view and its large over-world full of towns to be visited, Dragon Quest owed even more to Ultima than it did to Wizardry — unsurprisingly so, given that the former was the American RPG which its chief creative architect Yuji Horii preferred.

Dragon Quest was released on May 27, 1986. Console gamers — not only those in Japan, but anywhere on the globe — had never seen anything like it. Playing this game to the end was a long-form endeavor that could stretch out over weeks or months; you wrote down an alphanumeric code it provided to you on exit, then entered this code when you returned to the game in order to jump back to wherever you had left off.
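For the technically curious, the trick behind such password saves is easy to sketch. What follows is a purely hypothetical illustration in Python — not Dragon Quest’s actual encoding, and certainly not Famicom assembly — of how a few small gameplay variables can be packed into a short, transcription-friendly string of letters and numbers:

```python
# Hypothetical password-save sketch: pack a few small fields into one
# integer, add a simple checksum, and render it in a base-32 alphabet
# chosen to avoid easily confused characters (no 0/O, no 1/I).
ALPHABET = "ABCDEFGHJKLMNPQRSTUVWXYZ23456789"  # 32 symbols

def encode(level, gold, flags):
    # 6 bits of level, 16 bits of gold, 8 bits of story-progress flags.
    state = ((level & 0x3F) << 24) | ((gold & 0xFFFF) << 8) | (flags & 0xFF)
    # Append a 5-bit checksum so most transcription slips are caught.
    state = (state << 5) | (state % 31)
    chars = []
    while state:
        chars.append(ALPHABET[state & 31])  # peel off 5 bits at a time
        state >>= 5
    return "".join(reversed(chars))

def decode(password):
    state = 0
    for ch in password:
        state = (state << 5) | ALPHABET.index(ch)
    state, check = state >> 5, state & 31
    if state % 31 != check:
        raise ValueError("invalid password")  # player miscopied a character
    return (state >> 24) & 0x3F, (state >> 8) & 0xFFFF, state & 0xFF

pw = encode(12, 5000, 0b10110000)
print(pw, decode(pw))  # the short password round-trips to (12, 5000, 176)
```

Thirty bits of state plus a checksum fits in seven characters here; a real game with more inventory and progress flags simply needs a longer password, which is exactly why Dragon Quest’s codes grew as the series went on.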

That said, the fact that the entire game state could be packed into a handful of numbers and letters does serve to illustrate just how simple Dragon Quest really was at bottom. By the standards of only a few years later, much less today, it was pretty boring. Fighting random monsters wasn’t so much a distraction from the rest of the game as the only thing available to do; the grinding was the game. In 2012, critic Nick Simberg wondered at “how willing we were to sit down on the couch and fight the same ten enemies over and over for hours, just building up gold and experience points”; he compared Dragon Quest to “a child’s first crayon drawing, stuck with a magnet to the fridge.”

And yet, as the saying goes, you have to start somewhere. Japanese gamers were amazed and entranced, buying 1 million copies of Dragon Quest in its first six months, over 2 million copies in all. And so a new sub-genre was born, inspired by American games but indelibly Japanese in a way The Black Onyx had not been. Many or most of the people who played and enjoyed Dragon Quest had never even heard of its original wellspring Dungeons & Dragons.

We all know what happens when a game becomes a hit on the scale of Dragon Quest. There were sequels — two within two years of the first game, then three more in the eight years after them, as the demands of higher production values slowed down Enix’s pace a bit. Wizardry was big in Japan, but it was nothing compared to Dragon Quest, which sold 2.4 million copies in its second incarnation, followed by an extraordinary 3.8 million copies in its third. Middle managers and schoolmasters alike learned to dread the release of a new entry in the franchise, as about half the population of Japan under a certain age would invariably call in sick that day. When Enix started bringing out the latest games on non-business days, a widespread urban legend said this had been done in accordance with a decree from the Japanese Diet, which demanded that “henceforth Dragon Quest games are to be released on Sunday or national holidays only”; the urban legend wasn’t true, but the fact that so many people in Japan could so easily believe it says something in itself. Just as the early American game Adventure lent its name to an entire genre that followed it, the Japanese contraction of “Dragon Quest” — Dorakue — became synonymous with the RPG in general there, such that when you told someone you were “playing dorakue” you might really be playing one of the series’s countless imitators.

Giving any remotely complete overview of these dorakue games would require dozens of articles, along with someone to write them who knows far more about them than I do. But one name is inescapable in the field. I refer, of course, to Final Fantasy.


Hironobu Sakaguchi in 1991.

Legend has it that Hironobu Sakaguchi, the father of Final Fantasy, chose that name because he thought that the first entry in the eventual franchise would be the last videogame he ever made. A former professional musician with numerous and diverse interests, Sakaguchi had been working for the Japanese software developer and publisher Square for a few years already by 1987, designing and programming Famicom action games that he himself found rather banal and that weren’t even selling all that well. He felt ready to do something else with his life, and was poised to go back to university to try to figure out what that thing ought to be. But before he did so, he wanted to try something completely different at Square.

Another, less dramatic but probably more accurate version of the origin story has it that Sakaguchi simply liked the way the words “final” and “fantasy” sounded together. At any rate, he convinced his managers to give him half a dozen assistants and six months to make a dorakue game. In another unexpected link between East and West, one of his most important assistants was Nasir Gebelli, an Iranian who had fled his country’s revolution for the United States in 1979 and become a game-programming rock star on the Apple II. After the heyday of the lone-wolf bedroom auteur began to fade there, Doug Carlston, the head of Brøderbund, brokered a job for him with his friends in Japan. There he maximized the Famicom’s potential in the same way he had that of the Apple II, despite not speaking a word of Japanese when he arrived. (“We’d go to a restaurant and no matter what he’d order — spaghetti or eggs — they’d always bring out steak,” Sakaguchi laughs.) Gebelli would program the first three Final Fantasy games almost entirely by himself.

 

Final Fantasy I.

The very first Final Fantasy may not have looked all that different from Dragon Quest at first glance — it was still a Famicom game, after all, with all the audiovisual limitations that implies — but it had a story line that was more thematically thorny and logistically twisted than anything Yuji Horii might have come up with. As it began, you found yourself in the midst of a quest to save a princess from an evil knight, which certainly sounded typical enough to anyone who had ever played a dorakue game before. In this case, however, you completed that task within an hour, only to learn that it was just a prologue to the real plot. In his book-length history and study of the aesthetics of Japanese videogames, Chris Kohler detects an implicit message here: “Final Fantasy is about much more than saving the princess. Compared to the adventure that is about to take place, saving a princess is merely child’s play.” In fact, only after the prologue was complete did the opening credits finally roll, thus displaying another consistent quality of Final Fantasy: its love of unabashedly cinematic drama.

Still, for all that it was more narratively ambitious than what had come before, the first Final Fantasy can, like the first Dragon Quest, seem a stunted creation today. Technical limitations meant that you still spent 95 percent of your time just grinding for experience. “Final Fantasy may have helped build the genre, but it didn’t necessarily know exactly how to make it fun,” acknowledges Aidan Moher in his book about JRPGs. And yet when it came to dorakue games in the late 1980s, it seemed that Sakaguchi’s countrymen were happy to reward even the potential for eventual fun. They made Final Fantasy the solid commercial success that had heretofore hovered so frustratingly out of reach of its creator; it sold 400,000 copies. Assured that he would never have to work on a mindless action game again, Sakaguchi agreed to stay on at Square to build upon its template.

Final Fantasy II, which was released exactly one year after the first game in December of 1988 and promptly doubled its sales, added more essential pieces to the franchise's template. Although labeled and marketed as a sequel, its setting, characters, and plot had no relation to what had come before. Going forward, it would remain a consistent point of pride with Sakaguchi to conjure up each new Final Fantasy from whole cloth, even when fans begged him for a reunion with their favorite places and people. In a world as afflicted with sequelitis as ours, he can only be commended for sticking to his guns.

In another sense, though, Final Fantasy II was notable for abandoning a blank slate rather than embracing it. For the first time, its players were given a pre-made party full of pre-made personalities to guide rather than being allowed to roll their own. Although they could rename the characters if they were absolutely determined to do so — this ability would be retained as a sort of vestigial feature as late as Final Fantasy VII — they were otherwise set in stone, the better to serve the needs of the set-piece story Sakaguchi wanted to tell. This approach, which many players of Western RPGs did and still do regard as a betrayal of one of the core promises of the genre, would become commonplace in JRPGs. Few contrasts illustrate so perfectly the growing divide between these two visions of the RPG: the one open-ended and player-driven, sometimes to a fault; the other tightly scripted and story-driven, again sometimes to a fault. In a Western RPG, you write a story for yourself; in a JRPG, you live a story that someone else has already written for you.

Consider, for example, the two lineages' handling of mortality. If one of your characters dies in battle in a Western RPG, it might be difficult and expensive, or in some cases impossible, to restore her to life; in this case, you either revert to an earlier saved state or you just accept her death as another part of the story you're writing and move on to the next chapter with an appropriately heavy heart. In a JRPG, on the other hand, death in battle is never final; it's almost always easy to bring a character who gets beaten down to zero hit points back to life. What are truly fatal, however, are pre-scripted deaths, the ones the writers have deemed necessary for storytelling purposes. Final Fantasy II already contained the first of these; years later, Final Fantasy VII would be host to the most famous of them all, a death so shocking that you just have to call it that scene and everyone who has ever played the game will immediately know what you're talking about. To steal a phrase from Graham Nelson, the narrative always trumps the crossword in JRPGs; they happily override their gameplay mechanics whenever the story they wish to tell demands it, creating an artistic and systemic discontinuity that's enough to make Aristotle roll over in his grave. Yet a huge global audience of players are not bothered at all by it — not if the story is good enough.

But we’ve gotten somewhat ahead of ourselves; the evolution of the 1980s JRPG toward the modern-day template came in fits and starts rather than a linear progression. Final Fantasy III, which was released in 1990, actually returned to a player-generated party, and yet the market failed to punish it for its conservatism. Far from it: it sold 1.4 million copies.

Final Fantasy IV, on the other hand, chose to double down on the innovations Final Fantasy II had deployed, and sold in about the same numbers as Final Fantasy III. Released in July of 1991, it provided you with not just a single pre-made party but an array of characters who moved in and out of your control as the needs of the plot dictated, thereby setting yet another longstanding precedent for the series going forward. Ditto the nature of the plot, which leaned into shades of gray as never before. Chris Kohler:

The story deals with mature themes and complex characters. In Final Fantasy II, the squeaky-clean main characters were attacked by purely evil dark knights; here, our main character is a dark knight struggling with his position, paid to kill innocents, trying to reconcile loyalty to his kingdom with his sense of right and wrong. He is involved in a sexual relationship. His final mission for the king turns out to be a mass murder: the “phantom monsters” are really just a town of peaceful humans whose magic the corrupt king has deemed dangerous. (Note the heavy political overtones.)

Among Western RPGs, only the more recent Ultima games had dared to deviate so markedly from the absolute-good-versus-absolute-evil tales of everyday heroic fantasy. (In fact, the plot of Final Fantasy IV bears a lot of similarities to that of Ultima V…)

Ever since Final Fantasy IV, the series has been filled with an inordinate number of moody young James Deans and long-suffering Natalie Woods who love them.

Final Fantasy IV was also notable for introducing an “active-time battle system,” a hybrid between the turn-based systems the series had previously employed and real-time combat, designed to provide some of the excitement of the latter without completely sacrificing the tactical affordances of the former. (In a nutshell, if you spend too long deciding what to do when it’s your turn, the enemies will jump in and take another turn of their own while you dilly-dally.) It too would remain a staple of the franchise for many installments to come.
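The idea behind an active-time battle system is simple enough to sketch in a few lines of code. The following Python toy (all names here are hypothetical illustrations, not Square's actual implementation) captures just the core mechanic: every combatant's gauge fills continuously in the background, and whoever's gauge is full gets to act — whether or not the player has finished deciding.

```python
# A minimal sketch of an active-time battle loop. All names are
# illustrative; this shows the general idea, not Square's actual code.
# Every combatant's gauge fills continuously; anyone whose gauge is
# full may act, so a fast enemy takes extra turns while the player stalls.

class Combatant:
    def __init__(self, name, speed):
        self.name = name
        self.speed = speed   # gauge fill rate per tick
        self.gauge = 0.0     # a combatant acts when this reaches 100

def tick(combatants, dt=1.0):
    """Advance every gauge by one time step; return who is ready to act."""
    ready = []
    for c in combatants:
        c.gauge += c.speed * dt
        if c.gauge >= 100:
            c.gauge = 0.0
            ready.append(c)
    return ready

# While a dithering hero (speed 4) never fills his gauge in 20 ticks,
# a quick goblin (speed 11) gets to act twice.
hero, goblin = Combatant("hero", 4), Combatant("goblin", 11)
turn_order = []
for _ in range(20):
    for actor in tick([hero, goblin]):
        turn_order.append(actor.name)
# turn_order is now ["goblin", "goblin"]
```

The penalty for dilly-dallying falls out of the design for free: the clock keeps ticking while the menu sits open, so a slow decision simply lets faster enemies' gauges lap yours.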

Final Fantasy V, which was released in December of 1992, was like Final Fantasy III something of a placeholder or even a retrenchment, dialing back on several of the fourth game’s innovations. It sold almost 2.5 million copies.

Both the fourth and fifth games had been made for the Super Famicom, Nintendo's 16-bit successor to its first console, and sported correspondingly improved production values. But most JRPG fans agree that it was with the sixth game — the last for the Super Famicom — that all the pieces finally came together into a truly frictionless whole. Indeed, a substantial and vocal minority will tell you that Final Fantasy VI rather than its immediate successor is the best Final Fantasy ever, balanced perfectly between where the series had been and where it was going.

Final Fantasy VI abandoned conventional epic-fantasy settings for a steampunk milieu out of Jules Verne. As we’ll see in a later article, Final Fantasy VII‘s setting would deviate even more from the norm. This creative restlessness is one of the series’s best traits, standing it in good stead in comparison to the glut of nearly indistinguishably Tolkienesque Western RPGs of the 1980s and 1990s.

From its ominous opening-credits sequence on, Final Fantasy VI strained for a gravitas that no previous JRPG had approached, and arguably succeeded in achieving it at least intermittently. It played out on a scale that had never been seen before; by the end of the game, more than a dozen separate characters had moved in and out of your party. Chris Kohler identifies the game’s main theme as “love in all its forms — romantic love, parental love, sibling love, and platonic love. Sakaguchi asks the player, what is love and where can we find it?”

Before that scene in Final Fantasy VII, Hironobu Sakaguchi served up a shocker of equal magnitude in Final Fantasy VI. Halfway through the game, the bad guys win despite your best efforts and the world effectively ends, leaving your party wandering through a post-apocalyptic World of Ruin like the characters in a Harlan Ellison story. The effect this had on some players’ emotions could verge on traumatizing — heady stuff for a videogame on a console still best known worldwide as the cuddly home of Super Mario. For many of its young players, Final Fantasy VI was their first close encounter on their own recognizance — i.e., outside of compulsory school assignments — with the sort of literature that attempts to move beyond tropes to truly, thoughtfully engage with the human condition.

It’s easy for an old, reasonably well-read guy like me to mock Final Fantasy VI‘s highfalutin aspirations, given that they’re stuffed into a game that still resolves at the granular level into bobble-headed figures fighting cartoon monsters. And it’s equally easy to scoff at the heavy-handed emotional manipulation that has always been part and parcel of the JRPG; subtle the sub-genre most definitely is not. Nonetheless, meaningful literature is where you find it, and the empathy it engenders can only be welcomed in a world in desperate need of it. Whatever else you can say about Final Fantasy and most of its JRPG cousins, the messages these games convey are generally noble ones, about friendship, loyalty, and the necessity of trying to do the right thing in hard situations, even when it isn’t easy to figure out what the right thing is. While these messages are accompanied by plenty of violence in the abstract, it is indeed abstracted — highly stylized and, what with the bifurcation between game and story that is so prevalent in the sub-genre, often oddly divorced from the games’ core themes.

Released in April of 1994, Final Fantasy VI sold 2.6 million copies in Japan. By this point the domestic popularity of the Final Fantasy franchise as a whole was rivaled only by that of Super Mario and Dragon Quest; two of the three biggest gaming franchises in Japan, that is to say, were dorakue games. In the Western world, however, the picture was quite different.

In the United States, the first-generation Nintendo Famicom was known as the Nintendo Entertainment System, the juggernaut of a console that rescued videogames in the eyes of the wider culture from the status of a short-lived fad to that of a long-lived entertainment staple, on par with movies in terms of economics if not cachet. Yet JRPGs weren’t a part of that initial success story. The first example of the breed didn’t even reach American shores until 1989. It was, appropriately enough, the original Dragon Quest, the game that had started it all in Japan; it was renamed Dragon Warrior for the American market, due to a conflict with an old American tabletop RPG by the name of Dragonquest whose trademarks had been acquired by the notoriously litigious TSR of Dungeons & Dragons fame. Enix did make some efforts to modernize the game, such as replacing the password-based saving system with a battery that let you save your state to the cartridge itself. (This same method had been adopted by Final Fantasy and most other post-Dragon Quest JRPGs on the Japanese market as well.) But American console gamers had no real frame of reference for Dragon Warrior, and even the marketing geniuses of Nintendo, which published the game itself in North America, struggled to provide them with one. With cartridges piling up in Stateside warehouses, they were reduced to giving away hundreds of thousands of copies of Dragon Warrior to the subscribers of Nintendo Power magazine. For some of these, the game came as a revelation seven years before Final Fantasy VII; for most, it was an inscrutable curiosity that was quickly tossed aside.

Final Fantasy I, on the other hand, received a more encouraging reception in the United States when it reached there in 1990: it sold 700,000 copies, 300,000 more than it had managed in Japan. Nevertheless, with the 8-bit Nintendo console reaching the end of its lifespan, Square didn’t bother to export the next two games in the series. It did export Final Fantasy IV for the Super Famicom — or rather the Super Nintendo Entertainment System, as it was known in the West. The results were disappointing in light of the previous game’s reception, so much so that Square didn’t export Final Fantasy V.[5] This habit of skipping over parts of the series led to a confusing state of affairs whereby the American Final Fantasy II was the Japanese Final Fantasy IV and the American Final Fantasy III was the Japanese Final Fantasy VI. The latter game shifted barely one-fourth as many copies in the three-times larger American marketplace as it had in Japan — not disastrous numbers, but still less than the first Final Fantasy had managed.

The heart of the problem was translation, in both the literal sense of the words on the screen and a broader cultural sense. Believing with some justification that the early American consoles from Atari and others had been undone by a glut of substandard product, Nintendo had long made a science out of the polishing of gameplay, demanding that every prospective release survive an unrelenting testing gauntlet before it was granted the “Nintendo Seal of Quality” and approved for sale. But the company had no experience or expertise in polishing text to a similar degree. In most cases, this didn’t matter; most Nintendo games contained very little text anyway. But RPGs were the exception. The increasingly intricate story lines which JRPGs were embracing by the early 1990s demanded good translations by native speakers. What many of them actually got was something very different, leaving even those American gamers who wanted to fall in love baffled by the Japanese-English-dictionary-derived word salads they saw before them. And then, too, many of the games’ cultural concerns and references were distinctly Japanese, such that even a perfect translation might have left Americans confused. It was, one might say, the Blade of Cuisinart problem in reverse.

To be sure, there were Americans who found all of the barriers to entry into these deeply foreign worlds to be more bracing than intimidating, who took on the challenge of meeting the games on their own terms, often emerging with a lifelong passion for all things Japanese. At this stage, though, they were the distinct minority. In Japan and the United States alike, the conventional wisdom through the mid-1990s was that JRPGs didn’t and couldn’t sell well overseas; this was regarded as a fact of life as fundamental as the vagaries of climate. (Thanks to this belief, none of the mainline Final Fantasy games to date had been released in Europe at all.) It would take Final Fantasy VII and a dramatic, controversial switch of platforms on the part of Square to change that. But once those things happened… look out. The JRPG would conquer the world yet.


Where to Get It: Remastered and newly translated versions of the Japanese Final Fantasy I, II, III, IV, V, and VI are available on Steam. The Dragon Quest series has been converted to iOS and Android apps, just a search away on the Apple and Google stores.





Sources: the books Pure Invention: How Japan Made the Modern World by Matt Alt, Power-Up: How Japanese Video Games Gave the World an Extra Life by Chris Kohler, Fight, Magic, Items: The History of Final Fantasy, Dragon Quest, and the Rise of Japanese RPGs in the West by Aidan Moher, and Atari to Zelda: Japan’s Videogames in Global Contexts by Mia Consalvo. GameFan of September 1997; Retro Gamer 69, 108, and 170; Computer Gaming World of September 1985 and December 1992.

Online sources include Polygon‘s authoritative “Final Fantasy 7: An Oral History”; “The Long Life of the Original Wizardry” by guest poster Alex on The CRPG Addict blog; “Wizardry: Japanese Franchise Outlook” by Sam Derboo at Hardcore Gaming 101, plus an interview with Robert Woodhead, conducted by Jared Petty at the same site; “Wizardry‘s Wild Ride from West to East” at VentureBeat; “The Secret History of AnimEigo” at that company’s homepage; Robert Woodhead’s slides from a presentation at the 2022 KansasFest Apple II convention; a post on tabletop Wizardry at the Japanese Tabletop RPG blog; and “Dragon Warrior: Aging Disgracefully” by Nick Simberg at (the now-defunct) DamnLag.

Footnotes
1 Adams was not an entirely disinterested observer. He was already working with Robert Woodhead on Wizardry IV, and had in fact accompanied him to Japan in this capacity.
2 A man with an international perspective if ever there was one, Rogers would later go on to fame and fortune as the man who brought Tetris out of the Soviet Union.
3 It would be briefly revived for one final game, the appropriately named Wizardry 8, in 2001.
4 In another unexpected link between East and West, one of his most important assistants became Nasir Gebelli, an Iranian who had fled his country’s revolution for the United States in 1979 and become a game-programming rock star on the Apple II. After the heyday of the lone-wolf bedroom auteur began to fade there, Doug Carlston, the head of Brøderbund, brokered a job for him with his friends in Japan. There he maximized the Famicom’s potential in the same way he had that of the Apple II, despite not speaking a word of Japanese when he arrived. (“We’d go to a restaurant and no matter what he’d order — spaghetti or eggs — they’d always bring out steak,” Sakaguchi laughs.) Gebelli would program the first three Final Fantasy games almost all by himself.
5 Square did release a few spinoff games under the Final Fantasy label in the United States and Europe as another way of testing the Western market: Final Fantasy Legend and Final Fantasy Adventure for the Nintendo Game Boy handheld console, and Final Fantasy: Mystic Quest for the Super Nintendo. Although none of them were huge sellers, the Game Boy titles in particular have their fans even today.
 

Posted by on November 17, 2023 in Digital Antiquaria, Interactive Fiction

 


The Ratings Game, Part 4: E3 and Beyond

In 1994, the Consumer Electronics Show was a seemingly inviolate tradition among the makers of videogame consoles and game-playing personal computers. Name a landmark product, and chances were there was a CES story connected with it. The Atari VCS had been shown for the first time there in 1977; the Commodore 64 in 1982; the Amiga in 1984; the Nintendo Entertainment System in 1985; Tetris in 1988; the Sega Genesis in 1989; John Madden Football in 1990; the Super NES in 1991, just to name a few. In short, CES was the first and most important place for the videogame industry to show off its latest and greatest to an eager public.

For all that, though, few inside the industry had much good to say about the experience of actually exhibiting at the show. Instead of the plum positions these folks thought they deserved, their cutting-edge, transformative products were crammed into odd corners of the exhibit hall, surrounded by the likes of soft-porn and workout videos. The videogame industry’s patently second-class status may have been understandable once upon a time, when it was a tiny upstart on the media landscape with a decidedly uncertain future. But now, with it approaching the magic mark of $5 billion in annual revenues in the United States alone, its relegation at the hands of CES’s organizers struck its executives as profoundly unjust. Wherever and whenever they got together, they always seemed to wind up kibitzing about the hidebound CES people, who still lived in a world where toasters, refrigerators, and microwave ovens were the apex of technological excitement in the home.

They complained at length as well to Gary Shapiro, the man in charge of CES, but his cure proved worse than the disease. He and the other organizers of the 1993 Summer CES, which took place as usual in June in Chicago, promised to create some special “interactive showcase pavilions” for the industry. When the exhibitors arrived, they saw that their “pavilions” were more accurately described as tents, pitched in the middle of an unused parking lot. Pat Ferrell, then the editor-in-chief of GamePro magazine, recalls that “they put some porta-potties out there and a little snack stand where you could pick up a cookie. Everybody was like, ‘This is bullshit. This is like Afghanistan.'” It rained throughout the show, and the tents leaked badly, ruining several companies’ exhibits. Tom Kalinske, then the CEO of Sega of America, remembers that he “turned to my team and said, ‘That’s it. We’re never coming back here again.'”

Kalinske wasn’t true to his word; Sega was at both the Winter and Summer CES of the following year. But he was now ready and willing to listen to alternative proposals, especially after his and most other videogame companies found themselves in the basement of Chicago’s McCormick Place in lieu of the parking lot in June of 1994.

In his office at GamePro, Pat Ferrell pondered an audacious plan for getting his industry out of the basement. Why not start a trade show all his own? It was a crazy idea on the face of it — what did a magazine editor know about running a trade show? — but Ferrell could be a force of nature when he put his mind to something. He made a hard pitch to the people who ran his magazine’s parent organization: the International Data Group (IDG), who also published a wide range of other gaming and computing magazines, and already ran the Apple Macintosh’s biggest trade show under the banner of their magazine Macworld. They were interested, but skeptical whether he could really convince an entire industry to abandon its one proven if painfully imperfect showcase for such an unproven one as this. It was then that Ferrell started thinking about the brand new Interactive Digital Software Association. The latter would need money to fund the ratings program that was its first priority, then would need more money to do who knew what else in the future. Why not fund the IDSA with a trade show that would be a vast improvement over the industry’s sorry lot at CES?

Ferrell’s negotiations with the IDSA’s members were long and fraught, not least because CES seemed finally to be taking the videogame industry’s longstanding litany of complaints a bit more seriously. In response to a worrisome decline in attendance in recent editions of the summer show, Shapiro had decided to revamp his approach dramatically for 1995. After the usual January CES, there would follow not one but four smaller shows, each devoted to a specific segment of the larger world of consumer electronics. The videogame and consumer-computing industries were to get one of these, to take place in May in Philadelphia. So, the IDSA’s members stood at a fork in the road. Should they give CES one more chance, or should they embrace Ferrell’s upstart show?

The battle lines over the issue inside the IDSA were drawn, as usual, between Sega and Nintendo. Thoroughly fed up as he was with CES, Tom Kalinske climbed aboard the alternative train as soon as it showed up at the station. But Howard Lincoln of Nintendo, very much his industry’s establishment man, wanted to stick with the tried and true. To abandon a known commodity like CES, which over 120,000 journalists, early adopters, and taste-makers were guaranteed to visit, in favor of an untested concept like this one, led by a man without any of the relevant organizational experience, struck him as the height of insanity. Yes, IDG was willing to give the IDSA a five-percent stake in the venture, which was five percent more than it got from CES, but what good was five percent of a failure?

In the end, the majority of the IDSA decided to place their faith in Ferrell in spite of such reasonable objections as these — such was the degree of frustration with CES. Nintendo, however, remained obstinately opposed to the new show. Everyone else could do as they liked; Nintendo would continue going to CES, said Lincoln.

And so Pat Ferrell, a man with a personality like a battering ram, decided to escalate. He didn’t want any sort of split decision; he was determined to win. He scheduled his own show — to be called the Electronic Entertainment Expo, or E3 — on the exact same days in May for which Shapiro had scheduled his. Bet hedging would thus be out of the question; everyone would have to choose one show or the other. And E3 would have a critical advantage over its rival: it would be held in Los Angeles rather than Philadelphia, making it a much easier trip not only for those in Silicon Valley but also for those Japanese hardware and software makers thinking of attending with their latest products. He was trying to lure one Japanese company in particular: Sony, who were known to be on the verge of releasing their first ever videogame console, a cutting-edge 32-bit machine which would use CDs rather than cartridges as its standard storage medium.

Sony finally cast their lot with E3, and that sealed the deal. Pat Ferrell:

My assistant Diana comes into my office and she goes, “Gary Shapiro’s on the phone.” I go, “Really?” So she transfers it. Gary says, “Pat, how are you?” I say, “I’m good.” He says, ‘”You win.” And he hangs up.

E3 would go forward without any competition in its time slot.

Howard Lincoln, a man accustomed to dictating terms rather than begging favors, was forced to come to Ferrell and ask what spots on the show floor were still available. He was informed that Nintendo would have to content themselves with the undesirable West Hall of the Los Angeles Convention Center, a space that hadn’t been remodeled in twenty years, instead of the chic new South Hall where Sony and Sega’s big booths would have pride of place. Needless to say, Ferrell enjoyed every second of that conversation.



The dawn of a new era: E3 1995.

Michael Jackson, who was under contract to Sony’s music division, made an appearance at the first E3 to lend his star power to the PlayStation.

Jack and Sam Tramiel of Atari were also present. Coming twelve years after the Commodore 64’s commercial breakout, this would be one of Jack’s last appearances in the role of a technology executive. Suffice to say that it had been a long, often rocky road since then.

Lincoln’s doubts about Ferrell’s organizational acumen proved misplaced; the first E3 went off almost without a hitch from May 11 through 13, 1995. There were no fewer than 350 exhibitors — large, small, and in between — along with almost 38,000 attendees. There was even a jolt of star power: Michael Jackson could be seen in Sony’s booth one day. The show will be forever remembered for the three keynote addresses that opened proceedings — more specifically, for the two utterly unexpected bombshell announcements that came out of them.

First up on that first morning was Tom Kalinske, looking positively ebullient as he basked in the glow of having forced Howard Lincoln and Nintendo to bend to his will. Indeed, one could make the argument that Sega was now the greatest single power in videogames, with a market share slightly larger than Nintendo’s and the network of steadfast friends and partners that Nintendo’s my-way-or-the-highway approach had prevented them from acquiring to the same degree. Settling comfortably into the role of industry patriarch, Kalinske began by crowing about the E3 show itself:

E3 is a symbol of the changes our industry is experiencing. Here is this great big show solely for interactive entertainment. CES was never really designed for us. It related better to an older culture. It forced some of the most creative media companies on earth to, at least figuratively, put on gray flannel suits and fit into a TV-buying, furniture-selling mold. Interactive entertainment has become far more than just an annex to the bigger electronics business. Frankly, I don’t miss the endless rows of car stereos and cellular phones.

We’ve broken out to become a whole new category — a whole new culture, for that matter. This business resists hard and fast rules; it defies conventional wisdom.

After talking at length about the pace at which the industry was growing (the threshold of $5 billion in annual sales would be passed that year, marking a quintupling in size since 1987) and the demographic changes it was experiencing (only 51 percent of Sega’s sales had been to people under the age of eighteen in 1994, compared with 62 percent in 1992), he dropped his bombshell at the very end of this speech: Sega would be releasing their own new 32-bit, CD-based console, the Saturn, right now instead of on the previously planned date of September 2. In fact, the very first units were being set out on the shelves of four key retailers — Toys “R” Us, Babbage’s, Electronics Boutique, and Software Etc. — as he spoke, at a price of $399.

Halfhearted claps and cheers swept the assembly, but the dominant reaction was a palpable consternation. Many of the people in the audience were working on Saturn games, yet had been told nothing of the revised timetable. Questions abounded. Why the sudden change? And what games did Sega have to sell alongside the console? The befuddlement would soon harden into anger in many cases, as studios and publishers came to feel that they’d been cheated of the rare opportunity that is the launch of a new console, when buyers are excited and have already opened their wallets wide, and are much less averse than usual to opening them a little wider and throwing a few extra games into their bag along with their shiny new hardware. Just like that, the intra-industry goodwill which Sega had methodically built over the course of years evaporated like air out of a leaky tire. Kalinske would later claim that he knew the accelerated timetable was a bad idea, but was forced into it by Sega’s Japanese management, who were desperate to steal the thunder of the Sony PlayStation.

Speaking of which: next up was Sony, the new rider in this particular rodeo, whose very presence on this showcase stage angered such other would-be big wheels in the console space as Atari, 3DO, and Philips, none of whom were given a similar opportunity to speak. Sony’s keynote was delivered by Olaf Olafsson, a man of extraordinary accomplishment by any standard, one of those rare Renaissance Men who still manage to slip through the cracks of our present Age of the Specialist: in addition to being the head of Sony’s new North American console operation, he was a trained physicist and a prize-winning author of literary novels and stories. His slick presentation emphasized Sony’s long history of innovation in consumer electronics, celebrating such highlights as the Walkman portable cassette player and the musical compact disc, whilst praising the industry his company was now about to join with equal enthusiasm: “We are not in a tent. Instead we are indoors at our own trade show. Industry momentum, accelerating to the tune of $5 billion in annual sales, has moved us from the CES parking lot.”

Finally, Olafsson invited Steve Race, the president of Sony Computer Entertainment of America, to deliver a “brief presentation” on the pricing of the new console. Race stepped onstage and said one number: “$299.” Then he dropped the microphone and walked offstage again, as the hall broke out in heartfelt, spontaneous applause. A $299 PlayStation — i.e., a PlayStation $100 cheaper than the Sega Saturn, its most obvious competitor — would transform the industry overnight, and everyone present seemed to realize this.

The Atari VCS had been the console of the 1970s, the Nintendo Entertainment System the console of the 1980s. Now, the Sony PlayStation would become the console of the 1990s.

“For a company that is so new to the industry, I would have hoped that Sony would have made more mistakes by now,” sighed Trip Hawkins, the founder of the luckless 3DO, shortly after Olafsson’s dramatic keynote. Sam Tramiel, president of the beleaguered Atari, took a more belligerent stance, threatening to complain to the Federal Trade Commission about Sony’s “dumping” on the American market. (It would indeed later emerge that Sony sold the PlayStation essentially at cost, relying on game-licensing royalties for their profits. The question of whether doing so was actually illegal, however, was another matter entirely.)

Nintendo was unfortunate enough to have to follow the excitement of Sony’s presentation. And Howard Lincoln’s keynote certainly wouldn’t have done them any favors under any circumstances: it had a downbeat, sour-grapes vibe about it, which stood out all the more in contrast to what had just transpired. Lincoln had no big news to impart, and spent the vast majority of his time on an interminable, hectoring lecture about the scourge of game counterfeiting and piracy. His tone verged on the paranoid: there should be “no safe haven for pirates, whether they board ships as in the old days or manufacture fake products in violation of somebody else’s copyrights”; “every user is a potential illegal distributor”; “information wants to be free [is an] absurd rationalization.” He was like the parent who breaks up the party just as it’s really getting started — or, for that matter, like the corporate lawyer he was by training. The audience yawned and clapped politely and waited for him to go away.

The next five years of videogame-console history would be defined by the events of this one morning. The Sega Saturn was a perfectly fine little machine, but it would never recover from its botched launch. Potential buyers were as confused as developers by its premature arrival, those retailers who weren’t among the initial four chosen ones were deeply angered, and the initial library of games was as paltry as everyone had feared — and then there was the specter of the $299 PlayStation close on the horizon for any retail-chain purchasing agent or consumer who happened to be on the fence. That Christmas season, the PlayStation was launched with a slate of games and an advertising campaign that were masterfully crafted to nudge the average age of the videogame demographic that much further upward, by drawing heavily from the youth cultures of rave and electronica, complete with not-so-subtle allusions to their associated drug cultures. The campaign said that, if Nintendo was for children and Sega for adolescents, the PlayStation was the console for those in their late teens and well into their twenties. Keith Stuart, a technology columnist for the Guardian, has written eloquently of how Sony “saw a future of post-pub gaming sessions, saw a new audience of young professionals with disposable incomes, using their formative working careers as an extended adolescence.” It was an uncannily prescient vision.

Sony’s advertising campaign for the PlayStation leaned heavily into heroin chic. No one had ever attempted to sell videogames in this way before.

The PlayStation outsold the Saturn by more than ten to one. Thus did Sony eclipse Sega at the top of the videogame heap; they would remain there virtually unchallenged until the launch of Microsoft’s Xbox in 2001. By that time, Sega was out of the console-hardware business entirely, following a truly dizzying fall from grace. Meanwhile Nintendo just kept trucking along in their own little world, much as Howard Lincoln had done at that first E3, subsisting on Mario and Donkey Kong and their lingering family-friendly reputation. When their own next-generation console, the Nintendo 64, finally appeared in 1996, the PlayStation outsold it by a margin of three to one.

In addition to its commercial and demographic implications, the PlayStation wrought a wholesale transformation in the very nature of console-based videogames themselves. It had been designed from the start for immersive 3D presentations, rather than the 2D, sprite-based experiences that held sway on the 8- and 16-bit console generations. When paired with its commercial success, the sheer technical leap the PlayStation represented over what had come before made it easily the most important console since the NES. In an alternate universe, one might have made the same argument for the Sega Saturn or even the Nintendo 64, both of which had many of the same capabilities — but it was the PlayStation that sold to the tune of more than 100 million units worldwide over its lifetime, and that thus gets the credit for remaking console gaming in its image.

The industry never gave CES another glance after the success of that first E3 show; even those computer-game publishers who were partisans of the Software Publishers Association and the Recreational Software Advisory Council rather than the IDSA and ESRB quickly made the switch to a venue where they could be the main attraction rather than a sideshow. Combined with the 3D capabilities of the latest consoles, which allowed them to run games that would previously have been possible only on computers, this change in trade-show venues marked the beginning of a slow convergence of computer games and console-based videogames. By the end of the decade, more titles than ever before would be available in versions for both computers and consoles. Thus computer gamers learned anew how much fun simple action-oriented games could be, even as console gamers developed a taste for the more extended, complex, and/or story-rich games that had previously been the exclusive domain of the personal computers. Today, even most hardcore gamers make little to no distinction between the terms “computer game” and “videogame.”

It would be an exaggeration to claim that all of these disparate events stemmed from one senator’s first glimpse of Mortal Kombat in late 1993. And yet at least the trade show whose first edition set the ball rolling really does owe its existence to that event by a direct chain of happenstance; it’s very hard to imagine an E3 without an IDSA, and hard to imagine an IDSA at this point in history without government pressure to come together and negotiate a universal content-rating system. Within a few years of the first E3, IDG sold the show in its entirety to the IDSA, which has run it ever since. It has continued to grow in size and glitz and noise with every passing year, remaining always the place where deals are made and directions are plotted, and where eager gamers look for a glimpse of some of their possible futures. How strange to think that the E3’s stepfather is Joseph Lieberman. I wonder if he’s aware of his accomplishment. I suspect not.


Postscript: Violence and Videogames



Democracy is often banal to witness up close, but it has an odd way of working things out in the end. It strikes me that this is very much the case with the public debate begun by Senator Lieberman. As allergic as I am to all of the smarmy “Think of the children!” rhetoric that was deployed on December 9, 1993, and as deeply as I disagree with many of the political positions espoused by Senator Lieberman in the decades since, it was high time for a rating system — not in order to censor games, but to inform parents and, indeed, all of us about what sort of content each of them contained. The industry was fortunate that a handful of executives were wise enough to recognize and respond to that need. The IDSA and ESRB have done their work well. Everyone involved with them can feel justifiably proud.

There was a time when I imagined leaving things at that; it wasn’t necessary, I thought, to come to a final conclusion about the precise nature of the real-world effects of violence in games in order to believe that parents needed and deserved a tool to help them make their own decisions about what games were appropriate for their children. I explained this to my wife when I first told her that I was planning to write this series of articles — explained to her that I was more interested in recording the history of the 1993 controversy and its enormous repercussions than in taking a firm stance on the merits of the arguments advanced so stridently by the “expert panel” at that landmark first Senate hearing. But she told me in no uncertain terms that I would be leaving the elephant in the room unaddressed, leaving Chekhov’s gun unfired on the mantel… pick your metaphor. She eventually brought me around to her point of view, as she usually does, and we agreed to dive into the social-science literature on the subject.

Having left academia behind more than ten years ago, I’d forgotten how bitterly personal its feuds could be. Now, I was reminded: we found that the psychological community is, if anything, even more riven with dissension on this issue than our larger culture. The establishment position in psychology is that games and other forms of violent media do have a significant effect on children’s and adolescents’ levels of aggressive behavior. (For better or for worse, virtually all of the extant studies focus exclusively on young people.) The contrary position, of course, is that they do not. A best case for the anti-establishmentarians would have them outnumbered three to one by their more orthodox peers. A United States Supreme Court case from 2011 provides a handy hook for summarizing the opposing points of view.

The case in question actually goes back to 2005, when a sweeping law was enacted in California which made it a crime to sell games that were “offensive to the community” or that depicted violence of an “especially heinous, cruel, or depraved” stripe to anyone under the age of eighteen. The law required that manufacturers and sellers label games that fit these rather subjective criteria with a large sticker that showed “18” in numerals at least two inches square. In an irony that plenty of people noted at the time, the governor who signed the bill into law was Arnold Schwarzenegger, who was famous for starring in a long string of ultra-violent action movies.

The passage of the law touched off an extended legal battle which finally reached the Supreme Court six years later. Opponents of the law charged that it was an unconstitutional violation of the right to free speech, and that it was particularly pernicious in light of the way it targeted one specific form of media, whilst leaving, for example, the sorts of films to which Governor Schwarzenegger owed his celebrity unperturbed. Supporters of the law countered that the interactive nature of videogames made them fundamentally different — fundamentally more dangerous — than those older forms of media, and that they should be treated as a public-health hazard akin to cigarettes rather than like movies or books. The briefs submitted by social scientists on both sides provide an excellent prism through which to view the ongoing academic debate on videogame violence.

In a brief submitted in support of the law, the California Chapter of the American Academy of Pediatrics and the California Psychological Association stated without hesitation that “scientific studies confirm that violent video games have harmful effects [on] minors”:

Viewing violence increases aggression and greater exposure to media violence is strongly linked to increases in aggression.

Playing a lot of violent games is unlikely to turn a normal youth with zero, one, or even two other risk factors into a killer. But regardless of how many other risk factors are present in a youth’s life, playing a lot of violent games is likely to increase the frequency and the seriousness of his or her physical aggression, both in the short term and over time as the youth grows up. These long-term effects are a consequence of powerful observational learning and desensitization processes that neuroscientists and psychologists now understand to occur automatically in the human child. Simply stated, “adolescents who expose themselves to greater amounts of video game violence were more hostile, reported getting into arguments with teachers more frequently, were more likely to be involved in physical fights, and performed more poorly in school.”

In a recent book, researchers once again concluded that the “active participation” in all aspects of violence: decision-making and carrying out the violent act [sic], result in a greater effect from violent video games than a violent movie. Unlike a passive observer in movie watching, in first-person shooter and third-person shooter games, you’re the one who decides whether to pull the trigger or not and whether to kill or not. After conducting three very different kinds of studies (experimental, a cross-sectional correlational study, and a longitudinal study) the results confirmed that violent games contribute to violent behavior.

The relationship between media violence and real-life aggression is nearly as strong as the impact of cigarette smoking and lung cancer: not everyone who smokes will get lung cancer, and not everyone who views media violence will become aggressive themselves. However, the connection is significant.

One could imagine these very same paragraphs being submitted in support of Senator Lieberman’s videogame-labeling bill of 1994; the rhetoric of the videogame skeptics hasn’t changed all that much since then. But, as I noted earlier, there has emerged a coterie of other, usually younger researchers who are less eager to assert such causal linkages as proven scientific realities.

Thus another, looser amalgamation of “social scientists, medical scientists, and media-effects scholars” countered in their own court brief that the data didn’t support such sweeping conclusions, and in fact pointed in the opposite direction in many cases. They unspooled a long litany of methodological problems, researcher biases, and instances of selective data-gathering which, they claimed, their colleagues on the other side of the issue had run afoul of, and cited studies of their own that failed to prove or even disproved the linkage the establishmentarians believed was so undeniable.

In a recent meta-analytic study, Dr. John Sherry concluded that while there are researchers in the field who “are committed to the notion of powerful effects,” they have been unable to prove such effects; that studies exist that seem to support a relationship between violent video games and aggression but other studies show no such relationship; and that research in this area has employed varying methodologies, thus “obscuring clear conclusions.” Although Dr. Sherry “expected to find fairly clear, compelling, and powerful effects,” based on assumptions he had formed regarding video game violence, he did not find them. Instead, he found only a small relationship between playing violent video games and short-term arousal or aggression, and further found that this effect lessened the longer one spent playing video games.

Such small and inconclusive results prompted Dr. Sherry to ask: “[W]hy do some researchers continue to argue that video games are dangerous despite evidence to the contrary?” Dr. Sherry further noted that if violent video games posed such a threat, then the increased popularity of the games would lead to an increase in violent crime. But that has not happened. Quite the opposite: during the same period that video game sales, including sales of violent video games, have risen, youth violence has dramatically declined.

“The causation research can be done, and, indeed, has been done,” the brief concludes, and “leaves no empirical foundation for the assertion that playing violent video games causes harm to minors.”

The Supreme Court ruled against California, striking down the law as a violation of the First Amendment by a vote of seven to two. I’m more interested today, however, in figuring out what to make of these two wildly opposing views of the issue, both from credentialed professionals.

Before going any further, I want to emphasize that I came to this debate with what I honestly believe to have been an open mind. Although I enjoy many types of games, I have little personal interest in the most violent ones. It didn’t — and still doesn’t — strike me as entirely unreasonable to speculate that a steady diet of ultra-violent games could have some negative effects on some impressionable young minds. If I had children, I would — still would — prefer that they play games that don’t involve running around as an embodied person killing other people in the most visceral manner possible. But the indisputable scientific evidence that might give me a valid argument for imposing my preferences on others under any circumstances whatsoever just isn’t there, despite decades of earnest attempts to collect it.

The establishmentarians’ studies are shot through with biases that I will assume do not distort the data itself, but that can distort interpretations of that data. Another problem, one which I didn’t fully appreciate until I began to read some of the studies, is the sheer difficulty of conducting scientific experiments of this sort in the real world. The subjects of these studies are not mice in a laboratory whose every condition and influence can be controlled, but everyday young people living their lives in a supremely chaotic environment, being bombarded with all sorts of mediated and non-mediated influences every day. How can one possibly filter out all of that noise? The answer is, imperfectly at best. Bear with me while I cite just one example of (what I find to be) a flawed study. (For those who are interested in exploring further, a complete list of the studies which my wife and I examined can be found at the bottom of this article. The Supreme Court briefs from 2011 are also full of references to studies with findings on both sides of the issue.)

In 2012, Ontario’s Brock University published a “Longitudinal Study of the Association Between Violent Video Game Play and Aggression Among Adolescents.” It followed 1492 students, chosen as a demographic reflection of Canadian society as a whole, through their high-school years — i.e., from age fourteen or fifteen to age seventeen or eighteen. They filled out a total of four annual questionnaires over that period, which asked them about their social, familial, and academic circumstances, asked how likely they felt they were to become violent in various hypothetical real-world situations, and asked about their videogame habits: i.e., what games they liked to play and how much time they spent playing them each day. For purposes of the study, “action fighting” games like God of War and our old friend Mortal Kombat were considered violent, but strategy games with “some violent aspects” like Rainbow Six and Civilization were not; ditto sports games. The study’s concluding summary describes a “small” correlation between aggression and the playing of violent videogames: a Pearson correlation coefficient “in the .20 range.” (On this scale, a perfect, one-to-one positive or negative correlation would be 1.0 or -1.0 respectively.) Surprisingly, it also describes a “trivial” correlation between aggression and the playing of nonviolent videogames: “mostly less than .10.”

On the surface, it seems a carefully worked-out study which reaches an appropriately cautious conclusion. When we dig in a bit further, however, we can see a few significant methodological problems. The first is its reliance on subjective, self-reported questionnaire answers, which are as dubious here as they were under the RSAC rating system. And the second is the researchers’ subjective assignment of games to the categories of violent and non-violent. Nowhere is it described which specific games were put where, beyond the few examples I cited in my last paragraph. This opacity about exactly what games we’re really talking about badly confuses the issue, especially given that strange finding of a correlation between aggression and “nonviolent” games. Finally, that oft-forgotten truth that correlation is not causation must be considered. When we add third variables to the mix — a statistical method of filtering out non-causative correlations from a data set — the Pearson coefficient between aggressive behavior and violent games drops to around 0.06 — i.e., well below what the researchers themselves describe as “trivial” — and that for nonviolent games drops below the threshold of statistical noise; in fact, the coefficient for violent games is just one one-hundredth above that same threshold. For reasons which are perhaps depressingly obvious, the researchers chose to base their conclusions around their findings without third variables in the mix — just one of several signs of motivated reasoning to be found in their text.
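The “third variables” adjustment described above is essentially a partial correlation: regress the confounding variable out of both measures, then correlate what remains. A minimal sketch in Python illustrates the mechanism with entirely synthetic data — the confound, the effect sizes, and the sample are my own illustrative assumptions, not the study’s actual measurements — showing how a raw Pearson coefficient of roughly .20 can collapse toward zero once a shared confound is controlled for:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1492  # same sample size as the Brock study, purely for flavor

# Hypothetical scenario: a single shared "third variable" (say, peer
# delinquency) drives both violent-game play and aggression, creating a
# correlation between them with no direct causal link at all.
third = rng.normal(size=n)
violent_play = 0.5 * third + rng.normal(size=n)
aggression = 0.5 * third + rng.normal(size=n)

def pearson(x, y):
    """Pearson correlation coefficient between two 1-D samples."""
    x = x - x.mean()
    y = y - y.mean()
    return (x @ y) / np.sqrt((x @ x) * (y @ y))

def residualize(y, control):
    """Remove the least-squares fit on the control variable from y."""
    X = np.column_stack([np.ones_like(control), control])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

# Raw correlation: looks like a modest positive relationship.
r_raw = pearson(violent_play, aggression)

# Partial correlation: correlate the residuals after regressing the
# third variable out of both measures.
r_partial = pearson(residualize(violent_play, third),
                    residualize(aggression, third))

print(f"raw r     = {r_raw:.2f}")      # around .20 in this construction
print(f"partial r = {r_partial:.2f}")  # near zero once the confound is removed
```

This is only a demonstration of the statistical machinery, not a claim about what actually drives the Brock data; the point is simply that a raw coefficient “in the .20 range” is compatible with a true direct effect of approximately nothing.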

In the interest of not belaboring the point, I’ll just say that other studies I looked at had similar issues. Take, for example, a study of 295 students in Germany with an average age of thirteen and a half years, who were each given several scenarios that could lead to an aggressive response in the real world and asked how they would react, whilst also being asked which of a list of 40 videogames they played and how often they did so. Two and a half years later, they were surveyed again — but by now the researchers’ list of popular videogames was hopelessly out of date. They were thus forced to revamp the questionnaire with a list of game categories, and to retrofit the old, more specific list of titles to match the new approach. I’m sympathetic to their difficulties; again, conducting experiments like this amidst the chaos of the real world is hard. Nevertheless, I can’t help but ask how worthwhile an experiment which was changed so much on the fly can really be. And yet for all the researchers’ contortions, it too reveals only the most milquetoast of correlations between real-world aggression and videogame violence.

Trying to make sense of the extant social-science literature on this subject can often feel like wandering a hall of mirrors. “Meta-analyses” — i.e., analyses that attempt to draw universal findings from aggregations of earlier studies — are everywhere, with conclusions seemingly dictated more by the studies that the authors have chosen to analyze and the emphasis they have chosen to place on different aspects of them than by empirical truth. Even more dismaying are the analyses that piggyback on an earlier study’s raw data set, from which they almost invariably manage to extract exactly the opposite conclusions. This business of studying the effects of videogame violence begins to feel like an elaborate game in itself, one that revolves around manipulating numbers in just the right way, one that is entirely divorced from the reality behind those numbers. The deeper I fell into the rabbit hole, the more one phrase kept ringing in my head: “Garbage In, Garbage Out.”

Both sides of the debate are prone to specious reasoning. Still, the burden of proof ultimately rests with those making the affirmative case: those who assert that violent videogames lead their players to commit real-world violence. In my judgment, they have failed to make that case in any sort of thoroughgoing, comprehensive way. Even after all these years and all these studies, the jury is still out. This may be because the assertion they are attempting to prove is incorrect, or it may just be because this sort of social science is so darn hard to do. Either way, the drawing of parallels between violent videogames and an indubitably proven public hazard like cigarettes is absurd.

Some of the more grounded studies do tell us that, if we want to find places where videogames can be genuinely harmful to individuals and by extension to society, we shouldn’t be looking at their degree of violence so much as the rote but addictive feedback loops so many of them engender. Videogame addiction, in other words, is probably a far bigger problem for society than videogame violence. So, as someone who has been playing digital games for almost 40 years, I’ll conclude by offering the following heartfelt if unsolicited advice to all other gamers, young and old:

Play the types of games you enjoy, whether they happen to be violent or nonviolent, but not for more than a couple of hours per day on average, and never at the expense of a real-world existence that can be so much richer and more rewarding than any virtual one. Make sure to leave plenty of space in your life as well for other forms of creative expression which can capture those aspects of the human experience that games tend to overlook. And make sure the games you play are ones which respect your time and are made for the right reasons — the ones which leave you feeling empowered and energized instead of enslaved and drained. Lastly, remember always the wise words of Dani Bunten Berry: “No one on their death bed ever said, ‘I wish I had spent more time alone with my computer!’” Surely that statement at least remains as true of Pac-Man as it is of Night Trap.

(Sources: the book The Ultimate History of Video Games by Steven L. Kent; Edge of August 1995; GameFan of July 1995; GamePro of June 1995 and August 1995; Next Generation of July 1995; Video Games of July 1995; Game Developer of August/September 1995. Online sources include Blake J. Harris’s “Oral History of the ESRB” at VentureBeat, “How E3 1995 Changed Gaming Forever” at Syfy Games, “The Story of the First E3” at Polygon, Game Zero’s original online coverage of the first E3, “Sega Saturn: How One Decision Destroyed PlayStation’s Greatest Rival” by Keith Stuart at The Guardian, an interview with Olaf Olafsson at The Nervous Breakdown, and a collection of vintage CES photos at The Verge. Last but certainly not least, Anthony P’s self-shot video of the first E3, including the three keynotes, is a truly precious historical document.

I looked at the following studies of violence and gaming: “Differences in Associations Between Problematic Video-Gaming, Video-Gaming Duration, and Weapon-Related and Physically Violent Behaviors in Adolescents” by Zu Wei Zhai et al.; “Aggressive Video Games are Not a Risk Factor for Future Aggression in Youth: A Longitudinal Study” by Christopher J. Ferguson and C. K. John Wang; “Growing Up with Grand Theft Auto: A 10-Year Study of Longitudinal Growth of Violent Video Game Play in Adolescents” by Sarah M. Coyne and Laura Stockdale; “Aggressive Video Games are Not a Risk Factor for Mental Health Problems in Youth: A Longitudinal Study” by Christopher J. Ferguson and C. K. John Wang; “A Preregistered Longitudinal Analysis of Aggressive Video Games and Aggressive Behavior in Chinese Youth” by Christopher J. Ferguson; “Social and Behavioral Health Factors Associated with Violent and Mature Gaming in Early Adolescence” by Linda Charmaraman et al.; “Reexamining the Findings of the American Psychological Association’s 2015 Task Force on Violent Media: A Meta-Analysis” by Christopher J. Ferguson et al.; “Do Longitudinal Studies Support Long-Term Relationships between Aggressive Game Play and Youth Aggressive Behavior? A Meta-analytic Examination” by Aaron Drummond et al.; “Technical Report on the Review of the Violent Video Game Literature” by the American Psychological Association; “Exposure to Violent Video Games and Aggression in German Adolescents: A Longitudinal Study” by Ingrid Möller and Barbara Krahé; “Metaanalysis of the Relationship between Violent Video Game Play and Physical Aggression over Time” by Anna T. Prescott et al.; “The Effects of Reward and Punishment in Violent Video Games on Aggressive Affect, Cognition, and Behavior” by Nicholas L. Carnagey and Craig A. Anderson; “Violent Video Game Effects on Aggression, Empathy, and Prosocial Behavior in Eastern and Western Countries: A Meta-Analytic Review” by Craig A. Anderson et al.; “Violent Video Games: The Effects of Narrative Context and Reward Structure on In-Game and Postgame Aggression” by James D. Sauer et al.; “Internet and Video Game Addictions: Diagnosis, Epidemiology, and Neurobiology” by Clifford J. Sussman et al.; “A Longitudinal Study of the Association Between Violent Video Game Play and Aggression Among Adolescents” by Teena Willoughby et al. My wife dug these up for me with the help of her university hospital’s research librarian here in Denmark, but some — perhaps most? — can be accessed for free via organs like PubMed. I would be interested in the findings of any readers who care to delve into this literature — particularly any of you who possess the social-science training I lack.)

 


The Ratings Game, Part 3: Dueling Standards

When Sega, Nintendo, and the Software Publishers Association (SPA) announced just before the Senate hearing of December 9, 1993, that they had agreed in principle to create a standardized rating system for videogames, the timing alone marked it as an obvious ploy to deflect some of the heat that was bound to come their way later that day. At the same time, though, it was also more than a ploy: it was in fact the culmination of an effort that had been underway in some quarters of the industry for months already, one which had begun well before the good Senators Lieberman and Kohl discovered the horrors of videogame violence and sex. As Bill White of Sega was at pains to point out throughout the hearing, Sega had been seriously engaged with the question of a rating system for quite some time, and had managed to secure promises of support from a considerable portion of the industry. But the one entity that had absolutely rejected the notion was the very one whose buy-in was most essential for any overarching initiative of this sort: Nintendo. “Howard [Lincoln] was not going to be part of any group created by Sega,” laughs Dr. Arthur Pober, one of the experts whom Sega consulted.

So, Sega decided to go it alone. Again as described by Bill White at the hearing, they rolled out a thoroughly worked-out rating system for any and all games on their platforms just in time for Mortal Kombat in September of 1993. It divided games into three categories: GA for general audiences, MA-13 for those age thirteen or older, and MA-17 for those age seventeen or older. An independent board of experts was drafted to assign each new game its rating without interference from Sega’s corporate headquarters; its chairman was the aforementioned Arthur Pober, a distinguished educational psychologist with decades of research experience about the role of media in children’s lives on his CV. Under his stewardship, Mortal Kombat wound up with an MA-13 rating; Night Trap, which had already been in stores for the better part of a year by that point, was retroactively assigned a rating of MA-17.

Although one might certainly quibble that these ratings reflected the American media establishment’s terror of sex and relatively blasé attitude toward violence, Sega’s rating system bore all the outward signs of being a good-faith exercise. At the very least it was, as White repeatedly stated at the hearing, a good first step, one that was taken before any of the real controversy even began.

The second step was of course Nintendo’s grudging acquiescence to the concept of a universal rating system on the day of the hearing — a capitulation whose significance should not be underestimated in light of the company’s usual attitude toward intra-industry cooperation, which might be aptly summarized as “our way or the highway.” And the third step came less than a month later, at the 1994 Winter Consumer Electronics Show, which in accordance with long tradition took place over the first week of the new year in Las Vegas.

Anyone wandering the floor at this latest edition of CES would have seen a digital-games industry that was more fiercely competitive than ever. Sega, celebrating a recent report that gave them for the first time a slight edge over Nintendo in overall market share, had several attention-grabbing new products on offer, including the latest of their hugely popular Sonic the Hedgehog games; the Activator, an early attempt at a virtual-reality controller; the CDX, a portable CD player that could also be used as a game console; and, most presciently of all, a partnership with AT&T to bring online multiplayer gaming, including voice communication, to the Genesis. Meanwhile Nintendo gave the first hints about what would see the light of day some 30 months later as the Nintendo 64. And other companies were still trying to muscle their way into the bifurcated milieu of the living-room consoles. Among them were Atari, looking for a second shot at videogame glory with their Jaguar console; Philips, still flogging the dead horse known as CD-I; and a well-financed new company known as 3DO, with a console that bore the same name. Many traditional makers of business-oriented computers were suddenly trying to reach many of the same consumers, through products like Compaq’s new home-oriented Presario line; even stodgy old WordPerfect was introducing a line of entertainment and educational software. Little spirit of cooperation was in evidence amidst any of this. With “multimedia” the buzzword of the zeitgeist, the World Wide Web looming on the near horizon, and no clarity whatsoever about what direction digital technology in the home was likely to take over the next few years, the competition in the space was as cutthroat as it had ever been.

And yet in a far less glitzy back room of the conference center, all of these folks and more met to discuss the biggest cooperative initiative ever proposed for their industry, prompted by the ultimatum they had so recently been given by Senators Lieberman and Kohl: “Come up with a rating system for yourself, or we’ll do it for you.” The meeting was organized by the SPA, which had the virtue of not being any of the arch-rival console makers, and was thus presumably able to evince a degree of impartiality. “Companies such as 3DO, Atari, Acclaim, id Software, and Apogee already have rating systems,” said Ken Wasch, the longstanding head of the SPA, to open the proceedings. “But a proliferation of rating systems is confusing to retailers and consumers alike. Even before this became an issue in the halls of Congress or in the media, there was a growing belief that we needed a single, easily recognizable system to rate and label our products.”

But the SPA lost control of the meeting almost from the moment Wasch stepped down from the podium. The industry was extremely fortunate that neither Senator Lieberman nor Kohl took said organization up on an invitation to attend in person. One participant remembers the meeting consisting mostly of “people sitting around a table screaming and carrying on.” Cries of “Censorship!” and “Screw ’em! We’ll make the games we want to make!” dominated for long stretches. Many regarded the very notion of a rating system as an unacceptable intrusion by holier-than-thou bureaucrats; they wanted to call what they insisted was the senators’ bluff, to force them to put up actual government legislation — legislation whose constitutionality would be highly questionable — or to shut up about it.

Yet such advocates of the principle of free speech over all other concerns weren’t the sum total of the problem. Even many of those who felt that a rating system was probably necessary were thoroughly unimpressed with the hosts of the meeting, and not much disposed to fall meekly into line behind them.

The hard reality was that the SPA had never been viewed as a terribly effectual organization. Formed in 1984 — i.e., just after the Great Videogame Crash — to be the voice of the computer-software industry, it had occupied itself mostly with anti-piracy campaigns and an annual awards banquet in the years since. The return of a viable console marketplace in the form of the Nintendo Entertainment System and later the Sega Genesis had left it in an odd position. Most of the publishers of computer games who began moving some or all of their output to the consoles were members of the SPA, and through them the SPA itself got pulled into this brave new world. But there were certainly grounds to question whether the organization's remit really ought to involve the console marketplace at all. Were the likes of Acclaim, the publisher of console-based videogames like Mortal Kombat, truly in the same business as such other SPA members as the business-software titans Microsoft and WordPerfect? Nintendo had always pointedly ignored the SPA; Sega had joined as a gesture of goodwill to their outside publishers who were also members, but hardly regarded it as a major part of their corporate strategy. In addition to being judged slow, bureaucratic, and uncreative, the SPA was regarded by everyone involved with the consoles as being much more invested in computer software of all stripes than in console-based videogames. And with computer games representing at best fifteen percent of the overall digital-games market, that alone struck them as a disqualifier for spearheading an initiative like this one.

Electronic Arts, the largest of all of the American game publishers, was in an interesting position here. Founded in 1983 to publish games exclusively for computers, EA had begun moving onto consoles in a big way at the dawn of the 1990s, scoring hits there with such games as the first installments in the evergreen John Madden Football series. By the beginning of 1994, console games made up over two-thirds of their total business.

A senior vice president at EA by the name of Jack Heistand felt that an industry-wide rating system was “the right thing to do. I really believed in my heart that we needed to communicate to parents what the content was inside games.” Yet he also felt convinced from long experience that the SPA was hopelessly ill-equipped for a project of this magnitude, and the disheartening meeting which the SPA tried to lead at CES only cemented that belief. So, immediately after the meeting was over, he approached EA’s CEO Larry Probst with a proposal: “Let’s get all the [other] CEOs together to form an industry association. I will chair it.” Probst readily agreed.

Jack Heistand

The SPA was not included in this other, secret meeting, even though it convened at that same CES. Its participants rather included a representative from each of the five manufacturers of currently or potentially viable consoles: Sega, Nintendo, Atari, Philips, and 3DO. Rounding out their numbers were two videogame-software publishers: Acclaim Entertainment of Mortal Kombat fame and of course Electronic Arts. With none of the console makers willing to accept one of their rivals as chairman of the new steering committee, they soon voted to bestow the role upon Jack Heistand, just as he had planned it.

Sega, convinced of the worthiness of their own rating system, would have happily brought the entirety of the industry under its broad tent and been done with it, but this Nintendo’s pride would never allow. It became clear as soon as talks began, if it hadn’t been already, that whatever came next would have to be built from scratch. With Senators Lieberman and Kohl breathing down their necks, they would all have to find a way to come together, and they would have to do so quickly. The conspirators agreed upon an audacious timetable indeed: they wanted to have a rating system in place for all games that shipped after October 31, 1994 — just in time, in other words, for the next Christmas buying season. It was a tall order, but they knew that they would be able to force wayward game publishers to comply if they could only get their own house in order, thanks to the fact that all of the console makers in the group employed the walled-garden approach to software: each required a license to publish on its platform, meaning it could dictate which games would and would not appear there. They could thus force a rating system to become a ubiquitous reality simply by pledging not to allow any games on their consoles which didn’t include a rating.

On February 3, 1994, Senator Lieberman introduced the “Video Game Rating Act” to the United States Senate, stipulating that an “Interactive Entertainment Rating Commission” should be established, with five members appointed by President Bill Clinton himself; this temporary commission would be tasked with founding a new permanent governmental body to do what the industry had so far not been willing to do for itself. Shortly thereafter, Representative Tom Lantos, a Democrat from California, introduced parallel legislation in the House. Everyone involved made it clear, however, that they would be willing to scrap their legislation if the industry could demonstrate to their satisfaction that it was now addressing the problem itself. Lieberman, Kohl, and Lantos were all pleased when Sega dropped Night Trap from their product line as a sort of gesture of good faith; the controversial game had never been a particularly big seller, and had now become far more trouble than it was worth. (Mortal Kombat, on the other hand, was still posting sales that made it worth the controversy…)

On March 4, 1994, three representatives of the videogame industry appeared before Lieberman, Kohl, and Lantos at a hearing that was billed as a “progress report.” The only participant in the fractious hearing of three months before who returned for this one was Howard Lincoln of Nintendo, who had established something of a rapport with Senator Lieberman on that earlier occasion. Sega kept Bill White, who most definitely had not, well away, sending instead a white-haired senior vice president named Edward Volkwein. But most of the talking was done by the industry’s third representative, Jack Heistand. His overriding goal was to convince the lawmakers that he and his colleagues were moving as rapidly as possible toward a consistent industry-wide rating system, and should be allowed the balance of the year to complete their work before any legislation went forward. He accordingly emphasized over and over that ratings would appear on the boxes of all new videogames released after October 31.

The shift in tone from the one hearing to the next was striking; this one was a much more relaxed, even collegial affair than last time out. Lieberman, Kohl, and Lantos all praised the industry’s efforts so far, and kept the “think of the children!” rhetoric to a minimum in favor of asking practical questions about how the rating system would be implemented. “I don’t need to get into that argument again,” said Senator Lieberman when disagreements over the probability of a linkage between videogame violence and real-world aggression briefly threatened to ruin the good vibe in the room.

“I think you’re doing great,” said Senator Kohl at the end of the hearing. “It’s a wonderful start. I really am very pleased.” Mission accomplished: Heistand had bought himself enough time to either succeed or fail before the heavy hand of government came back on the scene.



Heistand’s remit was rapidly growing into something much more all-encompassing than just a content-rating board. To view his progress was to witness nothing less than an industry waking up to its shared potential and its shared problems. As I’ve already noted, the videogame industry as a whole had long been dissatisfied with its degree of representation in the SPA, as well as with the latter’s overall competence as a trade organization. This, it suddenly realized, was a chance to remedy that. Why not harness the spirit of cooperation that was in the air to create an alternative to the SPA that would focus solely on the needs of videogame makers? Once that was done, this new trade organization could tackle the issue of a rating system as just the first of many missions.

The International Digital Software Association (IDSA) was officially founded in April of 1994. Its initial members included Acclaim, Atari, Capcom, Crystal Dynamics, Electronic Arts, Konami, Nintendo, Philips, Sega, Sony, Viacom, and Virgin, companies whose combined sales made up no less than 60 percent of the whole videogame industry. Its founding chairman was Jack Heistand, and its first assigned task was the creation of an independent Entertainment Software Rating Board (ESRB).

Heistand managed to convince Nintendo and the others to accept the man who had chaired Sega’s ratings board for the same role in the industry-wide system. Arthur Pober had a reputation for being, as Heistand puts it, “very honorable. A man of integrity.” “Arthur was the perfect guy,” says Tom Kalinske, then the president and CEO of Sega of America. “He had good relationships inside of the education world, inside of the child-development world, and knew the proper child psychologists and sociologists. Plus, we knew he could do it — because he had already done it for us!”

Neutral parties like Pober helped to ease some of the tension that inevitably sprang up any time so many fierce competitors were in the room together. Heistand extracted a promise from everyone not to talk publicly about their work here — a necessary measure given that Howard Lincoln and Tom Kalinske normally used each and every occasion that offered itself to advance their own company and disparage their rival. (Witness Lincoln’s performance at the hearing of December 9…)

Over the course of the next several months, the board hammered out a rating system that was more granular and detailed than the one Sega had been using. It divided games into five rather than three categories: “Early Childhood” (EC) for children as young as age three; “Kids to Adults” (K-A) for anyone six years of age or older; “Teen” (T) for those thirteen or older; “Mature” (M) for those seventeen or older; and “Adults Only” (AO) for those eighteen or older. It was not a coincidence that these ratings corresponded fairly closely to the movie industry’s ratings of G, PG, PG-13, R, and NC-17. A team of graphic artists came up with easily recognizable icons for each of the categories — icons which proved so well-designed for their purpose that most of them are still used to this day.

The original slate of ESRB icons. Since 1994, remarkably few changes have been made: the “Kids to Adults” category has been renamed “Everyone,” and a sixth category of games suitable for those ten years and older, known in the rating system’s nomenclature as “Everyone 10+,” has been added.

The ESRB itself was founded as a New York-based non-profit. Each game would be submitted to it in the form of a videotape of 30 to 40 minutes in length, which had to contain the game’s most “extreme” content. The board would then assign the game to one of its teams of three reviewers, all of whom were trained and overseen by the ESRB under the close scrutiny of Arthur Pober. The reviewers were allowed to have no financial or personal ties to the videogame industry, and were hired with an eye to demographic diversity: an example which Heistand gave of an ideal panel consisted of a retired black male elementary-school principal, a 35-year-old white full-time mother of two, and a 22-year-old white male law student. A measure of checks and balances was built into the process: publishers would have the chance to appeal ratings with which they disagreed, and all rated games would have to pass a final audit a week before release to ensure that the videotape which had been submitted had been sufficiently representative of the overall experience. The ESRB aimed to begin accepting videotapes on September 1, 1994, in keeping with the promise that all games released after October 31 would have a rating on the box. Everything was coming together with impressive speed.

But as Heistand prepared to return to Washington to report all of this latest progress on July 29, 1994, there remained one part of the games industry which had not fallen into line. The SPA was not at all pleased by the creation of a competing trade association, nor by having the rug pulled out from under its own rating initiative. And the computer-game makers among its members didn’t face the same compulsion to accept the ESRB’s system, given that they published on open platforms with no gatekeepers.



The relationship between computer games and their console-based brethren had always been more complicated than outsiders such as Senators Lieberman and Kohl were wont to assume. While the degree of crossover between the two had always been considerable, computer gaming had been in many ways a distinct form of media in its own right since the late 1970s. Computer-game makers claimed that their works were more sophisticated forms of entertainment, with more variety in terms of theme and subject matter and, in many cases, more complex and cerebral forms of gameplay on offer. They had watched the resurrection of the console marketplace with as much dismay as joy, being unimpressed by what many of them saw as the dumbed-down “kiddie aesthetic” of Nintendo and the stultifying effect which the consoles’ walled gardens had on creativity; there was a real feeling that the success of Nintendo and its ilk had come at the cost of a more diverse and interesting future for interactive entertainment as a whole. Perhaps most of all, computer-game makers and their older-skewing demographic of players profoundly resented the wider culture’s view of digital games of any stripe as essentially children’s toys, to be regulated in the same way that one regulated Barbie dolls and Hot Wheels cars. These resentments had not disappeared even as many of the larger traditional computer-game publishers, such as EA, had been tempted by the booming market for console-based videogames into making products for those systems as well.

Johnny L. Wilson, the editor-in-chief of Computer Gaming World magazine, voiced in an editorial the objections which many who made or played computer games had to the ESRB:

[The ESRB rating system] has been developed by videogame manufacturers and videogame publishers without significant input by computer-based publishers. The lone exception to this rule is Electronic Arts, which publishes personal-computer titles but nets more than two-thirds of its proceeds from videogame sales. The plan advocated by this group of videogame-oriented companies calls for every game to be viewed by an independent panel prior to release. This independent panel would consist of parents, child psychologists, and educators.

How does this hurt you? This panel is not going to understand that you are a largely adult audience. They are not going to perceive that there is a marketplace of mature gamers. Everything they evaluate will be examined under the rubric, “Is it good for children?” As a result, many of the games covered in Computer Gaming World will be rated as unsuitable for children, and many retailers will refuse to handle these games because they perceive themselves as family-oriented stores and cannot sell unsuitable merchandise.

The fate of Night Trap, an unusually “computer-like” console game, struck people like Wilson as an ominous example of how rating games could lead to censoring them.

Honestly held if debatable opinions like the above, combined perhaps with pettier resentments — over the stratospheric sales of console games in comparison to those that ran on computers, and over the SPA’s own sidelining by the IDSA — led the SPA to reject the ESRB, and to announce the formation of its own ratings board just for computer games. It was to be called the Recreational Software Advisory Council (RSAC), and its founding president was to be Robert Roden, the general counsel and director of business affairs for the computer-game publisher LucasArts. This choice of an industry insider rather than an outside expert like Arthur Pober reflected much of what was questionable about the alternative rating initiative.

Indeed, and although much of the reasoning used to justify a competing standard was cogent enough, the RSAC’s actual plan for its rating process was remarkable mostly for how comprehensively it failed to address the senators’ most frequently stated concerns about any self-imposed rating standard. Instead of asking publishers to submit videotapes of gameplay for review by an independent panel, the RSAC merely provided them with a highly subjective questionnaire to fill out; in effect, it allowed them to “self-rate” their own games. And, in a reflection of computer-game makers’ extreme sensitivity to any insinuation that their creations were just kids’ stuff, the RSAC rejected outright any form of age-based content rating. Age-based rating systems were “patronizing,” claimed the noted RSAC booster Johnny L. Wilson, because “different people of widely disparate ages have different perceptions of what is appropriate.” In lieu of sorting ratings by age groups, the RSAC would use descriptive labels stipulating the amount and type of violence, sex, and profanity, with each being ranked on a scale from zero to four.

The movie industry’s rating system was an obvious counterexample to this idea that age-based classification must necessarily entail the infantilization of art; certainly cinema still enjoyed vastly more cultural cachet than computer games, despite its own longstanding embrace of just such a system. But the computer-game makers were, it would seem, fairly blinded by their own insecurities and resentments.

A representative of the SPA named Mark Traphagen was invited to join Jack Heistand at the hearing of July 29 in order to make the case for the RSAC’s approach to rating computer games. The hearing began in an inauspicious fashion for him. Senator Lieberman, it emerged during opening statements, had discovered id Software’s hyper-violent computer game DOOM in the interim between this hearing and the last. This occasion thus came to mark the game’s coming-out party on the national stage. For the first but by no means the last time, a politician showed a clip of it in action, then lit into what the audience had just seen.

What you see there is an individual with a successive round of weapons — a handgun, machine gun, chainsaw — just continuing to attack targets. The bloodshed, the gunfire, and the increasingly realistic imagery combine to create a game that I would not want my daughter or any other child to see or to play.

What you have not seen is some of the language that is displayed onscreen when the game is about to be played. “Act like a man!” the player is told. “Slap a few shells into your shotgun and let’s kick some demonic butt! You’ll probably end up in Hell eventually. Shouldn’t you know your way around before you make an extended visit?”

Well, some may say this is funny, but I think it sends the wrong message to our kids. The game’s skill levels include “I’m Too Young To Die” and “Hurt Me Plenty.” That obviously is not the message parents want their kids to hear.

Mark Traphagen received quite a grilling from Lieberman for the patent failings of the RSAC self-rating system. He did the best he could, whilst struggling to educate his interrogators on the differences between computer and console games. He maintained that the two were in effect entirely different industries — despite the fact that many software publishers were, as we’ve seen, active in both. This was an interesting stand to take, not least in the way that it effectively ceded the ground of console-based software to the newly instituted IDSA, in the hope that the SPA could hang onto computer games.

Traphagen: Despite popular misconceptions and their admitted similarities to consumers, there are major differences between the personal-computer-software industry and the videogame industry. While personal-computer software and videogame software may be converging toward the compact disc as the preferred storage medium, those of us who develop and publish entertainment software see no signs of a convergence in either product development or marketing.

The personal-computer-software industry is primarily U.S.-based, small to medium in size, entrepreneurial, and highly innovative. Like our plan to rate software, it is based on openness. Its products run on open-platform computers and can be produced by any of thousands of companies of different sizes, without restrictive licensing agreements. There is intense competition between our industry and the videogame industry, marked by the great uncertainty about whether personal computers or some closed platform will prevail in the forthcoming “information superhighway.”

Senator Lieberman: Maybe you should define what a closed platform is in this regard.

Traphagen: A closed platform, Senator, is one in which the ability to create software that will run on that particular equipment is controlled by licensing agreements. In order to create software that will run on those platforms, one has to have the permission and consent of the equipment manufacturer.

Senator Lieberman: And give us an example of that.

Traphagen: A closed platform would be a videogame player.

Senator Lieberman: Such as a Sega or Nintendo?

Traphagen: That is right. In contrast, personal computers are an open platform in which any number of different companies can simply buy a development package at a retailer or a specialty store and then create software that will operate on the computer.

Traphagen explained the unwillingness of computer-game makers to fall under the thumb of the IDSA by comparing them to indie film studios attempting to negotiate the Hollywood machine. Yet he was able to offer little in defense of the RSAC’s chosen method of rating games. He made the dubious claim that creating a videotape for independent evaluation would be too technically burdensome on a small studio, and had even less to offer when asked what advantage accrued to not rating games by suitable age groups: “I do not believe there is an advantage, Senator. There was simply a decision that was taken that the ratings would be as informative as possible, without being judgmental.”

Some five weeks after this hearing, the RSAC would hold a press conference in Dallas, Texas, the home of id Software of DOOM fame. In fact, that game was used to illustrate how the rating system would work. Even some of the more sanguine members of the gaming press were surprised when it received a rating of just three out of four for violence. The difference maker, the RSAC representatives explained, was the fact that DOOM‘s violence wasn’t “gratuitous”; the monsters were trying to kill you, so you had no choice but to kill them. One has to presume that Senators Lieberman and Kohl would not have been impressed, and that Mark Traphagen was profoundly thankful that the press conference occurred after his appearance before them.

Even as it was, the senators’ skepticism toward the RSAC’s rating system at the hearing stood out all the more in contrast to their reception of the ESRB’s plan. The relationship between Senator Lieberman and Jack Heistand had now progressed from the cordial to the downright genial; the two men, now on a first-name basis, even made room for some banter on Heistand’s abortive youthful attempts to become a rock star. The specter of government legislation was never even raised to Heistand. It was, needless to say, a completely different atmosphere from the one of December 9. When the hearing was finished, both sides sent out press notices praising the wisdom and can-do spirit of the other in glowing terms.

But much of the rest of the games industry showed far less good grace. As the summer became the fall and it became clear that game ratings really were happening, the rants began, complete with overheated references to Fahrenheit 451 and all of the other usual suspects. Larry O’Brien, the editor of the new Game Developer magazine, made his position clear in the first line of his editorial: “Rating systems are crap.”

With the entire entertainment industry rolling over whenever Congress calls a hearing, it’s fallen on us to denounce these initiatives for what they are: cynical posturing and electioneering with no substance. Rating systems, whether for movies, television, videogames, or any other form of communication, don’t work, cost money, and impede creativity. Everyone at those hearings, politicians and witnesses alike, knows that. But there’s nothing politicians love more than “standing up for the family” and blaming America’s cultural violence on Hollywood. So the entertainment industry submissively pisses all over itself and proposes “voluntary” systems from the pathetic to the laughable.

Parents should decide. If parents don’t want their kids to play X-COM or see Terminator 2, they should say no and put up with the ensuing argument. They don’t need and shouldn’t get a rating system to supplement their authority. The government has no right to help parents say no at the video store if that governmental interference impedes your right to develop whatever content you feel appropriate.

We all have responsibilities. To create responsibly, to control the viewing and gaming habits of our own children, and to call the government’s ratings initiatives what they are: cynical, ineffective, and wrong-headed.

The libertarian-leaning Wired magazine, that voice of cyber-futurism, published a jeremiad from Rogier Van Bakel that was equally strident.

Violent games such as DOOM, Night Trap, and Mortal Kombat are corrupting the minds and morals of millions of American children. So what do you do? Easy.

You elect people like Herb Kohl and Joe Lieberman to the US Senate. You applaud them when they tell the videogame industry that it’s made up of irrepressible purveyors of gratuitous gore and nefarious nudity. You nod contentedly when the senators give the industry an ultimatum: “Either you start rating and stickering your games real soon, or we, the government, will do it for you.”

You are pleasantly surprised by the industry’s immediate white flag: a rating system that is almost as detailed as the FDA-mandated nutrition information on a can of Campbell’s. You contend that that is, in fact, a perfect analogy: all you want, as a consumer, is honest product labeling. Campbell’s equals Sega equals Kraft equals 3DO.

Finally, you shrug when someone remarks that it may not be a good idea to equate soup with freedom of speech.

All that was needed now was a good conspiracy theory. This Karen Crowther, a spokesperson for makers of shareware computer games, helpfully provided when she said that the government had gotten “hoodwinked by a bunch of foreign billion-dollar corporations (such as Sony, Nintendo, and Sega) out to crush their US competition.”

Robert Peck, a lawyer for the American Civil Liberties Union, flirted with a legal challenge:

This [rating] system is a response to the threat of Senators Lieberman and Kohl that they would enact legislation requiring labels unless the industry did something to preempt them. The game manufacturers are being required to engage in speech that they would otherwise not engage in. These ratings have the government’s fingerprints all over them.

This present labeling system isn’t going to be the end of it. I think some games are going to be negatively affected, sales-wise, and the producers of those games will probably bring a lawsuit. We will then see that this system will be invalidated.

The above bears a distinct whiff of legalistic wishful thinking; none of it came to pass.

While voices like these ranted and raved, Jack Heistand, Arthur Pober, and their associates buckled down soberly to the non-trivial task of putting a rating on all new console-based videogames that holiday season, and succeeded in doing so with an efficiency that one has to admire, regardless of one’s position on the need for such a system. Once the initial shock to the media ecosystem subsided, even some of the naysayers began to see the value in the ESRB’s work.

Under the cover of the rating system, for example, Nintendo felt able to relax many of their strict “family-friendly” content policies. The second “Mortal Monday,” heralding the release of Mortal Kombat II on home consoles, came in September of 1994, before the ESRB’s icons had even started to appear on games. Nevertheless, Nintendo improvised a stopgap badge labeling the game unsuitable for those under the age of seventeen, and felt protected enough by it to allow the full version of the coin-op original on their platform this time, complete with even more blood and gore than its predecessor. It was an early sign that content ratings might, rather than leading game makers to censor themselves, give them a feeling of carte blanche to be more extreme.

By 1997, Game Developer was no longer railing against the very idea of a rating system, but was fretting instead over whether the ESRB’s existing approach was looking hard enough at the ever more lifelike violence made possible by the latest graphics hardware. The magazine worried about unscrupulous publishers submitting videotapes that did not contain their games’ most extreme content, and the ESRB failing to catch on to this as games continued to grow larger and larger: “The ESRB system uses three (count ’em, three) ‘demographically diverse’ people to rate a game. (And I thought television’s Nielsen rating system used a small sample set.) As the stakes go up in the ratings game, the threat of a publisher abusing our rating system grows larger and larger.”

Meanwhile the RSAC strolled along in a more shambolic manner, stickering games here and there, but never getting anything close to the complete buy-in from computer-game publishers that the ESRB received from console publishers. These respective patterns held throughout the five years in which the dueling standards existed.

In the end, in other words, the computer-game people got what they had really wanted all along: a continuing lack of any concerted examination of the content of their works. Some computer games did appear with the ESRB icons on their boxes, others with the RSAC schemas, but plenty more bothered to include no content guidance at all. Satisfied for the time being with the ESRB, Senators Lieberman and Kohl didn’t call any more hearings, allowing the less satisfying RSAC system to slip under the radar along with the distinct minority of digital games to which it was applied, even as computer games like Duke Nukem 3D raised the bar for violence far beyond the standard set by DOOM. The content of computer games wouldn’t suffer serious outside scrutiny again until 1999, the year that a pair of rabid DOOM and Duke Nukem fans shot up their high school in Columbine, Colorado, killing thirteen teachers and students and injuring another 24. But that is a tragedy and a controversy for a much, much later article…

(Sources: the books Dungeons and Dreamers: The Rise of Computer Game Culture from Geek to Chic by Brad King and John Borland, The Ultimate History of Video Games by Steven L. Kent, and Game Over: How Nintendo Conquered the World by David Sheff; Game Developer of September 1994, December 1994, August/September 1995, September 1997, and January 1998; Computer Gaming World of June 1994, December 1994, May 1996, and July 1999; Electronic Entertainment of November 1994 and January 1995; Mac Addict of January 1996; Sierra’s newsletter InterAction of Spring 1994; Washington Post of July 29 1994; the article “Regulating Violence in Video Games: Virtually Everything” by Alex Wilcox in the Journal of the National Association of Administrative Law Judiciary, Volume 31, Issue 1; the United States Senate Committee on the Judiciary’s publication Rating Video Games: A Parent’s Guide to Games; the 1994 episode of the television show Computer Chronicles entitled “Consumer Electronics Show.” Online sources include Blake J. Harris’s “Oral History of the ESRB” at VentureBeat and C-SPAN’s coverage of the Senate hearings of December 9 1993, March 4 1994, and July 29 1994.)


The Ratings Game, Part 2: The Hearing

It’s widely known by those who are interested in the history of gaming that the videogame industry was hauled into a United States Senate hearing on December 9, 1993, to address concerns about the violence and sex to be found in its products. Yet the specifics of what was said on that occasion have been less widely disseminated. This, then, is my attempt to remedy that lack. What follows is a transcript of the hearing in question. It’s been rather heavily edited by me with an eye toward grammar, clarity, and concision, but always in good faith, making every effort to preserve the meaning behind the mangled diction and pregnant pauses that are such an inevitable part of extemporaneous speech.

Being a snapshot of a very particular moment in time, the transcript below needs to be understood in the context of that time. I hope that my previous article has provided much of that context, and that the links, footnotes, and occasional in-line comments in the transcript itself will provide the rest. I cannot emphasize enough, however, the importance of the fact that the hearing took place during a major spate of violent crime. Many of the other “murder panics” of American history had little relationship to the true statistical levels of violent crime, having been drummed up by disingenuous leaders and accepted by their credulous followers for reasons of emotion and prejudice. But there was some justification for this one: 1993 was marked by just a shade under one murder or non-negligent manslaughter for every 10,000 American citizens, the culmination of more than a decade of steadily increasing violence. No one assembled at the hearing could know that violent crime would begin a precipitous plunge the following year, the start of a decline that has lasted almost all the way through to our present year of 2021.

For all that the hearing is of its time in this and countless other respects, there’s also a disappointing timelessness about the affair. Many of the arguments deployed for and against the idea of hyper-violent videogames as a negative social force are little changed from the ones we hear today. Even more dismayingly, the psychological research into the matter is hardly more clear-cut today than it was in 1993, being shot through with the same researcher biases and methodological weaknesses. Much has changed over the past quarter-century, but it seems we’ve made very little progress at all in our understanding of this issue.

But enough of my editorializing. Here’s the transcript so that you can decide for yourself.



As one of the two instigators of the proceedings, Senator Herbert Kohl delivered the opening remarks. A moderate Democrat from Wisconsin, he had made a fortune founding and running a chain of grocery stores and department stores that bore his name, and was currently the owner of the Milwaukee Bucks basketball team. He was nearing the end of his first term in the Senate, facing an election the following November.

Senator Herbert Kohl: Today is the first day of Hanukkah, and we have already begun the Christmas season. It is a time when we think about peace on earth and goodwill toward all people, and about giving gifts to our friends and loved ones, but it is also a time when we need to take a close, hard look at just what it is we are actually buying for our kids. That is why we are holding this hearing on violent videogames at this time.

Senator Joseph Lieberman, a Democrat from Connecticut, had a more conventional political background than his colleague. A lawyer by education, he had first been elected to his state’s Senate in 1970, then gone on to serve as its attorney general for six years in the 1980s. Like Kohl, he belonged to the more moderate — i.e., conservative — wing of his party, and like him was facing his first reelection campaign as a United States Senator the following November.

Senator Joseph Lieberman: Thank you very much, Senator Kohl. It’s a privilege to co-chair this joint hearing with you. You’ve been out front protecting our children, and occasionally protecting the rest of us from them, in terms of their ability to obtain guns.[1]Kohl was a noted proponent of commonsense gun control, especially measures aimed at keeping guns out of the hands of minors.

Every day, the news brings more images of random violence, torture, and sexual aggression right into our living rooms. Just this week, we heard the dreadful story of a young girl abducted from a slumber party in her own home and then found dead. A man on a commuter train begins coldly and methodically to fire away at innocents on their way home, killing five people and injuring many others.

Violent images permeate more and more aspects of our lives, and I think it’s time to draw the line with violence in videogames. The new generation of videogames contains the most horrible depictions of graphic violence and sex, including particularly violence against women. Like the Grinch who stole Christmas, these violent videogames threaten to rob this holiday season of its spirit of goodwill. Instead of enriching a child’s mind, these games teach a child to enjoy torture. For those who have not seen these so-called “games” before, I want to show you what we’re talking about. What you’re about to see are scenes from two of the most violent videogames.

First we have Mortal Kombat, which is a martial-arts contest involving digitized characters. When a player wins in the Sega version of the game, the so-called “death” sequence begins. The game narrator instructs the player to “finish” — I quote, “finish” — his opponent. The player may then choose a method of murder, ranging from ripping the heart out to pulling off the head of the opponent with spinal cord attached. A version made by Nintendo leaves out the blood and decapitation, but it is still a violent game.

First, the Sega version.


And this is a brief sequence from the Nintendo version.


This version does not have the death sequences, and instead of red blood spurting out there’s… well, there’s some other liquid.

The second game is Night Trap, a game set in a sorority house. The object is to keep hooded men from hanging young women from a hook or drilling their necks with a tool designed to drain their blood. Night Trap uses actual actors and attains an unprecedented level of realism. It contains graphic depictions of violence against women, with strong overtones of sexual violence. I find this game deeply offensive and believe that it simply should be taken off the market now.

But these games are just the beginning. Last Wednesday in fact, as we were announcing our intention to hold this hearing, a videogame maker was announcing the release of a brutal videogame called Lethal Enforcers.[2]Konami’s Lethal Enforcers, a light-gun-based shooting-gallery game, was, like Mortal Kombat, one of the big arcade hits of 1992, and was likewise now coming home on consoles and computers.

This gun, called the “Justifier,” is the handheld implement with which you play the game by shooting at the screen. The more successful you are, the more powerful the gun becomes.

CD technology is also making sexually explicit videogames available. We have no way of keeping these games out of the hands of kids. Next on the horizon are videogames which are going to come to our TV screens over cable channels.[3]The dream of streaming videogame content in the same way that one streams television programs was an old one already by this point, dating back at least to the beginning of the previous decade. Despite many bold predictions and more scattershot attempts at actual implementation, it’s never quite come to pass in the comprehensive way that seemed so well-nigh inevitable in 1993.

Just a short while ago, some members of the videogame industry announced their intention to create a voluntary rating or warning-label system.[4]Sega had actually rolled out its own content-rating system just before the release of Mortal Kombat in September of 1993. Shortly thereafter, Sega and Nintendo, feeling the heat not only from Washington but from such powerful entities as California’s attorney general, did indeed agree to work together on a joint rating system — an unusual step for two companies whose relationship had heretofore been defined by their mutual loathing. On the very morning of this hearing, most of the rest of the video- and computer-game industry signed on to the initiative. I am pleased that the videogame industry recognizes there is a problem here. A credible rating system will help parents determine which videogames are appropriate for children of different ages. But I must say here that creating a rating system is, in my opinion, the very least the videogame industry can do, not the best they can do. It would be far better if the industry simply kept the worst violence and sex out of their games.

I have three major concerns as the industry develops a rating system. First, there are questions about the system itself. Who will do the rating? Will all manufacturers participate? How many age-specific ratings will there be? Will the industry spend money to inform parents about the meanings of the ratings? Second, a rating system must not be perverted into a cynical marketing ploy to attract children to more violent games. We must not allow the industry to trumpet a violent rating as a selling point. Third, the industry must work to enforce whatever rating system it creates. It should consider licensing agreements and contracts which specify that ratings will be clearly visible in any advertising and understandable by parents and consumers. Distributors, including video-rental stores and toy stores, should face some kind of penalty from manufacturers if they sell or rent to children below the minimum ages in the ratings.

Even if all of these concerns with a rating system are addressed, the videogame industry in my opinion will not have done as much as it should do to avoid creating more violence in our already too violent society. The rating system must not become a fig leaf for the industry to hide behind. They must also accept their responsibility to control themselves and simply stop producing the worst of this junk. The videogame industry has not lived up to their responsibility to America’s parents and children. I hope they will do so in the coming months, at worst by developing a credible and enforceable rating system, and at best by taking the worst games or the worst parts of those games off the market. If the violence and sex don’t come out of the games, parents should be able to keep the games out of their homes.

Senator Kohl: Thank you very much for that, Senator Lieberman. I’d like to briefly outline the major issues as I see them.

First, I believe the announcement by most of the videogame industry that they are committed to a rating system indicates that we’ve already changed the terms of the debate. Simply put, we are no longer asking whether violent videogames may cause harm to our children. Clearly they can, or the industry would not be willing to rate its own games so that young kids cannot obtain them.[5]The body of psychological research on the subject was — and is — nowhere near as clear-cut as this formulation implies. And the industry was, of course, motivated to implement a voluntary rating system by fear of government action rather than a sudden conviction that its products could indeed be harmful to children. The question now is just what restrictions we need to put in place and who should do it. In a sense, then, this hearing represents a window of opportunity for the videogame industry. I’ve spent the bulk of my adult life in business, and I know that if Nintendo and Sega, who together control 90 percent of the market, make the development and enforcement of a meaningful rating system a top priority, it will happen — quickly, voluntarily, and without chilling any First Amendment rights.

Second, let me say that I share Senator Lieberman’s outrage at the excerpts that we have just viewed. Mortal Kombat and Night Trap are not the kind of gifts that responsible parents give. Night Trap, which adds a new dimension of violence specifically targeted against women, is especially repugnant. It ought to be taken off the market entirely, or at the very least its most objectionable scenes should be removed.

But those games are only two examples. Senator Lieberman mentioned another videogame called Lethal Enforcers, which comes with an oversized handgun called the “Justifier.” This game teaches our kids that a gun can solve any problem with lethal force. Sometimes the player hits innocent bystanders. In that case, blood splatters to the ground, but what the heck, bystanders need to learn to get out of the way. Make no mistake: Lethal Enforcers is aimed at young kids. The lede of the ad says, “You won’t find a toy like this in any Cracker Jack box!” Well, I hope not.

What a cynical, irresponsible way to market a product. I find its glorification of guns to kids to be highly offensive, coming on the heels of our long battle to enact the Brady Bill and less than a month after Senator Lieberman and I passed a bill to take handguns away from minors.[6]Passed on November 30, 1993, the Brady Bill was a landmark piece of gun-control legislation which mandated that all prospective purchasers of a gun must first pass a background check and then wait five days to take delivery of their weapon. At the very least, this game sends a tremendously reckless message, and turns any effort to discourage youth violence completely on its head.

We all know that there are many causes of the violence that plagues our cities and increasingly our suburbs and small towns: broken families, poor education, easy access to firearms, drugs, the list goes on and on. Certainly violent videogames and TV violence have become a significant part. We cannot become paralyzed by the multiplicity of causes or the magnitude of the challenge. We need to make every effort to reduce this culture of carnage, and we need to make that effort now — because these games are going to become even more sophisticated and persuasive. Experts can debate whether entertainment violence causes brutality in society or merely reflects it, but there should be no dispute that the pervasive images of murder and mayhem encourage our kids to view violent activity as a normal part of life, and that interactive videogame violence desensitizes children to the real thing. Our children should not be told that to be a winner you need to be a killer. That subtle but menacing message pollutes our society.

I’d like to call now upon my esteemed colleague Senator Dorgan.

Senator Byron Dorgan, Democrat from North Dakota, worked briefly in the aerospace industry before becoming tax commissioner of his state in 1968 at the age of just 26. He was elected to the United States House of Representatives in 1980, going on to serve six terms there before being elected to the Senate in November of 1992.

Senator Byron Dorgan: I wanted very much to be here because I think this is a very important issue. It has been quite a leap from Pac-Man to Night Trap. Violence in videogames is a close cousin to violence on television. I know there are critics of these hearings; these critics are similar in my judgment to those who are still saying there’s no evidence that cigarettes cause cancer. There’s no evidence, they say, that violence in videogames affects our children. Have they lost all common sense? Of course it affects our children, and it affects our kids in a very negative way.

About two months ago, I saw the videogame Night Trap for the first time. It is a sick, disgusting videogame in my judgment. It’s an effort to trap and kill women.[7]The player’s objective in Night Trap, of course, is not to trap and kill women but rather to protect them from others who seek to do so. Shame on the people who produce that trash. It’s child abuse in my judgment.

I know some people will say we are trying to become the thought police. That is not my intention, but we have to take some basic responsibility in this country to protect children. Those who have children understand that they deserve protection. Certain things are appropriate for them and certain things are not. An author once said that 100 years from now it won’t really matter how big your income was or how big your house was, but the world might be a different place because you were important in the life of a child. Maybe our efforts will be important in the lives of children, and will make improvements in this world. I hope so.

Senator Kohl: We’d like to call our first panel now, composed of representatives from academia and education, and also concerned citizens. You may each give a statement.

Parker Page was the head of the Children’s Television Resource and Education Center, an advocacy group whose concerns about violent content on children’s television had recently spread to videogames.

Parker Page: Parents and educators tell us that they are increasingly worried about the effects of violent videogames on children. But do their worries merit national attention? In a country which is grappling with an epidemic of real-life violence, should we bother ourselves with kids’ leisure-time activities like playing videogames? We think the answer is yes.

For, while the impact of violent videogames is still open to debate, early studies as well as decades of television research warn us of possible consequences, especially for young children. The TV research is conclusive: violent screen images have their own special effects. Children who watch a steady diet of violent programming increase their chances of becoming more aggressive toward other children, less cooperative and altruistic, more tolerant of real-life violence, and more afraid of the world outside their homes. The case against videogame violence is not nearly so clear-cut for one simple reason: there hasn’t been enough research.

In the last ten years, only a handful of published reports have explored the effects of videogames. Moreover, the few experimental studies that have been conducted relied on crude cartoon-like videogames produced in the early 1980s, archaic by today’s standards of technological wizardry. Even so, several of the initial videogames studies suggest that there is a link between violent videogames and children’s aggression. For example, research studies have shown that, at least in the short term, children who play violent videogames are significantly more aggressive afterward than children who play less violent videogames. All this research is limited and it’s dated. The overall trends, however, must give us cause for concern as we approach virtual reality.

Mortal Kombat is the latest in a new generation of videogames that allow software designers to combine high levels of violence with fully digitized human beings. While these lifelike characters may make the videogame more thrilling, TV research sends us a warning that the more realistic the images of violence, the more likely they are to influence young children’s behavior and attitudes. Unfortunately, there is no timeout for millions of American children who are daily immersed in videogame violence and bombarded by videogame advertising. Therefore we recommend the following:

We recommend that the federal government fund independent research projects and disseminate their findings in order to shed additional light on the effects of videogames and other emerging interactive media. We recommend that the videogame industry provide parents with more accurate and detailed product information than is currently available, make a commitment to advertising strategies and marketing that reinforce the rating system rather than undercut it, and pursue an industry-wide agreement to put a cap on violence. Videogames that allow young players to participate in heinous acts of cruelty and inhumanity should not exist, regardless of profits.

Having made these recommendations, it’s important to underscore that parents must still shoulder the major responsibility for guiding their children’s entertainment activities. We recommend strongly that parents become actively involved in helping their children make videogame choices that reflect each family’s values, that they take seriously the videogame warning labels and content descriptions that are available, and that they make videogame playing truly interactive by setting up time limits, by substituting less violent games, and by making game-playing a social rather than an isolating activity.

In conclusion, I believe that this national attention on videogame violence affords us a rare opportunity to avoid the enormous time lag between the TV-violence research findings and public awareness. We have a chance to lower the impact of videogame violence on children’s lives sooner rather than later. I hope that all of us will seize the moment.

Eugene Provenzo was (and is) a professor of pedagogy at the University of Miami. He had recently published the book Video Kids: Making Sense of Nintendo.

Eugene Provenzo: Most adults pay relatively little attention to videogames. Although I’ve been studying toys, games, and the culture of childhood for nearly twenty years, it wasn’t until a neighbor came up to me three years ago and asked me what I thought of videogames that I began to consider their implications. What I found shocked me.

During the past decade, the videogame industry has developed games whose social content has been overwhelmingly violent, sexist, and racist, issues that I’ve addressed extensively in my research. For example, in Video Kids I explored the 47 most popular videogames in America. I found that 40 had violence as their main theme, and thirteen included scenarios in which women were kidnapped and had to be rescued — i.e., the idea of women as victims. Although men were often rescued in games too, they were never rescued by women. Videogames have a marked tradition of extreme violence which is also combined with gender discrimination.

Some of my more recent research suggests that videogames are evolving into a new type of interactive medium — participatory or interactive television is what I’m calling it. This new CD-ROM-based videogame technology represents a major evolutionary step beyond the simple graphics of the classic Space Invaders arcade games so popular fifteen or twenty years ago, or even the tiny animated cartoon figures that we see in the Nintendo system. When you combine CD-ROM-based technology, which allows you to have digitized films in the computer, with virtual-reality technologies like Sega’s Activator, which allows you to literally have your movements sensed — punching, hitting, kicking, all translated into the computer — you have something remarkably new and different. I want to make it very clear that we are dealing with something different, a new type of television.

I believe that the remaining years of this decade will see the emergence and definition of this media form in the same way that the 1940s and 1950s saw television emerge as a powerful social and cultural force. If the videogame industry is going to provide the foundation for the development of interactive television, I think that citizens, parents, educators, and legislators have cause for considerable concern and alarm.

We are at the threshold of a new generation of interactive television. While I believe as an educator that this technology has wonderful potential, I’m also convinced that if we continue using it without addressing the full ramifications and significance of the social content of videogames, we’ll be doing a serious disservice to both our children and our culture.

Dr. Robert Chase was the vice-president of the National Education Association, the largest labor union in the United States; its ranks included more than 2 million schoolteachers and other education professionals.

Robert Chase: I join Senator Lieberman in calling for the producers of electronic games to live up to their responsibilities in helping to raise a generation of children free from violence. It is disheartening that there is even a demand for games that are explicitly violent and graphically sexual.

The first line of defense against the wide distribution of such games remains the family. All parents must assume for themselves the responsibility to raise their children with values of respect and decency and a sense of limits about what is appropriate behavior. I don’t wish anyone to dictate to me what is appropriate for my daughters to see or to say or to do, any more than I would presume to tell you what is appropriate for your sons and daughters. However, I hope we share a commitment to providing parents with appropriate tools to make reasonable judgments for our children.

Electronic games, because they are active rather than passive, can do more than desensitize impressionable children to violence; they can actually encourage violence as the solution of first resort by rewarding participants for killing one’s opponents in the most grisly ways imaginable. The guidelines that now exist for films should be extended to electronic games. We can and must establish a system of parental notification about the graphic sexual or violent materials contained in some videogames.

Marilyn Droz represented the National Coalition on Television Violence.

Marilyn Droz: I’ve been a parent for sixteen years, a wife for twenty years, a teacher for 23 years, and a woman since the day I was born. Let me tell you, in all of the hats I wear, I find the games we’ve seen today extremely offensive, and the only words I can say to the manufacturers and shareholders of the companies are, “Shame on you!” I think they really should stop and think about what they’re doing. I mean, how would you like to have a teenage daughter go out on a date with someone who’s just played three hours of one of those games?

The word “toy” comes from a Scandinavian word meaning “little tools.”[8]This is, at best, an extremely dubious etymology. The Danish word “tøj,” which is pronounced like the English “toy,” actually means clothing. While “værktøjer” means tools, there is no single word for “little tools”: one would need to say “små værktøjer” to get that concept across. The Danish word for toy, on the other hand, is “legetøj.” If the English word descends from the Scandinavian languages at all, it is almost certainly an abbreviated version of this word. Even this, however, is by no means a firmly established etymology. That’s very appropriate because play is the work of children; it’s what prepares them for the future. The technology of today is phenomenal, and it’s going to have the power to prepare our children for a future that we are not able to understand ourselves, a future that’s well worth looking for — if we can get the videogame industry to change some of their values.

When computers first came out, videogames were played equally among boys and girls in the classroom; there was equal time.[9]I have seen no evidence in my own research that there was ever a time when videogames were as popular among girls as boys. Now, it seems boys are comfortable with the technology. Videogames are geared to boys. Fifty percent of our children are losing the value of interactive technology. We are losing a generation of women. Our research indicates that girls are very offended by the lack of games for them. They feel inferior. It’s very easy to determine which are the girl games and which the boy games; girl games are the ones with the fluffy little bunnies. Playing videogames has become a boy thing. Girls are being trained to dress Barbie dolls, while boys are being trained in technology. This has to change. As a mother, as a parent, as a woman, and as an American citizen, I am stating that this needs to change.

Games have confused children’s desire for action with violence. Children want action, they want excitement; they do not need to see the insides of people splattered against the wall. We all work so hard to raise our children well, and our efforts are undermined by videogames, which teach them that the only way to solve problems — the quickest, most efficient way — is to kill ’em off. There are very few women characters with any control or power. Videogames tell our girls that they can be either sex objects or victims; that’s their choice. The very few women who have any kind of power are built with iron body parts, or they can blow a kiss of death. Once again, we’ve got sex and violence. This has to stop.

Almost everything we purchase nowadays has regulations. We have regulations saying that physical toys cannot contain parts that are easy to swallow. Well, I’m finding this violence difficult to swallow. Thank you for bringing this issue before the public.

Eugene Provenzo: I think another thing to point out here is that the psychological studies of the effects of videogames are all from the early 1980s. They’re based on arcade games like Space Invaders, which are highly depersonalized. There are four generations of videogames. There’s Pong, there’s Space Invaders, there’s Nintendo with its cartoon figures, and we’re into the next stage right now, which is Night Trap-type games. And there’s a new stage after this, which is the combining of this with virtual-reality devices. We’re beginning to move into that, where kids can physically participate in the violence. We need to do more studies; we don’t know yet what the results of playing a game like Night Trap are. But we can make some guesses.

Parker Page: Yes, there needs to be a body of upwards of 100 studies before the research on videogames will be as definitive as the research on television. However, given the similarities with television watching, I would be amazed if we don’t find either similar or stronger effects.

Eugene Provenzo: There’s a parallel issue that I think is relevant here in terms of violence against women. There is a new field emerging called cybersex; that’s not a joke. What it amounts to is pornography on CD-ROM. You can dial up what you want — a blonde, a redhead, a brunette, male or female — then do what you want to them. Imagine that getting into the hands of a thirteen- or fourteen-year-old who’s had no sexual experience. And they play these games for three or four years, then they finally meet a real woman on a date. That’s very scary. Look at the portrayal of the women in Night Trap. There are obvious sexual overtones there.

Parker Page: There are some folks who believe that violent videogames can drain away aggression — that they can have a cathartic effect, making kids less violent. That’s a great theory, but it makes for very lousy research. The research in the area of TV violence points in the exact opposite direction.

Senator Lieberman: Dr. Provenzo, you state in your book that videogames are not only violent and sexist but also racist. Can you give a few examples?

Eugene Provenzo: Sure. In interviews with children, they talked about the ninjas as being bad. And then you ask them who the ninjas are, and they said, “The Japs and the Chinese.” It turns out that they perceive Asians as being extremely violent, as being dangerous, as being evil. There is homophobia operating, in terms of how certain types of women are portrayed. It’s subtle and hard to get at sometimes, but I think it presents a relatively disturbing world.

I interviewed large numbers of girls. They said, “I don’t like videogames. I don’t like computers. I think I would like them, but I don’t like what they’re about.” The industry people often argue that videogames are children’s introduction into the culture of computing. We’re discriminating against girls by providing them with these consistent negative images. They get turned off of computers. We’re driving them away from these tools of the 21st century that they need to master. I think that’s very objectionable.

Senator Dorgan: Sega states in five mitigating points responding to the controversy over Night Trap that it was meant to be a satire of vampire films, and that the controversial scene we’ve just seen is displayed only when the player loses. Does that make you feel any better?

Marilyn Droz: Oh, it makes me feel a lot better that if you’re a loser you’re dead.

Eugene Provenzo: At the beginning of Night Trap, your commander looks you straight in the eye and says, “If you don’t have the brains or guts for this mission, then give control to someone who does.” A fascist military type looks at you and essentially says, “If you’re not man enough to do this, forget it! You don’t deserve to play this game!”

I’d like to make a suggestion that I don’t think is that difficult to implement: I’d like to see violence portrayed accurately. I would like to see a videogame where, if you punch someone viciously, they don’t get up and take another punch. Children don’t understand what guns and hitting do. They don’t get that communicated to them. They think that guns aren’t that serious. They don’t understand that when a bullet goes through your leg, you may not walk again, you may lose your leg.

Senator Lieberman: We thank all of you for coming. Let me now call the second panel.

Howard Lincoln is a legendary figure in the history of videogames. Along with Minoru Arakawa, he is widely and justly recognized for resurrecting the videogame console in North America in the form of the Nintendo Entertainment System. At the time of this hearing, he had the title of senior vice president of Nintendo of America, but he effectively ran the multi-billion-dollar branch as a co-equal with Arakawa, its official founder and president. Famous or infamous, depending on one’s point of view, for his take-no-prisoners approach to business, his fingerprints were on every aspect of Nintendo’s American strategy.

Howard Lincoln: Nintendo is just as concerned about the issue of violence in videogames as anyone in this room. Of course, every entertainment executive tells Congress that. But Nintendo can back it up.

In the mid-1980s, when Nintendo entered the videogame business in this country, the issue of violence in videogames was not in the public’s eye. But just like today, there was a computer-software industry selling videogames, and some of these games contained excessive violence and pornographic material. We didn’t want Nintendo’s name associated with this kind of product. Even then, we were concerned about game content. So in 1985, when we launched our first Nintendo home-videogame system, we made a conscious decision not to allow excessively violent, sexually explicit, or otherwise offensive games on it. We incorporated a patented security chip in all Nintendo hardware and software; this enabled us to review and approve the content of all videogames played on Nintendo’s hardware, whether made directly by Nintendo or by one of our approximately 70 third-party licensees.[10]This chip also allowed Nintendo to assure that they collected a royalty from each and every game that was sold for their console — something Atari wouldn’t or couldn’t do during the first videogame craze. Nintendo has guidelines which control game content, and we’ve applied these to every one of the more than 1200 games released to the marketplace by Nintendo and its licensees. These guidelines prohibit sexually suggestive or explicit content; random, gratuitous, or excessive violence; graphic illustration of death; excessive force in sports games; ethnic, racial, national, or sexual stereotypes; profanity or obscenity; and the use of illegal drugs. Over the last eight years, these guidelines have kept an enormous amount of offensive material out of American homes.

Of course, our guidelines are not perfect, and may not answer everyone’s concerns. After all, videogames are a form of entertainment covering everything from education to the martial arts. But I must say that we have made a good-faith effort to keep offensive material off our game systems, and we intend to continue applying our game guidelines in the future.

In the past year, some very violent and offensive games have reached the market. Of course, I’m speaking about Mortal Kombat and Night Trap. Let me state for the record that Night Trap will never appear on a Nintendo system.[11]It wouldn’t have been technically feasible to release a Nintendo version of Night Trap because the company had no CD drive in its product catalog. A game which promotes violence against women simply has no place in our society.

Let me turn to Mortal Kombat. To meet our game guidelines, we insisted that one of our largest licensees, Acclaim Entertainment, remove the blood and death sequences present in the arcade version before we would approve this game. We did this knowing that our competitor would leave these scenes in, and with full knowledge that we would make more money if we included the offensive material. We knew that we would lose money by sanitizing Mortal Kombat, but sanitize it we did. We have been criticized by thousands of young players for insisting that the death sequences be removed from this game.

Senator Lieberman: So, people actually complain that they can’t have the more violent game on the Nintendo system?

Howard Lincoln: That’s correct. Letters and phone calls say, “Leave in the violence! You’re censoring!”

We share the public’s growing concern with violence. Nintendo will do everything it can to develop a workable game-rating system. But a rating system is no substitute for corporate responsibility. Rating games will not make them less violent. Only manufacturers can do that by keeping outrageous games like Night Trap off the market.

Bill White was a vice-president of marketing and communications at Sega of America. He had left Nintendo to join what everyone there regarded as the enemy camp less than six months before. The bad blood between Lincoln and White — a proxy for the bad blood between the arch-rival corporate entities they represented — was palpable throughout the hearing.

Bill White: I want to address three key points. First, the fallacy that Sega and the rest of the digital interactive-media industry only sell games to children. In fact, our consumer base is much broader. Second, the efforts which Sega has already made to provide parents with the information they need to distinguish between interactive-media products which are appropriate for young people and those which are not. And third, the efforts which Sega is currently making to gain the cooperation of all interactive-media companies to develop an industry-wide rating system.

In recent days, the glare of the media spotlight on this issue has resulted in a number of distorted and inaccurate claims. The most damaging of these in my view is the notion that Sega and the rest of the digital-interactive industry are only in the business of selling games to children. This is not the case. Yes, many of Sega’s interactive-video titles are intended and purchased for young children. Many other Sega titles, however, are intended for and purchased by adults for their personal entertainment and education. The average Sega CD user is almost 22 years old, and only 5 percent are under the age of thirteen. The average Sega Genesis user is almost nineteen years old, and fewer than 30 percent are under the age of thirteen. There truly is something for everyone in our software catalog, and the variety of available software is multiplying each day. Interactive media should be treated no differently than the television, motion-picture, recorded-music, or publishing industries. Attempts to relegate digital interactive software to a media backwater are outdated and inappropriate. It makes no more sense to conclude today that digital interactive media is only for children than it would have, when the Gutenberg press was in its infancy, to conclude that the printed word was only for Bible readers.

Digital interactive media communicates increasingly diverse information to an increasingly diverse audience. Looking at our most recent data for 1993, action-adventure titles such as Sonic Spinball and Jurassic Park account for 40 percent of the revenue from our library. Sports titles such as NBA Action ’94, World Series Baseball, and Joe Montana Football account for 35 percent of our revenues. Fighting games such as X-Men and Eternal Champions comprise 13 percent of our revenues. Titles in the children-entertainment category such as Barney’s Hide and Seek, Where in the World is Carmen Sandiego?, and Fun ‘N Games produce 5 percent of our revenues. Role-playing games such as Landstalker make up 5 percent of revenues. And strategy and puzzle games such as Dr. Robotnik’s Mean Bean Machine constitute 2 percent of revenues.

As you can see, evolving interactive technology reaches a huge market that goes well beyond the child-oriented titles that gave the industry its start. Anything Congress might do on this front would affect a large, diverse group of consumers, young and old, in a volatile industry still in its infancy. Information, not regulation, is the appropriate policy.

Last September, Sega completed its implementation of a comprehensive guidance program which we began developing over a year and a half ago. It is a three-pronged approach designed to help parents determine the age-appropriateness of different interactive-video software. It includes a rating system, a toll-free hotline, and an informational brochure. Building on the motion-picture industry’s model, the Sega rating system applies one of three classifications to each interactive program released by Sega: GA for general audiences, MA-13 for mature audiences age thirteen and over, and MA-17 for titles not suitable for those under age seventeen. A Videogame Rating Council, created by Sega and consisting of independent experts in the areas of child psychology, sociology, cinema, and education, is responsible for evaluating each game and giving it the appropriate rating classification. I want to emphasize that this is an independent council. Even though it takes considerable time to evaluate each product, individual council members are paid only a small honorarium for each game they rate.

And now we and others in this industry are prepared to take the next step. This morning, a number of interactive-video companies and some of the nation’s leading retailers announced their plan for creating an industry-wide rating system. The coalition committed to this effort includes Atari, 3DO, Wal-Mart, Sears, Toys ‘R’ Us, Blockbuster Video, and videogame publishers representing over 90 percent of the market. The goal is to develop and implement a rating system that enjoys widespread support and voluntary participation throughout the industry.

There is every reason to be optimistic about the industry’s ability to voluntarily provide parental guidance, but we ask that you treat digital interactive media as you have treated other media such as the motion-picture industry: give parents the power to choose what’s right for their kids, but don’t tell adults what’s right for them.

Ileen Rosenthal was the general counsel of the Software Publishers Association. Formed in 1984, when videogame consoles seemed to most to be a fad of the past and personal computers the exclusive future of interactive entertainment, the traditionally computer-focused SPA was not an overly prominent voice in the world of Nintendo and Sega, although the latter was a member. Indeed, their biggest concern for years was a problem that effectively didn’t exist on the consoles, thanks to their use of cartridge-based, read-only media: software piracy, which the SPA opposed with a long-running media campaign whose tagline was “Don’t Copy That Floppy!” The presence of a representative of the SPA at this landmark hearing is often overlooked — as, for that matter, Rosenthal’s presence apparently was to some extent by the people who called the hearing; in marked contrast to the sustained grilling delivered to Howard Lincoln and especially Bill White, she would receive just one perfunctory yes/no question from the senators after making her opening statement.

While they made up only about 10 percent of the digital-gaming market in 1993, computer games were hotbeds of innovation, being in many cases more complex and aesthetically ambitious than their console counterparts, with a customer demographic that skewed older even than that of Sega. The people holding the hearing would doubtless have found plenty on personal computers to be outraged about, had they only looked: CD-ROM-based “interactive movies” like Voyeur were far more sexually suggestive than the likes of Night Trap, while action games like id Software’s Wolfenstein 3D, which were now regularly bubbling up from the rough-and-ready shareware underground, were at least as violent as Mortal Kombat. But, thanks to their smaller and older player base — and doubtless thanks to the fact that personal computers tended to be installed in private bedrooms and offices rather than public living rooms — the content of computer games would largely escape serious mainstream scrutiny for years to come. Not until the Columbine High School Massacre of 1999 was carried out by a pair of rabid DOOM fans would computer games find themselves the focal point of a controversy over violent media. (In one of those delicious concordances which history delivers from time to time, id Software would upload the first episode of DOOM to the shareware servers that were to host it about eight hours after this hearing wrapped up.)

Ileen Rosenthal: As the videogame industry has grown, we are finding that some products have begun to incorporate violent and explicit themes. It is inevitable that some of these products will find their way into the hands of children. However, in our attempt to protect our children from those games which contain violent and mature themes, we must not lose sight of the fact that the vast majority of games are appropriate for children, and have the potential to develop many important and socially desirable skills. For example, it is a fact that children who are considered to have short attention spans can focus for hours on a videogame, discovering rules and patterns by an active and interactive process of trial and error. Surely the potential of this medium for bettering our children’s thinking skills is enormous. Even in the literature of Dr. Page’s organization, it asks, “Is there anything good about playing videogames?” The answer: “Sure there is. Like puzzles, board games, and other forms of interactive entertainment, playing videogames can help kids relax, learn new strategies, develop concentration skills, and achieve goals. If they are playing with others, it can also be a great time for socialization.”

Each month, SPA puts out a list of the top-selling software. In September of 1993, most of the games on it had nothing to do with violence: Microsoft Flight Simulator; Wing Commander: Privateer, an outer-space role-playing game; Front Page Sports: Football; X-Wing; Lands of Lore, a fantasy role-playing game; SimCity.[12]Rosenthal doesn’t make it clear here that, in keeping with the computer focus of the SPA, this list includes only games for computers, not consoles. Further, only games that were sold as boxed products in retail stores are included; the list misses entirely the vibrant shareware scene, where games like id’s Wolfenstein 3D were already pushing the envelope on gore and violence at least as much as Sega. Thus it provides a somewhat distorted view of the overall state of gaming even on computers. I want to point out that computer-based games have traditionally been targeted to an older audience than the original videogames.

Dawn Wiener, president of the Video Software Dealers Association, and Craig Johnson, a past president of the Amusement and Music Operators Association, also delivered prepared statements. But they largely echoed Bill White’s statement that the industry ought to be allowed to regulate itself — it’s clear that a degree of message coordination went on prior to the hearing — and they did so in fairly milquetoast fashion at that. So, I’ve chosen to omit their statements here.

Senator Lieberman: Mr. White, let me go right to the heart of the matter with you. Mr. Lincoln just said that Night Trap has no place in our society. Why don’t you agree? Why don’t you pull Night Trap off the market?

Bill White: The interactive-media industry has grown tremendously, and children represent only a portion of the audience that we serve. Night Trap was developed for an adult audience. Sega’s independent rating council labeled it MA-17: “not appropriate for children.”

Senator Lieberman: But do you think that stuff is appropriate even for an adult audience? A provocatively dressed woman is brutally attacked. A lot of the products your company produces are great. Why do you need to produce this stuff, whether for adults or kids?

Bill White: If you saw only the violent or gory scenes from Roots or Gone with the Wind out of context, you might conclude that they’re horrible films. In reality, they aren’t. You’ve picked out a particular segment of the game. A winning effort in Night Trap saves the women. Your job as the player is to identify the villains and to trap them. This game is appropriate for adults who choose to entertain themselves with it.

Senator Lieberman: And if you’re a bad player?

Bill White: If you’re a bad player, you will see that scene.

Senator Lieberman: You have a long way to go to convince me that you’re raising anyone’s values or reducing their aggression, particularly toward women.

Bill White: We agree with much of what the earlier panel said. We believe that more research is necessary to conclude what effect games can have on both adults and children.

Senator Lieberman: Then why don’t you wait until the research is done?

Bill White: Because we believe that adults can make the choice for themselves of whether that game is right or wrong for them.

Senator Lieberman: I have here a recent Sega brochure. You’ve got Night Trap alongside Joe Montana Football and Spider-Man Versus Kingpin and Sherlock Holmes: Consulting Detective. Is this responsible advertising?

Bill White: We’ve taken the first step toward an industry-wide rating system. Just as the motion-picture industry produces films for adults as well as children, the interactive-entertainment industry will continue to produce products that are appropriate for both. We would like to see better enforcement at retail. We would like to see the ratings prominently displayed in advertising.

Senator Lieberman: You agree, then, that this brochure is irresponsible?

Bill White: That was developed prior to our full implementation of our rating system.

Senator Lieberman: If you’ve updated your rating system, I hope that you’ll also update your promotional system.

I want to show an advertisement for Mortal Kombat for Sega. This game is rated MA-13, not suitable for children under thirteen. But just watch this advertisement, and tell me whether it doesn’t encourage children under thirteen to buy Mortal Kombat.

The nerd that became a hero by buying Mortal Kombat looks to me to be under thirteen. What can you do to prevent boys under thirteen from seeing this ad and deciding that their masculinity and freedom from bullies will be determined by whether they can play this game?

Bill White: That advertisement is directed to teens, not to children. I can’t comment on the age of the cast because I just don’t know. The intent of our rating system is to take a first step. We’re proud of that step. We don’t believe it’s perfect, but we do believe that more information is the answer, not government regulation, and certainly not censorship.

Senator Lieberman: I agree with you. The rating system is only a first step. And it’s a fig leaf to cover a lot of transgressions if you don’t enforce it better and, I hope, apply a little bit of self-control to yourselves. Is that ad placed on children’s shows?

Bill White: No, that ad would not be placed on a children’s show. We buy television time directed toward teenagers and time directed toward children. That ad was not approved for children’s television.

Senator Lieberman: I have an ad here from GamePro magazine. At the top it says, “He’s back! Splatterhouse 3 is the kind of game ratings systems were invented for!” At the bottom, it says that it “includes deadly new weapons, six levels of monster-bashing mayhem, and killer special moves!” Doesn’t that kind of advertisement make a mockery of your rating system?

Bill White: I haven’t seen this advertisement. We have no control over what an independent publisher says about our rating system, any more than the motion-picture industry can control what an individual studio says about its rating system.

Senator Lieberman: But wouldn’t you agree, having seen it now, that that makes a mockery of your rating system? I can’t believe that’s what you want.

Bill White: We want to take the next step. That’s why we’ve worked around the clock for the past two weeks to establish an industry coalition that will develop an industry-wide rating system.

Senator Lieberman: Well, there’s a lot of work to do, to put it mildly.

Mr. Lincoln, I appreciate that you’ve been self-regulating to some degree, and I also appreciate that you’ve accepted the idea of a rating system. Even though your games are less violent and less graphically sexual, there is violence in them. Dr. Provenzo feels that there is a lot of violence in the Nintendo products. Can you assure us that everyone involved with Nintendo is committed to the rating system?

Howard Lincoln: I can certainly do that. But the point I’m making is that a rating system just doesn’t go far enough. We have to get our hands on the game content. We’ve been doing that, although, like anything, it’s not perfect.

I can’t sit here and allow you to be told that somehow the videogame market has been transformed from children to adults. It hasn’t been — and Mr. White, who is a former Nintendo employee, knows the demographics as well as I do. Further, I can’t let you be subject to this nonsense that this Sega Night Trap game is only meant for adults. There was no rating on this game at all when it was introduced. Small children bought it at Toys “R” Us, and he knows that as well as I do. They adopted the rating system when they started getting heat about this game. But today, as sure as I’m sitting here, a child can go into a Toys “R” Us and buy this product, and no one will challenge him.

I agree that everything Nintendo has done hasn’t been perfect. As a matter of fact, when I came into this hearing this morning, I saw that you have an advertisement for one of the Super Nintendo Entertainment System games. It says, “They’ve got a bullet with your name on it!” I phoned our head office and found out that the licensee put out that advertisement without our consent, without our review, and without our permission. If that advertisement is not withdrawn, that company is in breach of its license agreement. We do have the ability and the right to control advertising by our licensees, and we take that seriously. I’d like to apologize to this committee for the fact that we slipped up. But let me tell you, when I get back to Seattle, I will call that licensee.

Senator Lieberman: Thank you for your forthrightness. Thank you for taking responsibility. You’ve shown some leadership here. You’re not perfect, as you’ve said, but you’ve been a damn sight better than the competition.

Bill White: Senator, it’s all well and good for Nintendo to say it has content guidelines. Sega has content guidelines as well. I had the opportunity to meet with your staff and show them some Nintendo games, and to compare their level of violence to the same games on the Sega platform. I’d like to show some of that comparison in order to illustrate that the guidelines Mr. Lincoln speaks of continue to allow excessive violence — without the benefit of a rating system, without the benefit of packaging that clearly states this is for mature audiences.

Senator Lieberman: Mr. White, let me just say this to you. Today, Mr. Lincoln has accepted the idea of a rating system. Nintendo had previously been self-regulating more than you. They chose not to produce Night Trap, and they have a less violent version of Mortal Kombat. You have a rating system, but I still haven’t heard you accept responsibility for regulating the content of your games. That is what’s at issue, notwithstanding the tape you’ve just shown us, which doesn’t compare in my opinion to Mortal Kombat and Night Trap.

Senator Kohl: I’d like to ask both Mr. Lincoln and Mr. White the following question. As you expand your business into the adult market, can you guarantee that children won’t see this adult product?

Howard Lincoln: No.

Senator Kohl: Mr. White?

Bill White: No, we can’t, Senator. All we can do is work with the mechanisms that are available to us.

Senator Kohl: So, there’s no way we can feel comfortable that material which some of us might feel doesn’t belong on the market at all won’t get onto the market and then be viewed by children?

Bill White: There’s an interesting difference between Sega and Nintendo here, in that we’ve moved ahead with CD technology, while Nintendo has not. They continue to focus on children. We have recognized that the interactive-entertainment market is far larger. We would like to have a rating system that will allow us to develop games for that broad array of players.

Howard Lincoln: I didn’t realize the hearing was focused on market share. I thought we were talking about regulation and violence. My colleague must think differently. Certainly the industry is moving into new territory with new technology. Nintendo, for example, will soon be coming out with a 64-bit system. Graphics are going to become much better. Unless we can get everyone in the industry to put a stop to the kind of things you’re seeing in Night Trap, we’re just deceiving ourselves.

Senator Kohl: I think it’s encouraging that you find so much to disagree with each other about. It indicates that you’re not here in a lockstep way. You’re really concerned about what the others are doing, and are worried perhaps that you’re going to kill the goose that laid the golden egg. I hope you walk away with one thought: if you don’t do something about it, we will. Senator Dorgan?

Senator Dorgan: Does anybody here have any notion how many babies will be born this year out of wedlock? No? Over 1 million, 800,000 of whom will never learn the identity of their father during their lifetime. Children are growing up without supervision, without the parents you so blithely say should supervise them. I agree that parents ought to be involved in their children’s viewing habits and so on, but the fact is, in many cases there are no parents! What do you do about those kids?

I understand that Night Trap was not rated when it was first released, and then it was rated at the MA-13 level. Is that correct?

Bill White: Once it was rated, it was rated MA-17, Senator.

Senator Dorgan: Do you consider those over the age of thirteen to be mature?

Bill White: MA-13 is appropriate for teenagers and older.

Senator Dorgan: Isn’t the word “mature” attached to that rating?

Bill White: Yes.

Senator Dorgan: So, the presumption is that those over thirteen years of age are mature?

Bill White: Yes, with parental discretion.

Senator Dorgan: Are you kidding me? What kind of rating system identifies kids of thirteen as mature?

Bill White: It’s similar to the motion-picture rating of PG-13.

Senator Dorgan: We have some responsibility to protect children. We protect them from access to alcohol. We protect them in a whole range of areas. With respect to a videogame in which a woman is grabbed by the neck with a hook, drilled in the neck with a tool, or someone grabs the heart out of a character… we ought to have just as much concern about protecting our children from that sort of trash.

Mr. White, I’ve read your written statement, and I honestly think you don’t understand what we’re talking about here. In your final point about Night Trap, you write this: “Finally, there is some research indicating a short-term, momentary increase in playful aggressive behavior after playing videogames or watching violent television programming. But there is no research indicating this has any lasting impact. In fact, quite the opposite is true.” My sense is that you just don’t get what this hearing is about. You say, “This is not for kids. This is adult entertainment.” But you and I both know that kids will have wide access to it. We need to exercise responsibility and protect those children. Profiting at the expense of America’s kids is not moral profit.

Senator Lieberman: Mr. White, in your rating system you have a category of “non-approved.” The latest version of your guidelines reads: “As always, Sega will not approve products which include material that encourages criminality of any kind.” Isn’t a game that requires kids to point a gun at the television set encouraging criminality? We’re all aware of the incredible outbreak of gun violence in this country.

Bill White: We rely on the independent rating council to make those decisions because we in corporate management are not psychologists or sociologists. They have rated that product MA-17: only appropriate for adults.

I’d also like to point out that Nintendo produces a “rapid-fire machine gun” that uses the same technology. They have no rating on that product to suggest it is for adults.

Senator Lieberman: Mr. Lincoln, what game is that for?

Howard Lincoln: This is something that can be purchased for the Super NES. It’s called the “Super Scope.” Sega’s gun is called the “Justifier.” Our gun is for target-shooting. [There is laughter in the room after Lincoln makes this statement, although it was apparently not intended in jest.]

Lethal Enforcers, the game you’re speaking of, was initially rejected by Nintendo. We told the licensee that they would have to remove the name “Justifier” and we wouldn’t approve their packaging. Because of this, that game is not yet out on Nintendo.

Senator Lieberman: I hope you’ll think again before it goes onto the market because this is about more than the name “Justifier.” That is a handgun, pure and simple. No matter what name is on it, putting it in the hands of kids gives them the wrong idea. And I must say that your Super Scope also looks like an assault weapon to me.

Pursuant to your commitment to have a rating system, would you commit to do everything in your power to ensure that the ratings are not only visible on your products but visible in their advertising?

Bill White: Yes. The ratings should be prominent in advertising. You have our commitment to that. I don’t believe that same commitment has been made by Nintendo.

Howard Lincoln: I don’t know what he’s talking about there. As you well know, we have made a commitment to the rating system. But we are concerned that a rating system by itself might just lead to an open season on more violent games. The commitment I’ll make is that we’ll be the first ones back here if what we see is just business as usual. If we’re going to have a rating system, let’s put some meat into it and enforce it.

Senator Lieberman: Ms. Rosenthal, will you make the same commitment?

Ileen Rosenthal: Absolutely. The software industry is sincerely interested in the well-being of children.

Senator Lieberman: A final question for Mr. White. In your guidelines, you say you won’t publish products which denigrate any ethnic, racial, sexual, or religious group. Obviously I think that Night Trap denigrates a sexual group, namely women. But there’s a Konami ad which talks about “fighting ninjas in Chinatown.” Obviously that’s culturally inaccurate since ninja are in the folklore of Japan, not China. But do you agree that that’s in violation of the spirit of your own guidelines?

Bill White: Senator, those guidelines refer to the games themselves, not to their advertising. And that’s not our advertisement.

Senator Lieberman: Would you include that kind of language — “fighting ninjas in Chinatown” — in your own advertising?

Bill White: No. We strongly discourage that kind of language.

Senator Lieberman: Okay.

Senator Kohl and I are very serious about this, and intend to stay with it. I hope you’re able as an industry to come up with a rating system that addresses everyone’s concerns, but I think the best guarantee of that is for us to stick to the course we’ve set. I know there’s a tremendous market incentive here, but the best thing you can do — not only for the country but for yourselves — is to self-regulate. It will be important for the ultimate credibility and success of your business. And it’s important to the maintenance of our Constitutional freedoms. Because unless people start to self-regulate, the sense that we’re out of control is going to lead to genuine threats to our freedom. We’ve come a ways today, but we’ve got a long ways to go yet. I hope you’ll become the leaders in this, so we don’t have to worry about it anymore.

Senator Kohl: We have an awful lot of freedom in America. But there’s always that tendency to use the system down to the last inch to maximize profit. We can push it too far, and do great damage to our country. We all hope very much that you take a step back and consider our common responsibilities as citizens. Thank you.

(The full hearing is available for viewing in the C-SPAN archives.)

Footnotes
1 Kohl was a noted proponent of commonsense gun control, especially among minors.
2 Konami’s Lethal Enforcers, a light-gun-based shooting-gallery game, was, like Mortal Kombat, one of the big arcade hits of 1992, and was likewise now coming home on consoles and computers.
3 The dream of streaming videogame content in the same way that one streams television programs was an old one already by this point, dating back at least to the beginning of the previous decade. Despite many bold predictions and numerous scattershot attempts at actual implementation, it’s never quite come to pass in the comprehensive way that seemed so well-nigh inevitable in 1993.
4 Sega had actually rolled out its own content-rating system just before the release of Mortal Kombat in September of 1993. Shortly thereafter, Sega and Nintendo, feeling the heat not only from Washington but from such powerful entities as California’s attorney general, did indeed agree to work together on a joint rating system — an unusual step for two companies whose relationship had heretofore been defined by their mutual loathing. On the very morning of this hearing, most of the rest of the video- and computer-game industry signed on to the initiative.
5 The body of psychological research on the subject was — and is — nowhere near as clear-cut as this formulation implies. And the industry was, of course, motivated to implement a voluntary rating system by fear of government action rather than a sudden conviction that its products could indeed be harmful to children.
6 Passed on November 30, 1993, the Brady Bill was a landmark piece of gun-control legislation which mandated that all prospective purchasers of a gun must first pass a background check and then wait five days to take delivery of their weapon.
7 The player’s objective in Night Trap, of course, is not to trap and kill women but rather to protect them from others who seek to do so.
8 This is, at best, an extremely dubious etymology. The Danish word “tøj,” which is pronounced like the English “toy,” actually means clothing. While “værktøjer” means tools, there is no single word for “little tools”: one would need to say “små værktøjer” to get that concept across. The Danish word for toy, on the other hand, is “legetøj.” If the English word descends from the Scandinavian languages at all, it is almost certainly an abbreviated version of this word. Even this, however, is by no means a firmly established etymology.
9 I have seen no evidence in my own research that there was ever a time when videogames were as popular among girls as boys.
10 This chip also allowed Nintendo to ensure that they collected a royalty from each and every game that was sold for their console — something Atari wouldn’t or couldn’t do during the first videogame craze.
11 It wouldn’t have been technically feasible to release a Nintendo version of Night Trap because the company had no CD drive in its product catalog.
12 Rosenthal doesn’t make it clear here that, in keeping with the computer focus of the SPA, this list includes only games for computers, not consoles. Further, only games that were sold as boxed products in retail stores are included; the list misses entirely the vibrant shareware scene, where games like id’s Wolfenstein 3D were already pushing the envelope on gore and violence at least as much as Sega. Thus it provides a somewhat distorted view of the overall state of gaming even on computers.