
Televising the Revolution

When we finished Broken Sword, the managing director of Virgin [Interactive] called me into his office and showed me a game from Argonaut [Software] called Creature Shock. He said, “These are the games you should be writing, not adventure games. These are the games. This is the future.”

— Charles Cecil, co-founder of Revolution Software

Broken Sword, Revolution Software’s third point-and-click adventure game, was released for personal computers in September of 1996. Three months later, it arrived on the Sony PlayStation console, so that it could be enjoyed on television as well as monitor screens. And therein lies a tale in itself.

Prior to this point, puzzle-based adventure games of the traditional stripe had had a checkered career on the consoles, for reasons as much technical as cultural. They were a difficult fit with the Nintendo Entertainment System (NES), the console du jour in the United States during the latter 1980s, thanks to the small capacity of the cartridges that machine used to host its games, its lack of means for easily storing state so that one could return to a game where one had left off after spending time away from the television screen, and the handheld controllers it used that were so very different from a mouse, joystick, and/or keyboard. Still, these challenges didn’t stop some enterprising studios from making a go of it, tempted as they were by the huge installed base of Nintendo consoles. Over the course of 1988 and 1989, ICOM Simulations managed to port to the NES Deja Vu, Uninvited, and Shadowgate; the last in particular really took off there, doing so well that it is better remembered as a console than a computer game today. In 1990, LucasArts[1] did the same with their early adventure Maniac Mansion; this port too was surprisingly playable, if also rather hilariously Bowdlerized to conform to Nintendo’s infamously strict censorship regime.

But as the 1990s began, “multimedia” was becoming the watchword of adventure makers on computers. By 1993, the era of the multimedia “interactive movie” was in full swing, with games shipping on CD — often multiple CDs — and often boasting not just voice acting but canned video clips of real actors. Such games were a challenge of a whole different order even for the latest generation of 16-bit consoles. Sierra On-Line and several other companies tried mightily to cram their adventure games onto the Sega Genesis,[2] a popular console for which one could purchase a CD drive as an add-on product. In the end, though, they gave it up as technically impossible; the Genesis’s color palette and memory space were just too tiny, its processor just too slow.

But then, along came the Sony PlayStation.

For all that the usual focus of these histories is computer games, I’ve already felt compelled to write at some length about the PlayStation here and there. As I’ve written before, I consider it the third socially revolutionary games console, after the Atari VCS and the Nintendo Entertainment System. Its claim to that status involves both culture and pure technology. Sony marketed the PlayStation to a new demographic: to hip young adults rather than the children and adolescents that Nintendo and its arch-rival Sega had targeted. Meanwhile the PlayStation hardware, with its built-in CD-drive, its 32-bit processor, its 2MB of main memory and 1MB of graphics memory, its audiophile-quality sound system, and its handy memory cards for saving up to 128 K of state at a time, made ambitious long-form gaming experiences easier than ever before to realize on a console. The two factors in combination opened a door to whole genres of games on the PlayStation that had heretofore been all but exclusive to personal computers. Its early years brought a surprising number of these computer ports, such as real-time strategy games like Command & Conquer and turn-based strategy games like X-COM. And we can also add to that list adventure games like Broken Sword.

Their existence was largely thanks to the evangelizing efforts of Sony’s own new PlayStation division, which seldom placed a foot wrong during these salad days. Unlike Nintendo and Sega, who seemed to see computer and console games as existing in separate universes, Sony was eager to bridge the gap between the two, eager to bring a wider variety of games to the PlayStation. And they were equally eager to push their console in Europe, where Nintendo had barely been a presence at all to this point and which even Sega had always treated as a distant third in importance to Japan and North America.

Thus Revolution Software got a call one day while the Broken Sword project was still in its first year from Phil Harrison, an old-timer in British games who knew everyone and had done a bit of everything. “Look, I’m working for Sony now and there’s this new console going to be produced called the PlayStation,” he told Charles Cecil, the co-founder and tireless heart and soul of Revolution. “Are you interested in having a look?”

Cecil was. He was indeed.

Thoroughly impressed by the hardware and marketing plans Harrison had shown him, Cecil went to Revolution’s publisher Virgin Interactive to discuss making a version of Broken Sword for the PlayStation as well. “That’s crazy, that’s not going to work at all,” said Virgin according to Cecil himself. Convinced the idea was a non-starter, both technically and commercially, they told him he was free to shop a PlayStation Broken Sword elsewhere for all they cared. So, Cecil returned to his friend Phil Harrison, who brokered a deal for Sony themselves to publish a PlayStation version in Europe as a sort of proof of concept. Revolution worked on the port on the side and on their own dime while they finished the computer game. Sony then shipped this PlayStation version in December of 1996.

Broken Sword on a computer…

…and on the PlayStation, where it’s become more bleary-eyed.

To be sure, it was a compromised creation. Although the PlayStation was a fairly impressive piece of kit by console standards, it left much to be desired when compared to even a mid-range gaming computer. The lovely graphics of the original had to be downgraded to the PlayStation’s lower resolution, even as the console’s relatively slow CD drive and lack of a hard drive for storing frequently accessed data made them painfully sluggish to appear on the television screen; one spent more time waiting for the animated cut scenes to load than watching them, their dramatic impact sometimes being squandered by multiple loading breaks within a scene. Even the voiced dialog could take unnervingly long to unspool from disc. Then, too, pointing and clicking was nowhere near as effortless using a game controller as it was with a mouse. (Sony actually did sell a mouse as an optional peripheral, but few people bought one.) Perhaps most worrisome of all, though, was the nature of the game itself. How would PlayStation gamers react to a cerebral, puzzle-oriented and narrative-driven experience like this?

The answer proved to be, better than some people — most notably those at Virgin — might have expected. Broken Sword’s Art Deco classicism may have looked a bit out of place in the lurid, anime-bedecked pages of the big PlayStation magazines, but they and their readers generally treated it kindly if somewhat gingerly. Broken Sword sold 400,000 copies on the PlayStation in Europe. Granted, these were not huge numbers in the grand scheme of things. On a console that would eventually sell more than 100 million units, it was hard to find a game that didn’t sell well into the six if not seven (or occasionally eight) figures. By Revolution’s modest standards, however, the PlayStation port made all the difference in the world, selling as it did at least three times as many copies as the computer version despite its ample reasons for shirking side-by-side comparisons. Its performance in Europe was even good enough to convince the American publisher THQ to belatedly pick it up for distribution in the United States as well, where it shifted 100,000 or so more copies. “The PlayStation was good for us,” understates Charles Cecil today.

It was a godsend not least because Revolution’s future as a maker of adventure games for computers was looking more and more doubtful. Multinational publishers like Virgin tended to take the American market as their bellwether, and this did not bode well for Revolution, given that Broken Sword had under-performed there in relation to its European sales. To be sure, there were proximate causes for this that Revolution could point to: Virgin’s American arm, never all that enthused about the game, had given it only limited marketing and saddled it with the terrible alternative title of Circle of Blood, making it sound more like another drop in the ocean of hyper-violent DOOM clones than a cerebral exercise in story-driven puzzle-solving. At the same time, though, it was hard to deny that the American adventure market in general was going soggy in the middle; 1996 had produced no million-plus-selling mega-hit in the genre to stand up alongside 1995’s Phantasmagoria, 1994’s Myst, or 1993’s The 7th Guest. Was Revolution’s sales stronghold of Europe soon to follow the industry’s bellwether? Virgin suspected it was.

So, despite having made three adventure games in a row for Virgin that had come out in the black on the global bottom line, Revolution had to lobby hard for the chance to make a fourth one. “It was frustrating for us,” says Revolution programmer Tony Warriner, “because we were producing good games that reviewed and sold well, but we had to beg for every penny of development cash. There was a mentality within publishing that said you were better off throwing money around randomly, and maybe scoring a surprise big hit, instead of backing steady but profitable games like Broken Sword. But this sums up the problem adventures have always had: they sell, but not enough to turn the publishers on.”

We might quibble with the “always” in Warriner’s statement; there was a time, lasting from the dawn of the industry through the first half of the 1990s, when adventures were consistently among the biggest-selling titles of all on computers. But this was not the case later on. Adventure games became mid-tier niche products from the second half of the 1990s on, capable of selling in consistent but not huge numbers, capable of raking in modest profits but not transformative ones. Similar middling categories had long existed in other mass-media industries, from film to television, books to music, all of which industries had been mature enough to profitably cater to their niche customers in addition to the heart of the mainstream. The computer-games industry, however, was less adept at doing so.

The problem there boiled down to physical shelf space. The average games shop had a couple of orders of magnitude fewer titles on its shelves at any given time than the average book or record store. Given how scarce retail space was, nobody — not the distributors, not the publishers, certainly not the retailers themselves — was overly enthusiastic about filling it with product that wasn’t in one of the two hottest genres in gaming at the time, the first-person shooter and the real-time strategy. This tunnel vision had a profound effect on the games that were made and sold during the years just before and after the millennium, until the slow rise of digital distribution began to open fresh avenues of distribution for more nichey titles once again.

In light of this situation, what’s perhaps most remarkable is how many computer games made between 1995 and 2005 were not first-person shooters or real-time strategies. More dedicated, passionate developers than you might expect found ways to make their cases to the publishers and get their games funded in spite of the remorseless logic of the extant distribution systems.

Revolution Software found a way to be among this group, at least for a while — but Virgin’s acquiescence to a Broken Sword II didn’t come easy. Revolution had to agree to make the sequel in just one year, as compared to the two and a half years they had spent on its predecessor, and for a cost of just £500,000 rather than £1 million. The finished game inevitably reflects the straitened circumstances of its birth. But that isn’t to say that it’s a bad game. Far from it.

Broken Sword II: The Smoking Mirror kicks off six months after the conclusion of the first game. American-in-Paris George Stobbart, that game and this one’s star, has just returned to France after dealing with the death of his father Stateside. There he’s reunited with Nico Collard, the fetching Parisian reporter who helped him last time around and whom George has a definite hankering for, to the extent of referring to her as his “girlfriend”; Nico is more ambiguous about the nature of their relationship. At any rate, an ornately carved and painted stone, apparently Mayan in origin, has come into her possession, and she has asked George to accompany her to the home of an archaeologist who might be able to tell them something about it. Unfortunately, they’re ambushed by thugs as soon as they arrive; Nico is kidnapped, while George is left tied to a chair in a room whose only other inhabitants are a giant poisonous spider and a rapidly spreading fire.

If this game doesn’t kick off with the literal bang of an exploding bomb like last time, it’s close enough. “I believe that a videogame must declare the inciting incident immediately so the player is clear on what their character needs to do and, equally importantly, why,” says Charles Cecil.

With your help, George will escape from his predicament and track down and rescue Nico before she can be spirited out of the country, even as he also retrieves the Mayan stone from the dodgy acquaintance in whose safekeeping she left it and traces their attackers back to Central America. And so George and Nico set off together across the ocean to sun-kissed climes, to unravel another ancient prophecy and prevent the end of the world as we know it for the second time in less than a year.

Broken Sword II betrays its rushed development cycle most obviously in its central conspiracy. For all that the first game’s cabal of Knights Templar was bonkers on the face of it, it was grounded in real history and in a real, albeit equally bonkers classic book of pseudo-history, The Holy Blood and the Holy Grail. Mayans, on the other hand, are the most generic adventure-game movers and shakers this side of Atlanteans. “I was not as interested in the Mayans, if I’m truthful,” admits Charles Cecil. “Clearly human sacrifices and so on are interesting, but they were not on the same level of passion for me as the Knights Templar.”

Lacking the fascination of uncovering a well-thought-through historical mystery, Broken Sword II must rely on its set-piece vignettes to keep its player engaged. Thankfully, these are mostly still strong. Nico eventually gets to stop being the damsel in distress, becoming instead a driving force in the plot in her own right, so much so that you the player control her rather than George for a quarter or so of the game; this is arguably the only place where the second game actually improves on the first, which left Nico sitting passively in her flat waiting for George to call and collect hints from her most of the time. Needless to say, the sexual tension between George and Nico doesn’t get resolved, the writers having learned from television shows like Moonlighting and Northern Exposure that the audience’s interest tends to dissipate as soon as “Will they or won’t they” becomes “They will!” “We could very easily have had them having sex,” says Cecil, “but that would have ruined the relationship between these two people.”

The writing remains consistently strong in the small moments, full of sly humor and trenchant observations. Some fondly remembered supporting characters return, such as Duane and Pearl, the two lovably ugly American tourists you met in Syria last time around, who’ve now opted to take a jungle holiday, just in time to meet George and Nico once again. (Isn’t coincidence wonderful?)

And the game is never less than fair, with occasional deaths to contend with but no dead ends. This manifestation of respect for their players has marked Revolution’s work since Beneath a Steel Sky; they can only be applauded for it, given how many bigger, better-funded studios got this absolutely critical aspect of their craft so very wrong back in the day. The puzzles themselves are pitched perfectly in difficulty for the kind of game this is, being enough to make you stop and think from time to time but never enough to stop you in your tracks.

Broken Sword or Monkey Island?

In the end, then, Broken Sword II suffers only by comparison with Broken Sword I, which does everything it does well just that little bit better. The backgrounds and animation here, while still among the best that the 1990s adventure scene ever produced, aren’t quite as lush as what we saw last time. The series’s Art Deco and Tintin-inspired aesthetic sensibility, seen in no other adventure games of the time outside of the equally sumptuous Last Express, loses some focus when we get to Central America and the Caribbean. Here the game takes on an oddly LucasArts-like quality, what with the steel-drum background music and all the sandy beaches and dark jungles and even a monkey or two flitting around. Everywhere you look, the seams show just a little more than they did last time; the original voice of Nico, for example, has been replaced by that of another actress, making the opening moments of the second game a jarring experience for those who played the first. (Poor Nico would continue to get a new voice with each subsequent game in the series. “I’ve never had a bad Nico, but I’ve never had one I’ve been happy with,” says Cecil.)

But, again, we’re holding Broken Sword II up against some very stiff competition indeed; the first game is a beautifully polished production by any standard, one of the crown jewels of 1990s adventuring. If the sequel doesn’t reach those same heady heights, it’s never less than witty and enjoyable. Suffice to say that Broken Sword II is a game well worth playing today if you haven’t done so already.

It did not, however, sell even as well as its predecessor when it shipped for computers in November of 1997, serving more to justify than disprove Virgin’s reservations about making it in the first place. In the United States, it was released without its Roman numeral as simply Broken Sword: The Smoking Mirror, since that country had never seen a Broken Sword I. Thus even those Americans who had bought and enjoyed Circle of Blood had no ready way of knowing that this game was a sequel to that one. (The names were ironic not least in that the American game called Circle of Blood really did contain a broken sword, while the American game called Broken Sword did not.)

That said, in Europe too, where the game had no such excuses to rely upon, the sales numbers it put up were less satisfactory than before. A PlayStation version was released there in early 1998, but this too sold somewhat less than the first game, whose relative success in the face of its technical infelicities had perchance owed much to the novelty of its genre on the console. It was not so novel anymore: a number of other studios were also now experimenting with computer-style adventure games on the PlayStation, to mixed commercial results.

With Virgin having no interest in a Broken Sword III or much of anything else from Revolution, Charles Cecil negotiated his way out of the multi-game contract the two companies had signed. “The good and the great decided adventures [had] had their day,” he says. Broken Sword went on the shelf, permanently as far as anyone knew, leaving George and Nico in a lovelorn limbo while Revolution retooled and refocused. Their next game would still be an adventure at heart, but it would sport a new interface alongside action elements that were intended to make it a better fit on a console. For better or for worse, it seemed that the studio’s hopes for the future must lie more with the PlayStation than with computers.

Revolution Software was not alone in this; similar calculations were being made all over the industry. Thanks to the fresh technology and fresh ideas of the PlayStation, said industry was entering a new period of synergy and cross-pollination, one destined to change the natures of computer and console games equally. Which means that, for all that this site has always been intended to be a history of computer rather than console gaming, the PlayStation will remain an inescapable presence even here, lurking constantly in the background as both a promise and a threat.

Where to Get It: Broken Sword II: The Smoking Mirror is available as a digital download.


Sources: the book Grand Thieves and Tomb Raiders: How British Video Games Conquered the World by Magnus Anderson and Rebecca Levene; Retro Gamer 6, 31, 63, 146, and 148; GameFan of February 1998; PlayStation Magazine of February 1998; The Telegraph of January 4 2011. Online sources include Charles Cecil’s interviews with Anthony Lacey of Dining with Strangers, John Walker of Rock Paper Shotgun, Marty Mulrooney of Alternative Magazine Online, and Peter Rootham-Smith of Game Boomers.


1 LucasArts was actually still known as Lucasfilm Games at the time.
2 The Genesis was known as the Mega-Drive in Japan and Europe.


Putting the “J” in the RPG, Part 2: PlayStation for the Win

From the Seven Hills of Rome to the Seven Sages of China’s Bamboo Grove, from the Seven Wonders of the Ancient World to the Seven Heavens of Islam, from the Seven Final Sayings of Jesus to Snow White and the Seven Dwarfs, the number seven has always struck us as a special one. Hironobu Sakaguchi and his crew at Square, the people behind the Final Fantasy series, were no exception. In the mid-1990s, when the time came to think about what the seventh entry in the series ought to be, they instinctively felt that this one had to be bigger and better than any that had come before. It had to double down on all of the series’s traditional strengths and tropes to become the ultimate Final Fantasy. Sakaguchi and company would achieve these goals; the seventh Final Fantasy game has remained to this day the best-selling, most iconic of them all. But the road to that seventh heaven was not an entirely smooth one.

The mid-1990s were a transformative period, both for Square as a studio and for the industry of which it was a part. For the former, it was “a perfect storm, when Square still acted like a small company but had the resources of a big one,” as Matt Leone of Polygon writes.  Meanwhile the videogames industry at large was feeling the ground shift under its feet, as the technologies that went into making and playing console-based games were undergoing their most dramatic shift since the Atari VCS had first turned the idea of a machine for playing games on the family television into a popular reality. CD-ROM drives were already available for Sega’s consoles, with a storage capacity two orders of magnitude greater than that of the most capacious cartridges. And 3D graphics hardware was on the horizon as well, promising to replace pixel graphics with embodied, immersive experiences in sprawling virtual worlds. Final Fantasy VII charged headlong into these changes like a starving man at a feast, sending great greasy globs of excitement — and also controversy — flying everywhere.

The controversy came in the form of one of the most shocking platform switches in the history of videogames. To fully appreciate the impact of Square’s announcement on January 12, 1996, that Final Fantasy VII would run on the new Sony PlayStation rather than Nintendo’s next-generation console, we need to look a little closer at the state of the console landscape in the years immediately preceding it.

Through the first half of the 1990s, Nintendo was still the king of console gaming, but it was no longer the unchallenged supreme despot it had been during the 1980s. Nintendo had always been conservative in terms of hardware, placing its faith, like Apple Computer in an adjacent marketplace, in a holistic customer experience rather than raw performance statistics. As part and parcel of this approach, every game that Nintendo agreed to allow into its walled garden was tuned and polished to a fine sheen, having any jagged edges that might cause anyone any sort of offense whatsoever painstakingly sanded away. An upstart known as Sega had learned to live in the gaps this business philosophy opened up, deploying edgier games on more cutting-edge hardware. As early as December of 1991, Sega began offering its Japanese customers a CD-drive add-on for its current console, the Mega Drive (known as the Sega Genesis in North America, which received the CD add-on the following October). Although the three-year-old Mega Drive’s intrinsic limitations made this early experiment in multimedia gaming for the living room a somewhat underwhelming affair — there was only so much you could do with 61 colors at a resolution of 320 × 240 — it perfectly illustrated the differences in the two companies’ approaches. While Sega threw whatever it had to hand at the wall just to see what stuck, Nintendo held back like a Dana Carvey impression of George Herbert Walker Bush: “Wouldn’t be prudent at this juncture…”

Sony was all too well-acquainted with Nintendo’s innate caution. As the co-creator of the CD storage format, it had signed an agreement with Nintendo back in 1988 to make a CD drive for the upcoming Super Famicom console (which was to be known as the Super Nintendo Entertainment System in the West) as soon as the technology had matured enough for it to be cost-effective. By the time the Super Famicom was released in 1990, Sony was hard at work on the project. But on May 29, 1991, just three days before a joint Nintendo/Sony “Play Station” was to have been demonstrated to the world at the Summer Consumer Electronics Show in Chicago, Nintendo suddenly backed out of the deal, announcing that it would instead be working on CD-ROM technology with the Dutch electronics giant Philips — ironically, Sony’s partner in the creation of the original CD standard.

This prototype of the Sony “Play Station” surfaced in 2015.

Nintendo’s reason for pulling out seems to have come down to the terms of the planned business relationship. Nintendo, whose instinct for micro-management and tough deal-making was legendary, had uncharacteristically promised Sony a veritable free hand, allowing it to publish whatever CD-based software it wanted without asking Nintendo’s permission or paying it any royalty whatsoever. In fact, given that a contract to that effect had already been signed long before the Consumer Electronics Show, Sony was, legally speaking, still free to continue with the Play Station on its own, piggybacking on the success of Nintendo’s console. And initially it seemed inclined to do just that. “Sony will throw open its doors to software makers to produce software using music and movie assets,” it announced at the show, promising games based on its wide range of media properties, from the music catalog of Michael Jackson to the upcoming blockbuster movie Hook. Even worse from Nintendo’s perspective, “in order to promote the Super Disc format, Sony intends to broadly license it to the software industry.” Nintendo’s walled garden, in other words, looked about to be trampled by a horde of unwashed, unvetted, unmonetized intruders charging through the gate Sony was ready and willing to open to them. The prospect must have sent the control freaks inside Nintendo’s executive wing into conniptions.

It was a strange situation any way you looked at it. The Super Famicom might soon become the host of not one but two competing CD-ROM solutions, an authorized one from Philips and an unauthorized one from Sony, each using different file formats for a different library of games and other software. (Want to play Super Mario on CD? Buy the Philips drive! Want Michael Jackson? Buy the Play Station!)

In the end, though, neither of the two came to be. Philips decided it wasn’t worth distracting consumers from its own stand-alone CD-based “multimedia box” for the home, the CD-i. (Philips wasn’t, however, above exploiting the letter of its contract with Nintendo to make a Mario game and three substandard Legend of Zelda games available for the CD-i.) Sony likewise began to wonder in the aftermath of its defiant trade-show announcement whether it was really in its long-term interest to become an unwanted squatter on Nintendo’s real estate.

Still, the episode had given some at Sony a serious case of videogame jealousy. It was clear by now that this new industry wasn’t a fad. Why shouldn’t Sony be a part of it, just as it was an integral part of the music, movie, and television industries? On June 24, 1992, the company held an unusually long and heated senior-management debate. After much back and forth, CEO Norio Ohga pronounced his conclusion: Sony would turn the Play Station into the PlayStation, a standalone CD-based videogame console of its own, both a weapon with which to bludgeon Nintendo for its breach of trust and — ultimately more importantly — an entrée to the fastest-growing entertainment sector in the world.

The project was handed to one Ken Kutaragi, who had also been in charge of the aborted Super Famicom CD add-on. He knew precisely what he wanted Sony’s first games console to be: a fusion of CD-ROM with another cutting-edge technology, hardware-enabled 3D graphics. “From the mid-1980s, I dreamed of the day when 3D computer graphics could be enjoyed at home,” he says. “What kind of graphics could we create if we combined a real-time, 3D computer-graphics engine with CD-ROM? Surely this would develop into a new form of entertainment.”

It took him and his engineers a little over two years to complete the PlayStation, which in addition to a CD drive and a 3D-graphics system sported a 32-bit MIPS microprocessor running at 34 MHz, 3 MB of memory (of which 1 MB was dedicated to graphics alone), audiophile-quality sound hardware, and a slot for 128 K memory cards that could be used for saving game state between sessions, ensuring that long-form games like JRPGs would no longer need to rely on tedious manual-entry codes or balky, unreliable cartridge-mounted battery packs for the purpose.


In contrast to the consoles of Nintendo, which seemed almost self-consciously crafted to look like toys, and those of Sega, which had a boy-racer quality about them, the Sony PlayStation looked stylish and adult — but not too adult. (The stylishness came through despite the occasionally mooted comparisons to a toilet.)

The first Sony PlayStations went on sale in Tokyo’s famed Akihabara electronics district on December 3, 1994. Thousands camped out in line in front of the shops the night before. “It’s so utterly different from traditional game machines that I didn’t even think about the price,” said one starry-eyed young man to a reporter on the scene. Most of the shops were sold out before noon. Norio Ohga was mobbed by family and friends in the days that followed, all begging him to secure them a PlayStation for their children before Christmas. It was only when that happened, he would later say, that he fully realized what a game changer (pun intended) his company had on its hands. Just like that, the fight between Nintendo and Sega — the latter had a new 32-bit CD-based console of its own, the Saturn, while the former was taking it slowly and cautiously, as usual — became a three-way battle royal.

The PlayStation was an impressive piece of kit for the price, but it was, as always, the games themselves that really sold it. Ken Kutaragi had made the rounds of Japanese and foreign studios, and found to his gratification that many of them were tired of being under the heavy thumb of Nintendo. Sony’s garden was to be walled just like Nintendo’s — you had to pay it a fee to sell games for its console as well — but it made a point of treating those who made games for its system as valued partners rather than pestering supplicants: the financial terms were better, the hardware was better, the development tools were better, the technical support was better, the overall vibe was better. Nintendo had its own home-grown line of games for its consoles to which it always gave priority in every sense of the word, a conflict of interest from which Sony was blessedly free.[2] Game cartridges were complicated and expensive to produce, and the factories that made them for Nintendo’s consoles were all controlled by that company. Nintendo was notoriously slow to approve new production runs of any but its own games, leaving many studios convinced that their smashing success had been throttled down to a mere qualified one by a shortage of actual games in stores at the critical instant. CDs, on the other hand, were quick and cheap to churn out from any of dozens of pressing plants all over the world.

Citing advantages like these, Kutaragi found it was possible to tempt even as longstanding a Nintendo partner as Namco — the creator of the hallowed arcade classics Galaxian and Pac-Man — into committing itself “100 percent to the PlayStation.” The first fruit of this defection was Ridge Racer, a port of a stand-up arcade game that became the new console’s breakout early hit.

Square was also among the software houses that Ken Kutaragi approached, but he made no initial inroads there. For all the annoyances of dealing with Nintendo, it still owned the biggest player base in the world, one that had treated Final Fantasy very well indeed, to the tune of more than 9 million games sold to date in Japan alone. This was not a partner that one abandoned lightly — especially not with the Nintendo 64, said partner’s own next-generation console, due at last in 1996. It promised to be every bit as audiovisually capable as the Sony PlayStation or Sega Saturn, even as it was based around a 64-bit processor in place of the 32-bit units of the competition.

Indeed, in many ways the relationship between Nintendo and Square seemed closer than ever in the wake of the PlayStation’s launch. When Yoshihiro Maruyama joined Square in September of 1995 to run its North American operations, he was told that “Square will always be with Nintendo. As long as you work for us, it’s basically the same as working for Nintendo.” Which in a sense he literally was, given that Nintendo by now owned a substantial chunk of Square’s stock. In November of 1995, Nintendo’s president Hiroshi Yamauchi cited the Final Fantasy series as one of his consoles’ unsurpassed crown jewels — eat your heart out, Sony! — at Shoshinkai, Nintendo’s annual press shindig and trade show. As its farewell to the Super Famicom, Square had agreed to make Super Mario RPG: Legend of the Seven Stars, dropping Nintendo’s Italian plumber into a style of game completely different from his usual fare. Released in March of 1996, it was a predictably huge hit in Japan, while also, encouragingly, leveraging the little guy’s Stateside popularity to become the most successful JRPG since Final Fantasy I in those harsh foreign climes.

But Super Mario RPG wound up marking the end of an era in more ways than Nintendo had imagined: it was not just Square’s last Super Famicom RPG but its last major RPG for a Nintendo console, full stop. For just as it was in its last stages of development, there came the earthshaking announcement of January 12, 1996, that Final Fantasy was switching platforms to the PlayStation. Et tu, Square? “I was kind of shocked,” Yoshihiro Maruyama admits. As was everyone else.

The Nintendo 64, which looked like a toy — and an anachronistic one at that — next to the PlayStation.

Square’s decision was prompted by what seemed to have become an almost reactionary intransigence on the part of Nintendo when it came to the subject of CD-ROM. After the two abortive attempts to bring CDs to the Super Famicom, everyone had assumed as a matter of course that they would be the storage medium of the Nintendo 64. It was thus nothing short of baffling when the first prototypes of the console were unveiled in November of 1995 with no CD drive built in and not even any option on the horizon for adding one. Nintendo’s latest and greatest was instead to live or die with old-school cartridges, which had a capacity of just 64 MB, one-tenth that of a CD.

Why did Nintendo make such a counterintuitive choice? The one compelling technical argument for sticking with cartridges was the loading time of CDs, a mechanical storage medium rather than a solid-state one. Nintendo’s ethos of user-friendly accessibility had always insisted that a game come up instantly when you turned the console on and play without interruption thereafter. Nintendo believed, with considerable justification, that this quality had been the not-so-secret weapon in its first-generation console’s victorious battle against floppy-disk-based 8-bit American microcomputers that otherwise boasted similar audiovisual and processing capabilities, such as the Commodore 64. The PlayStation CD drive, which could transfer 300 K per second into memory, was many, many times faster than the Commodore 64’s infamously slow disk drive, but it wasn’t instant. A cartridge, on the other hand, for all practical purposes was.

Fair enough, as far as it went. Yet there were other, darker insinuations swirling around the games industry which had their own ring of truth. Nintendo, it was said, was loath to give up its stranglehold on the means of production of cartridges and embrace commodity CD-stamping facilities. Most of all, many sensed, the decision to stay with cartridges was bound up with Nintendo’s congenital need to be different, and to assert its idiosyncratic hegemony by making everyone else dance to its tune while it was at it. The question now was whether it had taken this arrogance too far and was about to dance itself into irrelevance, while the makers of third-party games moved on to other, equally viable platforms.

Exhibit Number One of same was the PlayStation, which seemed tailor-made for the kind of big, epic game that every Final Fantasy to date had strained to be. It was far easier to churn out huge quantities of 3D graphics than of hand-drawn pixel art, while the staggering storage capacity of CD-ROM gave Square someplace to keep it all — with, it should not be forgotten, the possibility of finding even more space by the simple expedient of shipping a game on multiple CDs, another affordance that cartridges did not allow. And then there were those handy little memory cards for saving state. Those benefits were surely worth trading a little bit of loading time for.

But there was something else about the PlayStation as well that made it an ideal match for Hironobu Sakaguchi’s vision of gaming. Especially after the console arrived in North America and Europe in September of 1995, it fomented a sweeping change in the way the gaming hobby was perceived. “The legacy of the original Playstation is that it took gaming from a pastime that was for young people or maybe slightly geeky people,” says longtime Sony executive Jim Ryan, “and it turned it into a highly credible form of mass entertainment, really comparable with the music business and the movie business.” Veteran game designer Cliff Bleszinski concurs: “The PlayStation shifted the console from having an almost toy-like quality into consumer electronics that are just as desired by twelve-year-olds as they are by 35-year-olds.”

Rather than duking it out with Nintendo and Sega for the eight-to-seventeen age demographic, Sony shifted its marketing attention to young adults, positioning PlayStation gaming as something to be done before or after a night out at the clubs — or while actually at the clubs, for that matter: Sony paid to install the console in trendy nightspots all over the world, so that their patrons could enjoy a round or two of WipEout between trips to the dance floor. In effect, Sony told the people who had grown up with Nintendo and Sega that it was okay to keep on gaming, as long as they did it on a PlayStation from now on. Sony’s marketers understood that, if they could conquer this demographic, that success would automatically spill down into the high-school set that had previously been Sega’s bread and butter, since kids of that age are always aspiring to do whatever the university set is up to. Their logic was impeccable; the Sony PlayStation would destroy the Sega Saturn in due course.

For decades now, the hipster stoner gamer, slumped on the couch with controller in one hand and a bong in the other, has been a pop-culture staple. Sony created that stereotype in the space of a year or two in the 1990s. Whatever else you can say about it, it plays better with the masses than the older one of a pencil-necked nerd sitting bolt upright on his neatly made bed. David James, star goalkeeper for the Premier League football team Liverpool F.C., admitted that he had gotten “carried away” playing PlayStation the night before by way of explaining the three goals that he conceded in a match against Newcastle. It was hard to imagine substituting “Nintendo” or “Saturn” for “PlayStation” in that statement. In May of 1998, Sony would be able to announce triumphantly that, according to its latest survey, the average age of a PlayStation gamer was a positively grizzled 22. It had hit the demographic it was aiming for spot-on, with a spillover that reached both younger and older folks. David Ranyard, a member of Generation PlayStation who has had a varied and successful career in games since the millennium:

At the time of its launch, I was a student, and I’d always been into videogames, from the early days of arcades. I would hang around playing Space Invaders and Galaxian, and until the PlayStation came out, that kind of thing made me a geek. But this console changed all that. Suddenly videogames were cool — not just acceptable, but actually club-culture cool. With a soundtrack from the coolest techno and dance DJs, videogames became a part of [that] subculture. And it led to more mainstream acceptance of consoles in general.

The new PlayStation gamer stereotype dovetailed beautifully with the moody, angsty heroes that had been featuring prominently in Final Fantasy for quite some installments by now. Small wonder that Sakaguchi was more and more smitten with Sony.

Still, it was one hell of a bridge to burn; everyone at Square knew that there would be no going back if they signed on with Sony. Well aware of how high the stakes were for all parties, Sony declared its willingness to accept an extremely low per-unit royalty and to foot the bill for a lot of the next Final Fantasy game’s marketing, promising to work like the dickens to break it in the West. In the end, Sakaguchi allowed himself to be convinced. He had long run Final Fantasy as his own fiefdom at Square, and this didn’t change now: upper management rubber-stamped his decision to make Final Fantasy VII for the Sony PlayStation.

The announcement struck Japan’s games industry with all the force of one of Sakaguchi’s trademark Final Fantasy plot twists. For all the waves Sony had been making recently, nobody had seen this one coming. For its part, Nintendo had watched quite a number of studios defect to Sony already, but this one clearly hurt more than any of the others. It sold off all of its shares in Square and refused to take its calls for the next five years.

The raised stakes only gave Sakaguchi that much more motivation to make Final Fantasy VII amazing — so amazing that even the most stalwart Nintendo loyalists among the gaming population would be tempted to jump ship to the PlayStation in order to experience it. There had already been an unusually long delay after Final Fantasy VI, during which Square had made Super Mario RPG and another, earlier high-profile JRPG called Chrono Trigger, the fruit of a partnership between Hironobu Sakaguchi and Yuji Horii of Dragon Quest fame. (This was roughly equivalent in the context of 1990s Western pop culture to Oasis and Blur making an album together.) Now the rush was on to get Final Fantasy VII out the door within a year, while the franchise and its new platform the PlayStation were still smoking hot.

In defiance of the wisdom found in The Mythical Man-Month, Sakaguchi decided to both make the game quickly and make it amazing by throwing lots and lots of personnel at the problem: 150 people in all, three times as many as had worked on Final Fantasy VI. Cost was no object, especially wherever yen could be traded for time. Square spent the equivalent of $40 million on Final Fantasy VII in the course of just one year, blowing up all preconceptions of how much it could cost to make a computer or console game. (The most expensive earlier game that I’m aware of is the 1996 American “interactive movie” Wing Commander IV, which its developer Origin Systems claimed to have cost $12 million.) By one Square executive’s estimate, almost half of Final Fantasy VII‘s budget went for the hundreds of high-end Silicon Graphics workstations that were purchased, tools for the unprecedented number of 3D artists and animators who attacked the game from all directions at once. Their output came to fill not just one PlayStation CD but three of them — almost two gigabytes of raw data in all, or 30 Nintendo 64 cartridges.

Somehow or other, it all came together. Square finished Final Fantasy VII on schedule, shipping it in Japan on January 31, 1997. It went on to sell over 3 million copies there, bettering Final Fantasy VI‘s numbers by about half a million and selling a goodly number of PlayStations in the process. But, as that fairly modest increase indicates, the Japanese domestic market was becoming saturated; there were only so many games you could sell in a country of 125 million people, most of them too old or too young or lacking the means or the willingness to acquire a PlayStation. There was only one condition in which it had ever made sense to spend $40 million on Final Fantasy VII: if it could finally break the Western market wide open. Encouraged by the relative success of Final Fantasy VI and Super Mario RPG in the United States, excited by the aura of hipster cool that clung to the PlayStation, Square — and also Sony, which lived up to its promise to go all-in on the game — were determined to make that happen, once again at almost any cost. After renumbering the earlier games in the series in the United States to conform with its habit of only releasing every other Final Fantasy title there, Square elected to call this game Final Fantasy VII all over the world. For the number seven was an auspicious one, and this was nothing if not an auspicious game.

Final Fantasy VII shipped on a suitably auspicious date in the United States: September 7, 1997. It sold its millionth unit that December.

In November of 1997, it came to Europe, which had never seen any of the previous six mainline Final Fantasy games before and therefore processed the title as even more of a non sequitur. No matter. Wherever the game went, the title and the marketing worked — worked not only for the game itself, but for the PlayStation. Coming hot on the heels of the hip mega-hit Tomb Raider, it sealed the deal for the console, relegating the Sega Saturn to oblivion and the Nintendo 64 to the status of a disappointing also-ran. Paul Davies was the editor-in-chief of Britain’s Computer and Video Games magazine at the time. He was a committed Sega loyalist, he says, but

I came to my senses when Square announced Final Fantasy VII as a PlayStation exclusive. We received sheets of concept artwork and screenshots at our editorial office, sketches and stills from the incredible cut scenes. I was smitten. I tried and failed to rally. This was a runaway train. [The] PlayStation took up residence in all walks of life, moved from bedrooms to front rooms. It gained — by hook or by crook — the kind of social standing that I’d always wanted for games. Sony stomped on my soul and broke my heart, but my God, that console was a phenomenon.

Final Fantasy VII wound up selling well over 10 million units in all, as many as all six previous entries in the series combined, divided this time almost equally between Japan, North America, and Europe. Along the way, it exploded millions of people’s notions of what games could do and be — people who weren’t among the technological elite who invested thousands of dollars into high-end rigs to play the latest computer games, who just wanted to sit down in front of their televisions after a busy day with a plug-it-in-and-go console and be entertained.

Of course, not everyone who bought the game was equally enamored. Retailers reported record numbers of returns to go along with the record sales, as some people found all the walking around and reading to be not at all what they were looking for in a videogame.

In a way, I share their pain. Despite all its exceptional qualities, Final Fantasy VII fell victim rather comprehensively to the standard Achilles heel of the JRPG in the West: the problem of translation. Its English version was completed in just a couple of months at Square’s American branch, reportedly by a single employee working without supervision, then sent out into the world without a second glance. I’m afraid there’s no way to say this kindly: it’s almost unbelievably terrible, full of sentences that literally make no sense punctuated by annoying ellipses that are supposed to represent… I don’t know what. Pauses… for… dramatic… effect, perhaps? To say it’s on the level of a fan translation would be to insult the many fans of Japanese videogames in the West, who more often than not do an extraordinary job when they tackle such a project. That a game so self-consciously pitched as the moment when console-based videogames would come into their own as a storytelling medium and as a form of mass-market entertainment to rival movies could have been allowed out the door with writing like this boggles the mind. It speaks to what a crossroads moment this truly was for games, when the old ways were still in the process of going over to the new. Although the novelty of the rest of the game was enough to keep the poor translation from damaging its commercial prospects overmuch, the backlash did serve as a much-needed wake-up call for Square. Going forward, they would take the details of “localization,” as such matters are called in industry speak, much more seriously.

Oh, my…

Writerly sort that I am, I’ll be unable to keep myself from harping further on the putrid translation in the third and final article in this series, when I’ll dive into the game itself. Right now, though, I’d like to return to the subject of what Final Fantasy VII meant for gaming writ large. In case I haven’t made it clear already, let me state it outright now: its arrival and reception in the West in particular marked one of the watershed moments in the entire history of gaming.

It cemented, first of all, the PlayStation’s status as the overwhelming victor in the late-1990s edition of the eternal Console Wars, as it did the PlayStation’s claim to being the third socially revolutionary games console in history, after the Atari VCS and the original Nintendo Famicom. In the process of changing forevermore the way the world viewed videogames and the people who played them, the PlayStation eventually sold more than 100 million units, making it the best-selling games console of the twentieth century, dwarfing the numbers of the Sega Saturn (9 million units) and even the Nintendo 64 (33 million units), the latter of which was relegated to the status of the “kiddie console” on the playgrounds of the world. The underperformance of the Saturn followed by that of its successor the Dreamcast (again, just 9 million units sold) led Sega to abandon the console-hardware business entirely. Even more importantly, the PlayStation shattered the aura of remorseless, monopolistic inevitability that had clung to Nintendo since the mid-1980s; Nintendo would be for long stretches of the decades to come an also-ran in the very industry it had almost single-handedly resurrected. If the PlayStation was conceived partially as revenge for Nintendo’s jilting of Sony back in 1991, it was certainly a dish served cold — in fact, one that Nintendo is to some extent still eating to this day.

Then, too, it almost goes without saying that the JRPG, a sub-genre that had hitherto been a niche occupation of American gamers and virtually unknown to European ones, had its profile raised incalculably by Final Fantasy VII. The JRPG became almost overnight one of the hottest of all styles of game, as millions who had never imagined that a game could offer a compelling long-form narrative experience like this started looking for more of the same to play just as soon as its closing credits had rolled. Suddenly Western gamers were awaiting the latest JRPG releases with just as much impatience as Japanese gamers — releases not only in the Final Fantasy series but in many, many others as well. Their names, which tended to sound strange and awkward to English ears, were nevertheless unspeakably alluring to those who had caught the JRPG fever: Xenogears, Parasite Eve, Suikoden, Lunar, Star Ocean, Thousand Arms, Chrono Cross, Valkyrie Profile, Legend of Mana, Saiyuki. The whole landscape of console gaming changed; nowhere in the West in 1996, these games were everywhere in 1998 and 1999. It required a dedicated PlayStation gamer indeed just to keep up with the glut. At the risk of belaboring a point, I must note here that there were relatively few such games on the Nintendo 64, due to the limited storage capacity of its cartridges. Gamers go where the games they want to play are, and, for gamers in their preteens or older at least, those games were on the PlayStation.

From the computer-centric perspective that is this site’s usual stock in trade, perhaps the most important outcome of Final Fantasy VII was the dawning convergence it heralded between what had prior to this point been two separate worlds of gaming. Shortly before its Western release on the PlayStation, Square’s American subsidiary had asked the parent company for permission to port Final Fantasy VII to Windows-based desktop computers, perchance under the logic that, if American console gamers did still turn out to be nonplussed by the idea of a hundred-hour videogame despite marketing’s best efforts, American computer gamers would surely not be.

Square Japan agreed, but that was only the beginning of the challenge of getting Final Fantasy VII onto computer-software shelves. Square’s American arm called dozens of established computer publishers, including the heavy hitters like Electronic Arts. Rather incredibly, they couldn’t drum up any interest whatsoever in a game that was by now selling millions of copies on the most popular console in the world. At long last, they got a bite from the British developer and publisher Eidos, whose Tomb Raider had been 1996’s PlayStation game of the year whilst also — and unusually for the time — selling in big numbers on computers.

That example of cross-platform convergence notwithstanding, everyone involved remained a bit tentative about the Final Fantasy VII Windows port, regarding it more as a cautious experiment than the blockbuster-in-the-offing that the PlayStation version had always been treated as. Judged purely as a piece of Windows software, the end result left something to be desired, being faithful to the console game to a fault, to the extent of couching its saved states in separate fifteen-slot “files” that stood in for PlayStation memory cards.

The Windows version of Final Fantasy VII came out a year after the PlayStation version. “If you’re open to new experiences and perspectives in role-playing and can put up with idiosyncrasies from console-game design, then take a chance and experience some of the best storytelling ever found in an RPG,” concluded Computer Gaming World in its review, stamping the game “recommended, with caution.” Despite that less than rousing endorsement, it did reasonably well, selling somewhere between 500,000 and 1 million units by most reports.

They were baby steps to be sure, but Tomb Raider and Final Fantasy VII between them marked the start of a significant shift, albeit one that would take another half-decade or so to become obvious to everyone. The storage capacity of console CDs, the power of the latest console hardware, and the consoles’ newfound ability to easily save state from session to session had begun to elide if not yet erase the traditional barriers between “computer games” and “videogames.” Today the distinction is all but eliminated, as cross-platform development tools and the addition of networking capabilities to the consoles make it possible for everyone to play the same sorts of games at least, if not always precisely the same titles. This has been, it seems to me, greatly to the benefit of gaming in general: games on computers have become more friendly and approachable, even as games on consoles have become deeper and more ambitious.

So, that’s another of the trends we’ll need to keep an eye out for as we continue our journey down through the years. Next, though, it will be time to ask a more immediately relevant question: what is it like to actually play Final Fantasy VII, the game that changed so much for so many?

Did you enjoy this article? If so, please think about pitching in to help me make many more like it. You can pledge any amount you like.

Sources: the books Pure Invention: How Japan Made the Modern World by Matt Alt, Power-Up: How Japanese Video Games Gave the World an Extra Life by Chris Kohler, Fight, Magic, Items: The History of Final Fantasy, Dragon Quest, and the Rise of Japanese RPGs in the West by Aidan Moher, Atari to Zelda: Japan’s Videogames in Global Contexts by Mia Consalvo, Revolutionaries at Sony: The Making of the Sony PlayStation by Reiji Asakura, and Game Over: How Nintendo Conquered the World by David Sheff. Retro Gamer 69, 96, 108, 137, 170, and 188; Computer Gaming World of September 1997, October 1997, May 1998, and November 1998.

Online sources include Polygon’s authoritative “Final Fantasy 7: An Oral History”, “The History of Final Fantasy VII” at Nintendojo, “The Weird History of the Super NES CD-ROM, Nintendo’s Most Notorious Vaporware” by Chris Kohler at Kotaku, and “The History of PlayStation was Almost Very Different” by Blake Hester at Polygon.


1 Philips wasn’t, however, above exploiting the letter of its contract with Nintendo to make a Mario game and three substandard Legend of Zelda games available for the CD-i.
2 Sony did purchase the venerable British game developer and publisher Psygnosis well before its console’s launch to help prime the pump with some quality games, but it largely left it to manage its own affairs on the other side of the world.


The Ratings Game, Part 4: E3 and Beyond

In 1994, the Consumer Electronics Show was a seemingly inviolate tradition among the makers of videogame consoles and game-playing personal computers. Name a landmark product, and chances were there was a CES story connected with it. The Atari VCS had been shown for the first time there in 1977; the Commodore 64 in 1982; the Amiga in 1984; the Nintendo Entertainment System in 1985; Tetris in 1988; the Sega Genesis in 1989; John Madden Football in 1990; the Super NES in 1991, just to name a few. In short, CES was the first and most important place for the videogame industry to show off its latest and greatest to an eager public.

For all that, though, few inside the industry had much good to say about the experience of actually exhibiting at the show. Instead of getting the plum positions these folks thought they deserved, their cutting-edge, transformative products were crammed into odd corners of the exhibit hall, surrounded by the likes of soft-porn and workout videos. The videogame industry’s patently second-class status may have been understandable once upon a time, when it was a tiny upstart on the media landscape with a decidedly uncertain future. But now, with it approaching the magic mark of $5 billion in annual revenues in the United States alone, its relegation at the hands of CES’s organizers struck its executives as profoundly unjust. Wherever and whenever they got together, they always seemed to wind up kvetching about the hidebound CES people, who still lived in a world where toasters, refrigerators, and microwave ovens were the apex of technological excitement in the home.

They complained at length as well to Gary Shapiro, the man in charge of CES, but his cure proved worse than the disease. He and the other organizers of the 1993 Summer CES, which took place as usual in June in Chicago, promised to create some special “interactive showcase pavilions” for the industry. When the exhibitors arrived, they saw that their “pavilions” were more accurately described as tents, pitched in the middle of an unused parking lot. Pat Ferrell, then the editor-in-chief of GamePro magazine, recalls that “they put some porta-potties out there and a little snack stand where you could pick up a cookie. Everybody was like, ‘This is bullshit. This is like Afghanistan.'” It rained throughout the show, and the tents leaked badly, ruining several companies’ exhibits. Tom Kalinske, then the CEO of Sega of America, remembers that he “turned to my team and said, ‘That’s it. We’re never coming back here again.'”

Kalinske wasn’t true to his word; Sega was at both the Winter and Summer CES of the following year. But he was now ready and willing to listen to alternative proposals, especially after his and most other videogame companies found themselves in the basement of Chicago’s McCormick Place in lieu of the parking lot in June of 1994.

In his office at GamePro, Pat Ferrell pondered an audacious plan for getting his industry out of the basement. Why not start a trade show all his own? It was a crazy idea on the face of it — what did a magazine editor know about running a trade show? — but Ferrell could be a force of nature when he put his mind to something. He made a hard pitch to the people who ran his magazine’s parent organization: the International Data Group (IDG), who also published a wide range of other gaming and computing magazines, and already ran the Apple Macintosh’s biggest trade show under the banner of their magazine Macworld. They were interested, but skeptical whether he could really convince an entire industry to abandon its one proven if painfully imperfect showcase for such an unproven one as this. It was then that Ferrell started thinking about the brand new Interactive Digital Software Association. The latter would need money to fund the ratings program that was its first priority, then would need more money to do who knew what else in the future. Why not fund the IDSA with a trade show that would be a vast improvement over the industry’s sorry lot at CES?

Ferrell’s negotiations with the IDSA’s members were long and fraught, not least because CES seemed finally to be taking the videogame industry’s longstanding litany of complaints a bit more seriously. In response to a worrisome decline in attendance in recent editions of the summer show, Shapiro had decided to revamp his approach dramatically for 1995. After the usual January CES, there would follow not one but four smaller shows, each devoted to a specific segment of the larger world of consumer electronics. The videogame and consumer-computing industries were to get one of these, to take place in May in Philadelphia. So, the IDSA’s members stood at a fork in the road. Should they give CES one more chance, or should they embrace Ferrell’s upstart show?

The battle lines over the issue inside the IDSA were drawn, as usual, between Sega and Nintendo. Thoroughly fed up as he was with CES, Tom Kalinske climbed aboard the alternative train as soon as it showed up at the station. But Howard Lincoln of Nintendo, very much his industry’s establishment man, wanted to stick with the tried and true. To abandon a known commodity like CES, which over 120,000 journalists, early adopters, and taste-makers were guaranteed to visit, in favor of an untested concept like this one, led by a man without any of the relevant organizational experience, struck him as the height of insanity. Yes, IDG was willing to give the IDSA a five-percent stake in the venture, which was five percent more than it got from CES, but what good was five percent of a failure?

In the end, the majority of the IDSA decided to place their faith in Ferrell in spite of such reasonable objections as these — such was the degree of frustration with CES. Nintendo, however, remained obstinately opposed to the new show. Everyone else could do as they liked; Nintendo would continue going to CES, said Lincoln.

And so Pat Ferrell, a man with a personality like a battering ram, decided to escalate. He didn’t want any sort of split decision; he was determined to win. He scheduled his own show — to be called the Electronic Entertainment Expo, or E3 — on the exact same days in May for which Shapiro had scheduled his. Bet hedging would thus be out of the question; everyone would have to choose one show or the other. And E3 would have a critical advantage over its rival: it would be held in Los Angeles rather than Philadelphia, making it a much easier trip not only for those in Silicon Valley but also for those Japanese hardware and software makers thinking of attending with their latest products. He was trying to lure one Japanese company in particular: Sony, who were known to be on the verge of releasing their first ever videogame console, a cutting-edge 32-bit machine which would use CDs rather than cartridges as its standard storage medium.

Sony finally cast their lot with E3, and that sealed the deal. Pat Ferrell:

My assistant Diana comes into my office and she goes, “Gary Shapiro’s on the phone.” I go, “Really?” So she transfers it. Gary says, “Pat, how are you?” I say, “I’m good.” He says, ‘”You win.” And he hangs up.

E3 would go forward without any competition in its time slot.

Howard Lincoln, a man accustomed to dictating terms rather than begging favors, was forced to come to Ferrell and ask what spots on the show floor were still available. He was informed that Nintendo would have to content themselves with the undesirable West Hall of the Los Angeles Convention Center, a space that hadn’t been remodeled in twenty years, instead of the chic new South Hall where Sony and Sega’s big booths would have pride of place. Needless to say, Ferrell enjoyed every second of that conversation.


The dawn of a new era: E3 1995.

Michael Jackson, who was under contract to Sony’s music division, made an appearance at the first E3 to lend his star power to the PlayStation.

Jack and Sam Tramiel of Atari were also present. Coming twelve years after the Commodore 64’s commercial breakout, this would be one of Jack’s last appearances in the role of a technology executive. Suffice to say that it had been a long, often rocky road since then.

Lincoln’s doubts about Ferrell’s organizational acumen proved misplaced; the first E3 went off almost without a hitch from May 11 through 13, 1995. There were no fewer than 350 exhibitors — large, small, and in between — along with almost 38,000 attendees. There was even a jolt of star power: Michael Jackson could be seen in Sony’s booth one day. The show will be forever remembered for the three keynote addresses that opened proceedings — more specifically, for the two utterly unexpected bombshell announcements that came out of them.

First up on that first morning was Tom Kalinske, looking positively ebullient as he basked in the glow of having forced Howard Lincoln and Nintendo to bend to his will. Indeed, one could make the argument that Sega was now the greatest single power in videogames, with a market share slightly larger than Nintendo’s and the network of steadfast friends and partners that Nintendo’s my-way-or-the-highway approach had prevented them from acquiring to the same degree. Settling comfortably into the role of industry patriarch, Kalinske began by crowing about the E3 show itself:

E3 is a symbol of the changes our industry is experiencing. Here is this great big show solely for interactive entertainment. CES was never really designed for us. It related better to an older culture. It forced some of the most creative media companies on earth to, at least figuratively, put on gray flannel suits and fit into a TV-buying, furniture-selling mold. Interactive entertainment has become far more than just an annex to the bigger electronics business. Frankly, I don’t miss the endless rows of car stereos and cellular phones.

We’ve broken out to become a whole new category — a whole new culture, for that matter. This business resists hard and fast rules; it defies conventional wisdom.

After talking at length about the pace at which the industry was growing (the threshold of $5 billion in annual sales would be passed that year, marking a quintupling in size since 1987) and the demographic changes it was experiencing (only 51 percent of Sega’s sales had been to people under the age of eighteen in 1994, compared with 62 percent in 1992), he dropped his bombshell at the very end of this speech: Sega would be releasing their own new 32-bit, CD-based console, the Saturn, right now instead of on the previously planned date of September 2. In fact, the very first units were being set out on the shelves of four key retailers — Toys “R” Us, Babbage’s, Electronics Boutique, and Software Etc. — as he spoke, at a price of $399.

Halfhearted claps and cheers swept the assembly, but the dominant reaction was a palpable consternation. Many of the people in the audience were working on Saturn games, yet had been told nothing of the revised timetable. Questions abounded. Why the sudden change? And what games did Sega have to sell alongside the console? The befuddlement would soon harden into anger in many cases, as studios and publishers came to feel that they’d been cheated of the rare opportunity that is the launch of a new console, when buyers are excited and have already opened their wallets wide, and are much less averse than usual to opening them a little wider and throwing a few extra games into their bag along with their shiny new hardware. Just like that, the intra-industry goodwill which Sega had methodically built over the course of years evaporated like air out of a leaky tire. Kalinske would later claim that he knew the accelerated timetable was a bad idea, but was forced into it by Sega’s Japanese management, who were desperate to steal the thunder of the Sony PlayStation.

Speaking of which: next up was Sony, the new rider in this particular rodeo, whose very presence on this showcase stage angered such other would-be big wheels in the console space as Atari, 3DO, and Philips, none of whom were given a similar opportunity to speak. Sony’s keynote was delivered by Olaf Olafsson, a man of extraordinary accomplishment by any standard, one of those rare Renaissance Men who still manage to slip through the cracks of our present Age of the Specialist: in addition to being the head of Sony’s new North American console operation, he was a trained physicist and a prize-winning author of literary novels and stories. His slick presentation emphasized Sony’s long history of innovation in consumer electronics, celebrating such highlights as the Walkman portable cassette player and the musical compact disc, whilst praising the industry his company was now about to join with equal enthusiasm: “We are not in a tent. Instead we are indoors at our own trade show. Industry momentum, accelerating to the tune of $5 billion in annual sales, has moved us from the CES parking lot.”

Finally, Olafsson invited Steve Race, the president of Sony Computer Entertainment of America, to deliver a “brief presentation” on the pricing of the new console. Race stepped onstage and said one number: “$299.” Then he dropped the microphone and walked offstage again, as the hall broke out in heartfelt, spontaneous applause. A $299 PlayStation — i.e., a PlayStation $100 cheaper than the Sega Saturn, its most obvious competitor — would transform the industry overnight, and everyone present seemed to realize this.

The Atari VCS had been the console of the 1970s, the Nintendo Entertainment System the console of the 1980s. Now, the Sony PlayStation would become the console of the 1990s.

“For a company that is so new to the industry, I would have hoped that Sony would have made more mistakes by now,” sighed Trip Hawkins, the founder of the luckless 3DO, shortly after Olafsson’s dramatic keynote. Sam Tramiel, president of the beleaguered Atari, took a more belligerent stance, threatening to complain to the Federal Trade Commission about Sony’s “dumping” on the American market. (It would indeed later emerge that Sony sold the PlayStation essentially at cost, relying on game-licensing royalties for their profits. The question of whether doing so was actually illegal, however, was another matter entirely.)

Nintendo was unfortunate enough to have to follow Sony’s excitement. And Howard Lincoln’s keynote certainly wouldn’t have done them any favors under any circumstances: it had a downbeat, sour-grapes vibe about it, which stood out all the more in contrast to what had just transpired. Lincoln had no big news to impart, and spent the vast majority of his time on an interminable, hectoring lecture about the scourge of game counterfeiting and piracy. His tone verged on the paranoid: there should be “no safe haven for pirates, whether they board ships as in the old days or manufacture fake products in violation of somebody else’s copyrights”; “every user is a potential illegal distributor”; “information wants to be free [is an] absurd rationalization.” He was like the parent who breaks up the party just as it’s really getting started — or, for that matter, like the corporate lawyer he was by training. The audience yawned and clapped politely and waited for him to go away.

The next five years of videogame-console history would be defined by the events of this one morning. The Sega Saturn was a perfectly fine little machine, but it would never recover from its botched launch. Potential buyers were as confused as developers by its premature arrival, those retailers who weren’t among the initial four chosen ones were deeply angered, and the initial library of games was as paltry as everyone had feared — and then there was the specter of the $299 PlayStation close on the horizon for any retail-chain purchasing agent or consumer who happened to be on the fence. That Christmas season, the PlayStation was launched with a slate of games and an advertising campaign that were masterfully crafted to nudge the average age of the videogame demographic that much further upward, by drawing heavily from the youth cultures of rave and electronica, complete with not-so-subtle allusions to their associated drug cultures. The campaign said that, if Nintendo was for children and Sega for adolescents, the PlayStation was the console for those in their late teens and well into their twenties. Keith Stuart, a technology columnist for the Guardian, has written eloquently of how Sony “saw a future of post-pub gaming sessions, saw a new audience of young professionals with disposable incomes, using their formative working careers as an extended adolescence.” It was an uncannily prescient vision.

Sony’s advertising campaign for the PlayStation leaned heavily on heroin chic. No one had ever attempted to sell videogames in this way before.

The PlayStation outsold the Saturn by more than ten to one. Thus did Sony eclipse Sega at the top of the videogame heap; they would remain there virtually unchallenged until the launch of Microsoft’s Xbox in 2001. By that time, Sega was out of the console-hardware business entirely, following a truly dizzying fall from grace. Meanwhile Nintendo just kept trucking along in their own little world, much as Howard Lincoln had done at that first E3, subsisting on Mario and Donkey Kong and their lingering family-friendly reputation. When their own next-generation console, the Nintendo 64, finally appeared in 1996, the PlayStation outsold it by a margin of three to one.

In addition to its commercial and demographic implications, the PlayStation wrought a wholesale transformation in the very nature of console-based videogames themselves. It had been designed from the start for immersive 3D presentations, rather than the 2D, sprite-based experiences that held sway on the 8- and 16-bit console generations. When paired with its commercial success, the sheer technical leap the PlayStation represented over what had come before made it easily the most important console since the NES. In an alternate universe, one might have made the same argument for the Sega Saturn or even the Nintendo 64, both of which had many of the same capabilities — but it was the PlayStation that sold to the tune of more than 100 million units worldwide over its lifetime, and that thus gets the credit for remaking console gaming in its image.

The industry never gave CES another glance after the success of that first E3 show; even those computer-game publishers who were partisans of the Software Publishers Association and the Recreational Software Advisory Council rather than the IDSA and ESRB quickly made the switch to a venue where they could be the main attraction rather than a sideshow. Combined with the 3D capabilities of the latest consoles, which allowed them to run games that would previously have been possible only on computers, this change in trade-show venues marked the beginning of a slow convergence of computer games and console-based videogames. By the end of the decade, more titles than ever before would be available in versions for both computers and consoles. Thus computer gamers learned anew how much fun simple action-oriented games could be, even as console gamers developed a taste for the more extended, complex, and/or story-rich games that had previously been the exclusive domain of the personal computers. Today, even most hardcore gamers make little to no distinction between the terms “computer game” and “videogame.”

It would be an exaggeration to claim that all of these disparate events stemmed from one senator’s first glimpse of Mortal Kombat in late 1993. And yet at least the trade show whose first edition set the ball rolling really does owe its existence to that event by a direct chain of happenstance; it’s very hard to imagine an E3 without an IDSA, and hard to imagine an IDSA at this point in history without government pressure to come together and negotiate a universal content-rating system. Within a few years of the first E3, IDG sold the show in its entirety to the IDSA, which has run it ever since. It has continued to grow in size and glitz and noise with every passing year, remaining always the place where deals are made and directions are plotted, and where eager gamers look for a glimpse of some of their possible futures. How strange to think that E3’s stepfather is Joseph Lieberman. I wonder if he’s aware of his accomplishment. I suspect not.

Postscript: Violence and Videogames

Democracy is often banal to witness up-close, but it has an odd way of working things out in the end. It strikes me that this is very much the case with the public debate begun by Senator Lieberman. As allergic as I am to all of the smarmy “Think of the children!” rhetoric that was deployed on December 9, 1993, and as deeply as I disagree with many of the political positions espoused by Senator Lieberman in the decades since, it was high time for a rating system — not in order to censor games, but to inform parents and, indeed, all of us about what sort of content each of them contained. The industry was fortunate that a handful of executives were wise enough to recognize and respond to that need. The IDSA and ESRB have done their work well. Everyone involved with them can feel justifiably proud.

There was a time when I imagined leaving things at that; it wasn’t necessary, I thought, to come to a final conclusion about the precise nature of the real-world effects of violence in games in order to believe that parents needed and deserved a tool to help them make their own decisions about what games were appropriate for their children. I explained this to my wife when I first told her that I was planning to write this series of articles — explained to her that I was more interested in recording the history of the 1993 controversy and its enormous repercussions than in taking a firm stance on the merits of the arguments advanced so stridently by the “expert panel” at that landmark first Senate hearing. But she told me in no uncertain terms that I would be leaving the elephant in the room unaddressed, leaving Chekhov’s gun unfired on the mantel… pick your metaphor. She eventually brought me around to her point of view, as she usually does, and we agreed to dive into the social-science literature on the subject.

Having left academia behind more than ten years ago, I’d forgotten how bitterly personal its feuds could be. Now, I was reminded: we found that the psychological community is riven by, if anything, even more dissension over this issue than our larger culture is. The establishment position in psychology is that games and other forms of violent media do have a significant effect on children’s and adolescents’ levels of aggressive behavior. (For better or for worse, virtually all of the extant studies focus exclusively on young people.) The contrary position, of course, is that they do not. A best case for the anti-establishmentarians would have them outnumbered three to one by their more orthodox peers. A United States Supreme Court case from 2011 provides a handy hook for summarizing the opposing points of view.

The case in question actually goes back to 2005, when a sweeping law was enacted in California which made it a crime to sell games that were “offensive to the community” or that depicted violence of an “especially heinous, cruel, or depraved” stripe to anyone under the age of eighteen. The law required that manufacturers and sellers label games that fit these rather subjective criteria with a large sticker that showed “18” in numerals at least two inches square. In an irony that plenty of people noted at the time, the governor who signed the bill into law was Arnold Schwarzenegger, who was famous for starring in a long string of ultra-violent action movies.

The passage of the law touched off an extended legal battle which finally reached the Supreme Court six years later. Opponents of the law charged that it was an unconstitutional violation of the right to free speech, and that it was particularly pernicious in light of the way it targeted one specific form of media, whilst leaving, for example, the sorts of films to which Governor Schwarzenegger owed his celebrity unperturbed. Supporters of the law countered that the interactive nature of videogames made them fundamentally different — fundamentally more dangerous — than those older forms of media, and that they should be treated as a public-health hazard akin to cigarettes rather than like movies or books. The briefs submitted by social scientists on both sides provide an excellent prism through which to view the ongoing academic debate on videogame violence.

In a brief submitted in support of the law, the California Chapter of the American Academy of Pediatrics and the California Psychological Association stated without hesitation that “scientific studies confirm that violent video games have harmful effects [on] minors”:

Viewing violence increases aggression and greater exposure to media violence is strongly linked to increases in aggression.

Playing a lot of violent games is unlikely to turn a normal youth with zero, one, or even two other risk factors into a killer. But regardless of how many other risk factors are present in a youth’s life, playing a lot of violent games is likely to increase the frequency and the seriousness of his or her physical aggression, both in the short term and over time as the youth grows up. These long-term effects are a consequence of powerful observational learning and desensitization processes that neuroscientists and psychologists now understand to occur automatically in the human child. Simply stated, “adolescents who expose themselves to greater amounts of video game violence were more hostile, reported getting into arguments with teachers more frequently, were more likely to be involved in physical fights, and performed more poorly in school.”

In a recent book, researchers once again concluded that the “active participation” in all aspects of violence: decision-making and carrying out the violent act [sic], result in a greater effect from violent video games than a violent movie. Unlike a passive observer in movie watching, in first-person shooter and third-person shooter games, you’re the one who decides whether to pull the trigger or not and whether to kill or not. After conducting three very different kinds of studies (experimental, a cross-sectional correlational study, and a longitudinal study) the results confirmed that violent games contribute to violent behavior.

The relationship between media violence and real-life aggression is nearly as strong as the impact of cigarette smoking and lung cancer: not everyone who smokes will get lung cancer, and not everyone who views media violence will become aggressive themselves. However, the connection is significant.

One could imagine these very same paragraphs being submitted in support of Senator Lieberman’s videogame-labeling bill of 1994; the rhetoric of the videogame skeptics hasn’t changed all that much since then. But, as I noted earlier, there has emerged a coterie of other, usually younger researchers who are less eager to assert such causal linkages as proven scientific realities.

Thus another, looser amalgamation of “social scientists, medical scientists, and media-effects scholars” countered in their own court brief that the data didn’t support such sweeping conclusions, and in fact pointed in the opposite direction in many cases. They unspooled a long litany of methodological problems, researcher biases, and instances of selective data-gathering which, they claimed, their colleagues on the other side of the issue had run afoul of, and cited studies of their own that failed to prove or even disproved the linkage the establishmentarians believed was so undeniable.

In a recent meta-analytic study, Dr. John Sherry concluded that while there are researchers in the field who “are committed to the notion of powerful effects,” they have been unable to prove such effects; that studies exist that seem to support a relationship between violent video games and aggression but other studies show no such relationship; and that research in this area has employed varying methodologies, thus “obscuring clear conclusions.” Although Dr. Sherry “expected to find fairly clear, compelling, and powerful effects,” based on assumptions he had formed regarding video game violence, he did not find them. Instead, he found only a small relationship between playing violent video games and short-term arousal or aggression, and further found that this effect lessened the longer one spent playing video games.

Such small and inconclusive results prompted Dr. Sherry to ask: “[W]hy do some researchers continue to argue that video games are dangerous despite evidence to the contrary?” Dr. Sherry further noted that if violent video games posed such a threat, then the increased popularity of the games would lead to an increase in violent crime. But that has not happened. Quite the opposite: during the same period that video game sales, including sales of violent video games, have risen, youth violence has dramatically declined.

“The causation research can be done, and, indeed, has been done,” the brief concludes, and “leaves no empirical foundation for the assertion that playing violent video games causes harm to minors.”

The Supreme Court ruled against California, striking down the law as a violation of the First Amendment by a vote of seven to two. I’m more interested today, however, in figuring out what to make of these two wildly opposing views of the issue, both from credentialed professionals.

Before going any further, I want to emphasize that I came to this debate with what I honestly believe to have been an open mind. Although I enjoy many types of games, I have little personal interest in the most violent ones. It didn’t — and still doesn’t — strike me as entirely unreasonable to speculate that a steady diet of ultra-violent games could have some negative effects on some impressionable young minds. If I had children, I would — still would — prefer that they play games that don’t involve running around as an embodied person killing other people in the most visceral manner possible. But the indisputable scientific evidence that might give me a valid argument for imposing my preferences on others under any circumstances whatsoever just isn’t there, despite decades of earnest attempts to collect it.

The establishmentarians’ studies are shot through with biases that I will assume do not distort the data itself, but that can distort interpretations of that data. Another problem, one which I didn’t fully appreciate until I began to read some of the studies, is the sheer difficulty of conducting scientific experiments of this sort in the real world. The subjects of these studies are not mice in a laboratory whose every condition and influence can be controlled, but everyday young people living their lives in a supremely chaotic environment, being bombarded with all sorts of mediated and non-mediated influences every day. How can one possibly filter out all of that noise? The answer is, imperfectly at best. Bear with me while I cite just one example of (what I find to be) a flawed study. (For those who are interested in exploring further, a complete list of the studies which my wife and I examined can be found at the bottom of this article. The Supreme Court briefs from 2011 are also full of references to studies with findings on both sides of the issue.)

In 2012, Ontario’s Brock University published a “Longitudinal Study of the Association Between Violent Video Game Play and Aggression Among Adolescents.” It followed 1492 students, chosen as a demographic reflection of Canadian society as a whole, through their high-school years — i.e., from age fourteen or fifteen to age seventeen or eighteen. They filled out a total of four annual questionnaires over that period, which asked them about their social, familial, and academic circumstances, asked how likely they felt they were to become violent in various hypothetical real-world situations, and asked about their videogame habits: i.e., what games they liked to play and how much time they spent playing them each day. For purposes of the study, “action fighting” games like God of War and our old friend Mortal Kombat were considered violent, but strategy games with “some violent aspects” like Rainbow Six and Civilization were not; ditto sports games. The study’s concluding summary describes a “small” correlation between aggression and the playing of violent videogames: a Pearson correlation coefficient “in the .20 range.” (On this scale, a perfect, one-to-one positive or negative correlation would be 1.0 or -1.0 respectively.) Surprisingly, it also describes a “trivial” correlation between aggression and the playing of nonviolent videogames: “mostly less than .10.”

On the surface, it seems a carefully worked-out study which arrives at an appropriately cautious conclusion. When we dig in a bit further, however, we can see a few significant methodological problems. The first is its reliance on subjective, self-reported questionnaire answers, which are as dubious here as they were under the RSAC rating system. And the second is the researchers’ subjective assignment of games to the categories of violent and non-violent. Nowhere is it described what specific games were put where, beyond the few examples I cited in my last paragraph. This opacity about exactly what games we’re really talking about badly confuses the issue, especially given that strange finding of a correlation between aggression and “nonviolent” games. Finally, that oft-forgotten truth that correlation is not causation must be considered. When we control for third variables — a statistical technique for filtering non-causative correlations out of a data set — the Pearson coefficient between aggressive behavior and violent games drops to around 0.06 — i.e., well below what the researchers themselves describe as “trivial” — and that for nonviolent games drops below the threshold of statistical noise; in fact, the coefficient for violent games is just one one-hundredth above that same threshold. For reasons which are perhaps depressingly obvious, the researchers choose to base their conclusions around their findings without third variables in the mix — just one of several signs of motivated reasoning to be found in their text.
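The third-variable point is easy to demonstrate in miniature. The sketch below uses plain Python and entirely synthetic, hypothetical data (nothing here comes from the Brock study itself): a hidden confounder drives both a “game play” score and an “aggression” score, so the raw Pearson coefficient lands near the study’s headline .20 range, while the first-order partial correlation — which controls for the confounder — collapses toward zero.

```python
import math
import random

def pearson(x, y):
    """Pearson correlation coefficient r between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

def partial_corr(x, y, z):
    """First-order partial correlation of x and y, controlling for z:
    r_xy.z = (r_xy - r_xz * r_yz) / sqrt((1 - r_xz^2) * (1 - r_yz^2))."""
    rxy, rxz, ryz = pearson(x, y), pearson(x, z), pearson(y, z)
    return (rxy - rxz * ryz) / math.sqrt((1 - rxz ** 2) * (1 - ryz ** 2))

# Synthetic illustration only: a hidden confounder z (imagine, say, peer
# delinquency) drives both measures; neither causes the other.
rng = random.Random(42)
z = [rng.gauss(0, 1) for _ in range(2000)]
x = [c + 2 * rng.gauss(0, 1) for c in z]   # hypothetical "violent-game play"
y = [c + 2 * rng.gauss(0, 1) for c in z]   # hypothetical "aggression"

r_raw = pearson(x, y)           # lands near .20, purely via the confounder
r_ctrl = partial_corr(x, y, z)  # shrinks toward zero once z is controlled for
```

By construction the expected raw correlation here is 0.20 (the confounder contributes one unit of shared variance against five units of total variance in each measure), which is why a headline coefficient “in the .20 range” tells us nothing about causation until third variables have been partialed out.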

In the interest of not belaboring the point, I’ll just say that other studies I looked at had similar issues. Take, for example, a study of 295 students in Germany with an average age of thirteen and a half years, who were each given several scenarios that could lead to an aggressive response in the real world and asked how they would react, whilst also being asked which of a list of 40 videogames they played and how often they did so. Two and a half years later, they were surveyed again — but by now the researchers’ list of popular videogames was hopelessly out of date. They were thus forced to revamp the questionnaire with a list of game categories, and to retrofit the old, more specific list of titles to match the new approach. I’m sympathetic to their difficulties; again, conducting experiments like this amidst the chaos of the real world is hard. Nevertheless, I can’t help but ask how worthwhile an experiment which was changed so much on the fly can really be. And yet for all the researchers’ contortions, it too reveals only the most milquetoast of correlations between real-world aggression and videogame violence.

Trying to make sense of the extant social-science literature on this subject can often feel like wandering a hall of mirrors. “Meta-analyses” — i.e., analyses that attempt to draw universal findings from aggregations of earlier studies — are everywhere, with conclusions seemingly dictated more by the studies that the authors have chosen to analyze and the emphasis they have chosen to place on different aspects of them than by empirical truth. Even more dismaying are the analyses that piggyback on an earlier study’s raw data set, from which they almost invariably manage to extract exactly the opposite conclusions. This business of studying the effects of videogame violence begins to feel like an elaborate game in itself, one that revolves around manipulating numbers in just the right way, one that is entirely divorced from the reality behind those numbers. The deeper I fell into the rabbit hole, the more one phrase kept ringing in my head: “Garbage In, Garbage Out.”

Both sides of the debate are prone to specious reasoning. Still, the burden of proof ultimately rests with those making the affirmative case: those who assert that violent videogames lead their players to commit real-world violence. In my judgment, they have failed to make that case in any sort of thoroughgoing, comprehensive way. Even after all these years and all these studies, the jury is still out. This may be because the assertion they are attempting to prove is incorrect, or it may just be because this sort of social science is so darn hard to do. Either way, the drawing of parallels between violent videogames and an indubitably proven public hazard like cigarettes is absurd.

Some of the more grounded studies do tell us that, if we want to find places where videogames can be genuinely harmful to individuals and by extension to society, we shouldn’t be looking at their degree of violence so much as the rote but addictive feedback loops so many of them engender. Videogame addiction, in other words, is probably a far bigger problem for society than videogame violence. So, as someone who has been playing digital games for almost 40 years, I’ll conclude by offering the following heartfelt if unsolicited advice to all other gamers, young and old:

Play the types of games you enjoy, whether they happen to be violent or nonviolent, but not for more than a couple of hours per day on average, and never at the expense of a real-world existence that can be so much richer and more rewarding than any virtual one. Make sure to leave plenty of space in your life as well for other forms of creative expression which can capture those aspects of the human experience that games tend to overlook. And make sure the games you play are ones which respect your time and are made for the right reasons — the ones which leave you feeling empowered and energized instead of enslaved and drained. Lastly, remember always the wise words of Dani Bunten Berry: “No one on their death bed ever said, ‘I wish I had spent more time alone with my computer!’” Surely that statement at least remains as true of Pac-Man as it is of Night Trap.

(Sources: the book The Ultimate History of Video Games by Steven L. Kent; Edge of August 1995; GameFan of July 1995; GamePro of June 1995 and August 1995; Next Generation of July 1995; Video Games of July 1995; Game Developer of August/September 1995. Online sources include Blake J. Harris’s “Oral History of the ESRB” at VentureBeat, “How E3 1995 Changed Gaming Forever” at Syfy Games, “The Story of the First E3” at Polygon, Game Zero’s original online coverage of the first E3, “Sega Saturn: How One Decision Destroyed PlayStation’s Greatest Rival” by Keith Stuart at The Guardian, an interview with Olaf Olafsson at The Nervous Breakdown, and a collection of vintage CES photos at The Verge. Last but certainly not least, Anthony P’s self-shot video of the first E3, including the three keynotes, is a truly precious historical document.

I looked at the following studies of violence and gaming: “Differences in Associations Between Problematic Video-Gaming, Video-Gaming Duration, and Weapon-Related and Physically Violent Behaviors in Adolescents” by Zu Wei Zhai, et al.; “Aggressive Video Games are Not a Risk Factor for Future Aggression in Youth: A Longitudinal Study” by Christopher J. Ferguson and John C.K. Wang; “Growing Up with Grand Theft Auto: A 10-Year Study of Longitudinal Growth of Violent Video Game Play in Adolescents” by Sarah M. Coyne and Laura Stockdale; “Aggressive Video Games are Not a Risk Factor for Mental Health Problems in Youth: A Longitudinal Study” by Christopher J. Ferguson and C.K. John Wang; “A Preregistered Longitudinal Analysis of Aggressive Video Games and Aggressive Behavior in Chinese Youth” by Christopher J. Ferguson; “Social and Behavioral Health Factors Associated with Violent and Mature Gaming in Early Adolescence” by Linda Charmaraman, et al.; “Reexamining the Findings of the American Psychological Association’s 2015 Task Force on Violent Media: A Meta-Analysis” by Christopher J. Ferguson, et al.; “Do Longitudinal Studies Support Long-Term Relationships between Aggressive Game Play and Youth Aggressive Behavior? A Meta-analytic Examination” by Aaron Drummond, et al.; “Technical Report on the Review of the Violent Video Game Literature” by the American Psychological Association; “Exposure to Violent Video Games and Aggression in German Adolescents: A Longitudinal Study” by Ingrid Möller and Barbara Krahé; “Metaanalysis of the Relationship between Violent Video Game Play and Physical Aggression over Time” by Anna T. Prescott, et al.; “The Effects of Reward and Punishment in Violent Video Games on Aggressive Affect, Cognition, and Behavior” by Nicholas L. Carnagey and Craig A. Anderson; “Violent Video Game Effects on Aggression, Empathy, and Prosocial Behavior in Eastern and Western Countries: A Meta-Analytic Review” by Craig A. Anderson, et al.; “Violent Video Games: The Effects of Narrative Context and Reward Structure on In-Game and Postgame Aggression” by James D. Sauer, et al.; “Internet and Video Game Addictions: Diagnosis, Epidemiology, and Neurobiology” by Clifford J. Sussman, et al.; “A Longitudinal Study of the Association Between Violent Video Game Play and Aggression Among Adolescents” by Teena Willoughby, et al. My wife dug these up for me with the help of her university hospital’s research librarian here in Denmark, but some — perhaps most? — can be accessed for free via organs like PubMed. I would be interested in the findings of any readers who care to delve into this literature — particularly any of you who possess the social-science training I lack.)
