Author Archives: Jimmy Maher

A Web Around the World, Part 2: If At First You Don’t Succeed…

The early history of the telegraph in commercial service can largely be told as a series of anecdotes, publicity coups that served to convince people that this was a technology worth embracing. The first of these occurred just a few days after the Washington-to-Baltimore line’s inauguration. On May 18, 1844, this first American telegraph service brought to the capital the shocking news that, after nine ballots’ worth of wrangling over the issue, the Baltimore-based Democratic National Convention had settled on a dark-horse candidate for president by the name of James K. Polk; word of this game changer reached the ears of the Washington political establishment within five minutes of the deciding votes being cast. Clearly the telegraph had its uses, in politics as in so many other facets of life. The newspapers were soon filled with more personal anecdotes about the new technology, such as reports of births and deaths delivered instantaneously to the family members affected.

Nevertheless, Samuel Morse found Congress to be stubbornly unforthcoming with more money to build more telegraph lines. After lobbying fruitlessly over the balance of 1844 for what struck him as the next logical step, an extension of the existing line from Baltimore to New York City, he gave up and turned to the private investors who were now beginning to knock at his door. Although neither he nor they could possibly realize it at the time, it would prove a fateful change of course whose aftereffects can still be felt in the world of today. Unlike the European nations, whose communications networks would be funded and managed by their governments, the United States would rely mostly on private industry. The two contrasting funding and governance models more or less persist to the present day.

Rather than attempting to raise capital and wire the United States all by himself, Morse was content to license his telegraph patent to various regional players. The first of these private telegraph lines, linking Philadelphia to New York City, opened in January of 1846. The telegraph’s spread thereafter was breathtaking; the stampede to get onto the World Wide Web during the 1990s has nothing on the speed with which the telegraph became a fixture of everyday American life during the second half of the 1840s.

By 1851, one could send telegraph messages to and from almost any decent-sized American town east of the Mississippi River. To the average mid-nineteenth-century American, the telegraph seemed literally to be a form of magic. Newspapers published rapturous poetry dedicated to Morse’s wondrous invention, which had “annihilated time and space.” Thanks to the telegraph, the United States as a whole became infatuated with the wonders of technology — an infatuation that has never really left it. A thoroughly impressed British visitor reported on the extraordinary range of uses to which the telegraph was already being put just five years after the first lines opened for business:

It is employed in transmitting messages to and from bankers, merchants, members of Congress, officers of government, brokers, and police officers. [It is used for] items of news, election returns, announcements of deaths, inquiries respecting the health of families and individuals, daily proceedings of the Senate and the House of Representatives, orders of goods, inquiries respecting the sailing of vessels, proceedings of cases in various courts, summoning of witnesses, messages for express trains, invitations, the receipt of money at one station and its payment at another; for persons requesting the transmission of funds from debtors, consultation of physicians, and messages of every character usually sent by the mail. The confidence in the efficiency of telegraphic communication is so complete that the most important commercial transactions daily transpire by its means between correspondents several hundred miles apart.

The financiers who built this network out from nothing in almost no time at all were more often than not connected with the railroads that were busily binding the sprawling nation together in another way. Indeed, the telegraph and the railroad were destined to be boon companions for a long, long time to come; the two usually ran along the same rights-of-way, just as with that very first telegraph line from Washington, D.C., to Baltimore. Together they were the necessary prerequisites of a burgeoning new age of big business; they became the handmaids of the modern bureaucratic corporation, with its tendrils stretching across the country like the arms of an octopus (a rather sinister analogy that would become a populist favorite during the Gilded Age to come).

In the meanwhile, Western Europe was being wired together at a slower pace. The telegraph first captured anecdotal headlines in Britain on August 6, 1844, when it was used to send word from Windsor Castle to Fleet Street that Prince Alfred, Queen Victoria’s second son, had been born. The Duke of Wellington forgot to bring his best suit down from London with him for the celebratory banquet, but the telegraph and the railroad, those two fast stablemates of Progress, saved the day: an urgent electronic message was sent back up the line, and the duke’s ensemble arrived on the next train.

On January 1, 1845, the railroad and the telegraph had starring roles in a sensational murder case, when one John Tawell killed his mistress in Slough and jumped on a train for London. The police in Slough sent a telegraph message to their counterparts in London to watch for him at the station, and the blackguard was apprehended as he climbed down from his carriage. “It may be observed,” wrote the London Times, “that had it not been for the efficient aid of the London telegraph, the greatest difficulty as well as delay would have occurred in the apprehension of the party now in custody.” After the murderer was duly executed, the telegraph was immortalized in verse as “the cords that hung John Tawell.”

Observing the more rapid expansion of the telegraph in the United States, Britain and the other European nations grudgingly came to accept that Samuel Morse’s simple, robust system was more practical than any of their more baroque approaches. And so, gradually, the rudimentary tool that was the Morse key and the more refined one that was the Morse Code became an international standard. Morse himself, who was determined to receive every dollar and every bit of credit he felt he had coming to him for his inventions, was less pleased than he might have been by these developments, in that he usually wasn’t paid for Europe’s copycat systems. (In 1860, France and several other European nations would finally agree to pay him a one-time joint indemnity of $80,000, far less than he believed he was owed.)

Of course, Morse’s original telegraph had to evolve in some ways in order for a single 40-mile wire to be transformed into a dense network of connections binding entire nations together. Although the core components of Morse’s telegraph — a Morse key used to transmit Morse Code — would remain the same for a century and more, everything else was ripe for improvement. Better batteries and better cables stretched the possible distance between stations and repeaters dramatically year by year; switchboards, timetables, and manual routing protocols were developed to move messages through the system quickly and efficiently from any given source to any given destination.

The new telegraph companies attracted the sort of brainy young men who, had they been born in the following century, might have become computer hackers. A freewheeling culture of competitive cooperation that wasn’t at all far removed from the future hacker culture developed around the telegraph, as all of these bright sparks relentlessly optimized their systems, creating their own legends and lore, heroes and villains in the process. They developed shortcuts for talking with one another along the wires that smack of nothing so much as Internet chat: “SFD” stood for “stop for dinner,” “GM” for “good morning”; one almost expects to find an “LOL” lurking around in there somewhere. During downtime, they filled the lines with such idle chatter, or played checkers and chess with their counterparts in other cities using a special system of codes they’d developed — the original form of networked gaming. And surely they must have made fun of the clueless suits who believed they were the ones running things…
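Those wire shorthands rode on top of ordinary Morse Code, which maps each letter to a short run of dots and dashes. A toy sketch of the encoding follows (it uses the later international Morse alphabet for simplicity, whereas American operators of this era actually keyed the somewhat different American Morse, and the table covers only the handful of letters needed here):

```python
# Toy Morse Code encoder. The table is a small subset of the
# international Morse alphabet, just enough for the examples.
MORSE = {
    "A": ".-", "D": "-..", "F": "..-.", "G": "--.",
    "M": "--", "O": "---", "S": "...", "T": "-",
}

def encode(message):
    """Turn text into dots and dashes: one space between letters,
    three spaces between words."""
    words = message.upper().split()
    return "   ".join(
        " ".join(MORSE[letter] for letter in word) for word in words
    )

print(encode("GM"))   # the operators' "good morning": --. --
print(encode("SFD"))  # "stop for dinner": ... ..-. -..
```

A skilled operator performed exactly this mapping in his head, at speed, in both directions, which is why the shorthand abbreviations were so valuable: every letter saved was several key strokes saved.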

As the second half of the nineteenth century began, then, the telegraph had already become an inexorable transformative force on two continents. There now remained only the most world-transforming feat of connectivity of them all: to bridge the aforementioned two continents themselves, thereby to turn two discrete communications networks into one.

Over the last 50 years, the arrival of steamships on the scene had reduced the time it took to get news across the Atlantic from four or six weeks to as little as ten days under ideal conditions. Yet in the new age of the telegraph such an interval still seemed painfully long. What was needed was obvious: a telegraph wire running across — or rather under — the Atlantic Ocean. Samuel Morse had envisioned just such a thing already in 1843: “A telegraph communication on my plan may with certainty be established across the Atlantic! Startling as this may seem now, the time will come when this project is realized.” Nine years later, the magazine Scientific American dreamed of a future when “the earth will be belted by the electric wire, and New York will yet be able to send the throb of her electric pulse through our whole continent, Asia, Africa, and Europe in a second of time.” Such aspirations seemed far-fetched even in light of the magical powers of current telegraph systems. And yet one thoroughly remarkable man would soon set in motion a major transatlantic effort to realize them — an effort whose vision, daring, and sheer audacity makes it worthy of comparison to the twentieth century’s Project Apollo.


Cyrus Field

This giant nerve, at whose command
The world’s great pulses throb or sleep —
It threads the undiscerned repose
Of the dark bases of the deep.

Around it settle in the calm
Fine tissues that a breath might mar,
Nor dream what fiery tidings pass,
What messages of storm and war.

Far over it, where filtered gleams
Faintly illumine the mid-sea day,
Strange, pallid forms of fish or weed
In the obscure tide softly sway.

And higher, where the vagrant waves
Frequent the white, indifferent sun,
Where ride the smoke-blue hordes of rain
And the long vapors lift and run,

Pauses, perhaps, some lonely ship
With exile hearts that homeward ache —
While far beneath it flashed a word
That soon shall bid them bleed or break.

— “The Atlantic Cable” by Charles G.D. Roberts

During this antebellum era of the United States, New York City’s Astor House was the most famous hotel in the country, the place where all of the movers and shakers stayed when they came to the business capital of the nation. In January of 1854, two of the Astor’s guests happened to be Matthew Field, a prominent railroad engineer, and Frederick Gisborne, a Canadian entrepreneur who was attempting to secure additional funding for a project that had proved much more difficult than he had first anticipated: a telegraph line linking the town of St. John’s on the island of Newfoundland with the town of Sydney on the island of Cape Breton, which entailed some 400 miles of overland and about 85 miles of undersea cable.

Map of Newfoundland and Cape Breton

The undersea portion of the telegraph line would need to be run between Channel-Port aux Basques and the northern tip of Cape Breton, where Cape Breton Highlands National Park is today.

When they bumped into one another one evening in the bar and Gisborne told Field how strapped for cash he was, his interlocutor could well understand the reluctance of potential investors. He asked Gisborne why on earth he wanted to build a telegraph cable in such a remote and inhospitable location at all, serving a Newfoundland population of fishermen that numbered in the bare handful of thousands. Gisborne’s response surprised him: he explained that St. John’s was actually the most easterly town in the Americas, fully one-third closer to Europe than New York City was. If fast steamers carrying urgent messages docked there instead of at one of the larger eastern cities, then passed said messages on to a telegraph operator there, they could substantially cut the communication time between the two continents. Gisborne envisioned a bustling trade of businesses and governments willing to pay well to reduce their best-case communication lag from ten to seven days.

Matthew Field was intrigued enough that he mentioned Gisborne and his scheme to his brother Cyrus Field, who at the age of just 33 was already one of the richest men in New York City. He had made his fortune in paper, but was now semi-retired from business life; being possessed of a decided taste for adventure, he had recently returned from an expedition to some of the more remote regions of South America, in the company of the great landscape painter Frederic Church. Cyrus Field took a meeting with Gisborne, but wasn’t overly impressed with his plan, which struck him as an awful lot of trouble and expense for a fairly modest gain in communication speed. The matter might have ended there — but for one thing. “After [Gisborne] left,” wrote Henry M. Field (another of Cyrus’s brothers) in his history of the Atlantic Cable, “Mr. Field took the globe which was standing in the library, and began to turn it over. It was while thus studying the globe that the idea first occurred to him that the telegraph might be carried further still, and be made to span the Atlantic Ocean.”

It’s hard not to compare this realization with Samuel Morse’s own eureka moment aboard the Sully 22 years earlier. Like Morse at the time, Field was enough of a rank amateur to believe that his brainstorm was a new idea under the sun. Knowing nothing whatsoever about telegraphy, eager to find out if a transatlantic cable was a realistic possibility, Field dispatched two letters. One was to Morse, the one name in the field that absolutely everyone was familiar with. The other was to one Matthew Fontaine Maury, a noted oceanographer and intellectual jack-of-all-trades who wore the uniform of the United States Navy. Both responded enthusiastically: Morse was excited enough to join the project as an official advisor and to offer Field the use of his precious telegraph patent for free, while Maury explained that he had thought about the question enough already to propose a route for the cable between Newfoundland and Ireland, based upon deep-sea soundings he had recently conducted. The route in question was, he said, “neither too deep nor too shallow; yet it is so deep that the wires but once landed will remain forever beyond the reach of vessels’ anchors, icebergs, and drifts of any kind, and so shallow that the wires may be readily lodged upon the bottom.”

The planned course of the cable between Ireland and Newfoundland.

Field’s further inquiries revealed that underwater telegraphy wasn’t an entirely black art. As early as 1845, well before the landlocked telegraph became a reality of daily life in the developed world, an experimental cable had been laid under the Hudson River between New York City and Fort Lee, New Jersey, sheathed in a rubber-packed lead pipe; it had functioned for several months, until the winter ice did it in. In 1851, an underwater cable had bridged the 21 miles of the English Channel, to be followed soon after by another cable connecting Britain to Ireland. Using the latest batteries and wiring, such distances and more were by now possible without employing any repeaters.

So, Field set about enlisting other wealthy men into his cause, whilst getting Gisborne to accept a relegation to the role of chief engineer in what was now to be a much more ambitious venture than he had ever envisioned. In March of 1854, a company was founded with an appropriately ambitious name: the New York, Newfoundland, and London Telegraph Company. The founders estimated that they would need about $1.5 million to complete their task. This was no small sum in 1854; the entire budget of the federal government of the United States that year totaled just $54 million. Nevertheless, the project would end up costing far, far more. “God knows that none of us were aware of what we had undertaken to accomplish,” Cyrus Field would muse later. Had they known, it is doubtful they ever would have begun.


There is nothing in the world easier than to build a line of railroad or of telegraph on paper. You have only to take the map and mark the points to be connected, and then with a single sweep of the pencil to draw the line along which the iron track is to run. In this airy flight of the imagination, distances are nothing. All obstacles disappear. The valleys are exalted, and the hills are made low, soaring arches span the mountain streams, and the chasms are leaped in safety by the fire-drawn cars.

Very different it is to construct a line of railroad or of telegraph in reality; to come with an army of laborers, with axes on their shoulders to cut down the forests, and with spades in their hands to cast up the highway. Then poetry sinks to prose, and instead of flying over the space on wings, one must traverse it on foot, slowly and with painful steps. Nature asserts her power, and, as if resentful of the disdain with which man in his pride affected to leap over her, she piles up new barriers in his way. The mountains with their rugged sides cannot be moved out of their place, the rocks must be cleft in twain, to open a passage for the conqueror, before he can begin his triumphal march. The woods thicken into impassable jungle, and the morass sinks deeper, threatening to swallow up the horse and his rider, until the rash projector is startled at his own audacity. Then it becomes a contest of forces between man and nature, in which, if he would be victorious, he must fight his way. The barriers of nature cannot be lightly pushed aside, but must yield at last only to time and toil, and “man’s unconquerable will.”

— Henry M. Field, The Story of the Atlantic Telegraph

The newly incorporated New York, Newfoundland, and London Telegraph Company decided that its first goal ought to be the completion of Gisborne’s original project, which would also constitute the fulfillment of two-thirds of its name: a telegraph line linking Newfoundland to New York City, via Cape Breton. Such a line would hopefully bring some money in to help fund the vastly more audacious final third of the company’s name.

The first stage of this first goal required no underwater cable, but was daunting enough in its own right: it entailed running an overland cable from St. John’s across the widest part of Newfoundland to the point where the underwater cable was planned to begin. Gisborne had managed to complete the first 40 miles of this link before his money ran out; that left 260 miles still to go. Matthew Field took charge of this endeavor in the summer of 1854, anticipating that it would be done within a year. But he hadn’t reckoned with the rugged, isolated, in many places well-nigh unmapped terrain the work party had to cross, where opportunities for living off the land were few. The logistics surrounding the building of the line thus became much more complicated than the construction effort itself; the 600 men involved in the effort had to build their own roads as they went just to get supplies in and out. “Recently, in building half a mile of road, we had to bridge three ravines,” wrote Matthew Field to his brother Cyrus on one occasion. “Why didn’t we go around the ravines? Because Mr. Gisborne had explored twenty miles in both directions and found more ravines. That’s why!” The whole project could have served as a case study in why builders of telegraph lines usually preferred to follow the smooth, straight paths which the builders of railroads had already cut through the landscape. Alas, that wasn’t an option on Newfoundland.

And then the dark, cold northern winter set in, exacerbating the builders’ suffering that much more. “What hardships and suffering the men endured — all this is a chapter in the History of the Telegraph which has not been written, and which can never be fully told,” writes Henry Field. Bridging Newfoundland and then constructing another 100 miles of overland telegraph line on Cape Breton to reach Sydney wound up taking two years and costing more than $1 million all by itself.

While Matthew Field’s party was inching its way across the wilds of coastal Canada, Cyrus Field was growing impatient to begin laying the undersea part of the route, which he saw as an important test run of sorts for the eventual laying of an Atlantic-spanning cable. He went to London to purchase 85 miles of the best undersea cable money could buy, the same as that which had been used to connect Britain to France and Ireland. It consisted of three intertwined copper-alloy wires, sheathed in tarred hemp, gutta-percha, and galvanized iron wire — guaranteed, so the sellers said, to be impervious to water forever. Field made plans to lay the undersea cable already in the summer of 1855, when the overland cable was still only half completed.

Having as keen an instinct for publicity as any tech mogul of today, Field decided to turn the laying of the cable into a junket for existing and potential investors. Thus on August 7, 1855, the luxury coastal steamer James Adgar departed New York Harbor with many of the brightest stars in the moneyed East Coast firmament aboard. It was to rendezvous off the coast of Newfoundland with an older sailing ship, a sturdy brig called the Sarah L. Bryant carrying the shiny new cable from London, then take the brig in tow while it paid out the cable behind it across the Cabot Strait that separates Newfoundland from Cape Breton.

Right from the start, everything that possibly could go wrong did, a result not only of bad luck but of a thoroughgoing lack of planning and preparation. The Bryant failed to turn up at the appointed time. When it did appear several days late, it was in a sorry state, having been badly battered by a rough Atlantic crossing while weighted down by the cable in its hold. More days were spent on repairs, after which an impenetrable fog rolled in and forced the two ships to sit idle for yet 48 more hours. When the weather cleared at last and the Adgar tried to take the Bryant in tow to begin the operation, a series of cock-ups caused the steamship to ram the brig broadside, very nearly breaking it in two. The captain of the Adgar, whose name was Turner, was by now convinced — and not without justification, it must be admitted — that he was dealing with a bunch of rank amateurs; he grew willfully uncooperative, refusing to hold the course and speed asked of him even after he finally had the Bryant in tow. Cyrus Field and his party watched with alarm as the Adgar’s high speed, combined with the weight of the cable spooling out behind, caused the Bryant’s stern to dip lower and lower into the water. Meanwhile the light breeze that had marked the morning’s weather was becoming a howling sidelong gale by mid-afternoon, threatening to capsize the already foundering brig. The captain of the Bryant felt he had no choice: he cut both the tow rope and the telegraph cable, letting the latter fall uselessly into the ocean.

John Wells Stancliff, an amateur painter who was a part of the 1855 attempt to lay a telegraph cable from Newfoundland to Nova Scotia, created this dramatic image of the Sarah L. Bryant being towed through dangerously choppy seas by the James Adgar.

The company’s first attempt to lay an undersea cable had proved an unadulterated fiasco, with the chattering class in ringside seats for the whole sorry spectacle. The final price tag: $351,000 almost literally tossed into the ocean.

Publicly, the partners blamed it all, more than a little disingenuously, on Captain Turner of the Adgar: “We had spent so much money, and lost so much time, that it was very vexatious to have our enterprise defeated by the stupidity and obstinacy of one man.” In truth, though, the obstinate captain was neither the only nor the most important reason that everything had gone sideways. The company had learned the hard way that a sailing ship in tow simply didn’t have the maneuverability necessary to lay a cable in the notoriously temperamental waters of the North Atlantic.

Luckily, Cyrus Field was a man capable of learning from his mistakes. He traveled to London again and bought another cable. And the next summer, just as the overland lines across Newfoundland and Cape Breton were being completed, he tried again to lay it under the ocean. This time, however, he used the agile modern steamer Propontis for the purpose, and invited no one to witness the endeavor, in case it all went wrong again. He needn’t have worried: it all went off without a hitch. The newly minted telegraph connection between St. John’s and Sydney would suffer no service interruptions for the next ten years — a very impressive service record for any line by the standards of the mid-nineteenth century.

Unfortunately, the completion of Frederick Gisborne’s original project had cost the company all of its starting capital and then some — and yet there were still 2000 miles to go if the cable was to reach Ireland. The completed stretch of line ended up bringing in the merest pittance, as Field had suspected it would when Gisborne first broached his idea to him.

So, Field traveled yet again to London, the financial capital of the world, to beat the bushes for more investors. He met with no less skepticism there than he had in his home country; no less august a personage than the head of the Royal Greenwich Observatory called it “a mathematical impossibility to submerge the cable at so great a depth, and if it were possible, no signals could be transmitted through so great a length.” But Cyrus Field could be persuasive: by the time he left Britain six months later, he had formed a new corporation called the Atlantic Telegraph Company, with £350,000 (the equivalent of £40 million or $53 million today) of investment capital; the roll call of those who had pledged their money to the cause included such well-known names as the novelist William Makepeace Thackeray. Lest anyone accuse him of failing to put his money where his mouth was, know that the total also included the majority of Field’s own remaining fortune.

Almost as importantly, the British government promised to pay £14,000 per year to use the telegraph for diplomatic dispatches, and offered to loan the company the recently commissioned 3500-ton steam-powered battleship HMS Agamemnon for the laying of the cable — a poetically appropriate choice, given how the ship shared a name with the ancient tragedy which contains the first documented description of a long-distance signaling system. So, just like that, the Atlantic Telegraph Company had its first customer. Shortly thereafter, it gained its second, when the American government agreed to virtually the same deal: $70,000 per year to make use of the cable. And the Americans too offered a ship for the purpose of laying it: the USS Niagara, a fast, modern 5200-ton steam-powered frigate that was due to be commissioned in the spring of 1857 in the New York Navy Yard. The pride of the United States Navy already, the Niagara was set to become the biggest and arguably the most powerful warship in service anywhere in the world.

The USS Niagara. It dates from that odd era in naval history when builders were still hedging their bets between sail and steam power by equipping their ships with both. Its hull too was a hybrid of old and new, being made of wood draped over a skeleton of steel.

Working from the proposals of Matthew Fontaine Maury, the company plotted a relatively level course for the cable across the Atlantic seafloor. The company’s engineers believed that, by combining a big power source with a cable big enough to handle all the juice it put out without melting, they could push a signal fully 2000 miles without a single repeater; the old, vexing problem of signal loss down a wire had largely been solved by now by brute force. But another problem had cropped up that rather smacked of this older one.

It was slowly becoming clear to the electrical engineers of the mid-1850s that an electric current moved down a wire very quickly but not instantaneously. This phenomenon, which was dubbed electrical retardation, was not a problem in the most obvious sense: a signal traveling at just one percent of the speed of light can still cross the Atlantic in about a second. The real issue was that different frequencies of current traveled at different speeds, and a telegraph signal contained many frequencies. Thus by the time the signal reached the end of a really long wire, the sharp, staccato dots and dashes of Morse Code could turn into a fuzzy haze of white noise. A British mathematician and physicist named William Thomson concluded that there was a “law of squares” governing retardation, meaning it increased in proportion to the square of the cable’s length, echoing a similar claim which Peter Barlow had once made about signal-strength decay. Right on cue, one Wildman Whitehouse, a British surgeon and gentleman experimenter, came forward to play the role of Joseph Henry to Thomson’s Barlow: at worst, retardation increased linearly down the length of a cable, Whitehouse claimed. So, he said, the problem actually wasn’t as big as Thomson made it sound. The battle lines were drawn again, the nay-saying academic once again pitted against the can-do practical man.

There was, to be sure, a straightforward solution of sorts to the problem of retardation even when it was at its worst: operators could simply work their Morse keys more slowly to ensure that every pulse remained distinct. Field was willing to gamble that he would be able to transmit enough messages along his line to make a profit, retardation or no.
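The stakes in the dispute between Thomson and Whitehouse are easy to put in numbers. A transatlantic cable would be roughly twenty times the length of a long overland circuit, so a linear growth in retardation would slow signaling by a factor of twenty, while a law of squares would slow it by a factor of four hundred. A back-of-the-envelope sketch (the line lengths and the baseline keying rate below are illustrative assumptions, not historical figures):

```python
# Back-of-the-envelope comparison of the two retardation claims.
# All concrete numbers here are illustrative assumptions.
land_line_miles = 100    # a long overland circuit (assumed)
atlantic_miles = 2000    # the planned transatlantic span
words_per_minute = 25    # assumed comfortable rate on the land line

scale = atlantic_miles / land_line_miles  # the cable is 20x longer

# Whitehouse: retardation grows (at worst) linearly with length,
# so the usable signaling rate falls by the same factor of 20.
whitehouse_wpm = words_per_minute / scale

# Thomson's "law of squares": retardation grows with the square
# of the length, so the rate collapses by a factor of 400.
thomson_wpm = words_per_minute / scale**2

print(f"Whitehouse's prediction: {whitehouse_wpm:.2f} words per minute")
print(f"Thomson's prediction:    {thomson_wpm:.4f} words per minute")
```

Under either claim the cable would still carry messages; the question Field was gambling on was whether slowing the operators down would cost a tolerable fraction of the line’s capacity or very nearly all of it.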

The next question to be settled was that of the design of the cable itself. Clearly it had to be much larger in diameter than the typical terrestrial telegraph cable to carry a signal the distance asked of it. But how much larger? The thicker the cable, the more retardation it would be subject to, for the latter is a function not just of a cable’s length but of its total surface area. Thus four qualities needed to be balanced: the positive ones of electrical capacity and physical strength versus the negative ones of degree of signal retardation and physical bulk. This last was no trivial concern. The cable “must be strong, or it would snap in the process of laying,” wrote Henry Field later. “Yet it would not do to have it too large, for it would be unmanageable.” In September of 1855, a British vessel attempting to lay an undersea cable between Sardinia and Algeria had almost been pulled under when the capstan around which the cable was wound had suddenly given way, sending a dead weight of sixteen tons plunging into the ship’s wake “with fearful velocity.” In light of their own recent nautical misadventures off the coast of Newfoundland, such disastrous possibilities were very much on the company’s minds.

Cyrus Field asked both William Thomson and Wildman Whitehouse what type of cable they thought would work best. Predictably enough, they were in complete disagreement. To ensure adequate tensile strength and signal retention, Thomson recommended a cable as thick as a man’s upper arm. To address the retardation that such a thick cable would only exacerbate, he proposed a core made of more conductive pure copper instead of the typical copper alloy, and also proposed a new, ultra-sensitive galvanometer for detecting signals on the receiving end, something he had ideas for but had yet to make a reality. Whitehouse, on the other hand, was vastly more sanguine. A much thinner cable made from a copper alloy, combined with the already proven technologies for sending and receiving, would be just fine according to him. He argued that the retardation engendered by the thinner cable would necessarily be milder, and what there was of it could be easily dealt with by training operators to key their messages somewhat more slowly. His proposed cable would be only as big around as a man’s wrist.

The future Atlantic Cable being made in London.

Unsurprisingly, Field opted for Whitehouse’s approach, which would be far cheaper and faster. Without considering the matter further, he sent an order to London for 2500 miles of cable conforming to Whitehouse’s specifications, at a price of £225,000. (The peaks and valleys of the ocean floor, plus the fact that the cable would not be stretched completely taut, meant that crossing 2000 miles of ocean would surely take considerably more than just 2000 miles of the stuff.) When Thomson was given a snippet of it to test, he was horrified to discover its alloy core was so sloppily made that some sections were twice as conductive as other sections. But the die was now cast.

Whitehouse’s cable may have been comparatively light, but it still weighed one ton per mile, and there was no ship in the world at the time capable of carrying a load of 2500 tons. Therefore the company made plans to load half of this longest length of cable ever made aboard each of the Agamemnon and the Niagara. The ships would sail together, and when the first ship ran out of cable somewhere in the middle of the Atlantic, the other would splice the beginning of its cable onto the end of the first and complete the job.

On April 24, 1857, the Niagara departed New York Harbor on its maiden voyage across the Atlantic, its decks and holds cleared of guns and ammunition to make room for the massive weight of cable that was to be loaded in Britain. Aboard were the Field brothers, Samuel Morse, and a party of engineers and technicians in the employ of one or the other of Cyrus Field’s recently formed telegraph companies; more personnel would be picked up in Britain. Relations between the United States and Britain were not yet as warm as they would become in later decades; the British Army had, after all, sacked and burned Washington, D.C., within the lifetime of most of the politicians there. The Atlantic cable and the cooperative endeavor of laying it were therefore invested with huge symbolic importance by the governments of both nations. Windy speeches and toasts accompanied the Niagara as it met up with the Agamemnon in Plymouth, England, then continued apace as the two ships loaded their unique cargoes, a tricky process that wound up taking quite some weeks. When that task was completed at last, they sailed on to the tiny port of Queenstown (now known as Cobh) on the southern tip of Ireland, where the eastern end of the cable was to make landfall. As a further symbol of the emerging spirit of transatlantic cooperation and trust, the American Niagara was to begin the laying of the cable on the British side of the ocean, while the British Agamemnon would complete it on the American side.

Loading the cable aboard the ships was no small task in itself. It had to be dragged up from the quay and laboriously wound around the giant spools in the ships’ holds.

But first, the two ships anchored side by side off the coast of Ireland to conduct an important test. The crew of the Niagara ferried the end of their cable over to the Agamemnon, where it was spliced with the one onboard that ship. Telegraph operators aboard each of the ships then sent a series of test signals back and forth. The 2500-mile connection worked, but the degree of retardation was much more extreme than Whitehouse had promised it would be; it proved possible to send only about two words per minute, one-fifth the rate he had stated would be the worst possible case. But whatever the infelicities of the advice Field had elected to follow and the cable he had elected to purchase, the moment was a telling testament to an extraordinarily rapid evolution in electrical engineering and materials science since that time less than two decades before when Samuel Morse had struggled to push a decipherable signal down 40 feet of wire. Field trusted that even a telegraph able to send only two words per minute across the Atlantic would be of immeasurable value to diplomacy and commerce.

With the test completed, it was time to begin the actual laying of the cable. Its end came ashore on the evening of August 5, 1857, to the accompaniment of much celebration and speechifying. Cyrus Field was clearly touched when he stepped up to the podium:

I have no words to express the feelings which fill my heart tonight — it beats with love and affection for every man, woman, and child who hears me. I may say, however, that, if ever at the other side of the waters now before us, any one of you shall present himself at my door and say that he took hand or part, even by an approving smile, in our work here today, he shall have a true American welcome. I cannot bind myself to more, and shall merely say, “What God has joined together, let not man put asunder.”

Paying out the first of the cable from the stern of the Niagara. Note the cage around the ship’s screws, put there to make sure the cable couldn’t become entangled in them. The sailors liked to call it a “crinoline,” after the wire hoops used to support ladies’ skirts.

A 25-year-old British telegraph engineer named Charles Bright had designed an ingenious mechanism for drawing the cable up from the spools in the ships’ holds and paying it out in a controlled fashion behind them. As the Niagara and its escort crept away from Ireland at a speed of three to six knots, Bright himself monitored his machine day and night, adjusting it constantly to account for the shifting topography of the seafloor beneath and the wind and waves that buffeted the vessel on whose deck it rode. Telegraph operators ashore in Ireland and aboard the ship tapped out a constant patter back and forth to confirm that the cable was still functioning. The distinctive, steady rumble of the pay-out mechanism became an equally important source of comfort to everyone aboard, another reminder that everything was working as it ought to. “If one should drop to sleep, and wake up at night,” wrote Henry Field later, “he has only to hear the sound of ‘the old coffee mill,’ and his fears are relieved, and he goes to sleep again.”

Charles Bright’s paying-out mechanism on the deck of the Niagara.

By the dawn hours of August 10, almost 300 miles of cable had been laid without a hitch, and Bright stepped away from his machine for some much needed rest, leaving it in the charge of one of his assistants. At 3:45 AM, the ship plunged into the trough of an unusually large wave. As it rose again, the cable was pulled taut. The attendant Bright had left in charge should have reduced the braking force in the mechanism, to let the cable spool out faster and ease the strain on it. But he failed to do so in time. The cable snapped with a sound that reverberated through the decks like the clap of doom. In a flash, the frayed end was lost forever beneath the ocean.

“Instantly ran through the ship a cry of grief and dismay,” writes Henry Field. “All gathered on deck with feelings which may be imagined.” The captain of the Niagara would remember the moment as akin to the death of a “dear friend”; he promptly ordered his ship’s flag lowered to half mast.

Field and his colleagues did a quick assessment, and concluded that the well over 300 miles of cable they had lost left them without enough of it remaining to start over again and hope to complete their task. There was nothing for it but to return to Britain. Once back in London, Field learned that it wasn’t possible to manufacture the needed additional cable before the Atlantic winter made the project of laying it too dangerous to attempt. So, the Niagara sailed for home for the season, and the naysayers and mockers on both sides of the ocean came out in force. A parody of “Pop Goes the Weasel!” made the music-hall rounds:

Pay it out! Oh, pay it out
As long as you are able:
For if you put the damned brake on:
Pop goes the cable!

But Cyrus Field professed himself to be undaunted — indeed, to be more encouraged than discouraged by recent events. Rather than the dismal failure described in the popular press, he chose to see his first attempt to lay his Atlantic Cable as a successful proof of concept; he had sent and received underwater telegraph signals over a gap several times longer than anyone had ever managed before. All he needed to go the full distance was a modestly redesigned paying-out mechanism and some equally modest operational refinements. He said as much in a letter to his investors:

The successful laying down of the Atlantic Telegraph Cable is put off for a short time, but its final triumph has been fully proved by the experience that we have had. My confidence was never so strong as at the present time, and I feel sure that, with God’s blessing, we shall connect Europe and America with the electric cord.

The first Atlantic cable may have been lost forever beneath the cold, dark waves of the ocean, but Field’s passion for the task burned as warmly as ever.

(Sources: the books The Victorian Internet by Tom Standage, Power Struggles: Scientific Authority and the Creation of Practical Electricity Before Edison by Michael B. Schiffer, Lightning Man: The Accursed Life of Samuel F.B. Morse by Kenneth Silverman, A Thread across the Ocean: The Heroic Story of the Transatlantic Telegraph by John Steele Gordon, and The Story of the Atlantic Telegraph by Henry M. Field. Online sources include “The Telegraph and Chess” by Bill Wall, Distant Writing: A History of the Telegraph Companies in Britain between 1838 and 1868 by Steven Roberts, and History of the Atlantic Cable & Undersea Communications.)

 
 


This Week on The Analog Antiquarian

The Great Wall of China, Chapter 2: Origin Stories

 

Posted by Jimmy Maher on January 14, 2022 in Uncategorized

 

A Web Around the World, Part 1: Signals Down a Wire

The microcomputer had a well-nigh revolutionary impact on the way that business was done over the first twenty years after its invention: the arrival of a computer on every desk made the workplace more efficient in countless ways. But the gadget’s impact on our personal lives during this period was less all-encompassing. Yes, many youngsters and adults learned the advantages of word processing over typewriting, and a substantial minority of both learned the advantages of computer over console gaming. Meanwhile smaller minorities learned of the pleasures of programming, and some even ventured online to meet others of their ilk. Yet the wide-angle social transformation promised by the most starry-eyed pundits during the would-be Home Computer Revolution of the early 1980s didn’t materialize on the timetable we were promised. For a good decade after the heyday of such predictions, one could get on perfectly well as an informed, aware, plugged-in member of society without owning a computer or caring a whit about them. The question of what a home computer was really good for, beyond word processing, entertainment, and accessing fairly primitive online services at usually exorbitant prices, was difficult to answer for the average person. Most of the other usage scenarios proposed during the early 1980s, from storing recipes to balancing one’s checkbook, remained easier and cheaper on the whole to do the old-fashioned way. The personal computer seemed a useful invention in its realm, to be sure, but not a society-reshaping one.

All of that changed in the mid-1990s, when the Internet entered the public consciousness. By the turn of the millennium, those unable or unwilling to buy a computer and enter cyberspace were well and truly left behind, having no seat at the table where our most important cultural dialogs were suddenly taking place. It’s almost impossible to exaggerate the impact the Internet has had on us: on the way we access information, on the way we communicate and socialize with one another, on the way we entertain ourselves, on the very way we think. The claim that the Internet is the most important advance in the technologies of information and communication since Johannes Gutenberg’s invention of the printing press, which once seemed so expansive, now seems almost picayune in relation to the change we’ve witnessed. Coming at the end of a century of wondrous inventions, the Internet was the most wondrous of them all. We may still be waiting for our flying cars and cheap tickets to Mars, but the world we live in today would nevertheless have seemed thoroughly science-fictional just 30 years ago. Seen in this light, the computer itself seems merely a comparatively plebeian bit of infrastructure that needed to be laid down for the really earth-shattering technology to build upon. Or perhaps we were just seeing computers the wrong way before the Internet: what seemed most significant as a tool for, well, computation was actually a revolution in communication just waiting to happen. In this formulation, a computer without the Internet is like a car without any roads.

When I talk about the Internet in this context, of course, I really mean the combination of a globe-spanning network of computers — one which was already a couple of decades old by the beginning of the 1990s — with the much younger World Wide Web, which applied to the network a new paradigm of effortless navigation based on associative hyperlinks. This serves as a useful reminder that no human invention since the first stone tools has ever been monolithic; inventions are always amalgamations of existing technologies, iterations on what came before. In A Brief History of the Future, his classic millennial history of and philosophical meditation on the Internet, John Naughton noted that “it’s always earlier than you think. Whenever you go looking for the origins of any significant technological development, you find that the more you learn about it, the deeper its roots seem to tunnel into the past.”

I thought about those words a lot as I considered how best to tell the story of the Internet and the World Wide Web here. And as I did so, I kept coming back to this word “Web.” In the strict terms by which Tim Berners-Lee meant the word when he invented the World Wide Web, it refers to a logical web of links. But the prerequisite for that logical web is the physical web of cables that allows computers to talk to one another over long distances in the first place. This infrastructure was not originally designed for computers; it is in fact much, much older than they are. Still, this network — the physical network — strikes me as the most logical place to start this series of articles about the Internet, that ultimate expression of instantaneous worldwide communication.


Aeschylus’s tragedy Agamemnon of the fifth century BC deals, like so much classical Greek literature, with the Trojan War, an event historians now believe to have occurred in approximately 1200 BC. In the play, we’re told how the Greek soldiers abroad sent news of their victory over the Trojans back to their homeland far more quickly than any ship- or horse-borne messenger could possibly have delivered it. This ecstatic paean to modern communication issues from the mouth of Clytemnestra, the wife of the Greek commander Agamemnon, who has been waiting on her husband for ten years at home in the Peloponnesian city of Argos:

Hephaestus, who sent the blazing light from Ida;
then beacon after beacon’s courier flame:
from Ida first, to Hermes’ crag at Lemnos.
Third came the Athos summit, which belongs
to Zeus: it, too, received the massive firebrand.
Ascending now to shoot across the sea’s back,
the journeying torch in all its power and joy.
The pine wood, like a second sun, conveyed
the gold-gleam to the watchtower on Macistus.
Prompt and triumphant over feckless sleep,
unslacking in its task as courier,
passing Euripus’s streams, the beacon’s light
signaled far off to watchmen on Messapion.
They sent out light in turn, sent on the message,
setting alight a rick of graying heather.
Potent against the dimming murk, the light
went leaping high across Asopus’ plain
like the beaming moon, and at Cathaeron’s scarp
roused missive fire still another relay.
The lookout there did not defy the light
sent from far off; the new blaze shot up stronger.
The glow shot past the lake called Gorgon’s Face;
arriving at the mountains where the goats roam,
it urged the fire-ordnance on.
With all their strength, men raised a giant flame,
beard-shaped, to overshoot and pass beyond
the headland fronting the Saronic strait —
so bright the blaze. Darting again, it reached
Arachne’s lookout peak, this city’s neighbor;
then it fell here, on the Atreides’ mansion.
The light we see descends from Ida’s fire.
Torchbearers served me in this regimen,
with every handoff perfectly performed.
The runners who came first and last both win.
This is my proof, the pledge of what I tell you.
My husband passed the news to me from Troy.

In this fascinating passage, then, we learn of what may have been the first near-instantaneous long-distance communications network ever conceived, dating back more than 3000 years. The signal began with a burning pyre atop Mount Ida near Troy itself, then flashed onward like a torch being passed between the members of a relay team: to the highlands of the island of Lemnos, to Mount Athos on a northeastern peninsula of the Greek mainland, to the northern tip of the island of Euboea, to finally reach the mainland city of Aulis, whence the Greek fleet had sailed for Troy so long before. From there, the signal fires spread across Greece. Historians and geographers are skeptical as to whether such a signal system could truly have been practicable, even given the mountainous landscape of the region with its many rarefied peaks. But even if it never existed in reality, Aeschylus — or some other, anonymous earlier Greek who created the legend before him — deserves a great deal of credit for imagining that such a thing might exist.

Others after Aeschylus refined the idea further, into something that would function over shorter distances in places without mountain peaks in useful proximity to one another, something that might be used to send a message at least slightly more complicated than word of a war won. During the Second Punic War of the late third century BC, both Rome and its enemy Carthage are believed to have built networks of signal towers for purposes of battlefield communication. Very simple messages — signals to attack or withdraw, etc. — could be passed from tower to tower by waving torches in distinctive patterns. Many more short-range optical-signal systems followed: the Chinese used fireworks on their border walls to raise the alarm if one section was attacked by the “barbarians” on the other side; harbors raised flags to inform ships of the height and movement of the tides.

But all such systems were sharply limited in the types of information they could transmit and the distances over which they could send it. On any broader, more flexible scale, the speed of communication was still the same as that of messengers on horseback, or of sailors in ships at the mercy of the wind and waves. The impact this had on commerce, on diplomacy, and on warfare is difficult for us children of the mass-media age to appreciate; there are repeated instances in history of such follies as bloody battles fought after the wars that spawned them had already ended, because word of the ceasefire couldn’t be gotten to the front lines in time. The people of the past, for their part, had equally little conception of any alternative speed of communication; for them, the weeks that were required to, say, get a message from the Americas to Europe were as natural as a transatlantic telephone call is to us.

Claude Chappe

But in 1789, one Claude Chappe, a French seminary student whose studies had been interrupted by his country’s political revolution, began to envision something else. He became obsessed with the idea of a fast long-range communications network that could transmit messages as arbitrary as the content of any given written letter. He first thought of using electricity, a phenomenon which scientists and inventors were just starting to consider how to turn to practical purposes. But it was still a dangerous, untamed beast at this juncture, and Chappe quickly — and probably wisely — set it aside. Next he turned to sound. He and his four brothers discovered that a cast-iron pot could be heard up to a quarter of a mile away if hit hard enough with a steel mallet. Thus by beating out patterns they could pass messages across reasonably long distances, a quarter-mile at a time. But the method had some obvious problems: its range was highly dependent on the vagaries of wind and weather, and the brothers’ experiments certainly didn’t make them very popular with their neighbors. So, Chappe went back to the drawing board again — went back, in fact, to the ancient solution of optical signalling.

After much experimentation, he arrived at a system based on semaphores mounted atop towers. Each semaphore consisted of three separate, jointed pieces which could be positioned in multiple ways, enough so that there were fully 98 possible distinct configurations of the apparatus as a whole. Six of the configurations were reserved for special purposes, the equivalent of what a digital-network engineer would call “control blocks”: stop and start signals, requests for re-transmission, etc. The other 92 stood for numbers. Chappe provided a code dictionary consisting of 8464 words, divided into 92 pages of 92 words each. The transmission of each word was a two-step procedure: first a number pointing to the page, then another pointing to the word on that page. The system even boasted a form of error correction: since the operator of the next tower in the chain would need to configure his semaphores to match those of the tower before his in order to transmit the message further, the operator in the previous tower got a chance to confirm that his message had been received correctly, and was expected to send a hasty “Belay that!” signal in the case of a mistake.
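The two-step dictionary lookup at the heart of Chappe’s scheme is easy to sketch in code. The snippet below is a hypothetical miniature — a toy word list standing in for the real 8464-entry French dictionary — but the arithmetic of pages and slots works exactly as described: 92 number-bearing configurations index a page, then a word on that page.

```python
# A minimal sketch of Chappe's two-step code-dictionary scheme.
# The real dictionary held 8464 French words on 92 pages of 92 words each;
# the word list here is a tiny hypothetical stand-in.

PAGE_SIZE = 92  # words per page, matching the 92 number-bearing configurations

# Hypothetical miniature codebook: index i lives on page i // 92, slot i % 92.
codebook = ["victoire", "armee", "ennemi", "paris"]  # ... up to 8464 entries

def encode(word):
    """Return the (page, slot) signal pair for a word, both 1-based."""
    i = codebook.index(word)
    page, slot = divmod(i, PAGE_SIZE)
    return (page + 1, slot + 1)

def decode(page, slot):
    """Recover the word from a (page, slot) pair."""
    return codebook[(page - 1) * PAGE_SIZE + (slot - 1)]

page, slot = encode("paris")
assert decode(page, slot) == "paris"  # round trip through the two signals
```

The round trip also illustrates why the operator-to-operator confirmation mattered: a single mis-set semaphore corrupts either the page or the slot, and with it the entire word.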

A contemporary sketch of Chappe’s semaphore system.

Optical engineering had by now progressed to the point that Chappe’s towers could be placed much farther apart than any of the signal towers of old, for they could now be viewed through a telescope rather than with the naked eye. Chappe envisioned a vast network of towers, separated from one another by 10 to 20 miles (15 to 30 kilometers) depending on the terrain, the whole extending across the country of France or even eventually across the whole continent of Europe.

The system was labor-intensive, requiring as it did a pair of attendants in every tower. It was also slow — at best, it was good for about one word per minute — and at the mercy of the hours of daylight and to some extent the weather. But when the conditions were right it worked. Appropriately, given that the germ of the concept stemmed from Aeschylus, Chappe turned to Greek for a name for his invention. He first wanted to call it the tachygraphe, combining two Greek roots meaning “fast” and “writing.” But a friend in the government suggested télégraphe — “distant writing” — instead.

Living in revolutionary times tends to bring challenges along with benefits: Chappe and his brothers had to run for their lives during at least one of their tests, when a mob decided they must be Royalist sympathizers passing secret messages of sedition. On the other hand, the new leaders of France were as eager as any have ever been to throw out the old ways of doing things and to embrace modernity in all its aspects. Some of the innovations they enacted, such as the metric system of measurement, have remained with us to this day; others, such as a new calendar that used ten-day weeks (Revolutionary France had a positive mania for decimals), would prove less enduring. Chappe’s telegraph would fall somewhere in between the two extremes, adding a word and an idea to our culture that would long outlive this first practical implementation of it.

On July 26, 1793, following a series of proof-of-concept demonstrations, the National Convention gave Claude Chappe the title of “Telegraph Engineer” in the Committee of Public Safety. And so, while other branches of the same Committee were carrying out the Reign of Terror with the assistance of Madame la Guillotine, Chappe was building a chain of signal towers stretching from Lille to Paris; the terminus in the capital stood on the dome of the Louvre Palace, newly re-purposed as a public art museum.

On August 15, 1794, shortly after the telegraph went officially into service, it brought news of a major French victory in the war with the old, conservative order of Europe that was then raging on the country’s northern border. A National Convention delegate named Lazare Carnot ascended to the podium in the Salle des Machines in Paris. “Quesnoy is restored to the Republic,” he read out from the scrap of paper in his hands. “Its surrender took place at six o’clock this morning.” A wave of jubilation swept the hall, prompted not only by the military victory thus reported but by the timeliness with which the news had arrived, which seemed an equally potent validation of the whole forward-looking revolutionary project. A delegate to the Convention named Joseph Lakanal summed up the mood: “What brilliant destiny do science and the arts not reserve to a republic which, by the genius of its inhabitants, is called to instruct the nations of Europe!”

In the end, the republic in question had a shorter career than Lakanal might have hoped for it, but Chappe’s telegraph survived its demise. By the time Napoleon seized power from the corrupt and dysfunctional remnants of the Revolution in 1799, most of France had been bound together in a web of towers and semaphores. Napoleon supported the construction of many more stations as part of his mission to make France the world’s unrivaled leader in science and technology. But Chappe found himself increasingly sidelined by the French bureaucracy, even as he apparently suffered from a debilitating bladder disease. On January 25, 1805, at the age of 42, he either cut his own throat while standing beneath a telegraph tower on the Rue de Saint Germain in Paris, or deliberately threw himself into a well, or stumbled accidentally into one. (Reports of the death of Claude Chappe, like many of those pertaining to his life, are confused and contradictory, a byproduct of the chaotic times in which he lived.)

This statue of Claude Chappe used to stand in central Paris on the site where some say he committed suicide, just next to one of his preserved telegraph towers. It was removed and melted down by the Nazis during World War II.

His optical telegraph would live on for another half-century after him, growing to fully 556 towers, concentrated in France but stretching as far as Amsterdam, Brussels, Mainz, Milan, Turin, and Venice. According to folk history, it was used for the last time in 1855, to bring news of the victory of France and its allies in the siege of Sevastopol — a fitting bookend for a system which had announced its arrival with word of another military victory more than 60 years before.

Remnants of Chappe’s telegraph network can still be seen in many places in France. This semaphore tower stands in the commune of Saverne in the northeastern part of the country.


One morning he made him a slender wire,
As an artist’s vision took life and form,
While he drew from heaven the strange, fierce fire
That reddens the edge of the midnight storm;
And he carried it over the Mountain’s crest,
And dropped it into the Ocean’s breast;
And Science proclaimed, from shore to shore,
That Time and Space ruled man no more.
“We are one!” said the nations, and hand met hand,
In a thrill electric from land to land.

— “The Victory,” written anonymously in honor of Samuel Morse upon his death in 1872

This photograph of Samuel Morse was taken in 1840, in the midst of his struggle to interest the world in his electric telegraph.

In 1824, a 33-year-old American painter named Samuel Morse traveled to Washington, D.C. An artist of real talent with a not unimpressive track record — he had once been commissioned to paint President James Monroe — he had previously been in the habit of prioritizing his muse over his earnings. But now he was determined to change that: he went to the capital in the hope of becoming one of a small circle of painters who earned a steady living by making flattering official portraits of prominent men.

On February 10, 1825, Morse sent a letter back home to his wife in New Haven, Connecticut, with some exciting news: he had won a lucrative contract to paint the Marquis de Lafayette, a famous hero of both the American and French Revolutions. But his wife never got to read the letter: she had died on February 7. The day after Morse had posted his missive, word of her death finally reached him. He immediately left for home, but by the time he arrived she had already been buried. The episode was a painful lesson in the shortcomings of current communications methods in the United States, a country which had not embraced even the optical telegraph.

In addition to his better-known accomplishments as an inventor, Samuel Morse was a painter of no small talent and not inconsiderable importance. He painted his rather magnificent Grand Gallery of the Louvre on his trip to Europe of 1829 to 1832.

Seven years later, Morse found himself aboard a packet ship called the Sully, returning to his homeland from France after an extended sojourn in Europe during which he had combined the profitable business of making miniature copies of European masterpieces with the more artistically satisfying one of trying to create new masterpieces of his own. One of his fellow passengers enjoyed dabbling with electricity, and showed him a battery and some other toys he had brought onboard. Morse was not, as is sometimes claimed, a complete neophyte to the wonders of electricity at this point; a man of astonishingly diverse interests and aptitudes, he had attended a series of lectures on the subject a few years earlier, and had even befriended the instructor. Nevertheless, he clearly had a eureka moment aboard the Sully. “It occurred to me,” he would later write, “that by means of electricity, signs representing figures, letters, or words might be legibly written down [emphasis original] at any distance.” He chattered almost manically about it to anyone who would listen throughout the four-week passage home. His brother Sidney, who met him at the dock upon the Sully’s arrival in New York City, would later recall that he was still “full of the subject of the [electric] telegraph during the walk from the ship, and for some days afterward could scarcely speak about anything else.”

His surprise and excitement at the thought were in some ways a measure of his ignorance: the idea of an electric telegraph that would not be subject to all of the multitudinous drawbacks of optical systems was practically old hat by now in engineering and invention circles. Still, no one had ever quite managed to get one to work well enough to be useful. This may strike us as odd today; as Tom Standage has noted in his book The Victorian Internet, any clever child of today can construct a working one-way electric telegraph in the course of an afternoon. All you need is a length of wire, a breaker switch, an electric lamp of some sort, and a battery. Run the wire between the breaker switch and the lamp, connect the whole circuit to the battery, and you can sit at one end of the wire making the bulb at the other end flash on and off to your heart’s content. All that’s left to do is to decide upon some sort of code to give meaning to the flashes.
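Standage’s thought experiment can be made concrete with a small simulation. The flash code below is entirely hypothetical — each letter is sent as a number of flashes equal to its position in the alphabet — and the `lamp` callback simply stands in for the breaker switch driving the bulb.

```python
# A toy simulation of the "clever child's telegraph" described above,
# using a hypothetical flash code: each letter is transmitted as N short
# flashes, N being its position in the alphabet.

def to_flashes(message):
    """Translate a message into a list of flash counts, one per letter."""
    return [ord(c) - ord("a") + 1 for c in message.lower() if c.isalpha()]

def transmit(message, lamp):
    """Drive a lamp callback: True = breaker closed (bulb lit), False = open."""
    for count in to_flashes(message):
        for _ in range(count):
            lamp(True)   # close the switch: current flows, bulb lights
            lamp(False)  # open it again: bulb goes dark
        # in practice a longer dark pause would mark the gap between letters

received = []
transmit("hi", received.append)
# "h" is the 8th letter and "i" the 9th: 17 on/off pairs in all
```

The scheme is absurdly slow, of course, which is precisely why working out an efficient code — the problem Morse would eventually solve — was as important as the circuit itself.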

But for electrical experimenters at the turn of the nineteenth century, the devil was in the details. One serious problem was that of detecting the presence or absence of electric current at all, many decades before reasonably reliable incandescent light bulbs became available. By 1800, it had been discovered that immersing the end of a live wire into water would generate telltale bubbles; we now understand that these are the result of a process known as electrolysis, in which an electric current breaks water molecules down into their component hydrogen and oxygen atoms. Experiments were conducted which attempted to apply this phenomenon to telegraphy, but it was difficult, to say the least, to read a coherent message from bubbles floating in a pot of water.

A breakthrough came in 1820, when a Danish scientist named Hans Christian Ørsted discovered that an electric current flowing through a wire deflects the needle of a nearby compass. Electricity, in other words, generates its own magnetic field. By winding wire into a coil, one can make an electromagnet, which affects a compass or anything else containing ferromagnetic materials just like an ordinary magnet, with one important difference: this magnet functions only while electric current is flowing through the coil. The implications for telegraphy were enormous: an electromagnet should finally make it possible to instantly and precisely detect the presence or absence of current in a wire.

But there was still another problem: it didn’t seem to be possible to transmit currents over really long wires. Over such distances as those which separated two typical towers in Claude Chappe’s optical-telegraph system — much less that which separated, say, Lille from Paris — the signal just seemed to peter out and disappear. In 1825, a Briton named Peter Barlow, one of the eminent mathematical and scientific luminaries of his day, conducted a series of experiments to determine the scale of the problem. His conclusions gave little room for optimism. A current’s strength on a wire, he wrote, was inversely proportional to the square of its distance from the battery that had spawned it. As for the telegraph: “I found such a sensible diminution with only 200 feet [60 meters] of wire as at once to convince me of the impracticality of the scheme.”

Luckily for the world, not everyone was ready to defer to Barlow’s reputation. An American named Joseph Henry, a teacher of teenage boys at The Albany Academy in New York who possessed at the time neither a university degree nor an international reputation, conducted experiments of his own, and found that Barlow had been mistaken in one of his key conclusions: the strength of a current was inversely proportional to its distance from the battery, full stop — i.e., not to the distance squared. In the course of further experimenting, Henry discovered that higher voltages lost proportionally even less of their strength over distance than lower ones. Fortunately, the state of the art in batteries was steadily improving. Henry found that a cutting-edge 25-cell battery had enough “projectile force” to push a current a fairly long distance; it was able to ring a bell at the end of a wire more than a mile (1.6 kilometers) long. He published his findings in 1831, while a blissfully unaware Samuel Morse was painting pictures in Europe. But the world of science and invention did take notice; suddenly a workable electric telegraph seemed like a practical possibility once again.
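The practical stakes of the disagreement are easy to see with a little arithmetic. The sketch below, in Python, compares how much signal each model predicts will survive a one-mile run of wire, relative to the strength measured 200 feet from the battery; the reference point and units are illustrative assumptions of my own, not anything Barlow or Henry actually measured.

```python
# Compare the two attenuation models. The reference distance and unit
# strength are illustrative assumptions, not historical measurements.

def barlow_strength(distance_ft, ref_ft=200.0, ref_strength=1.0):
    """Barlow's (mistaken) claim: strength falls off as 1/distance^2."""
    return ref_strength * (ref_ft / distance_ft) ** 2

def henry_strength(distance_ft, ref_ft=200.0, ref_strength=1.0):
    """Henry's finding: strength falls off as 1/distance."""
    return ref_strength * (ref_ft / distance_ft)

one_mile = 5280.0
print(f"Barlow's model at one mile: {barlow_strength(one_mile):.4f}")  # ~0.0014
print(f"Henry's model at one mile:  {henry_strength(one_mile):.4f}")   # ~0.0379
```

Under Henry’s law the signal at one mile is more than twenty-six times stronger than Barlow’s law predicts, which is the difference between hopeless and merely hard, especially once stronger batteries entered the picture.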

Meanwhile Morse spent the years after his eureka moment aboard the Sully as busily and diversely as ever: teaching art at New York University, teaching private pupils how to paint, painting more pictures of his own, serving on the American Academy of Fine Arts, writing feverish anti-Catholic screeds, even running for mayor of New York City under the auspices of the anti-immigration Native American Democratic Association. (Like too many men of his era, Morse was a thoroughgoing racist and bigot in addition to his more positive qualities.) In light of all this activity, it would be a stretch to say he was consistently consumed with the possibility of an electric telegraph, but he clearly did tinker with the project intermittently, and may very well have followed the latest advancements in the field of electrical transmission closely as part of his interest.

But while people like Joseph Henry were asking whether and how an electrical signal might be sent over a long distance in the abstract, Morse was asking how an electric telegraph might actually function as a tool. How could you get messages into it, and how could you get them out of it?

Morse’s first solution to the problem of sending a message is a classic example of how old paradigms of thought can be hard to escape when inventing brand-new technology. He designed his electric telegraph to work essentially like a long-distance printing press. The operator arranged along a groove cut into a three-foot (1-meter) beam of wood small pieces of metal “movable type,” each having from one to ten teeth cut into it to represent a single-digit number; ten teeth meant zero. He then slotted the beam into a sending apparatus Morse called a “port-rule,” attached to one end of the telegraph wire. The operator turned a hand crank on the port-rule’s side to move the beam through the contraption. As he did so, the teeth on the metal type caused a breaker connected to the telegraph wire to close and open, producing a pattern of electrical pulses.

Morse’s movable type. We see here two pieces representing the number two, and one representing each of three, four, and five.

The whole port-rule apparatus.

At the other end of the wire was an electromagnet, to which was mounted a pencil on the end of a spring-loaded arm made from a ferromagnetic metal. The nib of the pencil rested on a band of paper, which could be set in motion by means of a clockwork mechanism driven by a counterweight. When a message came down the wire, the electrical pulses caused the electromagnet to switch on and off, pulling the pencil up and down as the paper scrolled beneath it. The resulting pattern on the paper could then be translated into a series of digits, which could then be further decoded into readable text using a code dictionary not dissimilar to the one employed by Claude Chappe’s optical telegraph.

Morse’s receiving mechanism, which he called the “register.”

It was all quite fiddly and complicated, but by 1837 — i.e., fully five years after Morse’s eureka moment — it more or less worked on a good day. Range was his biggest problem; not having access to the cutting-edge batteries that were available to Joseph Henry, Morse found that his first versions of his telegraph could only transmit a message 40 feet (12 meters). Pondering this, he came up with a rather brilliant stopgap solution, in the form of what is now called a “repeater”: an additional battery partway down the wire, activated by an electromagnet that responded to the current coming down the prior section of wire. “By the same operation the same results may again be repeated,” Morse wrote in his patent application, “extending and breaking at pleasure such current through yet another and another circuit, ad infinitum.” If you had enough batteries and electromagnets, in other words, you could extend the telegraph to a theoretically infinite length.
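Morse’s repeater is, in the abstract, a signal-regeneration scheme, and a toy model makes its power obvious. The loss fraction and trip threshold below are invented numbers for illustration; the point is only that a relay resets the signal to full strength whenever the incoming current can still be detected at all.

```python
# Toy model of a telegraph line with and without repeaters. The loss per
# segment and the relay's trip threshold are assumed values.

SEGMENT_LOSS = 0.5     # fraction of strength surviving one segment of wire
TRIP_THRESHOLD = 0.1   # minimum strength needed to work an electromagnet

def strength_after(segments: int, with_repeaters: bool) -> float:
    strength = 1.0
    for _ in range(segments):
        strength *= SEGMENT_LOSS               # attenuation along the wire
        if with_repeaters and strength >= TRIP_THRESHOLD:
            strength = 1.0                     # relay switches in a fresh battery
    return strength

print(strength_after(10, with_repeaters=False))  # 0.0009765625: unreadable
print(strength_after(10, with_repeaters=True))   # 1.0: full strength at the far end
```

As long as each segment is short enough for the surviving signal to trip the next relay, the chain can be extended indefinitely, which is exactly Morse’s “ad infinitum.”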

With his invention looking more and more promising, Morse befriended a younger man named Alfred Vail, the scion of a wealthy family with many industrial and political connections. Vail became an important collaborator in ironing out the design of the telegraph, while his family signed on as backers, giving Morse access to much more advanced batteries among other benefits. In January of 1838, he sent a “pretty full letter” down a wire 10 miles (16 kilometers) long. “The success is complete,” he exulted.

“Give me a lever long enough and a fulcrum on which to rest it, and I will move the world,” the ancient engineer Archimedes had once (apocryphally) said. Now, Morse paraphrased him with an aphorism of his own: “If [the signal] will go ten miles without stopping, I can make it go around the globe.”

One month after their ten-mile success, Morse and Alfred Vail traveled to Washington, D.C., to demonstrate the telegraph to members of Congress and even to President Martin Van Buren himself. The demonstration was not a success; it’s doubtful whether most of the audience, the president among them, really understood what they were being shown at all. This was not least because Morse was forced to set up his sending and receiving stations right next to one another in the same room, then to try to explain that the unruly tangle of wire lying piled up between them meant that they could just as well have been ten miles apart. As it was, his telegraph looked like little more than a pointless parlor trick to busy men who believed they had more important things to worry about.

So, Morse decided to try his luck in Europe. Upon arriving there, he learned to his discomfiture that various Europeans were already working on the same project he was. In particular, a pair of Britons named William Fothergill Cooke and Charles Wheatstone, building upon the ideas and experiments of a Russian nobleman named Pavel Lvovitch Schilling, had made considerable progress on a system which transmitted signals over a set of ten wires to a set of five needles, causing them to tilt in different directions and thereby to signify different letters of the alphabet.

Morse pointed out to anyone who would listen that this system’s need for so many wires made it far more complicated, expensive, and delicate than his own system, which required just one. Yet few of the Europeans Morse met showed much interest in yet another electric-telegraph project, much less one from the other side of the Atlantic. He grew almost frantic with worry that one of the European projects would pan out before he could get his own telegraph into service. Against all rhyme and reason, he began claiming that Cooke and Wheatstone had stolen from him the very idea for an electric telegraph; it had, he said, probably reached them through one of the other passengers who had sailed on the Sully back in 1832. This was of course absurd on the face of it; the idea of an electric telegraph in the broad strokes had been batted about for decades by that point. Morse’s invention was a practical innovation, not a conceptual one. Yet he heatedly insisted that he alone was the father of the electric telegraph in every sense. Europeans didn’t hesitate to express their own opinion to the contrary. The argument quickly got personal. One French author, for example, took exception to Morse’s habit of calling himself a “professor.” “It may be well to state here,” he sniffed, “that he [is] merely professor of literature and drawing, by an honorary title conferred upon him by the University of New York.”

Morse returned to the United States in early 1839 a very angry man. He now enlisted the nativist American press in his cause. “The electric telegraph, that wonder of our time, is an American discovery,” wrote one broadsheet. “Professor Morse invented it immediately after his return from France to America.” To back up his claim of being the victim of intellectual theft, Morse even tracked down the Sully's captain and got him to testify that Morse had indeed spoken of his stroke of genius freely to everyone onboard.

But even Morse had to recognize eventually that such pettiness availed him little. There came a point, not that long after his return to American shores, when he seemed ready to give up on his telegraph and all the bickering that had come to surround it in favor of a new passion. While visiting Paris, he had seen some of the first photographs taken by Louis Daguerre, had even visited the artist and inventor personally in his studio. He had brought one of Daguerre’s cameras back with him, and now, indefatigable as ever, he set up his own little studio; the erstwhile portrait painter became New York City’s first portrait photographer, as well as a teacher of the new art form. His knack for rubbing shoulders with Important Men of History hadn’t deserted him: among his students was one Mathew Brady, whose images of death and destruction from the battlefields of the American Civil War would later bring home the real horrors of war to civilians all over the world for the first time. Morse also plunged back into reactionary politics with a passion; he ran again, still unsuccessfully, for mayor of New York on an anti-immigration, anti-Catholic, pro-slavery platform.

So, Morse might have retired quietly from telegraphy, if not for an insult which he simply couldn’t endure. Over in Britain, Cooke and Wheatstone had been making somewhat more headway. They had found that the men behind the new railroads that were then being built showed some interest in their telegraph as a means of keeping tabs on the progress of trains and avoiding that ultimate disaster of a collision. In 1839, Cooke and Wheatstone installed the first electric telegraph ever to be put into everyday service, connecting the 13 miles (21 kilometers) that separated Paddington from West Drayton along Britain’s Great Western Railway. Several more were installed over the next few years on other densely trafficked stretches. One story has it that, when three of the five indicator needles on the complex system conked out on one of the lines, the operators in the stations improvised a code for passing all the information they needed using only the two remaining needles. The lesson thus imparted would only slowly dawn on our would-be electric-telegraph entrepreneurs on both sides of the Atlantic: that both of their systems were actually more complicated than they needed to be, that a simpler system would be cheaper and more reliable while still doing everything it needed to.

But first, the insult: flush with their relative success, Cooke and Wheatstone wrote to Morse in early 1842 to ask whether, in light of all his experience with electric telegraphy in general, he might be interested in peddling their system to the railroads in his country — in becoming, in other words, a mere salesman for their telegraph. It may have been intended as an honest conciliatory overture, a straightforward attempt to bury the hatchet. But that wasn’t how Morse took it. Livid at this affront to his inventor’s pride, he jumped back into the telegraphy game with a vengeance; he soon extended his system’s maximum range to 33 miles (53 kilometers).

He wrote a deferential letter to Joseph Henry, whose experiments had by now won him a position on the faculty of Princeton University and the reputation of the leading authority in the country on long-distance applications of electricity. Morse knew that, if he could get Henry to throw his weight behind his telegraph, it might make all the difference. “Have you met with any facts in your experiments thus far that would lead you to think that my mode of telegraphic communication will prove impracticable?” he asked in his letter. Not only did Henry reply in the negative, but he invited Morse up to Princeton to talk in person. This was, needless to say, exactly what Morse had been hoping for. Henry agreed to support Morse’s telegraph, even to publicly declare it to be a better design than its competitor from Britain.

Thus Henry was in attendance when Morse exhibited his telegraph in New York City in the summer of 1842, garnering for it the first serious publicity it had received in a couple of years. Morse continued beavering away at it, adding an important new feature: a sending and receiving station at each end of the same wire, to turn his telegraph into an effortless two-way communications medium. The British system, by contrast, required no fewer than twenty separate wires to accomplish the same thing. In December of 1842, the growing buzz won Morse another hearing in Washington, D.C. Knowing that this was almost certainly his last chance to secure government funding, he lobbied for and got access to two separate audience halls. He installed one station in each, and he and Alfred Vail then mediated a real-time conversation between two separate groups of politicians and bureaucrats who could neither see nor hear one another.

This added bit of showmanship seemed to do the trick; at last some of those assembled seemed to grasp the potential of what they were seeing. A bill was introduced to allocate $30,000 to the construction of a trial line connecting Washington, D.C., to Baltimore, a distance of 40 miles (65 kilometers). On February 23, 1843, it passed the House by a vote of 89 to 83, with 70 abstainers. On March 3, the Senate passed it unanimously as a final piece of business in the literal last minute of the current term, and President John Tyler signed it. More than a decade after the idea had come to him aboard the Sully, Morse finally had his chance to prove to the world how useful his telegraph could be.

He had no small task before him: no one in the country had ever attempted to run a permanent electrical cable over a distance of 40 miles before. Morse asked the Baltimore & Ohio Railroad Company for permission to use their right-of-way between Washington, D.C., and Baltimore. They agreed, in return for free use of the telegraph, thus further cementing a connection between railroads and telegraphs that would persist for many years.

The project was beset with difficulties from the start. A plan to lay the cable underground, encased within custom-manufactured lead pipe, went horribly awry when the latter proved to be defective. The team had to pull it all up again, whereupon Morse decided to string the cable along on poles instead, where it would be more exposed to the elements and to vandals but also much more accessible to repair crews; thus was born the ubiquitous telegraph — later telephone — pole.

This experience may have taught Morse something of the virtues of robust simplicity. At any rate, it was during the construction of the Washington-to-Baltimore line that he finally abandoned his complicated electrical printing press in favor of a sending apparatus that was about as simple as it could be. It was apparently Alfred Vail rather than Morse himself who was primarily responsible for designing what would be immortalized as the “Morse key”: a single switch which the operator could use to close and open the circuit manually. The receiving station, on the other hand, remained largely unchanged: a pencil or pen made marks on a paper tape turning beneath it.

The Morse key. For well over a century the principal tool and symbol of the telegraph operator’s trade, it was actually a last-minute modification of a more ambitious design.

To facilitate communication using such a crude tool, Morse and Vail created the first draft of the system that would be known forevermore as Morse code. After being further refined and simplified by the German Friedrich Clemens Gerke in 1848, Morse code became the first widely used binary communications standard, the ancestor of later computer protocols like ASCII. In lieu of the zeroes and ones of the computer age, it encoded every letter and digit as a series of dots and dashes, which the operator at the sending end produced on the roll of paper at the other end of the line by pressing and releasing the Morse key quickly (in the case of a dot) or pressing and holding it for a somewhat longer time (in the case of a dash). The system demanded training and practice, not to mention significant manual dexterity, and was far from entirely foolproof even with a seasoned operator on each end of the line. Nonetheless, plenty of people would get very, very good at it, would learn practically to think in Morse code and to transcribe any text into dots and dashes almost as fast as you or I might type it on a computer keyboard. And they would learn to turn a received sequence back into characters on the page with equal facility. The electromagnet attached to the stylus on the receiving end gave out a distinct whine when it was engaged; thanks to this, operators would soon learn to translate messages by ear alone in real time. The sublime ballet of a telegraph line being operated well would become a pleasure to watch, in that way it is always wonderful to watch competent people who take pride in their skilled work going about it.
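As a binary encoding, Morse code is simple enough to capture in a few lines. The sketch below uses a small subset of the International Morse alphabet (the Gerke-refined version mentioned above); the function name and the word separator are my own conventions, not part of any standard.

```python
# Encode text into dots and dashes using a subset of International Morse
# code. Letters within a word are separated by spaces, words by " / ".

MORSE = {
    "A": ".-",   "D": "-..",  "E": ".",    "G": "--.",
    "H": "....", "O": "---",  "R": ".-.",  "T": "-",
    "U": "..-",  "W": ".--",
}

def encode(text: str) -> str:
    words = text.upper().split()
    return " / ".join(" ".join(MORSE[ch] for ch in word) for word in words)

print(encode("what hath god wrought"))
# .-- .... .- - / .... .- - .... / --. --- -.. / .-- .-. --- ..- --. .... -
```

In real operation the dots and dashes were not symbols on a page but timed presses of the key, with the silences between them doing the work of my spaces and slashes: under the modern standard, the gap between letters is three dot-lengths, the gap between words seven.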

On May 24, 1844, the Washington-to-Baltimore telegraph line was officially opened for business. Before an audience of journalists, politicians, and other luminaries, Morse himself tapped out the first message in, of all places, the chambers of the Supreme Court of the United States. At the other end of the line in Baltimore, Alfred Vail decoded it before an audience of his own. “What hath God wrought?” it read, a phrase from the Old Testament’s Book of Numbers.

For our purposes, a perhaps more appropriate question might be, “What hath the telegraph wrought?” Thanks to Samuel Morse and his fellow travelers, the first stepping stone toward a World Wide Web had fallen into place.

(Sources: the books A Brief History of the Future: The Origins of the Internet by John Naughton; The Victorian Internet by Tom Standage; From Gutenberg to the Internet: A Sourcebook on the History of Information Technology edited by Jeremy M. Norman; The Greek Plays edited by Mary Lefkowitz and James Romm; Les Télégraphes by A.L. Ternant; Power Struggles: Scientific Authority and the Creation of Practical Electricity Before Edison by Michael B. Schiffer; and Lightning Man: The Accursed Life of Samuel F.B. Morse by Kenneth Silverman. And the paper “The Telegraph of Claude Chappe: An Optical Communications Network for the XVIIIth Century” by J.M. Dilhac.)



Heroes of Might and Magic

The Heroes of Might and Magic series of fantasy strategy games was an anomaly during its period of peak popularity at the turn of the millennium: it remained defiantly turn-based in an industry that had gone almost entirely real-time, and it likewise continued to rely on lovingly hand-drawn pixel art in lieu of trendy 3D graphics. All of this gave the series the feel of — dare I say it? — a board game, at a time when such things were deeply out of fashion with digital gamers looking for the latest and greatest in immersive whiz-bang pyrotechnics. And yet it sold millions upon millions of copies.

When we dig a bit deeper, we find that the origins of Heroes's retro tabletop sensibility are as explicable as its popularity is inexplicable. As we learned in the last article, its principal creator Jon Van Caneghem was a tabletop gamer long before he became a computer gamer, much less a computer-game designer and programmer. Heroes of Might and Magic was as heavily influenced by the delightfully tactile boards, cards, counters, and dice which had marked his adolescence as it was by anything he had seen or done on a computer since. More specifically, it was the belated fruit of what had once seemed a quixotic attempt on his part to bridge the analog-versus-digital split in gaming — a venture which dated back to more than seven years before the first game in the Heroes series arrived in late 1995.



One of the young Jon Van Caneghem’s favorite tabletop games was Star Fleet Battles, a “simulation” of outer-space combat in the Star Trek universe. His love for it persisted even after he decided to become a computer-game entrepreneur. Indeed, the self-described Star Fleet Battles “fanatic” found time amidst the run-up to the release of the original Might and Magic to win the game’s biggest tournament in 1986, thus securing for himself the status of best player in the world as of that instant in time.

The publisher of Star Fleet Battles was an Amarillo, Texas-based outfit known as Task Force Games. Two plucky freshly minted Texas Tech graduates named Stephen Cole and Allen Eldridge had founded Task Force in 1978, whereupon they had managed to acquire a license for one of the biggest science-fiction properties in the world by employing a circuitous — not to say dubious — stratagem: they had sub-licensed the intellectual property from Franz Joseph, the author of a tome called the Star Trek Star Fleet Technical Manual. Task Force would eventually secure a more direct contract with Paramount Pictures, the owners of Star Trek, but their game would always live on precarious legal ground, entirely at the sufferance of a corporate overlord which seemed only intermittently to realize that it existed. The tortuously circumscribed contract which allowed Task Force to make their game stipulated that they could use the ships and hardware and the various alien societies and races from the television show, but that they couldn’t mention specific characters or plot lines. Nevertheless, Star Fleet Battles has survived if not always thrived right up to the present day, even as countless other, higher profile efforts to make interactive versions of Star Trek have come and gone.

In 1983, Cole and Eldridge parted ways to some extent: the former started a new company called Amarillo Design Group to conjure up fresh Star Fleet Battles rules, scenarios, and supplements, while the latter continued as the head of Task Force, which in turn remained the publisher of the aforementioned game line and others. Some five years later, word reached Jon Van Caneghem at New World Computing, now flush with the commercial success which Might and Magic was enjoying, that Eldridge was interested in selling Task Force. It struck Van Caneghem as a chance to become a new type of gaming mogul, uniting the worlds of tabletop and computer gaming under a shared umbrella for the very first time. The dedicated grognards at Task Force could become a “proving ground for systems” on the tabletop before they were implemented on the computer, and product lines too could cross and recross the digital divide in pursuit of synergies no one had yet dared to imagine.

So, New World purchased Task Force and moved them into their offices in Van Nuys, California. Stephen Cole insisted on retaining control of the Amarillo Design Group and keeping it in its namesake city, but he did agree to continue to provide Task Force with their most prominent product line. To run the tabletop side of his empire, Van Caneghem hired one John Olsen, a board-gaming insider with an impressive resume; he had most recently headed the major British tabletop publisher Games Workshop’s American operation.

The whole scheme appealed greatly to a dedicated old-school gamer like Van Caneghem, but it was in reality muddle-headed in the extreme. He had bought into the tabletop industry when it was smack in the middle of a brutal downturn, prompted largely by, ironically enough, computer games. Avalon Hill, the old king of hobbyist wargames, was bleeding money, and even the likes of Dungeons & Dragons was rather less than it once had been. It would be some years yet before collectible card games and a new breed of ruthlessly balanced abstract board games known as “Eurogames” would breathe life back into the tabletop market. In the meantime, it was a disheartening place to be, where the only people making any real money were the big conglomerates marketing hoary family staples like Monopoly. “We really made a go at board games, but compared to the dollars and profitability of software… there was just no comparison,” Van Caneghem would later admit. “It didn’t make any sense.”

But that realization would take some time to fully dawn on him. Thus the output of New World Computing immediately after 1988’s Might and Magic II was dominated by the tabletop-to-computer (or vice versa) synergies Van Caneghem hoped to create. Granted, Task Force’s biggest property of all was a nonstarter here: there was no way that Paramount was going to allow Star Fleet Battles onto computers to compete with other efforts to bring Star Trek to the digital realm. But Task Force also had a long-established relationship with the Arizona game maker Flying Buffalo, and now served as a conduit for bringing some of the latter’s designs to the computer. First came a credible port of the venerable satirical card game Nuclear War. And then came a more ambitious project, a computer game based on Tunnels & Trolls, Flying Buffalo’s simpler would-be rival to Dungeons & Dragons. The system being very popular in Japan, this project became a trans-Pacific collaboration: a design document was written by the Flying Buffalo regulars Elizabeth Danforth and Michael Stackpole (both of whom already had a track record with computer games as well), then passed to a team in Japan for implementation. Finally, said team sent it back to New World to be made presentable in its original language. Unsurprisingly, the finished product felt more than a trifle schizophrenic, while its audiovisuals captured the charmingly pulpy, low-rent feel of its tabletop source material perhaps a bit too well for the presentation-driven contemporary computer-game market. “It didn’t go over all that well” in its native country, admits Van Caneghem, although it did do somewhat better in Japan.

But the most interesting of all the products of this rather confused period in New World Computing’s history is also the one which came closest to being a truly synergistic effort. Rather than being a tabletop game ported to the computer, King’s Bounty had one foot planted squarely in each realm from first to last.



It all started when Jon Van Caneghem began musing one day about how to bring some of the flavor of another of his old tabletop favorites to the computer: a board game known as Titan, one of those gloriously messy experiences from the heyday of Avalon Hill, the sort of game in which half the players might be eliminated in the first hour while the other half grind away at one another for five or six hours more. In Titan, players move their “stacks” of monsters over a highly abstracted map, trying to recruit additional monsters to add to their legions even as they also try to outmaneuver and attack the other players’ stacks when the advantage is with their side. When two players’ armies do bump into one another, the focus shifts to a tactical battle map representing the terrain in which the clash is occurring. The ultimate goal is to defeat each of your opponents’ titans — their super-units, the equivalent of the king in chess — as this is the only way to force them out of the game; you must also be careful to protect your own titan, of course. Getting to the end of a game of Titan can be a long journey indeed, one that can be by turns riotously entertaining and numbingly tedious.

Fond though he was of the game, Van Caneghem felt it would be problematic to bring a similar experience to life on the computer, mostly because of the difficulty of implementing an opponent artificial intelligence able to challenge a human player; at the time, New World was still making their games for 8-bit platforms with as little as 64 K of memory, which didn’t leave much scope for such things. But then Van Caneghem happened to talk to John Olsen about a concept Task Force had in development, with the working title of Bounty Hunter. Designed primarily by one Robert L. Sassone, it cast its players as the titular vigilantes for hire, moving around a board trying to nab more fugitives than their opponents. Van Caneghem thought the idea was brilliant in a big-picture sort of way, but a trifle under-baked in the details. But what if they combined it with some of the themes and mechanics of Titan? And what if they then made it into both a computer game and a board game?

The resulting game of King’s Bounty first appeared on computers in 1990, beginning with versions for the Apple II and for MS-DOS machines. Living in the hazily delineated borderlands between the CRPG and strategy genres, it was a refreshingly light-hearted, fast-playing change of pace from the more ponderous epics which dominated the genres to either side of it. You start out by picking one of four protagonists to guide: the Knight, the Paladin, the Barbarian, or the Sorceress. Then you proceed to wander the four continents of its world, fighting some monsters and recruiting others, visiting towns, besieging castles, looting treasure chests, and, yes, capturing villains for bounties, growing steadily stronger all the while. In addition to a cash reward, each successfully hunted bounty reveals another piece of a treasure map, an idea cheerfully stolen from Sid Meier’s Pirates!. Said map points the way to the King’s Sceptre, the recovery of which ends the game in victory. Rather than competing directly against a computer opponent who tries to accomplish the same goal as you, you battle the calendar: you have between 200 and 1000 days to complete your quest for the Sceptre, depending on the difficulty level you choose. By this means was New World able to dodge the problem of creating an artificial intelligence capable of going head to head with a human player.

A complete game of King’s Bounty generally takes only a few hours, making it a positively snack-sized offering in comparison to the 100-hour-plus likes of a Might and Magic. Yet it provides a compressed version of the same satisfying power fantasy — what Alan Emrich, reviewing King’s Bounty for Computer Gaming World magazine, called “expanding megalomania.”

There is some sort of intangible “charge” that comes out of seeing one’s character become a more powerful warlord, leading bigger armies, gaining an ever-increasing commission, subduing ever larger foes, and so forth. While this is hardly an original concept (it dates back to the first games of Dungeons & Dragons), it still holds an endearing appeal when done well. In King’s Bounty, this “Monty Haul” brand of adventuring is exquisitely executed, rewarding the player with plenty of strokes on his way to finding the Sceptre.

Unlike the typical one-and-done CRPG, the computer game of King’s Bounty is designed to be played multiple times, as one would a board game. Not only are there four protagonists and four difficulty levels to choose from, but the placement of monsters, treasures, bounties, and the Sceptre itself is randomized with each new game. If it isn’t a hugely deep game even by the standards of its day, it can be an entertaining one for far more hours than a rundown of its simple mechanics might suggest. As we’ll soon see, this trait along with many of its other, more prosaic qualities would return years later in the more famous series of games for which it served as something of a prototype.

In the here and now of 1990, however, King’s Bounty on the computer proved a modest commercial success but not a sterling one. By the time it appeared, Jon Van Caneghem had reluctantly acknowledged that it was tough enough for his company to survive as a maker of computer games alone, and was in the process of divesting New World of Task Force Games: he sold out to John Olsen, who then moved Task Force back to Amarillo. In 1991, Olsen’s Task Force finally published the board game of King’s Bounty. It preserved most of the key elements of the computer game in forms modified to suit the tabletop, but it attracted little attention in a moribund marketplace, and quickly went out of print. (Task Force itself would continue to release new products until 1996 and to market the best of the old ones until 2004.)

Meanwhile, with the dream of making analog and digital games side by side now consigned to the past alongside other follies of youth, Van Caneghem retrenched and refocused on New World Computing’s core commercial strength: namely, the Might and Magic CRPG series. New World made a slick new engine that left behind 8-bit machines like the Apple II, then made three new Might and Magic games with it between 1991 and 1993. During this period, as these games were garnering strong reviews and equally strong sales, it was easy enough to see King’s Bounty as just one more misbegotten sign of a confused time.

But by 1994, the Might and Magic line seemed to be losing momentum, in tandem with a dramatic downturn in the CRPG market in general. The new standard bearers for narrative-oriented games were tightly scripted “interactive movies” like Under a Killing Moon, along with 3D-rendered slideshow adventures like Myst; many had come to see the sort of sprawling, open-ended high-fantasy CRPGs which New World made as relics of the past. New World stood at a proverbial fork in the road. Their second-generation Might and Magic engine too had now passed its sell-by date. Did they damn the torpedoes and surge ahead with the expensive task of making a new one for a genre that had fallen so badly out of fashion? Or did they try something else entirely? Van Caneghem looked around and weighed his options.

The enormous success of id Software’s Wolfenstein 3D and DOOM had heralded the emergence of a less highfalutin and more visceral, action-oriented strand of computer gaming to challenge the artsier experiments of the period. Yet computer gaming as a whole has never been a monolithic or even a bifurcated beast. Along with the likes of DOOM and Myst, that yang and yin of the era, strategy games were enjoying a quieter golden age in the wake of such classics as Railroad Tycoon, Civilization, and Master of Orion. Further, some of the latest explorations of the genre almost seemed to have taken a lesson from King’s Bounty about the appeal of CRPG-style character building within a strategic framework: X-COM and Master of Magic remain famous to this day for the intense personal bonds they forge between their characters and the players who control them. Perhaps, mused Van Caneghem, King’s Bounty had just been a bit too far ahead of the curve, implemented using technology that couldn’t quite do justice to its concept. Perhaps he should try again.

But, this being an older and wiser Jon Van Caneghem, he would do some things differently this time. Whatever the current state of the CRPG market, the Might and Magic name still had the benefit of widespread familiarity. Why let that go to waste? Why not make the new game a spinoff of Might and Magic rather than a completely new, completely unfamiliar thing?

And so Heroes of Might and Magic was born. Van Caneghem could hardly have imagined how successful it would prove on every level, from the crass commercial measure of units sold to the more idealistic one of hours of fun delivered to millions of people all over the world.



At this point, I owe it to those readers who aren’t among said millions to explain just what Heroes of Might and Magic is all about. Ironically, it has less to do with the CRPG series whose name it borrows than with King’s Bounty. Beyond sharing a fantasy theme that involves plenty of monster killing and leveling up as a reward for it, and some halfhearted efforts to tie it into the CRPG series’s universe, it has almost nothing to do with its older namesake. “If not for copyright lawyers, Heroes of Might and Magic could as easily have been called Heroes of Ultima, Heroes of Wizardry, or Heroes of Advanced Dungeons & Dragons,” noted Jason Kapalka accurately in his review of the first game in the series for Computer Gaming World.

But even the influence of King’s Bounty shouldn’t be overstated. For all that its roots so plainly lie there, Heroes is at bottom a very different sort of game; there’s far more distance between King’s Bounty and the first Heroes than there is between the latter and any of the subsequent entries in the series. Heroes abandons the business about bounty hunting in favor of being a true strategic wargame, complete with computer and/or human opponents who are trying to conquer the same map that you are. Rather than guiding a single hero, you can now recruit and control up to eight of them, along with the sedentary garrisons you collect to defend your castles against the other players whose heroes are also roaming the map with their armies.

Newbies to the series today are often advised to skip the first game, on the argument that everything it does is done bigger and better by the later ones. This is true enough as a statement of fact; those games are packed full of much more stuff — stuff which, in contrast to that found in many sequels, really does make an already compelling game that much more compelling. Still, I don’t really agree with the argument that this fact makes Heroes I extraneous. On the contrary, it strikes me as a perfect place to start with the series. It introduces the core concepts that carry through all of the subsequent games, leaving those successors free to layer their additional complexities and nuances onto its sturdy frame. So, this article will focus exclusively on the often neglected first game. There will be plenty of time to praise the others in later articles.

At the time of its release in the fall of 1995, Heroes I seemed merely the latest exemplar of what was already a long tradition of fantasy strategy on the computer. The most notable recent game of this sort had been Steve Barcia’s Master of Magic. It and Heroes of Might and Magic share many similarities: both are games of conquest that expect you to recruit and nurture individual heroes to lead your armies, even as you also guide the development of the castles and towns that spawn the soldiers who fight under them; both prominently feature the magic that is found in both their names; both shift between a strategic map where the big decisions are made and a tactical view used for battles. Yet the two games’ personalities are markedly different, so much so that no one who has actually played both of them could ever confuse them. Master of Magic is a gonzo, ramshackle creation, stuffed with so many spells, monsters, treasures, and general flights of fancy that it doesn’t really matter that a third or more of it all doesn’t really work, on a design or even sometimes a purely technical level. Heroes, by contrast, is a much more finely honed creation, replacing the fascination of Master of Magic‘s multitudinous sprawl with its own brand of fiendishly addictive playability.

One difference between the two stands out above all the others: while Master of Magic trusts in its random world generator to create interesting dilemmas for the player, Heroes embraces set-piece, human-crafted scenarios. These fall into two categories: there are standalone scenarios you can play — no fewer than 34 of them in the version of the game found at digital storefronts today — and an eight-scenario campaign which you can play through from the point of view of any of the four factions in the game. This campaign lacks most of the bells and whistles of the sequels: each successive scenario is introduced by a bare few sentences of text rather than an elaborate cut scene. As a result, it inculcates little sense of narrative momentum and still less sense of identification with the faction leader you’re meant to be playing; if the campaign scenarios had merely been shoveled into the mix as yet more standalone scenarios, no one would likely have been the wiser. Still, it’s a start, the germ of an idea which New World later took to much more ambitious heights.

Whether standalone or a part of the campaign, a scenario generally gives you one hero with a few units under his command and one partially developed castle with which to start. Fog of war means that only a tiny portion of the full map is revealed to you at the beginning. In the example shown below, we’ve started as a Barbarian, one of the four possible factions; the others are the Knights, the Sorceresses, and, replacing King’s Bounty‘s Paladins, the Warlocks. One player of each faction is found in each scenario. All of the factions have their own strengths and weaknesses, but in general the Barbarians and Knights are better suited for physical combat while the Sorceresses and Warlocks are better at casting spells.

Each of the four factions has its own style of castle, with its own roster of structures to be built up. Each castle can provide different types of units to fight for you — up to six types in all after you build the appropriate “dwellings” for them by spending your gold and other resources. We’ve been given a very generous start here; we already have four of the six possible Barbarian unit types — namely goblins, wolves, orcs, and ogres — available to join our legions. Only trolls and the fearsome cyclopes are still to go.

In fact, we find that we already have the necessary resources — 20 ore and 4000 gold — to build a bridge, the dwelling that produces trolls. A few of the creatures in question become available to hire as soon as a dwelling is built, followed by more at the beginning of every week; each turn consists of one day.
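That recruitment rhythm — an opening stock when the dwelling goes up, topped off at the start of each new week, with one turn per day — can be sketched in a few lines. The numbers below are hypothetical placeholders, not the game’s actual growth rates:

```python
def available_recruits(initial_stock: int, weekly_growth: int,
                       day_built: int, current_day: int) -> int:
    """How many creatures a dwelling has on offer, assuming (hypothetically)
    that it opens with a small stock and restocks at the start of each week."""
    # Weeks begin on days 1, 8, 15, ...; count how many new weeks have
    # started since the dwelling was built.
    weeks_since_built = (current_day - 1) // 7 - (day_built - 1) // 7
    return initial_stock + weekly_growth * max(0, weeks_since_built)
```

With a dwelling built on day 3 that opens with 4 creatures and grows by 4 per week, the pool stays at 4 until day 8, then jumps to 8, and so on.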

We can also recruit additional heroes at our castles, at the cost of 2500 gold each. Two are shuffled to the top of the pool for our consideration at any given time. Note that, although our starting faction is the Barbarians, we need not confine ourselves to recruiting only Barbarian heroes; nor must individual heroes “follow suit” in terms of the types of units which join their armies, although there is a morale advantage to grouping units of the same faction together.

With money tight, resources scarce, your armies weak, and your heroes unproven, the early game is always a race to scout out as much of the map as possible in order to better understand your strategic position, whilst also grabbing up any loot you find lying about. (Luckily, our Barbarian hero is particularly well-suited to this stage of the game, being the fastest mover of the four types.) There’s so much that is tempting: gold and other resources to replenish our scant stocks, sawmills and mines that can provide a constant supply of resources, magic artifacts which our heroes can carry around to aid them, obelisks which reveal pieces of a treasure map to an ultra-powerful Ultimate Artifact (a legacy of King’s Bounty‘s King’s Sceptre), even some places where we can recruit new units to join our ranks without having to pay for them, as we must at our castles.

But soon the easy pickings around our starting castle have all been scarfed up, and it’s time to start fighting some of the monsters scattered about the map, who guard things that we want and block passes that can take us farther afield. The hero who commands each of your armies is a typical armchair general: he doesn’t fight directly, but rather stays in the rear, adding his Attack and Defense scores to those of his troops, and casting spells that can become devastating by the late game. (For this reason, Barbarians and Knights tend to do best in the earlier stages of a scenario, but can be in for a rude shock if they don’t eliminate their spell-casting rivals before they grow too powerful.) In a testament to the old adage that heroes never die but only fade away, a hero whose army is defeated is merely returned to the pool of his colleagues that are waiting to be hired; if you’re not careful, you might find yourself fighting against a hero who was once one of yours, whom you spent a long time lovingly parenting for the benefit of one of your opponents.

The turn-based combat is as conceptually simple and fast-playing as the rest of the game, but nevertheless boasts surprising tactical subtleties. Each unit type has its own initiative value, movement speed, attack type (melee or ranged), attack potency, armor class, and hit points, and sometimes its own special strengths and weaknesses on top of all of these basic ones. Learning to build and use your armies most effectively, and learning how best to counter the various types of enemies you meet, takes quite a few complete games, but doing so is very rewarding.
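Those per-unit statistics suggest a simple data model. The sketch below is purely illustrative — invented names and numbers, not New World’s actual rules — but it shows the characteristic Heroes-style bookkeeping: identical creatures fight as a single stack, whole creatures die first, and any leftover damage is carried as wounds on the top creature:

```python
from dataclasses import dataclass

@dataclass
class Unit:
    name: str
    speed: int        # governs turn order and movement range
    ranged: bool      # melee or ranged attacker
    damage: int       # damage dealt per individual creature
    defense: int
    hit_points: int

@dataclass
class Stack:
    unit: Unit
    count: int
    wounds: int = 0   # damage already absorbed by the top creature

    def deal_damage(self, target: "Stack") -> None:
        # Total damage scales with the number of attackers; add the
        # target's existing wounds, then convert damage into kills.
        total = self.unit.damage * self.count + target.wounds
        killed = min(target.count, total // target.unit.hit_points)
        target.count -= killed
        target.wounds = total % target.unit.hit_points if target.count else 0
```

For example, 30 hypothetical goblins dealing 2 damage apiece inflict 60 damage on a pair of 40-hit-point ogres: one ogre dies and the survivor carries 20 wounds, so a second identical attack finishes the stack.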

Sooner or later, you’ll come into contact with one of the other players — sooner if you’re playing on a smaller map, later on a larger one. Once that happens, exploration begins to compete with military strategy in your ranking of priorities. In most scenarios your goal is simply to capture all of your enemies’ castles, although a few of the campaign scenarios do mix things up a bit by asking you to be the first to recover the Ultimate Artifact or to capture a specific neutral castle. Regardless, Heroes shares with King’s Bounty a quality of brevity that sets it apart from most strategy games of the mid-1990s. Even its largest maps seldom take more than a few hours to explore and conquer. (Of course, this doesn’t mean that you won’t immediately start on another scenario…)

Such is a very, very broad overview of Heroes of Might and Magic. Yet it hardly begins to explain what makes it such a great game. This is every critic’s dilemma: it’s always easier to identify the flaws that keep a given game from greatness than it is to capture that peculiar kismet that yields a game as compulsively playable as this one. Still, it’s part of what I’m paid to do here, so I’ll do my best.

Like so many of the great ones, Heroes is perhaps most of all a tribute to its designer’s willingness to test and test and iterate endlessly until he gets it right. Jon Van Caneghem:

Any time you’re creating a new game — a game that has mechanics people haven’t seen before — there’s a lot of resistance to it. They’re used to something they’ve been playing all the time, and now you’re giving them something new. It’s foreign, so the first reaction is, “I don’t like it.” And if the game isn’t really good, that makes it even worse. If it’s not balanced or it’s not playing right, it becomes, “I don’t like this at all.”

So, my testing department on Heroes was not liking the game. They didn’t like the mechanics; it had a lot of imbalance to it; it was too slow; it was too different. And I just kept hammering at it. I said, “I know this is going to be fun. This is gonna work.” I really analyzed what they were doing and what was bothering them. The length of the turns was too long. If I made the distance that the heroes got to move on the map [in a single turn] too short, they didn’t like it. The same if I made it too long. There was a sweet spot. I made all these little tweaks, and said, “Try it again. Try it again. Try it again.”

All of a sudden, they started getting into it. They started battling each other. Then they started arguing over strategies. That’s my usual moment of clarity in game development. The moment QA is arguing over which strategy is best to win, you’re ready to ship.

This willingness to take every scrap of feedback seriously placed its stamp on every aspect of the game, from the interface, which is as close to perfect as the technology of 1995 could possibly have allowed, to more abstract questions of playability and balance. Consider, for example, the limit of eight heroes per player. Such a number gives you a wealth of possibilities each turn by the late game, but keeps the game’s scope from exploding to the point where keeping track of everything becomes a daunting chore rather than a pleasure, as tends to happen in such predecessors as Master of Magic.

The opponent artificial intelligence is another case study in the ruthless pursuit of fun. It’s a virtual given that the computer will be allowed to cheat somehow in a game of this sort stemming from this era; it simply wasn’t possible at this time to program opponents that could give a decent human player a run for her money on a level field. But instead of merely increasing all of the computer players’ relevant numbers by 50 percent or more, as so many strategy games of the 1990s did, Heroes cheats in a way that isn’t so obviously egregious: its computer players suffer from no fog of war, meaning they know where every resource and castle is on the map from the start and can react accordingly. The resulting competition remains decidedly asymmetrical, but it feels like a struggle against deviously clever opponents rather than blatantly cheating ones. And at the end of the day, the subjective feel of the experience is all that matters. (Then again, if you really want a challenge, you can play against up to three human friends in hot-seat mode, or against one friend over a network. These too are options that surprisingly few turn-based strategy games of Heroes‘s era offer.)
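That particular cheat is easy to picture in code. In the hypothetical sketch below, human and computer players share the same targeting logic; the only difference is that the computer’s candidate list is never filtered by fog of war:

```python
def pick_target(objectives, explored_tiles, omniscient, position):
    """Choose the closest known map objective by Manhattan distance.
    An omniscient (computer) player considers every objective; a human
    player only those on tiles already lifted out of the fog of war."""
    known = objectives if omniscient else [
        o for o in objectives if o["pos"] in explored_tiles
    ]
    if not known:
        return None  # nothing visible yet: keep exploring
    return min(known, key=lambda o: abs(o["pos"][0] - position[0])
                                  + abs(o["pos"][1] - position[1]))
```

The asymmetry falls out naturally: early on the human player often has no valid target at all, while the computer is already marching toward the nearest mine or castle.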

In the final analysis, there is no magic bullet that makes Heroes so much fun, just a long string of small decisions, decided almost invariably correctly thanks to Van Caneghem’s willingness to listen to what his first players told him. The result is a game that’s addictive for all the right reasons — one that’s simple and approachable on the surface but is full of unexpected depths, a possibility space that’s enormously rewarding to explore and learn how to optimize. You’ll feel as if you’re leveling up like one of your heroes as you learn how to play on the Easy scenarios, polish your skills on the Normal ones, and at last find ways to triumph even over the Tough and Impossible ones. You might occasionally slam down your computer’s lid in frustration along the way, but you’ll always come back the next day to try again.

To all of this must be added the game’s immensely appealing presentation, which makes its world a nice place to be even when the tide of war is going against you. One similarity it does share with the CRPG series which gives it its name is its eschewing of the “dark, gritty” aesthetic of so many drearily serious fantasy games of its own era and later ones. Heroes is serious about being fun, but it never takes itself all that seriously in any other sense. Its world is one of bright primary colors that pop right on your monitor screen, of cartoon-style monsters duking it out without ever shedding a visible drop of blood. Its stories and settings don’t make a lick of sense, being a pastiche of whatever mythologies, fairy tales, and pop-culture tropes happened to be lying around the offices of New World. Why do the Sorceress’s glitter-splattered sprites, unicorns, and phoenixes look like an explosion at a My Little Pony factory? And why on earth do cyclopes spring forth from Mesoamerican-style pyramids, and what has any of that got to do with Barbarians anyway? Nobody knows and nobody cares. Heroes succeeds to a large degree through its sheer giddy likability, a reflection of the personality of the man who conceived it. No game less pretentious than this one has ever been made.

Another thing to love about the game is the fact that its roster of heroes consists of women and men in nearly equal proportion. The former are every bit as cool and capable as the latter, without ever being over-sexualized in order to please the male gaze. This level of enlightenment was sadly rare among mainstream strategy games of the 1990s. Heroes stands almost alone in being so welcoming to absolutely everyone.

Complaints? Any critic worth his salt must come up with a few, I suppose. So, I’ll note that some of the more difficult scenarios are as much exercises in puzzle solving as pure strategizing, almost demanding that you fail a few times before you can piece together the correct route to victory. Then, too, despite all the extensive play-testing it received, Heroes is no paragon of balanced game design by modern lights. The Warlock faction is much more powerful than any of the others. Its top-end units are dragons, the best in the game by far. They have an absurd number of hit points, are completely immune to magical attacks, and can kill two stacks of enemies in one shot thanks to their fiery breath; the first player able to start purchasing significant numbers of dragons is all but guaranteed to win. And Heroes by no means resolves what has long been the biggest conundrum in wide-angle strategy-game design: that of the anticlimactic mopping up that follows that tipping point when you know you’re going to win. The need to optimize your play and weigh every decision carefully — do I spend my precious resources on a dwelling that will produce better units or on an addition to my mage guild that will give me better spells? — goes away after this point. All of the most interesting choices and the most nail-biting drama are front-loaded.

On the other hand, none of these things are necessarily unadulterated negatives. “Solving” a difficult map that’s been giving you fits can be a thoroughly satisfying accomplishment in its own right. And any faction can capture a Warlock castle and thereby gain a pathway to dragons, meaning that the starting Warlock player is most definitely not guaranteed to win — and then as well, the sheer joy of romping across the landscape with a ridiculously overpowered army of dragons shouldn’t be taken lightly when considering these matters. Much the same riposte heads off complaints about the anticlimactic endgame. It usually doesn’t take that long to win once the tipping point is reached, and doing so always warms the heart with megalomaniac joy.

I’ve used the word “addictive” a couple of times already in relation to Heroes of Might and Magic. And indeed, the word seems unavoidable in any review of it. The “one more turn” syndrome these games provoke has long been infamous. I’m not overly prone to gaming addiction myself — unlike some of my friends, I have no stories to tell of playing Civilization or Europa Universalis for 48 hours straight — but Heroes of Might and Magic is the closest thing to my personal gaming Kryptonite. Playing Heroes — solo or, even better, with your friends — is so much fun that it can be downright dangerous to your life balance. I do believe I’ve spent more time with this series than any other in the quarter century since I first encountered it. Heroes is just that engaging, even as it remains hard to explain exactly why. Chalk it up to the ineffability of interactive flow.



Upon its release in September of 1995, Heroes of Might and Magic reaped all of the commercial rewards it deserved. It became the biggest hit in the history of New World Computing to that point, and was pronounced Strategy Game of the Year by Computer Gaming World. His CRPG series forgotten for the moment, Jon Van Caneghem went right to work on a Heroes II, which he would make bigger, richer, and even more addictive than its predecessor. I will, needless to say, be writing about that one as well once we reach that point in our journey through time.

(Sources: the books Might and Magic Compendium: The Authorized Strategy Guide by Caroline Spector and Designers & Dragons Volumes 1 and 2 by Shannon Appelcline; Computer Gaming World of October 1988, November 1990, December 1995, and April 2004; Retro Gamer 49; Space Gamer of August 1981; XRDS: The ACM Magazine for Students of Summer 2017. Online sources include the CRPG Addict’s final post on Might and Magic: Darkside of Xeen, Matt Barton’s interviews with Jon Van Caneghem and Neal Hallford, Julien Pirou’s interview with Jon Van Caneghem, the RPG Codex interview with Jon Van Caneghem, and The Grognard Files interview with Tim Olsen.

Heroes of Might and Magic can be purchased as a digital download at GOG.com.)

 

Posted by Jimmy Maher on December 24, 2021 in Digital Antiquaria, Interactive Fiction

 


Might and Magic

Wizardry and Ultima were great inspirations for me. But I wanted to make my own vision for a CRPG. I wanted more of an open-world feel, with quests, puzzles, and an emphasis on exploration and discovery. I wanted party-based tactical combat, tons of magic items to find, and an ever-increasing feeling of power as you leveled your characters. Most of all, I wanted players to feel free to experiment with all of the “tools” I put in the game, so that they could enjoy playing any way they wanted to.

— Jon Van Caneghem

The long-running Might and Magic CRPG series is an easy thing for a project like this one to overlook. These games were never obviously, forthrightly innovative, being content to rework the same high-fantasy territory over and over again. Meanwhile, their level of commercial success, although certainly considerable at times, never became so enormous as to demand our attention all by itself.

What Might and Magic consistently did have going for it, however, was the quality of being fun. Jon Van Caneghem, the series’s guiding light, was himself a lifelong compulsive gamer, and he stuffed his creations full of the things that he himself enjoyed: quests to fulfill, dungeons to explore, loot to collect, and an absurd variety of monsters to fight, all couched inside a non-linear, open-ended philosophy of gameplay that eschewed the sort of (overly?) elaborate set-piece plots that had begun to mark many of his series’s peers by the dawn of the 1990s. Might and Magic games, in other words, weren’t so much stories to be experienced as places to be explored.

Chet Bolingbroke — better known as The CRPG Addict — names “generous” as his defining adjective for Might and Magic.

The Might and Magic games have always been generous. Jon Van Caneghem clearly had a history with tabletop RPGs and early CRPGs, but he envisioned worlds of bounty where those titles were sparse and unyielding. In Wizardry, Might and Magic’s most obvious forebear, a 16 x 16 map might only hold a couple of fixed combats and two textual encounters. Van Caneghem’s strategy was to give you something in every row and column. I have maps from the first game in which I had to go into the double letters to annotate everything. A Dungeons & Dragons module might take you from Level 2 to 5 over the course of 30 hours of campaigning. Van Caneghem had no problem offering games in which you hit Level 100 or more. Where Dungeons and Dragons and Wizardry regarded attributes as closely policed within a 3-18 range, you might start at 15 strength in Might and Magic and end at 500.

Throughout its long history as a series, Might and Magic never strayed far from this gonzo approach. It remained always that which it had first aspired to be: an exercise in exploring spaces, killing the monsters you met there, and taking their stuff so that you could use it to kill even tougher monsters somewhere else. But within that template, it did find room to innovate — to do so, in fact, in more ways than it’s generally given credit for, including a few innovations that have become staples of the CRPG genre today. If it was no poster child for games as art, it had arguments of its own to make for games as pure fun.



Jon Van Caneghem was in some ways the last of a breed: the last of the living-room gaming entrepreneurs who dominated at the very start of the industry, with their self-coded products, their homemade packaging, and their naïve gumption that took the place of business plans and venture capital.

Van Caneghem grew up as a child of privilege in the 1970s near the heart of Hollywood, the stepson of a prominent neurologist. His parents had high ambitions for him; he attended grade school at the elite bilingual Lycée Français de Los Angeles. But he never quite fit the mold his parents had cast for him. A slow reader and reluctant student, he was obsessed with games from a very young age, beginning with checkers, then moving on to chess, Risk, and Diplomacy, then to Avalon Hill’s wargames, and finally, inevitably, to Dungeons & Dragons. He entered UCLA as a pre-med student in 1979, but becoming a doctor was his parents’ dream for him, not his own. Once there, he continued to devote the bulk of his energies to playing games, as well as to another, more dangerous obsession: racing cars on the legendary Mulholland Drive.

Van Caneghem discovered computers and computer games during his middle years at university, just as many of his friends were finding jobs and significant others, and were left with less time for game nights as a result. “Then a friend of mine showed me an Apple II, and he was playing a bunch of simple games on it,” he remembers. “This was great! I could play any time I wanted and didn’t have to wait for anyone to get together. So, I immediately got one.”

Like at least half of the Apple II world at the time, he was soon in the throes of a full-blown Wizardry addiction; he guesses he must have finished it “seven or eight times.” The original Ultima also consumed plenty of hours. It was ironically the flaws in these pioneering but technologically limited early CRPGs that drove him to go from being a game player to a game maker.

Everyone started to tell me, “You’re always complaining about these games. Why don’t you make your own?” And I said that I didn’t have the slightest idea how to program. But it intrigued me. I switched from being a pre-med student to a math and computer-science major at UCLA and just started delving into the Apple II, absorbing every magazine and piece of information I could find. Everything I was learning at school was just ancient history as far as the computer was concerned, with punched cards and mainframes. There was nothing about personal computers. So I pretty much had to teach myself everything.

Much to his parents’ relief, he finished his Bachelor’s program at UCLA in 1983, albeit not in the major they had planned for him. Then he dropped a bombshell on them: he wanted to make his own computer game and sell it. They reluctantly agreed to give him a couple more years of room and board while he chased his dream.

In the end, it would take him almost three years to make the grandiosely titled Might and Magic: Book One — The Secret of the Inner Sanctum. In what Van Caneghem still calls the most satisfying single creative experience of his life, he designed and programmed the whole thing himself. He drew the graphics with some help from a pair of outside artists he hired, and outsourced some of the writing to his wife. But at least 90 percent of the sprawling final product was his work alone.

When he began to shop the game around to publishers at last in 1986, he found they were very interested; CRPGs were enjoying a boom at that time, with Ultima IV and The Bard’s Tale having been two of the biggest hits of the previous year. Yet he was sadly underwhelmed by the terms he was offered, which might allow him to earn $1 per copy sold at a retail price of $35 or more.

So, having come this far alone, he decided to self-publish his game. Being a self-described “ultimate Star Trek nut,” he chose New World Computing — as in “strange new worlds and new civilizations” — for the name of his new company. He recruited friends to draw the art for a box and a rather handsome map of his game’s land of Varn, then bought himself a PO Box and a toll-free order line along with advertisements in Computer Gaming World and A+, respectively computer gaming’s journal of record and one of the most popular of the Apple II magazines. Having been taught from a young age that success in life often hinges on looking like a success, he pulled out all the stops for the advertisements. Instead of the quarter-page black-and-white ad with fuzzy stick-figure art that was typical of home-grown software entrepreneurs like him, he convinced his parents to splash out one more time for a professionally laid-out, full-page, full-color spread that looked as good as any of those from the more established competition and better than most of them; this was an advertisement that couldn’t help but get Might and Magic noticed.

The first Might and Magic advertisement, which came complete with veiled jabs at The Bard’s Tale. It’s amusing to see how it describes pencil-and-paper map-making as a virtue rather than a necessary evil. Most gamers apparently didn’t agree: the very next game in the series would feature one of the CRPG genre’s first simple auto-maps.

And it did get noticed: it was a case of the right game at the right time putting its best foot forward, and the response exceeded all of his expectations. The order line which he’d installed in his bedroom rang all night long, and Van Caneghem, who had no intention of missing a single sale, turned into a sleep-deprived zombie thanks to it. Relief came in the form of another phone call, this one from Jim Levy, the CEO of Activision. Levy explained that Activision was starting something they called an “affiliated-label program” to help small developers get their products to market, and he thought that New World Computing would be an excellent candidate. (He may have been motivated to atone, to Activision’s stockholders if no one else, for his infamous rejection of Interplay’s The Bard’s Tale as “niche-ware for nerds” — a rejection which had delivered to Activision’s arch-rival Electronic Arts their biggest hit to date.) Activision could take phone orders and, much more importantly, distribute Might and Magic to stores all over the country, all while taking a smaller cut than a traditional publisher and leaving the New World logo as the only prominent one on the box; they could even put Van Caneghem in touch with people who could port it to other platforms. It sounded very good indeed to the young entrepreneur.

Within weeks of this conversation, Levy was fired from Activision, but the deal he had made with Van Caneghem remained in place. Might and Magic’s arrival in stores in early 1987 was heralded by a glowing review in Computer Gaming World from Scorpia, the magazine’s longstanding adventure and CRPG columnist. She called it “world touring on a grand scale”: “There is so much to learn and enjoy in Might and Magic because its scope and complexity are amazing.” Ports to the IBM PC and Commodore 64 (the latter done by none other than John Romero of later DOOM fame) were available well before the Christmas of 1987. While it would never quite manage to join Ultima, The Bard’s Tale, and, soon, Pool of Radiance and the other SSI Gold Box games in the very top commercial tier of late-1980s CRPGs, it did become the leading name among the second tier, more than enough to get New World off the ground properly and create high expectations for a sequel.

Despite Scorpia’s rapture over it, this first Might and Magic game was, like all of the ones that would follow it, disarmingly easy to underestimate. It wore the influence of Wizardry and its successors, The Bard’s Tale among them, prominently on its sleeve: it too was an exercise in turn-based, grid-based exploration, which you navigated from a first-person point of view despite controlling a party of up to six characters. (The oddity of this has led to its sub-genre’s modern nickname of “blobber,” for the way it “blobs” all of your characters together into one octopus-like mass of sword-wielding arms and spell-casting hands.) Its technology verged on the primitive even in 1987, the year which saw the introduction of real-time gameplay to the CRPG genre in Dungeon Master. Nor was it any paragon of balanced design: the early stages, when your newly created party consisted of naked, penniless club-wielders, proved so difficult that Van Caneghem grudgingly added a slightly — slightly, mind you — more hardy pre-made starting party to later releases. Even once your characters made it to level three or so and were no longer as weak as infants, the difficulty level remained more jagged than curved; monsters could suddenly appear on some levels that were an order of magnitude more powerful than anything else you’d met there, killing you before you knew what had hit you. This was an especial problem given that you could only save your game from one of the nine adventurer’s inns scattered around the sprawling world, a result more of technical limitations than designer intent. Meanwhile the story was mostly nonexistent, and silly where it did exist, culminating in the revelation that the entire world of Varn you’d been exploring was really a giant artificial biosphere created by space aliens; “Varn” turned out to be an acronym for “Vehicular Astropod Research Nacelle.”

If you could get past all that, however, it was a surprisingly rich game. Van Caneghem has noted that, though he became a pretty good programmer in the course of making Might and Magic, he was always a game designer first, a game programmer second: “I wasn’t a programmer who knew a neat graphics routine and then turned it into a game. I think most people at the time, except for a few, came from that end of it.” As one of the few who didn’t, Van Caneghem took a more holistic approach. Here we have to return to this idea of generosity that the CRPG Addict broached for us at the beginning of this article. Primitive though it was, Might and Magic was still crammed to bursting with stuff, enough to fill a couple of hundred hours if you let it: 250 different items to collect, 94 different spells to cast, 200 different monsters to fight, 55 individual 16-square-by-16-square areas to map. It boasted not only dungeons and towns, but a whole grid-based outside world to explore. The lumpy amalgamation was riddled with cheap exploits as well, of course, but discovering them was half the fun. One should never dismiss the appeal of building a group of adventurers up from a bunch of babes in the woods who fall over dead if a goblin looks at them sideways into a six-person blob of terror that can annihilate a thousand of the little buggers at the stroke of a key.

For all its manifest derivativeness in the broad strokes, Might and Magic wasn’t without a smattering of genuinely new ideas, at least one of which became quietly influential on the future course of its genre. As you explored its maps, you often met people who gave you quests: tasks to accomplish apart from revealing more territory and collecting more experience points. These could range from such practical affairs as delivering a letter to another town to more, shall we say, whimsical endeavors, such as climbing every tree in a given area. Completing these side-quests provided rewards in the form of additional experience points and riches. More importantly, it added an additional impetus to your wanderings, a new dimension of play that was different from methodically lawn-mowering through a sometimes numbing procession of dungeons and monsters. In time, side-quests like these would become an essential component of many or most CRPGs.

Jon Van Caneghem took advantage of his first game’s success to set up a proper office for New World in Van Nuys, California, and hire a staff made up of people much like himself. “A lot of our employees had met at game conventions, and all of our roots were in gaming,” he says. “At 5:30, the office would shut down and the gaming would start. Everyone was always there until all hours of the night, playing games.” He noted in a contemporary magazine profile that he wished above all to keep the New World offices “loose, friendly, and creative.”

He and his fellow travelers shipped Might and Magic II: Gates to Another World in December of 1988. Although clearly of the same technological lineage as its predecessor, it was a big step forward in terms of the details. Not only did it offer an even vaster profusion of stuff, spread over 60 different discrete areas this time, but it came with some significant quality-of-life improvements, including a reasonably usable auto-map if you chose to invest in the Cartography skill for at least one of your characters. Another subtle but welcome improvement came in your ability to set a “disposition” for your party, from “inconspicuous” to “thrill-seeker”; this allowed you to set the frequency of random monster encounters to your own liking, depending on whether you were just trying to get someplace or were actively grinding for experience points. But the most obvious improvement of all was the revamped graphics, courtesy of the full-time artists Van Caneghem had now hired; a version for the Commodore Amiga, the audiovisual wundermachine of the era, looked particularly good. The story was as daft as the last one, taking place on another world… err, alien biosphere called Cron instead of Varn. (The stories of Might and Magic do rather tend to satirize themselves…) But, just like last time, it really didn’t matter at all in a game that was all about the joy of exploration and exploitation.

The improved audiovisuals of Might and Magic II highlighted another aspect of the series that had perhaps been obscured by the primitiveness of the first game. In keeping with Van Caneghem’s sunny, optimistic personality — writer and designer Neal Hallford, who came to work with him at New World during this era, calls him “terminally mellow” — the environs of Might and Magic would always be bright, colorful, fun places to inhabit. The series would never embrace the “dark, gritty” aesthetics that so much of the games industry came to revel in as the 1990s wore on.

Jon Van Caneghem the businessman seemed to live a charmed life not out of keeping with his vaguely fairy-taleish visual aesthetic. For instance, he dropped Activision in favor of becoming an affiliated label of Brøderbund in 1989, just before the former company — by this point officially known as Mediagenic — imploded, defaulting on their payments to their entire network of affiliated labels and destroying many of them thereby. He even escaped relatively unscathed from a well-intentioned but financially ill-advised venture into the board-game market, which I’ll cover in more detail in my next article.

For now, though, suffice to say that it was a big part of the reason that Might and Magic III: Isles of Terra wasn’t released until 1991. Like its predecessors, this latest entry in the series tossed you into another new world and let you have at it. Still, while philosophically and even formally identical to the first two games — it remained a turn-based, grid-based blobber — it was a dramatic leap forward in terms of interface and presentation. Designed on and for a 32-bit MS-DOS machine instead of the 8-bit Apple II, it sported 256-color VGA graphics that replaced many of the older games’ numbers with visual cues, a lovely soundtrack composed for the new generation of multi-voice sound cards, and a mouse-driven interface. But its most gratifying improvement of all was more basic: it finally let you save your progress inside dungeons or anywhere else you liked. I would venture to guess that this change alone cut the number of hours the average player could expect to spend finishing the game in half, in spite of the fact that its number of individual areas actually grew slightly, to 64.

Veterans of the series could and sometimes did complain that the new level of professionalism and polish came at the cost of some of its old ramshackle charm, and Van Caneghem himself has confessed to being worried that people would notice how the new game’s average completion time was more likely to be in the tens than the hundreds of hours. But he needn’t have been: gamers ate it up.

In his review for Computer Gaming World, Charles Ardai captured how impressive Might and Magic III was in many ways, but also made note of the ennui that can come to cling to a series like this one — a series which is dedicated to doing the same thing better in each installment rather than doing anything dramatically, holistically new.

Unfortunately, Might and Magic III is also a remarkable exercise in water-treading, which does not advance the genre one inch in terms of plot, event, or ontology. Here we are again, one realizes, a band of hardy adventurers — elves, gnomes, dwarves, clerics, paladins, sorcerers — tramping about the wilderness and facing off against assorted orcs, rats, bugs, and other stock uglies.

Here we are, once more mapping a corner of Middle-earth, or a reasonable facsimile thereof, in pursuit of yet another necromantic ne’er-do-well with a faux-mythic name and a bad disposition. Here we are again — will we never be somewhere else?

On the other hand, there is a market for this stuff. David Eddings rewrites the same high-fantasy novel over and over again and never fails to hit the bestseller lists with it…

Ardai concluded that “the gamer who wants to be surprised by discovery, conversation, and story is likely to be disappointed in Might and Magic III, while the gamer who simply wants to play [emphasis original] may be ecstatic with the game.” Few sentences sum up the Might and Magic series as a whole more cogently.

The next pair of games in the series pushed the boundaries in terms of size without even attempting to address Ardai’s complaints. (After all, if it worked for David Eddings…) The name of 1992’s Might and Magic: Clouds of Xeen reflected a sudden games-industry conventional wisdom that numbered titles could actually be a drag on sales, being a turn-off to the many folks who were acquiring home computers for the first time in the early 1990s. “I felt strongly that everyone wants to see the next James Bond movie, but no one wants to see Rocky IX,” says Van Caneghem. “So off came the numbers.” This sentiment would die away as quickly as it had flared, both inside and outside of New World.

Both Clouds of Xeen and the would-be Might and Magic V, which was known simply as Might and Magic: Darkside of Xeen upon its release in 1993, took place, as you might have guessed, in yet another new land of adventure known as Xeen. More interestingly, when combined they represented another of New World’s subtle experiments with the nuts and bolts of the CRPG form if not the substance. If you installed them both on the same computer, they turned into World of Xeen, a single contiguous game encompassing no less than 201 discrete areas. Outside of its central gimmick, World of Xeen continued the Might and Magic tradition of careful, compartmentalized evolution. It was, for example, the first CRPG I know of that included an automatic quest log as a replacement for the dog-eared, hand-written notebooks of yore. It also added a global difficulty setting. Van Caneghem:

I added a feature when you first start the game where you’re asked if you want an Adventurer game or a Warrior game. This was my wife’s idea. She really liked the game, the adventure. But she wasn’t into combat. She was like, well, you know, monsters are fun, but let’s get on with the story. I said, “Okay, well, I’m sure there’s plenty of people out there just like you, who aren’t into the numbers and the hit points. They just want to get on with the story.” There’s a lot of quests, a lot of fun things to do. So I put the choice in, and what Adventurer does is it makes it easier to win all the battles. So you get through that part of the game a lot quicker.

There’s other stuff to do, and we want to expand our audience, to bring in more and more people who wouldn’t normally play this kind of game.

But this notion of “expanding our audience” was becoming a sticking point for Might and Magic by the time the conjoined World of Xeen appeared in the inevitable single box in 1994. Some of Charles Ardai’s criticisms appeared to be coming home to roost; the market had been flooded with fantasy CRPGs over the last half decade, most of which appeared all but indistinguishable from one another to the casual browser. It was extremely difficult even for a well-done example of the form, such as Might and Magic had always been on the whole, to stand out from the competition. The new generation of gaming neophytes to whom Van Caneghem imagined his Adventurer mode appealing didn’t have the likes of Might and Magic and its peers on their radar at all; they were buying things like The 7th Guest, Myst, and Phantasmagoria. The CRPG genre had transitioned from boom to bust, and was now deeply out of fashion.

This reality left Jon Van Caneghem and his company facing some hard questions. The engine which had powered the last three Might and Magic games was several years old now, its once-impressive VGA graphics looking a bit sad in comparison to the latest high-resolution Super VGA wonders. Clearly it would need to be rethought and rebuilt from scratch for any possible Might and Magic VI. But was there really a business case for taking on such an expensive task in the current market? Or had the franchise perhaps run its course, as such venerable rivals as Wizardry, The Bard’s Tale, and Ultima seemed to have done?

Van Caneghem’s solution to this dilemma of what was to be done with a respected CRPG franchise in an era when CRPGs themselves seemed to be dead on the vine would prove as unexpected as it would be refreshing, and would spawn a great rarity in gaming: a spinoff franchise that became even more popular than its parent.

(Sources: the book Might and Magic Compendium: The Authorized Strategy Guide by Caroline Spector and the individual hint books published by New World Computing for each of the first five Might and Magic games; Compute! of May 1993; Computer Gaming World of December 1986, April 1987, October 1988, March 1989, May 1989, May 1991, January 1992, September 1993, and April 2004; Retro Gamer 49; XRDS: The ACM Magazine for Students of Summer 2017. Online sources include the CRPG Addict’s final post on Might and Magic: Darkside of Xeen, Matt Barton’s interviews with Jon Van Caneghem and Neal Hallford, and the RPG Codex interview with Jon Van Caneghem.

The first six Might and Magic CRPGs can be purchased as a digital bundle at GOG.com.)


Posted by on December 10, 2021 in Digital Antiquaria, Interactive Fiction
