
Eliza, Part 3

21 Jun

The most obvious legacy of Eliza is the legions of similar chatterbots which have followed, right up to the present day. But what does Eliza mean to the history of interactive narrative? Or, put another way: why did I feel the need to backtrack and shoehorn it in now?

One answer is kind of blindingly obvious. When someone plays Eliza she enters into a text-based dialog with a computer program. Remind you of something? Indeed, if one took just a superficial glance at an Eliza session and at a session of Adventure one might assume that both programs are variations on the same premise. This is of course not the case; while Eliza is “merely” a text-generation engine, with no deeper understanding, Adventure and its descendants allow the player to manipulate a virtual world through textual commands, and so cannot get away with pretending to understand the way that Eliza can. Still, it’s almost certain that Will Crowther would have been aware of Eliza when he began to work on Adventure, and its basic mode of interaction may have influenced him. Lest I be accused of stretching Eliza’s influence too far, it’s also true that almost all computer / human interaction of the era was in the form of a textual dialog; command-line interfaces ruled the day, after all. The really unique element shared by Eliza and Adventure was the pseudo-natural language element of that interaction. Just on that basis Eliza stands as an important forerunner to full-fledged interactive fiction.
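To make concrete just how shallow Eliza’s “understanding” is, here’s a minimal sketch, in Python, of the sort of keyword-and-template substitution the program performs. The rules below are my own illustrative inventions, not the actual patterns of Weizenbaum’s DOCTOR script:

```python
import re

# Illustrative rules in the spirit of Weizenbaum's DOCTOR script
# (these are not his actual patterns): match a keyword pattern,
# then echo the captured fragment back inside a canned template.
RULES = [
    (re.compile(r"\bi am (.*)", re.IGNORECASE),
     "How long have you been {0}?"),
    (re.compile(r"\bi feel (.*)", re.IGNORECASE),
     "Why do you feel {0}?"),
    (re.compile(r"\bmy (.*)", re.IGNORECASE),
     "Tell me more about your {0}."),
]

# First-/second-person swaps applied to the echoed fragment,
# so "my mother hates me" comes back as "your mother hates you."
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are",
               "you": "I", "your": "my"}

def reflect(fragment: str) -> str:
    words = fragment.rstrip(".!?").split()
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in words)

def respond(utterance: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(reflect(match.group(1)))
    # No keyword matched: fall back to a content-free prompt.
    return "Please go on."
```

The trick is entirely textual: the program never models what a mother or a feeling is. It recognizes a keyword, flips the pronouns in the captured fragment, and hands it back inside a canned template, retreating to a content-free prompt when nothing matches at all.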

But to just leave it at that, as I’m afraid I kind of did when I wrote my little history of IF a number of years ago now, is to miss most of what makes Eliza such a fascinating study. At a minimum, the number of scholars who have been drawn to Eliza despite having little or no knowledge of or interest in its place in the context of IF history points to something more. Maybe we can tease out what that might be by looking at Eliza’s initial reception, and at Joseph Weizenbaum’s reaction to it.

Perhaps the first person to interact extensively with Eliza was Weizenbaum’s secretary: “My secretary, who had watched me work on the program for many months and therefore surely knew it to be merely a computer program, started conversing with it. After only a few interchanges with it, she asked me to leave the room.” Her reaction was not unusual; Eliza became something of a sensation at MIT and the other university campuses to which it spread, and Weizenbaum an unlikely minor celebrity. Mostly people just wanted to talk with Eliza, to experience this rare bit of approachable fun in a mid-1960s computing world that was all Business (IBM) or Quirky Esoterica (the DEC hackers). Some, however, treated the program with a seriousness that seems a bit baffling today. There were even suggestions that it might be useful for actual psychotherapy. Carl Sagan, later of Cosmos fame, was a big fan of this rather horrifying idea, which a group of psychologists actually managed to get published as a serious article in The Journal of Nervous and Mental Disease:

Further work must be done before the program will be ready for clinical use. If the method proves beneficial, then it would provide a therapeutic tool which can be made widely available to mental hospitals and psychiatric centers suffering a shortage of therapists. Because of the time-sharing capabilities of modern and future computers, several hundred patients an hour could be handled by a computer system designed for this purpose. The human therapist, involved in the design and operation of the system, would not be replaced, but would become a much more efficient man since his efforts would no longer be limited to the one-to-one patient-therapist ratio as now exists.

Weizenbaum’s reaction to all of this has become almost as famous as the Eliza program itself. When he saw people like his secretary engaging in lengthy heart-to-hearts with Eliza, it… well, it freaked him the hell out. The phenomenon Weizenbaum was observing was later dubbed “the Eliza effect” by Sherry Turkle, which she defined as the tendency “to project our feelings onto objects and to treat things as though they were people.” In computer science and new media circles, the Eliza effect has become shorthand for a user’s tendency to assume based on its surface properties that a program is much more sophisticated, much more intelligent, than it really is. Weizenbaum came to see this as not just personally disturbing but as dangerous to the very social fabric, an influence that threatened the ties that bind us together and, indeed, potentially threatened our very humanity. Weizenbaum’s view, in stark contrast to those of people like Marvin Minsky and John McCarthy at MIT’s own Artificial Intelligence Laboratory, was that human intelligence, with its affective, intuitive qualities, could never be duplicated by the machinery of computing — and that we tried to do so at our peril. Ten years on from Eliza, he laid out his ideas in his magnum opus, Computer Power and Human Reason, a strong push-back against the digital utopianism that dominated in many computing circles at the time.

Weizenbaum wrote therein of his students at MIT, which was of course all about science and technology. He said that they “have already rejected all ways but the scientific to come to know the world, and [they] seek only a deeper, more dogmatic indoctrination in that faith (although that word is no longer in their vocabulary).” He certainly didn’t make too many friends among the hackers when he described them like this:

Bright young men of disheveled appearance, often with sunken glowing eyes, can be seen sitting at computer consoles, their arms tensed and waiting to fire their fingers, already poised to strike, at the buttons and keys on which their attention seems to be riveted as a gambler’s on the rolling dice. When not so transfixed, they often sit at tables strewn with computer printouts over which they pore like possessed students of a cabbalistic text. They work until they nearly drop, twenty, thirty hours at a time. Their food, if they arrange it, is brought to them: coffee, Cokes, sandwiches. If possible, they sleep on cots near the printouts. Their rumpled clothes, their unwashed and unshaven faces, and their uncombed hair all testify that they are oblivious to their bodies and the world in which they move.

Although Weizenbaum claimed to be basing this description at least to some extent on his own experiences of becoming too obsessed with his work, there’s some evidence that his antipathy for the hardcore hackers at MIT was already partially in place even before Eliza. It’s worth noting that Weizenbaum chose to write Eliza not on the hackers’ beloved DEC, but rather on a big IBM 7094 mainframe located in another part of MIT’s campus; according to Steven Levy, Weizenbaum had “rarely interacted with” the hardcore hacker contingent.

Still, I’m to a large degree sympathetic with Weizenbaum’s point of view. Having watched a parade of young men come through his classes who could recite every assembler opcode on the PDP but had no respect for or understanding of aesthetics, of history, of the simple good fellowship two close friends find over a bottle of wine, he pleads for balance, for a world where those with the knowledge to create and employ technology are also possessed of humanity and wisdom. It’s something we could use more of in our world of Facebook “friends” and Twitter “conversations.” I feel like Weizenbaum every time I wander over to Slashdot and its thousands of SLNs — Soulless Little Nerds, whose (non-videogame) cultural interests extend no further than Tolkien and superheroes, who think that Sony’s prosecution of a PlayStation hacker is the human-rights violation of our times. It’s probably the reason I ended up studying the humanities in university instead of computer science; the humanities people were just so much more fun to talk with. I’m reminded of Watson’s initial description of his new roommate Sherlock Holmes’s character in A Study in Scarlet:

1. Knowledge of literature — nil.
2. Knowledge of philosophy — nil.
3. Knowledge of astronomy — nil.
4. Knowledge of politics — feeble.
5. Knowledge of botany — variable. Well up in belladonna, opium and poisons generally. Knows nothing of practical gardening.
6. Knowledge of geology — practical, but limited. Tells at a glance different soils from each other. After walks, has shown me splashes upon his trousers and told me by their color and consistence in what part of London he has received them.
7. Knowledge of chemistry — profound.
8. Knowledge of anatomy — accurate, but unsystematic.
9. Knowledge of sensational literature — immense. He appears to know every detail of every horror perpetrated in the century.
10. Plays the violin well.
11. Is an expert singlestick player, boxer, and swordsman.
12. Has a good practical knowledge of English law.

No wonder Watson moved out and Arthur Conan Doyle started adjusting his hero’s character pretty early on. Who’d want to live with this guy?

All that aside, I also believe that, at least in his strong reaction to the Eliza effect itself, Weizenbaum was missing something pretty important. He believed that his parlor trick of a program had induced “powerful delusional thinking in quite normal people.” But that’s kind of an absurd notion, isn’t it? Could his own secretary, who, as he himself stated, had “watched [Weizenbaum] work on the program for many months,” really believe that in those months he had, working all by himself, created sentience? I’d submit that she was perfectly aware that Eliza was a parlor trick of one sort or another, but that she willingly surrendered to the fiction of a psychotherapy session. It’s no great insight to state that human beings are eminently capable of “believing” two contradictory things at once, nor that we willingly give ourselves over to fictional worlds we know to be false all the time. Doing so is in the very nature of stories, and we do it every time we read a novel, see a movie, play a videogame. Not coincidentally, the rise of the novel and the rise of the movie were both greeted with expressions of concern that were not all that removed from those Weizenbaum expressed about Eliza.

There’s of course a million philosophical places we could go with these ideas, drawing from McLuhan and Baudrillard and a hundred others, but we don’t want to entirely derail this little series on computer-game history, do we? So, let’s stick to Eliza and look at what Sherry Turkle wrote of the way that people actively helped along the fiction of a psychotherapy session:

As one becomes experienced with the ways of Eliza, one can direct one’s remarks either to “help” the program make seemingly pertinent responses or to provoke nonsense. Some people embark on an all-out effort to “psych out” the program, to understand its structure in order to trick it and expose it as a “mere machine.” Many more do the opposite. I spoke with people who told me of feeling “let down” when they had cracked the code and lost the illusion of mystery. I often saw people trying to protect their relationships with Eliza by avoiding situations that would provoke the program into making a predictable response. They didn’t ask questions that they knew would “confuse” the program, that would make it “talk nonsense.” And they went out of their way to ask questions in a form that they believed would provoke a lifelike response. People wanted to maintain the illusion that Eliza was able to respond to them.

If we posit, then, that Eliza’s interactors were knowingly suspending their disbelief and actively working to maintain the fiction of a psychotherapy session, the implications are pretty profound, because now we have people in the mid-1960s already seriously engaging with a digital “interactive fiction” of sorts. We see here already the potential and the appeal of the computer as a storytelling medium, not as a tool to create stories from whole cloth. Eliza’s interlocutors are engaging with a piece of narrative art generated by a very human artist, Weizenbaum himself (not that he would likely have described himself in those terms). This is what story writers and story readers have always done. Unlike Weizenbaum, I would consider the reception of Eliza not a cause for concern but a cause for excitement and anticipation. “If you think Eliza is exciting,” we might say to that secretary, “just wait until the really good stuff hits.” Hell, I get retroactive buzz just thinking about it.

And that buzz is the real reason why I wanted to talk about Eliza.


22 Responses to Eliza, Part 3

  1. Bob Reeves

    June 21, 2011 at 7:23 pm

I had Eliza on my personal computer in the ’80s and used to time how long it would take her to flunk the Turing test by saying something “machinelike” if I sincerely typed in things I’d say to a person. Sometimes right away; but surprisingly often, she could go a good long time sounding like a reasonable, attentive Rogerian therapist. It’s still an impressive program, considering its limitations.

     
  2. Sig

    June 23, 2011 at 4:55 am

    The fact that therapists seriously considered augmenting their work with Eliza may say more about therapy than it does about Eliza. Probably best not to think about that.

    Almost entirely off topic, I really enjoyed “Let’s Tell a Story Together.” I don’t even remember how I stumbled upon it, but it was the catalyst that started me playing IF last year (I missed it the first time around), so thank you very much for that.

     
  3. Sean

    June 24, 2011 at 1:32 pm

    Did you mean “There’s of course a million philosophical places we could [go] with these ideas”?

     
    • Jimmy Maher

      June 24, 2011 at 1:36 pm

      As a matter of fact I did. Thanks!

       
  4. Gary

    June 24, 2011 at 1:52 pm

    I think Turkle’s first name is Sherry.

     
    • Jimmy Maher

      June 24, 2011 at 2:16 pm

      …and the hits just keep on coming. :)

      Thanks!

       
  5. Joe

    June 24, 2011 at 5:54 pm

    A great article!
    I especially think the connection between interactive fiction and psychiatric dialogue is interesting. I don’t find the idea of using a computer program in a clinical setting as repugnant as you do, though.

     
  6. Dr. Chandra

    June 24, 2011 at 5:54 pm

“Who’d want to live with this guy?” is precisely why we want to develop AIs, so that I can be free to be who I am and have a (virtual) friend to talk to and be patient with me and answer my questions about those things I don’t know about…

     
    • Harry Kaplan

      June 26, 2011 at 11:05 pm

      TELL ME MORE ABOUT YOUR QUESTIONS.

       
  7. Pingback: Cool Links
  8. Nate

    October 12, 2011 at 11:17 pm

    First let me say I’m really enjoying this blog. A nostalgic trip down 8K memory lane with some new insights from history.

    “The really unique element shared by Eliza and Adventure was the pseudo-natural language element of that interaction.”

While you probably don’t mean that Eliza and Adventure were the *only* pseudo-natural language programs of the era, one might get that impression, which unless I’ve misread my history isn’t at all the case. Natural language was an active area of human-machine interaction research.

    Have you covered Terry Winograd’s SHRDLU of 1968-1970 yet? I think there’s a logical development from SHRDLU to ADVENT – even more so, perhaps, than ELIZA.

     
    • Jimmy Maher

      October 13, 2011 at 1:41 pm

      No, I didn’t mean that ONLY Eliza and Adventure used natural-language-style input. I can see how the word “unique” could indeed imply that they were, well, unique — a poor choice of words on my part.

SHRDLU is a very interesting program, and one I was aware of without having studied it in any depth. Thanks for bringing it to the attention of me and my readers again.

       
  9. Nathan Segerlind

    February 17, 2012 at 1:04 am

    This a great series and I’m thoroughly enjoying it.

    The quip about how you chose to be a humanities major resonated with me…..

    I had had to go all the way through the PhD process and into postdoctoral research at a Very Elite Institution before I had the rather shaking realization that the historians were much more fun to talk to than my Very Serious Hard Science Crowd.

     
  10. Nathan Laws

    May 12, 2012 at 10:29 pm

Could you give a reference to Weizenbaum’s comment about his students at MIT? I’d like to be able to quote that.

     
    • Jimmy Maher

      May 13, 2012 at 7:28 am

      It’s drawn from Weizenbaum’s book, Computer Power and Human Reason.

       
