
Eliza, Part 2

16 Jun

Just to be sure we understand what Eliza does and doesn’t do, I thought it might be instructive to look at an actual conversation from under the hood. What follows is an only slightly modified version of the sample run included in the July/August 1977 issue of Creative Computing that introduced the BASIC Eliza. (Specifically, I changed the original reference to an IMSAI 8080 to a Tandy in keeping with this blog’s recent theme.) It’s a much less compelling example than the famous transcript I included in my last post, which is partly down to the acknowledged inferiority of this version of Eliza and partly down to Creative Computing choosing to interact the way a person more typically might — i.e., by trying to take the piss out of the program just a bit rather than playing along with the psychologist/patient relationship. In that sense I’d call it a more honest reflection of Eliza’s capabilities and limitations, and of the average user’s experience with it.

At the heart of the program is a routine that searches each input for one of a group of text sequences. In order of priority, they are:

1. “CAN YOU”
2. “CAN I”
3. “YOU ARE”
4. “YOU’RE”
5. “I DON’T”
6. “I FEEL”
7. “WHY DON’T YOU”
8. “WHY CAN’T I”
9. “ARE YOU”
10. “I CAN’T”
11. “I AM”
12. “I’M ”
13. “YOU ”
14. “I WANT”
15. “WHAT”
16. “HOW”
17. “WHO”
18. “WHERE”
19. “WHEN”
20. “WHY”
21. “NAME”
22. “CAUSE”
23. “SORRY”
24. “DREAM”
25. “HELLO”
26. “HI ”
27. “MAYBE”
28. “ NO”
29. “YOUR”
30. “ALWAYS”
31. “THINK”
32. “ALIKE”
33. “YES”
34. “FRIEND”
35. “COMPUTER”

As soon as one of these sequences is found, it is taken as the key sequence for the input as a whole, and no more searching is done. In other words, “COMPUTER” could be taken as the key sequence only if the input text does not contain any of the previous 34 sequences.
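
Translated out of BASIC, the search amounts to something like the following Python sketch. (This is my paraphrase of the logic rather than the program’s actual code; KEYWORDS stands in for the numbered table above, in the same order.)

KEYWORDS = ["CAN YOU", "CAN I", "YOU ARE", "YOU'RE", "I DON'T", "I FEEL",
            "WHY DON'T YOU", "WHY CAN'T I", "ARE YOU", "I CAN'T", "I AM",
            "I'M ", "YOU ", "I WANT", "WHAT", "HOW", "WHO", "WHERE", "WHEN",
            "WHY", "NAME", "CAUSE", "SORRY", "DREAM", "HELLO", "HI ",
            "MAYBE", " NO", "YOUR", "ALWAYS", "THINK", "ALIKE", "YES",
            "FRIEND", "COMPUTER"]

def find_key(line):
    # Walk the table in priority order; the first keyword found anywhere
    # in the input wins, and the search stops there.
    for key in KEYWORDS:
        pos = line.find(key)  # naive substring search, no word boundaries
        if pos >= 0:
            # Everything up to and including the keyword is thrown away.
            return key, line[pos + len(key):]
    return None, ""  # no match: Eliza falls back on a noncommittal reply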

RUN
HI! I'M ELIZA. WHAT'S YOUR PROBLEM?
? I AM AN IBM 370
DID YOU COME TO ME BECAUSE YOU ARE AN IBM 370

When Eliza finds a key sequence, it responds with a set sequence of its own. In this case, it has latched onto the sequence “I AM” in the input. Having done so, it automatically throws away all of the input through that point, leaving just “AN IBM 370.” It then looks up the appropriate response pattern, which is “DID YOU COME TO ME BECAUSE YOU ARE…”, followed by the remainder of the original input.
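
The reply construction can be sketched the same way. Here is a hypothetical continuation of the Python above, with the four “I AM” templates reconstructed from the replies in this sample run (the trailing asterisk is just my marker for “append the leftover input here”):

RESPONSES = {
    "I AM": ["DID YOU COME TO ME BECAUSE YOU ARE*",
             "HOW LONG HAVE YOU BEEN*",
             "DO YOU BELIEVE IT IS NORMAL TO BE*",
             "DO YOU ENJOY BEING*"],
    # ... one entry per keyword in the table above ...
}

def build_reply(template, remainder):
    if template.endswith("*"):
        # Reattach what was left of the input: "...YOU ARE" + " AN IBM 370".
        return template[:-1] + remainder
    return template  # some replies ignore the input entirely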

(Oh, and just so everyone gets the joke: an IBM 370 was an extremely large and powerful mainframe computer of the era; hence all the talking down to the little Radio Shack microcomputer.)

? YES I WANT A NEW LINEPRINTER
WHAT WOULD IT MEAN TO YOU IF YOU GOT A NEW LINEPRINTER

This time there are two possible matches: “YES” and “I WANT.” However, “I WANT” has higher priority, so Eliza throws away all of the input text through that point, which incidentally includes the “YES.” The appropriate response to “I WANT” is “WHAT WOULD IT MEAN TO YOU IF YOU GOT…”, followed by the remainder of the input.

? ABOUT 1800 LPM
SAY, DO YOU HAVE ANY PSYCHOLOGICAL PROBLEMS?

We’ve given Eliza nothing to latch onto here; no sequence matches at all. Therefore it falls back on the first of its seven canned noncommittal responses for such situations.

? WHAT WOULD YOU KNOW I'M A 370 AND YOU'RE A TANDY
WHAT MAKES YOU THINK I AM A TANDY

There are a few matches this time, but “YOU’RE” has the highest priority, so Eliza throws everything away up to that point and appends the remainder to the appropriate response, “WHAT MAKES YOU THINK I AM…”

? IT SAYS THAT ON YOUR FRONT PANEL
WHY ARE YOU CONCERNED ABOUT MY FRONT PANEL

Key: “YOUR”
Result: “WHY ARE YOU CONCERNED ABOUT MY…” + “FRONT PANEL”

? LOOK YOU BROUGHT UP THE SUBJECT
WE WERE DISCUSSING YOU-- NOT ME.

Not all response sequences incorporate the original input. In this case, the key “YOU ” results in the canned response “WE WERE DISCUSSING YOU-- NOT ME.”

? I'M FIXING A HOLE...
DID YOU COME HERE BECAUSE YOU ARE FIXING A HOLE...

The key sequence this time is “I’M ”. Eliza draws on the same set of responses for both “I AM” and “I’M ”, as one might expect.

? SORRY THAT WAS A TEN YEAR OLD SOFTWARE BUG
PLEASE DON'T APOLOGIZE

Key: “SORRY”
Result: “PLEASE DON’T APOLOGIZE”

(Presumably, this is — leaving aside the obvious Beatles reference — an in-joke having to do with the IBM 370. If anyone reading this knows what exactly is being referred to here, by all means enlighten us in the comments.)

? I AM ALWAYS APOLOGIZING
HOW LONG HAVE YOU BEEN ALWAYS APOLOGIZING

Yet again “I AM” is the key sequence. However, notice that the response is not quite the same. For most keys, Eliza cycles through several response sequences, hopefully masking somewhat the automated nature of the whole process. In the case of “I AM,” there are four possible responses. (Due to an idiosyncrasy in the program, synonyms such as “I AM” and “I’M” are each run through their own cycle even though they draw from the same list of responses.)
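
That cycling is easy to sketch as well, continuing the hypothetical Python from above. Each keyword keeps its own position in its list of templates, which is why “I AM” and “I’M ” advance independently even when their lists hold the same responses:

cycle_index = {}  # per-keyword position within its list of templates

def next_template(key):
    templates = RESPONSES[key]
    i = cycle_index.get(key, 0)
    cycle_index[key] = (i + 1) % len(templates)  # wrap around and start over
    return templates[i]

The seven noncommittal fallbacks (“SAY, DO YOU HAVE ANY PSYCHOLOGICAL PROBLEMS?”, “WHAT DOES THAT SUGGEST TO YOU?”, “I SEE”, and so on) can be handled the same way, cycled under a no-match key.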

? SINCE MY FIRST COLD START
WHAT DOES THAT SUGGEST TO YOU?

Once again we have given Eliza nothing to go on, so it falls back on the second of seven possible noncommittal responses.

? YOU'RE THE SHRINK
DOES IT PLEASE YOU TO BELIEVE I AM THE SHRINK

Our second use of “YOU’RE” results in the second of four possible responses to that sequence.

? DON'T YOU KNOW?
OH, I KNOW?

And our second use of “YOU ” results in the second of three possible responses to that sequence.

? I AM BEGINNING TO WONDER ABOUT YOU
DO YOU BELIEVE IT IS NORMAL TO BE BEGINNING TO WONDER ABOUT I

Here we have the first real breakdown, and it’s an illustrative failure indeed. Eliza discovers yet another use of “I AM,” our third now, and so turns to the third response, “DO YOU BELIEVE IT IS NORMAL TO BE…” Based on what we’ve seen so far, we might expect the full reply to be, “DO YOU BELIEVE IT IS NORMAL TO BE BEGINNING TO WONDER ABOUT YOU,” which is obviously inappropriate. To account for situations like this, Eliza always attempts to modify the input-text fragments that it reprints, changing first-person pronoun references to second-person and vice versa and attempting some simplistic to-be verb conjugation. To be precise, it replaces sequences in the input text like so:

“ AM ” <--> “ ARE ”
“WAS ” <--> “WERE ”
“ I ” <--> “ YOU ”
“MY ” <--> “YOUR ”
“ I’VE ” <--> “ YOU’VE ”
“ I’M ” <--> “ YOU’RE ”

Eliza has in fact been searching reprinted fragments all along for these sequences, but has not found any until now. And now that it does, it gets it wrong, falling afoul of the fact that in English the second-person pronoun is the same in both the nominative and the accusative, but the first-person pronoun is different. Since it lacks any structural understanding of the English language and can only replace sequences of characters with other sequences, it replaces “YOU” with “I” rather than “ME.”
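
A hypothetical Python rendering of the swap shows the one subtlety involved: the replacements must happen in a single left-to-right pass, with each matched fragment swapped exactly once, or “ I ” would become “ YOU ” and then promptly be turned back again. (Padding the fragment with spaces is my assumption, needed for the space-delimited sequences to match at its edges.)

import re

# The swap table from above, in both directions.
SWAPS = {
    " AM ": " ARE ",       " ARE ": " AM ",
    "WAS ": "WERE ",       "WERE ": "WAS ",
    " I ": " YOU ",        " YOU ": " I ",
    "MY ": "YOUR ",        "YOUR ": "MY ",
    " I'VE ": " YOU'VE ",  " YOU'VE ": " I'VE ",
    " I'M ": " YOU'RE ",   " YOU'RE ": " I'M ",
}
PATTERN = re.compile("|".join(re.escape(s) for s in SWAPS))

def conjugate(fragment):
    # Pad with spaces so an edge word like a trailing "YOU" can match,
    # then swap every occurrence in one pass, each exactly once.
    return PATTERN.sub(lambda m: SWAPS[m.group(0)], " " + fragment + " ")

print(conjugate("BEGINNING TO WONDER ABOUT YOU"))
# -> " BEGINNING TO WONDER ABOUT I ", never "ABOUT ME"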

? HA HA HA YOU CAN'T EVEN TALK PROPERLY
YOU'RE NOT REALLY TALKING ABOUT ME, ARE YOU?

Key: “YOU ”
Result: “YOU’RE NOT REALLY TALKING ABOUT ME, ARE YOU?” (response 3 of 3)

? IS THERE ANYONE ELSE IN THE ROOM?
I SEE

Key: none
Result: Noncommittal response 3 of 7.

? COMPUTERS DON'T HAVE EYES
ARE YOU SURE?

Here we have another “failure” of sorts that this time actually works within the context of the conversation. Eliza takes the sequence “YES” from the word “EYES” as its key, resulting in the first of three possible responses to that sequence. Considering the non-sequitur that might have resulted, Eliza just dodged a bullet.
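
This falls straight out of the naive substring search sketched earlier: find() knows nothing about word boundaries, and “YES” (number 33 in the table) outranks “COMPUTER” (number 35), which is also present in the input:

line = "COMPUTERS DON'T HAVE EYES"
line.find("YES")       # -> 22, matched inside the word "EYES"
line.find("COMPUTER")  # -> 0, but "YES" comes first in priority order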

? ABOUT COMPUTERS?
DO COMPUTERS WORRY YOU?

Key: “COMPUTER”
Result: “DO COMPUTERS WORRY YOU?” (response 1 of 7)

? SHOULD THEY AFTER ALL I AM A 370
DO YOU ENJOY BEING A 370

Key: “I AM”
Result: “DO YOU ENJOY BEING” + “A 370” (response 4 of 4)

…and so on. I’m sure you get the gist of it by now. Having beaten this particular dead horse into the ground, I’ll wrap up this topic next time by talking about what the experience of Eliza ultimately means — to me, to Joseph Weizenbaum, and maybe to you.



10 Responses to Eliza, Part 2

  1. Felix Pleșoianu

    June 16, 2011 at 3:27 pm

    Didn’t some text adventures in the 1980s use similar techniques to supplement the otherwise very limited two-word parsers that were so common back then?

    • Jimmy Maher

      June 16, 2011 at 3:43 pm

      Yes. Matter of fact, I strongly suspect the ADRIFT parser is doing some of this sort of thing even today, as are any number of home-brewed systems. It’s a tempting but dangerous game to play in that it can work for a while, but when it inevitably breaks down the results are embarrassing — much like with Eliza itself.

  2. Nathan

    June 16, 2011 at 3:39 pm

    You mean “nominative”, not “subjunctive”.

    • Jimmy Maher

      June 16, 2011 at 3:42 pm

      Woops… thanks.

  3. rockersuke

    June 16, 2011 at 8:20 pm

    A neat trick in the original, also present in the 48K port I typed in from my AI book, made it use “my” as a keyword and remember anything after it, so if the user mentioned something like “my father” the program would politely ask her to keep talking about her father in case no other key was found, as shown in the famous transcript from the last post. The problem was that my Spanish version couldn’t tell the difference between “my” and “me,” as both words are written the same way (“mi”), which caused a lot of garbage output (among several other translation conflicts). It would surely have been very easy to find a workaround, but I was too much of a newbie in BASIC at the time ^_^’

  4. Martin

    May 13, 2016 at 2:19 am

    Actually, the joke about the IBM 370 is still relevant today, as IBM still makes mainframes based on the 370 architecture, and many programs written for it in the ’70s and ’80s can run today as-is.

  5. Carl Read

    July 8, 2016 at 9:04 am

    My first encounter with ELIZA, I think, was in a Commodore PET listing in the Oct. 1981 Micro Computer Printout. They’d decided it’d be more fun if the psychologist was rude. Here are some of the responses, as taken from the listing…

    TYPICAL SUB-ADOLESCENT INFATUATION
    YOU CERTAINLY LOOK LIKE A NIGHTMARE
    NEGATIVE LITTLE TWERP
    PUSH OFF BEFORE I ELECTROCUTE YOU.

    (It’s nice that programmers don’t have to type in line numbers from magazines any more…)

  6. Borys Jagielski

    March 24, 2017 at 7:58 pm

    If a human wannabe-psychotherapist had internalized the Eliza algorithm and started to treat actual patients using it (without them being aware of the origins of the “therapy”), would he be effective? Something tells me that he just might be… as long as he avoided Eliza’s “communication bugs”. :)

  7. Clayton

    May 30, 2019 at 2:23 am

    Re. the “fixing a hole/ten year old bug” comment: you said this sample was originally published in 1977. The song “Fixing a Hole” was released in 1967 on the Sgt. Pepper album. The IBM System/370 series was announced in the summer of 1970 and the first units shipped in early 1971, so my guess is that it was a reference to the age of the song rather than something specific to the computer.

    I remember well playing with this version of ELIZA on the TRS-80 Model IIIs in my high school’s “computer lab,” back in the dark ages of 1985. I spent 3 or 4 days typing it in from one of those big Creative Computing “BASIC Computer Games” books (and probably as many more debugging the typos), saving to cassette as I went. Good times!

  8. Jeff Nyman

    April 11, 2021 at 9:40 am

    Way late to this and I can’t claim I know the “hole” reference entirely. Clearly the Beatles song as a reference makes sense, given the mention of “ten years.” So I wonder if this person who was typing into the program in 1977 was very clever.

    To explain why I say that, a bit of history is needed. (And I’ll say here that it’s still hard for me to believe that this is what was meant, but the coincidences are interesting.)

    There was a relatively famous bug in the translation of what were called channel programs. How is that (maybe) relevant?

    Well, the IBM 370 has a channel architecture. Any channel program was composed of channel control words. Those words had operation codes (orders) that were interpreted by the channel. Most of those orders are also sent to the currently connected device, like, say, a line printer. Hence (possibly) the specific reference to a printer in the transcript.

    The idea was that the channel did some decoding of the order, and that is what helped it distinguish the direction of the flow of data (basically, whether the stream was input or output, since there were read orders and write orders).

    There was a serious security hole, which is the (not so famous in all circles) “hole” bug.

    It’s kind of complicated to explain the actual bug, but what ended up happening was that READ channel control words could require WRITE access to RAM, while WRITE channel control words could require READ access, and there was a translation program that handled this. What could happen was that an IBM 370 program could write to RAM that it should have lacked legitimate write access to. So, as an example, instead of writing (sending output) to the printer, the program was writing to some area of memory instead. A similar situation could happen with read input: essentially reading something that it should not have been allowed to.

    Again, I have no idea if that’s relevant or even what was meant, even in an oblique way. But it is interesting that the IBM 370 is mentioned and that a printer is particularly called out. Of course, the IMSAI 8080 (referenced in the original transcript, as opposed to the Tandy) was not subject at all to this kind of bug. A Computerworld article (from 11 July 1977, around the same time the transcript from Creative Computing was published) talked about the “crude” virtual storage implementation of the IBM 370; the IMSAI was also referenced in that same issue.

