This post is also inspired by Adam Curtis’ HyperNormalisation. I wanted to write a little about the background of one of the threads he explores in the film.
In 1950, Alan Turing wrote a paper titled Computing Machinery and Intelligence, which began ambitiously: “I propose to consider the question, ‘Can machines think?’”
Since thinking was, and remains, a contentious subject, Turing proposed an alternative metric, something measurable: what if it was impossible to tell a machine’s responses from a person’s? In his experiment, a person would sit at a computer terminal and type out her end of a conversation. Her interlocutor’s responses would appear typed out on the screen. She would not know whether they came from a computer program or from another person sitting in another room. If she could not reliably distinguish whether she was communicating with a person or a machine, the machine would be said to have passed the test.
This formed the famous Turing Test.
It isn’t difficult to find objections to this, as anyone who has been stuck with a bore knows. All you need to do is make the appropriate noises: “Mmmmm”, “Uh-huh”, “You don’t say”, “I agree”, and the bore doesn’t really care whether you are listening or not.
Dismayed by such superficiality, Joseph Weizenbaum of MIT created the first chatterbot, ELIZA, between 1964 and 1966. Named after Eliza Doolittle in George Bernard Shaw’s Pygmalion, the software used pattern matching and substitution to reflect back whatever was typed to it, without understanding anything whatsoever.
So if I typed:
* I want to discuss my problems
It might respond:
> What would it mean to you if you got to discuss your problems?
Or if I told it,
* I think it would ease my burden, if I shared it with someone.
It could respond:
> But you are not sure it would ease your burden, if you shared it with someone?
Note that the machine had no understanding of what a problem or a burden was; it merely mirrored what I said. So when I opened with ‘I think’, it automatically triggered a response asking me if I was sure.
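The mechanism can be sketched in a few lines of Python. The rules below are illustrative stand-ins in the spirit of Weizenbaum’s script, not his actual patterns: each one maps a keyword phrase to a response template, and a small reflection table swaps first-person words for second-person ones before the fragment is echoed back.

```python
import re

# Swap first-person words for second-person ones ("my" -> "your", etc.)
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your", "am": "are", "myself": "yourself",
}

# Illustrative keyword rules (assumed for this sketch, not Weizenbaum's script):
# each pairs a pattern with a template that mirrors the captured fragment.
RULES = [
    (re.compile(r"i want to (.*)", re.I),
     "What would it mean to you if you got to {0}?"),
    (re.compile(r"i think (.*)", re.I),
     "But you are not sure {0}?"),
]

def reflect(fragment):
    """Reflect pronouns in the captured fragment back at the speaker."""
    words = fragment.rstrip(".!?").split()
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in words)

def respond(statement):
    """Return the first matching rule's template, filled with the reflected fragment."""
    for pattern, template in RULES:
        match = pattern.search(statement)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please go on."  # stock reply when no keyword matches

print(respond("I want to discuss my problems"))
# -> What would it mean to you if you got to discuss your problems?
```

There is no model of problems or burdens anywhere in this code: the “conversation” is nothing but a lookup table and some pronoun swapping, which is precisely the point Weizenbaum was making.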
ELIZA was meant as a spoof, a parody of Rogerian psychotherapy, to show the absurd direction Weizenbaum felt artificial intelligence was taking. After writing ELIZA, he invited his secretary to come and test it. She was aware of the nature of the program and knew that it merely responded to whatever she told it. But to Weizenbaum’s surprise, after a few minutes of ‘talking’ to the program, she asked if he would leave the room, as the conversation was personal. He was further disturbed by the number of lay people who attributed human-like feelings to the computer program.
ELIZA is still around and available online here, if anyone is interested in giving it a try.
According to Curtis, this was a powerful precursor to the information bubble a lot of us live in today. When we talk, what we really want is someone to listen uncritically. We don’t talk to get information; we talk so that we can feel good about ourselves. The same feeling extends to what we read and watch and to whom we listen, forming our very own little bubble.