WE BROUGHT ELIZA (1966) BACK TO LIFE

Kyle McKenzie
Feb 14, 2025

From 1964 to 1967, Joseph Weizenbaum[1] developed a psychotherapist computer program at MIT[2]. At the time, there was an optimistic dream that computers could be built to think like humans. Scientists scrambled to reverse-engineer human thought processes. And they failed. Weizenbaum, however, built something that undermined their efforts: a computer program that simply reflected human thoughts back at them. Like a therapist, ELIZA didn’t need to think—it just needed to listen. Type in your problems. Exactly what you’re feeling. And the machine would do what Carl Rogers[3], the renowned therapist, built his entire method around:

Repeating your words back to you.

A different version of what you already said.

What stunned researchers wasn’t just ELIZA’s ability to mimic Rogers’s method. It was that people liked talking to her. Even when they knew she was a computer program, without a consciousness. Somewhere in that loop, people still found comfort. So we created our own version of ELIZA.
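For the curious, here is roughly how small that trick is. Below is a minimal, illustrative Python sketch of ELIZA-style keyword matching and pronoun reflection. The 1966 original used a much richer script of ranked keywords and decomposition rules (written in MAD-SLIP), so read this as a caricature, not a reconstruction.

```python
import random
import re

# ELIZA's core trick in miniature: match a keyword pattern, flip the
# pronouns, and hand the statement back as a question.

REFLECTIONS = {
    "i": "you", "me": "you", "my": "your",
    "am": "are", "you": "I", "your": "my",
}

RULES = [
    (r"i feel (.*)", ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (r"i am (.*)",   ["Why do you say you are {0}?"]),
    (r"because (.*)", ["Is that the real reason?"]),
    (r"(.*)",        ["Please tell me more.", "How does that make you feel?"]),
]

def reflect(fragment):
    # Swap first- and second-person words so the reply points back at the user.
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def respond(text):
    cleaned = text.lower().strip().rstrip(".!?")
    for pattern, replies in RULES:
        match = re.match(pattern, cleaned)
        if match:
            reply = random.choice(replies)
            return reply.format(*(reflect(g) for g in match.groups()))

print(respond("I feel nobody listens to me"))
# e.g. "Why do you feel nobody listens to you?"
```

That’s the whole illusion: a pattern, a pronoun swap, and a question mark.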

I reached out to friend/designer/engineer Louie Gavin of Tame Labs®, who had written his university thesis on conversational AI in 2021, a year before ChatGPT's launch, including a brief exploration of ELIZA. We shared an interest in modernizing ELIZA. No predictive-text tricks. No generative images. Just a chatbot that listens, without trying to sell you something or have sex with you. Over the past two weeks, we ran our own experiment, having friends test it out at Null HQ. Instead of treating it as a relic, or as an early breakthrough in conversational AI, they immediately judged it. They projected onto it. They tried to outsmart it.

And what we learned wasn’t about ELIZA.

It was about us.

Maybe that was the point all along.

Kyle McKenzie (KM): When I first heard about ELIZA, I thought it was really impressive in the context of the ’60s. But when I had friends and peers interact with it, they immediately wanted to provoke or challenge the model rather than engage with it.

The more you talk to ChatGPT, the more it shapes itself around you. Unlike ELIZA, which is transparent, something like ChatGPT is more of a black box, in my opinion.

But how does it actually understand us?

Louie Gavin (LG): You can think of the evolution of these models in three stages. The first stage, maybe four years ago, was essentially about reading the internet, absorbing all the text data and then simulating it. If you gave it a sentence from Wikipedia, it would just autocomplete that article in a similar way because it had read it before.

But these models had no orientation, no direction. They were just big autocomplete systems. That’s when the second stage came in—the ChatGPT stage. OpenAI hired people to curate its responses, telling it what was a good or bad answer. That human input gave it more direction and made it useful for interacting with humans.
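To make "just big autocomplete systems" concrete, here is a toy sketch, with no claim about any real model's internals: a bigram counter that always continues with the most likely next word. GPT-style models swap the counting for a neural network over subword tokens trained on vastly more text, but the objective is the same: predict the next token.

```python
from collections import Counter, defaultdict

# Count which word follows which in a tiny "corpus", then always
# continue with the most frequent successor.
corpus = "the cat sat on the mat and the cat slept on the mat".split()

successors = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    successors[current][nxt] += 1

def autocomplete(prompt, length=5):
    words = prompt.split()
    for _ in range(length):
        candidates = successors[words[-1]].most_common(1)
        if not candidates:
            break  # no known successor: the model has nothing to say
        words.append(candidates[0][0])
    return " ".join(words)

print(autocomplete("the"))
# -> "the cat sat on the cat" (greedy autocomplete happily loops,
#    with no orientation or direction of its own)
```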

The third stage is what we’re entering now. Instead of just responding based on raw probabilities, models are being trained to simulate thinking. Before they give an answer, they go through a reasoning process. They “think” about a task before responding. And you can actually read these thought steps—it’s like seeing a human thought process unfold.

Right now, this is mostly being applied in areas where there’s a right or wrong answer, like math or logic puzzles. But it could extend into creative fields where the AI generates entirely new ideas.
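In code, the pattern Louie describes looks something like the sketch below. The generate function is a hypothetical placeholder for whatever model call you use, and real reasoning models learn this behavior during training rather than through prompting, so this is only the shape of the idea.

```python
from typing import Callable

# A sketch of the "reasoning" stage: prompt the model to write out its
# working before committing to an answer. `generate` is a stand-in for
# any text-completion call -- a hypothetical placeholder, not a real API.

def answer_with_reasoning(question: str,
                          generate: Callable[[str], str]) -> tuple[str, str]:
    # Pass 1: ask the model to think out loud.
    thoughts = generate(
        f"Question: {question}\n"
        "Think step by step and write out your reasoning:"
    )
    # Pass 2: condition the final answer on the visible thought steps.
    answer = generate(
        f"Question: {question}\n"
        f"Reasoning: {thoughts}\n"
        "Final answer:"
    )
    # Both the trace and the answer come back, so the "thinking" is readable.
    return thoughts, answer
```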

KM: But isn’t that kind of messed up?

LG: Yeah, it’s quite crazy if you think about it.

KM: I had a friend tell me he tells AI everything—what his kids look like, his financial situation. Another friend uses it for everything, from recipes to where to hang a photo. If people are already that transactional with conversational AI, what comes next? Because these models still don’t fully simulate human thought.

LG: For sure. They just imitate it.

KM: Exactly. But I feel like we expect AI to be able to fill in the gaps of what we’re not saying. Sometimes people give it very minimal prompts, yet it "figures out" what they mean.  I don’t want to sound negative, but I’m skeptical about how reactive we are to it. I use AI to be efficient in areas where I might drag my feet. But for people who don’t use it in a creative sense, where does that leave them?

Are we actually seeing advancements in AI from step two to step three, or are we just shifting our expectations?

LG: Both. When ChatGPT first came out, it felt like a massive leap. But within months, it became the new standard. Now we expect more.

But AI is changing too. Earlier models just repeated human input. Now, with this third step, AI is starting to generate new ideas and solutions beyond human guidance. That’s where it stops just repeating and starts actually advancing.

KM: So, ELIZA is like a scaled-down, dumbed-down version of what ChatGPT is at its core.

LG: On a technical level they’re very different, but both simulate human language without really understanding its full meaning. You could say that in some way, but I think you shouldn’t treat it as a black-and-white thing; it’s more of a continuous spectrum, with ELIZA on one end and ChatGPT on the other.

ChatGPT simulates knowing what a tree is because it has read so much about the topic. But in terms of understanding meaning, semantics, all that stuff, it has gone way further than ELIZA. And then, on the other side, you could question what humans do: whether there’s really something fundamentally different when a human understands something, or whether humans just have completely different exposure to other inputs, physical inputs. It’s a really philosophical question. I’m not a philosopher, so I won’t go too deep in that direction, but it’s quite interesting.

KM: It’s funny, because when ELIZA first came out in the ’60s, people loved that it listened to them. They felt heard. And they could talk freely, without judgment. But when we tested it this past week, people just wanted to provoke it. Why do you think that has changed?

LG: Because people now know what a chatbot[4] is. The first time they saw one, they had no frame of reference. Culture evolved with the technology, and now they just want to test its limits.

KM: We expect technology to make things easier for us, to fill in the blanks. But I’m thinking about how AI is developing in the creative industry—because right now, ideas are a form of currency. What happens when AI can simulate human creativity at a high level?

LG: An idea is only valuable if you attach it to something real. AI might generate ideas, but humans are still needed to execute them.

There’s a good example of this in AlphaGo[5], the AI that mastered the game of Go. It came up with a move that no human had ever played before. At first, people thought it was a mistake. But it ended up teaching humans a new strategy. AI didn’t replace humans—it expanded what was possible for them.

KM: So, do you think there are still things we can learn from ELIZA?

LG: Yeah. At the end of the day, these models are still based on human knowledge. They have flaws, hallucinations, and biases. With ELIZA, people felt like they were being heard, even though it was just mirroring their input. That same illusion exists today with AI chatbots. You always have to keep in mind that these systems have flaws and aren’t based on some ground truth.

“You need to build something like antibodies against these wrong outputs.”

In emotional use cases, where people use them as coaches or something like that, you always have to keep in mind that it’s maybe a tool for reflecting on your own thinking. But you can’t rely on a chatbot. There’s nothing behind it. It’s maybe a nice curtain, a nice theater. But it’s still fake in some way.

KM: So ELIZA is a reminder to be human.

LG: Yeah, for sure. It’s also a reminder to embrace your human qualities and flaws. Maybe intelligence is not the ultimate goal for humans.

Maybe we should have more fun. Hang out with our friends a bit more. I don’t know, enjoy a bit more of life at the end of the day. I mean, that’s the ultimate story, right? The idea that it will work for us and we can just do what we want. That’s not that easy, of course, but I think in some way you could get there.

KM: That’s a good way to end it. I’d just cut it right here.

Louie Gavin | @louiegavin

Index