Thread
We provided mental health support to about 4,000 people — using GPT-3. Here’s what happened 👇
To run the experiment, we used @koko — a nonprofit that offers peer support to millions of people...
On Koko, people can ask for help, or help others. What happens if GPT-3 helps as well?
We used a ‘co-pilot’ approach, with humans supervising the AI as needed. We did this on about 30,000 messages...
Here’s a 2min video on how it worked:
www.loom.com/share/d9b5a26c644640ba95bb413147e41766
Read on for the TLDR and some thoughts…
Messages composed by AI (and supervised by humans) were rated significantly higher than those written by humans on their own (p < .001). Response times went down 50%, to well under a minute.
And yet… we pulled this from our platform pretty quickly.
Why?
Once people learned the messages were co-created by a machine, it didn’t work. Simulated empathy feels weird, empty.
Machines don’t have lived, human experience, so when they say “that sounds hard” or “I understand,” it sounds inauthentic.
And they aren’t expending any genuine effort (at least none that humans can appreciate!)
They aren’t taking time out of their day to think about you. A chatbot response that’s generated in 3 seconds, no matter how elegant, feels cheap somehow.
Think of the difference between getting an e-card vs a physical card from someone. Even if the words are the same in both cases, we might appreciate the effort that comes from going to the store, picking a card, mailing it, etc.
Can machines overcome this?
Probably. Especially if they establish rapport with the user over time. (Woebot has published data suggesting its bot can form bonds with its users. Kokobot probably does this as well in some cases).
I’ve had long conversations with ChatGPT where I asked it to flatter me, to act like it cares about me. When it later admitted it can’t really care about me because, well, it’s a language model, I genuinely felt a little bad.
Maybe we’re so desperate to be heard, to have something actually pay attention to us without being distracted, without looking at a phone or checking slack or email or twitter — maybe we long for that so deeply, we’ll convince ourselves that the machines actually care about us.
The implications here are poorly understood. Would people eventually seek emotional support from machines, rather than friends and family?
How can we get the benefits of empathic machines, without sacrificing existing human relationships? As Sherry Turkle warns, it's possible the machine, “begins as a solution and ends up a usurper.”
It’s also possible that genuine empathy is one thing we humans can prize as uniquely our own. Maybe it’s the one thing we do that AI can’t ever replace.
(curious what others think? @zakijam @paulbloomatyale?)
UPDATE: Looks like there were some major misperceptions about what we did. Some clarifications are here: