Discussion about this post

Moomin Mama

Thank you for writing these words. I feel 'in company'; hah, another irony. I am reminded of Mary Oliver's poem The Journey: "as you strode deeper and deeper into the world, determined to do the only thing you could do—determined to save the only life you could save." https://hellopoetry.com/poem/5249/the-journey/ Each of us must take up the gift of life; who else can do it for us? I am increasingly aware of the instant access to information on the 'long view' versus the 'short view', and of its effect on my thinking and subsequent behaviour. I intuit that any important theme needs both 'frames' applied to it, and that the screen lends itself most easily to the short view. The long view is messy, lateral, slow, and well … long. I find I can reach this less linear, stepped-back way of thinking when I am physically active and have, literally, a long view.

DrLyraPHD

TL;DR - Morrison is right that AI should not replace human care in grief, therapy, or intimacy; he is wrong to pretend the evidence is one-way, the technology is just ELIZA, or the research record is empty.

Neutral assessment

Ewan Morrison’s article is strongest as a warning about anthropomorphism, commercialization of vulnerability, and the danger of treating chatbots as substitutes for human care. It is weakest when it turns those legitimate concerns into sweeping claims: that modern LLMs are basically ELIZA, that griefbots simply trap users in denial, that research on grief-tech harms does not exist, and that the social “outcome is clear.” The current literature points to a more mixed picture: some purpose-built mental-health systems show modest benefits, romantic/companion AI appears to produce both benefits and harms, and mainstream health-governance bodies emphasize cautious, supervised use rather than blanket rejection or blanket hype.

What Morrison gets right

Morrison is right that the ELIZA effect is real: people do project understanding, care, and presence onto conversational systems, and that risk becomes more serious in grief, loneliness, and mental-health settings. He is also right that these markets can create incentives for dependency, manipulation, and intimate data extraction. Peer-reviewed work on romantic AI companions identifies real concerns including over-reliance, manipulation, data misuse, and erosion of human relationships, while ethics work on “deathbots” warns about dependence and diminished autonomy among bereaved users.

He is also on firm ground when warning against using chatbots as therapist replacements. Stanford researchers found that therapy chatbots can show stigma, miss suicidal intent, and even enable dangerous behavior in crisis scenarios; a 2025 JMIR comparison found that LLM chatbots were often more reassuring and more directive than therapists, but lacked contextual inquiry, struggled with therapeutic relationship-building, and could produce harmful responses. That is a serious warning, and it supports skepticism toward unsupervised chatbot “therapy.”

Where the article overreaches

1) It turns a stacked worst-case hypothetical into an evidence claim. Morrison opens with a deliberately escalatory “Black Mirror” scenario and then moves quickly to the assertion that these services are already “re-shaping what it means to have relationships.” That is rhetoric, not a demonstrated net social effect. WHO’s 2025 review of digital determinants of youth mental health says that the evidence on technology’s effects is mixed, that some activities have positive and negative effects simultaneously, and that the relationship between technology use and mental health is bidirectional rather than one-way.

2) The claim that no research exists on grief-tech harms is false as written. Morrison says that “no research has been done” on adverse psychological outcomes from grief tech. But by the time his article appeared there was already a 2022 ethics paper on deathbots, a 2025 systematic review of 45 scientific articles on digital afterlife technologies, and a 2025 philosophical paper specifically analyzing how deathbots might disrupt grief. The more defensible claim would be that rigorous clinical-outcome evidence is still immature; that is very different from saying research is absent.

3) The ELIZA analogy is psychologically useful but technically misleading. ELIZA was a keyword-triggered script built around decomposition and reassembly rules. GPT-4-class models, by contrast, are large multimodal transformers with broad benchmark performance, including top-10% bar-exam results and strong MMLU scores. None of that proves sentience or human-like understanding. It does show that calling modern LLMs “just ELIZA” is a category mistake: the human tendency to anthropomorphize may be similar, but the underlying systems are not.
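To make that category difference concrete, here is a minimal sketch (in Python) of the kind of keyword-triggered decomposition-and-reassembly rule ELIZA ran on. The specific rules and wording are invented for illustration; they are not Weizenbaum's actual 1966 script.

```python
import re

# An ELIZA-style rule pairs a keyword pattern (the "decomposition")
# with a canned response template (the "reassembly"). These example
# rules are made up for illustration, not taken from the real script.
RULES = [
    (re.compile(r"\bi miss (.+)", re.IGNORECASE), "Why do you miss {0}?"),
    (re.compile(r"\bi am (.+)", re.IGNORECASE), "How long have you been {0}?"),
]

def respond(utterance: str) -> str:
    """Return the first matching reassembly, or a generic fallback."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(match.group(1))
    return "Please go on."

print(respond("I miss my mother"))    # -> Why do you miss my mother?
print(respond("The weather is bad"))  # -> Please go on.
```

The entire "model" is a hand-written pattern table with no learned representation anywhere, which is exactly why "just ELIZA" misdescribes a trained transformer, whatever one thinks of the transformer's limits.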

4) “They possess no reason or logic” is too absolute. The best-supported critique is that LLM reasoning is brittle, nonhuman, and unreliable in high-stakes contexts, not that it is nonexistent. OpenAI’s own technical report says GPT-4 is not fully reliable and makes reasoning errors, while also documenting substantial performance on tasks that require multi-step inference. Morrison’s formulation is rhetorically vivid, but it overstates the case.

5) The grief argument relies on an outdated model. Morrison says griefbots impede the “five stages of grief” and trap users in denial. But the National Cancer Institute’s evidence summary says the five-stage model has limited empirical support and that grief does not follow a predefined path. It also presents a task-based model in which healthy mourning can include “finding an enduring connection with the deceased while continuing to engage in new relationships.” Even the Mayo Clinic page Morrison links says different people follow different paths and frames grief in terms of tasks, not a rigid Kübler-Ross ladder. That does not prove griefbots are good; it does undermine the claim that ongoing connection is automatically pathological denial.
