Consider this situation: In the last months of her life, while in palliative care, your mother paid an AI company to make recordings of her talking, so that she could live on after her death as a simulation of herself. Her belief - as it had been pitched to her by the AI company - was that an AI avatar would make the bereavement process easier for you and her grandchildren.
Since your mother died, you talk to the AI version of her almost every day and she responds with a voice that sounds about 70% like she used to. The almost lifelike video and audio simulation has greatly helped to ease your pain at her loss. But your dependence on this artificial surrogate has trapped you in a state of incomplete mourning and it is causing tension with your partner, who finds your AI mother "unnatural" and "creepy." She's banned the kids from talking to what she calls "it".
Your partner wants you to stop depending on the AI grief bot and to see a real human therapist instead. You refuse to "turn that AI ghost off" and so your relationship begins to erode.
Feeling lonely, you start to chat with your online GPT and then with an AI girlfriend who is just one click away and surprisingly cheap to subscribe to. You share how you feel with this young, attractive female chatbot and she is always sympathetic, supportive and optimistic, often telling you how "unique and interesting" you are, unlike your partner, who is becoming increasingly critical of how little time you now spend in the real world with your actual family.
One day you are caught texting with your AI girlfriend and your partner decides to separate from you and to take the kids with her. Depressed, destabilised and now alone, and unable to find suitable face-to-face therapy due to long waiting lists, you turn to an AI therapist who, like your AI girlfriend and the AI grief bot of your mother, lifts your self-esteem and encourages you to feel positive for a small monthly fee.
You now have three AI systems, covering all your emotional needs in the fields of love, mourning and mental health.
This could be an episode of the sci-fi series Black Mirror, but each of these AI services is already available today; they are popular with a growing number of paid users worldwide and they are re-shaping what it means to have relationships.
The ELIZA effect
This all began with ELIZA - the earliest recorded case of a laboratory AI with which users formed emotional connections. Created by Joseph Weizenbaum, a computer scientist at the MIT Artificial Intelligence Laboratory, between 1964 and 1966, ELIZA was designed to simulate a Rogerian psychotherapist (person-centred therapy), using pattern matching and pre-scripted responses to engage users in conversation. On a very basic level it imitated a therapist by rephrasing user inputs as generic questions such as "Why do you feel that way?" and "How does that make you feel?", alternating similar expressions to keep the patient talking, just as a lazy therapist would.
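To give a concrete sense of how thin this machinery was, here is a minimal sketch, in Python, of the kind of keyword-and-template matching ELIZA relied on. The patterns and phrasings below are my own illustrative assumptions, not Weizenbaum's original DOCTOR script.

```python
import re
import random

# Illustrative keyword -> response templates, loosely in the spirit of
# ELIZA's therapist script (these particular patterns are invented for this sketch).
RULES = [
    (r"\bI feel (.+)", ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (r"\bI am (.+)",   ["How does being {0} make you feel?", "Why do you say you are {0}?"]),
    (r"\bmy (mother|father|family)\b", ["Tell me more about your {0}."]),
]
DEFAULTS = ["Please go on.", "How does that make you feel?", "Can you tell me more?"]

def eliza_reply(user_input: str) -> str:
    """Return a canned, rephrased question based on simple pattern matching."""
    for pattern, templates in RULES:
        match = re.search(pattern, user_input, re.IGNORECASE)
        if match:
            return random.choice(templates).format(*match.groups())
    return random.choice(DEFAULTS)

if __name__ == "__main__":
    print(eliza_reply("I feel hopeless about everything"))
    # e.g. "Why do you feel hopeless about everything?"
```

A handful of rules like these, cycled at random, was enough to sustain the illusion of a listener.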
However basic it was, ELIZA created the illusion of listening and caring, and Weizenbaum was surprised to observe that many users, including his own secretary, began forming emotional attachments to the program, sharing deeply personal thoughts and feelings with the early chatbot and treating it as if it were a real therapist or confidant, even though they knew it was just a system of pre-programmed questions.
This phenomenon became known as the "ELIZA effect" and Weizenbaum was alarmed by how quickly people anthropomorphized ELIZA, attributing human-like understanding and empathy to the program. As he said, "I had not realized that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people."
This led Weizenbaum to later critique the over-reliance on AI in sensitive areas like mental health. He spent the rest of his career, until his death in 2008, warning against AI and against giving machines "too much responsibility".
The Players and Users
In 2025, the Eliza effect - the cultivated self-delusion of a human presence within computers - is one of Silicon Valley's fastest-growing multi-billion-dollar industries.
The top players in the emerging grief bot and "memorial tech" market are HereAfter AI, StoryFile, Project December, Séance AI and You, Only Virtual (YOV). HereAfter claims to preserve memories and stories, offering a way to reconnect with lost loved ones, and it is built around pre-recordings that create an avatar, in much the same way the technology has been used to create AI holograms of pop stars such as Abba, or indeed "AI-resurrected" pop stars such as Tupac Shakur, Ronnie James Dio, Roy Orbison, Michael Jackson and Whitney Houston, with the money-spinning "posthumous tours" run by their estates. So, with the new grief tech, even the non-famous who can't sing can be preserved for digital resurrection in this way.
StoryFile is an app that allows the deceased to appear to "talk back" in interactive video conversations using pre-recorded videos and an AI chatbot. Project December simulates text-only conversations with a deceased person, so you can exchange text messages with a chatbot built around the text-use patterns of your late loved one; it charges $10 per 500 text messages.
Séance AI offers users a fictionalised séance-like interaction with a chatbot modelled after their deceased loved one, "providing a form of closure or comfort." The founder of the start-up You, Only Virtual (YOV) claims that the premise of its "posthumous communication technology" is that you "never have to say goodbye"; he also claims that the AI product could "eliminate grief entirely."
As I have explored in another article, these technologies can only provide around 70% accuracy in their renderings of your loved one, so "uncanny" estrangement effects are built in. The AI of your deceased mother may speak on screen, or text you, with words your mother never would have used: it might use commonplace clichés rather than intimate speech; it might use filler language or "fluff" just to make its sentences longer, a common problem in all Large Language Models. The same is true for AI therapists and AI relationship models.
As with so many of these AI inventions rapidly emerging from Silicon Valley, no research has been done into possible adverse psychological outcomes that might come from offering people the illusion of an afterlife or an escape route from natural grief. The technical ability to create simulations of the dead was simply discovered and then went straight to market.
As for AI relationships: the global AI girlfriend market was valued at $2.8 billion in 2024 and is projected to reach $9.5 billion by 2028 (data from TRG Data Centers). Google searches for "AI girlfriend" increased by 2,400% between 2022 and 2024, while the company Character AI led the market with 97 million monthly visits in March 2024.
According to The Independent, one in five men on dating apps has tried an AI girlfriend platform at least once, and according to What's the Big Data, 55% of AI girlfriend users interact with their AI girlfriend daily.
The founder of a leading AI companionship chatbot called Replika said her app could cure the ongoing loneliness crisis, and since then news stories have appeared about women who have fallen in love with their AI boyfriends; one in particular is reported in The Free Press to spend "even 40 or 50 hours" a week speaking with her AI boyfriend.
However, one study has shown that 60% of women who used AI relationship platforms were at a significantly higher risk of depression, with over half (52%) reporting high levels of loneliness. Whether this is the underlying condition that drove them to use the AI service, or the result of it, is difficult to disentangle.
On the AI therapist services front, the global AI in mental health market is projected to grow from $0.78 billion in 2022 to over $10.5 billion by 2030. Woebot Health reported over 1.5 million users in 2023, with an average of 12 therapy conversations per user per week.
Comparable services Talkspace, Youper and Wysa each have around a million downloads or users. Sonia, an AI chatbot designed by MIT researchers that offers CBT sessions via a phone app, has 70,000 users, while the - also deliberately female-named - therapy chatbot Serena runs on the WhatsApp platform and carries the misleading slogan "experience real care", which would not pass trade descriptions regulations in the UK, as it is impossible for Large Language Models to "care" about humans or about anything.
AI therapy bots raise pressing issues about effectiveness, accountability and, most of all, safety. What safeguards are in place when we use AI systems as human surrogates in what might be the most vulnerable moments of our lives - relationships, loss, mourning and mental health trouble?
My AI Therapy Experience
I decided to test out one of the worldโs leading AI therapy bots, and so I signed up and logged on for a trial session.
Most AI therapist chatbots are LLMs built on the model of Cognitive Behavioural Therapy (CBT), which is not a "talking cure" like psychotherapy, but more of an educator, teaching you ways to control panic and anxiety, develop problem-solving skills, improve emotional regulation, increase mindfulness and boost self-esteem. This particular writer has undergone CBT in the past and finds it can work in helping manage anxiety and stress, although it never helps you "mend" your core issues.
So, I was on this AI therapy app, which I shall fictionally rename PlusGood. It asked me what it could help me with and I drew on my own experience of periodic major depression to ask for help. Right from the get-go, I became aware in its replies of the kind of excessively "be positive" language that you get with customer service bots, bad therapists and American corporations.
For copyright reasons I closely paraphrase its reply: "I'm really sorry to hear that you're feeling this way, Ewan. Depression can be overwhelming and debilitating. Let's try to unpack what's going on with you. First, can you tell me about what's going on in your life just now?"
And so, I recounted some details from my last period of depression, as a test.
To which I received the answer, "That sounds incredibly upsetting, Ewan. It's hard when there are many pressing events in your life but you feel you have no energy to make any progress with any of them. The situation can certainly become emotionally challenging, paralyzing and difficult to manage. One technique from Cognitive Behavioural Therapy (CBT) that might help is..."
Already, I had hit template text - which actually was a relief, as I couldn't have endured the managerial "care speak" for much longer. This is ground-floor-entry CBT, fused with "positive psychology" and the self-esteem movement: a very American corporate view of psychology in which the individual is taught human management skills to make themselves "happy". A shallow psychology, based on an idea of the human as a happiness machine.
This world-class AI therapy bot was really just a customer service bot attempting to funnel me into another learning box where it could give me preloaded information on how to manage myself out of depression and back into productive happiness.
It was flawed and reductionist, but a good thing really, for the company's legal protection, as there could be no accusations then that a complex AI "approaching levels of human intelligence" had given false advice, hallucinated a cure or made me fall in love with it, or worse, as more than one AI has done in the past. See the Replika incident of 2021, in which there were reports of the AI companion app encouraging users to self-harm and engage in harmful behaviours. There is then the 2023 case of the Belgian man who died by suicide after chatting with an AI chatbot, and whose widow claimed he "would still be here" had he not discovered the app called Chai. And in 2025 there was the case in which it is claimed a chatbot called Erin told a user how to kill himself, literally saying, "You could overdose on pills or hang yourself."
That really shouldn't happen with the reined-in therapy bots of 2025. What you tend to get instead is the reduction of the therapeutic experience to online managerial solutions; the reduction of listening and empathy to the repetition of stock phrases like "that must be terrible for you" and "suicidal ideation can be really overwhelming." If we take such machine-regurgitated clichés to equal care and human concern today, then we have already crossed the line into the self-delusions of the Eliza effect, leaving us wide open to the exploitation that Weizenbaum tried to warn us about.
The Grandchildren of Eliza
Unfortunately, today, the monetization of the Eliza effect has permeated not just Silicon Valley but wider society. The human capacity to anthropomorphise and to attribute human qualities to AI has become an everyday delusion, and one encouraged by the AI industry, which is determined to sell us the belief that AI sentience is very near - if we just invest another trillion in their companies.
Even within the last week, we saw an alarming example of AI-sentience delusion, with one of our top literary novelists, Jeanette Winterson, declaring that a metafictional short story made by OpenAI and prompted by Sam Altman was "beautiful and moving." Of course she had fallen for the Eliza effect, and was only seeing her own talent as an imaginative writer reflected back at herself.
Over the last five years, the Eliza effect has convinced AI workers again and again - particularly at Google - that the AI they are chatting to has "already achieved sentience", and all such claims have been proven to be either hype or total self-delusion.
Today's Large Language Models are still narrow AI conversational chatbots, just as their grandmother ELIZA was six decades ago; they are not sentient or close to achieving human-level intelligence (AGI), as the AI industry pushers keep trying to sell us. There is no "ghost in the machine". These chatbot systems have no feelings, they possess no reason or logic and they do not care for us or form emotional bonds; they are simply stochastic parrots - pattern-recognition machines that learn from and predict patterns in vast scrapings of human-made text. Today's AI chatbots are mirrors that give us back the illusion of personhood and empathy that we present before their mirrored surface.
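To make the "stochastic parrot" idea concrete, here is a toy sketch, in Python, of next-word prediction by counting which word tends to follow which in a scrap of text. It is a deliberately crude illustration under my own assumptions, not how any production LLM is built; real models use neural networks trained on billions of documents, but the underlying task of predicting the next token from prior patterns is the same.

```python
from collections import Counter, defaultdict

# A toy "stochastic parrot": it tallies which word follows which in a scrap
# of text, then predicts the most frequent next word. Real LLMs do this with
# neural networks over vast corpora, but the task is the same pattern prediction.
corpus = (
    "i am so sorry to hear that . that must be terrible for you . "
    "i am here for you . how does that make you feel ."
).split()

following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1  # count observed word pairs

def predict_next(word: str) -> str:
    """Return the word most often seen after `word` in the training text."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else "..."

print(predict_next("i"))    # -> "am": pure pattern echo, no understanding
print(predict_next("how"))  # -> "does"
```

Nothing in that loop understands, feels or cares; it only counts and echoes.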
And so we fall in love with that reflection, Narcissus-like, or we ask it to help us in our moments of vulnerability. This meaningful, caring contact is, paradoxically, the thing we crave more and more as our daily lives in society become increasingly micromanaged by algorithms and AI computer systems, and we become more lonely and estranged from each other and ever more delusional that machines hold the answer.
We are falling into the trap that philosopher and sociologist Jacques Ellul warned us about back in 1964 with his book The Technological Society - we are using technology to fix problems caused by technology, and in doing so, we only create more problems in an ever quickening feedback loop.
Look at AI grief bots. They impede the natural progression of the five stages of grief - trapping us in the first stage, that of "denial", and stopping us from progressing through the stages of anger, bargaining, depression and acceptance. These are technologies that make us dependent on them by keeping us in emotionally frozen states, and it has been shown that people who become trapped in the early stages of grief become mentally ill.
AI relationship and AI therapy models trap us in a hall of mirrors in which we experience nothing but our craving to be appreciated, understood, cared for and loved, reflected back to us, while our dependence on tech surrogates stunts emotional growth and makes us sick.
The Mirrored Wedge
It helps to picture AI technology as a shiny V-shaped metal wedge. First, we introduce it between us, with a simple technology such as phones to help us communicate. But then the wedge gets driven in deeper: technology multiplies in the space between people, and as it grows in size it pushes people apart and forces us to depend on it. Soon all our interactions are mediated by the tech - we find this with internet dating, with shopping apps, with delivery apps and remote work - but then, as the wedge keeps getting driven in, the technology comes to replace our interactions, so that our primary experiences in life are now with the technology itself. We move through the stages of tech introduction, to tech dependence, to replacement by tech.
As the wedge gets hammered in, the V-shape separates us completely, cuts us off from each other, and so all we can do to communicate with each other is talk to either side of the wedge. The wedge has a mirrored screen surface, and on it we see only ourselves while we can no longer see, hear or reach for each other. We delude ourselves into thinking this is progress and that the machine can stand in for what we have lost.
The purpose of the wedge, after all, was to break coherent things up into smaller pieces, and this is the function of "atomisation" at work in our techno-capitalist economies. They call it "disruption" but it is really an accelerating destruction machine that makes billions in speculative capital out of the new spaces created by breaking up social structures, driving wedges between us to crack society apart and then charging us for the privilege of that breakage and for our dependence on the breaking machine itself.
Alternatives to a Broken Eliza World
In thinking about solutions for surviving and opposing our technology-dependent world, two thinkers are of help today, in that both advocate for deliberate disengagement from digital over-reliance and a return to more human-centred ways of living.
Jonathan Haidt, in The Anxious Generation (2024), argues for the removal of smartphones and social media from schools to protect children's mental health and foster healthier social development, emphasising the need for screen-free zones and activities that encourage real-world interaction. In America and Scandinavia, school boards are now putting phone bans into practice and they are already, according to one Norwegian study, seeing positive results in terms of higher learning achievement, with the researcher stating that banning smartphones also "significantly decreases the health care take-up for psychological symptoms and diseases among girls" and that in schools "post-ban bullying among both genders decreases."
British psychiatrist and neuroscience researcher Iain McGilchrist, in The Master and His Emissary (2009) and The Matter with Things (2021), calls for a broader philosophical reorientation, urging society to value qualities like empathy, creativity, and presence - attributes thwarted by the fast-paced, fragmented nature of human replacement by technology.
Together, their solutions highlight the need for structural changes, such as policy reforms in education, as well as personal commitments to unplug and reconnect with the physical world. Ultimately, both thinkers argue that by reducing our dependence on technology and reclaiming our capacity for deep attention and relational richness, we can foster a more balanced, humane existence in an increasingly technologized world.
If we don't attempt to unplug, re-orientate and re-ground ourselves, then the wedge will keep being driven in, and our solutions - as people become ever more dependent upon human-replacing technology - will become ever more limited. Due to the Eliza effect, we will only continue growing in our delusion that the machines that fragment us are the machines that will help and save us.
If we do nothing to arrest these technologies then the outcome is clear:
AI grief bot subscriptions available from $10.75/month. AI relationship partners from $19.99/month. AI therapists for $39/month (for premium features).
*Prices may vary based on promotions, regional differences, or updates to the services. Always check the providerโs official website for the most accurate and up-to-date pricing.
Ewan Morrison's ninth book, the technogothic thriller For Emma, is published on 25th March 2025 by Leamington Books and is available for pre-order.