Can AI Help Rescue Us From Our Mental Health Crisis?
How AI could help us solve this growing but hidden challenge
Do we have a mental health crisis?
Maybe you're not sure?
Well, did you know that 25% of all people globally will suffer from mental illness at some point in their lives? With a world population of around 8 billion, that's roughly 2 billion people.
That's 1 in 4 people, which means it probably includes people you know.
And did you know that 70% of those who have some kind of mental illness don't get the care and treatment they need? That's around 1.4 billion people.
So yes, we have a huge mental health crisis worldwide, and this growing crisis has been documented by organisations such as the World Health Organization.
Unfortunately, this important issue receives remarkably little attention from the media and most governments.
I care about people who have poor mental health getting support, do you?
I think you do.
AI helping with our mental health crisis might seem like a crazy idea to you, and I get that.
But when you look at the reasons why this mental health crisis is happening, you might start to see how it's not such a crazy idea after all...
Potential risks of AI & mental health
While AI has the potential to help with mental health, there are also risks that are important to understand.
In a recent article in Forbes, Dr Lance B. Eliot covered some of the risks and opportunities AI is bringing to mental health.
Risks to the safety and quality of mental health care
The first risk he highlights is that AI may not be able to handle complex or sensitive mental health issues, such as suicidal ideation, trauma, or psychosis.
AI may also provide inaccurate, inappropriate, or harmful advice that could worsen the user’s condition or lead to adverse outcomes.
As he highlights:
First, we don’t know whether this use of generative AI is safe. A person might opt to use generative AI and get wacky outputs due to an error or an AI hallucination.
Ethical and legal issues regarding privacy & accountability
Another potential risk he highlights is that AI may collect and use sensitive personal data from users without their knowledge or consent, exposing them to breaches, leaks, or misuse.
AI may also lack transparency and explainability in how it provides mental health advice, which makes it difficult to assign responsibility and liability in cases of error or harm.
As he describes:
Currently, most of the AI makers state in their licensing agreements that you have no guarantee of privacy or confidentiality. They reserve the right to examine whatever you have entered into the generative AI...most people using generative AI are not aware of this lack of privacy and confidentiality. It is easy to overlook. Trust is mistakenly accrued over repeated usage.
Being aware of these risks is important, and allows us to consider how we might reduce or eliminate them.
For example, human oversight and review could be one way to monitor AI outputs for inappropriate or harmful responses; however, this could conflict with the privacy issue.
An alternative would be to train the AI itself to recognise issues it should not discuss, such as those too complex or sensitive to be dealt with by an AI, and at those points refer the person to human mental health services.
So in this use case, the AI would serve as a first point of contact, which might be sufficient for milder mental health issues, while also recognising when human mental health professionals would be more appropriate.
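To make this idea concrete, here is a minimal sketch in Python of what such a triage layer might look like. Everything in it is a simplifying assumption for illustration: the topic list, the keyword matching, and the TriageResult structure are all hypothetical, and a real system would need a clinically validated classifier rather than simple keyword spotting.

```python
# Minimal, illustrative sketch of the "AI as first point of contact" triage
# idea. All names and the keyword lists are hypothetical; a production system
# would use a trained, clinically validated classifier, not keyword matching.

from dataclasses import dataclass

# Topics the AI should not attempt to handle itself (per the risks above).
ESCALATION_TOPICS = {
    "suicidal ideation": ["suicide", "end my life", "kill myself"],
    "psychosis": ["hearing voices", "hallucination"],
    "trauma": ["abuse", "assault", "flashback"],
}

@dataclass
class TriageResult:
    escalate: bool        # True if a human professional should take over
    topic: str | None     # Which sensitive topic triggered the escalation
    reply: str            # What the system says to the user

def triage(message: str) -> TriageResult:
    """Decide whether the AI can respond or must refer to a human."""
    text = message.lower()
    for topic, keywords in ESCALATION_TOPICS.items():
        if any(keyword in text for keyword in keywords):
            return TriageResult(
                escalate=True,
                topic=topic,
                reply=(
                    "This sounds like something a human professional should "
                    "help you with. I'm connecting you to a counsellor now."
                ),
            )
    # Milder issues: the AI chatbot responds itself (model call omitted here).
    return TriageResult(escalate=False, topic=None, reply="AI response goes here.")

if __name__ == "__main__":
    result = triage("I've been feeling stressed about work lately.")
    print(result.escalate, result.reply)
```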
The privacy and consent issue could be addressed in different ways. For example, you could have AIs dedicated specifically to mental health, with all chat data protected and visible only to mental health professionals, and this could be stated clearly whenever the service is used.
The issue of responsibility and liability in cases of errors or harm could be tricky to resolve, but it is perhaps a matter of setting a precedent.
Manufacturers of self-driving cars, for example, face similar legal issues around liability when their vehicles are involved in accidents.
The issues of liability and responsibility will have to be resolved eventually by new laws & legal precedent.
How AI could help with our mental health crisis
If we can reduce or mitigate these risks, it unlocks the huge potential AI has for helping solve our mental health crisis.
Making mental health services accessible & affordable
Many people who suffer from mental health issues face barriers to accessing professional help, such as stigma, cost, or lack of availability.
Sadly, these are some of the main factors making the mental health crisis as big as it is.
This is true for a whole range of countries from the developed West to the Global South.
AI could provide a convenient and low-cost alternative that can reach more people in need.
As Eliot's article also highlights:
A person can use the generative AI at any time of the day or night since the AI is available 24x7. No need to somehow hold onto your mental anguish or angst until you can get access to your needed therapist. Furthermore, the cost is likely going to be a lot less to use a generative AI mental health advisor rather than a human one.
Providing personalized and tailored mental health advice
AI can use data and algorithms to analyse the individual needs and preferences of each user and provide customised feedback and interventions.
As long as privacy and consent concerns are addressed, the record of a person's conversations with an AI is one of the most powerful and unique aspects of using AI for mental health.
The AI can remember everything a person has said and offer personalised insights and connections across those conversations in a way no human could.
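As an illustration of this memory idea, here is a toy Python sketch that stores everything a person has said and surfaces recurring themes across sessions. The word-frequency "theme" extraction is a deliberate simplification of my own; a real system would use far more sophisticated language understanding.

```python
# Toy sketch of conversation memory: keep every message a person has shared
# and surface recurring themes. The theme extraction here is deliberately
# naive (word frequency) and purely illustrative.

from collections import Counter

class ConversationMemory:
    """Stores everything a user has said and surfaces recurring themes."""

    STOPWORDS = {"i", "the", "a", "and", "to", "my", "it", "of", "is", "that"}

    def __init__(self) -> None:
        self.messages: list[str] = []

    def remember(self, message: str) -> None:
        self.messages.append(message)

    def recurring_themes(self, top_n: int = 3) -> list[str]:
        words = [
            w.strip(".,!?").lower()
            for m in self.messages
            for w in m.split()
        ]
        counts = Counter(w for w in words if w and w not in self.STOPWORDS)
        return [word for word, _ in counts.most_common(top_n)]

memory = ConversationMemory()
memory.remember("Work has been stressing me out again this week.")
memory.remember("I couldn't sleep because I kept thinking about work.")
print(memory.recurring_themes())  # 'work' surfaces as a recurring theme
```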
AI might also provide other unique benefits over humans as Eliot explains:
Another reason to do so is that people at times feel more at ease conferring with AI than they do with a fellow human being, as noted in this posted remark: “Even for less structured therapies, some data suggest that people will share more with a bot than a human therapist. It relieves concerns that they are being judged or need to please the human therapist. And for a generation of digital natives, the appeal of a human therapist may not be the same as it was for their parents and grandparents”
Enhancing the effectiveness and efficiency of human therapists
AI could help augment the work of human therapists by providing them with insights, tools, and assistance.
For example, the AI could help summarise conversations with a client, whether those conversations were with the AI or with a human therapist.
The AI might also assist the human therapist in other ways, for example by reviewing case notes to surface patterns in a patient's history over time.
AI could also help with administrative tasks, such as note-taking, scheduling, and billing, and free up more time for the therapists to focus on the human aspects of their practice.
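As a rough illustration of how such an assistant might work, here is a hedged Python sketch that summarises a session transcript with a large language model. It assumes the OpenAI Python client purely as an example (the model name and prompt are my own placeholders; any LLM API could fill the same role), and a real deployment would need the privacy safeguards discussed earlier before any patient data went near such a service.

```python
# Hedged sketch of the "AI assistant for therapists" idea: summarising a
# session transcript so the therapist can review it quickly. Assumes the
# OpenAI Python client (pip install openai) as one possible example; the
# model choice and prompt wording are illustrative placeholders only.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarise_session(transcript: str) -> str:
    """Return a short summary of a session transcript for the therapist."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice for this sketch
        messages=[
            {
                "role": "system",
                "content": (
                    "Summarise this therapy session transcript for the "
                    "treating therapist: key themes, mood, and follow-ups."
                ),
            },
            {"role": "user", "content": transcript},
        ],
    )
    return response.choices[0].message.content

# Example usage (with a placeholder transcript):
# print(summarise_session("Therapist: How was your week? Client: ..."))
```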
A combination of both human and AI mental health services might offer the best of both worlds, as Eliot highlights:
The human in the loop seems a lot more reassuring. If the generative AI goes astray, presumably the human therapist will be able to set things straight with the patient. We might be able to get the best of both worlds, as it were.
Humans & AI working together to solve our mental health crisis
One thing is clear: if we carry on as we are, our mental health crisis is going to grow even worse.
With it affecting 1 in 4 people, over your lifetime you're going to know many people who suffer from mental health issues and won't get the support or help they need.
People you know and care about. Maybe one day it will be you.
I wouldn't want you or anyone to suffer from mental health issues without any support.
The idea of using AI for mental health is a controversial and heated topic, and I can totally understand why.
You could say we should only use humans for mental health support, end of story.
As Eliot highlights:
Some believe that only a human therapist or clinician can suitably aid other humans with their mental health concerns. The idea of using a passionless human-soul-devoid AI system or a so-called robot therapist or robo-therapist (which nowadays is said to be a chatbot or generative AI therapist), just seems nutty and completely off the rails.
But that position ignores and fails to address the main drivers of our mental health crisis: stigma, cost, and lack of availability.
As Eliot also explains:
We can civilly agree that there is a mental health challenge facing the country and that we ought to be doing something about it. If we do nothing, the base assumption is that things are going to get worse. You can’t let a festering problem endlessly fester.
If we can address concerns such as risks to the quality of care & privacy issues, AI could be a powerful tool we can use together with human therapists to solve this crisis.
If we are prepared to understand the reasons for this crisis, be open to dealing with it in new and innovative ways, and let go of old beliefs that don't provide effective solutions, we can solve this crisis.
If we can use the best that humans and AI offer together, AI could help rescue us from this mental health crisis, and help billions of people around the world who would otherwise suffer alone.
But what’s your perspective? Do you agree? Do you think AI could help solve our mental health crisis? Or do you have a very different perspective?
I’d love to know what you think, whatever that is. Let me know in the comments and let’s continue this important discussion about AI and mental health.