When I suggest that Artificial Intelligence might be able to help us improve our mental health, the first reaction I receive is often a puzzled or confused look: as if I had just asked my toaster to drive me to the airport. How can a programme with no true understanding of emotions help us decode our own? The answer is that our thoughts, emotions and behaviours create patterns, and if there is one thing that AI excels at, it's identifying patterns.

Autism, ADHD, PTSD, anxiety and depression are all mental health conditions that artificial intelligence can help to treat. In each of these conditions, the brain makes repetitive connections that reinforce detrimental thoughts and behaviours. For example, if every time someone mentions an assignment the brain links that thought to stress, it will keep rehearsing the connection and actively seek it out. But by becoming aware of these detrimental patterns, and knowing that we can actively change them, we can gradually retrain the brain. AI can not only help us identify these patterns but can help train us out of them.
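To make the idea of pattern-spotting concrete, here is a minimal sketch, using purely invented journal entries, of how software might flag a topic that keeps co-occurring with self-reported stress. The data and the 80% threshold are assumptions for illustration, not part of any real product.

```python
from collections import Counter

# Hypothetical journal entries paired with a self-reported mood label.
# Purely illustrative data; a real system would learn from far richer signals.
entries = [
    ("got another assignment today", "stressed"),
    ("walked in the park with a friend", "calm"),
    ("the assignment deadline is close", "stressed"),
    ("assignment feedback arrived", "stressed"),
    ("quiet evening reading", "calm"),
]

# Count how often each word appears, and how often alongside stress.
word_totals = Counter()
word_stressed = Counter()
for text, mood in entries:
    for word in set(text.split()):
        word_totals[word] += 1
        if mood == "stressed":
            word_stressed[word] += 1

# Flag words that recur and almost always co-occur with stress.
for word, total in word_totals.items():
    if total >= 2 and word_stressed[word] / total > 0.8:
        print(f"'{word}' co-occurs with stress in {word_stressed[word]}/{total} entries")
```

Run on this toy data, the script flags "assignment", which appears three times and always alongside a stressed entry; exactly the kind of learned association the paragraph above describes.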

By recording and analysing the language used in thousands of therapy sessions with a natural language processing tool, therapists can get a better idea of how much time is spent on constructive therapy techniques versus general chit-chat, and can modify their sessions accordingly to ensure the highest standard of care. Furthermore, by comparing the techniques therapists use, the symptoms patients present and the outcomes of their sessions, key insights can be derived to improve treatment. For example, such analysis can identify subgroups within a diagnosis, show which therapies work best for which subgroups, and match individuals to the most suitable therapists. In short, data collected and analysed by AI can help create personalised therapeutic programmes and enable earlier detection of mental health issues, but only as long as ethical considerations such as maximising accuracy, using high-quality data and avoiding bias are kept at the forefront of any AI-driven approach.
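As a rough illustration of how such a natural language processing tool might begin, here is a minimal sketch that trains a simple text classifier to separate therapy-technique utterances from chit-chat. The labelled utterances are invented, and a real system would need far larger datasets and clinical validation.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny hand-labelled sample of therapist utterances (illustrative only).
utterances = [
    "let's examine the evidence for that thought",         # technique
    "how was the traffic on your way here",                # chit-chat
    "what would you say to a friend in this situation",    # technique
    "did you catch the game last night",                   # chit-chat
    "let's set a small behavioural experiment this week",  # technique
    "lovely weather we're having",                         # chit-chat
]
labels = ["technique", "chitchat", "technique", "chitchat", "technique", "chitchat"]

# TF-IDF features plus a linear classifier: a simple, transparent baseline.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(utterances, labels)

# Estimate how a new session's time splits between technique and small talk.
session = [
    "how was your weekend",
    "let's review the thought record you kept",
]
predictions = model.predict(session)
technique_share = sum(p == "technique" for p in predictions) / len(predictions)
print(predictions, f"technique share: {technique_share:.0%}")
```

The design choice here is deliberate: a transparent baseline like TF-IDF plus logistic regression is easy to audit, which matters when the ethical requirements above (accuracy, data quality, avoiding bias) have to be demonstrated rather than assumed.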

Diagnosis of mental health conditions can also be improved through motion sensors and deep learning. One study found that an AI trained to recognise movements such as nail-biting, knuckle-cracking and hand-tapping could detect anxiety with 92% accuracy. Although this was a rather small study, with only 10 participants, it points towards a promising approach that could be of considerable value for screening and diagnosis.
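The study's own model isn't described here, but a hedged sketch can show the general shape of such a system: a small 1D convolutional network that classifies windows of wrist-sensor readings into movement types. The channel count, window length and movement labels below are assumptions for illustration.

```python
import torch
import torch.nn as nn

# A minimal 1D convolutional classifier over windows of wrist-sensor data.
# Shapes and labels are assumptions for illustration, not the study's design:
# each window holds 3-axis accelerometer readings, 100 samples long.
N_CHANNELS, WINDOW = 3, 100
MOVEMENTS = ["nail-biting", "knuckle-cracking", "hand-tapping", "other"]

class MovementNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(N_CHANNELS, 16, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # collapse the time axis to one vector
        )
        self.classify = nn.Linear(32, len(MOVEMENTS))

    def forward(self, x):  # x: (batch, channels, time)
        return self.classify(self.features(x).squeeze(-1))

model = MovementNet()
batch = torch.randn(8, N_CHANNELS, WINDOW)  # stand-in for real sensor windows
logits = model(batch)
print(logits.shape)  # (8, 4): one score per candidate movement
```

Counting how often such movements occur over a day is then a straightforward aggregation step, which is where the anxiety signal in the study would come from.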

Cognitive behavioural therapy (CBT) aims to address both the troubling behaviours that result from mental health issues and the issues themselves. CBT chatbots are designed to identify negative thought patterns and encourage users to change them. One of the main benefits reported by individuals using CBT chatbots is that there is no pressure because they are 'just chatbots', allowing a patient to vocalise honestly and openly exactly what they are thinking, perhaps for the first time. The accessibility of CBT chatbots is another huge advantage, as they can reach individuals who, for various reasons, may not have access to mental health services: because they cannot afford it, or because in their part of the world seeking support for mental health conditions is stigmatised or simply unavailable. Some users have described CBT chatbots as a '4 am friend'. The chatbots can also be a stepping stone for individuals who aren't yet comfortable with the idea of seeking therapy.
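To give a flavour of the mechanics, here is a toy, rule-based sketch of the reframing loop a CBT chatbot performs: spot a common cognitive distortion in the user's message and respond with a gentle prompt. The patterns and replies are illustrative; real CBT chatbots rely on trained language models and clinically designed scripts.

```python
import re

# Toy cognitive-distortion patterns mapped to reframing prompts.
# Illustrative only; not a clinical tool.
DISTORTIONS = [
    (r"\b(always|never)\b",
     "That sounds like all-or-nothing thinking. Can you recall one exception?"),
    (r"\beveryone\b|\bnobody\b",
     "That may be overgeneralising. Who specifically comes to mind?"),
    (r"\bshould\b",
     "'Should' statements can add pressure. What would you prefer instead?"),
]

def reply(message: str) -> str:
    """Return a reframing prompt for the first distortion found, if any."""
    for pattern, prompt in DISTORTIONS:
        if re.search(pattern, message.lower()):
            return prompt
    return "Tell me more about what's going through your mind."

print(reply("I always mess up my assignments"))
print(reply("Nobody wants to hear about this"))
```

Even this crude loop hints at why the format feels low-pressure: the user can say anything, at any hour, and receive a non-judgemental nudge towards examining the thought.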

Yet there is a lot of controversy surrounding CBT chatbots. Some therapists offer constructive criticism, pointing out that there is no room for a stream of consciousness and that the design relies on patients being in a grounded mental state, which patients seeking psychological support often are not. Other therapists reject the chatbots outright, finding their very nature problematic: the fact that chatbots aren't human means they cannot empathise, a key element of therapeutic care. Therapists have also argued that presenting chatbots as the solution to underfunded mental health services is deeply problematic. It could foster a social acceptance that chatbots are good enough for those who don't have access to a human therapist, possibly justifying disinvestment in mental health services.

However, critiques that frame chatbots as a replacement for human therapists are fundamentally flawed. Mental health chatbots don't have the power to magically cure us or to replace human therapists; they are designed by nature to work alongside our own efforts and those of a professional. It is important to be clear about how we conceptualise these chatbots, because it will affect how people use them.

The misconception that CBT chatbots are designed to replace human therapists feeds into possibly the biggest mistake we make with AI generally. Movies, books and the media have all fed us the scary idea that we are being replaced by AI and robotics, and that AI aims to hand us the solution. Now more than ever, the human-centric movement in AI is about finding ways to help us, as humans, find our own solutions and become better at what we do.

Written by Celene Sandiford, smartR AI
