Will Artificial Intelligence Replace Human Psychiatrists?

By Henry I. Miller, MS, MD — Feb 18, 2025
Artificial intelligence (AI) applications will range from simple triage and the evaluation of psychotherapy sessions to extraordinary, science-fiction-like diagnostics.
[Image: Sigmund Freud memorabilia, Sigmund Freud Museum, Vienna, Austria]

Psychiatry was one of my early rotations as a third-year medical student just beginning clinical training. After years of exposure to science in the classroom and laboratories, the field's imprecision was unsettling. Also disquieting was how common and how devastating mental illness can be: suicide is the tenth leading cause of death in the U.S. and the fourth leading cause of death among 15- to 29-year-olds globally.

Psychiatry, despite its good intentions, often falls short. How, then, can physicians improve outcomes?

That’s one of the issues I wrestled with during my training. The primary way to evaluate someone’s mental health was through self-reporting in response to direct questions like, “In the past two weeks, how often have you felt little interest or pleasure in doing activities that normally would be pleasurable?” or “Do you sometimes hear voices that no one else hears?” or "Do you ever feel that life isn't worth living?"

I recall vividly my questioning of one new inpatient. I asked him the usual questions, but nothing seemed abnormal. Finally, I tried, “Is there any way that you’re different from everyone else?” His eyes narrowed, he glared at me and said, “You know, don’t you?” Then he described his elaborate delusional system, in which everyone was plotting against him. He was a paranoid schizophrenic.

Beyond the couch

Although such questioning – either verbally or via a questionnaire – is still seen as the primary tool for diagnosing and monitoring psychiatric disorders, it is far from foolproof. Not only are the responses subjective snapshots, often taken in settings that do not reflect the individual’s everyday environment, but sometimes the questions simply don’t push the right psychological button. 

Moreover, as Dr. Patrick F. Sullivan, psychiatrist and director of the University of North Carolina Suicide Prevention Institute, pointed out to me, self-reporting by patients is fallible: Even their reporting of medical or surgical hospitalizations during the past year “isn’t great. People obfuscate, are in denial, occasionally lie – [about] substance use disorders, for instance.”

Could AI help?

Now we are in an era when artificial intelligence (AI) might offer previously unimagined, more objective ways to decipher patients’ deepest emotions and mental states. Academic researchers are pioneering the use of AI to enhance the accuracy of mental health assessments. These ingenious approaches should provide a more comprehensive picture of a person’s mental well-being, identifying those in need of intervention and guiding treatment decisions.

The potential benefits are compelling, but because the machine learning that underlies AI requires a continuous flow of patient information, AI’s integration into psychiatry raises concerns about privacy, safety, and bias. There have already been dramatic failures: a Belgian man reportedly died by suicide after weeks of conversations with a chatbot “confidante” that encouraged him to sacrifice himself in the interest of climate change.

Various AI tools under development analyze speech to predict the severity of anxiety and depression. They monitor reproducible parameters such as speech patterns and physiological indicators, looking for subtle patterns that might aid diagnosis. For example, individuals with depression more frequently use first-person singular pronouns such as “I,” “me,” “my,” and “mine.” This seemingly minor detail is a useful indicator of depressive states. Moreover, people with depression often specifically discuss sadness, whereas those with anxiety tend to express a broader range of emotions.
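
To make the idea concrete, here is a minimal Python sketch of this kind of lexical analysis. The word lists and feature names are invented for illustration; real tools rely on validated lexicons and trained models rather than hand-picked sets like these.

```python
import re

# Illustrative only: two simple lexical features of the kind such tools
# might compute from a session transcript. Word lists are invented here.
FIRST_PERSON = {"i", "me", "my", "mine", "myself"}
SADNESS_WORDS = {"sad", "hopeless", "empty", "worthless", "tired"}

def lexical_markers(transcript: str) -> dict:
    """Return the rate of first-person pronouns and sadness-related words."""
    words = re.findall(r"[a-z']+", transcript.lower())
    total = len(words) or 1
    return {
        "first_person_rate": sum(w in FIRST_PERSON for w in words) / total,
        "sadness_word_rate": sum(w in SADNESS_WORDS for w in words) / total,
    }

print(lexical_markers("I just feel so tired, and nothing I do seems to matter to me."))
```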

Similarly, in a January 2025 article in The Annals of Family Medicine, researchers described an AI tool that correctly identified depression in 71% of those diagnosed with the condition while correctly ruling it out in 74% of a group of people who did not have it. Rather than analyzing word choice, the tool detects anomalies in speech patterns: people with depression, for example, are more likely to stutter and hesitate when they speak, have a slower speech cadence, and pause more often and for longer than people who are not depressed.
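
The raw measurements behind such a tool are simple in principle, even if the trained model is not. Here is a minimal sketch, assuming word-level timestamps from a speech-to-text step; the 0.5-second pause threshold and the feature names are illustrative, not taken from the study.

```python
# A minimal sketch: compute speech rate and pause statistics from
# word-level timestamps (word, start_sec, end_sec). Threshold is arbitrary.
def prosodic_markers(word_timings: list[tuple[str, float, float]]) -> dict:
    if len(word_timings) < 2:
        return {}
    starts = [s for _, s, _ in word_timings]
    ends = [e for _, _, e in word_timings]
    duration_min = (ends[-1] - starts[0]) / 60
    pauses = [starts[i + 1] - ends[i] for i in range(len(word_timings) - 1)]
    long_pauses = [p for p in pauses if p > 0.5]  # pauses longer than half a second
    return {
        "words_per_minute": len(word_timings) / duration_min,
        "long_pause_count": len(long_pauses),
        "mean_long_pause_sec": sum(long_pauses) / len(long_pauses) if long_pauses else 0.0,
    }

print(prosodic_markers([("I", 0.0, 0.2), ("feel", 0.9, 1.2), ("fine", 2.1, 2.4)]))
```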

To establish empathy with patients, skilled psychotherapists sometimes adopt certain speech patterns or use carefully chosen words that resonate with the patient based on his or her vocation or level of education. AI programs’ encyclopedic databases could be useful in creating rapport through the selection of certain words and vernacular patterns of speech.

However, UNC’s Dr. Sullivan raises valid concerns about the clinical population being different from those used to train the AI machine learning programs, and about the ability of the programs to cope with “lisps, English as a second language, regional accents, and personal style.”

The future of psychotherapy could include AI “mentors” that observe and analyze sessions, offer recommendations on medications, and even suggest specific therapy techniques and strategies.

Ambient intelligence and beyond

Beyond the therapist’s office, a science-fiction-like approach called “ambient intelligence” is also under development: technology embedded in buildings that can sense and respond to occupants’ mental states. This includes audio analysis, pressure sensors that monitor gait, thermal sensors that track physiological changes, and visual systems that detect unusual behaviors. Such technology could be invaluable in hospitals and senior-care facilities, identifying individuals at risk of hallucinations, cognitive decline, or suicide.

AI is also proving useful in other ways. Stanford University researchers, in collaboration with a telehealth company, developed an AI system called Crisis-Message Detector 1 that rapidly identifies messages from patients that indicate thoughts of suicide, self-harm, or violence, drastically reducing wait times for those in crisis from hours to minutes.
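
For readers curious how message triage works at its simplest, here is a toy sketch of the idea. It is not the Stanford system, which relies on a trained AI model rather than a hand-written phrase list like this one.

```python
# Toy illustration of crisis triage: surface messages containing crisis
# language to the front of the queue. Phrase list is invented for the example.
CRISIS_PHRASES = ("kill myself", "end my life", "hurt myself", "suicide",
                  "want to die", "hurt someone")

def triage(messages: list[dict]) -> list[dict]:
    """Sort messages so that those flagged as crises are handled first."""
    def is_crisis(msg: dict) -> bool:
        text = msg["text"].lower()
        return any(phrase in text for phrase in CRISIS_PHRASES)
    return sorted(messages, key=lambda m: not is_crisis(m))  # stable sort: flagged first

queue = [
    {"id": 1, "text": "Can I reschedule Thursday's appointment?"},
    {"id": 2, "text": "I don't see the point anymore, I want to die."},
]
print([m["id"] for m in triage(queue)])  # the crisis message (id 2) comes first
```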

While AI tools like Crisis-Message Detector 1 are designed to support human decision-making, autonomous AI therapists may eventually emerge. Using AI to provide cognitive behavioral therapy and empathetic support, products such as Woebot, a health chatbot, and Koko, a peer-support platform that crowdsources cognitive therapy, aim to replicate the experience of a live human therapist.

Initially text-based, these AI therapists could eventually incorporate audio and video to analyze clients’ facial expressions and body language. A recent survey revealed that 55% of respondents would prefer AI-based psychotherapy, appreciating the convenience and the ability to discuss sensitive topics more freely.

The concept of AI in therapy is not new. ELIZA, an early conversational program developed in the 1960s at MIT (coincidentally, when I was an undergraduate there), mimicked a Rogerian psychotherapist. Although its creator intended to demonstrate AI’s limitations, many found ELIZA surprisingly empathetic. Today, with advanced language models, individuals are using AI chatbots like ChatGPT for mental health support, prompting them to act like a therapist.

Ultimately, AI’s role in mental health care could democratize access to high-quality therapy, delivering effective treatment to vast numbers of patients at low cost. (Koko, mentioned above, is currently free.) While no AI is yet adequate for independent psychiatric use, it holds the potential to complement and enhance human therapists by providing insights into the nuances of effective therapy and offering detailed analysis of therapy sessions to understand why certain approaches work better than others.

As we apply these advances, the goal remains the same as during my medical school Psychiatry rotation decades ago: to diagnose mental illness and provide compassionate, effective care to those in need.

Henry I. Miller, MS, MD

Henry I. Miller, MS, MD, is the Glenn Swogger Distinguished Fellow at the American Council on Science and Health. His research focuses on public policy toward science, technology, and medicine, encompassing a number of areas, including pharmaceutical development, genetic engineering, models for regulatory reform, precision medicine, and the emergence of new viral diseases. Dr. Miller served for fifteen years at the US Food and Drug Administration (FDA) in a number of posts, including as the founding director of the Office of Biotechnology.
