If you or someone you know is in crisis, call or text the 988 Suicide and Crisis Lifeline to connect with a counselor.
SPRINGFIELD – As the use of artificial intelligence continues to grow, the technology is increasingly being considered for use in mental health settings. Lawmakers, however, are concerned this could cause patients to receive underqualified mental health care in place of support from licensed therapists.
In August, Illinois passed a bill that will limit the use of AI in therapy settings. The bill’s lead sponsor, Democratic State Representative Bob Morgan, said the measure aims to keep patients safe.
“People are increasingly using their phones for everything in their lives,” he said. “Unfortunately, more and more so, when they’re in crisis, when they have suicidal ideation, when they are dealing with other mental health challenges, they’re turning to AI.”
Morgan said this can be harmful because AI is not as qualified as a human to address mental health problems. He said the legislation targets business models built around therapy chatbots rather than people who turn to large language models like ChatGPT for their mental health.
“If somebody opened a store at a corner and said, ‘I’m a doctor,’ but they really weren’t, the state always regulated that. What we’re talking about here is similar,” he said. “We don’t want an app or a website or platform holding itself out and trying to fill the needs of what otherwise would be provided by a therapist.”
The law does allow therapists and companies to use AI for administrative tasks like note-taking and scheduling appointments, provided patients consent to its use.
Though officials say there are dangers to using AI for therapy, others say the technology could help address a shortage of mental health professionals and decrease wait times for care.
Olu Ajilore, a professor of psychiatry at the University of Illinois Chicago, is part of a team of researchers developing an AI voice coach called Lumen to address this problem.
“We definitely don’t want to replace the scarce resources that we already have, but what we want to do is augment what we have,” he said. “We want to give somebody something that they can use, something to help them address their mental health issues while they’re waiting for care.”
Ajilore said Lumen has shown promising results for mental health care. He said the tool is being developed to tailor its responses to each user’s needs rather than generate open-ended responses the way large language models do.
“The answers that you get back will still have to be curated and prepared. It doesn’t make up stuff, or even say things that could be harmful, which has been a real problem with using tools like ChatGPT for therapy,” Ajilore said.
In his research, Ajilore found that underrepresented groups, such as non-white women, had the most success using Lumen. He said tools like these function the best when created with the user’s needs in mind.
“It’s important to think about who are you designing these technologies for, and that they are part of the development process when you’re creating these technologies,” Ajilore said.
Morgan said he thinks the increased use of artificial intelligence is inevitable. He said the goal of the new law is not to ban companies from using AI to help treat mental health, but to establish a set of standards guiding how the technology can be used.
“This need for mental health care is just so high that it’s unrealistic to think that our existing network of actual people, of licensed professionals, are going to be enough to satisfy the need,” he said. “That’s why we have to have guardrails.”