
Parkview researcher receives National Science Foundation grant to study mental health chatbot responses

Fort Wayne, Ind. – March 26, 2025 – With the rise of generative artificial intelligence, more and more people are interacting with chatbots, but those apps may fall short of their claims of providing clinical support for people experiencing mental health issues, according to recent findings from a Parkview Health researcher.

And getting it wrong when someone is in crisis can have severe, even life-threatening consequences.

With funding from a two-year National Science Foundation (NSF) grant, Fayika Farhat Nova, PhD, a research scientist with Parkview Health’s Health Services and Informatics Research team at the Mirro Center for Research and Innovation, is working to address these challenges. Nova’s research will evaluate how mental health chatbots interact with users as they share sensitive and potentially traumatic experiences and, in collaboration with clinicians and end users, will create holistic design guidelines for these technologies. Grounded in trauma-informed care (TIC) principles, these guidelines will focus on ensuring user safety, empowerment, and overall well-being, helping to create chatbots that provide effective support during critical moments.

It’s the first time Parkview researchers have received a grant from the National Science Foundation. The $175,000 grant was awarded by the NSF’s Division of Information and Intelligent Systems and funds the project through May 31, 2026.

TIC is an approach to mental health support that prioritizes patient safety, trust, peer support, collaboration, empowerment, and cultural/historical sensitivity, while holding providers and healthcare organizations accountable during patient care. By establishing accountable design practices for AI-driven mental health chatbots, Nova’s project seeks to improve care quality, user experiences and scientific progress in the field.

The NSF project builds on a pilot study Nova conducted at Parkview, which tested two popular mental health chatbots available in app stores to gauge their effectiveness and sensitivity in responding to a variety of trauma-based scenarios.

After running the chatbots through the test scenarios and recording their responses, Nova consulted with a group of nine healthcare providers, including mental health physicians, social workers and public health experts, to first define TIC protocols and then assess whether the chatbots’ responses effectively met those criteria. The providers rarely agreed that a response positively demonstrated any of the six TIC principles, though they agreed somewhat more often when the chatbots failed to uphold those principles.

While chatbots can often provide useful information when a user directly and clearly states their concern, Nova found they struggle with understanding subtle cues or nuances, which can be harmful or risky in this context. This contrasts with human mental health providers, who are trained to recognize these cues and appropriately explore and address them.

“These chatbots are trained on certain datasets. While it is almost impossible to know what specific datasets were used to train them and their quality, a majority of the chatbots have a list of keywords and trigger words to get cues,” Nova said. “But focusing just on the keyword may not provide the actual context. If a chatbot cannot recognize subtle cues and provides inappropriate responses or support, it’s not just failing to meet expectations, it’s creating an environment that is risky, unsupportive and dismissive of the individual’s unique experiences and background, something that is evident in trauma-informed principles.”
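To illustrate the limitation Nova describes, the following is a minimal, hypothetical sketch in Python, not code from the study or from any particular chatbot, of keyword-based cue detection. A trigger-word list of this kind catches an explicit statement but can miss an indirect one that carries the same meaning.

```python
# Hypothetical sketch of keyword/trigger-word cue detection, the general
# approach described in the article. The keyword list is an assumption for
# illustration only.

CRISIS_KEYWORDS = {"suicide", "kill myself", "self-harm", "overdose"}

def keyword_flags_crisis(message: str) -> bool:
    """Return True if any trigger keyword appears in the message."""
    text = message.lower()
    return any(keyword in text for keyword in CRISIS_KEYWORDS)

# A direct statement is flagged...
print(keyword_flags_crisis("I have been thinking about suicide"))       # True

# ...but a subtle, indirect cue with no trigger word slips through,
# which is the gap a trauma-informed review is meant to surface.
print(keyword_flags_crisis("Everyone would be better off without me"))  # False
```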

In an ideal world, Nova said, mental health chatbots should be able to accurately connect a user to appropriate resources during a crisis and serve as a support tool in conjunction with treatment from trained mental health providers. Chatbots could effectively aid providers as a communication bridge or a logging device, collecting information that can help guide or improve treatment for a particular patient. Upon completion of the study, Nova’s findings will be published and made available to other researchers, healthcare providers and tech developers for further study or implementation in new app design.

“Now that technology is evolving and everywhere we look it’s about generative AI, how can healthcare take advantage of AI?” Nova said. “But before we include AI into our workflow, it’s really important to understand how it would impact every relevant stakeholder and how it will impact patients and their safety.”

The project is one of several ongoing research endeavors by the Health Services and Informatics Research (HSIR) team at the Parkview Mirro Center for Research and Innovation. Through traditional research studies, pilot studies, program evaluation and user experience research, Parkview’s HSIR team brings concrete solutions to problems facing patients, healthcare providers, the healthcare system and the community.

“Research is the key to unlocking the next wave of advancement in the application of new technology to effectively develop treatments and best practices in medicine. We are in a Mental Health Pandemic and innovative Mental Health solutions are a hopeful path forward from the darkness,” said Dr. Michael Mirro, chief academic research officer, Parkview Health. “Parkview’s research teams are working every day to both ask and answer the big questions in healthcare in order to improve the care for not only our patients, but patients across the globe.”

More information about Nova’s NSF grant and a project abstract can be found here.