By Qining Wang
Amid the turmoil of recent social events, from global pandemics to wars and social unrest, mental health has become a growing public concern.
According to the Anxiety and Depression Association of America (ADAA), anxiety disorders are the most common mental illness in the USA, affecting 40 million adults. Depression, another common mental illness, affects 16 million adults in the USA, according to statistics from the Centers for Disease Control and Prevention (CDC). Greater awareness and the gradual destigmatization of mental health issues have led more people to seek professional help to improve their overall mental well-being.
When working with mental health professionals, self-disclosure is vital to uncovering the roots and triggers of mental health issues. Self-disclosure is the process through which a person reveals personal or sensitive information to others, and it is a crucial way to relieve stress, anxiety, and depression.
At the same time, self-disclosure is a skill, one that can only be cultivated through constant practice, self-exploration, and the courage to be vulnerable.
To investigate alternative ways of practicing self-disclosure, a research team at the University of Illinois at Urbana-Champaign (UIUC) explored chatbots and conversational AIs as potential mediators of self-disclosure in a 2020 study. The team's leader, Dr. Yun Huang, is an assistant professor in the School of Information Sciences at UIUC and co-director of the Social Computing Systems (SALT) Lab. The team mainly focuses on context-based social computing systems research.
Chatbots are ubiquitous in today's online world. They are computer programs that converse with humans, exchanging messages back and forth. Some chatbots are task-oriented: a frequently-asked-questions (FAQ) chatbot, for example, recognizes keywords in a person's message and returns a preset answer matched to those keywords. Other, more sophisticated chatbots, such as Apple's Siri and Amazon's Alexa, are data-driven: they are more contextually aware and tailor their responses to user input. Contextual awareness and tailored responses are ideal qualities for designing an empathetic, tone-aware chatbot capable of self-disclosure.
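To make the keyword-matching idea concrete, here is a minimal sketch of a task-oriented FAQ bot. The keywords, canned answers, and the faq_reply helper are invented for illustration and are not taken from any particular product; data-driven assistants replace this kind of lookup with statistical language models.

```python
# Minimal sketch of a keyword-based FAQ chatbot.
# The keywords and preset answers below are invented for illustration.

FAQ_RESPONSES = {
    "hours": "Our office is open 9am-5pm, Monday through Friday.",
    "appointment": "You can book an appointment through our online portal.",
    "cost": "A standard session costs $50; sliding-scale options are available.",
}

DEFAULT_REPLY = "Sorry, I didn't understand that. Could you rephrase?"

def faq_reply(user_message: str) -> str:
    """Return the preset answer for the first keyword found in the message."""
    text = user_message.lower()
    for keyword, answer in FAQ_RESPONSES.items():
        if keyword in text:
            return answer
    return DEFAULT_REPLY

print(faq_reply("How much does a session cost?"))
# -> "A standard session costs $50; sliding-scale options are available."
```

A bot like this can only answer what its authors anticipated, which is exactly the limitation that data-driven chatbots are designed to overcome.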
As such, Dr. Huang's team built a self-disclosing chatbot that engages in conversation more naturally and spontaneously. The chatbot initiates self-disclosure during small-talk sessions, then gradually moves to more sensitive questions that encourage users to self-disclose.
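The study's actual implementation is not reproduced here; below is a simplified, hypothetical sketch of how such a staged script might escalate from small talk to more sensitive prompts. The Stage structure, the example disclosures, and run_session are all invented for illustration.

```python
# Hypothetical sketch of a staged self-disclosure script, escalating from
# small talk to more sensitive prompts. The stages and example lines are
# invented for illustration; the study's real chatbot differs.

from dataclasses import dataclass

@dataclass
class Stage:
    name: str
    bot_disclosure: str   # what the chatbot reveals about "itself"
    prompt: str           # the question it then asks the user

SCRIPT = [
    Stage("small_talk",
          "I spent my morning answering questions about the weather.",
          "How has your day been so far?"),
    Stage("low_disclosure",
          "Sometimes I worry that I give unhelpful answers.",
          "Is there anything that has been worrying you lately?"),
    Stage("high_disclosure",
          "When I make mistakes, I feel like I've let people down.",
          "Can you tell me about a time you felt you let someone down?"),
]

def run_session(stage_index: int) -> None:
    """Print the chatbot's turn for the given stage of the study."""
    stage = SCRIPT[stage_index]
    print(f"[{stage.name}] Bot: {stage.bot_disclosure}")
    print(f"[{stage.name}] Bot: {stage.prompt}")

run_session(0)  # a session early in the study stays at small talk
run_session(2)  # later sessions move to more sensitive questions
```

In the study's terms, the different experimental conditions roughly correspond to how far along this kind of escalation the chatbot's own disclosures go, though that mapping is only illustrative.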
To study how a chatbot's self-disclosure affects humans' willingness to self-disclose, the team recruited university students and divided them into three groups. Each group interacted with a version of the chatbot at a different level of self-disclosure: none, low, or high.
During the four-week study, the student participants interacted with the chatbot for 7–10 minutes every day. At the end of the third week, the chatbot recommended that students interact with a human mental health specialist, and the researchers then evaluated the students' willingness to self-disclose to that professional.
The team found that the groups that self-disclosed to the chatbot reported greater trust in the mental health professional than the control group. Control-group participants felt "confused" when the chatbot brought up the human professional, while participants in the experimental groups felt that the chatbot could listen to them and that they could share sensitive experiences with it.
The team also noted that, for participants who interacted with the chatbot at the highest level of self-disclosure, trust in the mental health professional stemmed from trust in the chatbot itself. For the other two groups, participants' trust was directed mainly toward the research team and the professionals behind the chatbot.
This study highlights how chatbots can be a great tool for helping users practice self-disclosure, making them more comfortable seeking out human professionals. Still, it is worth noting that, no matter how sophisticated chatbots become, they are only mediators between users and mental health professionals.
At the end of the day, the most meaningful kind of self-disclosure can only be found through care, empathy, and understanding. Human to human.
Get Involved
Contact the Midwest Big Data Innovation Hub if you’re aware of other people or projects we should profile here, or to participate in any of our community-led Priority Areas. The MBDH has a variety of ways to get involved with our community and activities. The Midwest Big Data Innovation Hub is an NSF-funded partnership of the University of Illinois at Urbana-Champaign, Indiana University, Iowa State University, the University of Michigan, the University of Minnesota, and the University of North Dakota, and is focused on developing collaborations in the 12-state Midwest region. Learn more about the national NSF Big Data Hubs community.