Most of us have seen or heard news headlines such as “Depression Linked with Social Media Use in Adults” (Medical News Today, Nov 2021), “How Social Media Overdose Impacts Teenagers’ Mental Health” (The Indian Express, Dec 2021), or “Facebook’s Dangerous Experiment on Teen Girls” (The Atlantic, Nov 2021). Headlines like these give the impression that social media, as a technology, does the public more harm than good; yet this has been the reaction to new technology since the introduction of the printing press. Although there is some merit to the notion that social media has drawbacks, particularly for mental health and mental illness, we must contend with the reality that social media, and its more advanced descendants, are here to stay.

According to Pew, seven in ten Americans report using some type of social media (Auxier & Anderson, 2021). The ubiquity of smartphones has expanded access to new forms of communication through social media, but it has also exponentially expanded the amount of data we produce for the sophisticated artificial intelligence (AI) that exists all around us, from email spam filters (text classification) and Google Search (information retrieval) to Apple FaceID (computer vision) and Netflix recommender systems (clustering of similar users). While AI technologies such as robotics, machine learning, computer vision, and natural language processing have received plenty of bad press lately, there are also many interesting ways that AI is being used to support mental health.
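To make “text classification” concrete, here is a minimal, illustrative sketch of how a spam filter of the kind mentioned above might work. The tiny training set, labels, and example messages are invented for illustration; real spam filters are trained on vastly larger datasets and richer signals.

```python
# Minimal sketch of text classification, in the spirit of an email spam filter.
# The training messages and labels below are toy examples, not real data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "Win a free prize now, click here",
    "Limited offer, claim your reward today",
    "Meeting moved to 3pm, see agenda attached",
    "Can you review the quarterly report draft?",
]
train_labels = ["spam", "not spam", "not spam", "spam"][::-1][:0] or ["spam", "spam", "not spam", "not spam"]

# TF-IDF turns each message into a weighted word-frequency vector;
# logistic regression then learns which words tend to signal "spam".
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_texts, train_labels)

print(model.predict(["Claim your free reward now"]))      # likely "spam"
print(model.predict(["Agenda for tomorrow's meeting"]))   # likely "not spam"
```

The same basic pattern, converting text into features and learning from labeled examples, underlies many of the language-based AI systems discussed in the rest of this piece.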

Since 2017, Facebook has been screening all posts in the United States for language indicative of suicidal behavior. To accomplish this, Facebook relies on crowdsourced reports and its AI algorithms to detect and respond to posts that suggest suicide risk. The tech giant partners with large, national crisis response networks such as the National Suicide Prevention Lifeline to provide support when a post is flagged as indicating “immediate risk.” This is just one example of AI being applied across social media to reach those most in need of mental health support, in situations where manual screening may not be possible or fast enough. Successful prevention is difficult to quantify, so lives that AI may have helped save receive little media attention. But it is important for those of us working to improve mental health systems and the mental well-being of others to understand how technology can support this work today and into the future.
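Facebook’s actual detection system is proprietary, so the following is only a hypothetical sketch of the general shape of AI-assisted triage: score each post for risk language, then escalate high-scoring posts to trained human reviewers rather than acting automatically. The training examples, threshold, and routing logic are all assumptions made for illustration.

```python
# Illustrative sketch only -- not Facebook's method. Real systems are trained on
# large, carefully labeled datasets and use far richer signals than text alone.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical hand-written examples of posts with and without risk language.
posts = [
    "I can't do this anymore, there's no point in going on",
    "I feel like everyone would be better off without me",
    "Had a rough week but looking forward to the weekend",
    "Great hike today, feeling refreshed",
]
labels = [1, 1, 0, 0]  # 1 = risk language present, 0 = none detected

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(posts, labels)

REVIEW_THRESHOLD = 0.5  # assumed cutoff; real thresholds are tuned carefully


def triage(post: str) -> str:
    """Route a post either to human crisis reviewers or to no action."""
    risk = model.predict_proba([post])[0][1]  # estimated probability of risk
    return "escalate to human reviewer" if risk >= REVIEW_THRESHOLD else "no action"


print(triage("I don't see a reason to keep going"))
```

The design point the sketch is meant to convey is that the AI narrows the stream of posts to those most likely to need attention, while the decision to intervene still flows through human responders and crisis networks.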

Online support groups are a Web 2.0 phenomenon that holds a lot of promise for people experiencing poor mental health or mental illness. One key advantage of online support is pseudonymity, or perceived anonymity, which lends itself to a sense of safety in online communication (Wright & Bell, 2003). Perceived anonymity in online settings also provides a degree of objectivity that close relationships cannot offer (Turner, Grube, & Meyers, 2001). People seeking online mental health support can communicate across space and time, exchanging support and information on a specific topic over hours, weeks, or years. Online support group members also tend to have better control over their “presentation of self,” which can lead to more therapeutic disclosure and a reduced sense of shame when discussing stigmatized mental illnesses. Social media thus affords people in need of support access to broad networks of similar others whom they would not otherwise have been able to reach.

Beyond broad suicide risk surveillance and online social support, the future of AI holds additional promise for mental health support. As researchers and practitioners advance from simple AI, such as the question answering systems behind Alexa and Siri, to state-of-the-art AI such as OpenAI’s GPT-3 deep learning language model, we will begin to see systems that provide culturally competent, relevant support that is available at all times. Our use of technology creates data trails: online searches, time spent on social media, time spent streaming certain videos, locations tracked through smartphone GPS, and physical health tracked through wearable devices, for example. We may see these data trails integrated as private companies collaborate to share mental and physical health-related data about us. Ultimately, as technology continues to evolve and become increasingly integrated into our lives, we must recognize not only its disadvantages but also its potential to improve our lives and support those who are most in need.

References

Auxier, B., & Anderson, M. (2021). Social media use in 2021. Pew Research Center. Retrieved from https://www.pewresearch.org/internet/2021/04/07/social-media-use-in-2021/

Turner, J. W., Grube, J. A., & Meyers, J. (2001). Developing an optimal match within online communities: An exploration of CMC support communities and traditional support. Journal of Communication, 51(2), 231-251.

Wright, K. B., & Bell, S. B. (2003). Health-related support groups on the Internet: Linking empirical findings to social support and computer-mediated communication theory. Journal of Health Psychology, 8(1), 39-54.