
Suicide Prevention in Social Media: Creating Safe Spaces for Children and Adolescents


Social media provides an opportunity to identify mental health concerns or crises in children and adolescents. What can be done to protect youth and help create safe spaces for connection and education?

Clinicians, researchers, young people and their families are concerned about the impact of social media on youth mental health. Social media use is ubiquitous among youth, and the rise in the suicide rate in this group over the past two decades is alarming. According to the U.S. Surgeon General, up to 95% of those ages 13–17 report using a social media platform, with more than a third saying they use social media “almost constantly.” Although age 13 is commonly the minimum age required to use social media platforms in the U.S., nearly 40% of children ages 8–12 use these platforms. The Centers for Disease Control and Prevention reported that the suicide rate among people ages 10–24 remained stable from 2001 to 2007 (the year the iPhone was released), then increased 62% between 2007 and 2021, from 6.8 to 11.0 deaths per 100,000 people. Instagram launched in 2010, Snapchat in 2011 and TikTok in 2016. The advent of these platforms coincides with the national increase in youth mental health conditions.

Holly Wilcox

Efforts to regulate social media platforms, whether through laws ensuring privacy and protections for minors online or by holding the platforms accountable, have so far been few and uncoordinated. Recently, however, more than 30 states collectively sued Meta, which owns Facebook, Instagram, WhatsApp and Messenger, for allegedly violating consumer protection laws in its treatment of minor users. Yet social media is not all bad: it also provides opportunities for public health interventions. These platforms offer a crucial space to find and connect with peers, especially for youth who may feel isolated.

Holly Wilcox and Paul Nestadt are co-directors of the Johns Hopkins Suicide Prevention Work Group, which includes members from across the Johns Hopkins enterprise. The work group, which aspires to become a formal center, is the only group of its kind in the United States focused on public health approaches to preventing suicide. It serves as a national resource in the field of suicidology, with robust suicide prevention research and training opportunities.

Dr. Nestadt, how can social media contribute to suicide prevention efforts?

Nestadt: Working with my colleagues in critical care medicine (Katherine Hoops) and computer science (Mark Dredze), we recently made a case in Lancet Psychiatry for social media standards on suicide prevention. We argue for prosocial, lifesaving education and for the removal of harmful content on social media. This would parallel the existing journalistic guidelines for suicide coverage and messaging, which aim to minimize contagion effects and triggers, and to normalize and point the way toward getting help. Platforms and influencers can build on those guidelines by providing education, normalizing healthy discussions about mental health, and deleting harmful content and cyberbullying.

Dr. Wilcox, could you walk us through your recent paper, “A Linguistic Analysis of Instagram Captions Between Adolescent Suicide Decedents and Living Controls”?

Paul Nestadt

Wilcox: In the paper, lead author Alex Walker used obituaries and news reports to identify 89 teenagers who died by suicide, along with 89 matched living teenagers as controls. A linguistic and content analysis of their Instagram posts revealed several differences between the two groups. Adolescents who died by suicide used more words per sentence and made more references to sadness, males, drives and leisure, while they used fewer verbs and made fewer references to the pronoun “they,” affiliation, achievement and power. In terms of content, none of the youth who died by suicide posted text or images explicitly describing their plans for suicide, although there were warning signs, such as posting lyrics from a song about suicide, expressing feeling unworthy of life, mentioning that life is too short, and posting an apology on the day of death. However, social media platforms use algorithms to identify and delete potentially harmful posts, so we do not know whether some of those who died had more explicit posts removed.
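For readers curious about the mechanics, the kind of comparison described above can be illustrated with a short sketch. The category lexicon, word lists and captions below are hypothetical placeholders, not the study’s actual dictionary or data; this is only a minimal illustration of comparing word-category rates between two groups of captions.

```python
# Minimal sketch of a dictionary-based linguistic comparison between two
# groups of captions. LEXICON is a toy stand-in: the categories mirror those
# named in the study, but the word lists here are hypothetical.
from collections import Counter
from scipy import stats

LEXICON = {
    "sadness": {"sad", "cry", "alone", "hurt"},       # hypothetical entries
    "affiliation": {"friend", "we", "together"},      # hypothetical entries
}

def category_rates(caption):
    """Fraction of a caption's words that fall in each lexicon category."""
    words = caption.lower().split()
    counts = Counter()
    for word in words:
        for category, vocab in LEXICON.items():
            if word in vocab:
                counts[category] += 1
    total = max(len(words), 1)
    return {category: counts[category] / total for category in LEXICON}

def compare_groups(group_a, group_b, category):
    """Welch's t-test on per-caption rates of one category across two groups."""
    rates_a = [category_rates(c)[category] for c in group_a]
    rates_b = [category_rates(c)[category] for c in group_b]
    return stats.ttest_ind(rates_a, rates_b, equal_var=False)

# Toy usage: compare "sadness" rates between two tiny caption samples.
group_a = ["so sad and alone tonight", "everything feels heavy"]
group_b = ["fun day with a friend", "we did this together"]
print(compare_groups(group_a, group_b, "sadness"))
```

In practice, a validated dictionary covering categories such as sadness, affiliation, achievement and power, along with corrections for multiple comparisons, would be essential.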

How can these findings on language differences guide intervention efforts?

Wilcox: Social media platforms could identify users with risky posts and provide outreach from a trained crisis counselor. YouthLine’s Safe Social Spaces program involves counselors finding youth in crisis on several social media apps and privately messaging them to provide personalized support, as well as resources in their community. Another approach would be for school-based mental health and suicide prevention programs to include content reinforcing healthy social media use.
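As a rough illustration of the first idea, the sketch below flags posts for review by a human counselor rather than triggering any automated response. The phrase list and example posts are hypothetical placeholders; a real system would rely on a validated classifier with clinical oversight.

```python
# Minimal sketch of flagging posts for review by a trained crisis counselor.
# RISK_PHRASES is illustrative only; a deployed system would use a validated
# classifier, not a hand-written phrase list, plus human clinical oversight.
RISK_PHRASES = ["unworthy of life", "life is too short", "goodbye everyone"]

def flag_for_counselor_review(post_text):
    """Return True if a post should be queued for human counselor outreach."""
    text = post_text.lower()
    return any(phrase in text for phrase in RISK_PHRASES)

# Toy usage: build a review queue from a small batch of example posts.
posts = ["had a great day at practice", "i feel unworthy of life lately"]
review_queue = [p for p in posts if flag_for_counselor_review(p)]
print(review_queue)  # -> ['i feel unworthy of life lately']
```

The key design choice is that a flag routes the post to a person, mirroring the counselor-led outreach model described above.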

How has social media affected patient care?

Nestadt: Social media has become an important way in which we collect patient history. Parents bring printouts of social media posts to intake. We can encourage patients to monitor their own social media exposure, and we can discuss healthy use of social media and other media with them.

