The Reddit thread that caught Philip Resnik’s attention posed a potentially life-saving question.
“Formerly suicidal Redditors, what’s something that kept you alive a little while longer and helped you to get through the dark times in your lives?” the post asked.
Seeing the thread as an opportunity to conduct research, Resnik, a computational linguist and University of Maryland professor, got to work. He and his wife — Dr. Rebecca Resnik, the former president of the Maryland Psychological Association — assembled a team of psychology experts and data scientists to analyze the responses to the Reddit thread.
The resulting research paper, titled “Reasons to Live Instead of Dying by Suicide: New Insights from a Computer-Assisted Content Analysis,” uses machine learning to discern why the Redditors chose to live.
The Resniks are just two of several researchers exploring suicide prevention through machine learning — a growing field that’s of special importance to young people amid an increase in teen suicide rates.
In 2023, one in five high school students seriously considered attempting suicide, according to a 2024 study by the Yale School of Medicine. The study also found that from 2007 to 2021, suicide deaths increased by over 62% among people ages 10 to 24.
Resnik hopes research like his can send one important message to teens: They’re not alone.
“It is very easy to not realize that what you’re experiencing is something that is almost certainly shared, no matter how specific to your situation,” Resnik said. “I strongly suspect, especially given the demographics of the Reddit population that we sample, that somebody who’s a [teen] right now is going to actually recognize a lot more in the experiences of the people who talked about them in this data set.”
Most suicide research focuses on models that predict whether a patient is at risk, factoring in past psychiatric disorders and demographic information like race, age and income.
But through his research, Resnik looked into a budding side of suicide prevention efforts — the experiences of those who got through the hard times.
“As I began looking into this, [I noticed] there is so much work on why people die, but relative to that, so little work on understanding what keeps people from dying by suicide,” Resnik said, “and yet that’s what the clinicians in the trenches are doing.”
A life-changing thread
The Reddit thread Resnik’s team studied did not just include young people. Reddit’s anonymity also means demographic factors like the age, race and income of its users remain largely unknown.
Still, given social media’s popularity with young people, youths were surely represented among the over 16,000 comments in the thread. Resnik’s team used software that grouped responses into four overarching themes to explain why those Reddit users chose to live:
- Concern for others: Some users said they wanted to avoid inflicting emotional pain or financial troubles on friends, loved ones and even pets.
- Sensory pleasures: Some users referenced possessions or media that made them happy or distracted them, ranging from new episodes of their favorite TV shows to favorite foods and even substances like marijuana.
- Positive foresight: Some users found hope through sources like family and friends, philosophy and faith, or adopted a “one day at a time” mentality.
- Negative valence: Some users channeled negative emotions like fear, spite or guilt to push themselves to live.
Resnik noted that any combination of those four factors can come into play for a person considering suicide.
“This [research] is providing an alternative lens through which to look at people’s experiences of being suicidal,” Resnik said. “You can think of this as the four themes being four different colors, and you can see what’s stronger and what’s weaker for any individual.”
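To make the “four colors” idea concrete, below is a minimal sketch of how per-comment theme strengths might be computed. The keyword lexicons and the `theme_strengths` function are illustrative assumptions for this article; the study itself used a more sophisticated computer-assisted content analysis, not this simple word-counting approach.

```python
import re
from collections import Counter

# Illustrative keyword lexicons for the four themes; these word lists are
# hypothetical and far cruder than the study's actual method.
THEMES = {
    "concern_for_others": {"family", "mom", "friends", "pet", "hurt"},
    "sensory_pleasures": {"show", "food", "music", "episode", "taste"},
    "positive_foresight": {"hope", "future", "faith", "tomorrow", "better"},
    "negative_valence": {"fear", "spite", "guilt", "afraid", "angry"},
}

def theme_strengths(comment: str) -> dict[str, float]:
    """Score how strongly each theme shows up in one comment,
    normalized so the four scores sum to 1 when any keyword matches."""
    tokens = Counter(re.findall(r"[a-z']+", comment.lower()))
    raw = {theme: sum(tokens[w] for w in words) for theme, words in THEMES.items()}
    total = sum(raw.values()) or 1  # avoid dividing by zero on no matches
    return {theme: count / total for theme, count in raw.items()}

print(theme_strengths(
    "I couldn't hurt my mom, and new episodes of my favorite show "
    "gave me hope for tomorrow."
))
```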
Reddit’s anonymity may have allowed users to be more forthright with their answers. For example, a user may avoid telling a mental health provider that drug use got them through a dark period of their life, but would anonymously mention it on social media, Resnik said.
Such candid responses were especially prominent in the “negative valence” category. While channeling negative emotions toward others may be harmful in the long term, the research suggests it can be beneficial in the short term, Resnik said.
Resnik emphasized his research should not discourage people from seeking professional mental health care. Still, he said his work offers an honest look into the mindset of someone experiencing suicidal ideation.
Katherine Schafer, a researcher on Resnik’s team, hopes these findings will provide a more nuanced look at why people experiencing suicidal thoughts stop before harming themselves.
“In a very unique way — and perhaps in an unfiltered way — this project allows clinicians an intimate view into the thoughts of people who have been really acutely at risk,” Schafer said.
Despite its insights, the model only provides a snapshot of a Reddit user’s experience. The research cannot verify whether the reasons that stopped suicide attempts worked in the long term, or whether other factors, such as therapy, also contributed.
Even with its limitations, Schafer said this data can provide a source of hope for those struggling with suicidal thoughts, given that it showcases 16,000 people who were on the same journey.
“When we think about what might cause suicide, there might be a feeling of loneliness or isolation, and just the sheer volume of content that came from people across the world could help people feel less alone in their dark moments,” Schafer said.
AI and suicide prevention
Resnik is not alone in turning to technology to save lives. Since the 2010s, researchers have been using machine learning to spot when someone might be suicidal.
This approach appears to work. A March 2024 study led by Dr. Alessandro Pigoni, an Italian psychiatrist, analyzed over 80 datasets and found that machine learning models performed best at predicting suicidal behaviors and attempts among patients with existing mental health struggles.
Emily Haroz, an associate professor at Johns Hopkins University, echoed these findings. She said most models are “trained” on patients’ mental health assessments through electronic health records.
“We’ve seen this idea that if we could identify people at risk, that means we can potentially prevent it,” Haroz said. “And the ways we’ve focused on identifying risk and the [patient’s] history has been like a clinician assessment.”
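As a rough illustration of what “training” on such records can look like, here is a minimal sketch of a risk classifier fit to synthetic tabular data. Every feature name and value below is an invented assumption for demonstration; none of it comes from real patient records or any study cited here.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
# Hypothetical features a record-based model might use (all synthetic).
X = np.column_stack([
    rng.integers(10, 80, n),   # age
    rng.integers(0, 2, n),     # prior psychiatric diagnosis (0/1)
    rng.integers(0, 10, n),    # emergency visits in the past year
    rng.normal(50, 15, n),     # screening questionnaire score
])
# Synthetic label loosely tied to the features, purely for demonstration.
y = (0.8 * X[:, 1] + 0.05 * X[:, 2] + 0.01 * X[:, 3]
     + rng.normal(0, 0.5, n)) > 1.2

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)
print("test AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```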
Haroz said Johns Hopkins researchers have recently explored using AI on social media platforms to both identify users at risk of suicidal behavior and intervene through direct messages.
With a vast amount of data to pull from, researchers can even pinpoint the areas of the country where suicide risk is highest.
Vishnu Kumar, an assistant professor in the Department of Industrial and Systems Engineering at Morgan State University, used data from the Centers for Disease Control and Prevention to create a suicide vulnerability index. The index, a scale from zero to one, identifies which U.S. counties are most vulnerable, with values closer to one indicating higher risk.
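In spirit, such an index can be as simple as min-max scaling of county-level figures, as in the sketch below. The county names and rates are hypothetical, and Kumar’s actual index presumably combines multiple CDC variables rather than a single rate.

```python
def vulnerability_index(rates: dict[str, float]) -> dict[str, float]:
    """Rescale county suicide rates to [0, 1]; values near 1 = most at risk."""
    lo, hi = min(rates.values()), max(rates.values())
    span = (hi - lo) or 1.0  # guard against identical rates
    return {county: (rate - lo) / span for county, rate in rates.items()}

# Hypothetical rates per 100,000 residents, for illustration only.
print(vulnerability_index({"County A": 8.2, "County B": 14.6, "County C": 21.9}))
# -> {'County A': 0.0, 'County B': ~0.47, 'County C': 1.0}
```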
Kumar emphasized these findings can prompt governments to steer resources to counties with higher suicide rates.
“Sometimes [mental health] resources get distributed equally to all the different counties, but then if you analyze it more in-depth, you will see that some counties need more attention,” Kumar said. “Instead of simply splitting the resources equally, I think it’s a good way to concentrate on those counties that need more help and then provide more resources there.”
In Maryland, the index shows residents of more densely populated areas such as Baltimore, Howard and Montgomery counties are more at risk, while rural areas like Kent and Somerset counties have a comparatively lower vulnerability.
Data-driven, prediction-based models ultimately provide the backbone for most machine learning research on suicide prevention. For researchers like Haroz, these models are supplements that can improve the work of psychiatrists and other clinicians.
“Keeping humans at the forefront of [research] and finding ways it can augment human decision-making or offload reliable tasks is great,” Haroz said.
A gap in suicide research
As machine learning models continue to develop, they often confront a key misperception in how some view suicide: that it’s just a mental health issue.
While some machine learning models predicting suicidal behavior rely on electronic health records, researchers believe labeling suicide as purely a mental health issue oversimplifies a complex topic. Craig Bryan, a psychologist at the Ohio State University College of Medicine who is cited in Resnik’s research, argues that outside factors such as decision-making skills, financial troubles and access to lethal weapons play just as significant a role as psychiatric disorders.
A 2024 study led by Mayyas Al-Remawi, a researcher at the University of Petra in Jordan, used machine learning models to examine the role of water contaminants and diets in suicidal ideation. The team found that increased consumption of alcohol, low-nutrient diets and chemicals like mercury in water contribute to higher suicide risks.
Meanwhile, Kumar’s study incorporated data such as race and income to determine suicide vulnerability, and Haroz cites chronic illnesses and childhood trauma as potential contributing factors.
This research differs from Resnik and Schafer’s in one significant way.
“[Our research is] more intervention-focused as opposed to a more risk-focused [model],” Schafer said.
Limitations and criticism
The machine learning revolution in suicide research comes with some risks.
Haroz said one concern is that doctors will treat suicide risk prediction models as official diagnoses rather than guidelines to help at-risk patients.
A 2020 article in the Journal of Medical Ethics by researchers Thomas Grote and Philipp Berens cites “peer disagreement,” the idea that two qualified doctors can offer different diagnoses, as a growing problem for machine learning in suicide prevention. The authors argued machine learning can strip nuance from medical opinions, as clinicians may come to rely solely on a model’s output as the “correct” answer.
Machine learning models can also flag a patient as at risk for suicide even when the patient does not believe they have a history of suicidal ideation, Haroz said. She hopes researchers will explore ways to guide these potentially difficult conversations with patients.
The authors of several scientific journal articles also criticize using electronic health records to “train” models, arguing the practice raises ethical questions about patient privacy and data security.
To address this, large research institutions like Johns Hopkins work to safeguard the data through measures such as institutional review boards that weigh the benefits and risks of research.
“We’re very careful about having a very secure web storage that’s up to legal standards and beyond,” Haroz said.
Beyond the research
Despite those concerns, supporters of machine learning research believe it can revolutionize the world of clinical care by increasing the focus on intervention for those at risk of suicide.
This intervention can extend far past a therapist’s office, Haroz said. Social media sites and even video game chat rooms — which are particularly popular with young people — are potential places where clinicians could monitor concerning behavior, as people may be more comfortable interacting on such platforms.
Kumar envisions a similar use of technology, where a machine learning model could “train” on an individual user’s behavior to detect unusual changes over time that may signal suicide risk.
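One minimal way to sketch that idea is a rolling z-score over a single user’s daily activity, flagging days that deviate sharply from the recent trailing window. The activity metric, window size and threshold below are all illustrative assumptions, not details of any system Kumar described.

```python
import statistics

def flag_unusual_days(daily_activity: list[float], window: int = 7,
                      threshold: float = 2.5) -> list[int]:
    """Return indices of days whose activity sits more than `threshold`
    standard deviations from the trailing window's mean."""
    flagged = []
    for i in range(window, len(daily_activity)):
        past = daily_activity[i - window:i]
        mean = statistics.mean(past)
        stdev = statistics.pstdev(past) or 1.0  # guard against zero spread
        if abs(daily_activity[i] - mean) / stdev > threshold:
            flagged.append(i)
    return flagged

# Hypothetical daily message counts; day 8 shows a sharp drop in activity.
activity = [20, 22, 19, 21, 23, 20, 22, 21, 2, 20, 21]
print(flag_unusual_days(activity))  # -> [8]
```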
As for Resnik, he hopes research like his exposes clinicians to the “lived experiences” of patients who have considered suicide.
Whether it’s prediction, analysis or intervention, Haroz hopes machine learning can help clinicians rethink how we view suicide prevention, especially among young people.
“No one should lose a child to suicide … this is an issue that crosses all people,” Haroz said. “Caring about suicide prevention is incumbent on all of us to help continue to save lives and save the suffering that happens to people around them.”