While I agree with pulling the app to address privacy concerns, I was very surprised by the following quote that accompanied the NY Times article:
“If someone tweets ‘I’m going to kill myself,’ you can’t just jump in,” said Christophe Giraud-Carrier, a computer scientist at Brigham Young University who studies the role of social media in health surveillance. “There are all these psychological factors that come into play that may push someone over the edge.”
While I acknowledge that suicide intervention involves complex factors, I worry that this statement discourages intervening online (or developing applications that could facilitate such intervention). A lot of suicide prevention work has focused on training community members (which I would argue includes your online community) to be active bystanders who intervene when they see someone at risk. For example, gatekeeper trainings are a popular strategy that helps participants develop the knowledge, attitudes, and skills necessary to identify those at risk for suicide, determine levels of risk, and make referrals when necessary. The National Suicide Prevention Lifeline provides guidance for helping online when someone might be suicidal. It also links to safety teams at each social media site, including Twitter.
From my review of the research and relevant articles, there seems to be an emerging line between using Twitter to gather anonymous, aggregate mental illness data and using it to identify and intervene with individual users. For example, researchers at Johns Hopkins University have had a very positive response to their research using Twitter to collect new data on post-traumatic stress disorder, depression, bipolar disorder, and seasonal affective disorder. The scholars emphasize that their findings do not disclose the names of people who publicly tweeted about their disorders. Their goal is to share timely prevalence data with treatment providers and public health officials.
Tell Me What You Think:
- Do these two stories represent the boundaries of using Twitter to search for warning signs or symptoms of mental illness? In other words, is gathering anonymous, aggregate data the only way to use Twitter ethically and safely?
- Or is there a way to overcome the privacy concerns to empower/enable/encourage users to intervene with their fellow users if necessary?