Will censoring graphic posts increase the stigma of self-harm?

Ethen Smith, Staff Writer

In February of this year, both Instagram and Facebook enacted a new policy banning posts that contain graphic imagery, including images of self-harm.  Implemented in response to Ian Russell’s statement (see above story), the policy is meant to ensure that users struggling with depression or other mental illnesses are not triggered or encouraged to take their own lives.

Although Instagram and Facebook had good intentions, both platforms received an onslaught of public backlash detailing numerous issues with the new rule, chief among them that blanket censorship of graphic posts will only increase the stigma of NSSID (non-suicidal self-injury disorder).  Though the AI that filters posts was presumed to target images of active self-harm, it takes down images of healed self-harm scars as well.

“It’s a real issue,” junior Laura Zamora said.  “[Images of self-harm] shouldn’t be hidden because it exists.  Instead of [censoring] it, use social media as a way to educate people on how to overcome it.”

Experts such as Seth Axelrod, a psychiatry professor at Yale who specializes in NSSID, suggest that actively censoring these scars may reinforce the stigma surrounding self-harm and deepen feelings of isolation and judgment among those with NSSID.  Those feelings can in turn lead to shame, pushing people to turn to self-harm once more.

However, some research suggests that online content regarding self-harm can prove detrimental.

A medical research article written by Stephen Lewis, head of the Department of Psychology at the University of Guelph, states that exposure to NSSID-related material on social media “may lead to reinforcement of the behavior… when this e-material is repeatedly accessed.”

The article also addresses the impact of the trigger warnings used on many social media platforms, most notably Instagram.  When a post contains graphic content, the image is blurred, and the user can choose whether to view it or keep scrolling.

Despite evidence that censoring such topics encourages users to substitute healthier online activities, some still believe that this censorship disregards the potential for positivity in posts now deemed “negative.”

“I think those kinds of things shouldn’t be filtered on social media since it’s a serious topic,” senior Alexia Menh said.  “Exposure [to NSSID-related posts] can help someone to get through to another person on seeking help.”

While Instagram’s intentions behind the censorship are noble, the complete removal of posts by users with NSSID can be seen as going too far.  Replacing total censorship with the preexisting trigger warning system could protect users by giving them the choice of what content they view while allowing those struggling with mental illness to continue using these platforms as an emotional outlet.