Content warning: this article discusses self-harm, ableism, racism, and eating disorders, which could be triggering for some readers.
Last year I was looking for medical advice online, which most medical professionals would deem a bad idea, but which is common practice within rare disease and “invisible” illness communities. When a new symptom appears out of the blue, people turn to forums, Instagram pages, and internet communities to avoid a trip to accident and emergency for the third time that month. It is a risk chronically ill people often take to put off medical intervention, out of fear of acquiring further medical trauma. For me at least, crowdsourcing advice often yields better results than seeing doctors who believe I am wasting their time.
What started as seeking help from others who had experienced debilitating pain and worrying symptoms led me into a dark pattern of late-night meltdowns after hours of online scrolling. I now know that this repeated desire to make myself feel awful was a form of self-harm — just digitally.
As I trawled the internet in search of others who were experiencing periodic numbness of their hands and leg, I fell into Reddit. The site plays host to thousands of micro-communities talking about everything from electric bikes to raising picky eaters, as well as large gatherings of people supporting each other through chronic illness and disability. This time, however, I didn’t find myself in communion with other sick people. Instead, I was exposed to R/illnessfakers, a place where hundreds of people have made it their job to prove that sick people, just like me, are always lying.
I’ve known about R/illnessfakers for a while — last summer the BBC made an ill-thought-out documentary on the group that led to serious harm within the disabled community, including inappropriate calls to the police and trolling across disabled people’s social media. Matt Klein, Foresight Lead at Reddit, announced last May that R/illnessfakers was “one of the most viewed communities this week across health and wellness”, with close to 130,000 members. I knew the forum existed before I watched the show, but I never logged on and read it myself; I thought I knew better.
However, once I fell down the toxic internet rabbit hole of R/illnessfakers, I read every single post. I spent hours consuming content created exclusively by people trying to prove the illnesses I have are not real. Posters on the page regularly mock sick people, acting as if they are true crime documentary researchers rather than heartless, nosy busybodies. I was distraught; I felt my anxiety rise in real time, but I couldn’t stop.
To be clear, I was not the subject of scrutiny in this forum, but my illnesses were. There were threads relating to what the group terms “illness influencers”, defined by Klein as “influencers believed to be manufacturing chronic conditions for clout”, although that is a stretch considering how little material gain there is to be found as a disabled content creator compared to those flogging fast fashion 24/7. I returned to that forum week after week, until I asked my boyfriend to block access on all of my devices. Why was I doing this to myself? I felt obsessed with knowing how little these people thought of disability, and how far they would take an invasion of another’s privacy whilst claiming to be doing something good.
What is digital self-harm? Could my unstoppable desire to keep reading, whilst ruining my mental health, be considered self-harm? Digital self-harm was introduced into academic research by the founders of the Cyberbullying Research Center, Justin Patchin and Sameer Hinduja, who define the term as “a practice of anonymous online posting, sending, or otherwise sharing of hurtful content about oneself”. It was originally linked to a trend among teens whereby individuals would “self-troll” through burner or anonymous accounts they made themselves.
However, some newer sources describe digital self-harm more generally as “intentionally seeking out harmful content about oneself”. This definition certainly applies to me and my habits. “Oneself” can be read as referring to you as an individual, but it could be argued that it also pertains to content relating to your identity or marginalised experience — in my case, disability.
David McLaughlan is a clinical psychiatrist at Priory Hospital Roehampton, with extensive experience of addiction as well as a deep interest in our digital health. He spoke to Mashable about the prevalence of destructive online behaviours like my own: “Digital self-harm is probably not recognised as often as it should be. The act itself is often very secretive and there are no scars to see afterwards. As clinicians, the focus of our attention is often on emotional abuse coming from others, rather than individuals emotionally abusing themselves.”
To find out whether I was alone, I started asking others if they behaved in similar ways. It started with my friends, and I got a lot of nodding heads in response, particularly from other women in my life. Then I ventured back online.
I spoke to Naomi, a young Black woman living in Nigeria, about her experiences of digital self-harm on TikTok. Naomi is in recovery from an eating disorder, but during some of the lowest times in her illness, she sought out people to validate her negative thoughts: “I would actively trigger myself by going on TikTok and reading comments that trolls would make about women and their bodies. There was this guy that said that if Black people weren’t so lazy and delinquent police wouldn’t kill us.” Naomi spent a lot of time reading all of the man’s hateful material, and was terrified when she started to agree: “After a while I stopped mentally condemning it all and I just started trying to rationalise his beliefs. It was like I was brainwashing myself.”
It is hard to talk about negative online behaviours like digital self-harm without considering the conditions that put us all online so often. Doomscrolling is now a well-recognised part of online consumption, referring to spending extensive time online reading negative news stories, videos, or general content surrounding a catastrophe or disaster. It is attached to the act of digital self-harm, as the more content we consume, the more we are fed, whether we are seeking it out to hurt ourselves or not. David McLaughlan reminds us, “Social media apps are designed to get you hooked. We often pay more attention to bad news and negativity (known as negativity bias), which in turn will drive the algorithm on social media to present more content of that nature to you.”
How can social media reinforce self-hatred?

Even when searching for content that you think may be safe, it’s possible to stumble into previously harmful behaviour patterns. Naomi says when she was using TikTok during her recovery, she still managed to find sources that would harm her: “I saw videos of people who have recovered from their eating disorders and have gained a significant amount of weight. They talk about how they are so much happier now and the whole comment section is filled with people saying they’re lying or ‘good for you, but I’d rather die than weigh 80 kilos.'”
Lex, an online content creator who dispels myths around skin conditions and shares her journey with Rosacea, has a similar story to tell. Talking to Mashable, she says: “I used to read comments on articles about me even though I knew there would be cruel comments that would make me cry.”
Although to onlookers it seems like an easy solution would be “just don’t look,” digital self-harm manifests for me, and for many others I spoke to, as a compulsion. Lex says, “I couldn’t stop thinking about it. It was almost like I was looking for confirmation that other people were also thinking the worst things I also thought about myself.” I can’t explain it better myself. I kept reading that forum because those people said the things that the worst part of my brain told me too. Finding someone (or 100-plus people) prepared to say the things internalised ableism had already said to me was just fuelling my own self-hatred.
McLaughlan notes that these actions are invisible to many who might be trying to help us: “Most psychiatrists would immediately recognise compulsive physical behaviours which cause harm, such as skin picking (excoriation) or hair pulling (trichotillomania); however, compulsive digital behaviours which cause harm are harder to spot. It feels like, as a society, we are starting to wake up to the potential drawbacks of these digital platforms.”
Every time a doctor told me my tests were clear, I thought back to those forum dwellers who said my disease wasn’t real. Every time I cancelled on a friend because of a chronic pain flare, I remembered the commentators who called disabled people self-centred, avoiding responsibility. It was confirmation bias: seeking out things I already thought to be true.
What is being done to protect vulnerable internet users?

There is no easy solution. It is not in the interests of tech giants like TikTok and Instagram to police their content with vigour; many companies do follow legal guidance that forces them to remove illegal content explicitly featuring certain topics, but critics suggest it is not stringent enough. The Online Safety Bill, legislation which continues to be debated in the UK parliament, has recently removed its clauses that would fine companies who fail to remove technically legal but harmful content. At present, social media giants like Meta claim to remove all graphic images of self-harm, and also do not allow non-graphic images of self-harm, such as healed scars, to appear in searches or hashtags. However, the line between content that helps people in mental distress and posts that put them at further risk is difficult to draw. The amended Online Safety Bill now proposes to give users the ability to filter out such content, including pro-eating disorder and misogynistic posts, themselves. If the bill passes in full, the onus will still remain on the user to police their own content; given the mental distress of the people seeking out said posts in the first place, that does not seem an adequate solution.
Presently, digital self-harm must be deconstructed by the person who is struggling. Lex cut herself off from article comment sections through the help of friends: “I have asked friends to read comment sections and summarise the negative and positive responses to try to ‘scratch the itch’ without actually taking in the specific cruelty. It helps a little.”
What can you do to stop your own digital self-harm?

If you nodded your head as you read this piece, and have found yourself stuck in toxic internet places, a cold-turkey quitting process might be helpful to explore. Enlisting a trusted friend or partner to block sites or apps on your devices is useful, and extensions for your phone or web browser, like Cold Turkey Blocker, can also be great. Limiting time spent on social media using your phone’s screen time controls can ensure you are not left scrolling in the middle of the night, or at a time when you are at your most vulnerable.
The algorithm works against us, but it is possible to recalibrate who you follow and redirect your online viewing, so the algorithm suggests less triggering content and more joyful moments. Moreover, reaching out to a professional therapist to find the root of your desire to hurt yourself is key to stopping the cycle. The internet can be a joyful place to find like-minded friends and watch silly animal videos, but if it isn’t making you feel good, it is time to take a break.
As for me, the habit seems to rear its head at times when I am already low. I haven’t succeeded in breaking the cycle yet, but writing this feels like a good start; naming it diminishes some of the shame. As David McLaughlan told me, “Keeping self-harm behaviours secret only serves to perpetuate them.”
If you’re feeling suicidal or experiencing a mental health crisis, please talk to somebody. If you’re in the U.S., text “START” to Crisis Text Line at 741-741. You can reach the 988 Suicide and Crisis Lifeline at 988; the Trans Lifeline at 877-565-8860; or the Trevor Project at 866-488-7386. Contact the NAMI HelpLine at 1-800-950-NAMI, Monday through Friday from 10:00 a.m. – 10:00 p.m. ET, or email [email protected]. If you don’t like the phone, consider using the 988 Suicide and Crisis Lifeline Chat at crisischat.org. Here is a list of international resources. If you’re in the UK, call the Samaritans on 116 123 or contact Shout, a 24/7 free mental health service in the UK (text SHOUT to 85258).