Visibility comes with consequences, especially for people who belong to a marginalized group. Engaging in activism on social media requires creators—along with their content—to be highly visible to a broad audience, which can open them up to a wide range of risks and harms. Most of our participants had experienced harassment and other negative interactions that they perceived as going hand-in-hand with online activism. As their popularity among viewers increased, negative interactions increased as well, as explained by P11: “If you have a platform that brings in viewers outside of your follower base on a larger level... it’s a numbers game of getting more negativity.”
Harassment took a toll on participants’ well-being. When talking about doing activism work on social media, P1 said, “It has changed me a lot. It’s very stressful. At one point, I actually was put in the hospital because of the stress over being attacked like this by a lot of people.” Despite the enormous stress of these attacks, creators felt it was important to continue their work. As P6 said, “If people [...] want to try and ruin my activism work, then if I stopped doing my activism work, that’s going to give them what they want. And I can’t do that.”
As a result, many creators faced a tension between needing to be visible to spread their message and shielding themselves from harassment. In this section, we discuss the types of harassment and related risks faced by disabled creators and how they respond to these risks.
4.3.1 Types of Harassment and Negative Interactions.
While many individuals—both marginalized and not—experience online harassment, participants experienced three types of harassment that specifically centered on disability: (1) invalidating, ignorant, and hateful messages about disability; (2) sexual harassment and fetishization of disability; and (3) targeted attacks intended to suppress disability content.
1) Invalidation, Ignorance, and Hate. Participants received a slew of negative messages from their audience that invalidated their disabilities and expressed ignorance and hate—a phenomenon that some attributed to “the toxic atmosphere of social media” (P2).
While disabled people often receive messages that invalidate their disabilities on social media [53], the very effort disabled activists put into making their disabilities highly visible to a broad audience could also be used against them to further invalidate their disabilities and to question their motivations. For example, P8 talked about some of the messages they had received:
“They say that I’m faking it for the likes, for the follows. I’ve had people tell me, ‘You’re faking your disability,’ ‘You’re only doing it for followers,’ ‘You’re only doing it for likes.’ ”
Participants also received extremely hateful messages, including death threats. Harassment was so common among disabled creators that P18 pointed out how normalized it had become, saying, “Do you remember a time when it was a big deal to get a death threat? That was not a normal thing. And now I feel like it’s just so normalized.”
2) Sexual Harassment and the Fetishization of Disability. Disabled creators also had to contend with sexual harassment, including the fetishization of disability. Several participants spoke about their experiences with “devotees”—people who fetishize disabled people and seek them out online. This disability-focused sexual harassment took two main forms. In many cases, participants received explicit content in direct messages or comments. For example, P9 described the lengths to which people go to sexually harass creators, saying, “I get more creepy DMs from people I don’t know, so those are just... Yeah, people are weird. People are really weird...I got...a voice DM of [someone] basically jacking off.” Others had discovered that their content had been reposted on accounts dedicated to the fetishization of specific disabilities, as with P13: “Acrotomophilia, which is sexualizing amputees... I see my stuff appear in profiles of amputee models [and] it’s just like, is this for awareness and advocacy or is this for sexualizing things?”
Participants had to take extra care to avoid such interactions or mitigate these risks. Sexual harassment could take many guises and was thus hard to avoid. For example, P11 described having an innocuous conversation with a follower that unexpectedly turned negative: “I’ve actually been totally catfished by somebody like that who was talking to me and I thought they were normal and it turns out they were just disability obsessed [with] the wheelchair.”
3) Technical or Coordinated Attacks Targeting Disability. In addition to harassment in the form of direct messages and comments on their content, participants also had to contend with technical or coordinated attacks.
Individual users could suppress disabled creators by reporting their accounts to shut them down. In this way, bad actors were able to leverage platforms’ own features in order to enact a chilling effect on disabled creators, as in the case of P19:
“I have been Facebook banned for asking someone to stop repeating racist comments. That individual claimed I was harassing them by saying their comments were racist—which they were—and I was banned for three days from Facebook while the racist commenter was able to use the service.”
In addition to receiving account bans, participants also reported having their content removed for inaccurate reasons, as with P12: “There was a video where I was talking about ableist experiences and they took it down for harassment.”
Disabled creators also experienced coordinated attacks where people banded together to suppress disability-related content online. For example, P15 described a community on Reddit that is organized around suppressing disabled creators on TikTok and other platforms, saying:
“They’re from Reddit. They find a creator, they put their name in, and then they go over and they spam report their accounts until they get banned. They’re specifically targeting disabled creators, and mass reporting us.”
4.3.2 Responses to Harassment and Associated Costs.
Our interviews revealed that disabled content creators responded to harassment in a variety of ways, and that while these responses could help mitigate harassment, they also carried costs. In this section, we discuss six main responses to harassment and the costs they imposed on creators’ visibility, content, and well-being.
1) Educate and spread awareness. Some creators used the harassment they received as an opportunity to educate people about the prejudice and ignorance faced by disabled people. In many cases, this strategy meant choosing not to delete negative comments in order to make ableism and hate more visible, as explained by P11:
“I want people to see those comments. I want people to see when somebody says something that isn’t kind. And it’s not because I want them to believe it, but it’s because I want them to know it’s there. And I don’t want people to think that we’re in a world where [...] everyone is accepting people with disabilities because we’re not. And the only way we move forward is by creating awareness and creating education.”
Some participants also addressed their harassers directly to educate them. Doing such educational work in the face of harassment often required participants to put aside their personal feelings; for example, P12 said, “The biggest thing I struggle with is how to respond because there’s a part of me that is like, ‘I want to demolish you. I want to rip you to shreds.’” However, she continued by explaining why she would set aside these feelings to further her advocacy:
“We already in society do such a terrible job educating kids on disability. And for some kids, maybe seeing one of my videos is genuinely the first time they’ve heard about invisible illness or ambulatory wheelchair users. And if they ask a question that has good intentions but is poorly worded, and [I reply] back and shit on them, then they’re not going to have a very nice attitude towards people with disabilities.”
Many participants also wanted the focus to remain on educating people about disability issues rather than protecting themselves. It was common for participants’ followers to defend them on social media, and participants took steps to ensure their followers would not attack the harassers, as explained by P17:
“If I reply to [a harassing comment], sometimes my followers will go to that person’s page and harass them or bully them. So I always say ‘don’t send any hate to this person’, or I’ll try to block out their username so they can’t go to that person and do the same things that they were doing to me because that doesn’t solve anything. I share what the comment is to show, ‘Yes, this is something that I actually get, but also I want to remind you guys this is an educational moment.’”
2) Alter content strategy. Participants also combated harassment by altering their content—either by including content that would deter harassment, or by excluding content that would be more likely to receive harassment.
Participants found that they could deter harassment by including specific types of content. For example, since people with invisible disabilities often receive comments that question their disabilities, finding ways to make those disabilities more visible could help avoid harassment. The same tactic could be employed by people whose visible disabilities might not appear in their content. For example, P15, a wheelchair user, often chose camera angles that showed her wheelchair in her TikTok videos, saying, “I get more invalidation when they can’t see my wheelchair.”
Alternatively, creators also excluded content that seemed more likely to receive harassment—for example, they chose to not talk about controversial topics. Several participants were selective about the content they shared online more broadly to avoid negative interactions: “I self-censor a lot. I take a lot of care about what I post, out of fear of what people will think and say, basically” (P2).
3) Alter algorithm strategy. Participants also drew a connection between the harassment they faced and the algorithms underlying social media platforms. This was particularly true in the case of TikTok, which makes hyper-personalized content recommendations to users based on inferences about their preferences [58]. Participants theorized that they received more harassment when the algorithm served their content to a broad audience that may not be receptive to marginalized identities, including those along the lines of disability, race, and sexual orientation. In these cases, creators would attempt to get back to the “right side of TikTok”—i.e., to a more receptive audience. Based on the theory that the algorithm directs content toward people who are likely to engage with it, participants would post video appeals asking pro-disability viewers to like or comment on their content in order to redirect the algorithm. After experiencing a lot of harassment from a viral video, P4 described their attempt to get back to the “right side of TikTok”—in their case, users who are either members or allies of multiple marginalized communities—saying:
“That was the video that had gotten on the wrong side of TikTok. And I had seen so many other people making ‘I got on the wrong side of TikTok, please help’ videos that I figured that well, can’t hurt. Lo and behold, I put that up and I got back on to the right side of queer TikTok, and [specific disability] TikTok, and [religious] TikTok within like 24 hours. [...] Yes, it’s effective. I don’t know why, but it is.”
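This folk theory can be expressed as a simple engagement-feedback loop. The sketch below is a hypothetical illustration of the mechanism participants described, not TikTok’s actual recommender; the segment names and weights are entirely our own assumptions. It shows how a recommender that scores audience segments by their recent engagement would, after a burst of likes and comments from receptive communities, route a creator’s next video back toward those communities.

```python
from collections import defaultdict

# Toy model of the folk theory participants described: a recommender
# routes a creator's next video toward whichever audience segments
# engaged most with their recent videos. Segment names and weights
# are hypothetical illustrations, not TikTok's real system.

affinity = defaultdict(float)  # audience segment -> engagement score

def record_engagement(segment: str, likes: int, comments: int) -> None:
    # Comments are weighted more heavily than likes (an assumption).
    affinity[segment] += likes + 3 * comments

def next_audience(top_k: int = 2) -> list[str]:
    # Serve the next video to the highest-affinity segments.
    return sorted(affinity, key=affinity.get, reverse=True)[:top_k]

# A viral video lands in front of a broad, unreceptive audience...
record_engagement("broad/unreceptive", likes=200, comments=300)

# ...so the creator posts an appeal video, and receptive communities
# respond with a burst of likes and comments.
record_engagement("queer TikTok", likes=1500, comments=400)
record_engagement("disability TikTok", likes=1200, comments=350)

print(next_audience())  # ['queer TikTok', 'disability TikTok']
```

Under this toy model, the appeal video works because concentrated engagement from receptive segments quickly outweighs the engagement that initially routed the content to a hostile audience, consistent with P4’s observation that the tactic was effective.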
Based on the theory that the TikTok algorithm boosts content that receives a lot of engagement, some participants talked about a ‘silver lining’—that negative comments could contribute to their content going viral, even if for the wrong reasons. These creators used humor to deal with harassment; for example, P8 described with a touch of irony how she responded to people harassing her:
“My comments back to them are like, ‘Thanks for the engagement. It’s only giving me more views, it’s only giving me more numbers, so hey. Thank you.’”
In addition to having their content boosted through such engagement, some creators also found that higher engagement could earn them more money from the platform:
“I’ve noticed when people comment a lot, it sometimes will give you more money. I don’t know if that’s true, but I noticed that one day, because I had a video where a lot of people were commenting and getting into these fights with each other. And I was like, ‘Oh whoa, this is a lot of money. This is insane.’ I was like, ‘I didn’t know that my haters would really be financing me like this. But thank you.’” (P18)
However, responding to the algorithm’s role in harassment was a costly endeavor. Even if participants gained additional visibility through harassment, they had to experience extremely unpleasant interactions, which took a toll on their well-being. Attempting to get on the ‘right’ side of TikTok also required additional labor on top of managing the existing challenges in gaining visibility as detailed in Section 4.2.
4) Control comments and feedback. Participants also mitigated harassment by controlling people’s feedback on their content. They did this in a variety of ways, including using automatic keyword filters, manually approving comments, deleting comments, or turning off comments entirely; each of these strategies also had associated costs.
On many platforms, people can use filters to block harassment before it appears, though this strategy requires developing a list of potential bullying keywords to filter, along with their many variations. On TikTok, creators can instead manually approve each comment before it becomes visible to the public; however, this is even more time-intensive, and it does not prevent the creator from experiencing the harassment in the first place. P4 described how they chose which strategy to employ:
“I have my comments set to filter all comments. So nothing goes up without me approving it. I’m still having to see them. It’s not the best solution, but it’s the only solution TikTok currently gives us. Some people use comment keyword filters and that just filters out comments with a specific keyword in it, but I found that was much more work than just filtering all comments.”
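The tradeoff P4 describes can be sketched as two moderation modes. The following is a minimal illustration under our own assumptions (the keyword list and function names are hypothetical, not any platform’s actual API): keyword filtering automatically hides comments that match a denylist but misses novel variants, while filter-all holds every comment for manual approval, so nothing is published unreviewed but the creator still reads each comment.

```python
# Hypothetical sketch of the two comment-moderation modes P4 compares;
# the keyword list and function names are illustrative, not a real API.

BLOCKED = {"faker", "faking"}  # creator must keep adding new variants

def keyword_mode(comment: str) -> bool:
    # Auto-publish unless the comment contains a blocked keyword.
    # Cheap for the creator, but misspelled variants slip through.
    return not any(word in BLOCKED for word in comment.lower().split())

def filter_all_mode(comments: list[str], approve) -> list[str]:
    # Hold everything for manual approval: nothing goes up unreviewed,
    # but the creator still sees every harassing comment.
    return [c for c in comments if approve(c)]

comments = ["love this video", "you're faking it for likes"]
print([c for c in comments if keyword_mode(c)])        # ['love this video']
print(filter_all_mode(comments, lambda c: "faking" not in c))
```

In this sketch, both modes publish the same comments; the difference is where the labor and exposure fall, which is exactly the tradeoff P4 weighed.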
However, deleting or turning off comments had an additional consequence—creators theorized that these strategies were penalized by the algorithm, and ultimately lowered their visibility. P15 explained this theory by discussing the analytics on a TikTok video where she had turned off the comments:
“I’ve only gotten 388 likes on that video, and I was getting thousands per hour. Thousands. So I can already tell you by turning those comments off today, it’s already impacted [the number of likes].”
This algorithmic penalty for self-protection affected creators in one of two ways. Some creators decided to delete or turn off comments despite the cost to their visibility, prioritizing their well-being. When we asked P15 to explain her rationale for turning off her comments despite the visibility cost she described in the quote above, she said:
“One of the reasons why I turned off my comments, and started deleting and getting rid of them, [was] because somebody was just like, ‘it’s not reaching the people you want to reach anyways.’ [...] At this point my sanity is way more important than how many views I get.”
Alternatively, some creators chose not to delete or filter their comments in order to avoid being penalized with lowered visibility. While this decision prioritized their activism, it also harmed their well-being and left them vulnerable to further harassment.
5) Control audience: blocking, restrictions, private mode. In addition to controlling comments, content creators can also control their audience by limiting who can see their content. Participants reported blocking accounts, restricting specific accounts, and using private mode to limit their visibility more broadly.
When participants blocked accounts that harassed them, they found that harassers could circumvent this strategy by opening a new account. P17 explained how an Instagram feature that restricts rather than blocks users was helpful in avoiding harassment, though this feature did not exist on other platforms they used:
“Instagram has a restrict feature [so] I just don’t have to see their comments. Nobody has to see their comments, but they think that everybody can see their comments. The restrict feature is really great because it used to be if you would block them, somebody could just create a new account and go continue to harass somebody. But with the restrict feature, they think they still have full control, but you’ve taken the control away.”
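The visibility logic P17 describes can be sketched in a few lines. This is a hypothetical illustration of how a restrict-style feature might work, not Instagram’s actual implementation, and all names are our own: a restricted user’s comment remains visible to its author, so they do not realize they have been muted, while everyone else sees nothing unless the creator approves the comment.

```python
# Hypothetical sketch of restrict-style comment visibility, as P17
# describes it; names are illustrative, not Instagram's actual code.

from dataclasses import dataclass

@dataclass
class Comment:
    author: str
    text: str
    approved: bool = False  # creator may later approve the comment

restricted = {"harasser123"}  # accounts the creator has restricted

def can_see(comment: Comment, viewer: str) -> bool:
    if comment.author not in restricted or comment.approved:
        return True                      # ordinary comment: public
    return viewer == comment.author      # restricted: author-only

c = Comment(author="harasser123", text="you're faking it")
print(can_see(c, "harasser123"))  # True:  harasser believes it's public
print(can_see(c, "follower42"))   # False: hidden from everyone else
```

Because the restricted account still sees its own comments as public, there is no signal prompting it to evade the block by creating a new account, which is the advantage P17 highlights over blocking.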
Some social media platforms allow content creators to set their accounts to private mode so that only their approved followers can view their content; however, this also comes at the cost of their visibility. The decision to stay private or public was a complicated one, as explained by P9:
“The debate I have is if I make myself private, then I’m not going to get as many followers. I used to be private until I really started saying, ‘You know what, I’m going to do this.’ Then I made myself public again, so I kind of go back and forth, mainly to avoid the creeps.”
In addition to taking steps within their control to limit their audience—such as blocking and restricting accounts—creators could also report accounts so that the social media platforms would take action and penalize harassers. However, reporting was not always successful; P12 described the frustrations around reporting accounts, saying:
“There’s an account [...] that has been harassing me and other chronically ill folks on TikTok for like weeks now and I can’t tell you how many people have reported them and they’re still there. They’re still there.”
6) Take no action. A subset of participants took no action to mitigate or avoid harassment, even if they were bothered by it. Despite the extreme emotional toll harassment had taken on her, P1 said, “My way of dealing with it is just to ignore it.” Some participants viewed harassment as an inevitable consequence of engaging in activism, and others thought that pleasing everyone with one’s content was an impractical goal, as stated by P10: “I can’t cater to everyone and it’s exhausting. So I just do me and I know how to do me, best.”