In the ever-expanding world of controversial topics, online content moderation has only recently emerged as a relevant source of argument. Melissa King’s article, “Better Online Living through Content Moderation,” addresses precisely this issue and defends the opinion held by many that “Content control features — block and ignore functions, content/trigger warnings, blocklists and privacy options — are valuable to people who need to moderate their time online” (King). Her argument is broken into three main parts: the psychology behind online harassment, the legal side of content control features, and how a shift toward a more personal online experience is better.
To introduce the topic of how online harassment can be a real detriment to one’s mental health, she first confronts and dismantles the widely held belief that people being bullied and harassed online make the problem seem bigger than it really is, and that they can solve all their problems by simply ignoring the “trolls” and by not being so “sensitive.” She compares the ignorance behind this claim to the ignorance behind the belief that PTSD is a condition that affects only veterans and cannot affect people who experience online harassment. For many, “mean” words and threats are not enough for a person to get worked up about, and certainly not enough to cause any kind of stress disorder. Such a claim, King argues, ignores the vast diversity of the human psyche and how each person’s mental state can be affected differently. She continues by discrediting the solution that many people who hold these beliefs propose. Exposure therapy is a treatment in which a patient is slowly, in a controlled and calculated fashion, exposed to the source of their mental torment. As King puts it, “Exposure Therapy is not about having random internet strangers hurl insults and threats at someone with the hope they somehow come out more mentally durable” (King). The key to this kind of therapy is control, and control can hardly be said to exist on the internet.
The second phase of her argument deals with the legal repercussions that the implementation of content control features and tools has encountered. King’s prime example is blocklists, which have become the subject of litigation from various sources. Oftentimes these lawsuits come from individuals who have been flagged for online harassment, or from groups deemed to be hate groups. These sources claim in their complaints that blocklists are a form of defamation, and King provides several examples of this. In King’s opinion, however, the strongest and soundest arguments against blocklists “…come from people who do not harass or threaten other people, yet somehow see any defense by targets of harassment as being equal to the harassment itself” (King). These people see a fundamental problem with blocking out people’s words online just because the person blocking them doesn’t like what they say. What King does well is point out that this is much more than a simple difference of opinions: the harassment often gets so intense that victims can feel as though they are fearing for their lives.
The final component of King’s argument is that the decision to block out abusive sources on the internet should come down to personal choice. In her opinion, those who have never encountered a high level of harassment online use their own experience to argue that online harassment isn’t that big of a problem, and in doing so drown out every other point of view that exists on the matter. King states that this “one-size-fits-all” approach fails to encompass the diverse set of experiences that people have online. To back her argument, she uses the example of how women are frequently the target of online “trolls,” especially when they occupy or “tread” into areas often seen as male-dominated.