The World of Online Comics Is Overwhelmed by Hate

For the longest time, the Internet has been a place for comic artists to get their start. Comics such as xkcd and Homestuck inspired cult followings and fandoms. Artists such as Alex Woolfson of The Young Protectors fame proved that indie creators with marginalized identities could successfully express themselves, free of the barriers that plague more resource-intensive media such as TV shows, movies, and video games.

And yet, the Internet has always had a dark underbelly, where creators use their platforms to spread hatred and misinformation. While increased moderation may have curtailed some of the more blatant examples of hateful comics, comb through the accounts of white supremacist creators like Stonetoss and you will see discriminatory content pop up on Instagram, Twitter, and other platforms.

Unlike other content, however, the issue here is not a barrage of creators uploading hateful posts faster than social media sites can moderate them. Riding the line between what is tolerated and what is effective is something only a handful of artists can do. If social media platforms wanted to, they could limit the spread of the vast majority of hateful webcomics out there with a few clicks.

The decision not to remove these bad-faith actors speaks to a lack of desire to proactively curb hate speech in general.


Far-right webcomic artists occupy an interesting space on the Internet because many of their critiques are not always so easy to dismiss when you examine them closely. Creators such as Gary Varvel will often post criticisms of liberal institutions that, at first glance, seem valid. On January 28th, 2021, for example, Varvel posted an image of President Joe Biden dressed up like French Queen Marie Antoinette, parodying the infamous phrase “let them eat cake” and swapping it out with “let them learn to code.”

Source: Townhall

Without knowing where this came from, it would be easy to believe that a leftist made this comic. There exists a widespread criticism that the business-friendly nature of the Biden administration will prevent it from implementing reforms that truly lift most Americans out of poverty. Many people on the far-right understand that something is wrong with the current economic system, but they incorrectly identify the source. They don’t blame exploitative business practices or corrupt politicians, but rather blame people with marginalized identities for “ruining” society.

The comics they create reflect that repugnant worldview, even when what they produce is not immediately read as hateful. Often, alt-right creators will speak in code words or innuendo so their hatred can be branded as “acceptable.” This approach creates art that is only offensive to those aware of the history, serving as a rallying cry for those “in the know” and passive entertainment for everyone else.

For example, a Hedgewick comic released in July of 2020 on Instagram and other social media asks the reader why communists hate millionaires and billionaires, but not “International Bankers.” “Remember to always check your blind spots,” the description of its Instagram post reads. The phrase “International Banker” has a long history of association with Jewish people. There is a malicious stereotype, going all the way back to The Protocols of the Elders of Zion in the early 1900s, that Jewish people have secretly been in control of (or plotting to take over) all of the world’s major institutions.

This stereotype is rooted in the fact that for most of European history, Jews were excluded from professional guilds and denied the right to own land. This reality forced them into the financial realm, which at the time was seen as “dirty” (see usury). The stereotype of the Jewish moneylender or banker has been used in everything from Nazi propaganda to the current myth that Jews control the Federal Reserve.

When people employ this meme, they are tapping into a painful history, and sadly, the reference is easily deniable. The creator of Hedgewick comics, after all, isn’t telling readers directly that Jewish people are “the problem,” but by using the term International Bankers, that is the implication for anyone paying attention. And if you bother to explain the original context, you can easily be gaslit by the creator and his audience into thinking you are overreacting or misinterpreting their original intentions.

Even when the message is more direct, the veneer of irony and sarcasm allows the creator to imply truly despicable things while simultaneously sidestepping responsibility. For example, a Stonetoss comic released in August of 2020 on Instagram, Facebook, and Twitter shows a panel of a Black man holding up a bag, asking for reparations. The next panel is of an injured white man holding up a similar bag, asking for the same thing.

Source: StoneToss

Given that this was released during the height of the 2020 Black Lives Matter Uprising, the implication is that this white man was hurt during the protests. The webcomic is implying that Black people are the ones who should be paying damages for this violence. It’s a point that seems deceptively straightforward (i.e., that this violence is bad) when in actuality, it ignores the centuries of atrocities committed under white supremacy.

This dance is all too prevalent in this space. Many of these artists will make an offensive claim that is just ambiguous enough to walk back when someone calls them out. Peruse white supremacist webcomics long enough, and you will see anti-immigrant posts under the guise of concern about the spread of COVID; anti-vaccination posts framing vaccination as vast government overreach; and pro-segregationist posts discussing the “hypocrisy” of banning their content online while not letting businesses discriminate against people of color.

Source: Hedgewick

And, of course, these are the tamer examples. Some webcomics are just unapologetically hateful, stripped of the subtext and irony so often used as a defense against criticism. It’s easy to find comics released on major platforms such as Instagram, Facebook, Tumblr, and Twitter that depict Black people as savages, use the N-word as a punchline, portray trans individuals as men who have mutilated their genitals, and glorify straight-up assaults on Black characters. These creators are consistently putting hateful content out into the world, and we have to ask, “Why are they slipping through the cracks?”


Publishers such as Facebook and Twitter have community guidelines that ban hate speech — a few of them even have guidelines that ban the spread of misinformation. Facebook explicitly bars the spreading of hateful stereotypes, which many of these posts embody. These companies have repeatedly expressed a commitment to stopping the spread of such content, and yet, as we have just seen, it's rampant on their sites.

The justification typically given for why hate speech slips through the cracks is that these platforms are too big to individually monitor every piece of content uploaded onto them. Social media sites prioritize our ability to self-publish, spurning the editorial control of traditional media companies in favor of the user. The methods they use to enforce community guidelines are a combination of ever-evolving algorithms and self-reporting from users. As Victor Tangermann writes in Futurism:

“Most sites use algorithms in tandem with human moderators. These algorithms are trained by humans first to flag the content the company deems problematic. Human moderators then review what the algorithms flag — it’s a reactive approach, not a proactive one.”

There have always been those who think this system is not enough, advocating for these publishers to take more responsibility (by which they mean liability) for the content they host. Commentators such as Matt Rosoff have argued that Congress should amend Section 230 of the Communications Decency Act (i.e., the law that, in most instances, shields tech platforms from liability for the content they host) so that companies face more legal responsibility for posts that incite violence. They want these companies to be held responsible as publishers, and as the world becomes increasingly unstable, it’s easier and easier to understand this impulse.

Yet this debate, although necessary to have, almost seems beside the point. These platforms do not need the law to change for them to moderate content. Internet publishers moderate posts all the time and have implemented massive crackdowns in the past when there has been an incentive to do so. For example, when Congress carved out an exception to Section 230 in 2018 to curb prostitution (note: the law did not distinguish between sex trafficking and sex work), there was a mad dash among tech companies to mitigate this new risk. Craigslist axed its personals section, which at the time was being used by many sex workers. Reddit likewise axed a series of pro-sex worker subreddits. A host of other websites shut down altogether.

In response to the law, Apple temporarily removed Tumblr from its App Store after child pornography was found on the site. Tumblr announced shortly thereafter that it would remove all adult content from its platform, effectively destroying communities that had been using it to organize for years, including LGBTQIA+ ones. The new changes to Section 230 created a ripple effect that started with an attempt to ban sex work and ended with the purging of pornographic and sexual content on sites across the Internet.

We see from this example how Tumblr was willing to regulate content when there was a financial incentive to do so (i.e., when Apple pulled it from the App Store), and yet we have not seen a similar purge of white supremacist content. While Tumblr ostensibly doesn’t permit hate speech, it’s there for anyone who wants to find it. Go over to the page of the white supremacist webcomic Martian Magazine, and you will see that its profile picture is a dog waving a Nazi flag. In its bio, under Heroes, it lists “Adolf Hitler.” Under Television, it reads, “I don’t watch jew-shit.” This is the same creator who right now has a comic where the N-word is a punchline (also hosted on Tumblr), and there are other similar creators on this platform.

Source: Tumblr

Absent a financial incentive, Tumblr’s community guidelines don’t seem to matter much, and the same can be said for many of these platforms. For better or worse, they are only willing to enforce boundaries when required to do so by law or by immense public pressure. The 2018 amendment to Section 230 was by no means ideal, but it demonstrates that changes can be made quickly when the proper incentives are in place.

It would be straightforward for these platforms to put a dent in white supremacist webcomics because there aren’t many of them overall. While anyone can upload a bigoted comic, it takes a lot of work and ongoing promotion to update one continuously. To gain traction in this medium, many creators have to produce at least two comics a month and then plug them on social media. We are really only talking about a handful of creators in the English-speaking, white supremacist space who are meeting that benchmark (see MadebyJimBob, Stonetoss, Martian Magazine, Hedgewick, etc.).

If such creators were proactively removed, it would go a long way toward curbing this type of bigotry. The silence of major social media platforms on this front indicates where their priorities lie. They would rather wait for a hypothetical future where an algorithm can sort out this dilemma automatically than do the work of banning bad-faith actors preemptively in the here and now.


The reality is that white supremacist webcomic artists still have a robust reach on the web. Creators like Stonetoss know how to ride the line between acceptable and hateful rhetoric. They rely on irony, subtext, and humor to deflect responsibility for the terrible things they create while cultivating an audience very much motivated by that hatred.

Likewise, companies often rely on the complexity of their platforms (and protections in the law) to avoid responsibility for some of the more hateful content they host. They only seem interested in curbing that content when the law changes (e.g., the 2018 amendment to Section 230) or when there is immense public pressure to do so. They do not appear interested in enforcing their community guidelines to the fullest extent.

This situation creates a toxic cocktail of neglect where content is only removed once it becomes too prominent to ignore. These publishers have historically not taken a proactive approach to removing bad-faith actors. Maybe comics such as Hedgewick or Stonetoss will be de-platformed in the months and years ahead.

That moment of catharsis, however, would come only after years of hatred and bigotry spreading on sites such as Facebook and Twitter.

In the meantime, hate can be found wherever memes are shared.
