Attempts to stop “Fake News” may threaten satire
Political cartoons have an undeniable power.
In the 1860s, after being regularly pilloried as corrupt in Thomas Nast’s cartoons, New York Senator Boss Tweed famously ordered his lackeys to “stop them damn pictures!” Adolf Hitler would fly into a rage when British cartoonist David Low caricatured him as a buffoon. Political cartoonist Herblock coined the term “McCarthyism” in his cartoons, calling attention to ridiculous red-baiting tactics in the U.S. Senate. And in the present day, I’ve known more than a few cartoonists around the world who have been thrown in jail because their work targeted autocrats. Clearly, there is power in satire; power that is sorely needed today.
Yet today’s technology platforms can silence cartoons and other forms of satire just as effectively as an unhinged tyrant. Opaque algorithms have already dramatically limited the distribution of news and political content.
During my first quarter as a John S. Knight Journalism Fellow at Stanford University, I’ve been poring over research papers detailing efforts to create automated “fake news detectors” based on artificial intelligence and deep learning. (Never mind the fact that the very term “fake news” has been co-opted by demagogues and generals imposing martial law.) I was shocked to discover that many well-intentioned studies categorize satire as fake news or disinformation.
It appears that satirical cartoons may be destined to get lumped in with disinformation, whether because they deal with hot-button topics deprioritized by politically averse algorithms or because they are grouped with deceptive news sources in the same deep learning dataset. If cartoons can get shut out because they deal in sensitive topics and persuasion, why not opinion columns, editorials and other forms of pointed commentary? After all, it wasn’t long ago that tech giant Meta backed away from controversial topics and altered the dissemination of news and politics in the blink of an eye.
During my JSK Fellowship year and beyond, I want to preserve and amplify the positive effects visual commentary has on democracy and society — whether it’s pointing out the hypocrisy of a particular politician or shining a spotlight on issues people otherwise miss. The beauty of cartoons and other forms of satire is that they can draw people in and distill complex information, making it much more accessible in an entertaining way. Think of it as a real news delivery device, designed to disseminate truth more effectively. Doing just that is even more essential in a time of massive disinformation and information chaos.
I began my career drawing traditional political cartoons for newspapers, but switched to creating short political animated videos that I distributed to online news sites. While satire may be delivered in different ways — single-panel cartoons, short animation, late-night shows — it may all be at risk as huge tech platforms struggle to deal with an onslaught of disinformation and “news” that is truly fake. Don’t get me wrong, I’m all for eliminating disinformation, but I’m worried satire is getting caught up in efforts to build magical “fake news detectors.”
The “fake news” detectors
The ability of large language models, trained on huge datasets, to power everything from chatbots to image generators is impressive, and it’d be great if that technology helped thwart disinformation. But when I read about attempts to create fake news detectors based on linguistics, I’m reminded of another field: phrenology, the old “science” of studying the lumps on people’s heads to determine their character traits.
Before irate linguists or computer scientists show up at my door with pitchforks, let me explain. One study, for example, claims that language patterns can differentiate between real and fake news. The authors point to “a subtle tendency to exaggeration and use strong words to catch the attention of the reader” as a marker for fake news. Wait, isn’t that also the sign of a good cartoon — or newspaper editorial, opinion column or well-reasoned blog post for that matter?
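To see why that marker is such a blunt instrument, here is a deliberately naive sketch of a linguistics-based scorer. The word list, scoring rule and sample headlines are invented for illustration; real detectors use far richer features, but the failure mode is the same: exaggeration looks identical whether its goal is deception or satire.

```python
# A toy "sensationalism" scorer, the kind of linguistic signal some
# fake news detectors lean on. Everything here is hypothetical.

STRONG_WORDS = {"outrageous", "shocking", "disaster", "unbelievable",
                "corrupt", "ridiculous", "scandal"}

def sensationalism_score(text: str) -> float:
    """Fraction of words drawn from an 'exaggeration' lexicon,
    plus a small bump for each exclamation mark."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    if not words:
        return 0.0
    strong = sum(w in STRONG_WORDS for w in words)
    return strong / len(words) + 0.1 * text.count("!")

fake_headline = "SHOCKING! Corrupt officials hide unbelievable scandal!"
satire_caption = "Outrageous! Senator shocked to discover his own ridiculous record."
plain_report = "The city council approved the budget on Tuesday."

# Both the fabricated headline and the satirical caption score well above
# the dry report -- the heuristic can't tell deception from deliberate,
# pointed exaggeration.
```

Run the three samples through the scorer and the satirical caption lands on the same side of the line as the fabricated headline, which is exactly the lumping-together problem.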
While I’m only just beginning to explore what’s happening behind the scenes, I worry that the rise of so-called “fake news detectors” created by really smart computer scientists (who likely don’t have a background in journalism) will further squeeze out satire.
News and political content have already been hit hard by changes in algorithms as platforms aim for less controversial content — first, following the Russian disinformation campaign of 2016, then after the 2020 election and subsequent violent insurrection at the U.S. Capitol Building, and more recently leading up to the 2024 elections.
This is not a new phenomenon, as you can see in one of my cartoons from 2018.
From “Algorithm News,” by Mark Fiore, 2018 https://vimeo.com/251606397.
Video courtesy of Mark Fiore.
Media outlets became overly reliant on social media corporations to distribute their content and had the rug pulled out from under them. Independent creators rely on technology platforms to connect with their audience and make a living. But when your creation is news, politics or satire, distributing your work becomes increasingly difficult as platforms back away from controversial material — and the writing is on the wall (and white papers) that this recurring problem is being transposed into the world of artificial intelligence.
It’s a misnomer to call AI, deep learning and algorithms a “black box”: these systems are created by people who know what goes into them, and it appears they are carrying some of the existing flaws and biases into a shiny new technology. To their credit, some researchers admit they haven’t quite figured out how to differentiate satire from fake news. That’s encouraging, but I’m worried these tools will be implemented before that crucial distinction is resolved.
Starting points for solutions
If we’re to avoid negatively impacting a long-standing American tradition that works to promote facts, truth and accountability, we need to come up with a variety of solutions. Here are a few ideas I’ve been investigating and discussing with experts in the field.
- For more accurate detection, combine multiple types of fake news detectors: those that are linguistically based and those based on network analysis. (Simply put, is the post connected to a legitimate news outlet or a crazy conspiracy site?)
- Label works of satire as such. Would this help ward off over-eager algorithms or fake news detectors?
- Foster collaboration among technologists, journalists and satirists. I think this is essential if AI models are to be effective, and not just dumb machines, when it comes to the very human language of satire.
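As a rough illustration of the first two ideas, here is a hypothetical sketch that blends a language-based score with a network/source signal and an explicit satire label. The domain lists, threshold and categories are invented placeholders, not a real detection system:

```python
# A minimal sketch of combining detector signals. All lists and the
# 0.5 threshold are hypothetical, chosen only to show the idea.

KNOWN_OUTLETS = {"apnews.com", "reuters.com"}         # assumed allow-list
LABELED_SATIRE = {"theonion.com", "satire.example"}   # assumed satire labels

def classify(linguistic_score: float, domain: str) -> str:
    """Blend a language-based score (0.0 = plain, 1.0 = highly
    sensational) with a source signal from network analysis."""
    if domain in LABELED_SATIRE:
        return "satire"        # an explicit label short-circuits the heuristics
    if domain in KNOWN_OUTLETS:
        return "news"          # provenance outweighs sensational language
    return "suspect" if linguistic_score > 0.5 else "unverified"
```

In this sketch, a sensational-sounding piece from a labeled satire site is never flagged as disinformation, while the same language from an unknown domain is merely marked for review — the point being that no single signal decides the outcome on its own.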
It’s also important to remember that technology alone can’t solve every issue. Humans — with all their bugs and mysteries — must still be part of the equation.
Keep humans in the mix
The first time I came up against the problem of satire and political content getting squeezed out seems almost quaint in retrospect. It involved one company and one outlet staffed by actual humans. In the early days of Apple’s app store, I created an app so my political animation could be easily viewed on the iPhone. The app was quickly rejected because the developer guidelines stated you could not “ridicule public figures.”
Apple’s rejection email welcomed me to resubmit once I got rid of that slight bug, but being a political cartoonist who specialized in ridiculing public figures, I gave up since I didn’t want to take the “political” out of my cartoons.
A few months later, I won the Pulitzer Prize. Soon, news broke that Apple had rejected my app for its political content, and that became a big, embarrassing story for the company. (Keep in mind this was the era when the app store was going to save journalism.) I got a phone call from someone at Apple who said I should resubmit the app, so I did. It was approved, and soon after that, the company changed its developer guidelines, taking out the part that prohibited ridiculing public figures. Which led Steve Jobs to get a little grumpy and complain about a certain “son-of-a-bitch liar” who may or may not have been me.
Steve Jobs at 2010 All Things Digital D8 Conference. Video edits are for brevity, full video can be seen here. (Note: He mentions “defamation” in this video, but the actual wording in the developer guidelines was “ridicules public figures” — which is of course very different from the legal term of defamation.)
Video courtesy of Mark Fiore.
Even though this story was an embarrassment for Apple, it was an example of a technology company changing course to better reflect the values of our society.
Thank you, humans!
Cartoons brought me to the news. As for many people, satire was my gateway to caring about what was going on in the world and paying attention to news. If we silence satire, we lose a valuable tool that helps promote truth, shines a spotlight on hypocrisy and strengthens democracy. Consider this an urgent call for technologists, platforms and journalists to work together to preserve and amplify the positive effects satire has on our society.