Seeing is no longer believing: Artificial Intelligence’s impact on photojournalism

How might news organizations build public trust in news photography?
My first day on campus as a 52-year-old student and John S. Knight Journalism Fellow at Stanford University. Courtesy of David Carson. Photo: Betsy Taylor

My cameras, lenses, and lighting equipment were piled on the photo studio floor at the St. Louis Post-Dispatch. I was turning in my gear and the pangs of anxiety were real. After 24 years as a staff photographer at the paper, I was about to embark on a nine-month leave as a 2025 John S. Knight Journalism Fellow at Stanford University.

I was excited to start the fellowship, but it was a little crazy to think of myself as a college student again at age 52. Just a few weeks earlier, my wife Betsy and I had dropped off our daughter at Rice University to begin her freshman year. Now, we were both going to be sitting in classrooms with kids, ahem…young adults the same age as our daughter. We were suddenly an entire family of college students.

When people hear you’re headed to Stanford on a fellowship, they always say “Cool,” and then they immediately ask you what you’ll be studying. I had my elevator speech down cold: “I’m going to be studying the impacts of AI-generated images on photojournalism and what can be done to build public trust in news photos.”

People overwhelmingly responded with approval and nodding heads. “Oh, that’s important,” they’d say. It was reassuring to hear their approval of my project idea, but I also quickly wondered if I’d bitten off more than I could chew and prayed I wasn’t about to choke.

After one quarter at Stanford, I’m by no means an expert in Artificial Intelligence or image authentication. However, I do consider myself an expert in photojournalism, and I want to share some of what I’m learning and what I’ve found interesting as I explore how AI is affecting photojournalism and the cross-industry efforts to build trust in news photography.

AI images in the 2024 election

In January 2024, I submitted my fellowship application as the presidential primaries were ramping up. Several politically motivated AI-generated images had already been widely shared. I was confident AI would soon create more havoc. The following is a selection of AI-generated and digitally altered images that I found interesting or significant. There is also one instance where real images were falsely accused of being AI-generated in an effort to devalue them.

It’s difficult to determine exactly when this particular set of images first started circulating, but as early as December 2023, a post circulating on Facebook featured a few Norman Rockwell-inspired AI-generated images of Donald Trump and Joe Biden. Most reasonable people understood these were not authentic images; they were satire. The photorealistic quality of the images helped provide comic relief because it was so startling to see these bitter political foes in scenes of bromance.

AI-generated images of Presidents Trump and Biden (Facebook screenshot). Courtesy of David Carson.

However, photorealistic images like the ones above do have a negative effect on people. The flood of AI-generated images filling social media is sowing seeds of doubt, eroding people’s ability to trust what they see and making them question reality.

The flip side of those satirical photos is the AI-generated images of Donald Trump being taken into custody by police when he was facing trial in New York City. These images require the viewer to take a second, closer look to realize they’re synthetic, AI-generated fakes.

AI-generated photos created by Eliot Higgins using Midjourney (Screenshot from X). Courtesy of David Carson.

There were a few different iterations of the Trump arrest photos, but I was most disappointed by these AI-generated images created by Eliot Higgins, a highly respected journalist and the founder of Bellingcat. False images like these dangerously blur the lines of reality for the public and make people more cynical about what they see once they realize they’ve been tricked and lied to. Higgins posted the images in jest to entertain himself, and, in fairness, he disclosed his use of Midjourney AI to make the false images. But these images are lies, and responsible, ethical journalists must avoid creating or sharing false images that contribute to the spread of disinformation.

As election campaigning ramped up, several notable instances of synthetic images designed to deceive the public were created and shared on social media.

Donald Trump’s post on Truth Social claiming a non-existent endorsement from Taylor Swift featured a mix of authentic and AI-generated images.

Former President Donald Trump’s post on Truth Social (Truth Social screenshot). Courtesy of David Carson.

An AI-generated image of Donald Trump with Black supporters was created by a Florida radio host, who defended the fake image by saying, “I’m not a photojournalist.”

AI-generated image posted by radio host Mark Kaye (Screenshot from X). Courtesy of David Carson.

In an interesting twist, Donald Trump falsely accused Kamala Harris of using generative AI to enhance the size of the crowd at a rally, but the images were actually authentic and not created by AI. This post illustrates how AI can be weaponized to discredit authentic images.

Former President Donald Trump’s post on Truth Social (Screenshot from Truth Social). Courtesy of David Carson.

Eight days later, Trump posted on Truth Social this AI-generated photo of a woman, seen from behind, who looks like Harris. For me, this image falls more into the satirical category because most people would be able to tell it is not authentic. Also of note: AI struggled to create convincing images of Harris’s facial features.

AI-generated photo posted by former President Donald Trump (Screenshot from X). Courtesy of David Carson.

And this digitally altered image of a young Kamala Harris in a McDonald’s uniform was created by a Trump supporter, who was the first to share it on social media. The faked image was then picked up by some Harris supporters, who reposted it without realizing it was an altered photo.

Digitally altered photo with Kamala Harris’ head added to the image (Screenshot via PolitiFact). Courtesy of David Carson.

I’d like to highlight one other fake, synthetic image that spread like a virus leading up to the election. While not directly political, this visual lie was shared widely on conservative social media channels and used to criticize the Biden administration’s response to Hurricane Helene.

AI-generated photo of a girl and puppy being rescued from a flooded area, featured in a Twitter post (Screenshot from X). Courtesy of David Carson.

I believe this image is the worst of the AI-generated images I’ve highlighted because it was shared widely and fooled so many people into accepting it as an image of a real moment. This lying AI-generated pile of synthetic pixels also distracted from the hard work local and national journalists were doing to tell the stories of real people who were affected by Hurricane Helene.

Looking back at the use of AI-generated and faked images in the 2024 election cycle, I’m not sure any of the artificial content swayed voters in significant numbers to one candidate or the other. That might be because our current political environment is already so polarized that people were only exposed to AI images of content they already agreed with. But it is also possible that just the existence of AI-generated images has devalued all visuals and made people more skeptical of anything they see. Seeing is no longer believing.

Combating Synthetic Photos, Disinformation and Visual Lies

Politically motivated disinformation and fake news are not recent inventions of Generative AI and the Internet. The printing press, hailed for its ability to spread information to the masses, has also been used for centuries to spread mis/disinformation and lies. Ben Franklin famously published a fake newspaper supplement with false (and racist) stories from the basement of his Paris apartment to build public support for the colonists fighting the British during the Revolutionary War.

Generative AI has increased the speed at which disinformation can be created, and it has democratized the production of believable deepfakes, which can be cranked out with a few prompts by keyboard warriors with limited technical skills. The deepfakes then spread virally on social media platforms fueled by algorithms that value clicks and engagement over reality. To make matters worse, many social media companies bowed to political pressure and curtailed the content-moderation efforts designed to limit the spread of mis/disinformation. Unchecked synthetic images, turbocharged by social media, then spread their visual lies to a public that is increasingly unable or unwilling to discern reality from fiction.

Since 2022, billions of AI-generated images have been created, and every day tens of millions more are pumped out into the world. Synthetic images blur the lines of reality and make it difficult for people to trust what they see as they attempt to wade through a sea of AI-generated images to find authentic images of real moments in life.

At the start of my exploration of AI’s impacts on photojournalism, I wasn’t aware of the cross-industry collaboration to ensure the authenticity of digital content that had been building for years. As my research progressed, I realized the news industry was approaching a tipping point where content provenance and authentication technology could be widely deployed and become a feature of news websites.

The three main areas I’ve been focusing on for combating the mis/disinformation spread by synthetic content are detection, durable watermarks and authentication.

Detection

Let’s start with AI image detection. Sightengine, a company focused on “content moderation and image analysis,” has a free AI image detector on its website that I’ve found helpful. AI or Not also works well, but the free version of its tool limits functionality and the number of images you can test each month.
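For readers who want to experiment beyond the web interfaces, here is a minimal sketch of what querying a detection service programmatically might look like, using Sightengine’s REST API as the example. The endpoint, model name and response fields below reflect my reading of their public documentation, so treat the specifics as assumptions to verify against the current docs.

```python
# Minimal sketch: query an AI-image detection API from Python.
# The endpoint, "genai" model, and response fields are my reading of
# Sightengine's public docs -- treat them as assumptions and verify them.
import requests

API_USER = "your_api_user"      # placeholder credentials from a Sightengine account
API_SECRET = "your_api_secret"

def ai_likelihood(image_path: str) -> float:
    """Return the detector's 0-1 score that an image is AI-generated."""
    with open(image_path, "rb") as f:
        resp = requests.post(
            "https://api.sightengine.com/1.0/check.json",
            files={"media": f},
            data={"models": "genai", "api_user": API_USER, "api_secret": API_SECRET},
            timeout=30,
        )
    resp.raise_for_status()
    return resp.json()["type"]["ai_generated"]

if __name__ == "__main__":
    score = ai_likelihood("suspect_photo.jpg")
    print(f"AI-generated likelihood: {score:.2f}")  # closer to 1.0 = more likely synthetic
```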

The problem with relying on AI image detection is that it’s an ever-evolving, unwinnable game of cat and mouse. It used to be easier to detect AI-generated images with a quick visual inspection: “Oh look, that hand has six fingers on it.” But six-fingered hands will soon be relics of AI image generation’s infancy as better-trained models continue to evolve rapidly and churn out more photorealistic fakes that can evade both human and computer detection.

Durable Watermarks

Digital watermarks, which are invisible to the viewer but detectable by a computer, are another possible method for authenticating images. I heard firsthand from a researcher at UC Berkeley about his research focused on strengthening these watermarks, which are embedded at the pixel level in the electronic “noise” of an image. The challenge with digital watermarks is making them durable, so they remain with an image that has been resized, copied, or altered.

Slide from UC Berkeley postdoctoral researcher Xuandong Zhao’s Hoover Institution presentation, “Challenges and Safeguards Against AI-Generated Disinformation: Watermarking for AI-Generated Content,” at Stanford University on Dec. 4, 2024. Photo: David Carson.
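To make the concept concrete, here is a toy sketch of the general principle behind noise-based watermarks: a keyed pseudorandom pattern is added to the pixel values, and detection measures how strongly an image correlates with that pattern. This is my own illustrative example, not Zhao’s tree-ring scheme or any production system; real watermarks are engineered to survive resizing, cropping, and compression in ways this toy does not.

```python
# Toy spread-spectrum watermark: illustrative only, not a production scheme.
import numpy as np

def _pattern(shape: tuple, seed: int) -> np.ndarray:
    # The seed acts as the secret watermarking key.
    return np.random.default_rng(seed).standard_normal(shape)

def embed(image: np.ndarray, seed: int = 42, strength: float = 2.0) -> np.ndarray:
    """Hide a keyed pseudorandom noise pattern in the pixel values."""
    return np.clip(image + strength * _pattern(image.shape, seed), 0, 255)

def detect(image: np.ndarray, seed: int = 42) -> float:
    """Correlation with the keyed pattern: ~strength if marked, near 0 if not."""
    pattern = _pattern(image.shape, seed)
    return float(np.mean((image - image.mean()) * pattern))

# Demo on a random grayscale "image": marked vs. unmarked scores.
img = np.random.default_rng(0).uniform(0, 255, (256, 256))
print(f"marked: {detect(embed(img)):.2f}, unmarked: {detect(img):.2f}")
```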

Both Digimarc and Adobe have developed digital watermarks that can be used for image authentication. While invisible watermarking is a useful tool, its effectiveness is enhanced when it is used in conjunction with image authentication and provenance technology.

Authentication

I believe authentication of images at the moment they’re created is the clearest path forward for building public trust in news photography. Photojournalists using cameras that seamlessly embed Content Credentials, based on the open technical standard developed by the Coalition for Content Provenance and Authenticity (C2PA), will be able to create verifiable images with authentication data that the public can readily view.

Time magazine just named Content Credentials one of the “Best Inventions of 2024” for battling fake photos. And Fast Company’s Next Big Things in Tech 2024 report recognized Andy Parsons, senior director of Adobe’s Content Authenticity Initiative, for “leading an industrywide effort to promote trust and transparency around digital content.”

Leica has already released a C2PA-compliant camera to the public, and Sony, Canon and Nikon are soon expected to release cameras or make the C2PA technology available in some of their cameras through a firmware update and/or license. The Associated Press has been field-testing Sony’s C2PA cameras, Getty Images is expected to test Canon’s C2PA cameras, and Reuters, working with the Starling Lab at Stanford University and USC, was among the first to actively experiment with image authentication.

If you’re so inclined and feeling really geeky, you can read the corporate speak here about all the years of impressive collaboration involved in creating the C2PA standard. But to boil it down to its simplest terms: C2PA and Content Credentials are where the industry is headed. And if you haven’t heard of them yet, I predict you’re about to start hearing lots about C2PA and Content Credentials in 2025.

A small black-and-white Content Credentials “CR” icon will appear in the corner of images that can be authenticated.

When you start seeing Content Credential pins on images online, think of them as nutrition labels like the ones you’d see on food at the grocery store. An image with Content Credentials can provide educated news consumers with the critical information they need to determine the origin of the image and the history of modifications made to it. While I’m focused on Content Credentials in the context of building public trust in news photography, the technology also has much broader applications. Content Credentials can be used to verify and authenticate a wide range of photo, video, audio, generative AI and digital art file formats.
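If you want to peek at this “nutrition label” data yourself, the Content Authenticity Initiative publishes an open-source command-line tool, c2patool, that reads the Content Credentials embedded in a file. Here is a minimal sketch that calls it from Python; it assumes c2patool is installed on your PATH and, as its documentation describes, prints the manifest store as JSON, so verify the behavior against the current README.

```python
# Sketch: read an image's Content Credentials via the open-source c2patool CLI
# (github.com/contentauth/c2patool). Assumes the tool is installed and that
# running it with just a file path prints the C2PA manifest store as JSON.
import json
import subprocess

def read_content_credentials(image_path: str) -> dict | None:
    """Return the embedded C2PA manifest store as a dict, or None if absent."""
    result = subprocess.run(
        ["c2patool", image_path],
        capture_output=True,
        text=True,
    )
    if result.returncode != 0:
        return None  # no embedded credentials, or a validation error
    return json.loads(result.stdout)

manifest = read_content_credentials("news_photo.jpg")
if manifest:
    print("Content Credentials found. Top-level fields:", list(manifest.keys()))
else:
    print("No Content Credentials embedded in this image.")
```

If the command line isn’t your thing, Adobe’s public Verify page (verify.contentauthenticity.org) lets you drop an image into a browser and inspect the same data.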

Earlier this year, a beta of Adobe’s Content Authenticity Chrome browser extension was released. The extension enables embedded Content Credentials and invisible watermark information to be viewed on any website, even if the site does not yet display them.

It’s great that the photo wire services are getting closer to implementing the C2PA standard, but there are hundreds of smaller local news outlets that could also benefit from the enhanced trust and transparency C2PA and Content Credentials can bring. My concern is that local news outlets already strapped for cash will have a hard time finding money in their budgets for C2PA-compliant cameras, and may lack the resources or technical ability to make the changes to their websites needed to properly display the credentials.

There are a few ideas and products for image authentication that don’t involve buying a new camera that costs thousands of dollars. The Atom H1, being developed by an MIT physicist, is a $299 module about the size of a deck of cards that attaches to the hot shoe on top of a digital camera and “signs image files with a secure digital fingerprint” that is C2PA compliant.

The Starling Lab is also working on developing lower-cost solutions for image authentication.

If you’re interested in experimenting with creating authenticated photos today and you don’t have a spare $9,000 for a Leica, there are a couple of other options you can explore. You could try the free “Click Camera: Trusted Content” app, which lets you use your phone to take photos with blockchain-based authentication that adheres to the C2PA standard. Additionally, while it is not authentication at the moment of capture, Adobe has enabled both Photoshop and Lightroom to apply Content Credentials to images you process on your computer.

The adoption of C2PA is on the horizon, but it is not going to be as simple as flicking on a light switch. The biggest issue at the moment is the lack of C2PA-compliant cameras in the hands of working photojournalists in the field. If Sony, Canon and Nikon release their cameras or firmware to the public in 2025, that could be a huge step toward implementation.

The next challenge is updating the various content management systems that news organizations’ websites use so that Content Credentials can be properly displayed to readers. Then, once Content Credential pins begin appearing on websites, another critical step in building trust in news photography will be educating the public about what a Content Credential pin on an image means.

The other important part of C2PA adoption is legislation on the labeling of AI-generated content. The legislative step is likely to be the longest and most difficult process. At this time, there is no federal legislation that offers a comprehensive set of regulations on the development or use of artificial intelligence. In the absence of federal regulation, more than 40 states have proposed a patchwork of hundreds of different laws that are currently up for debate in statehouses across the country. This mishmash of proposed state laws threatens to create a regulatory mess for companies, which might force the federal government into action to create a uniform set of national AI laws.

The field of image authentication and Artificial Intelligence’s impact on photojournalism is rapidly evolving. I’m interested to hear your thoughts on the subject as the implementation of C2PA and Content Credentials approaches a critical mass and begins to be adopted more widely. Please leave your thoughts or ideas in the comments section or you can message me directly at dscarson@stanford.edu.
