Artificially generated images of real-world news events proliferate on stock image sites, blurring truth and fiction

A young Israeli woman, wounded, clinging to a soldier’s arms in anguish. A Ukrainian boy and girl, holding hands, alone in the rubble of a bombed-out cityscape. An inferno rising improbably from the tropical ocean waters amid Maui’s raging wildfires.
At a glance, they could pass as iconic works of photojournalism. But not one of them is real. They’re the product of artificial intelligence software, and they were part of a vast and growing library of photorealistic fakes for sale on one of the web’s largest stock image sites until it announced a policy change this week.

Responding to questions about its policies from The Washington Post, the stock image site Adobe Stock said Tuesday it would crack down on AI-generated images that seem to depict real, newsworthy events and take new steps to prevent its images from being used in misleading ways.

An AI-generated fake photo, labeled by The Washington Post, of a wounded Israeli woman clinging to a soldier can be found on Adobe Stock. (Adobe)

As rapid advances in AI image-generation tools make synthetic images ever harder to distinguish from real ones, experts say their proliferation on sites such as Adobe Stock and Shutterstock threatens to hasten their spread across blogs, marketing materials and social media, blurring the line between fiction and reality.

Adobe Stock, an online marketplace where photographers and artists can upload images for paying customers to download and publish elsewhere, last year became the first major stock image service to embrace AI-generated submissions. That move came under fresh scrutiny after a photorealistic AI-generated image of an explosion in Gaza, taken from Adobe’s library, cropped up on a number of websites without any indication that it was fake, as the Australian news site Crikey first reported.

The Gaza explosion image, which was labeled as AI-generated on Adobe’s site, was quickly debunked. So far, there’s no indication that it or other AI stock images have gone viral or misled large numbers of people. But searches of stock image databases by The Post showed it was just the tip of the AI stock image iceberg.

An AI-generated fake photo, labeled by The Washington Post, depicts an explosion in Gaza. It has since been removed from Adobe Stock. (Adobe)

A recent search for “Gaza” on Adobe Stock brought up more than 3,000 images labeled as AI-generated, out of some 13,000 total results. Several of the top results appeared to be AI-generated images that were not labeled as such, in apparent violation of the company’s guidelines. They included a series of images depicting young children, scared and alone, carrying their belongings as they fled the smoking ruins of an urban neighborhood.

It isn’t just the Israel-Gaza war that’s inspiring AI-concocted stock images of current events. A search for “Ukraine war” on Adobe Stock turned up more than 15,000 fake images of the conflict, including one of a small girl clutching a teddy bear against a backdrop of military vehicles and rubble. Hundreds of AI images depict people at Black Lives Matter protests that never happened. Among the dozens of machine-made images of the Maui wildfires, several look strikingly similar to ones taken by photojournalists.

This AI-generated fake photo, labeled by The Washington Post, appears on Adobe Stock with the caption “a girl holding his teddy bear with destructive civilian area during war time, sorrow scenery of war victims, idea for support children's right , especially Ukrainian, Generative Ai.” (Adobe)

This AI-generated fake photo, labeled by The Washington Post, can be found on Adobe Stock when users search for “BLM protests.” (Adobe)

This AI-generated fake photo, labeled by The Washington Post, can be found on Adobe Stock when users search for “Maui fires.” (Adobe)

“We’re entering a world where, when you look at an image online or offline, you have to ask the question, ‘Is it real?’” said Craig Peters, CEO of Getty Images, one of the largest suppliers of photos to publishers worldwide.

Adobe initially said that it had policies in place to clearly label such images as AI-generated and that the images were meant to be used only as conceptual illustrations, not passed off as photojournalism. After The Post and other publications flagged examples to the contrary, the company rolled out tougher policies Tuesday. Those include a prohibition on AI images whose titles imply they depict newsworthy events; an intent to take action on mislabeled images; and plans to attach new, clearer labels to AI-generated content.

“Adobe is committed to fighting misinformation,” said Kevin Fu, a company spokesperson. He noted that Adobe has spearheaded a Content Authenticity Initiative that works with publishers, camera manufacturers and others to adopt standards for labeling images that are AI-generated or AI-edited.

As of Wednesday, however, thousands of AI-generated images remained on its site, including some still without labels.

This AI-generated fake photo, labeled by The Washington Post, can be found on Adobe Stock with the caption “Poor orphan child in destroyed city in Palestine Israel war conflict. Humanitarian crisis concept. Generative AI.” (Adobe)

Adobe Stock places a label beside AI-generated images when users are looking to license them. (Washington Post illustration; Adobe)

Shutterstock, another major stock image service, has partnered with OpenAI to let the San Francisco-based AI company train its Dall-E image generator on Shutterstock’s vast image library. In turn, Shutterstock users can generate and upload images created with Dall-E, for a monthly subscription fee.

A search of Shutterstock’s site for “Gaza” returned more than 130 images labeled as AI-generated, though few of them were as photorealistic as those on Adobe Stock. Shutterstock did not respond to requests for comment.

Tony Elkins, a faculty member at the nonprofit media organization Poynter, said he’s certain some media outlets will use AI-generated images in the future for one reason: “money.”

Since the 2008 recession, media organizations have cut visual staff to trim their budgets. Cheap stock images have long proved a cost-effective way to illustrate text articles, he said. Now that generative AI makes it easy for nearly anyone to create high-quality images of news events, media organizations without healthy budgets or strong editorial ethics will be tempted to use them.

This AI-generated fake photo, labeled by The Washington Post, is found on Adobe Stock with the caption “Black Protester in Hoodie and Mask Raising Fists for Social Justice. Generative AI.” (Adobe)

“If you’re just a single person running a news blog, or even if you’re a great reporter, I think the temptation [for AI] to give me a photorealistic image of downtown Chicago — it’s going to be sitting right there, and I think people will use those tools,” he said.

The problem becomes more apparent as Americans change how they consume news. About half of Americans sometimes or often get their news from social media, according to a Pew Research Center study released Nov. 15. Almost a third of adults regularly get it from the social networking site Facebook, the study found.

Amid this shift, Elkins said several reputable news organizations have policies in place to label AI-generated content when used, but the news industry as a whole has not grappled with it. If outlets don’t, he said, “they run the risk of people in their organization using the tools however they see fit, and that may harm readers and that may harm the organization — especially when we talk about trust.”

If AI-generated images replace photos taken by journalists on the ground, Elkins said, that would be an ethical disservice to the profession and to news readers. “You’re creating content that did not happen and passing it off as an image of something that is currently going on,” he said. “I think we do a vast disservice to our readers and to journalism if we start creating false narratives with digital content.”

Realistic, AI-generated images of the Israel-Gaza war and other current events were already spreading on social media without the help of stock image services.

This AI-generated fake photo, labeled by The Washington Post, can be found on Adobe Stock when users search for “Gaza.” (Adobe)

The actress Rosie O’Donnell recently shared on Instagram an image of a Palestinian mother carting three children and their belongings down a garbage-strewn road, with the caption “mothers and children - stop bombing gaza.” When a follower commented that the image was an AI fake, O’Donnell replied “no its not.” But she later deleted it.

A Google reverse image search helped to trace the image to its origin in a TikTok slide show of similar images, captioned “The Super Mom,” which has garnered 1.3 million views. Reached via TikTok message, the slide show’s creator said he had used AI to adapt the images from a single real photo using Microsoft Bing, which in turn uses OpenAI’s Dall-E image-generation software.

Meta, which owns Instagram and Facebook, prohibits certain types of AI-generated “deepfake” videos but does not prohibit users from posting AI-generated images. TikTok does not prohibit AI-generated images, but its policies require users to label AI-generated images of “realistic scenes.”

A third major image provider, Getty Images, has taken a different approach than Adobe Stock or Shutterstock, banning AI-generated images from its library altogether. The company has sued Stability AI, the maker of the Stable Diffusion image generator, alleging that it infringes on the copyright of real photos to which Getty owns the rights. Instead, Getty has partnered with Nvidia to build its own AI image generator, trained only on its library of creative images, which it says does not include photojournalism or depictions of current events.

Peters, the Getty Images CEO, criticized Adobe’s approach, saying it isn’t enough to rely on individual artists to label their images as AI-generated — especially because those labels can be easily removed by anyone using the images. He said his company is advocating that the tech companies that make AI image tools build indelible markers into the images themselves, a practice known as “watermarking.” But he said the technology to do that is a work in progress.

“We’ve seen what the erosion of facts and trust can do to a society,” Peters said. “We as media, we collectively as tech companies, we need to solve for these problems.”