
As Iran and Israel fought, people turned to AI for facts. They didn’t find many

An AI-generated image of a fighter plane shot down in Iran that was published on a parody account on X. Users repeatedly asked the platform’s AI chatbot, Grok, if the image was real. @hehe_samir/Annotation by NPR

In the first days after Israel’s surprise airstrikes on Iran, a video began circulating on X. A newscast, narrated in Azeri, shows drone footage of a bombed-out airport. The video has received almost 7 million views on X.

Hundreds of users tagged X’s integrated AI bot Grok to ask: Is this real?

It’s not — the video was created with generative AI. But Grok’s responses varied wildly, sometimes minute to minute. “The video likely shows real damage,” said one response; “likely not authentic,” said another.

In a new report, researchers at the Digital Forensic Research Lab tabulated more than 300 responses by Grok to the post.


“What we’re seeing is AI mediating the experience of warfare,” said Emerson Brooking, director of strategy at the DFRLab, part of the nonpartisan policy group the Atlantic Council. He co-authored a book about how social media shapes perceptions of war.

“There is a difference between experiencing conflict just on a social media platform and experiencing it with a conversational companion, who is endlessly patient, who you can ask to tell you about anything,” said Brooking. “This is another milestone in how publics will process and understand armed conflicts and warfare. And we’re just at the start of it.”

With AI-generated images and videos rapidly growing more realistic, researchers who study conflicts and information say it has become easier for motivated actors to spread false claims and harder for anyone to make sense of conflicts based on what they’re seeing online. Brooking has watched this escalate since Hamas’ attack on Israel on Oct. 7, 2023.

“Initially, a lot of the AI-generated material was in some early Israeli public diplomacy efforts justifying escalating strikes against Gaza,” said Brooking. “But as time passed, starting last year with the first exchanges of fire between Iran and Israel, Iran also began saturating the space with AI-generated conflict material.”

Destroyed buildings and downed aircraft are among the AI-generated images and videos that have spread, some with obvious tells that they were created with AI but others with more subtle signs.

“This is potentially the worst I have seen the information environment in the last two years,” said Isabelle Frances-Wright, director of technology and society at the nonprofit Institute for Strategic Dialogue. “I can only imagine what it feels like [for] the average social media user to be in these feeds.”

AI bots have entered the chat

Social media companies and makers of AI chatbots have not shared data about how often people use chatbots to seek out information on current events, but a Reuters Institute report published in June showed that about 7% of users in the dozens of countries the institute surveyed use AI to get news. When asked for comment, X, OpenAI, Google and Anthropic did not respond.


Since March, X users have been able to ask Grok questions by tagging it in replies. The DFRLab’s report analyzed more than 100,000 posts in which users tagged Grok to ask about the Israel-Iran war during its first three days.

The report found that when asked to fact-check something, Grok referenced Community Notes, X’s crowdsourced fact-checking effort. This made the chatbot’s answers more consistent, but it still contradicted itself.

Smoke rises from locations targeted in Tehran amid the third day of Israel’s waves of strikes against Iran, on June 15. While this image is real, the proliferation of AI-generated images has allowed state-backed influence campaigns to flourish. Zara/AFP via Getty Images

NPR sent similar queries to other chatbots about the authenticity of photos and videos supposedly depicting the Israel-Iran war. OpenAI’s ChatGPT and Google’s Gemini correctly responded that one image NPR fed them was not from the current conflict, but then misattributed it to other military operations. Anthropic’s Claude said it couldn’t authenticate the content one way or the other.

Even asking chatbots more complicated questions than “is it real?” comes with its own pitfalls, said Mike Caulfield, a digital literacy and disinformation researcher. “[People] will take a picture and they’ll say, ‘Analyze this for me like you’re a defense analyst.'” He said chatbots can respond in pretty impressive ways and can be useful tools for experts, but “it’s not something that’s always going to help a novice.”


AI and the “liar’s dividend” 

“I don’t know why I have to tell people this, but you don’t get reliable information on social media or an AI bot,” said Hany Farid, a professor who specializes in media forensics at the University of California, Berkeley.

Farid, who pioneered techniques to detect digital synthetic media, warned against casually using chatbots to verify the authenticity of an image or video. “If you don’t know when it’s good and when it’s not good and how to counterbalance that with more classical forensic techniques, you’re just asking to be lied to.”

He has used some of these chatbots in his work. “It’s actually good at object recognition and pattern recognition,” Farid said, noting that chatbots can analyze the style of buildings and type of cars typical to a place.

The rise of people using AI chatbots as a source of news coincides with AI-generated videos becoming more realistic. Together, these technologies present a growing list of concerns for researchers.


“A year ago, mostly what we saw were images. People have grown a little weary or leery, I should say, of images. But now full-on videos, with sound effects — that’s a different ballgame entirely,” he said, pointing to Google’s recently released text-to-video generator, Veo 3.

The new technologies are impressive, said Farid, but he and other researchers have long warned of AI’s potential to bolster what’s known as “the liar’s dividend.” That’s when a person trying to avoid accountability is more readily believed when claiming that incriminating or compromising visual evidence against them is manufactured.

Another concern for Farid is AI’s ability to significantly muddy perceptions of current events. He points to an example from the recent protests against President Trump’s immigration raids: California Gov. Gavin Newsom shared an image of activated National Guard members sleeping on the floor in Los Angeles. Newsom’s post criticized Trump’s leadership, saying, “You sent your troops here without fuel, food, water or a place to sleep.” Farid said internet users began to question the photo’s authenticity, with some saying it was AI generated. Others submitted it to ChatGPT and were told the image was fake.

“And suddenly the internet went crazy: ‘Governor Newsom caught sharing a fake image,'” said Farid, whose team was able to authenticate the photo. “So now, not only are people getting unreliable information from ChatGPT, they’re putting in images that don’t fit their narrative, don’t fit the story that they want to tell, and then ChatGPT says, ‘Ah, it’s fake.’ And now we’re off to the races.”

As Farid warns often, these added layers of uncertainty seem certain to play out in dangerous ways. “When the real video of human rights violations comes out, or a bombing, or somebody saying something inappropriate, who’s going to believe it anymore?” he said. “If I say, ‘1 plus 1 is 2,’ and you say, ‘No, it’s not. It’s applesauce’ — because that’s the tenor of the conversation these days — I don’t know where we are.”


How AI accelerates influence campaigns 

While generative AI can conjure convincing new realities, DFRLab’s Brooking said that in conflict, one of the more compelling uses of AI is to easily create a kind of political cartoon or obvious propaganda message.

Brooking said people don’t have to believe visual content is authentic to enjoy sharing it. Humor, for example, attracts plenty of user engagement. He sees AI-generated content following a pattern similar to what researchers have seen with political satire, such as when headlines from The Onion, a satirical newspaper, have gone viral.

“[Internet users] were signaling a certain affinity or set of views by sharing it,” said Brooking. “It was expressing an idea they already had.”

Generative AI’s creative abilities are ripe for use in propaganda of all kinds, according to Darren Linvill, a Clemson University professor who studies how states like China, Iran and Russia use digital tools for propaganda.

“There’s a very famous campaign where the Russians planted a story in an Indian newspaper back in the ’80s,” said Linvill. The KGB sought to spread the false narrative that the Pentagon was responsible for creating the AIDS virus, so “[the KGB] planted this story in a newspaper that they founded, and then used that original story to then layer the story through a purposeful narrative laundering campaign in other outlets over time. But it took years for this story to get out.”

As technology has improved, influence campaigns have sped up. “They’re engaging in the same process in days and even hours today,” Linvill said.

Linvill’s research found that a Russian propaganda site was able to maintain its persuasiveness while more than doubling its output after it started using ChatGPT to help write articles.

Linvill said AI can help with foreign influence campaigns in many ways, but the most effective way these messages actually reach a willing audience is still through prominent people or influencers, whom state actors sometimes pay.


“They spread a lot of bread on the water, and some percentage of that picks up and becomes a prominent part of the conversation,” he said.

Whether it’s propaganda or people looking for information in uncertain moments, Linvill and other researchers NPR spoke to said the most potent ideas AI can help spread are the ones that confirm what people already want to believe.
