Fact File: As Iran war escalates, fake videos and images proliferate online

By Canadian Press on March 11, 2026.

As the U.S.-Israel war on Iran escalates, a flood of fake and misleading content is muddying the waters online. People are turning to social media for news of the war, but experts in artificial intelligence say bad actors, unchecked tech companies — and an internet where we can no longer trust that everything we see is real — make getting that news much harder.

Fake and misleading videos and images about the conflict are getting millions of views online.

On the X platform, a video of missiles supposedly striking Tel Aviv showed signs it was generated with artificial intelligence, including a distorted Israeli flag and buildings and cars that change shape throughout the video.

A post to X by the state-owned Tehran Times newspaper claimed to show before-and-after photos of destruction at a U.S. military base in Qatar. However, both photos appear to match a February 2025 satellite image of a base in Bahrain, suggesting the “after” image was altered to add the damage.

Brian McQuinn, an associate professor at the University of Regina and co-director of the Centre for Artificial Intelligence, Data and Conflict, said there is a political motivation for some of the accounts creating fake war content.

He said countries that want the war to end such as Iran and Russia are amplifying fake content on U.S.-owned social media platforms, including X and Instagram. “One of the main goals of a lot of foreign influence operations is to ensure that people don’t know what is true. Because then you can’t get mad and you can’t have an opinion, because you have nothing to base the opinion on,” he said.

McQuinn said foreign disinformation campaigns want to play up the war’s divisiveness in the United States and put pressure on the administration to end it.

SOCIAL MEDIA’S ROLE

McQuinn said much of the disinformation and misinformation Canadians share online does not reflect their ideology but is just content they didn’t vet closely.

A 2023 report by the Regina-based centre and the University of Maryland on Russian weaponization of Canada’s far right found that most Canadian X accounts that amplified Russian disinformation were “average Canadians,” including many who shared information unknowingly.

“The goal of disinformation campaigns is to use a combination of truth, half-truth and lies to try to launder these messages into organic networks,” he said.

And it is getting increasingly difficult to distinguish the truth from the lies. When some users attempted to verify the video showing supposed missile strikes on Tel Aviv by asking X’s artificial intelligence chatbot Grok, it told them the video was real.

It is not uncommon for AI chatbots like Grok to share false information because they are predictive models with no notion of the truth, AI researchers told The Canadian Press in November.

Nikita Bier, X’s head of product, said in reply last week to a user on the platform that the company was working to detect and label misleading war videos.

In a separate post, Bier said users who post AI-generated war videos without adding an AI disclaimer would be suspended from the platform’s creator revenue-sharing program.

“We will continue to refine our policies and product to ensure X can be trusted during these critical moments,” Bier wrote.

IS IT STILL POSSIBLE TO SPOT FAKES ONLINE?

Benjamin Steel, a research engineer and PhD fellow at the Centre for Media, Technology and Democracy at McGill University, said platforms could do more to tackle AI-generated fakes, like tagging content as AI or creating digital watermarks that trace AI content back to the source.

Steel said it’s particularly hard for Canadians to detect AI-generated war videos because most of them don’t have anything to compare the images to. “It’s harder to use your normal intuition about what looks real,” he said. “Not that many people in Canada have probably seen an explosion.”

Last week, AFP Fact Check reported on claims made by social media users and vloggers that the CBC ran an AI-generated image in an article about Ayatollah Ali Khamenei’s death in Israeli strikes. The fact check found that the photo some reported as fake was, in fact, real. A society that distrusts everything it sees online, McQuinn said, “can no longer be an informed society.”

Steel said it’s still possible to get truthful information from social media sites, but users need to be “skeptical.” There are still small “tells” that users can spot in AI-generated content, he said, like “flickering” when people move their hands, an unnatural glossy sheen and physics that shouldn’t work — such as cars that change shape.

But those clues could change months from now as the technology continues to improve, Steel warned, noting it’s already getting harder to spot AI fakes. “But simultaneously, we’ll probably create new tools to measure these things,” he said.

McQuinn cautioned users to take everything online with a grain of salt; look at who is sharing the information and whether they have a trustworthy reputation; and take a closer look at images before sharing. Seeking out trusted news sources that can substantiate information, videos and images is important, both experts advised.

When it comes to misinformation, “Canadians decide” how much gets shared, McQuinn said.

This report by The Canadian Press was first published March 11, 2026.

Marissa Birnie, The Canadian Press
