If you open your phone right now and scroll through social media sites, you’ll likely see missiles streaking across the night sky. Or fireballs blooming over cities and buildings collapsing. Or crowds running in panic with frantic voiceovers in Arabic, English, or Hebrew.
Some of the video captions say the explosions are in Israel. Others say Iran. Others claim Lebanon, Syria, Gaza, or a U.S. base somewhere in the Gulf. The footage looks real. The sound is cinematic. The captions are confident. Thousands, sometimes millions, of people are sharing the clips as if they are watching the war unfold in real time.
But a growing portion of those videos are not real at all. Welcome, y’all, to the first major war of the AI video era, where the battlefield is not just in the Middle East. It’s in your phone.
And the weapon being deployed against the global public isn’t just missiles. It’s confusion. What’s unfolding before us is a deeper story of the collapse of informational certainty during war. The public is now experiencing war through an information environment where reality itself feels unstable.
For decades, scholars and other experts have warned about propaganda. Governments have always tried to shape how wars are perceived. During World War II, propaganda posters flooded the public sphere. During Vietnam, television coverage reshaped public opinion. During the Iraq War, the U.S. media helped circulate the catastrophic fiction of weapons of mass destruction. But something fundamentally different is happening now.
In previous wars, propaganda flowed primarily from governments and large media institutions. It moved through identifiable channels. Today, information flows through millions of anonymous nodes like TikTok accounts, Telegram channels, Facebook and X feeds, Instagram reels, AI video generators, bot networks, influencers, content creators, and state-sponsored disinformation operations.
And now that generative AI tools can create shockingly realistic video in seconds, the result is something unprecedented. We have a war where millions of people are staring at the same images while nobody can be completely certain what they’re looking at.
A missile strike could be real. It could be recycled footage from another war. It could be a video game clip. It could be AI-generated imagery created by somebody sitting in their mama’s basement thousands of miles away. The visual grammar of war (explosions, tracer fire, drones, burning skylines) is now easy to simulate. Which means that when a war actually breaks out, the internet is a whole hall of mirrors and folks no longer know whether they’re watching documentation, propaganda, or a very convincing fabrication.
And it doesn’t help that our established media ecosystem is already a hot mess. Newsrooms are understaffed, verification desks are overwhelmed, and the modern attention economy rewards speed over accuracy. Fabricated footage ricochets across platforms in minutes while journalists are still trying to figure out where the hell it came from, when it was filmed, and whether it’s even from the conflict people claim it is. By the time verification catches up, the clip has already been viewed millions of times and folded into somebody’s political narrative.
Layer on top of that an administration that lies about damn near everything. When governments repeatedly manipulate facts, exaggerate threats, or spin events for strategic purposes, they corrode the one thing war reporting depends on: baseline trust in official information.
One minute Donald Trump is declaring that Iran’s military capacity has been “totally destroyed.” The next minute your phone lights up with footage of explosions over Tel Aviv and Jerusalem. So the public is left trying to reconcile two completely different signals: triumphant political messaging on one hand, and images of missiles lighting up the night sky on the other.
That gap is where distrust grows. Because once people begin to suspect that official statements are more about narrative management than factual reporting, every claim coming from the government becomes suspect. Even true information gets filtered through skepticism. The public starts asking: Is this real? Is this propaganda? Is this spin? And in the middle of an actual war, that breakdown of credibility makes an already chaotic information environment even harder to navigate.
So now you have three forces colliding at once: viral footage that can be staged or simulated, a media ecosystem racing to keep up with the algorithm, and political leaders whose credibility ain’t worth shit. The result is an information environment where the public is trying to interpret a war through a fog of spectacle, spin, and half-verified information.
I spent much of yesterday afternoon researching what news outlets had to say about this. The coverage has been mostly fragmented. Major outlets like BBC News, Reuters, and The New York Times have reported on deepfakes and manipulated footage in recent conflicts, from the war in Ukraine to misinformation circulating around the Gaza war and election propaganda. But those stories typically focus on individual examples, such as the widely shared fake video in 2022 that appeared to show Ukrainian President Volodymyr Zelenskyy urging his troops to surrender, or AI-generated battlefield imagery spreading across social media.
Other outlets have touched on what some analysts call the “information fog” of war: The Atlantic has examined how dramatic war footage spreads across platforms faster than journalists can verify it, MIT Technology Review has explored how generative AI is making fact-checking harder, and The Washington Post has reported on the growing misinformation ecosystems that form around conflicts. But most of that coverage stays focused on the technology, asking how journalists and platforms can detect fake videos, rather than examining the deeper psychological consequences of an information environment where millions of people are watching the same clips and no one is certain whether they are real.
Meanwhile, the broader concept of information warfare has long been discussed in military and policy circles by institutions such as the RAND Corporation and the Brookings Institution, which analyze propaganda, cyber influence campaigns, and narrative manipulation. But those discussions usually live inside defense journals and policy reports, far removed from the everyday information environment most people inhabit.
And we also have to talk about the role of these creepy tech bros in all this. Because none of this chaos is happening in a vacuum. The same companies that host the videos, amplify the rumors, and algorithmically push the most sensational clips to the top of people’s feeds know exactly what kind of information environment they’ve created. Their own researchers have spent years documenting how emotionally charged content, especially war footage, outrage, and conspiracy, travels farther and faster than verified reporting. But the platforms are built to maximize engagement, not clarity.
That means the algorithm doesn’t necessarily reward what’s accurate. It rewards what’s dramatic, shocking, and shareable. A carefully verified report from a journalist will almost always lose the race to a grainy clip of explosions and a caption screaming that the world is ending. The leadership of these companies knows this, and they have aligned themselves politically with an administration that openly attacks traditional journalism while relying heavily on social media ecosystems to shape public perception. When governments erode trust in professional reporting and tech platforms simultaneously amplify spectacle over verification, the two forces start reinforcing each other.
Traditional reporting is weakened. Algorithmic noise fills the vacuum. And the public is left trying to understand war through a feed designed primarily to keep them scrolling. In that environment, people’s confusion is part of the system.
So how do ordinary people translate all these abstract ideas into their daily lived experiences? What does information warfare look like to somebody sitting on their couch, scrolling through their phone late at night? This is not just a technological problem. It is a psychological one.
Neuroscience tells us that human beings are wired to trust visual evidence. Seeing has always carried a special authority. Photographs and video feel like proof. But that cognitive instinct is now colliding with the technological reality I’ve just described. And verification takes time. Journalists need to geolocate footage, examine shadows, cross-check timestamps, and confirm details with sources on the ground. That process can take hours, or sometimes days.
By the time professional verification catches up, the video has already done its work: shaping opinion, fueling outrage, and hardening narratives long before anyone can say with certainty what actually happened. Meanwhile, the algorithmic economy of social media rewards speed, not accuracy. Platforms elevate content that generates immediate emotional reaction: fear, anger, outrage, shock. By the time anyone verifies whether a clip is authentic, millions of people have already seen it and absorbed its emotional impact.
Most people scrolling through social media aren’t trained to do the kind of verification work journalists do. They don’t have the tools, the time, or the media literacy skills to investigate where a clip came from or whether it’s been manipulated. They’re encountering these images in the middle of a busy day, between work emails, family texts, and whatever the algorithm decides to throw in front of them next. So the brain does what it has always done: it sees the image, feels the emotional impact, and assumes it must be real.
But even when some people suspect a video might not be real, the brain is still processing the image. The visual system has already fired. The amygdala (the brain’s alarm system) has already registered the emotional cues: danger, destruction, fear, outrage. Those reactions happen milliseconds before the rational part of the brain has time to analyze the content.
This is part of what psychologists call “motivated reasoning.” Once an image triggers an emotional response or confirms something a person already believes, the brain often shifts into a mode of defending that interpretation rather than carefully questioning it. The prefrontal cortex can evaluate evidence, but it is frequently recruited to justify the feeling that came first, not to neutralize it.
So even when somebody says, “That video might be fake,” the image may already have done its psychological work on them. The brain has absorbed the emotional signal and the narrative has started forming.
And even for folks who are media literate, who understand how propaganda works, who know how easily footage can be manipulated, and who have some awareness of these psychological processes, the exposure itself is still exhausting. The brain and nervous system don’t get to opt out of all that sensory input. Each new image still has to be processed, evaluated, questioned, and mentally sorted into real or fake, current or recycled, evidence or manipulation.
Over time, that constant cognitive filtering creates its own kind of fatigue. The mind gets stuck in a loop of vigilance, watching, doubting, checking, second-guessing. And when that happens across thousands of posts, clips, and headlines, the result is not clarity but freakin’ exhaustion. Even people trying to think critically can feel overwhelmed by the sheer volume of imagery and claims they are expected to evaluate.
Psychologists sometimes call the result epistemic fatigue. What is that? It’s the mental exhaustion that comes from constantly trying to figure out what’s true and what isn’t in an environment flooded with competing claims, manipulated images, and contradictory information. Put more simply, it’s what happens when your brain gets worn the hell out from having to verify reality all the damn time.
When people are bombarded with conflicting information (real footage, fake footage, propaganda, counter-propaganda), they eventually stop trying to sort it out. The brain becomes overwhelmed. At that point, people don’t necessarily believe what is true. They start to believe what feels true. And that, my friends, is where the deeper danger lies.
The goal of modern information warfare is not always to convince you of a specific narrative. Oftentimes, the goal is simply to make you doubt that any reliable narrative exists at all. Read that sentence again.
If you can’t persuade people that your version of events is true, the next best move is to convince them that nothing can be known for sure! So you flood the information space with enough conflicting claims, doctored videos, recycled footage, and conspiracy theories, and eventually people stop trying to sort it out. The result is confusion. And confusion is politically useful. People trust whatever aligns with their political identity and dismiss everything else as manipulation. Truth becomes less important than allegiance.
And when people feel like the truth is impossible to determine, they also disengage. They throw up their hands and say, “Fuck it! Who knows what’s really happening?” They retreat into tribal loyalties or simply tune out altogether. That’s exactly what modern information warfare is designed to produce.
It’s less about winning the argument than it is about polluting the environment in which arguments happen. Instead of one clear story competing with another, the information space becomes so crowded with noise that the very idea of objective evidence starts to erode. Once that happens, power fills the vacuum. Because in a world where nobody trusts what they see, hear, or read, the advantage shifts to whoever has the loudest megaphone, the fastest propaganda machine, or the greatest power to shape the narrative before anybody can verify the facts.
Complicating all of this is the growing credibility crisis surrounding Western media. For many viewers around the world, and for many Americans, the trust that once existed between the public and major news institutions has eroded dramatically. That erosion didn’t happen overnight. It happened after decades of highly visible failures. The Iraq War coverage that repeated government claims about weapons of mass destruction. The selective framing of foreign conflicts. The editorial blind spots around colonial violence and geopolitical alliances.
So when people see social media footage that appears to contradict official narratives, their instinct is often to assume that mainstream outlets are hiding something. Sometimes that suspicion is justified. Sometimes it isn’t. But the result is the same. We have a fractured information ecosystem where official reporting, citizen footage, AI-generated content, and outright propaganda all circulate simultaneously. In that environment, even legitimate journalism struggles to compete. Because reality itself is now forced to compete with simulations.
The first casualty of war has always been truth. And uncertainty itself becomes part of the experience of war. It creates anxiety. It feeds conspiracy theories. It erodes trust in institutions. And it amplifies the very polarization that makes conflict harder to resolve. Because if reality itself becomes unstable, if people cannot agree on what they are seeing, then democratic societies lose one of the most basic conditions necessary for collective decision-making, which is a shared understanding of events.
Yes, war has always been violent. Now it is also informationally chaotic. Every major geopolitical actor understands that controlling the narrative of a conflict can be nearly as important as winning battles on the ground. And when that happens, the battlefield expands. Not just across borders. Not just across cities. But across the human mind itself.
Thanks for reading. If this piece resonated with you, then please consider becoming a paid subscriber. Paid subscriptions help keep my Substack unfiltered and ad free. They also help me raise money for HBCU journalism students who need laptops, DSLR cameras, tripods, mics, lights, software, travel funds for conferences and reporting trips, and food from our pantry. You can also follow me on Facebook!
We appreciate you!