The Issue: Scroll through your Facebook, Instagram, or TikTok feed today, and there is a real chance that one of the videos that tugs on your heartstrings, sparks outrage, or fills you with awe was never filmed by a camera at all. AI-generated videos went viral across social media in 2024-2025, with everything from fabricated historical scenes to staged "real-life" moments being uploaded around the world. A recent investigation (Abdullahi et al., 2025) found that such AI-generated videos, including fake footage of Holocaust victims and staged camp scenes, are being used to chase clicks and ad revenue, causing deep pain to Holocaust survivors and distorting the historical record.
Figure 1 AI-generated painting of famous artworks being applied onto toast. (Source: https://v.douyin.com/xLx3wn2IvSk/)
In this blog, we will look at why this issue matters, why it is happening, and how key social communication theories can help us understand it. By the end, you'll know the hidden dangers of AI-generated video content and what can be done to address them. Statistics from the International Holocaust Remembrance Alliance point to a worrying situation: more than 60% of social media users who encounter AI-synthesized content find it hard to distinguish from authentic footage, turning this into a crisis of belief and truth in the digital age.
The Rise of AI-Generated Video: Profit, Virality, and Harm
AI-generated videos, created using tools that turn text into realistic moving images, have become an effective way to generate income for content creators all over the world.
Figure 2 Profit from AI-generated videos (Source: www.baidu.com)
Investigators traced many of the troublesome posts to creators in Pakistan and other countries, where viral Facebook content can generate up to $1,000 per month, with Western views valued far more than Asian ones (Abdullahi et al., 2025). This incentive model results in a flood of low-quality AI slop, including fabricated historical content that crosses ethical lines. For instance, AI videos of fake Auschwitz victims with fictional names and backstories have been shared thousands of times, causing great pain to survivors and their families. Shaina Brander, whose grandmother survived the Holocaust, told AFP that these videos feel like "mocking our loss" (Abdullahi et al., 2025). Beyond this historical damage, AI-generated videos pose broader risks: they can spread misinformation, manipulate public opinion, and erode people's confidence that seeing something with their own eyes means it is true.
Figure 3 Real history and virtual images (Source: www.baidu.com)
Theories Behind the Spread: Why AI Videos Go Viral and Harm Deeply
To understand this issue, we can turn to two key social communication theories: Hypodermic Needle Theory and Media Dependency Theory.
To start with, Hypodermic Needle Theory helps explain how AI-generated videos can have such a powerful impact on audiences. This theory suggests that media messages are "injected" directly into passive audiences, who have little choice but to absorb them (Mehrad et al., 2020). On social media, AI videos are often designed to elicit strong emotional responses: sadness, anger, empathy. These emotions bypass critical thinking. Since most platforms algorithmically prioritize engaging content, these emotionally charged videos are amplified, reaching millions before fact-checkers can respond. As Abdullahi et al. (2025) note, fake Holocaust videos spread rapidly because they tap into universal emotions, and users often share them without verifying their authenticity. This aligns with the Hypodermic Needle Theory's core idea: media can shape audience perceptions quickly and powerfully, especially when the message is emotionally charged.
Figure 4 Hypodermic Needle Theory (Source: www.baidu.com)
Moreover, Media Dependency Theory explains why users are so easily influenced by AI-generated false information. This theory argues that audiences depend on media for information, especially in complex or uncertain social environments (Jung et al., 2025). In today's digital world, social media is a primary source of news and historical information for many people, particularly younger generations. When AI videos closely resemble real clips, users still rely on these platforms to understand the background of events, and that reliance becomes a dependency that bad actors can exploit. For instance, Holocaust survivors feel that their work of educating the public is being undermined by AI-generated fake content, while young users may struggle to distinguish real history from synthetic content (Chesney & Citron, 2019).
The spread of harmful AI-generated videos has far-reaching implications. For individuals, it creates confusion about what is real, leading to emotional distress and a loss of trust in digital content. For society, it threatens historical memory, as fake videos distort events like the Holocaust and risk normalizing misinformation. For businesses like Meta (Facebook’s parent company), it raises ethical questions about platform responsibility. Currently, Meta argues that AI-generated Holocaust videos don’t technically violate its policies but has removed some accounts for spam (Abdullahi et al., 2025). Critics have described this response as inadequate.
Figure 5 Actions (Source: www.baidu.com)
Figure 6 AI-generated video of an apple being sliced (Source: https://v.douyin.com/mEaT30wqUQs/)
In conclusion, AI-generated videos are a powerful example of how social media can both innovate and harm. When used responsibly, AI can create creative, educational content that connects people. But when weaponized for profit or misinformation, it becomes a threat to truth, memory, and empathy. As we've explored, theories like the Hypodermic Needle and Media Dependency help us understand why these videos spread so quickly and hurt so deeply, and how we can push back. The key point is that responsibility doesn't fall on one group alone: platforms, governments, creators, and users all have a role to play in ensuring AI serves the public good, not just clicks and cash.
Figure 7 Share your thoughts (Source: www.baidu.com)
Have you encountered AI-generated videos on social media, and how did you respond? Do you believe platforms are doing enough to regulate this content? Share your thoughts in the comments below; your perspective could help shape how we address this critical issue.
Article Link: https://www.eweek.com/news/ai-holocaust-fakes-facebook/
Abdullahi, A., Crouse, M., Ticong, L., & Shein, E. (2025). AI-generated Holocaust images flood social media, causing pain and distorting history. eWeek. https://www.eweek.com/news/ai-holocaust-fakes-facebook/

