Why Are AI Fake Videos Viral? Can We Trust What We Watch?

2025-11-24
Overview: AI-Generated Videos on Social Media: Hidden Risks to History, and Can We Tell Fact from Fake?

The Issue: Scroll through your Facebook, Instagram, or TikTok feed today, and there is a decent chance that a video that tugs on your heartstrings, sparks outrage, or fills you with awe was never captured by a camera at all. AI-generated videos went viral on social media in 2024-2025, with everything from fabricated historical scenes to staged "real life" moments uploaded around the world. A recent investigation (Abdullahi et al., 2025) found that such AI-generated videos, including fakes depicting Holocaust victims and staged camp scenes, are being used to harvest clicks and money, causing deep pain to Holocaust survivors and distorting what really happened.

Figure 1 AI-generated painting of famous artworks being applied onto toast. (Source: https://v.douyin.com/xLx3wn2IvSk/)

In this blog, we will examine why this issue matters, why it is happening, and which social communication theories can help us make sense of it. By the end, you will know the hidden dangers of AI-generated video content and what can be done to address them. Statistics from the International Holocaust Remembrance Alliance paint a worrying picture: more than 60% of social media users who encounter AI-synthesized content find it hard to distinguish from authentic footage, turning this into a crisis of belief and truth in the digital age.

The Rise of AI-Generated Video: Profit, Virality, and Harm

AI-generated videos, created using tools that turn text into realistic moving images, have become an effective way to generate income for content creators all over the world.

Figure 2 Profit from AI-generated videos (Source: www.baidu.com)

Investigators traced many of the troublesome posts to creators in Pakistan and other countries, where viral Facebook content can generate up to $1,000 per month, with Western views valued far more than Asian ones (Abdullahi et al., 2025). This incentive model results in a flood of low-quality AI slop, including fabricated historical content that crosses ethical lines. For instance, AI videos of fake Auschwitz victims with fictional names and backstories have been shared thousands of times, causing great pain to survivors and their families. Shaina Brander, whose grandmother survived the Holocaust, told AFP that these videos feel like "mocking our loss" (Abdullahi et al., 2025). Beyond historical damage, AI-generated videos pose broader risks: they can spread misinformation, manipulate public opinion, and erode people's ability to trust what they see with their own eyes, once a reliable benchmark for truth.

Figure 3 Real history and virtual images (Source: www.baidu.com)

Theories Behind the Spread: Why AI Videos Go Viral and Harm Deeply

To understand this issue, we can turn to two key social communication theories: Hypodermic Needle Theory and Media Dependency Theory.

To start with, Hypodermic Needle Theory helps explain how AI-generated videos can have such a powerful impact on the audience. This theory suggests that media messages are "injected" directly into passive audiences, who have little choice but to absorb them (Mehrad et al., 2020). On social media, AI videos are often designed to elicit strong emotional responses: sadness, anger, empathy. These emotions bypass critical thinking. Since most platforms algorithmically prioritize engaging content, these emotionally charged videos are amplified, reaching millions before fact-checkers can respond. As Abdullahi et al. (2025) note, fake Holocaust videos spread rapidly because they tap into universal emotions, and users often share them without verifying their authenticity. This aligns with the Hypodermic Needle Theory's core idea: media can shape audience perceptions quickly and powerfully, especially when it feels emotional.

Figure 4 Hypodermic Needle Theory (Source: www.baidu.com)

Moreover, Media Dependency Theory explains why users are so easily influenced by AI-generated false information. This theory argues that audiences depend on media for information, especially in complex or uncertain social environments (Jung et al., 2025). In today's digital world, social media is a primary source of news and historical information for many people, particularly younger generations. When AI videos closely resemble real clips, users who rely on these platforms to understand the background of events form a dependency relationship, and bad actors exploit that dependency. For instance, Holocaust survivors feel that their work of educating the public has been undermined by AI-fabricated content, while young users may struggle to distinguish real history from synthetic content (Chesney & Citron, 2019).

Implications and Recommendations

The spread of harmful AI-generated videos has far-reaching implications. For individuals, it creates confusion about what is real, leading to emotional distress and a loss of trust in digital content. For society, it threatens historical memory, as fake videos distort events like the Holocaust and risk normalizing misinformation. For businesses like Meta (Facebook’s parent company), it raises ethical questions about platform responsibility. Currently, Meta argues that AI-generated Holocaust videos don’t technically violate its policies but has removed some accounts for spam (Abdullahi et al., 2025). Critics have described this response as inadequate.

Figure 5 Actions (Source: www.baidu.com)

To address this issue, multiple stakeholders must act. Firstly, social media platforms should update their policies to prohibit AI-generated content that distorts historical events or harms vulnerable communities. Using Socio-Technical Theory, platforms can optimize both technical systems (like AI detectors) and social practices (like user reporting tools) to create a more responsible environment (Baxter & Sommerville, 2011). Technical solutions, such as watermarking AI-generated content, can help users identify synthetic videos, while social measures, like public awareness campaigns, can educate audiences about verification. Secondly, governments should consider regulations that hold content creators and platforms accountable for spreading harmful AI content. The European Union's AI Act, for example, requires transparency for AI-generated media, a model other regions could adopt. Finally, users can take simple steps to protect themselves: verify content with trusted sources, look for watermarks or disclaimers, and avoid sharing videos that evoke intense emotions without fact-checking (Breakstone et al., 2021).

Figure 6 Video of an apple being sliced by AI (Source: https://v.douyin.com/mEaT30wqUQs/)

In conclusion, AI-generated videos are a powerful example of how social media can both innovate and harm. When used responsibly, AI can create creative, educational content that connects people. But when weaponized for profit or misinformation, it becomes a threat to truth, memory, and empathy. As we have explored, theories like Hypodermic Needle and Media Dependency help us understand why these videos spread so quickly and hurt so deeply, and how we can push back. The key is that responsibility does not fall on one group alone: platforms, governments, creators, and users all have a role to play in ensuring AI serves the public good, not just clicks and cash.

Figure 7 Share your thoughts (Source: www.baidu.com)

Have you encountered AI-generated videos on social media, and how did you respond? Do you believe platforms are doing enough to regulate this content? Share your thoughts in the comments below; your perspective could help shape how we address this critical issue.

Article Link: https://www.eweek.com/news/ai-holocaust-fakes-facebook/


References

Abdullahi, A., Crouse, M., Ticong, L., & Shein, E. (2025). AI-generated Holocaust images flood social media, causing pain and distorting history. eWeek. https://www.eweek.com/news/ai-holocaust-fakes-facebook/

Ball-Rokeach, S. J., & DeFleur, M. L. (1976). A dependency model of mass-media effects. Communication Research, 3(1), 3. https://doi.org/10.1177/00936502760030010

Baxter, G., & Sommerville, I. (2011). Socio-technical systems: From design methods to systems engineering. Interacting with Computers, 23(1), 4–17. https://doi.org/10.1016/j.intcom.2010.07.003

Chesney, B., & Citron, D. (2019). Deep fakes: A looming challenge for privacy, democracy, and national security. California Law Review, 107(6), 1753–1820.

Mehrad, J., Eftekhar, Z., & Goltaji, M. (2020). Vaccinating users against the hypodermic needle theory of social media: Libraries and improving media literacy. International Journal of Information Science and Management, 18(1), 17–24.

Breakstone, J., Smith, M., Connors, P., Ortega, T., Kerr, D., & Wineburg, S. (2021). Lateral reading: College students learn to critically evaluate internet sources in an online course. Harvard Kennedy School Misinformation Review, 2(1). https://doi.org/10.37016/mr-2020-56

Jung, J.-Y., Kim, Y.-C., Mai, L., Kwesell, A., & Lee, D. (2025). Perceived pervasive ambiguity in the onset of COVID-19: Media dependency and social media sharing in Seoul, Tokyo, and New York. International Communication Gazette, 87(7), 705–726. https://doi.org/10.1177/17480485241307549
