The teen was just playing Counter-Strike when it started. First, a few edgy jokes in voice chat. Then links to “funny” videos. Before his parents knew what happened, their 15-year-old son was printing neo-Nazi propaganda in their home office.
This isn't a one-off story. Gaming platform extremism has exploded across popular platforms like Steam, Discord, and Twitch, and extremists increasingly rely on AI tools to identify and recruit vulnerable young players. Artificial intelligence makes these tactics more sophisticated and harder to detect than ever before.
Recent research reveals staggering numbers that should concern every parent and gamer. More than 1.8 million unique pieces of extremist or hateful content, including explicitly antisemitic, neo-Nazi and Islamist terrorist material, were identified on Steam, the world’s largest and most popular online gaming marketplace, according to the Anti-Defamation League’s 2024 analysis.
But here's what's truly frightening: more than half of all game players report experiencing some form of hate, harassment, or abuse within gaming spaces. Given that the gaming industry has roughly 2.8 billion users, a majority of them young, that presents a massive target audience for radicalization; we're looking at a crisis affecting hundreds of millions of young people worldwide.
The United Nations Counter-Terrorism Centre recently confirmed what researchers have been warning about for years. In Australia alone, approximately one in five counter-terrorism cases now involves a young person, with gaming platforms playing a role in every investigated case.
Artificial intelligence isn't just changing how we play games; it's transforming how extremists target and radicalize young gamers. The intersection of AI and gaming platform extremism creates unprecedented challenges that traditional moderation can't handle.
Extremists can use generative AI to create all manner of content: propaganda videos, images, music, and translations. Material that previously took weeks or months to produce, and required a degree of technical expertise, can now be created by anyone.
Think about it: a single extremist can now generate hundreds of memes, create deepfake videos of historical figures, or produce multilingual propaganda, all without technical skills. As a result, this content floods gaming platforms faster than human moderators can review it.
Extremists can identify those sympathetic to their ideology using AI-powered algorithms that analyze enormous volumes of online data for patterns and signs of possible radicalization. Consequently, gaming platforms become hunting grounds where AI systems flag vulnerable players based on their chat patterns, game choices, and social connections.
Extremists could also program chatbots to mimic the worldview of their propagandists. AI-generated accounts could be deployed on gaming platforms to identify and attract potential recruits, engaging in seemingly normal conversations while gradually introducing extremist viewpoints.
Gaming platform extremism doesn’t happen in dark corners of the internet. Instead, extremists operate openly on mainstream platforms that millions of kids use daily.
Steam emerges as the platform with the most serious extremist infiltration. Researchers found two Steam groups expressly affiliated with violent extremist organizations, including one tied to the Nordic Resistance Movement (NRM), which was connected to a series of bombings in Gothenburg in 2016 and 2017.
Research shows that these groups were easy to find, with just the most basic keyword searches for racial slurs and Nazi terminology. Moreover, some of these communities have operated for years without significant interference.
While Discord has improved its moderation, researchers have documented that the platform has been repeatedly used by various extremist groups for community building and recruitment. Additionally, the platform’s private server structure makes it difficult for outsiders to monitor extremist activity.
Gaming and gaming-adjacent platforms like Steam, Twitch, and Discord have gained notoriety in part for being exploited by extremists who see them as fertile ground for spreading their ideologies. Furthermore, these platforms often lack the robust moderation systems found on traditional social media.
Understanding how gaming platform extremism operates helps parents and platforms recognize warning signs before it’s too late.
Initial interactions might take place in online gaming communities where individuals bond over shared interests. These interactions can then migrate to less regulated gaming-adjacent platforms, allowing extremist rhetoric to proliferate unchecked.
Here’s how the process typically unfolds:
Stage 1: Normal Gaming Interaction. Extremists join popular games and participate normally, building trust with other players through shared gaming experiences.
Stage 2: Testing the Waters. They'll start out dropping slurs about different races or religions and kind of test the waters… Once they sense that they've got their hooks in a target, they ramp it up.
Stage 3: Off-Platform Migration. Recruiters move promising targets to private Discord servers, Telegram groups, or other platforms with less oversight.
Stage 4: Full Indoctrination. Through AI-generated content and coordinated messaging, they expose targets to increasingly extreme ideologies.
Far-right organizations adopt gaming subculture to lure young gamers into their extremist worldviews. Specifically, extremists exploit gaming terminology, create game modifications with extremist themes, and organize gaming events that double as recruitment drives.
Gaming platform extremism isn’t just online harassment—it has deadly real-world consequences.
In some cases, users posting this content on Steam have subsequently committed acts of offline violence, like the 18-year-old who attacked a café in Turkey in August 2024. Similarly, the Christchurch mosque shootings in 2019 exemplify the devastating impact of far-right extremist propaganda and the role of social media and AI in amplifying such ideologies.
These platforms don’t just spread hate—they actively contribute to real violence through systematic radicalization processes.
Despite these challenges, there are concrete steps you can take to protect yourself and your family from gaming platform extremism.
Monitor Gaming Communications: Know which games, servers, and voice channels your child uses, and pay attention to who they play and chat with regularly.
Educate About Red Flags: Teach your children to recognize recruitment tactics, such as edgy jokes that escalate into slurs, links to "funny" videos that lead to propaganda, and invitations to move to private servers.
Use Parental Controls Effectively: Platform-level settings can restrict who is able to message your child or pull them into private groups; review them on each service your child uses.
Report Suspicious Activity: Most platforms have reporting mechanisms, though enforcement varies. Document extremist content and report it to platform administrators (a minimal evidence-log sketch follows this list).
Don't Engage with Extremist Content: Engagement, even negative engagement, can signal to AI-driven recommendation algorithms that you're interested in similar content. Instead, block and report users spreading extremist material.
Build Positive Gaming Communities: Support inclusive gaming groups and speak up against hate speech when you encounter it.
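To make the documentation step concrete, here is a minimal Python sketch of a local evidence log. The file name, helper function, and example account are hypothetical illustrations, not any platform's tooling; the idea is simply to timestamp each observation in UTC and fingerprint screenshots so records are harder to dispute later.

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

LOG_FILE = Path("extremism_reports.jsonl")  # hypothetical local log file

def log_incident(platform: str, username: str, description: str,
                 screenshot_path: str | None = None) -> dict:
    """Append one evidence record to a local JSON Lines log."""
    record = {
        # UTC timestamp keeps records comparable across time zones
        "observed_at": datetime.now(timezone.utc).isoformat(),
        "platform": platform,
        "username": username,
        "description": description,
    }
    if screenshot_path:
        # Hash the screenshot so any later tampering is detectable
        data = Path(screenshot_path).read_bytes()
        record["screenshot_sha256"] = hashlib.sha256(data).hexdigest()
        record["screenshot_file"] = screenshot_path
    with LOG_FILE.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record

# Example: document a recruitment attempt before filing an in-platform report
log_incident(
    platform="Discord",
    username="example_recruiter#0000",  # hypothetical account name
    description="Invited minors to a private server after posting slurs",
)
```

Appending to a JSON Lines file keeps one timestamped entry per incident, a format that platform trust-and-safety teams (and, if necessary, law enforcement) can review in order.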
Gaming companies are finally starting to address extremist exploitation, but progress remains slow and inconsistent.
Industry representatives say they are looking to develop and deploy advanced, AI-based content moderation tools. At the same time, because the gaming community is full of personalities with large followings, they want to avoid sweeping takedowns or mass actions that could prove counterproductive.
Some platforms have implemented stronger moderation measures. However, significant obstacles remain.
When platforms have cracked down on and moderated extreme-right discussion, activity has decreased, showing that enforcement works when implemented consistently.
The intersection of AI and gaming platform extremism will only become more complex as technology advances.
Virtual Reality Exploitation: Virtual reality (VR) games like VRChat, which allow real-time, personalized interactions, give extremists a window to recruit young, vulnerable players by engaging them in modified settings that promote particular ideologies.
Deepfake Technology: Deepfakes (images, videos, or recordings digitally modified to misrepresent someone's words or actions) enable extremists to create convincing fake content featuring dead leaders or fabricated evidence.
Gaming as Training Grounds: Games like Arma 3 and Escape from Tarkov have become virtual training grounds where users can simulate tactical engagements in realistic environments.
Industry Collaboration: An unprecedented collaboration between counter-terrorism specialists and gaming companies suggests hope for coordinated responses.
AI-Powered Defense: Just as extremists use AI for recruitment, platforms can deploy AI for detection and prevention. Advanced algorithms can identify extremist content patterns and recruitment behaviors in real time (a toy sketch of this idea follows this list).
Regulatory Pressure: Government attention is increasing, with gaming companies coordinating with the FBI and the Department of Homeland Security to root out so-called domestic violent extremist content.
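To illustrate the defensive side, here is a toy Python sketch of the kind of heuristic layer that could route suspicious chat to human reviewers. The phrases, pattern names, and threshold are invented for illustration; real trust-and-safety systems rely on trained classifiers and behavioral context that a hand-written list cannot capture.

```python
import re
from dataclasses import dataclass, field

# Invented placeholder patterns; a production system would use trained
# models and human review, not a hand-written list like this one.
PATTERNS = {
    "off_platform_invite": re.compile(
        r"join my (discord|telegram) server|dm me for the link", re.I),
    "us_vs_them_framing": re.compile(
        r"they are replacing us|wake up to the truth", re.I),
    "secrecy_pressure": re.compile(
        r"don't tell your parents|keep this between us", re.I),
}

@dataclass
class UserSignal:
    """Rolling per-user tally of heuristic hits across a chat session."""
    hits: dict = field(default_factory=dict)

    def score_message(self, text: str) -> int:
        for name, pattern in PATTERNS.items():
            if pattern.search(text):
                self.hits[name] = self.hits.get(name, 0) + 1
        # Escalation matters more than any single message: flag a user
        # only when several *different* behaviors co-occur.
        return len(self.hits)

signal = UserSignal()
for message in [
    "gg, nice clutch",
    "they are replacing us, wake up to the truth",
    "dm me for the link, and don't tell your parents",
]:
    if signal.score_message(message) >= 2:
        print("Flag for human review:", message)
```

Flagging on co-occurring behaviors rather than single keywords mirrors the staged recruitment process described earlier: it is the escalation across messages, not any one phrase, that separates grooming from ordinary trash talk.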
Gaming platform extremism represents one of the most serious digital threats facing young people today. AI amplifies both the reach and sophistication of extremist recruitment, while the gaming industry’s historically lax moderation creates perfect conditions for radicalization.
Yet this isn’t a reason to abandon gaming entirely. Instead, we need informed awareness, proactive protection measures, and sustained pressure on platforms to prioritize user safety over profits.
The teens being targeted today will shape tomorrow’s society. Whether they’re radicalized by extremists or protected by vigilant communities depends on actions we take right now.
Every parent checking their child’s gaming activity, every platform improving moderation, and every gamer reporting extremist content contributes to solving this crisis. Because in the fight against gaming platform extremism, we’re not just protecting individual players—we’re defending democracy itself.