Islamic State and other extremist groups are increasingly using artificial intelligence to spread propaganda and recruit supporters worldwide.
Security agencies warn that cheap and accessible AI tools are changing how militant organisations operate online.
Fake images, real impact
Since tools such as ChatGPT became publicly available, extremist groups have experimented with generative AI to create convincing photos, videos, and audio.
These materials often include deepfakes designed to look authentic and emotionally powerful.
Experts say such content can mislead audiences, provoke outrage, and draw vulnerable individuals into violent ideologies.
Recruitment goes digital
Islamic State, now a decentralised network after losing territory in Iraq and Syria, has long exploited social media.
AI now allows faster translation, automated messaging, and realistic visuals, helping small groups reach global audiences.
A recent online post urged IS supporters to embrace AI, calling the technology easy to use and effective.
Recent attacks amplified
After major attacks, AI-generated propaganda has circulated widely, most notably in the aftermath of a deadly concert hall shooting in Russia.
Similar tactics were used during the Israel-Hamas war, spreading fake images that fuelled polarisation and hatred.
These campaigns have bolstered recruitment efforts across the Middle East, the United States, and beyond.
Cyber and security fears
Experts warn AI could also strengthen cyberattacks, phishing schemes, and impersonation of officials or business leaders.
US authorities fear militants may eventually use AI to assist chemical or biological weapon development.
Political pressure builds
Lawmakers in Washington are pushing for tougher oversight and better information sharing with AI developers.
New legislation would require annual assessments of AI risks linked to extremist groups.
Officials say policies must evolve quickly to match fast-moving technological threats.