Artificial intelligence that tracks employees’ emotions using webcams and voice recognition will be banned in Europe under new AI regulations. From 2 August, websites in EU countries will also be prohibited from using AI to trick people into spending money.
The Artificial Intelligence Act, introduced by the European Commission last year, is the world’s first comprehensive AI regulation. It aims to protect individuals from AI-driven discrimination, manipulation, and harassment.
The Commission has also published guidelines clarifying how the prohibitions apply. “The ambition is to provide legal certainty for those who provide or deploy AI systems on the European market, also for market surveillance authorities. The guidelines are not legally binding,” a Commission official stated.
Among the prohibited practices are AI-enabled “dark patterns”: manipulative designs that pressure users into making significant financial commitments. AI systems that exploit vulnerabilities linked to a person’s age, disability, or economic situation will also be banned.

AI-driven social scoring, which rates individuals using unrelated personal data such as race or origin, will not be permitted. Law enforcement agencies will also be barred from predicting criminal behaviour on the basis of biometric data alone, unless such predictions can be verified. Employers will no longer be able to monitor workers’ emotions via webcams or voice recognition, and mobile AI-equipped CCTV used by law enforcement will be restricted to limited exceptions under strict safeguards.
EU member states must appoint market surveillance authorities by 2 August to enforce the rules. Companies that breach them face fines of up to 7% of their global annual revenue.
The EU’s strict AI regulation contrasts with the United States’ more relaxed, voluntary approach and China’s AI-driven social control measures. Some experts warn that rapid AI developments and global political shifts could render the guidelines outdated before enforcement begins.
Despite potential challenges, legal experts believe the EU will maintain its strong regulatory stance on AI, setting a precedent for global AI governance.