The European Union’s AI Act, the world’s first comprehensive regulatory framework for artificial intelligence, has made standardisation an urgent priority. The Dutch privacy watchdog, the Autoriteit Persoonsgegevens (AP), warns that the slow pace of developing standards for AI systems could hinder compliance efforts as key provisions of the AI Act begin to apply.
Sven Stevenson, director for coordination and supervision of algorithms at the AP, emphasised the need to accelerate the process. “Standards create certainty for companies to demonstrate compliance. Much work remains, and time is running out,” he told Euronews. Standardisation processes typically take years, but Stevenson insists they must be expedited to meet the AI Act’s deadlines.
The European Commission tasked standardisation organisations, including CEN-CENELEC and ETSI, with developing these standards in May 2023, but the work is still far from complete. The AI Act, which became law in August 2024, will apply in stages: rules for providers of general-purpose AI (GPAI), for instance, take effect in August 2025.
The AP, which also enforces GDPR compliance, is poised to share oversight of the AI Act with other regulators, such as the RDI, which oversees digital infrastructure. Highlighting its enforcement role, the AP fined Clearview AI €30.5 million in September 2024 for the illegal use of Europeans’ biometric data. In future cases, the AI Act could complement the GDPR, with its focus on product safety and harmonisation across EU member states.
To support businesses, the EU launched the AI Pact, which offers workshops and voluntary joint commitments. In the Netherlands, a regulatory sandbox, set to start in 2026, aims to guide companies working on high-impact AI systems through compliance. Meanwhile, a public register of algorithms used by the government, which are checked for discrimination and arbitrariness, adds transparency.
The clock is ticking as the EU strives to balance innovation with robust regulation.