A new warning from the World Health Organization says Europe is embracing health-care AI without enough protection for patients or staff. The report highlights sharp differences across 50 countries in how tools are being tested, funded, and regulated.
The findings show fast adoption but weak safeguards. Half of the surveyed nations already use AI chatbots to support patients, and 32 rely on AI diagnostics, especially for imaging and disease detection. Others are experimenting with AI for screening, pathology, mental health support, data analysis, and planning.
Several countries are pushing ahead with new projects. Spain is trialling systems that speed up early detection of serious illnesses. Finland is using AI to train its health workforce. Estonia is applying advanced tools to improve data analysis across services.
Yet the report warns that this progress rests on weak foundations. Only 14 countries have allocated funding to support their AI plans, and just four (Andorra, Finland, Slovakia, and Sweden) have a dedicated national strategy for AI in health.
Dr Hans Kluge, WHO regional director for Europe, says weak rules risk deepening existing inequalities. He argues that Europe needs strong privacy protections, clear legal safeguards, and better training to ensure AI helps rather than harms.
Experts say the biggest danger lies in faulty or biased datasets, which can lead to wrong diagnoses or flawed treatment decisions. The report urges governments to define responsibility for errors caused by AI systems.
WHO adviser Dr David Novillo Ortiz adds that unclear standards may already be deterring health workers from adopting AI tools. He says Europe must test systems for safety, fairness, and real-world performance before they reach patients.
The report concludes that Europe must align AI development with public health goals, strengthen laws, and maintain transparent communication with citizens.