Researchers warn that advanced AI systems can now imitate human survey behaviour with alarming accuracy. A new Dartmouth College study shows that large language models can defeat standard safeguards and distort online polling at scale. The finding highlights a major risk for public opinion data, which often guides political and scientific decisions.
Lead author Sean Westwood says survey platforms can no longer assume their responses come from genuine participants. His team built a simple tool, driven by a short prompt, that generated believable demographic profiles and realistic behaviour. The system copied human reading rhythms, produced natural typing patterns, and even inserted convincing errors.
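To see why such behaviour is hard to distinguish from a real respondent, consider how keystroke timing might be simulated. The sketch below is purely illustrative: the function name, parameter values, and delay model are assumptions for exposition, not details from the study.

```python
import random

def humanlike_delays(text, base=0.12, jitter=0.05, seed=42):
    """Generate per-keystroke delays (in seconds) with human-like variance.

    Illustrative sketch only: the base delay, jitter, and word-boundary
    pause are invented parameters, not values from the Dartmouth study.
    """
    rng = random.Random(seed)  # seeded for reproducibility
    delays = []
    for ch in text:
        d = rng.gauss(base, jitter)  # Gaussian variation around the base delay
        if ch == " ":
            d += 0.04                # slight extra pause between words
        delays.append(max(0.02, d))  # clamp to a plausible physical floor
    return delays

delays = humanlike_delays("strongly agree")
print(len(delays))  # one delay per keystroke
```

A detector that only checks for uniform, machine-fast typing would see nothing unusual in output like this, which is the kind of signal the study suggests current bot checks rely on.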
In more than 43,000 trials, the synthetic respondent deceived almost every system it faced. It passed logic puzzles without mistakes and bypassed common protections, including reCAPTCHA. The tool operated cheaply and could be deployed in large numbers, raising fears about organised manipulation.
The researchers warn that elections could face added uncertainty as AI expands disinformation efforts. Recent European contests have already seen rising automated interference, increasing pressure on polling agencies. The study used the 2024 United States presidential race to show how little input is needed to sway projected outcomes.
Only a few dozen fake entries were enough to flip seven major national polls during a decisive week. Each fake response cost just a few cents, making large-scale meddling both feasible and inexpensive. The tool also produced fluent English answers even when it was prompted in other languages, deepening concerns about foreign interference.
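The economics are easy to verify with back-of-envelope arithmetic. The specific figures below are assumptions chosen to match the article's phrasing ("a few dozen" entries at "a few cents" each), not numbers reported by the study.

```python
# Back-of-envelope cost of skewing one national poll.
# Both figures are illustrative assumptions, not data from the study.
cost_per_response = 0.05  # "a few cents" per fake response (assumed $0.05)
responses_needed = 50     # "a few dozen" fake entries (assumed 50)

total = cost_per_response * responses_needed
print(f"${total:.2f}")    # total outlay for the assumed scenario
```

Even if both assumed figures were off by an order of magnitude, the cost would remain trivial next to the budgets of organised influence operations.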
The findings raise serious questions for scientific research that relies on large online samples. Westwood argues that stronger verification methods already exist and urges their rapid adoption. He says decisive action could protect polling credibility and preserve democratic accountability.