The Federal Communications Commission (FCC) in the United States has moved swiftly to prohibit robocalls that use AI-generated voices. The ruling, announced on Thursday, gives state authorities the power to prosecute the perpetrators behind such calls.
The FCC’s decision responds to a surge in robocalls that mimic the voices of celebrities and political figures, exploiting AI voice-cloning technology for unsolicited calls. FCC Chairwoman Jessica Rosenworcel highlighted the misuse of AI-generated voices, citing instances in which bad actors extort vulnerable individuals, impersonate celebrities, and spread misinformation to voters. The move is particularly pertinent after a recent incident in New Hampshire, where voters received robocalls impersonating US President Joe Biden and discouraging participation in the state’s presidential primary.
The FCC argues that these deceptive calls, capable of imitating public figures and even family members, can mislead consumers with false information. While state attorneys general already have the authority to prosecute scams and fraud, the FCC’s action explicitly makes the use of AI-generated voices in robocalls illegal, giving prosecutors an additional legal avenue for holding perpetrators accountable.
The regulatory decision follows a January letter from attorneys general in 26 states urging the FCC to restrict the use of AI in marketing phone calls. Concerns about advances in the technology, particularly deepfakes that use AI to manipulate video or audio, have gained prominence globally, especially ahead of upcoming elections in countries including the US, UK, and India. The FCC’s move is part of a broader effort to curb the misuse of AI in consumer communications as deceptive practices continue to evolve.