US lawmakers are pushing for new legislation to criminalize the creation of deepfake images after explicit, fabricated photos of Taylor Swift spread widely online, garnering millions of views. The images circulated on several social media platforms, including X and Telegram.
Representative Joe Morelle expressed his dismay, labeling the spread of such pictures as “appalling.”
Despite many images being removed, one particular photo of Swift reportedly accumulated 47 million views before being taken down.
Deepfakes use artificial intelligence (AI) to fabricate or alter a person's face or body in images and video. A 2023 study found a 550% rise in the creation of doctored images since 2019, driven by advances in AI. Notably, no federal law specifically addresses the creation or sharing of deepfake images, although some states are taking steps to tackle the issue.
Democratic Representative Morelle, who previously introduced the Preventing Deepfakes of Intimate Images Act, called for urgent action. He highlighted the potential for deepfake images and videos to cause irreparable harm, particularly impacting women, who constitute 99% of those targeted in deepfake pornography, as reported by the State of Deepfakes study.
Representatives Yvette D. Clarke and Tom Kean Jr. echoed concerns that the rapid advancement of AI technology is outpacing necessary safeguards.
Taylor Swift’s team is reportedly considering legal action against the site that published the AI-generated images.
Meanwhile, concerns about AI-generated content have intensified amid global elections, including a recent investigation into a robocall that used an AI-generated voice impersonating President Joe Biden.