Microsoft has published an impassioned plea to lawmakers to take rapid action against AI-generated deepfakes.
Warning that deepfakes are “increasingly being used for fraud, abuse, and manipulation,” the company’s Vice Chair and President, Brad Smith, has called on members of Congress to act now to protect the public.
He warned that senior citizens and children are particularly at risk and is urging lawmakers to create a “deepfake fraud statute” to make such crimes easier to prosecute.
Laws Need to Evolve
“While the tech sector and non-profit groups have taken recent steps to address this problem, it has become apparent that our laws will also need to evolve to combat deepfake fraud,” says Smith in the blog post.
Proposed bills on this have yet to pass into law, though California, Colorado, and Virginia have established regulatory and compliance frameworks for AI systems.
Beyond the lack of legislation, Smith argues that the focus has so far been too narrow: the technology industry has concentrated on the use of deepfakes for political purposes in the lead-up to the election. Ironically, the blog post was published just a day after Elon Musk shared an unmarked deepfake of Vice President Kamala Harris on X.
However, Smith urges a wider awareness. Industry players and lawmakers need to look at “…the broad role [deepfakes] play in these other types of crime and abuse,” he writes.
A Tool – And a Weapon
The blog post marks the publication of a 42-page report, which begins by repeating a warning that Smith made in his 2019 book, Tools and Weapons. Then, and now, he wrote about how “technological innovation can serve as both a tool for societal advancement and a powerful weapon.”
The blog speaks to the transformative power that AI is having in so many areas – including medicine. But it also details how the FBI has just taken down a Russian bot-farm that was designed to “disseminate AI-generated foreign disinformation.”
As Smith puts it: “…We find ourselves at a moment in history when anyone with access to the Internet can use AI tools to create a highly realistic piece of synthetic media that can be used to deceive: a voice clone of a family member, a deepfake image of a political candidate, or even a doctored government document.”
He warns that we are at a profound tipping point at which “…AI has made manipulating media significantly easier – quicker, more accessible, and requiring little skill,” stating simply: “As swiftly as AI technology has become a tool, it has become a weapon.”
Time to Act
The new report is a call to action to urge US lawmakers to “pass a comprehensive deepfake fraud statute to prevent cybercriminals from using this technology to steal from everyday Americans.” Smith urges that speed is key. “We don’t have all the solutions or perfect ones, but we want to contribute to and accelerate action,” he explains.
Smith acknowledges that Congress has “a range of legislation that would go a long way toward addressing the issue” and notes ongoing work with action groups and technology giants in this area. However, he says: “We need to give law enforcement officials, including state attorneys general, a standalone legal framework to prosecute AI-generated fraud and scams as they proliferate in speed and complexity.” Any such framework, he argues, must focus on election interference, protecting the elderly from fraud, and protecting women and children from online exploitation.
With the election just months away, Smith ends by encouraging humility from tech industry players and “a bias towards action.” As the debate about the use of deepfakes rages on, not least on X, Smith urges lawmakers and the technology industry to act right now. As he says: “The danger is not that we will move too fast, but that we will move too slowly or not at all.”