US businesses are seeing an increased risk of deepfake scams, with one in 10 surveyed executives saying that their companies had been targeted.
That number looks more worrying alongside another finding from the same study: more than half of those business leaders reported that their employees lacked training in identifying or preventing deepfake attacks.
It’s yet more ammunition for those wary of the accelerating use of AI, with the list of AI errors, mistakes, and failures growing ever longer.
Businesses Vulnerable to Deepfake Scams
Over 10% of companies have “faced successful or attempted deepfake fraud,” according to a study carried out by Business.com in which 244 CEOs, C-suite executives, presidents, and vice presidents were surveyed.
Meanwhile, fewer than a third of respondents (31%) believe that deepfakes haven’t increased their exposure to potential fraud.
Yet despite these figures, 61% of the business leaders said that no protocols had been established at their companies to address the risks posed by deepfake technology, and fewer than half confirmed that their employees had received training to deal with these risks.
“Many companies are vulnerable to financial losses and reputation damage because they operate with outdated or weak cybersecurity measures. Too many executives admit their employees have not been trained to identify deepfake media.” – Chad Brooks, Managing Editor of Business.com
Perhaps as a result, 32% of respondents have no confidence in their staff’s ability to recognize such fraud attempts. Indeed, a quarter of the executives themselves admitted to having “little to no familiarity with deepfake technology.”
How Deepfakes Can Target Businesses
While many of the most notorious deepfake scandals have come at the expense of individuals – fabricated pornography featuring Scarlett Johansson and Taylor Swift, or false videos showing UK politician Sir Keir Starmer berating his staff, for example – the technology’s potential to defraud businesses is only likely to be exploited with growing frequency.
In its blog post, Business.com gives the example of a customer relations department at a bank: “Armed with voice cloning technology, the fraudster could impersonate a valued customer by contacting the bank’s call center and authorizing fraudulent transactions.”
It also cites a CNN report on a $25 million fraud in which a finance worker in Hong Kong was tricked into transferring the sum to a scammer posing as the company’s CFO on a video call.
“AI programs can create manipulated videos, photos, or even audio with speed and sophistication, so it is easier than ever for scammers to mislead customers or defraud employees.” – Chad Brooks, Managing Editor of Business.com
Lack of Training on Deepfake Risks
While most large corporations are now attuned to more traditional cyberattacks, hacks, and phishing scams, a significant training gap remains when it comes to deepfakes.
The survey found that a mere 14% of respondents said that their companies had “fully implemented” measures to address deepfakes.
“80 percent of companies lack protocols for handling deepfake attacks,” says the Business.com blog post. “Without a plan, these companies are vulnerable, as they won’t be prepared to address and mitigate such incidents to protect their business.”