The threat of deepfakes is becoming increasingly pervasive, with nearly half of organizations (47 percent) having encountered them and 70 percent anticipating a significant impact on their operations due to deepfake attacks created using generative AI tools. Despite these concerns, there remains a cautious optimism about AI, as 68 percent of organizations acknowledge its role in creating cybersecurity threats, but an even greater majority (84 percent) recognize its potential in bolstering defenses against these very threats. These insights come from a new global survey by iProov, a leading provider of biometric identity solutions, which also highlights that 75 percent of the solutions being implemented to combat deepfakes involve biometric technologies.
The survey, titled “The Good, The Bad, and The Ugly,” gathered opinions from 500 technology decision-makers across the UK, US, Brazil, Australia, New Zealand, and Singapore, focusing on the growing threat of generative AI and deepfakes. While organizations acknowledge the efficiency gains AI offers, they are also keenly aware that these same advancements are being leveraged by cybercriminals. Nearly three-quarters (73 percent) of organizations have begun implementing solutions to address the deepfake threat, but confidence remains low. The study found that nearly two-thirds (62 percent) of respondents fear their organization isn’t taking the deepfake threat seriously enough.
Deepfakes present a real and immediate danger, with potential misuse in various harmful ways, including defamation, reputational damage, and financial fraud. They can be employed to commit large-scale identity fraud, such as impersonating individuals to gain unauthorized access to systems, initiate financial transactions, or deceive others into transferring funds, as in the recent Hong Kong deepfake scam. The survey reveals a growing concern that organizations are not adequately equipped to address these threats.
Andrew Bud, founder and CEO of iProov, commented: “We’ve been monitoring deepfakes for years, but recent advancements have made them easier to create and more capable of causing widespread damage to organizations and individuals. One of the most overlooked risks is the creation of synthetic identities, which go undetected because they aren’t tied to a real person, allowing them to wreak havoc and defraud organizations and governments of millions.”
Bud further emphasized that detecting high-quality deepfakes with the naked eye is now impossible.
“Even though our research shows that half of the organizations surveyed have encountered a deepfake, the actual number is likely higher, as many are not equipped to identify them.
With the rapid evolution of the threat landscape, organizations cannot afford to ignore these new attack methodologies. Facial biometrics have proven to be the most resilient solution for remote identity verification,” he added.
The study also uncovered regional differences in the perception of deepfake threats.
Organizations in the Asia-Pacific (51 percent), Europe (53 percent), and Latin America (53 percent) regions are more likely than those in North America (34 percent) to report encountering a deepfake. However, organizations in Asia-Pacific (81 percent), Europe (72 percent), and North America (71 percent) are more likely to believe deepfake attacks will have a significant impact, compared to those in Latin America (54 percent).
Despite the dangers posed by deepfakes, organizations recognize the positive potential of AI. Most see generative AI as innovative, secure, and reliable, with the ability to help solve problems. Many organizations view AI as more ethical than unethical and believe it will have a positive impact on the future. In response to the risks associated with AI, the vast majority of organizations have increased their budgets for programs addressing AI-related risks (only 17 percent have not), and most have introduced policies governing the use of new AI tools.
Biometric solutions have emerged as the preferred method for combating deepfakes, with facial and fingerprint biometrics being the most commonly implemented. The type of biometric technology used varies depending on the task. For instance, facial biometrics are considered the most suitable additional authentication method for account access, personal account changes, and typical transactions.
However, the study reveals that software alone is not sufficient to address the deepfake threat. Organizations view biometrics as a specialized area of expertise, with nearly all respondents (94 percent) agreeing that a biometric security partner should offer more than just a software product. Key requirements include continuous monitoring (80 percent), multi-modal biometrics (79 percent), and liveness detection (77 percent) to adequately protect against deepfakes.