
Deepfake danger: AI and the fight against fraudsters

Deepfakes are a growing threat in identity verification (IDV), with dangers from face swapping to lip synching and a surge in fake documents. However, the same technologies that the fraudsters exploit – particularly artificial intelligence (AI) – can also be used to protect honest users.

Abe Post-Hyatt
Manager, Strategic Revenue
June 5, 2024
Fraud | Finserv | Veriff
On this page
1. Deepfakes are easier to make – and higher quality
2. But the good guys can also use AI
3. The benefits of a multi-layered defence strategy
4. Don’t overcorrect

That was the major theme of ‘Deepfakes and the future of online fraud’, a webinar I hosted recently, where I was joined by Liisi German, our lead product manager, and Vinny Gaglioti, Veriff’s strategic solutions engineer. 

The danger is clear. We’ve seen a 20% rise in overall fraud year-on-year, while 6% of all verification attempts in 2023 were fraudulent. Deepfakes are a large – and growing – part of the problem, particularly in terms of face swaps and face modification, lip sync algorithms, fake documents and video manipulation. 

So how have things changed – and what steps are we taking to address the threat? Here are some key takeaways from the webinar.

1. Deepfakes are easier to make – and higher quality

Deepfakes aren’t new: at Veriff, we’ve dealt with the threat for years. Liisi said she has faced the deepfake challenge throughout her five years at Veriff. However, “it has become easier for the fraudsters to use different kinds of tools”, she said.

The growth of Generative AI (GenAI) is a big part of the problem. Today, you can use a whole range of image generation apps to synthesise an identity. This is connected to the continuing dominance of social media – with so much information about our lives published online, there’s a ready-made resource for fraudsters to tap.

“It has become easier for the fraudsters to use different kinds of tools.”

Liisi German, Lead Product Manager

2. But the good guys can also use AI

The fraudsters are enabled by AI – but so are we. GenAI technology is both the sword and the shield. We use our own AI-powered technology – such as device analytics, pattern recognition, and dynamic rules – to build our deepfake detection capabilities. Every time we see a deepfake, we get better.

AI and machine learning (ML) are core to Veriff’s products, noted Vinny, providing advanced technologies that often outperform humans at anomaly detection. However, human insight is still key, he said.

“Humans can have better understanding of context, they can identify anomalies, and you create a feedback loop,” which can be used to enhance AI models, Vinny explained. 

Liisi said that both automated technology and human analysis come into play when identifying common indicators of a deepfake: examining the edges or corners of an image on a driving license to determine whether it is real, for example, or studying a signature to see whether it is genuine or one of the common templates produced by the tools fraudsters use. With our internal document database and fraud risk score models, we can quickly separate the real from the fake.
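As a loose illustration of how automated signals, a risk score, and the human feedback loop Vinny described might fit together, here is a minimal sketch. The signal names, weights, and update rule are all hypothetical, for illustration only – they are not Veriff’s actual models.

```python
# Hypothetical sketch: combine per-signal anomaly scores into one fraud
# risk score, and let confirmed human-review outcomes nudge the weights.
# All names, weights, and thresholds are illustrative, not Veriff's.

def risk_score(signals: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted sum of per-signal anomaly scores, clamped to [0, 1]."""
    score = sum(weights.get(name, 0.0) * value for name, value in signals.items())
    return max(0.0, min(1.0, score))

def update_weights(weights: dict[str, float], signals: dict[str, float],
                   was_fraud: bool, lr: float = 0.1) -> dict[str, float]:
    """Naive feedback loop: reinforce signals that fired on confirmed
    fraud, dampen those that fired on honest users."""
    direction = 1.0 if was_fraud else -1.0
    return {
        name: max(0.0, w + lr * direction * signals.get(name, 0.0))
        for name, w in weights.items()
    }

weights = {"edge_artifacts": 0.5, "signature_template": 0.3, "device_anomaly": 0.2}
session = {"edge_artifacts": 0.9, "signature_template": 0.8, "device_anomaly": 0.1}
print(round(risk_score(session, weights), 2))  # 0.71
# A human analyst confirms the session was fraud -> reinforce those signals
weights = update_weights(weights, session, was_fraud=True)
```

The point of the sketch is the loop, not the arithmetic: automated scoring handles volume, and each human decision feeds back to improve the model.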

3. The benefits of a multi-layered defence strategy

Human intelligence and analytic capabilities are key elements of a multi-layered defence strategy, Vinny told our audience. He likened it to the approach taken in medieval times:

“If you had a castle, maybe you [also] built a moat – you want to have different types of protections, so that you’re not relying on just one, singular method to defend against bad actors.”

For instance, he highlighted Veriff’s cross-linking capability: taking huge amounts of data and searching for patterns that have appeared before to determine future outcomes. Again, this relies on both automated technologies and human knowledge and intelligence.
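A rough sketch of the cross-linking idea: store hashed attributes from previously flagged sessions, then check new sessions for matches. All names and data here are illustrative assumptions, not Veriff’s implementation.

```python
# Hypothetical sketch of cross-linking: hash session attributes and flag
# any value that also appeared in a session previously marked fraudulent.
# Names and data structures are illustrative only.
import hashlib

def fingerprint(value: str) -> str:
    """Store a hash of each attribute rather than the raw value."""
    return hashlib.sha256(value.encode()).hexdigest()

# Index of attribute hashes seen in sessions previously marked fraudulent
flagged_index: set[str] = set()

def record_fraudulent(session: dict[str, str]) -> None:
    flagged_index.update(fingerprint(v) for v in session.values())

def crosslink_hits(session: dict[str, str]) -> list[str]:
    """Return attribute names whose values match past fraud cases."""
    return [k for k, v in session.items() if fingerprint(v) in flagged_index]

record_fraudulent({"device_id": "abc-123", "doc_number": "X99"})
print(crosslink_hits({"device_id": "abc-123", "doc_number": "Z11"}))  # ['device_id']
```

In practice such an index would cover far richer patterns, and human analysts would judge whether a hit actually indicates fraud – the automated match only surfaces the link.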

“Being able to combine all of these different insights and also their expertise to create a really comprehensive approach to identity verification and to deepfakes has allowed us to stay ahead of the curve.”

“Humans can have better understanding of context, they can identify anomalies, and you create a feedback loop.”

Vinny Gaglioti, Senior Solutions Engineer

4. Don’t overcorrect

It’s true that there are real dangers in IDV, including deepfakes. But we shouldn’t forget that the vast majority of users are honest customers and clients. In fact, erroneously declining an honest user can be more expensive than accidentally approving a deepfake. 

Overcorrecting and turning away potentially great customers – due to fear – is a dangerous mistake. It’s vital that we strike the right balance in a collaborative way, making full use of advanced technology like biometrics and AI, along with human expertise. 

Deepfakes & the future of online fraud

In this webinar our experts explore the latest trends in online fraud, with a focus on the growing threat of deepfakes and synthetic media.
