
Identity fraud: the growing AI threat

Rapid developments in AI have dominated industry news in almost every sector in 2023. Unfortunately, criminals and fraudsters have been among the most enthusiastic early adopters. We spoke to two leading experts about the growing threat from AI-powered deepfakes.

Chris Hooper
Director of Brand at Veriff.com
November 28, 2023
Blog Post
Financial Services
Fraud Prevention
On this page
Why are deepfakes such a threat in terms of online fraud?
Using biometrics in a holistic fraud-prevention ecosystem
Identity documents could also be vulnerable to AI-based manipulation
Automated attacks and combative models
Addressing AI-based fraud will involve a layered approach

Synthetic media, or so-called ‘deepfakes’, have been around for several years. However, recent developments in AI have enabled a quantum leap in both the quality and volume of deepfakes produced. In fact, according to DeepMedia, about 500,000 video and voice deepfakes will be shared online globally in 2023; most will be of a quality that was previously unimaginable. 

Initially, media coverage of the deepfake phenomenon mainly focused on the potential for the technology to generate fake news. More recently, concern has grown over the threat to young people’s mental wellbeing from the kind of hyperreal image manipulation accessible via apps like TikTok’s ‘Bold Glamour’ filter. But there is a third, potentially even more concerning, use case for deepfake technology: committing identity fraud.

During a recent webinar, our Senior Director of Fraud Prevention and Experience David Divitt was joined by Maarten Wegdam, CEO and cofounder of NFC identity verification leaders Inverid. Together, the two discussed the growing threat to businesses and individuals from AI-based fraud online.

Why are deepfakes such a threat in terms of online fraud?

As consumers, we increasingly expect 24-7 remote access to a wide range of businesses. But the rapid migration of much of our lives online, driven partly by the COVID pandemic, has had the unwanted side effect of creating new opportunities for fraudsters. 

“If we look at what's happened over the last three to five years, we've seen this massive digitization of services which were previously very traditional, very bricks and mortar,” says David. “What that means is that suddenly you've got a set of very traditional players, with very traditional, well-established processes, that have to rapidly move into the digital space.”

As David points out, criminals love new processes: their very newness means they have yet to be fully stress-tested ‘in the field’ and are therefore more likely to contain exploitable weaknesses. In theory, many of these weaknesses could be ‘designed out’, but the speed of digitization means this often doesn’t happen.

One of the key issues for many online activities, from banking, payments, healthcare and government services to car rental and gaming, is the need to verify the identity of the user trying to carry out a transaction.

“As all of these services have digitized, fraudsters have found that one of the weaknesses they can exploit is being able to digitally manipulate or enhance this identity verification piece,” says David.

Our research for the Veriff Fraud Report 2024 found that nearly a third of altered media we’ve encountered over the last year involved digital manipulation. That’s a figure we only expect to increase as deepfake technology becomes cheaper and more accessible.

Maarten agrees that the use of deepfakes in identity fraud can only grow.

“The threshold to create them will become lower and lower as these technologies become easier to access,” he comments. “So, it's not only state-based attacks or very professional criminals, but it will be everyone, you know, every script kiddie can use those tools very quickly.”

Using biometrics in a holistic fraud-prevention ecosystem

The increasing regularity and scale of data breaches have highlighted the weaknesses of traditional password- and knowledge-based methods of identity verification. The search for better means of authentication has led to biometrics being touted as something of a cure-all. However, given the increasing quality of deepfake technology, it is important to be aware of the risks when adopting biometrics and to combine them with additional data points.

“It's an obvious way of securing an account,” comments David. “But as deepfakes become more and more sophisticated, we definitely have to think about the risk that brings to securing an account with a face, or even a voice.”

"[...] the challenge for us, as an industry, is to use [the] same technologies, but a little bit smarter and a little bit faster [...] trying to keep up with the bad guys," adds Maarten.

As industry players, particularly ones with a global view of the field, we can identify not just individual criminal organizations or types of attack but analyze them collectively. As a result, we can develop tools that capitalize on broad datasets encompassing diverse threats.

"But as deepfakes become more and more sophisticated, we definitely have to think about the risk that brings to securing an account with a face, or even a voice."

David Divitt

Identity documents could also be vulnerable to AI-based manipulation

Maarten’s company Inverid specializes in the use of near field communication (NFC) technology to verify identity via chip-based documents such as passports and ID cards. He says that, fortunately, generative AI has yet to be employed to any great extent to fake official documents, but that could easily change.  

“We did some tests recently and the tooling isn't there yet. So very clearly, whether you're human or an algorithm, you'll see that's a fake, because I think there's been less attention on that. But the potential of that technology is certainly there.”

David agrees that for AI-generated fake documents, it’s not a case of if but when they’ll appear.

“I think absolutely there'll be networks that are trained on real document images and able to basically reproduce them with the data that you require very easily,” he says.

"We did some tests recently and the tooling isn't there yet. So very clearly, whether you're human or an algorithm, you'll see that's a fake, because I think there's been less attention on that. But the potential of that technology is certainly there."

Maarten Wegdam

Automated attacks and combative models

If all this sounds a little frightening, the bad news is that AI-powered identity fraud is still only in its infancy. As criminals’ use of AI evolves, attacks will grow both in scale and sophistication. 

“One of the predictions I have for the near future is the bringing together of (stolen) data that can be bought and sold on the internet with AI technology to enable fraudsters to automate these attacks,” says David. “So, you have a thousand identity credentials. You have a deepfake selfie and document generator. You put them together in the mix and you feed them through an API at a target – even if you just get a few percent hit, it's a very cheap way of actually attacking customers.” 

Maarten agrees that the ability of automation to facilitate attacks is a major concern.

“I think especially the scalability becomes a thing,” he says. “Suppose we're not agile enough, we cannot train our networks fast enough to counter them. Then of course we're in big trouble.”

"Suppose we're not agile enough, we cannot train our networks fast enough to counter them. Then of course we're in big trouble."

Maarten Wegdam

Addressing AI-based fraud will involve a layered approach

“The thing that you learn time and time again is that there is no magic bullet, no single piece of anything that can stop all types of fraud,” says David. 

Instead, he says the best approach is to combine layers of defense that together create enough friction to discourage attacks. 

“It's about reducing the ROI for the fraudster by putting lots of barriers in their way.”
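To make the layered idea concrete, here is a minimal, purely illustrative sketch of how independent defense layers might feed one risk decision. All the signal names, thresholds, and weights below are hypothetical assumptions for illustration, not Veriff's actual API or scoring logic.

```python
from dataclasses import dataclass

@dataclass
class SessionSignals:
    """Hypothetical signals a verification pipeline might collect for one session."""
    liveness_score: float         # 0..1 from a biometric liveness check
    document_chip_verified: bool  # e.g., an NFC chip check on the ID document passed
    device_reputation: float      # 0..1, lower = riskier device or network
    velocity_flag: bool           # many attempts from the same source in a short window

def layered_risk_decision(s: SessionSignals) -> str:
    """Combine independent defense layers into a single decision.

    No single check is decisive; each failed layer adds risk points,
    raising the fraudster's cost per attempt and reducing their ROI.
    """
    risk = 0
    if s.liveness_score < 0.7:
        risk += 2  # possible deepfake or replayed media
    if not s.document_chip_verified:
        risk += 2  # document could not be cryptographically confirmed
    if s.device_reputation < 0.5:
        risk += 1  # device or network looks suspicious
    if s.velocity_flag:
        risk += 1  # pattern consistent with an automated, scaled attack
    if risk >= 4:
        return "reject"
    if risk >= 2:
        return "step-up"  # e.g., request an additional verification check
    return "approve"
```

The point of the sketch is the design choice David describes: a borderline biometric result alone triggers extra friction (a step-up check) rather than a hard rejection, while several failed layers together make the attack uneconomical.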

Access the Veriff Fraud Report 2024 here.

Ready to watch?

Hear David and Maarten explain in more detail about what can be done to effectively counter AI-powered identity fraud.
