Qoobiss

Can a deepfake app endanger the KYC system?

According to the BBC, new data shows that the creation of deepfake videos has increased dramatically in recent months, with the amount of content posted online growing by almost half over the previous nine months. Although deepfakes are currently used mainly for other purposes, they might soon jeopardize consumer and company identity verification.
Although much of the concern over deepfakes focuses on their use in political campaigns, celebrity impersonations and adult entertainment, will this new era of technology put corporate and customer identification at risk?
In the following article, we’ll look at whether deepfake technology poses a risk to your client onboarding processes and how you can protect your company against identity theft.

What is a deepfake app and what can it do?

Using cutting-edge AI, a deepfake overlays an existing video of a face onto a source’s head and body, and the technology behind this kind of deepfake app is readily available to anyone.
A deepfake could appear to be a genuine person’s recorded face and voice, but the words they appear to be speaking were never really spoken by them, at least not in that order.
Not only does this technique present a clear danger to politics and highly classified government organizations, but also to any business that uses remote biometric authentication to verify new customers.

Can deepfake content endanger biometric authentication?

Given the risk of identity fraud, most of the major remote KYC onboarding players have incorporated some form of liveness detection into their identity verification procedures. “Liveness” detection is an essential component of contemporary biometric recognition solutions.
By checking for “liveness,” we can confirm that the person attempting to verify their identity is a living subject and not a copy or imitation. Liveness detection combines biometric facial recognition, identity verification and lip-sync authentication to greatly reduce the chances of a spoofing attempt succeeding.
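As a purely illustrative sketch, the snippet below shows how such a combined decision might look in code; the field names, thresholds and scoring scheme are assumptions made for the example and do not describe any specific vendor’s implementation.

```python
# Illustrative only: combining three hypothetical liveness signals into one decision.
from dataclasses import dataclass

@dataclass
class LivenessEvidence:
    face_match_score: float   # similarity between the live selfie and the ID photo (0..1)
    document_valid: bool      # outcome of the identity-document verification
    lip_sync_score: float     # agreement between lip movement and spoken audio (0..1)

def is_live_subject(evidence: LivenessEvidence,
                    face_threshold: float = 0.85,
                    lip_sync_threshold: float = 0.80) -> bool:
    """All three signals must pass; a single weak channel rejects the attempt."""
    return (
        evidence.document_valid
        and evidence.face_match_score >= face_threshold
        and evidence.lip_sync_score >= lip_sync_threshold
    )
```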
As deepfake videos become more sophisticated, they pose an increasing risk of defeating current liveness-check capabilities.

Can a deepfake app pass “liveness” checks?

To reduce the risk of spoofing, many liveness checks will ask users to look in different directions or change their facial expressions by frowning or smiling. Voice authentication is an essential part of a liveness check: any quality solution will require users to say a set of randomised numbers out loud, on camera. Randomising the numbers is essential so that anyone with malicious intent cannot predict the digits that will be displayed.
Currently, no deepfake app can generate an accurate synthetic response of the user saying random words or performing the requested movements within the limited timeframe available. Constructing a fake for each application would be prohibitively time-consuming, making large-scale fraud impractical.
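To make the idea concrete, here is a minimal sketch of how an unpredictable, short-lived digit challenge could be issued; the names and the 30-second window are assumptions for illustration rather than a description of any particular product.

```python
# Illustrative only: issue a random digit challenge that expires quickly,
# so a pre-rendered deepfake of the "right" answer cannot be reused.
import secrets
import time
from dataclasses import dataclass

@dataclass
class Challenge:
    digits: str        # e.g. "4 8 1 5 3", shown to the applicant
    issued_at: float   # server timestamp when the challenge was created
    ttl_seconds: int   # how long the applicant has to respond

def issue_challenge(length: int = 5, ttl_seconds: int = 30) -> Challenge:
    # secrets gives cryptographically strong randomness, so the digits cannot be predicted
    digits = " ".join(str(secrets.randbelow(10)) for _ in range(length))
    return Challenge(digits=digits, issued_at=time.time(), ttl_seconds=ttl_seconds)

def is_expired(challenge: Challenge) -> bool:
    # Responses arriving after the window are rejected outright
    return time.time() - challenge.issued_at > challenge.ttl_seconds
```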
The goal is to build a product that is not only easy for applicants to use, but also prevents fraudulent activity such as deepfakes. A randomly generated lip-sync challenge tests both the video and audio channels, and generating fake responses in real time without detectable artifacts is beyond what bad actors can currently achieve.
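Sketched in code, the dual-channel check could look like the following; the audio and lip-read transcripts are assumed to come from a speech-recognition model and a visual lip-reading model respectively, both of which are hypothetical placeholders here.

```python
# Illustrative only: accept a response just when both channels independently
# match the issued challenge and the reply arrived within the time window.
def _normalize(digits: str) -> str:
    # Strip spacing so "4 8 1" and "481" compare as equal
    return digits.replace(" ", "")

def verify_response(expected_digits: str,
                    audio_transcript: str,
                    lip_read_transcript: str,
                    responded_in_time: bool) -> bool:
    return (
        responded_in_time
        and _normalize(audio_transcript) == _normalize(expected_digits)
        and _normalize(lip_read_transcript) == _normalize(expected_digits)
    )
```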

Find out more about the QOOBISS KYC/AML software by setting up an online meeting with our team.