How do you detect deepfakes in live video calls? Here’s what the experts say to do

Next time you get on a Zoom call, you may want to ask the person you’re talking to to push a finger into the side of their nose. Or ask them to turn their full profile to the camera for a minute.

These are just some of the techniques experts recommend for gaining reasonable assurance that you’re seeing a live image of the real person you’re talking to, and not an impersonation created with deepfake technology.

It seems like a strange precaution, but we live in strange times.

Last month, Patrick Hillmann, chief communications officer at cryptocurrency exchange Binance, said that scammers had used a deepfake “hologram” of him to deceive several cryptocurrency projects, impersonating him on Zoom calls. (Hillmann has not provided evidence to support his claim, and some experts doubt a deepfake was actually used. Security researchers say, however, that such incidents are now plausible.) The FBI has warned that people may use deepfakes in job interviews conducted over video conferencing software. Before that, several European mayors said they were initially fooled by a deepfake video call purporting to be with Ukrainian President Volodymyr Zelensky. Meanwhile, Metaphysic, a startup that develops deepfake software, reached the finals of America’s Got Talent by creating remarkably good deepfakes of Simon Cowell and the other celebrity judges, transforming other singers into the celebrities in real time, right before the audience’s eyes.

Deepfakes are highly convincing fake images and videos created with artificial intelligence. It once required a large number of photos of a person, a lot of time, and a fair degree of coding skill and special-effects know-how to create a believable deepfake. And even once created, the AI model couldn’t run fast enough to produce a real-time deepfake on a live video stream.

This is no longer the case, as both the Binance story and Metaphysic’s America’s Got Talent act highlight. In fact, it is becoming increasingly easy for people to use deepfakes to impersonate others in live video streams. The software that makes this possible is now readily available, often for free, and requires relatively little technical skill to use. And as the Binance story also shows, this opens up the potential for all kinds of fraud, as well as political disinformation.

“I am amazed at how quickly and how well deepfake technology has developed,” says Hany Farid, a computer scientist at the University of California, Berkeley, and an expert in video analysis and authentication. He says there are at least three different open-source programs that allow people to create live deepfake videos.

He is far from alone in fearing that deepfake technology could supercharge fraud. “This would be like phishing,” he says.

The “pencil test” and other tricks to catch an AI impostor

Fortunately, experts say there are still a number of techniques a person can use to gain reasonable assurance that they are not communicating with a deepfake impersonation. One of the most reliable is simply to ask the person to turn so that the camera captures them in full profile. Deepfakes struggle with profiles for a number of reasons. For most people, there aren’t enough profile images available to train a deepfake model to reliably reproduce that angle. And while there are ways to use computer software to estimate a profile view from a front-facing image, doing so adds complexity to the process of creating the deepfake.

Deepfake software also relies on “anchor points” on a person’s face to position the deepfake “mask” correctly. Turning 90 degrees eliminates half of those anchor points, which often causes the software to warp, blur, or badly distort the profile image in noticeably weird ways.
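To see why the profile test works, it helps to look at how face-tracking software locates those anchor points. The sketch below is purely illustrative, not code from any actual deepfake tool: it uses Google’s MediaPipe Face Mesh, one widely available landmark detector, to count how many facial landmarks it can find in a frontal image versus a profile one (the file names are hypothetical).

```python
# Illustrative sketch: count the facial landmarks ("anchor points") a
# detector can find as a face turns from frontal to profile.
# Assumes the mediapipe and opencv-python packages are installed.
import cv2
import mediapipe as mp

def landmark_count(image_bgr) -> int:
    """Return the number of face landmarks detected, or 0 if no face is found."""
    rgb = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2RGB)
    with mp.solutions.face_mesh.FaceMesh(static_image_mode=True,
                                         max_num_faces=1) as mesh:
        results = mesh.process(rgb)
    if not results.multi_face_landmarks:
        return 0  # a common outcome for a full 90-degree profile
    return len(results.multi_face_landmarks[0].landmark)

# Hypothetical input files: the same person facing the camera and in profile.
print("frontal landmarks:", landmark_count(cv2.imread("frontal.jpg")))
print("profile landmarks:", landmark_count(cv2.imread("profile.jpg")))
```

When the tracker loses its landmarks, the deepfake “mask” has nothing to pin itself to, which is what produces the telltale warping.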

Yisroel Mirsky, a researcher who heads the Offensive AI Lab at Ben-Gurion University in Israel, has suggested a number of other ways to detect deepfakes, which he compares to the CAPTCHA systems many websites use to detect bots (you know, the ones that ask you to pick out all the traffic lights in a grid of pictures). His techniques include asking people on a video call to pick up a random object and move it across their face, bounce an object, lift and fold their shirt, stroke their hair, or cover part of their face with a hand. In each case, the deepfake will either fail to depict the object being passed in front of the face, or the method will seriously distort the facial image. For audio deepfakes, Mirsky suggests asking the person to whistle, to try to speak with an unusual accent, or to hum or sing a tune chosen at random.
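Mirsky’s comparison to CAPTCHAs points at the key property of these tests: unpredictability. As a toy illustration (not any published protocol of his), a caller could randomize the challenge on the spot along these lines; the challenge list and timeout here are assumptions:

```python
# Toy sketch of a CAPTCHA-like liveness challenge for video calls,
# loosely based on the tests Mirsky describes. Not a published protocol.
import random
import time

CHALLENGES = [
    "Pick up a nearby object and pass it across your face",
    "Bounce an object in your hand",
    "Lift and fold your shirt",
    "Stroke your hair",
    "Cover part of your face with your hand",
    "Turn to show your full profile for a few seconds",
]

def issue_challenge(timeout_s: int = 10):
    """Pick an unpredictable challenge and set a response deadline.

    The randomness is the point: a deepfake trained to survive one
    rehearsed task is unlikely to handle an arbitrary one chosen live.
    """
    challenge = random.choice(CHALLENGES)
    print(f"Please: {challenge} (within {timeout_s} seconds)")
    return challenge, time.time() + timeout_s
```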

[Image: screenshots of researcher Yisroel Mirsky demonstrating several deepfake detection techniques.]
Here, Yisroel Mirsky, who heads the Offensive AI Lab at Ben-Gurion University in Israel, demonstrates a number of simple deepfake detection techniques.

Image courtesy of Yisroel Mirsky

“All existing deepfake technologies follow a very similar protocol,” Mirsky says. “They are trained on lots and lots of data, and that data has to have a particular pattern that you are teaching the model.” Most AI programs are trained to reliably mimic a person’s face seen from the front, and can’t handle oblique angles or objects that occlude the face very well.

Meanwhile, Farid says another way to detect possible deepfakes is to use a simple program that makes the other person’s computer screen flicker in a particular pattern, or that casts a distinctive pattern of light on the face of the person using the computer. Either the deepfake will fail to transfer the lighting effect to the impersonation, or it will be too slow to do so. A similar detection might be possible simply by asking someone to use another light source, such as a smartphone flashlight, to illuminate their face from a different angle, Farid says.
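Here is a minimal sketch of what such a flicker probe might look like, assuming Python with OpenCV and using the local webcam as a stand-in for the remote participant’s video feed. The flash pattern, timing, and the use of a simple mean-brightness correlation are all assumptions made for illustration, not Farid’s actual method:

```python
# Rough sketch of a screen-flicker liveness probe: flash the screen in a
# random pattern and check whether the observed face brightness tracks it.
import random
import time

import cv2
import numpy as np

def flicker_probe(num_flashes: int = 8, settle_s: float = 0.3) -> float:
    # Random but balanced on/off pattern, so the correlation is well defined.
    pattern = [1.0, 0.0] * (num_flashes // 2)
    random.shuffle(pattern)

    cam = cv2.VideoCapture(0)  # stand-in: a real test would sample the remote feed
    cv2.namedWindow("probe", cv2.WINDOW_NORMAL)
    cv2.setWindowProperty("probe", cv2.WND_PROP_FULLSCREEN, cv2.WINDOW_FULLSCREEN)

    responses = []
    for bright in pattern:
        frame = np.full((720, 1280, 3), 255 if bright else 0, dtype=np.uint8)
        cv2.imshow("probe", frame)
        cv2.waitKey(1)
        time.sleep(settle_s)  # give the lighting (and any deepfake pipeline) time to react
        ok, snap = cam.read()
        responses.append(float(snap.mean()) if ok else 0.0)

    cam.release()
    cv2.destroyAllWindows()
    # A real, locally lit face should brighten and darken with the flashes;
    # a deepfake may fail to relight the fake face, or lag behind the pattern.
    return float(np.corrcoef(pattern, responses)[0, 1])

if __name__ == "__main__":
    print("pattern/brightness correlation:", flicker_probe())
```

A correlation near 1 suggests the face really is being lit by the screen; a low or lagging correlation is a reason for suspicion (or, of course, just an unusual lighting setup: this is a heuristic, not proof).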

To realistically impersonate someone doing something unusual, Mirsky says, the AI software needs to have seen thousands of examples of people doing that thing. But collecting a dataset like that is difficult. And even if you could train an AI to reliably impersonate someone performing one of these challenging tasks, such as picking up a pencil and passing it in front of their face, the deepfake is still likely to fail if you ask the person to use a very different kind of object, like a cup. Attackers using deepfake technology are also unlikely to be able to train a model to overcome multiple challenges, such as the pencil test and the profile test. Each additional task increases the complexity of the training the AI requires, Mirsky says. “You are limited in what aspects you want the deepfake to master,” he says.

Deepfakes get better all the time

Right now, few security experts are suggesting that people will need to use these CAPTCHA-like challenges in every Zoom meeting they take. But Mirsky and Farid say people would be wise to use them in high-stakes situations, such as a call between political leaders, or a meeting that could lead to a high-value financial transaction. And both Farid and Mirsky urged people to watch for other potential red flags, such as calls from unfamiliar numbers, or people behaving strangely or making unusual requests (does President Biden really want you to buy him a set of Apple gift cards?).

Farid says that for very important calls, people could use a simple kind of two-factor authentication: send a text message to a mobile number you know is correct for that person, asking them to confirm that they are on a video call with you.
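As a rough sketch of that second-channel check, the snippet below sends a confirmation text using Twilio’s Python client; Twilio is just one possible SMS provider here, and the environment variable names, phone numbers, and message wording are placeholders:

```python
# Sketch of a simple out-of-band check: text a number you already know
# belongs to the person and ask them to confirm the video call.
import os

from twilio.rest import Client  # pip install twilio; any SMS provider would do

def confirm_via_sms(known_good_number: str) -> None:
    client = Client(os.environ["TWILIO_ACCOUNT_SID"],
                    os.environ["TWILIO_AUTH_TOKEN"])
    client.messages.create(
        to=known_good_number,  # a number verified through a separate, trusted channel
        from_=os.environ["TWILIO_FROM_NUMBER"],
        body="Quick check: are you on a video call with me right now? Reply YES or NO.",
    )
```

The security of the check rests entirely on the phone number having been verified beforehand through a channel the attacker does not control.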

The researchers also cautioned that deepfakes are improving all the time, and that there is no guarantee they won’t eventually become able to evade any particular challenge, or even combinations of them, in the future.

That’s also why many researchers are trying to approach the live deepfake problem from the opposite direction: creating some kind of digital signature or watermark that would prove a video call is authentic, rather than trying to detect deepfakes.

One group that might work on a protocol for verifying live video calls is the Coalition for Content Provenance and Authenticity (C2PA), an organization dedicated to digital media authentication standards that is backed by companies including Microsoft, Adobe, Sony, and Twitter. “I think the C2PA should pick this up, because they have built a specification for recorded video, and extending it to live video would be natural,” Farid says. But he admits that trying to authenticate data being streamed in real time is not an easy technical challenge. “I don’t see how to do it right away, but it will be interesting to think about,” he says.
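The C2PA specification itself is far more involved, but the basic idea of authenticating a stream can be sketched generically: sign each chunk of live video with a key tied to the sender, and verify the signatures on the receiving end. The snippet below illustrates that pattern with Ed25519 signatures from Python’s cryptography package; it is not the C2PA protocol, just the general shape of the approach Farid is describing:

```python
# Generic sketch: sign live-stream chunks so a receiver can verify that the
# video originates from the claimed sender's device. Not the C2PA protocol.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

sender_key = Ed25519PrivateKey.generate()  # would live in trusted hardware
verify_key = sender_key.public_key()       # distributed out of band

def sign_chunk(chunk: bytes, seq: int) -> bytes:
    # Bind the signature to the chunk's position in the stream so chunks
    # cannot be replayed or reordered without detection.
    return sender_key.sign(seq.to_bytes(8, "big") + chunk)

def verify_chunk(chunk: bytes, seq: int, signature: bytes) -> bool:
    try:
        verify_key.verify(signature, seq.to_bytes(8, "big") + chunk)
        return True
    except InvalidSignature:
        return False

chunk = b"\x00" * 1024  # stand-in for one encoded video chunk
sig = sign_chunk(chunk, 0)
print(verify_chunk(chunk, 0, sig))        # True
print(verify_chunk(b"tampered", 0, sig))  # False
```

The hard part, as Farid suggests, is doing this at streaming speed, and anchoring the keys to a real identity rather than merely to a device.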

In the meantime, remind the invitees to your next Zoom call to bring a pencil to the meeting.

