Preparing Your Organization for Deepfake Video and Audio Clone Fraud

August 17, 2023

Artificial intelligence can be used for good – but it is also an increasingly favored tool of bad actors looking to con organizations and people out of precious resources, whether through phishing emails, malware attacks, or a fairly new vector: deepfake video and audio clone fraud.

A deepfake is a seemingly real video with realistic movements and audio – or a voice-only recording – generated by artificial intelligence. Think of it as a blend between animation and photorealistic art. Deepfakes are produced by deep-learning models trained on large collections of a person’s face and voice, and the output can mimic that person closely enough that it is very difficult to tell what is fake from what is real. Such a recording can then be used to impersonate the person in order to commit fraud or other malicious activities.

Ways Businesses Can Combat Deepfake Fraud

Businesses need to be aware of this type of fraud. Here are some ways that organizations can prepare for deepfake video and audio clone fraud:

Educate employees about deepfake technology: Employees should be aware of the dangers of deepfake technology and how it can be used to commit fraud. They should also be trained to identify deepfakes and to report them to the appropriate authorities.

Use multi-factor authentication: Multi-factor authentication (MFA) adds an extra layer of security to accounts by requiring users to provide two or more pieces of information to authenticate themselves. MFA can help protect against fraudsters who may have obtained a person’s voice recording or video.
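
To make this concrete, here is a minimal sketch of a time-based one-time password (TOTP) check – the mechanism behind most authenticator apps – using only Python’s standard library. The base32 secret shown is a well-known demo value, not a real credential:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Compute an RFC 6238 time-based one-time password."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval          # 30-second time step
    msg = struct.pack(">Q", counter)                # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# The server computes the expected code and compares it to what the user
# submits; "JBSWY3DPEHPK3PXP" is a placeholder demo secret.
if totp("JBSWY3DPEHPK3PXP") == input("Enter the 6-digit code: "):
    print("Second factor verified")
else:
    print("Verification failed")
```

Because the code changes every 30 seconds and derives from a shared secret, a fraudster who has only a cloned voice or face still cannot pass this second factor.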

Monitor social media: Organizations should monitor social media for any suspicious activity that may indicate a deepfake attack. This includes looking for accounts that are using the person’s voice or likeness to make unauthorized statements or requests.

Use fraud detection software: There are a number of fraud detection software solutions that can help to identify deepfake video and audio recordings. These solutions can analyze videos and audio recordings for signs of tampering or manipulation.
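
Commercial detectors rely on trained models, but a toy example can illustrate the idea of analyzing a recording for statistical oddities. The sketch below computes a single spectral-flatness score for a WAV file using NumPy and SciPy; the file name is hypothetical, and one feature like this is nowhere near a real detector – it only shows the kind of signal analysis such tools automate:

```python
import numpy as np
from scipy.io import wavfile

def spectral_flatness(path: str, eps: float = 1e-10) -> float:
    """Return the spectral flatness (0..1) of a WAV file's power spectrum.
    Values near 1 indicate noise-like audio; unusually flat or unusually
    smooth spectra can be one weak hint of synthetic speech."""
    rate, samples = wavfile.read(path)
    if samples.ndim > 1:                  # mix multi-channel audio down to mono
        samples = samples.mean(axis=1)
    power = np.abs(np.fft.rfft(samples.astype(float))) ** 2 + eps
    geometric_mean = np.exp(np.mean(np.log(power)))
    arithmetic_mean = np.mean(power)
    return float(geometric_mean / arithmetic_mean)

# "suspect_call.wav" is a hypothetical file name used for illustration.
print(spectral_flatness("suspect_call.wav"))
```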

Keep systems up to date: Organizations should keep their systems up to date with the latest security patches. This can help to protect against vulnerabilities that could be exploited by fraudsters to create deepfakes.

More Tips to Thwart Deepfake Video and Audio Cloning

Here are some additional tips for organizations to prepare for deepfake video and audio clone fraud:

  • Use a strong password policy: Employees should use strong passwords for all of their accounts, including their work accounts. Passwords should be at least 12 characters long and should include a combination of upper and lowercase letters, numbers, and symbols (a minimal validator sketch follows this list).
  • Be wary of emails and phone calls from unknown senders: Fraudsters may use deepfake technology to create fake emails and phone calls that appear to be from legitimate sources. Employees should be wary of any emails or phone calls that they receive from unknown senders, and they should never click on links or provide personal information in response to these messages.
  • Keep an eye out for suspicious activity: If an employee receives a suspicious email or phone call, they should report it to their supervisor immediately. Employees should also be aware of any suspicious activity on their work accounts, such as unauthorized logins or changes to account settings.
  • Stay up-to-date on the latest deepfake news: Fraudsters are constantly developing new ways to use deepfake technology. Organizations should stay up-to-date on the latest deepfake news so that they can be aware of the latest threats.
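
As referenced in the first bullet, here is a minimal sketch of a check matching the 12-character, mixed-character-class policy described above. Real deployments would typically enforce this in the identity provider rather than in application code, and the example passwords are made up for demonstration:

```python
import string

def meets_policy(password: str) -> bool:
    """Check the policy from the first bullet: at least 12 characters with
    upper- and lowercase letters, digits, and symbols."""
    return (len(password) >= 12
            and any(c.isupper() for c in password)
            and any(c.islower() for c in password)
            and any(c.isdigit() for c in password)
            and any(c in string.punctuation for c in password))

assert meets_policy("Deepfake-Aware#2023")   # hypothetical compliant password
assert not meets_policy("password123")       # too short, no upper/symbols
```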

Real-World Examples of Business and Personal Deepfake Fraud

Several organizations have been hit by deepfake fraud attacks. Here are some cases:

The CEO of a U.K.-based energy firm was conned into transferring $243,000. According to a Wall Street Journal report:

“Criminals used artificial intelligence-based software to impersonate a chief executive’s voice and demand a fraudulent transfer of €220,000 ($243,000) in March in what cybercrime experts described as an unusual case of artificial intelligence being used in hacking.

“The CEO of a U.K.-based energy firm thought he was speaking on the phone with his boss, the chief executive of the firm’s German parent company, who asked him to send the funds to a Hungarian supplier. The caller said the request was urgent, directing the executive to pay within an hour, according to the company’s insurance firm, Euler Hermes Group SA.”

A bank manager in Hong Kong was fooled into making hefty transfers in early 2020 by someone using voice-cloning technology. A Forbes article details the case:

“In early 2020, a branch manager of a Japanese company in Hong Kong received a call from a man whose voice he recognized—the director of his parent business. The director had good news: the company was about to make an acquisition, so he needed to authorize some transfers to the tune of $35 million. A lawyer named Martin Zelner had been hired to coordinate the procedures and the branch manager could see in his inbox emails from the director and Zelner, confirming what money needed to move where. The manager, believing everything appeared legitimate, began making the transfers.

“What he didn’t know was that he’d been duped as part of an elaborate swindle, one in which fraudsters had used “deep voice” technology to clone the director’s speech, according to a court document unearthed by Forbes in which the U.A.E. has sought American investigators’ help in tracing $400,000 of stolen funds that went into U.S.-based accounts held by Centennial Bank. The U.A.E., which is investigating the heist as it affected entities within the country, believes it was an elaborate scheme, involving at least 17 individuals, which sent the pilfered money to bank accounts across the globe.”

Scammers likely used artificial intelligence to con Newfoundland seniors out of $200K. A CBC News report breaks down the fraud case:

“As soon as she picked up the phone, Jane knew there was something wrong. Her grandson was on the other end, saying he’d been in a car accident and had been arrested. He sounded panicked.

“The police found drugs in the car. Someone was seriously injured. He used his one phone call to contact the one person he knew would help him without judgment. Jane — not her real name — was stunned, but promised to help without telling his parents. The phone was handed over to a police officer, who gave her instructions on how to post bail. Her unconditional love for her grandson cost her $58,350 by the end of the next day.

“I really believed it was him,” she said.

“Const. James Cadigan, the Royal Newfoundland Constabulary’s media relations officer, says the goal of such scams is usually to go after a large sum of money in a short time frame.

“The Royal Newfoundland Constabulary says at least eight senior citizens lost a combined $200,000 to similar scams over a three-day period between Feb. 28 and March 2. Police say one man, 23-year-old Charles Gillen, came to St. John’s from Toronto to collect the money in person.

“Gillen was arrested on the tarmac at St. John’s International Airport on the evening of March 2. He was on a flight leaving the province.”

More on Deploying MFA to Protect Against Deepfake Fraud

One effective tip that businesses can deploy to combat deepfake video and audio fraud is to implement a multi-factor authentication (MFA) system for sensitive transactions or communications. MFA adds an extra layer of security by requiring users to provide multiple forms of verification before gaining access or completing a transaction.

Here’s how this can help mitigate deepfake fraud:

  1. Biometric Verification: In addition to traditional username and password credentials, MFA can involve biometric verification methods such as facial recognition, fingerprint scanning, or voice recognition. These biometric factors are difficult for deepfake technology to replicate accurately.
  2. Randomized Challenges: MFA systems can employ randomized challenges that require the user to perform specific actions or provide certain information in real-time. For example, they might be asked to blink or smile to prove their presence, which is challenging for a pre-recorded deepfake to imitate.
  3. Out-of-Band Verification: MFA can also involve sending verification codes or alerts to separate communication channels, such as a mobile app, email, or SMS. This ensures that even if a deepfake attacker gains control over one channel, they would still need access to another to complete the authentication.
  4. Time-Based Challenges: MFA can include time-based challenges where the user is prompted to perform a certain action within a short time window. Deepfake attackers would find it difficult to respond quickly and convincingly to these challenges.
  5. Behavioral Analysis: Advanced MFA systems can employ behavioral analysis to detect anomalies in user behavior. This might include typing speed, mouse movement patterns, or even the way a person speaks. Deepfake-generated inputs are less likely to mimic these behavioral nuances accurately (a minimal sketch follows this list).
  6. Continuous Authentication: Some MFA systems offer continuous authentication, monitoring user behavior throughout a session to ensure that the user’s actions remain consistent. If an unexpected change is detected, the session can be flagged or terminated.
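
As an illustration of the behavioral-analysis idea in item 5, the sketch below flags a session whose average keystroke interval drifts far from a user’s recorded baseline. The data and threshold are made up for demonstration; production systems model many more signals than typing speed:

```python
import statistics

def is_anomalous(baseline_ms: list[float], session_ms: list[float],
                 threshold: float = 3.0) -> bool:
    """Return True if the session's mean keystroke interval deviates more
    than `threshold` standard deviations from the user's baseline."""
    mean = statistics.mean(baseline_ms)
    stdev = statistics.stdev(baseline_ms)
    return abs(statistics.mean(session_ms) - mean) > threshold * stdev

# Hypothetical data: intervals between keystrokes, in milliseconds.
baseline = [110, 95, 120, 105, 98, 112, 101, 99, 108, 115]
suspect = [310, 290, 305, 298, 312, 301]   # far slower than this user's norm
print(is_anomalous(baseline, suspect))     # True - flag or step up auth
```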

By implementing a robust MFA system that incorporates various layers of verification, businesses can significantly reduce the risk of falling victim to deepfake video and audio fraud. It’s important to stay updated with the latest advancements in both deepfake technology and authentication methods – and educate employees – to ensure the ongoing effectiveness of these measures.
