The rapid emergence of generative AI models such as ChatGPT, Gemini, and Claude, which are built on natural language processing technology, is shaking up the landscape of online deepfake scams. The question remains: What promise do these tools hold, and what risks will arise if they wind up in the hands of scammers?

Cybersecurity experts warn that the problem is expected to worsen as criminals exploit generative AI technology, which has lowered the barrier to entry for sophisticated scams. Moreover, deepfakes can also be used to spread fake news, manipulate stock prices, or damage a company’s brand and sales.

Traditional fraud detection and prevention models are becoming ineffective and outdated with the rise of new AI-based scams. A report published by CNBC highlights how sophisticated AI deepfake scams have netted criminals millions of dollars.

The report highlights the following cases:

  • A Hong Kong finance worker was duped into transferring $25 million to fraudsters who used AI-based deepfake technology to impersonate the company’s Chief Financial Officer (CFO) and order the transfer on a video call.
  • In a similar case in China, a finance worker was tricked into transferring 1.86 million Chinese Yuan (equivalent to $262,000) to a fraudster’s account after a video call with a deepfake of her boss.
  • In 2019, the CEO of a British energy company transferred €220,000 (equivalent to $238,000) to a scammer who had used deepfake technology to digitally mimic the voice of his parent company's head.

Mandiant CEO Calls for Naming Names to Fight Cybercriminals

Kevin Mandia, CEO of Mandiant at Google Cloud, shared striking statements about deepfakes and AI at the RSA Conference 2024 held in San Francisco.

In his statements to DarkReading, he mentioned:

  • To combat the new wave of sophisticated deepfake technology media, content creators are urged to embed “watermarks” with immutable metadata, digital certificates, and signed files that guarantee authenticity.
  • Mandia argues that it is time to make it riskier for threat actors themselves, suggesting doubling down on sharing attribution and also naming names in order to raise the stakes for cybercriminals.
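
The signed-media idea described above can be sketched as attaching a cryptographic signature to content at creation time so that any later manipulation is detectable. The sketch below uses an HMAC as a stand-in for a real public-key digital signature and certificate chain (such as C2PA content credentials); the key and function names are illustrative assumptions, not a real signing scheme:

```python
import hashlib
import hmac

# Stand-in for a creator's private key backed by a digital certificate.
SIGNING_KEY = b"creator-secret-key"

def sign_media(media: bytes) -> str:
    """Produce a tamper-evident signature over the media's SHA-256 hash."""
    digest = hashlib.sha256(media).digest()
    return hmac.new(SIGNING_KEY, digest, hashlib.sha256).hexdigest()

def is_authentic(media: bytes, signature: str) -> bool:
    """Recompute the signature and compare in constant time."""
    return hmac.compare_digest(sign_media(media), signature)

clip = b"original interview footage"
tag = sign_media(clip)

print(is_authentic(clip, tag))                 # True  -> untouched content
print(is_authentic(clip + b" [edited]", tag))  # False -> manipulation detected
```

Any edit to the media, however small, changes its hash and invalidates the signature, which is the property watermarking and signed-file schemes rely on.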

Lohrmann: AI Deepfakes Pose Major Threat, Holistic Approach Needed to Fight Back

Dan Lohrmann is an internationally recognized cybersecurity leader, keynote speaker, and author. In his cybersecurity blog with Government Technology, he shared:

  • AI deepfakes have become a major cybersecurity threat for government organizations, with carefully articulated fake messages and sophisticated videos that are extremely difficult to distinguish from genuine content.
  • Traditional security awareness is no longer enough. We need a more holistic approach to “Human Risk Management” to change the security culture and empower employees to detect and report cyber threats.
  • Specific measures need to be taken to (re)train employees at all levels to identify inconsistencies in deepfakes. It is essential to provide them with tools and processes to verify message authenticity and to leverage AI-based, enterprise-level technologies that automatically detect deepfake scams and fraudulent content.

What is a deepfake?

This question requires us to consider the threat of deepfakes more seriously than ever before. In simple terms, a deepfake is an artificial image, video, or audio clip generated by a kind of machine learning called, as the name implies, deep learning.

Deepfake technology has legitimate uses, such as in scientific research, but it can also be used to create bogus content impersonating high-profile figures like politicians, world leaders, and celebrities to deliberately mislead people.

Deepfake Technology

Deepfake technology can seamlessly stitch anyone in the world into a video or photo in which they never participated. This is made possible by a deep learning algorithm. In technical terms, deep learning is a branch of machine learning in which an algorithm is fed examples and learns to produce output that closely resembles the examples it was trained on. Humans learn in a similar way; for example, a baby tries eating random objects and quickly discovers what is edible and what is not.

How does deepfake AI work?

Machine learning (ML) is the secret sauce behind deepfakes, making them much faster and cheaper to create. To make an AI deepfake video of someone, the creator first trains a neural network on a series of videos of the person (the victim) to give it a realistic understanding of what they look like. The supplied videos usually cover multiple angles and different lighting conditions so that the deep learning is effective. The trained network is then combined with computer-generated graphics techniques to superimpose a copy of the victim onto a different actor.
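
The training-and-superimposition workflow described above can be sketched at a conceptual level. The toy example below (pure Python, with random weights standing in for a trained network and tiny dimensions for readability) shows the classic face-swap layout: one shared encoder plus one decoder per identity, where the swap happens by decoding the actor's frame with the victim's decoder. All names and dimensions are illustrative assumptions, not a real implementation:

```python
import random

random.seed(0)

IMG_DIM, LATENT_DIM = 16, 4  # tiny stand-ins for real image/latent sizes

def random_matrix(rows, cols):
    # Random weights stand in for parameters a real training loop would learn.
    return [[random.gauss(0, 0.1) for _ in range(cols)] for _ in range(rows)]

def apply(matrix, vector):
    """Multiply a vector by a weight matrix (one fully connected layer)."""
    return [sum(w * x for w, x in zip(col, vector)) for col in zip(*matrix)]

# One shared encoder, one decoder per identity -- the classic deepfake layout.
encoder = random_matrix(IMG_DIM, LATENT_DIM)
decoder_victim = random_matrix(LATENT_DIM, IMG_DIM)  # trained on victim footage
decoder_actor = random_matrix(LATENT_DIM, IMG_DIM)   # trained on actor footage

def swap_face(actor_frame):
    # Encode the actor's frame, then decode with the *victim's* decoder:
    # after training, this yields the victim's face in the actor's pose.
    latent = apply(encoder, actor_frame)
    return apply(decoder_victim, latent)

frame = [random.gauss(0, 1) for _ in range(IMG_DIM)]  # stand-in video frame
fake = swap_face(frame)
print(len(fake))  # 16 -- a full fake frame, same shape as the input
```

The key design point is the shared encoder: because both decoders read the same latent code, pose and expression carry over from the actor while identity comes from whichever decoder is used.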

Recent Examples of AI Deepfake Scams

AI deepfake scams have become increasingly prevalent in recent years. Here are some recent examples:

Deepfake AI-generated Taylor Swift Ad Found Promoting Le Creuset Cookware

An article posted by NBC News discusses a fake AI-generated video snippet that uses Taylor Swift’s likeness to promote a Le Creuset cookware set. The ad deepfakes Swift’s voice and layers it over footage of the cookware, claiming that due to a packaging error, 3,000 cookware sets are being given away for free to her loyal fans. The ad was posted on a Facebook page called “The Most Profitable Shares” and accrued 2,300 views before being taken down by Meta. Le Creuset denied any involvement and stated that all of its giveaways and promotions are handled through its official accounts. Similar deepfake endorsements have used the likenesses of other celebrities, including Scarlett Johansson and Tom Hanks. This indicates that deepfake technology has become dramatically more accessible and advanced in recent years, raising concerns about its potential use for scams, non-consensual content, and political disinformation.

Deepfakes Used in Romance Scams Cost Victims Over $650 Million

WIRED published an article discussing how smoothly scammers are using face-swapping technology to carry out sophisticated romance scams. It focuses on the Yahoo Boys, a group of Nigeria-based scammers who specialize in romance scams. The Yahoo Boys have been experimenting with deepfake videos for more than two years, and with recent advancements and practically no cost of entry, they have shifted to real-time deepfake video calls in the last year. They use a setup of two phones and a face-swapping app, or a laptop and software, to change their appearance in real time during video calls with their victims. These scammers are also skilled social manipulators who build trust with their victims before luring them into handing over thousands of dollars. The altered videos are often obvious fakes, but some appear plausible and go undetected, especially when the victim has been socially engineered over a long period of time. According to the FBI, over $650 million has been lost to romance scams, and this number is expected to rise as the technology keeps improving.

Celebrity AI Scams: What You Need to Know to Protect Yourself

An article posted by VERIFYThis discusses deepfake videos shared across social media featuring renowned celebrities such as Selena Gomez, Jennifer Aniston, and Taylor Swift. The videos appear to show these celebrities endorsing products or running giveaways, but they are actually fakes created with AI-based software. The scammers used footage from The Wall Street Journal and Vogue interviews to train the software and produce convincing deepfakes of those celebrities. The article also shares tips for spotting fake videos, such as paying attention to body movement, background graphics, source, and context. Conducting reverse image searches and inspecting links before opening them is equally important.

Deepfake Scam Targets the CEO of WPP with AI-based Voice Clone

The head of WPP, the world’s biggest advertising group, was targeted by an elaborate deepfake scam, as reported by The Guardian. The attackers used an AI voice clone and a publicly available image of the CEO, Mark Read, to impersonate him on a Microsoft Teams call. The scam attempted to obtain money and personal details from a senior agency leader by asking them to set up a new business. The attempt was unsuccessful, and WPP confirmed that no money was lost. Deepfake attacks in the corporate world have surged massively, with AI-based voice clones defrauding banks and financial firms out of millions. Many businesses are directing their resources toward AI while simultaneously trying to counter the potential harms it enables. Deepfake audio tools now have a very low barrier to entry and are widely available, producing far more convincing and realistic imitations of a person’s voice from only a few minutes of sample audio.

 

Cybercriminals Used AI to Steal $243,000 from a UK Energy Company

An article published by the renowned security firm Trend Micro discusses an unusual case of CEO fraud involving sophisticated deepfake audio that led to the loss of US$243,000 from a UK-based energy firm. Fraudsters used AI software to imitate the voice of the CEO of the company's German parent company, tricking an employee into making an urgent wire transfer to a Hungarian supplier. The funds were later moved on to Mexican accounts and other locations. Despite these new attack methods, traditional vectors such as phishing and Business Email Compromise (BEC) remain prevalent; BEC attacks alone attempt to steal up to US$301 million per month. To avoid falling for BEC attacks, it is essential to look for red flags in business transactions, scrutinize emails, and use security tools that automatically detect suspicious links and emails.

A Hong Kong Company was Scammed of $25.7 Million Using Deepfake Technology

According to a report published by Bank Info Security, fraudsters successfully duped an employee of a multinational company based in Hong Kong into transferring HK$200 million (about $25.7 million) to their accounts using deepfake technology. The scammers staged a fake video conference call in which they impersonated the company’s CFO and directed the employee to make confidential financial transactions into undisclosed accounts. The deceived employee made 15 separate payments to five local bank accounts, and the fraud was only discovered after it was too late.

In light of this scam, the Hong Kong police have warned the public about deceptive tactics involving the use of AI technologies in online meetings.

Employee in Shaanxi Province, China Falls Victim to $258,000 Deepfake Scam

Another deepfake scam occurred in the Shaanxi province of China, where scammers used AI to impersonate an individual by manipulating video and audio of the person. As reported in a China Daily HK article, a financial employee was deceived into transferring 1.86 million yuan (equivalent to $258,000) to a fraudster who impersonated her boss over a video call. The employee reported the incident to the police after verifying with her boss that the call had been initiated by scammers. The police and relevant authorities managed to freeze 1.56 million yuan of the total transfer amount.

Deepfake Scams on the Rise in Politics: Indonesian Election and Imran Khan Cases

AI deepfakes are being used heavily in elections, with politicians and their opponents deploying them for their own gain and to spread propaganda against opposing parties. As reported by CNBC, the number of deepfake scams across the world rose tenfold year over year from 2022 to 2023 alone, according to the data verification firm Sumsub.

Ahead of the Indonesian election on February 14, a video surfaced online of the late Indonesian president Suharto advocating for people to vote for the political party he once presided over. The video went viral on social media, racking up 4.7 million views on X alone.

In a similar case, a deepfake clip of Imran Khan, the former prime minister of Pakistan, emerged around the time of the national elections, falsely conveying that his party was boycotting them.

Deepfakes of politicians and government figures are becoming increasingly common, especially with 2024 forecast to be the biggest global election year in history.

How to defend against deepfake scams?

It is clear from the above examples that the deepfake scam threat is real, growing exponentially in usage, and becoming more realistic and convincing. At the same time, the battle against deepfakes is also evolving, with new dynamics and technologies emerging to counter new threats. It is essential to familiarize yourself with these tools and technologies and integrate them into your personal and professional practices. Equipping yourself with this knowledge will protect you from the deceptive, disruptive, and destructive potential of deepfake technology.

AI deepfake scams can be detected through a variety of methods and detection techniques; the main ones are listed below:

  • Advanced Detection Tools
  • Digital Identity Security with Biometrics
  • AI & Blockchain Technologies

Advanced Detection Tools

These AI-powered cybersecurity solutions use AI algorithms to meticulously identify subtle anomalies in video and audio deepfakes, such as irregular blinking, unsynchronized lip movement, unnatural motion or expressions, a white strip in place of teeth, unnatural skin tone, or inconsistent lighting.
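
As an illustration, one of the simplest such anomaly checks, blink-rate analysis, can be sketched in a few lines. The threshold values and the detector below are illustrative assumptions, not a published algorithm; real tools derive blink events from eye-landmark tracking and combine many signals:

```python
# Humans typically blink every few seconds; early deepfakes often blinked
# far too rarely or too mechanically. These thresholds are illustrative
# assumptions, not values from a published detector.
NORMAL_BLINK_RANGE = (2.0, 10.0)  # plausible seconds between blinks

def blink_anomaly_score(blink_timestamps):
    """Return the fraction of blink intervals outside the normal range.

    `blink_timestamps` are the seconds at which blinks were detected in a
    clip (in practice they would come from an eye-landmark tracker).
    """
    if len(blink_timestamps) < 2:
        return 1.0  # too few blinks over a whole clip is itself suspicious
    intervals = [b - a for a, b in zip(blink_timestamps, blink_timestamps[1:])]
    lo, hi = NORMAL_BLINK_RANGE
    abnormal = [iv for iv in intervals if not lo <= iv <= hi]
    return len(abnormal) / len(intervals)

real_clip = [1.0, 4.5, 8.0, 13.0, 17.5]  # plausible human blink pattern
fake_clip = [1.0, 31.0, 32.0, 75.0]      # long gaps plus a twitchy double blink

print(blink_anomaly_score(real_clip))  # 0.0 -> consistent with a real person
print(blink_anomaly_score(fake_clip))  # 1.0 -> every interval is abnormal
```

Production detectors score many such cues (lip sync, lighting, skin texture) and feed them into a trained classifier rather than using fixed thresholds.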

Digital Identity Security with Biometrics

The following are some of the important deepfake detection methods that use biometric authentication:

  • Facial Recognition: Compares a video or image against a database of known individuals to verify authenticity.
  • Fingerprint Scan: Deepfakes are digital and cannot easily trick fingerprint scanners, which also rely on heat to verify that a live finger is placed on the scanner surface.
  • Palm Scan: A cutting-edge method that images the vein patterns in the hand and compares them against a database. Deepfake AI technology is still far from being able to manipulate palm scans.
  • Vein Print: Uses the random mesh of blood vessels within the body as an authentication pattern. Researchers consider a person's vein pattern very hard to replicate with deepfake AI.
  • Retinal Scan: Maps the unique pattern of the human retina using specialized devices that capture a high-resolution image with ultra-low-intensity light, which is extremely difficult for deepfake technology to mimic.
  • ECG Biometric: The electrical signals of the human heart are sufficiently distinct to identify and authenticate users; deepfakes currently have no practical way to spoof this internal biometric.

AI & Blockchain Technologies

By combining AI and blockchain technologies, it is possible to create a tamper-evident history of digital content: a cryptographic fingerprint of the original content is recorded on the blockchain at publication, and AI can later verify content against that record, highlighting discrepancies that point toward manipulation.
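
A minimal sketch of this verification flow, with a plain dictionary standing in for the distributed ledger (the names here are illustrative assumptions; a real system would anchor these records on an actual blockchain):

```python
import hashlib

# The "ledger" is modeled as an append-only mapping of content IDs to
# SHA-256 hashes recorded at publication time.
ledger = {}

def register(content_id, content: bytes):
    """Record the content's fingerprint at publication time."""
    ledger[content_id] = hashlib.sha256(content).hexdigest()

def verify(content_id, content: bytes) -> bool:
    """Check content seen later against its recorded fingerprint."""
    recorded = ledger.get(content_id)
    return recorded is not None and recorded == hashlib.sha256(content).hexdigest()

original = b"CEO statement video, frame data..."
register("press-video-001", original)

tampered = original + b" [deepfaked frames]"
print(verify("press-video-001", original))  # True  -> matches the record
print(verify("press-video-001", tampered))  # False -> manipulation detected
```

The blockchain's role is to make the recorded hashes immutable and publicly auditable, so a manipulated video cannot simply be re-registered over the original.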

Industry leader’s viewpoint on deepfake scams

Hugh Thompson, the executive chairman of the RSA Conference, is one of the leading experts on cybersecurity and privacy. He is also a member of the Aspen Cybersecurity Group and the STS Forum Council. In his statements to Fortune.com, he shared the following insights:

  • Over 40,000 people from 130 countries are expected to attend the 33rd edition of the annual RSA conference on Cybersecurity with a focus on emerging threats such as deepfake scams.
  • Scams generated by AI deepfakes are becoming increasingly sophisticated. Bad actors have also been able to implant backdoors, such as the one discovered in XZ Utils, into commonly used software, with the potential to compromise tens of thousands of companies had it not been caught.

Securus Communications posted an article that highlights the rise of deepfakes in the cybersecurity landscape. The article shared the following insights:

  • According to a survey conducted by iProov, awareness of deepfake technology grew significantly from 13% to 29% between 2019 and 2022. However, 57% of people believe they could spot a deepfake, which is unlikely to be true for a well-constructed deepfake scam. The survey also found that a whopping 80% of people are more likely to use services that take measures against deepfakes.
  • The article also mentions a US law called the DEEPFAKES Accountability Act that makes it illegal to create or distribute deepfakes without consent or proper labeling.
  • Similarly, the UK government is creating new criminal offenses through the Criminal Justice Bill to punish the taking or recording of intimate images of people without their consent.

Conclusion

The rise of generative AI and deepfake technology poses a major threat in the form of sophisticated, highly realistic online scams and fraud. As the examples cited in this article show, deepfakes can convincingly impersonate almost anyone, from company executives to political leaders and celebrities, to lure victims into transferring hefty sums of money or to spread false news and misinformation.

To counter this growing threat, a multi-layered approach is needed, utilizing AI-powered cybersecurity tools. Leading cybersecurity experts continually warn that these attacks are getting smarter by the day, with lower costs and barriers to entry. They stress the importance of implementing technical, AI-based countermeasures while also providing security awareness training to employees at all levels so they can identify deepfake scams before the damage is done.

Ultimately, governing bodies and regulatory authorities must keep updating laws around digital impersonation and non-consensual synthetic media. As deepfake scams become hyper-realistic, proactive measures are needed to preserve truth and trust online for individuals, businesses, and society at large.

For further information on Sangfor’s cyber security and cloud computing solutions, visit our website, www.sangfor.com.
