How to Combat Deepfakes in the Age of AI

Attackers are now leveraging Large Language Models (LLMs) to impersonate humans and automate social engineering at scale. This article covers the current state of these attacks and, more importantly, how to prevent them rather than merely detect them.

The person video calling you might not be real!

Recent threat intelligence reports show the increasing sophistication and prevalence of AI-driven attacks:

  • Voice Phishing Surge: According to CrowdStrike’s 2025 Global Threat Report, voice phishing (vishing) attacks increased by 442% between the first and second half of 2024, fueled by AI-generated voice spoofing and phishing techniques.

  • Prevalence of Social Engineering Techniques: Verizon’s 2025 Data Breach Investigations Report shows that social engineering remains one of the most common breach patterns, with phishing and pretexting accounting for a large share of incidents.

  • North Korea's Deepfake Campaign: North Korean threat groups have been documented using deepfake technology to build fake identities for remote job interviews, with the goal of landing positions and infiltrating target organizations.

In this new era, trust cannot rest on intuition or assumption; it must be verified clearly, accurately, and in real time.

Why is this issue becoming increasingly serious?

Three main reasons explain why AI-powered impersonation is becoming a common attack method:

  1. AI reduces the cost of scams: With open-source voice- and video-cloning tools, attackers can impersonate anyone from just a few seconds of sample audio or video.

  2. Remote work exposes trust gaps: Tools like Zoom, Teams, or Slack have no built-in mechanism to verify that the person using an account is actually its legitimate owner.

  3. Defensive measures often rely on probability rather than evidence: Deepfake detection tools analyze facial and audio cues to estimate how likely it is that someone is real. A probabilistic guess is not good enough in high-risk environments.

Although end-to-end security tools and user training programs help mitigate risk, they are not designed to answer the core question in real time: is the person you are interacting with really who they claim to be?

AI detection technology is not enough

Traditional defensive measures focus on detection: training users to recognize suspicious behavior, or using AI to analyze whether someone is fake.
But deepfakes are getting too good, too fast. You cannot counter AI-generated impersonation with probability-based tools alone.
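
To see why probability alone falls short, consider a back-of-the-envelope calculation. The numbers below are illustrative assumptions, not measured data: even a detector that catches 95% of deepfakes still lets some through every day at enterprise scale.

```python
# Illustrative assumption-based arithmetic, not measured data.
calls_per_day = 10_000     # video calls screened by a deepfake detector
attack_rate = 0.01         # assume 1% of calls are impersonation attempts
detector_recall = 0.95     # assume the detector catches 95% of deepfakes

missed_per_day = calls_per_day * attack_rate * (1 - detector_recall)
print(f"Deepfakes expected to slip through daily: {missed_per_day:.0f}")  # 5
```

Five successful impersonations a day is five too many when a single one can authorize a wire transfer.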

True prevention requires a different foundation, one built on provable evidence rather than assumptions. This includes:

  • Identity Verification: Only verified, authorized users should be able to join sensitive meetings or conversations, authenticated with cryptographic certificates rather than passwords or one-time codes (a minimal sketch follows this list).

  • Device Integrity Checks: A device that is infected with malware, jailbroken, or out of compliance remains an entry point for attackers even when its user's identity is verified. Block such devices from meetings until they are remediated (see the posture-gate sketch below).

  • Visible Trust Indicators: Other participants need to see clear evidence that each person in the meeting is who they claim to be and is using a secure device.
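
As a concrete illustration of certificate-based verification, here is a minimal challenge-response sketch in Python using the cryptography library. It assumes each participant holds an RSA certificate; the function names are hypothetical, and a real deployment would also validate the certificate chain and revocation status.

```python
# pip install cryptography
# Sketch: the meeting service sends a random nonce; the participant signs
# it with their private key; we verify the signature against the public
# key in their certificate. The result is binary, not a probability.
import os

from cryptography import x509
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding

def issue_challenge() -> bytes:
    # A fresh random nonce per session prevents replay of old signatures.
    return os.urandom(32)

def verify_participant(cert_pem: bytes, challenge: bytes, signature: bytes) -> bool:
    # Assumes an RSA certificate; chain and revocation checks are omitted.
    cert = x509.load_pem_x509_certificate(cert_pem)
    try:
        cert.public_key().verify(
            signature, challenge, padding.PKCS1v15(), hashes.SHA256()
        )
        return True
    except InvalidSignature:
        return False
```

Because the outcome is a signature check, it is binary: either the participant controls the private key bound to their identity, or they do not.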

Prevention means creating conditions where impersonation is not just difficult but practically impossible.
This is how you stop AI deepfake attacks before they reach high-risk conversations such as board meetings, financial transactions, or supplier negotiations.
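
The device-integrity check from the list above can be expressed as a simple admission gate. This sketch assumes posture attributes are already reported by an MDM or EDR agent; the attribute names are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class DevicePosture:
    # Hypothetical posture attributes, as reported by an MDM/EDR agent.
    os_patched: bool
    disk_encrypted: bool
    jailbroken: bool
    edr_running: bool

def admit_to_meeting(posture: DevicePosture) -> bool:
    # Deny entry until every compliance condition holds.
    return (
        posture.os_patched
        and posture.disk_encrypted
        and posture.edr_running
        and not posture.jailbroken
    )

# A jailbroken device is denied even when the user's identity is verified.
print(admit_to_meeting(DevicePosture(
    os_patched=True, disk_encrypted=True, jailbroken=True, edr_running=True
)))  # False
```

The gate complements identity verification: a genuine user on a compromised device is still a risk, so both checks must pass before a participant is admitted.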

Recommendations

FPT Threat Intelligence recommends that organizations and individuals take the following measures to defend against these dangerous attack campaigns:

  • Strong Identity Authentication: Prioritize using identity verification mechanisms based on verifiable evidence (e.g., cryptographic certificates), rather than just relying on passwords or access codes.

  • Endpoint Device Integrity Assessment: Establish mechanisms to check the safety and compliance of devices before allowing participation in sensitive sessions, to minimize risks from untrustworthy devices.

  • Display Clear Verification Indicators: Provide transparent identification signals to help users determine the identity and safety level of participants in digital collaboration environments.

  • Design Systems for Proactive Prevention: Minimize the possibility of impersonation from the outset by setting conditions where fraudulent behavior becomes impossible, rather than just detecting it after a breach occurs.
