Insider Threats, AI and Social Engineering: The Triad of Modern Cybersecurity Threats

Insider threats have become one of the most complex and damaging forms of cyberattacks in today’s digital landscape. These threats specifically target critical organizational data and are often difficult to detect because they originate from individuals who already have authorized access. Whether due to malicious intent or unintentional actions, insider threats can lead to severe financial, reputational, and operational consequences for organizations.
Using use case engineering to model and understand potential insider threat scenarios is crucial for proactive risk mitigation. This approach enables organizations to simulate and analyze how different types of insider behavior could lead to data compromise, helping to design more effective detection and prevention strategies.
Insiders can be broadly categorized into the following groups:
Negligent Employees: Individuals whose carelessness or lack of security awareness results in accidental data leaks.
Vulnerable Employees: Employees susceptible to psychological manipulation, making them prime targets for social engineering attacks.
Risk-Taking Employees: Individuals who knowingly bypass security protocols, putting the system at risk for efficiency or convenience.
Malicious Employees: Insiders with deliberate intent to harm the organization, often for personal gain or revenge.
Ideology-Driven Employees: Individuals who expose sensitive data to support a political, religious, or social agenda.
The evolving threat landscape indicates that insider risks can arise from employees acting on their own or being manipulated by external adversaries. The blend of internal access and external influence often creates scenarios that traditional security tools fail to identify. Use case engineering helps bridge this gap by allowing security teams to create detailed insider threat models, simulate attack paths, and assess the impact of specific user behaviors under varying threat conditions.
A prominent example involves the cyber-criminal group Famous Chollima, which infiltrated organizations by recruiting insiders. The group forged identities for individuals, placed them within target companies, and used their internal access to exfiltrate sensitive information. Traditional defenses were bypassed because the threat actors appeared as legitimate employees from the outset.
These campaigns often incorporate advanced social engineering techniques such as pretexting and impersonation, further complicating detection. In such cases, case engineering is vital to anticipate how trusted insiders might be manipulated and how access privileges could be misused under realistic threat conditions.
Ultimately, addressing insider threats requires more than reactive controls—it demands predictive and scenario-based modeling. By integrating use case engineering into their cybersecurity strategies, organizations can better anticipate insider behaviors, detect anomalies early, and design robust defenses against this growing category of cyber risk.
The Perfectly Engineered Insider
Despite social engineering being a common cyber-attack and security awareness, social engineering still works because human error is the weakest link to bypass security controls. Insider threats pose dangerous risks because of its complex methodology that eludes security technologies. It is especially true in cases where the insiders are tricked or targeted by cyber criminals to reveal company information.
Both modern insider threats, where individuals maliciously exfiltrate or disclose company information for personal gain, and accidental breaches resulting from employee negligence are concerning. Additionally, sophisticated targeted insider threats exist, wherein external actors use employees to unintentionally facilitate security breaches through complex schemes.
So, what makes targeted insider threats successful? The initial phase involves reconnaissance to gather detailed information about potential insider targets. Social engineering attacks are the best approach for cybercriminals to gather personal information about the insiders that can be used against them to get sensitive data from inside the organization.
It is also true for malicious insiders who might conduct insider threat reconnaissance with a combination of engineering and attack techniques to get the information like reaching out to someone having access to the desired information.
Some examples of activities malicious insiders may perform apart from accessing unauthorized systems include:
Asking for access to confidential data
Intentionally make security errors or ignore security protocols
Performing tasks usually done by other departments
Transferring/Copying files to an external USB
Under the umbrella of threat reconnaissance comes the type of reconnaissance when an attacker carries out extensive research on an employee of the organization. Attackers conduct deep reconnaissance on workers to extract information from them using a variety of social engineering techniques that include phishing, pharming, spear phishing, or whaling.
Leveraging AI platforms, threat actors looking to target employees benefit from rapid results, bypassing the need for time-consuming efforts typically associated with traditional persistent attacks. The prevalence of social media amplifies the risk, as data mining and scraping techniques can harvest insider information. External actors use this data to devise complex schemes with potentially severe repercussions for the organization. Adversaries can easily utilize this gathered intelligence to either groom a company insider to get involved in an information exchange deal or manipulate employees into inadvertently disclosing sensitive information.
Use of AI in Insider Threat Attacks
Threat actors can use LLM (Language Learning Models) and Gen AI to expedite and amplify social engineering attacks, achieving speed and scale that precedes manual efforts.
Recently, there have been reports of cyber actors using AI/gen AI to create visual content that attempts to trick their targets into establishing means of gaining access to company data. For example, a fraudster can pose as an IT worker, trick the company with deepfake videos in interviews, submit false resumes, and enter the organization. Once they enter, they can act as an insider, deploy malware, and cause data breaches. Moreover, threat actors can automate manual steps and increase attack efficiency with Gen AI and machine learning algorithms. For example, to facilitate the success of a phishing attack on a targeted insider, an attacker can manipulate AI technologies like ChatGPT to generate email content that sounds original to the insider, tricking them into opening the mail.
Other cases that are soaring with AI: are accidental insider threats, when employees share sensitive data with LLM models, like ChatGPT, these models could be hacked, leading to a potential data breach.
In industry sectors such as banking and technology, where customer and proprietary data are highly valuable, gaining internal access to this information without launching an overt cyber-attack results in huge repercussions. All it takes is for one insider to use AI automation to gather confidential company data and leak it on the dark web or sell it to external cyber actors. Combined with other cyberattacks, it could lead to a full-blown breach.
Since AI can mimic user behavior, it is hard for security teams to detect the difference between normal activity and AI-generated activity. AI can also be used by insiders to assist in their plans, such as like an insider could use AI or train AI models to analyze user activity and pinpoint the window of least activity to deploy malware onto a critical system at an optimal time and disguise this activity under a legitimate action, to avoid detection with monitoring solutions.
Another risk of using Gen AI is the potential information leak from the AI model itself. Some AI models may use past recorded conversations with their users for training, which leads to accidental data leaks if employees reveal sensitive data in their messages with the AI models. Companies like Samsung and JPMorgan have restricted the use of gen AI due to the risk of accidental data breaches and the possibility of data being used to train AI models.
Vulnerabilities in the AI model could also result in data breaches. For example, a poorly configured app with vulnerable AI models could unintentionally expose sensitive data like passwords or credentials through prompt injection techniques that override the original behavior of the AI. There are real-world examples of accidental data leaks through AI chatbots, AI models being manipulated into delivering inaccurate results, as well as using AI to expose user information. For example, customer-facing AI bots can be manipulated to obtain refunds greater than the original amount.
Insiders act covertly, so detecting them might be challenging even for skilled security experts. With AI, insiders have better ways to cover their tracks and avoid detections. One “deceptive AI model” that pursues information while hiding its true nature will do the trick. In time, attackers using AI as potential “insiders” would no longer be fiction.
Engineered Insiders and External Threat Actors
With AI, many threat actors leverage social engineering to achieve goals rather than just using malware. Threat actors can target employees inside a company, use them as “insiders” to execute their malicious intentions and use them to commit data breaches. The human element often serves as a vulnerability in most data breach cases, so external actors can use this weakness through social engineering and AI to find ways to crumble an organization’s data security. In the modern threat landscape, this leads to a complex pattern where internal and external threats become one with a mix of tactics and attack vectors.
Attackers may employ various tactics to compromise employees inside a company, utilizing psychological techniques like blackmail, coercion, and emotional manipulation, and social engineering techniques like phishing, pharming, quid pro quo, and baiting. They may identify employees who display dissatisfaction with their employer and manipulate them into becoming the attacker’s help from the inside.
Generally, there are two ways an attacker can target employees: one is to use them as “unintentional insiders” for compromised access and sensitive data leaks or turn them over time to commit breaches intentionally. The former involves a mix of social engineering techniques and attack tactics by attackers, whereas the latter involves psychological techniques like motivation to covertly influence the employee to leak information, as seen in cyber espionage cases.
Last year, the Chief Financial Officer of a company allegedly contacted a worker from the finance department of a multinational company in Hong Kong via email for a transaction. This worker was initially skeptical of this email and thought it was a phishing scam. The worker was then contacted for a video call with the officer and other employees of his company. It dismissed his suspicions, and the transactions were done. The later investigations after the worker discovered that his employees did not contact him revealed that this incident was an intricate scam incorporating two of the most prevalent social engineering tactics: phishing and whaling. Except this time, attackers used AI integrated with these techniques. The attackers initially crafted a meticulous email and sent it under a legitimate address to convince the employee it had come from the CFO. Later, they drew this worker to a call with multiple employees of his company with advanced deepfake technology that digitally recreated his employees in an artificial virtual environment. It clearly shows how threat actors craft elaborate and sophisticated attacks with social engineering and AI to target workers of an organization. Given the meticulous approach in this case, the attackers would have conducted thorough reconnaissance on the company’s personnel, and it helped them execute a structured operational crime.
However, there are cases when insiders agree to sell information due to the influence of adversaries. These cases may differ from malicious insiders; for example, disgruntled employees actively looking to cause consequential losses to their organizations. The distinction between these cases lies in their motivation: one category gets compromised by external parties while the other actively intends to harm their organizations.
For example, there are reports that ransomware gangs actively seek to recruit insiders to help them breach their organization’s cyber defenses. While most insiders will not be compromised, with a targeted approach, employees may become influenced to violate their organization’s security protocols. While this may sound farfetched, we cannot entirely rule this out from a technical standpoint. On the other hand, there are reports of threat actors who trick companies into hiring them and then infiltrate their organizations to commit espionage or other cybercrimes. It is a clear example of how malicious insiders use their access to breach the security of their companies.
Conclusion
It is paramount for security experts to understand the insider threats that they may deal with, whether they are intentional or accidental. Insiders who are compromised or influenced cannot be addressed with security solutions that do not consider human elements; it may require a comprehensive approach that incorporates behavioral analysis. Therefore, insider risk management must incorporate AI and security tools to cover technical issues and human factors within the organization.
To combat the sophisticated and challenging landscape of modern insider threats, where malicious and even unintentionally compromised employees can leverage AI and social engineering, organizations need advanced solutions for comprehensive visibility. By leveraging expertise and integrated technologies for monitoring and analysis, CyberProof can help organizations gain critical visibility into user behavior and potential anomalies, which is essential for detecting both malicious and accidental insider threats to better thwart attacks and safeguard sensitive data.
Subscribe to my newsletter
Read articles from Cyber proof directly inside your inbox. Subscribe to the newsletter, and don't miss out.
Written by

Cyber proof
Cyber proof
CyberProof is a global cybersecurity services provider specializing in Managed Detection and Response (MDR), Extended Detection and Response (XDR), and comprehensive security solutions. They assist organizations in proactively identifying, assessing, and mitigating cyber threats to enhance security posture and ensure compliance with industry regulations.