🥑AI & Cyberthreats | Separating Fear from Reality👨‍💻

Ronald Bartels
4 min read

The rapid rise of AI-powered solutions has sparked both excitement and concern in the cybersecurity industry. With tools like Deepscan making headlines and warnings being issued about the secure use of AI, many worry that AI is amplifying cyber risks. But is this really the case?

While AI does introduce new considerations, it does not inherently increase the risk of malware and trojans. Traditional malicious software—including trojans, keyloggers, and remote access tools—has always operated by mimicking normal programs. AI doesn't change this core principle; it simply makes the process more automated and efficient.


AI’s Role in Cybersecurity Threats

1. AI Does Not Directly Amplify Malware Risks

Malware is effective because it disguises itself as legitimate software. Whether generated by AI or written by a human, a remote access tool can be either useful or malicious, depending on its intended use. AI does not inherently make malware more dangerous, but it can assist cybercriminals by:

  • Automating malware generation to avoid detection.

  • Generating social engineering attacks (e.g., phishing emails) that are more convincing.

  • Creating polymorphic malware that changes its code dynamically to evade antivirus detection.

However, these techniques are not entirely new—they are simply evolving with AI assistance. Threat actors have always adapted to new technologies, and AI is just another tool in their arsenal.

2. The Real Risk | Blindly Sharing Data with AI

One of the biggest threats posed by AI isn’t malware generation, but rather unintentional data leaks. Many AI tools require users to submit data for processing, and in some cases, that data is stored and used to improve the AI model.

A common example:

  • A developer submits proprietary company code to ChatGPT or Deepscan for debugging or improvement.

  • That code is then stored and potentially used to train future AI models.

  • If the AI tool allows public access, other users might receive suggestions based on that leaked code.

The issue here isn’t AI itself, but rather how AI services handle user data.
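
To make the mitigation side of this concrete, the sketch below strips obvious secrets (API keys, email addresses, internal hostnames) from a snippet before it is pasted into any public AI tool. This is a minimal illustration only: the regex patterns and the redact_before_submission helper are hypothetical, not part of any AI service's API, and a real deployment would tune them to the organisation's own data.

```python
import re

# Hypothetical patterns for data that should never leave the organisation.
# Real deployments would tune these to their own secrets and naming schemes.
REDACTION_PATTERNS = {
    "api_key": re.compile(r"(?i)(api[_-]?key|secret)[\"'\s:=]+[A-Za-z0-9_\-]{16,}"),
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "internal_host": re.compile(r"\b[\w-]+\.corp\.example\.com\b"),
}

def redact_before_submission(snippet: str) -> str:
    """Replace sensitive matches with placeholders before sending text to an AI tool."""
    for label, pattern in REDACTION_PATTERNS.items():
        snippet = pattern.sub(f"<REDACTED_{label.upper()}>", snippet)
    return snippet

if __name__ == "__main__":
    code = 'db_password = "s3cr3t"  # contact admin@corp.example.com, host db01.corp.example.com, api_key=abcd1234abcd1234abcd'
    print(redact_before_submission(code))
```

The same idea can be enforced centrally by a DLP gateway rather than left to individual developers, which is picked up again in the mitigation section below.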


Understanding the Risk Profile

1. Public Cloud Tools (Including AI) Carry Inherent Risks

Many AI tools operate on public cloud platforms—just like GitHub, Dropbox, and Google Drive. Any unsecured public cloud service can potentially leak sensitive information. The main risks include:

  • Data retention policies: AI tools may store inputs indefinitely.

  • Data exposure: AI services that use user inputs to train models may unintentionally expose proprietary data.

  • Compliance issues: Using AI without oversight may violate data protection laws (e.g., GDPR, POPIA).

2. AI is Not a Unique or Better Source for Malicious Code

  • Cybercriminals have long used forums, dark web marketplaces, and code repositories like GitHub to find malware.

  • AI is not creating new threats—it is just another tool that bad actors can exploit.

  • The real danger lies in how AI is used, not the existence of AI itself.


Mitigation | How Businesses Should Address AI Risks

Since AI is neither inherently good nor bad, businesses need to implement clear policies around its use.

1. Policy & Training for AI Usage

  • Educate employees about the risks of sharing sensitive data with AI.

  • Restrict AI tool usage for proprietary data unless explicitly permitted by company policy.

  • Establish clear guidelines for what data can be processed by AI and what must remain internal.

2. Secure AI Implementations

  • Use private AI deployments where possible (e.g., self-hosted LLMs instead of public ChatGPT); a minimal sketch follows this list.

  • Verify data handling policies before using any AI tool in a business environment.

  • Implement Data Loss Prevention (DLP) solutions to detect and block sensitive data submissions to AI services.
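
As a rough illustration of the private-deployment option, the sketch below sends a code-review prompt to a locally hosted model rather than a public service. It assumes a self-hosted Ollama instance listening on its default local port with a model such as llama3 already pulled; both the endpoint and the model name are assumptions, so adjust them to whatever local stack is actually in use.

```python
import json
import urllib.request

# Assumes a self-hosted Ollama instance on its default local port with a model
# such as "llama3" already pulled; prompts and code never leave the local network.
OLLAMA_URL = "http://localhost:11434/api/generate"

def ask_local_llm(prompt: str, model: str = "llama3") -> str:
    """Send a prompt to the local model and return its full (non-streamed) reply."""
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    request = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read())["response"]

if __name__ == "__main__":
    print(ask_local_llm("Review this function for obvious bugs: def add(a, b): return a - b"))
```

Keeping the model on-premises means the proprietary code in the prompt never reaches a third-party service, which removes the data-retention concern described earlier.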

3. Monitor and Adapt

  • Track AI-related activity on corporate networks to detect potential data leaks (a simple sketch follows this list).

  • Keep up with AI security advisories and adapt policies as AI evolves.
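
To make the monitoring point concrete, here is a minimal sketch that scans a web-proxy log for traffic to well-known public AI services. The log format and the domain watch-list are assumptions for illustration; in practice the list would be maintained alongside the AI usage policy and the alerts fed into existing SIEM or DLP tooling.

```python
# Hypothetical watch-list of public AI service domains; extend to match policy.
AI_SERVICE_DOMAINS = {"chat.openai.com", "api.openai.com", "gemini.google.com", "claude.ai"}

def flag_ai_traffic(log_lines):
    """Yield (user, domain) pairs for proxy-log lines that touch a watched AI domain.

    Assumes a simple space-separated log format: '<timestamp> <user> <domain> <bytes>'.
    """
    for line in log_lines:
        fields = line.split()
        if len(fields) < 3:
            continue
        user, domain = fields[1], fields[2]
        if domain in AI_SERVICE_DOMAINS:
            yield user, domain

if __name__ == "__main__":
    sample_log = [
        "2025-02-01T09:12:03 alice chat.openai.com 48210",
        "2025-02-01T09:12:41 bob intranet.example.local 1024",
    ]
    for user, domain in flag_ai_traffic(sample_log):
        print(f"AI service access: {user} -> {domain}")
```

Flagged entries can then be reviewed against policy rather than blocked outright, which keeps the response proportionate.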


Wrapping Up

AI is a powerful tool, but its risks are often misunderstood. While it can be used to automate cyberattacks, it does not inherently amplify the risk of malware and trojans. Instead, the real danger lies in how AI is used, particularly in sharing sensitive data with AI services.

Businesses should not fear AI, but they must adopt smart policies and security measures to ensure that AI is used safely and responsibly.
