When Layer 2 Strikes Back — A Real-World ARP Attack Case from the SOC Floor

1. The Day the Helpdesk Lit Up

Tuesdays in the SOC are usually the most mundane of days. The hum of the AC, the quiet whir of monitors, the occasional clatter of a keyboard as someone closes out tickets. That's as exciting as it gets.

At 10:42 AM (yes, I remember the exact time), that all changed.

The helpdesk wallboard turned from green to blazing red. Phones started ringing non-stop. Our dashboard promptly filled up with tickets, all of them from Finance:

  • “VPN disconnecting every few minutes”

  • “ERP login fails, reports credentials invalid”

  • “Outlook keeps asking for password”

  • “Citrix session continues to freeze”

By 10:44 AM there were 14 open tickets. Something big was happening, and it was hitting Finance right between the eyes.

SOC Slack Bridge

[10:43] Helpdesk_T1: Finance floor reporting network instability. Multiple users down.  
[10:44] NetOps: No alerts on core firewall. All WAN links stable.  
[10:45] SOC: No spikes in Palo Alto threat logs. No malware hits on EDR. Investigating.

At first glance, everything looked fine. Cisco Prime Infrastructure had all the access switches in green. No spanning-tree events. No CPU spikes.

The firewall (Palo Alto PA-5220) looked good too — a few reconnects, nothing to worry about. No brute-force attempts, no failed-auth storms.

And still, something didn't feel right.

SOC Analyst Note:

“If the tools say ‘all green’ and the users are crying, trust the users.”

10:46 AM – my phone rang. A finance analyst was on the line:

“I can log into ERP, but it times out on me after 30 seconds. Same with Outlook. Even Teams is crashing repeatedly.”

Then she said something that made my stomach drop to my ankles:

“Oh, and when I went to ERP, the little padlock icon in Chrome was missing. Thought it was a bug.”

The missing padlock was our first real lead. An internal app served over HTTPS suddenly loading without TLS? Not a bug. That was a warning.

SOC Analyst Note:

A missing padlock on internal HTTPS apps → usually a sign of a man-in-the-middle downgrading the connection.

10:47 AM – Pulled the DHCP logs. Clean as a whistle. No rogue DHCP servers. No mass lease churn.

So if it wasn’t the firewall, wasn’t DHCP, and wasn’t the WAN…
Then the fire wasn’t at Layer 3.
It was smoldering below, at Layer 2.

2. First Clues — Something’s Off

I SSH'd to the Finance access switch and typed:

show arp | include 10.10.50.

The gateway IP 10.10.50.1 was being mapped to two different MAC addresses — flip-flopping back and forth.

That's not supposed to happen. Ever.

And when it does, there is only one real reason: ARP poisoning.
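One quick way to pin a suspect MAC to a physical port on a Catalyst switch (the address below is a placeholder, not the one from this incident):

show ip arp 10.10.50.1
show mac address-table address aaaa.bbbb.cccc

The second command tells you which interface the suspect MAC was learned on, which is exactly the port you'll want to isolate later.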


3. Digging Deeper — Confirming ARP Poisoning

I mirrored VLAN 50 to a monitoring port and fired up Wireshark:

monitor session 1 source vlan 50
monitor session 1 destination interface Gi1/0/48

Then I filtered for ARP anomalies using Wireshark's duplicate-address detection:

arp.duplicate-address-detected

Within seconds, there it was: a steady stream of unsolicited ARP replies claiming the gateway address 10.10.50.1, quietly rewriting every host's ARP cache to point at the attacker's MAC.

The offender? A workstation: an HP EliteDesk assigned to a visiting contractor.

And that wasn't all. The capture also showed SSLStrip in action: HTTPS traffic being downgraded to plain HTTP, with internal credentials crossing the wire in cleartext.
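For the incident report, the same evidence can be pulled straight out of the capture with tshark (the capture filename here is illustrative):

tshark -r finance_vlan50.pcapng -Y "arp.duplicate-address-detected" \
  -T fields -e frame.time -e arp.src.hw_mac -e arp.src.proto_ipv4

Every line claiming the gateway's IP from the wrong sender MAC is another poisoned reply, timestamped and ready for the timeline.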

Figure: ARP poisoning attack sequence. The attacker sends spoofed ARP replies so that Finance VLAN traffic flows through their machine.

SOC Analyst Note:

This wasn't a script kiddie flooding the network. The attacker was deliberate: selective poisoning, low noise. They wanted to stay in place.

4. Containment — Locking It Down

Once we had our culprit, we moved fast:

  • Shut down the port:

      interface Gi1/0/14
      shutdown
    
  • Blocked the attacker's MAC on every access switch (one way to do this is sketched after this list).

  • Disabled contractor's AD account.

  • Pulled the machine for forensic imaging.
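For the MAC-blocking step, one option on Catalyst switches that support static drop entries looks like this (the address is a placeholder for the contractor machine's real MAC):

mac address-table static aaaa.bbbb.cccc vlan 50 drop

With that entry in place, frames to or from that address on VLAN 50 are dropped in hardware, even if the machine reappears on a different port.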

The noise stopped immediately.

SOC Analyst Note:

Contain fast, but contain smart. We cut the network path before tipping off the attacker; give them warning and they adapt.

5. Root Cause

The contractor had been running “Bettercap” inside a Kali VM. And our network made it easy:

  • 802.1X authentication disabled

  • Dynamic ARP Inspection (DAI) is not enabled

  • No MAC limiting on ports

The switches accepted any ARP reply they received, and the attacker took advantage of exactly that.
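For context only (none of this is taken from the forensic image), the ARP spoof plus SSL-strip combination maps to a handful of interactive Bettercap 2.x commands, roughly:

set arp.spoof.targets 10.10.50.0/24
arp.spoof on
set http.proxy.sslstrip true
http.proxy on

With nothing on the access layer validating ARP, that's about all it takes to sit between a VLAN and its gateway.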

6. Hardening the Network

After we finished panicking, we secured Layer 2 in Finance:

  • Dynamic ARP Inspection (DAI), which validates ARP packets against the DHCP snooping binding table:

      ip arp inspection vlan 50
      interface Gi1/0/48
       ip arp inspection trust
    
  • Port Security: one MAC per access port (see the interface sketch after this list).

  • BPDU Guard: shuts down any access port that receives spanning-tree BPDUs, blocking rogue switches.

  • 802.1X NAC: authenticate every device before the port passes traffic.

  • SIEM Integration: Switch logs → Splunk with ARP anomaly alerts.
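Here's a minimal sketch of the per-port side of that list (port security, BPDU Guard, 802.1X), assuming a Catalyst access switch; the interface range, violation mode, and the AAA/RADIUS configuration behind 802.1X are site-specific and mostly omitted:

aaa new-model
dot1x system-auth-control
!
interface range Gi1/0/1 - 47
 switchport mode access
 switchport port-security
 switchport port-security maximum 1
 switchport port-security violation restrict
 spanning-tree portfast
 spanning-tree bpduguard enable
 dot1x pae authenticator
 authentication port-control auto

DAI and port-security violations also land in syslog (on Catalyst IOS, messages like %SW_DAI-4-DHCP_SNOOPING_DENY), which gives the Splunk alerting something concrete to key on.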

Figure: Network hardening, showing Layer 2 defenses before and after the incident response.

SOC Analyst Note:

802.1X plus DAI would have stopped this attack outright. Layer 2 security is no longer optional.

7. Lessons Learned

  • Layer 2 attacks escape detection by nearly every SOC dashboard.

  • A MITM attack can happen inside your own walls without ever touching Layer 3.

  • Switch logs and ARP tables are gold, if you're actually collecting them.

This is how the investigation proceeded:

Figure: SOC triage path, from user complaints to ARP investigation, packet capture, and containment.

Final Takeaway

If your SOC can’t see Layer 2, you’re fighting blind on half the battlefield.

This attacker didn’t need zero-days, fancy malware, or nation-state resources.
They just exploited the one thing every network still gives by default — trust, the oldest vulnerability in networking.
