Decoding Session Security with Burp Suite Sequencer


April 09, 2025
Welcome to Day 5 of my week-long Burp Suite deep dive! Today, I explored the Sequencer tab, a tool designed to test the randomness of session tokens—critical for preventing session hijacking. Using my custom VulnHub app (a vulnerable Flask app running in Docker), I set up an automated process to collect tokens and analyzed their security. Here’s how I did it and what I learned.
What Is Sequencer?
Sequencer assesses whether tokens—like session IDs or CSRF tokens—are random enough to resist prediction. It captures a sample, runs statistical tests (bit-level and character-level), and estimates entropy (randomness in bits). Low entropy means tokens are guessable, a major security flaw. Fully available in Burp Suite Community Edition, Sequencer is a diagnostic gem for pentesters.
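To make the entropy idea concrete, here’s a toy sketch (my own illustration, not how Sequencer actually works internally) that estimates Shannon entropy per character across a token sample. Predictable tokens reuse the same few characters, so their estimate comes out low:

```python
import math
from collections import Counter

def char_entropy_bits(tokens):
    """Rough Shannon entropy (bits per character) across a token sample.
    A toy approximation of character-level randomness analysis."""
    counts = Counter("".join(tokens))
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Predictable tokens share most of their characters, so entropy is low:
weak = ["1-1712650000", "1-1712650001", "1-1712650002"]
strong = ["f3a9c2d1", "7be04a6f", "d91c5e28"]
assert char_entropy_bits(weak) < char_entropy_bits(strong)
```

Sequencer runs a far more rigorous battery of statistical tests, but the intuition is the same: the fewer surprises in the sample, the fewer effective bits of entropy.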
Setting Up Sequencer with VulnHub
My goal was to analyze VulnHub’s session tokens, which follow a user_id-timestamp format. Here’s my step-by-step setup:
Automated Token Collection
Manually logging in 100 times was impractical, so I wrote a Python script using requests:
```python
import requests

url = "http://127.0.0.1:5000/login"
payload = {"username": "admin", "password": "1234"}
headers = {"Content-Type": "application/x-www-form-urlencoded"}
num_tokens = 100
tokens = []

for i in range(num_tokens):
    # allow_redirects=False keeps the Set-Cookie from the login response itself
    response = requests.post(url, data=payload, headers=headers, allow_redirects=False)
    if "session" in response.cookies:
        tokens.append(response.cookies["session"])
        print(f"Token {i+1}: {response.cookies['session']}")
    else:
        print(f"Attempt {i+1}: No cookie")

with open("session_tokens.txt", "w") as f:
    for token in tokens:
        f.write(f"{token}\n")
```
- Ran it with python generate_tokens.py, aiming for 100 unique tokens.
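One wrinkle: if two logins land in the same second, a user_id-timestamp scheme produces duplicate tokens, which shrinks the effective sample. A small sketch of how I could restructure the collection loop to retry until it has a target number of distinct tokens (the helper and the retry cap are my own additions, not part of the original script):

```python
import requests

def collect_unique_tokens(fetch, target=100, max_attempts=1000):
    """Call fetch() repeatedly until `target` distinct tokens are collected.
    fetch() should return a session token string, or None on failure.
    max_attempts caps the loop so duplicates can't make it run forever."""
    tokens = set()
    attempts = 0
    while len(tokens) < target and attempts < max_attempts:
        attempts += 1
        token = fetch()
        if token:
            tokens.add(token)  # set membership deduplicates for free
    return tokens

def login_and_get_token():
    # Same hypothetical VulnHub endpoint and credentials as the script above
    resp = requests.post(
        "http://127.0.0.1:5000/login",
        data={"username": "admin", "password": "1234"},
        allow_redirects=False,
    )
    return resp.cookies.get("session")

# With the app running: tokens = collect_unique_tokens(login_and_get_token)
```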
Loaded Tokens into Sequencer
In Burp, I captured a /login response from Proxy > HTTP history, right-clicked it, and chose “Send to Sequencer.” Then I switched to Sequencer > Manual load, loaded session_tokens.txt, and clicked “Analyze now.”
Results and Insights
Sequencer’s bit-level analysis was stark:
Overall Result: “Extremely poor” randomness.
Effective Entropy: 0 bits at the 1% significance level, meaning the tokens are fully predictable.
Chart: Flat at 0 bits, confirming the 1-timestamp pattern.
The “sample size too small” note for character-level analysis was a hiccup; next time I’ll make sure the sample actually contains enough unique tokens.
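Zero effective entropy means an attacker doesn’t need to steal a token at all; they can enumerate it. A sketch of what that attack looks like against a user_id-timestamp scheme (the function and window size are illustrative, not something I ran against the app):

```python
import time

def candidate_tokens(user_id, around=None, window=5):
    """Enumerate plausible user_id-timestamp tokens near a guessed login time.
    With 0 bits of effective entropy, the real token is almost certainly
    inside this tiny candidate set."""
    base = int(around if around is not None else time.time())
    return [f"{user_id}-{base + offset}" for offset in range(-window, window + 1)]

# Guessing admin's (user_id=1) token around a known login time:
guesses = candidate_tokens(1, around=1712650000, window=2)
# 5 candidates to try instead of a 128-bit search space
```

Each candidate can then be replayed in a session cookie until one works, which is exactly the session-hijacking scenario Sequencer is meant to flag.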
Takeaways
Predictability Exposed: VulnHub’s tokens are guessable (e.g., 1-[timestamp+1]), a session-security fail.
Automation Wins: The script saved hours, though I still need to tweak it to guarantee 100 unique tokens.
Lesson: Random, high-entropy tokens (e.g., UUIDs) are non-negotiable.
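For contrast, here’s what high-entropy token generation looks like using Python’s standard library (this is the standard approach, not VulnHub’s code):

```python
import secrets
import uuid

# secrets draws from the OS CSPRNG; 32 bytes = 256 bits of randomness,
# far beyond any feasible guessing attack
session_token = secrets.token_urlsafe(32)

# uuid4 carries 122 random bits; also unguessable in practice
session_uuid = str(uuid.uuid4())
```

Either of these would push Sequencer’s verdict from “extremely poor” toward the top of the scale, because no statistical pattern links one token to the next.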
Written by abishekvengeri
Cybersecurity Enthusiast | CTF Creator | Ethical Hacking Learner. Passionate about cybersecurity, CTF challenges, and ethical hacking. Sharing my journey, experiences, and lessons as I explore the world of security.