Day 10: Automating Log Analysis and Report Generation with Bash Scripting
In today's challenge, we dive into building a Bash script that automates log analysis and report generation. For a system administrator, handling log files and extracting meaningful insights from them is an essential skill. This task will help you automate the process of analyzing server logs, identifying critical events, and summarizing the information into a report.
Step 1: Set Up Your Script
Let's create the Bash script log_analyzer.sh and make it executable.
Create the Script File:
nano log_analyzer.sh
Add the Shebang: Start the script with a shebang to specify that it should be executed using the Bash shell.
#!/bin/bash
Step 2: Check for Log File Input
Ensure the script takes the log file path as a command-line argument and verifies if the file exists.
# Check if the log file path is provided as an argument
if [ -z "$1" ]; then
echo "Error: Please provide a log file path. Usage: $0 /path/to/logfile"
exit 1
fi
LOG_FILE="$1"
# Check if the specified log file exists
if [ ! -f "$LOG_FILE" ]; then
echo "Error: Log file not found: $LOG_FILE"
exit 1
fi
Step 3: Count Error Messages
Analyze the log file to find error messages. We'll search for lines containing "ERROR" or "Failed" and count them using grep and wc.
# Count lines containing "ERROR" or "Failed" (case-insensitive)
ERROR_COUNT=$(grep -i "ERROR\|Failed" "$LOG_FILE" | wc -l)
echo "Total Error Count: $ERROR_COUNT"
Step 4: Find Critical Events
Search for lines containing the keyword "CRITICAL" and display them with their line numbers using grep -in. This makes it easy to locate severe issues quickly.
# Find lines containing "CRITICAL" along with their line numbers
echo "Critical Events:"
grep -in "CRITICAL" "$LOG_FILE"
Step 5: Identify the Top 5 Error Messages
Display the most common error messages by sorting and counting occurrences using sort, uniq, and head.
# Display the top 5 most frequent error messages
echo "Top 5 Error Messages:"
grep -i "ERROR\|Failed" "$LOG_FILE" | sort | uniq -c | sort -nr | head -5
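To see what this pipeline does, here is a small standalone demonstration on a few hypothetical log lines (the sample messages are invented for illustration):

```shell
#!/bin/bash
# Feed three hypothetical log lines through the same pipeline:
# sort groups duplicate lines together, uniq -c prefixes each unique
# line with its count, and sort -nr ranks the counts highest first.
printf 'ERROR disk full\nFailed login for root\nERROR disk full\n' \
  | sort | uniq -c | sort -nr | head -5
```

The most frequent message ("ERROR disk full", count 2) appears first, which is exactly how the script surfaces the top recurring errors.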
Step 6: Generate a Summary Report
Create a detailed report with all the gathered information and save it to a file called log_report.txt.
# Generate a report with the analysis summary
REPORT_FILE="log_report.txt"
echo "Generating Summary Report: $REPORT_FILE"
echo "---------------------------------------" > "$REPORT_FILE"
echo "Date of Analysis: $(date)" >> "$REPORT_FILE"
echo "Log File: $LOG_FILE" >> "$REPORT_FILE"
echo "Total Lines Processed: $(wc -l < "$LOG_FILE")" >> "$REPORT_FILE"
echo "Total Error Count: $ERROR_COUNT" >> "$REPORT_FILE"
echo "" >> "$REPORT_FILE"
echo "Top 5 Error Messages:" >> "$REPORT_FILE"
grep -i "ERROR\|Failed" "$LOG_FILE" | sort | uniq -c | sort -nr | head -5 >> "$REPORT_FILE"
echo "" >> "$REPORT_FILE"
echo "Critical Events:" >> "$REPORT_FILE"
grep -in "CRITICAL" "$LOG_FILE" >> "$REPORT_FILE"
echo "---------------------------------------" >> "$REPORT_FILE"
echo "Summary Report Generated Successfully!"
Step 7: Archive or Move Processed Log Files (Optional)
After processing, you can move the log file to a backup directory to keep things organized.
# Move the log file to an archive directory for processed logs
BACKUP_DIR="/var/log/processed_logs"
mkdir -p "$BACKUP_DIR" # Create the directory if it doesn't exist
mv "$LOG_FILE" "$BACKUP_DIR"
echo "Log file moved to backup directory: $BACKUP_DIR"
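If the same log file name can be processed more than once, a plain mv will overwrite the earlier archive. One option is to append a timestamp to the archived name; this sketch uses placeholder temp paths (via mktemp) so it can run anywhere, and the suffix format is my own choice:

```shell
#!/bin/bash
# Sketch: archive the processed log under a timestamped name so repeated
# runs never overwrite earlier archives. The paths default to temporary
# files for demonstration; in the real script they come from $1 and the
# configured backup directory.
BACKUP_DIR="${BACKUP_DIR:-$(mktemp -d)}"   # e.g. /var/log/processed_logs
LOG_FILE="${LOG_FILE:-$(mktemp)}"          # the log file just analyzed
ARCHIVED="$BACKUP_DIR/$(basename "$LOG_FILE").$(date +%Y%m%d_%H%M%S)"
mv "$LOG_FILE" "$ARCHIVED"
echo "Log file archived as: $ARCHIVED"
```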
Step 8: Make the Script Executable
Make sure the script is executable so you can run it directly.
chmod +x log_analyzer.sh
Step 9: Run the Script
Execute the script and pass the log file path as an argument.
./log_analyzer.sh /path/to/your/logfile.log
Complete Bash Script: log_analyzer.sh
Here's how the final script should look:
#!/bin/bash
# Check if the log file path is provided as an argument
if [ -z "$1" ]; then
echo "Error: Please provide a log file path. Usage: $0 /path/to/logfile"
exit 1
fi
LOG_FILE="$1"
# Check if the specified log file exists
if [ ! -f "$LOG_FILE" ]; then
echo "Error: Log file not found: $LOG_FILE"
exit 1
fi
# Count lines containing "ERROR" or "Failed" (case-insensitive)
ERROR_COUNT=$(grep -i "ERROR\|Failed" "$LOG_FILE" | wc -l)
echo "Total Error Count: $ERROR_COUNT"
# Find lines containing "CRITICAL" along with their line numbers
echo "Critical Events:"
grep -in "CRITICAL" "$LOG_FILE"
# Display the top 5 most frequent error messages
echo "Top 5 Error Messages:"
grep -i "ERROR\|Failed" "$LOG_FILE" | sort | uniq -c | sort -nr | head -5
# Generate a report with the analysis summary
REPORT_FILE="log_report.txt"
echo "Generating Summary Report: $REPORT_FILE"
echo "---------------------------------------" > "$REPORT_FILE"
echo "Date of Analysis: $(date)" >> "$REPORT_FILE"
echo "Log File: $LOG_FILE" >> "$REPORT_FILE"
echo "Total Lines Processed: $(wc -l < "$LOG_FILE")" >> "$REPORT_FILE"
echo "Total Error Count: $ERROR_COUNT" >> "$REPORT_FILE"
echo "" >> "$REPORT_FILE"
echo "Top 5 Error Messages:" >> "$REPORT_FILE"
grep -i "ERROR\|Failed" "$LOG_FILE" | sort | uniq -c | sort -nr | head -5 >> "$REPORT_FILE"
echo "" >> "$REPORT_FILE"
echo "Critical Events:" >> "$REPORT_FILE"
grep -in "CRITICAL" "$LOG_FILE" >> "$REPORT_FILE"
echo "---------------------------------------" >> "$REPORT_FILE"
echo "Summary Report Generated Successfully!"
# Move the log file to an archive directory for processed logs
BACKUP_DIR="/var/log/processed_logs"
mkdir -p "$BACKUP_DIR"
mv "$LOG_FILE" "$BACKUP_DIR"
echo "Log file moved to backup directory: $BACKUP_DIR"
Tips for Success
Automation Magic: Make use of Bash's power to automate repetitive tasks efficiently. Your future self will thank you!
Error Handling: Always check for missing arguments or file existence to avoid script failures.
Organize Outputs: Create well-structured logs and report files for easy readability.
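As an example of stricter error handling (a common Bash convention, not something the script above requires), the shell's set options make a script abort on unhandled failures instead of silently continuing. The sample file and pattern below are invented for demonstration:

```shell
#!/bin/bash
# Strict-mode sketch: -e exits on any command failure, -u treats unset
# variables as errors, and pipefail fails a pipeline if any stage fails.
set -euo pipefail

LOG_FILE=$(mktemp)            # stand-in for "$1" in the real script
printf 'ERROR one\nINFO two\n' > "$LOG_FILE"

[ -f "$LOG_FILE" ] || { echo "Log file not found: $LOG_FILE" >&2; exit 1; }

# Caveat: under 'set -e', grep exits nonzero when it finds no matches,
# which would kill the script; '|| true' keeps a zero count non-fatal.
ERROR_COUNT=$(grep -ci "ERROR" "$LOG_FILE" || true)
echo "Total Error Count: $ERROR_COUNT"
```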
Creative Add-Ons
Email the Report: Use the mail or sendmail commands to email the report automatically.
Set Up a Cron Job: Schedule this script to run daily using cron to automate log analysis.
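As a rough sketch of both add-ons: the helper below assumes a configured mail transfer agent (mailx/sendmail availability varies by distribution), and the address, schedule, and script path are all placeholders:

```shell
#!/bin/bash
# Hypothetical helper: email the generated report. Requires a working
# MTA; "admin@example.com" is a placeholder address.
email_report() {
    mail -s "Daily Log Report" admin@example.com < log_report.txt
}

# Hypothetical crontab entry (add it via 'crontab -e') to run the
# analyzer every day at 06:00 against a placeholder log path:
# 0 6 * * * /path/to/log_analyzer.sh /var/log/syslog
```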
By following these steps, you can easily automate log analysis and create a professional-grade script that handles daily tasks for you. Happy scripting, and happy learning!
Written by
Naushad Khan
DevOps engineer with a passion for automation, CI/CD, and cloud platforms like AWS. I bridge dev and ops, optimizing workflows and sharing insights through technical blogs. Let's automate the future!