#90DaysOfDevops | Day 10

Rajendra Patil
3 min read

Write a Bash script that automates the process of analyzing log files and generating a daily summary report. The script should perform the following steps:

  1. Input: The script should take the path to the log file as a command-line argument.

  2. Error Count: Analyze the log file and count the number of error messages. An error message can be identified by a specific keyword (e.g., "ERROR" or "Failed"). Print the total error count.

  3. Critical Events: Search for lines containing the keyword "CRITICAL" and print those lines along with the line number.

  4. Top Error Messages: Identify the top 5 most common error messages and display them along with their occurrence count.

  5. Summary Report: Generate a summary report in a separate text file. The report should include:

    • Date of analysis

    • Log file name

    • Total lines processed

    • Total error count

    • Top 5 error messages with their occurrence count

    • List of critical events with line numbers
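Before wiring everything into one script, it helps to see how the core requirements map onto standard text-processing one-liners. The sketch below exercises them against a small made-up sample log (the log format and `sample.log` name are assumptions; adapt the patterns to your own logs):

```shell
# Create a small sample log (hypothetical format) to exercise the steps.
cat > sample.log <<'EOF'
2024-01-01 10:00:01 INFO  Service started
2024-01-01 10:00:02 ERROR Disk full
2024-01-01 10:00:03 ERROR Disk full
2024-01-01 10:00:04 Failed to connect to database
2024-01-01 10:00:05 CRITICAL Kernel panic imminent
EOF

# Step 2: total error count (lines containing ERROR or Failed)
error_count=$(grep -cE 'ERROR|Failed' sample.log)
echo "Errors: $error_count"            # Errors: 3

# Step 3: critical events with their line numbers
grep -n 'CRITICAL' sample.log

# Step 4: top 5 most common error messages
grep -oE '(ERROR|Failed).*' sample.log | sort | uniq -c | sort -rn | head -n 5
```

Each line of the full script below is essentially one of these pipelines, restructured so a single pass over the file feeds all the counters at once.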

Bash script:

#!/bin/bash

# Check if a log file path is provided
if [ "$#" -ne 1 ]; then
    echo "Usage: $0 /path/to/logfile" >&2
    exit 1
fi

logfile="$1"
reportfile="summary_report_$(date +%F).txt"

# Check if the log file exists
if [ ! -f "$logfile" ]; then
    echo "Log file does not exist: $logfile" >&2
    exit 1
fi

# Initialize variables
total_lines=0
error_count=0
declare -A error_messages
critical_events=()

# Analyze the log file
while IFS= read -r line; do
    total_lines=$((total_lines + 1))

    # Count errors
    if echo "$line" | grep -qE "ERROR|Failed"; then
        error_count=$((error_count + 1))
        # Capture the message from the keyword through the end of the line,
        # so identical messages are grouped (the original awk only kept the
        # keyword itself, which made every error count as "ERROR" or "Failed")
        error_msg=$(echo "$line" | grep -oE "(ERROR|Failed).*")
        error_messages["$error_msg"]=$((error_messages["$error_msg"] + 1))
    fi

    # Identify critical events
    if echo "$line" | grep -q "CRITICAL"; then
        critical_events+=("$total_lines: $line")
    fi
done < "$logfile"

# Sort and get the top 5 error messages; the count is printed first so
# sort keys on it reliably even when messages contain spaces
top_errors=$(for msg in "${!error_messages[@]}"; do
    printf '%s %s\n' "${error_messages[$msg]}" "$msg"
done | sort -rn | head -n 5)

# Generate summary report
{
    echo "Date of Analysis: $(date)"
    echo "Log File Name: $(basename "$logfile")"
    echo "Total Lines Processed: $total_lines"
    echo "Total Error Count: $error_count"
    echo
    echo "Top 5 Error Messages:"
    echo "$top_errors"
    echo
    echo "List of Critical Events:"
    for event in "${critical_events[@]}"; do
        echo "$event"
    done
} > "$reportfile"

echo "Summary report generated: $reportfile"

# Optional enhancement: Archive or move the processed log file
archive_dir="/home/ubuntu/archive"
mkdir -p "$archive_dir"
mv "$logfile" "$archive_dir/"
echo "Log file moved to: $archive_dir/"
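One caveat with the archive step as written: if a log file with the same name was archived earlier, `mv` silently overwrites it. A hedged variant (the `archive_demo/` directory and `app.log` file here are made up for illustration) appends the analysis date to the archived name:

```shell
# Sketch: archive with a date suffix so a same-named file from an
# earlier run is not clobbered (paths below are illustrative only).
archive_dir="archive_demo"
mkdir -p "$archive_dir"
touch app.log                                  # stand-in for the processed log
mv "app.log" "$archive_dir/app.log.$(date +%F)"
ls "$archive_dir"
```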

Explanation:

  1. Input Validation: Checks that a log file path was passed as an argument and that the file exists, exiting with a usage message otherwise.

  2. Variable Initialization: Initializes the counters, the associative array for error messages, and the array for critical events.

  3. Log File Processing: Reads the log file line by line, counting total lines and errors, and recording error messages and critical events as it goes.

  4. Top Error Messages: Uses the associative array to tally each distinct error message, then sorts by count to identify the 5 most common ones.

  5. Summary Report Generation: Writes the report to a file, including the date, log file name, total lines processed, total error count, top 5 error messages, and critical events with their line numbers.

  6. Archiving Log File: Moves the processed log file to an archive directory so it is not analyzed twice.
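The counting technique from point 4 can be tried in isolation. This minimal sketch (the messages are made up; `declare -A` requires bash 4+) tallies messages in an associative array and then ranks them, exactly as the script does:

```shell
# Tally occurrences of each message in an associative array.
declare -A counts
for msg in "ERROR Disk full" "ERROR Disk full" "Failed login"; do
    counts["$msg"]=$(( ${counts["$msg"]:-0} + 1 ))
done

# Print "count message" pairs and rank them numerically, highest first.
top=$(for msg in "${!counts[@]}"; do
    printf '%d %s\n' "${counts[$msg]}" "$msg"
done | sort -rn | head -n 5)
echo "$top"
```

Putting the count first in each printed line is what lets a plain `sort -rn` rank the entries even though the messages themselves contain spaces.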


Happy Learning 🚀
