5 (More) Bash Snippets That Saved My Dev Life

Shy Dev

Part 2 of my series on Bash helpers I used during AI app development and system debugging.



1. Smart Orphan & Resource Killer

While testing AI inference applications that spawn subprocesses, I often encountered zombie or orphaned processes due to bugs in my early code. I manually killed them at first — but it got old quickly. Eventually, I extended my helper script to also flag non-zombie processes that were hogging CPU or memory. Now, it prompts me before termination, unless I want to run it in dry-run mode or kill everything automatically.

What it does

  • Detects zombie processes and signals their parents to reap them (a zombie is already dead, so it can't be killed directly).

  • Lists high-resource processes and prompts: do you want to kill them?

  • Displays PID, CPU%, MEM%, and command line for clarity.

  • Offers a --dry-run to simulate what would be killed.

#!/bin/bash
CPU_THRESHOLD=50
MEM_THRESHOLD=50
DRY_RUN=false
AUTO_KILL=false
print_usage() {
    echo "Usage: $0 [--cpu <percent>] [--mem <percent>] [--dry-run] [--auto]"
    exit 1
}
# Parse Arguments
while [[ $# -gt 0 ]]; do
    case "$1" in
        --cpu)
            CPU_THRESHOLD="$2"
            shift 2
            ;;
        --mem)
            MEM_THRESHOLD="$2"
            shift 2
            ;;
        --dry-run)
            DRY_RUN=true
            shift
            ;;
        --auto)
            AUTO_KILL=true
            shift
            ;;
        *)
            print_usage
            ;;
    esac
done
echo "[*] Scanning for zombie processes..."
# A zombie is already dead, so kill -9 does nothing; signal its parent
# with SIGCHLD to nudge it into reaping the entry. If the parent still
# won't reap, only fixing or restarting the parent clears the zombie.
ZOMBIES=$(ps -eo pid,ppid,stat | awk '$3 ~ /^Z/ { print $1 ":" $2 }')
if [[ -n "$ZOMBIES" ]]; then
    echo "[!] Found zombie processes: $ZOMBIES"
    for ENTRY in $ZOMBIES; do
        PID=${ENTRY%%:*}
        PARENT=${ENTRY##*:}
        echo "[-] Asking parent $PARENT to reap zombie PID $PID"
        kill -s SIGCHLD "$PARENT"
    done
else
    echo "[+] No zombie processes found."
fi
echo ""
echo "[*] Checking for high CPU/RAM usage..."
ps -eo pid,comm,%cpu,%mem,cmd --sort=-%cpu | tail -n +2 | while read -r PID COMM CPU MEM CMD; do
    [[ "$PID" == "$$" ]] && continue   # never offer to kill this script itself
    CPU_INT=${CPU%.*}   # bash arithmetic is integer-only, so trim the decimals
    MEM_INT=${MEM%.*}
    if (( CPU_INT >= CPU_THRESHOLD || MEM_INT >= MEM_THRESHOLD )); then
        echo "[$(date)] High usage detected: PID=$PID, CPU=$CPU%, MEM=$MEM%, CMD=$CMD"
        if $DRY_RUN; then
            echo "    >> DRY RUN: Would prompt or kill process $PID"
        elif $AUTO_KILL; then
            echo "    >> AUTO: Killing process $PID"
            kill -9 "$PID"
        else
            echo -n "    >> Kill process $PID? (y/N/q): "
            # stdin here is the ps pipeline, so read the answer from the terminal
            read -r REPLY < /dev/tty
            if [[ "$REPLY" == "y" || "$REPLY" == "Y" ]]; then
                kill -9 "$PID"
                echo "    >> Process $PID killed."
            elif [[ "$REPLY" == "q" || "$REPLY" == "Q" ]]; then
                echo "    >> Exiting helper."
                break
            else
                echo "    >> Skipped."
            fi
        fi
    fi
done
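The threshold check above leans on a small trick: bash's `(( ))` arithmetic is integer-only, so the script strips the decimal part of ps's `%cpu`/`%mem` values with `${VAR%.*}` before comparing. A standalone sketch of that trick, with made-up numbers:

```shell
#!/bin/bash
# ${VAR%.*} removes the shortest suffix matching ".*", i.e. the decimal part.
cpu="73.4"
mem="12.9"
cpu_int=${cpu%.*}   # "73"
mem_int=${mem%.*}   # "12"
if (( cpu_int >= 50 || mem_int >= 50 )); then
    echo "high usage"
else
    echo "ok"
fi
```

Note that ps prints values like `0.5` with a leading zero, so the trimmed result is never empty.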

2. Config Diff Tracker for AI Experiments

When experimenting with AI model behavior, we constantly tweaked a config.json that controls the application logic. After many changes, we forgot which combination led to success or failure. So I made this tracker that logs full config snapshots and diffs to keep track of iterations.

What it does

  • Snapshots a target config file (default: config.json) each time it runs.

  • Saves a timestamped copy of the current contents.

  • Also creates a diff against the last saved version.

  • User can control what to save using --mode full, diff, or both.

#!/bin/bash
TARGET_FILE="config.json"
SNAPSHOT_DIR="/opt/file_diff_snapshots"
MODE="both"
while [[ $# -gt 0 ]]; do
    case "$1" in
        --file)
            TARGET_FILE="$2"
            shift 2
            ;;
        --snapshot-dir)
            SNAPSHOT_DIR="$2"
            shift 2
            ;;
        --mode)
            MODE="$2"
            shift 2
            ;;
        *)
            echo "Usage: $0 [--file <path>] [--snapshot-dir <path>] [--mode full|diff|both]"
            exit 1
            ;;
    esac
done
mkdir -p "$SNAPSHOT_DIR"
if [[ ! -f "$TARGET_FILE" ]]; then
    echo "[!] Target file not found: $TARGET_FILE"
    exit 1
fi
BASENAME=$(basename "$TARGET_FILE")
TIMESTAMP=$(date +%Y%m%d_%H%M%S)
LATEST_SNAPSHOT=$(ls -t "$SNAPSHOT_DIR"/*_"$BASENAME" 2>/dev/null | head -n 1)
NEW_SNAPSHOT="$SNAPSHOT_DIR/${TIMESTAMP}_$BASENAME"
cp "$TARGET_FILE" "$NEW_SNAPSHOT"
echo "[*] Saved snapshot: $NEW_SNAPSHOT"
if [[ "$MODE" == "diff" || "$MODE" == "both" ]] && [[ -n "$LATEST_SNAPSHOT" && "$LATEST_SNAPSHOT" != "$NEW_SNAPSHOT" ]]; then
    DIFF_FILE="$SNAPSHOT_DIR/${TIMESTAMP}_diff.patch"
    diff -u "$LATEST_SNAPSHOT" "$NEW_SNAPSHOT" > "$DIFF_FILE"
    echo "[*] Saved diff: $DIFF_FILE"
fi
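One detail worth knowing: `diff` exits 0 when the files match and 1 when they differ, which the script above doesn't exploit, so it writes a patch file even when nothing changed between runs. A small sketch of using the exit status to skip empty patches (file names here are throwaway temp files):

```shell
#!/bin/bash
# diff returns non-zero when files differ; use that to skip empty patches.
old=$(mktemp); new=$(mktemp); patch_file=$(mktemp)
echo '{"temperature": 0.7}' > "$old"
echo '{"temperature": 0.9}' > "$new"
status="unchanged"
if ! diff -u "$old" "$new" > "$patch_file"; then
    status="changed"
else
    rm -f "$patch_file"   # identical files: drop the empty patch
fi
echo "config $status"
rm -f "$old" "$new" "$patch_file"
```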

3. Safe FFmpeg Recorder

Edge devices like Jetson and Raspberry Pi boards have limited disk space. While recording video feeds from USB or IP cameras, I needed a safeguard that stops ffmpeg before it fills the drive and crashes the system. This script handles it for me.

What it does

  • Starts an ffmpeg recording session.

  • Monitors available disk space.

  • If either --min-mb or --min-pct is breached, it terminates recording gracefully.

#!/bin/bash
DEVICE="/dev/video0"
OUTPUT="recording_$(date +%Y%m%d_%H%M%S).mp4"
MIN_MB=500
MIN_PCT=10
while [[ $# -gt 0 ]]; do
    case "$1" in
        --input)
            DEVICE="$2"
            shift 2
            ;;
        --output)
            OUTPUT="$2"
            shift 2
            ;;
        --min-mb)
            MIN_MB="$2"
            shift 2
            ;;
        --min-pct)
            MIN_PCT="$2"
            shift 2
            ;;
        *)
            echo "Usage: $0 [--input /dev/videoX] [--output file.mp4] [--min-mb X] [--min-pct Y]"
            exit 1
            ;;
    esac
done
echo "[*] Starting ffmpeg recording..."
ffmpeg -f v4l2 -i "$DEVICE" -vcodec libx264 -preset ultrafast "$OUTPUT" &
PID=$!
OUT_DIR=$(dirname "$OUTPUT")
while kill -0 "$PID" 2>/dev/null; do
    # Measure free space on the filesystem the recording is written to
    AVAIL_MB=$(df --output=avail -m "$OUT_DIR" | tail -1)
    TOTAL_MB=$(df --output=size -m "$OUT_DIR" | tail -1)
    AVAIL_PCT=$((AVAIL_MB * 100 / TOTAL_MB))
    if (( AVAIL_MB < MIN_MB || AVAIL_PCT < MIN_PCT )); then
        echo "[!] Low space: $AVAIL_MB MB (${AVAIL_PCT}%). Stopping ffmpeg..."
        kill -INT "$PID"   # SIGINT lets ffmpeg finalize the output file
        break
    fi
    sleep 5
done
wait "$PID" 2>/dev/null
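The space check boils down to integer math on two df numbers. With made-up values standing in for df's output, the percentage logic looks like this (integer division truncates, which errs on the safe, lower side):

```shell
#!/bin/bash
# Made-up numbers standing in for df's avail/size output (in MB).
avail_mb=1200
total_mb=16000
min_mb=500
min_pct=10
avail_pct=$(( avail_mb * 100 / total_mb ))   # 7.5 exactly, truncated to 7
if (( avail_mb < min_mb || avail_pct < min_pct )); then
    echo "stop recording"
else
    echo "keep recording"
fi
```

Here 1200 MB clears the absolute floor of 500 MB, but 7% free trips the 10% threshold, so recording would stop.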

4. Checksum Integrity Watchdog

I needed to ensure that post-deployment configs and binaries hadn't been tampered with or accidentally modified, especially when troubleshooting unexpected behavior. So this script became my watchdog.

What it does

  • Supports a file or entire directory.

  • --init: saves current checksums.

  • --check: verifies once and reports diffs.

  • --start: begins looped monitoring in background.

  • --stop: halts the background watchdog.

#!/bin/bash
TARGET_PATH=""
MODE=""
CHECKSUM_FILE="/tmp/checksums.txt"
PID_FILE="/tmp/checksum_watchdog.pid"

print_usage() {
    echo "Usage: $0 <path> --init|--check|--start|--stop"
    exit 1
}
# Generate SHA256 checksums to stdout
generate_checksums() {
    if [ -d "$TARGET_PATH" ]; then
        find "$TARGET_PATH" -type f -exec sha256sum {} +
    elif [ -f "$TARGET_PATH" ]; then
        sha256sum "$TARGET_PATH"
    else
        echo "[!] Invalid path: $TARGET_PATH"
        exit 1
    fi
}
# Compare saved and current checksums
compare_checksums() {
    TMP_FILE=$(mktemp)
    generate_checksums > "$TMP_FILE"
    echo "[*] Checking for changes..."
    diff -u "$CHECKSUM_FILE" "$TMP_FILE"
    rm "$TMP_FILE"
}
# Monitor in a loop
watch_loop() {
    while true; do
        TMP_FILE=$(mktemp)
        generate_checksums > "$TMP_FILE"
        if ! diff -q "$CHECKSUM_FILE" "$TMP_FILE" > /dev/null; then
            echo "[!] Detected file changes at $(date)"
            diff -u "$CHECKSUM_FILE" "$TMP_FILE"
        fi
        rm "$TMP_FILE"
        sleep 10
    done
}
# --stop works on its own; every other mode needs a path as well
if [[ $# -lt 1 ]]; then
    print_usage
fi
# Parse arguments
while [[ $# -gt 0 ]]; do
    case "$1" in
        --init)
            MODE="init"; shift ;;
        --check)
            MODE="check"; shift ;;
        --start)
            MODE="start"; shift ;;
        --stop)
            MODE="stop"; shift ;;
        *)
            TARGET_PATH="$1"; shift ;;
    esac
done
if [[ "$MODE" != "stop" && -z "$TARGET_PATH" ]]; then
    print_usage
fi
# Execute mode
case "$MODE" in
    init)
        generate_checksums > "$CHECKSUM_FILE"
        echo "[+] Initial checksums saved to $CHECKSUM_FILE"
        ;;
    check)
        compare_checksums
        ;;
    start)
        echo "[*] Starting checksum monitor in background (log: /tmp/checksum_watchdog.log)..."
        watch_loop >> /tmp/checksum_watchdog.log 2>&1 &
        echo $! > "$PID_FILE"
        ;;
    stop)
        if [ -f "$PID_FILE" ]; then
            kill "$(cat "$PID_FILE")" && rm "$PID_FILE"
            echo "[+] Watchdog stopped."
        else
            echo "[!] No watchdog running."
        fi
        ;;
    *)
        print_usage
        ;;
esac
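For a simpler one-shot verification, sha256sum has a built-in `-c` mode that re-checks files against a saved checksum list, a lighter alternative to diffing two full listings. A self-contained sketch in a throwaway directory:

```shell
#!/bin/bash
# sha256sum -c re-verifies files against a saved checksum list.
workdir=$(mktemp -d)
echo "model_version=3" > "$workdir/config.ini"
( cd "$workdir" && sha256sum config.ini > checksums.txt )

first="fail"; second="fail"
# Untouched file: the check passes (exit 0).
( cd "$workdir" && sha256sum -c --quiet checksums.txt >/dev/null 2>&1 ) && first="intact"
# Modify the file: the same check now fails (non-zero exit).
echo "model_version=4" > "$workdir/config.ini"
( cd "$workdir" && sha256sum -c --quiet checksums.txt >/dev/null 2>&1 ) || second="tampered"
echo "$first / $second"
rm -rf "$workdir"
```

`--quiet` suppresses the per-file OK lines, so in a cron job you only see output (and a non-zero exit) when something changed.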

5. Offline Log Notifier with WebSocket Reporting

During offline debugging of embedded Linux devices, I wanted to know when critical log patterns like segfault, oom, or panic occurred. This script watches the logs and pushes matching alerts to a reporting endpoint. (The version below ships them with a plain HTTP POST via curl; curl can't speak WebSocket on its own, so swap in a client like websocat if you need a true WebSocket connection.)

What it does

  • Watches log file (dmesg or any path).

  • Filters lines using a configurable regex.

  • POSTs alerts to an HTTP endpoint via curl (use websocat or similar for a true WebSocket client).

  • Optionally runs in console-only mode (--local-mode).

#!/bin/bash
LOGFILE="/var/log/dmesg"
KEYWORDS="segfault|oom|panic"
ENDPOINT=""
LOCAL_ONLY=false
while [[ $# -gt 0 ]]; do
    case "$1" in
        --logfile)
            LOGFILE="$2"
            shift 2
            ;;
        --keywords)
            KEYWORDS="$2"
            shift 2
            ;;
        --endpoint)
            ENDPOINT="$2"
            shift 2
            ;;
        --local-mode)
            LOCAL_ONLY=true
            shift
            ;;
        *)
            echo "Usage: $0 [--logfile file] [--keywords regex] [--endpoint ws://...] [--local-mode]"
            exit 1
            ;;
    esac
done
echo "[*] Watching $LOGFILE for keywords: $KEYWORDS"
tail -Fn0 "$LOGFILE" | while read -r LINE; do
    if echo "$LINE" | grep -Eiq "$KEYWORDS"; then
        echo "[!] Matched log: $LINE"
        if ! $LOCAL_ONLY && [[ -n "$ENDPOINT" ]]; then
            # Escape backslashes and quotes so the log line can't break the JSON
            ESCAPED=$(printf '%s' "$LINE" | sed -e 's/\\/\\\\/g' -e 's/"/\\"/g')
            curl -s -X POST -H "Content-Type: application/json" -d "{\"message\": \"$ESCAPED\"}" "$ENDPOINT"
        fi
    fi
done
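The matching itself is just `grep -Ei` over each new line, so you can sanity-check a keyword regex against sample lines before pointing the watcher at a live log. The log line below is an invented example:

```shell
#!/bin/bash
# Dry-run the alert regex against a sample line (case-insensitive, extended regex).
keywords="segfault|oom|panic"
line="[12345.678] python3 invoked oom-killer: gfp_mask=0x100cca"
matched="no"
if echo "$line" | grep -Eiq "$keywords"; then
    matched="yes"
fi
echo "matched=$matched"
```

Here `oom` matches inside `oom-killer`; if that is too broad for your logs, tighten the pattern with word boundaries, e.g. `\boom\b`.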

Final Thoughts

These aren’t polished, one-size-fits-all tools. Most of them were born out of need and evolved over time, and there are surely improvements or edge cases I missed.

Do you have a Bash trick or script that saved your dev life? Drop it in the comments — I’d love to hear it from you!

#linux #bash #programming #shellscripting #tipsandtricks
