ServerlessBase Blog

    How to Automate Server Tasks with Shell Scripts

    You've probably spent hours manually running the same commands on your servers—updating packages, rotating logs, checking disk space, or restarting services. Every time you need to do it, you copy-paste from a documentation page or remember the exact sequence of commands. This is inefficient and error-prone. Shell scripting is the solution.

    Shell scripts let you combine multiple commands into a single executable file, automate repetitive tasks, and schedule jobs to run automatically. Whether you're managing a single server or a fleet of containers, shell scripting is an essential skill for any system administrator or DevOps engineer.

    What Is a Shell Script?

    A shell script is a plain text file containing a sequence of commands that the shell executes. The shell is the command-line interpreter that reads and executes commands you type. Common shells include bash, sh, zsh, and fish.

    Think of a shell script like a recipe: you list the steps (commands) in the order they should run, and the shell follows them one by one. If you need to repeat the recipe, you don't have to memorize the steps; you just run the script.

    Shell scripts are powerful because they can:

    • Combine multiple commands into one
    • Use variables to store data
    • Make decisions with conditionals
    • Loop through lists of items
    • Run other scripts
    • Schedule execution with cron

    Why Automate Server Tasks?

    Manual task execution has several problems:

    1. Errors: Copy-pasting commands from documentation introduces typos and syntax errors.
    2. Inconsistency: You might forget a step or execute commands in the wrong order.
    3. Time: Repetitive tasks consume time that could be spent on more valuable work.
    4. Reliability: Human attention spans are limited—manual execution is prone to mistakes.
    5. Scalability: You can't manually manage hundreds of servers effectively.

    Automation solves these problems by ensuring consistent, reliable execution of tasks. Once a script is tested and working, you can run it on any server without worrying about human error.

    Basic Shell Script Structure

    Every shell script starts with a shebang line that specifies which interpreter to use:

    #!/bin/bash

    This tells the system to use bash as the interpreter. After the shebang, you write your commands. Here's a simple script that prints a message:

    #!/bin/bash
     
    echo "Hello from shell script"
    echo "Current directory: $(pwd)"
    echo "Current user: $(whoami)"

    The echo command prints text to the terminal. The $(command) syntax runs a command and captures its output, which can be used in other commands.

    Making Scripts Executable

    Before you can run a script, you need to make it executable:

    chmod +x script.sh

    This adds the execute permission to the file. Now you can run it with:

    ./script.sh

    Or you can run it directly with the shell:

    bash script.sh

    Variables in Shell Scripts

    Variables store data that you can reuse throughout your script. They're untyped: bash treats every value as a string, whether it holds text, a number, or the output of a command.

    #!/bin/bash
     
    SERVER_NAME="production-db-01"
    BACKUP_DIR="/var/backups"
    DATE=$(date +%Y-%m-%d)
     
    echo "Backing up $SERVER_NAME to $BACKUP_DIR/$DATE"

    Note that variable names don't have $ when you assign them, but you use $ when you reference them. Also, there's no space around the = sign.
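    A quick sketch of those assignment rules; the commented-out line shows the most common mistake (the variable name and value here are arbitrary examples):

    ```shell
    #!/bin/bash
    # Correct: no spaces around '=' and no '$' on the left-hand side
    REGION="us-east-1"

    # Wrong: with spaces, bash would try to run a command named 'REGION'
    # REGION = "us-east-1"

    # '$' is only used when reading the value
    echo "Deploying to $REGION"
    ```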

    Conditional Logic

    Shell scripts can make decisions using if, else, and elif statements:

    #!/bin/bash
     
    DISK_USAGE=$(df -h / | awk 'NR==2 {print $5}' | sed 's/%//')
     
    if [ "$DISK_USAGE" -gt 90 ]; then
        echo "WARNING: Disk usage is at ${DISK_USAGE}%"
        # Send alert notification
    elif [ "$DISK_USAGE" -gt 80 ]; then
        echo "Disk usage is at ${DISK_USAGE}%"
    else
        echo "Disk usage is normal at ${DISK_USAGE}%"
    fi

    The [ "$DISK_USAGE" -gt 90 ] test checks whether the value is greater than 90. The -gt operator means "greater than". Other comparison operators include -lt (less than), -le (less than or equal), -eq (equal), -ne (not equal), and -ge (greater than or equal).
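    These operators can be exercised in a few lines (the variable and the threshold values are arbitrary examples):

    ```shell
    #!/bin/bash
    # Arbitrary example value; in practice this might come from a command
    CONNECTIONS=42

    [ "$CONNECTIONS" -gt 10 ] && echo "more than 10"
    [ "$CONNECTIONS" -lt 100 ] && echo "fewer than 100"
    [ "$CONNECTIONS" -eq 42 ] && echo "exactly 42"
    [ "$CONNECTIONS" -ne 0 ] && echo "not zero"
    [ "$CONNECTIONS" -ge 42 ] && echo "at least 42"
    ```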

    Loops for Repetition

    Loops let you repeat commands multiple times. The most common loop is the for loop:

    #!/bin/bash
     
    # Loop through a list of servers
    SERVERS=("web-server-01" "web-server-02" "web-server-03")
     
    for SERVER in "${SERVERS[@]}"; do
        echo "Checking $SERVER"
        # Run commands for each server
    done
     
    # Loop through numbers
    for i in {1..5}; do
        echo "Iteration $i"
    done

    The ${SERVERS[@]} syntax expands the array into individual elements. The for i in {1..5} syntax creates a sequence of numbers from 1 to 5.
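    Besides for, bash also has a while loop; combined with read it processes input line by line. A minimal sketch, using a temporary file that stands in for a real server list:

    ```shell
    #!/bin/bash
    # Sample input file standing in for your real server list
    LIST=$(mktemp)
    printf 'web-server-01\nweb-server-02\n' > "$LIST"

    # Read one line at a time; IFS= and -r keep each line intact
    while IFS= read -r SERVER; do
        echo "Checking $SERVER"
    done < "$LIST"

    rm "$LIST"
    ```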

    Practical Example: Server Backup Script

    Let's build a complete backup script that creates a compressed archive of a directory and logs the operation:

    #!/bin/bash
     
    # Configuration
    SOURCE_DIR="/var/www/html"
    BACKUP_DIR="/var/backups"
    RETENTION_DAYS=7
    TIMESTAMP=$(date +%Y%m%d_%H%M%S)
    BACKUP_FILE="${BACKUP_DIR}/backup_${TIMESTAMP}.tar.gz"
     
    # Create backup directory if it doesn't exist
    mkdir -p "$BACKUP_DIR"
     
    # Create the backup
    echo "Creating backup: $BACKUP_FILE"
    tar -czf "$BACKUP_FILE" "$SOURCE_DIR"
     
    # Check if backup was successful
    if [ $? -eq 0 ]; then
        echo "Backup completed successfully"
     
        # Remove old backups
        echo "Cleaning up backups older than $RETENTION_DAYS days"
        find "$BACKUP_DIR" -name "backup_*.tar.gz" -mtime +$RETENTION_DAYS -delete
     
        # Log the operation
        echo "$(date): Backup completed for $SOURCE_DIR" >> /var/log/backup.log
    else
        echo "ERROR: Backup failed" >&2
        exit 1
    fi

    This script:

    1. Defines configuration variables at the top
    2. Creates the backup directory if needed
    3. Creates a compressed archive with a timestamp
    4. Checks if the backup succeeded
    5. Removes old backups based on retention policy
    6. Logs the operation to a file
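    After a backup run, you can sanity-check the archive with tar -tzf, which lists its contents without extracting anything. A self-contained sketch using temporary sample data rather than the real paths above:

    ```shell
    #!/bin/bash
    # Build a small sample archive in a temp directory
    TMP=$(mktemp -d)
    mkdir -p "$TMP/html"
    echo "<h1>hi</h1>" > "$TMP/html/index.html"
    tar -czf "$TMP/backup.tar.gz" -C "$TMP" html

    # -t lists the archive; a non-zero exit status means it is unreadable
    if tar -tzf "$TMP/backup.tar.gz" > /dev/null; then
        echo "Archive is readable"
    fi

    rm -rf "$TMP"
    ```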

    Scheduling Scripts with Cron

    Cron is the time-based job scheduler in Unix-like systems. It runs scripts at specific times or intervals.

    To edit your crontab:

    crontab -e

    This opens your crontab file in the default editor. Here are some common cron expressions:

    # Run every day at 2 AM
    0 2 * * * /path/to/backup.sh
     
    # Run every hour
    0 * * * * /path/to/check.sh
     
    # Run every Monday at 3 AM
    0 3 * * 1 /path/to/weekly-report.sh
     
    # Run every 5 minutes
    */5 * * * * /path/to/monitor.sh

    The format is: minute hour day-of-month month day-of-week command

    The * wildcard means "any value". The */5 syntax in the minute field means "every 5 minutes".
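    Cron jobs run without a terminal, so any output they produce is lost unless you capture it. A common pattern is to append both stdout and stderr to a log file (the log path here is just an example):

    ```
    # Append stdout and stderr of the nightly backup to a log file
    0 2 * * * /path/to/backup.sh >> /var/log/backup-cron.log 2>&1
    ```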

    Comparison: Manual vs Automated Task Execution

    Factor      | Manual Execution                | Shell Script Automation
    ------------|---------------------------------|---------------------------------
    Speed       | Slow, requires multiple steps   | Fast, single command
    Consistency | Prone to errors and omissions   | Guaranteed consistent execution
    Scalability | Limited to one server at a time | Can run on multiple servers
    Reliability | Dependent on human attention    | Runs automatically
    Debugging   | Hard to reproduce issues        | Easy to review and debug scripts
    Maintenance | Requires remembering commands   | Scripts are self-documenting
    Cost        | Time spent on repetitive tasks  | Time saved through automation

    Error Handling and Logging

    Good scripts handle errors gracefully and log their operations:

    #!/bin/bash
     
    LOG_FILE="/var/log/my-script.log"
     
    log() {
        echo "[$(date '+%Y-%m-%d %H:%M:%S')] $1" | tee -a "$LOG_FILE"
    }
     
    log "Starting script"
     
    # Check if required command exists
    if ! command -v nginx &> /dev/null; then
        log "ERROR: nginx is not installed"
        exit 1
    fi
     
    # Run commands with error checking
    if nginx -t; then
        log "nginx configuration is valid"
        systemctl reload nginx
    else
        log "ERROR: nginx configuration test failed"
        exit 1
    fi
     
    log "Script completed successfully"

    The tee -a command prints to the terminal and appends to the log file. The command -v check verifies if a command exists before using it.

    Best Practices for Shell Scripting

    1. Use meaningful variable names: BACKUP_DIR is better than b
    2. Add comments: Explain complex logic and configuration
    3. Validate inputs: Check that required parameters are provided
    4. Handle errors: Use set -e to exit on errors and check command exit codes
    5. Keep scripts simple: Break complex scripts into smaller, reusable functions
    6. Test thoroughly: Run scripts in a test environment before production
    7. Use idempotent operations: Scripts should produce the same result when run multiple times
    8. Document dependencies: Note any external tools or files the script requires
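    Practice 4 can be sketched minimally: set -e aborts on the first failing command, set -u treats unset variables as errors, pipefail propagates failures through pipes, and a trap runs cleanup whenever the script exits (the temporary directory here is a throwaway example):

    ```shell
    #!/bin/bash
    # Exit on errors, undefined variables, and failures inside pipelines
    set -euo pipefail

    WORK_DIR=$(mktemp -d)

    # The trap runs on every exit path, so temporary files are always removed
    cleanup() {
        rm -rf "$WORK_DIR"
    }
    trap cleanup EXIT

    echo "Working in $WORK_DIR"
    touch "$WORK_DIR/state.tmp"
    echo "Done"
    ```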

    Common Pitfalls to Avoid

    • Missing quotes: Always quote variables to handle spaces in filenames: "${BACKUP_DIR}"
    • Using #!/bin/sh instead of #!/bin/bash: Bash has more features and better error handling
    • Not checking exit codes: Check that commands succeed, either with if some_command; then … or by testing $? immediately after the command
    • Hardcoding paths: Use variables for configuration values
    • Ignoring edge cases: Test with empty directories, special characters, and error conditions
    • Not making scripts executable: Remember chmod +x script.sh
    • Running as root unnecessarily: Use sudo only when needed and with specific commands
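    The first pitfall is worth seeing in action. With a space in the filename, an unquoted variable word-splits into two separate arguments (the sample file is created in a temp directory):

    ```shell
    #!/bin/bash
    TMP=$(mktemp -d)
    FILE="$TMP/quarterly report.txt"

    echo "revenue: 100" > "$FILE"   # quoted: one file, name kept whole

    # Unquoted, $FILE would split into "$TMP/quarterly" and "report.txt"
    # cat $FILE            # fails: no such file 'quarterly'
    cat "$FILE"            # works

    rm -rf "$TMP"
    ```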

    Advanced Techniques

    Functions for Reusability

    #!/bin/bash
     
    # Define a function
    check_disk_space() {
        local usage=$(df -h / | awk 'NR==2 {print $5}' | sed 's/%//')
        if [ "$usage" -gt 90 ]; then
            echo "WARNING: Disk usage is at ${usage}%"
            return 1
        fi
        return 0
    }
     
    # Use the function
    if check_disk_space; then
        echo "Disk space is OK"
    else
        echo "Disk space is critically low"
    fi

    Functions let you organize code into reusable blocks. The return statement exits the function with a status code.

    Reading User Input

    #!/bin/bash
     
    read -p "Enter server name: " SERVER_NAME
    read -p "Enter backup directory: " BACKUP_DIR
     
    echo "Backing up $SERVER_NAME to $BACKUP_DIR"

    The read command prompts for user input and stores it in a variable. The -p flag shows a prompt before waiting for input.

    Command Substitution

    #!/bin/bash
     
    # Get the current user
    CURRENT_USER=$(whoami)
     
    # Get the current date
    CURRENT_DATE=$(date +%Y-%m-%d)
     
    # Get the number of running processes
    PROCESS_COUNT=$(ps aux | wc -l)
     
    echo "User: $CURRENT_USER"
    echo "Date: $CURRENT_DATE"
    echo "Processes: $PROCESS_COUNT"

    Command substitution $(command) runs the command and captures its output. This is useful for getting dynamic values.

    Step-by-Step: Creating a Complete Server Monitoring Script

    Let's build a comprehensive monitoring script that checks disk space, memory usage, and running processes:

    #!/bin/bash
     
    # Configuration
    LOG_FILE="/var/log/server-monitor.log"
    ALERT_THRESHOLD=80
    EMAIL="admin@example.com"
     
    # Logging function
    log() {
        echo "[$(date '+%Y-%m-%d %H:%M:%S')] $1" | tee -a "$LOG_FILE"
    }
     
    # Check disk space
    check_disk_space() {
        local usage=$(df -h / | awk 'NR==2 {print $5}' | sed 's/%//')
        log "Disk usage: ${usage}%"
     
        if [ "$usage" -gt "$ALERT_THRESHOLD" ]; then
            log "WARNING: Disk usage is at ${usage}%"
            # Send alert (implementation depends on your setup)
        fi
    }
     
    # Check memory usage
    check_memory_usage() {
        local memory_usage=$(free | grep Mem | awk '{print int($3/$2 * 100.0)}')
        log "Memory usage: ${memory_usage}%"
     
        if [ "$memory_usage" -gt "$ALERT_THRESHOLD" ]; then
            log "WARNING: Memory usage is at ${memory_usage}%"
        fi
    }
     
    # Check running processes
    check_processes() {
        local process_count=$(ps aux | wc -l)
        log "Running processes: $process_count"
     
        if [ "$process_count" -gt 1000 ]; then
            log "WARNING: High number of running processes"
        fi
    }
     
    # Main execution
    log "Starting server monitoring"
     
    check_disk_space
    check_memory_usage
    check_processes
     
    log "Monitoring completed"

    To schedule this script to run every 15 minutes:

    # Edit crontab
    crontab -e
     
    # Add this line
    */15 * * * * /path/to/server-monitor.sh

    Conclusion

    Shell scripting is a fundamental skill for server automation. By combining commands, variables, conditionals, and loops into scripts, you can eliminate repetitive manual tasks, reduce errors, and scale your operations efficiently.

    The key takeaways are:

    • Shell scripts let you automate repetitive tasks with a single command
    • Use variables for configuration and make scripts reusable
    • Implement error handling and logging for reliability
    • Schedule scripts with cron for automated execution
    • Follow best practices to keep scripts maintainable and robust

    Start small with simple scripts for common tasks like backups, log rotation, and service monitoring. As you gain experience, build more complex scripts that integrate with your infrastructure. Platforms like ServerlessBase can help you deploy and manage these scripts alongside your applications, providing a unified platform for your automation needs.

    The next step is to identify a repetitive task on your servers and write a script to automate it. Even a simple script that saves you 10 minutes of work per day will pay off over time.
