AWS CLI Guide Part 2: Advanced Commands, Automation, and Best Practices

Dive deeper into AWS CLI with advanced commands, scripting techniques, automation strategies, and essential best practices

Advanced AWS CLI Commands

Building on the intermediate commands covered in Part 1, let’s explore more advanced AWS CLI capabilities that can significantly enhance your cloud infrastructure management.

1. Advanced Parameter Handling

Using JMESPath Queries

AWS CLI supports JMESPath, a query language for JSON that allows you to extract specific data from command output:

  • Command:
    aws ec2 describe-instances --query "Reservations[*].Instances[*].[InstanceId,State.Name]" --output table
    
  • What it does: Extracts only the instance IDs and their states from the entire EC2 instance description.
  • Outcome:
    -------------------------
    |  DescribeInstances    |
    +-------------+---------+
    |  i-1234abcd |  running|
    |  i-5678efgh |  stopped|
    +-------------+---------+
    
  • Real-world Usage: This helps filter out only the information you need from large JSON responses, making it easier to process or read the output.

Using Output Filters for Complex Data

  • Command:
    aws ec2 describe-instances --filters "Name=instance-type,Values=t2.micro" --query "Reservations[*].Instances[*].InstanceId" --output text
    
  • What it does: Finds all t2.micro instances and returns only their IDs.
  • Outcome:
    i-1234abcd i-5678efgh
    
  • Example In Practice: This command could be part of a script to restart all instances of a certain type during maintenance windows.
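
As a sketch of that maintenance scenario, the filtered IDs can be fed straight into a loop. The reboot action and instance type here are illustrative assumptions, and the script skips the live calls if the CLI is not installed:

```shell
#!/bin/bash
# Sketch: reboot every t2.micro instance during a maintenance window.
# The instance type and the reboot action are illustrative assumptions.

reboot_micros() {
  # --output text prints the IDs whitespace-separated on one line
  aws ec2 describe-instances \
    --filters "Name=instance-type,Values=t2.micro" \
    --query "Reservations[*].Instances[*].InstanceId" \
    --output text | tr ' \t' '\n\n' | while read -r id; do
      [ -z "$id" ] && continue
      echo "Rebooting $id"
      aws ec2 reboot-instances --instance-ids "$id"
    done
}

if command -v aws >/dev/null 2>&1; then
  reboot_micros
else
  echo "aws CLI not found; skipping"
fi
```

The `tr` step turns the whitespace-separated ID list into one ID per line so the `while read` loop can process each instance individually.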

2. AWS Systems Manager Commands

Running Commands on Multiple EC2 Instances

  • Command:
    aws ssm send-command \
        --document-name "AWS-RunShellScript" \
        --parameters commands="yum update -y" \
        --targets "Key=tag:Environment,Values=Production" \
        --comment "Patching production servers"
    
  • What it does: Runs the yum update -y command on all EC2 instances tagged with “Environment=Production”.
  • Outcome: JSON output with command ID and status information.
  • Real-world Usage: This allows you to execute maintenance tasks across multiple servers without logging into each one.
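
Because send-command is asynchronous, scripts usually need to poll until the command finishes. The status names below come from the SSM API; the helper function and overall flow are our own sketch:

```shell
#!/bin/bash
# Sketch: send an SSM command, then poll until it reaches a terminal state.
# The status names come from the SSM API; the helpers are our own.

is_terminal() {
  case "$1" in
    Success|Failed|Cancelled|TimedOut) return 0 ;;
    *) return 1 ;;
  esac
}

patch_and_wait() {
  local cmd_id status=Pending
  cmd_id=$(aws ssm send-command \
    --document-name "AWS-RunShellScript" \
    --parameters commands="yum update -y" \
    --targets "Key=tag:Environment,Values=Production" \
    --query "Command.CommandId" --output text) || return 1
  [ -n "$cmd_id" ] || return 1

  until is_terminal "$status"; do
    sleep 5
    status=$(aws ssm list-commands --command-id "$cmd_id" \
      --query "Commands[0].Status" --output text)
  done
  echo "Command $cmd_id finished with status: $status"
}

if command -v aws >/dev/null 2>&1; then
  patch_and_wait || echo "SSM run did not complete" >&2
else
  echo "aws CLI not found; skipping"
fi
```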

Getting Parameter Store Values

  • Command:
    aws ssm get-parameter --name "/application/database/password" --with-decryption
    
  • What it does: Retrieves a secure parameter (like a database password) and decrypts it if it was encrypted.
  • Outcome:
    {
      "Parameter": {
        "Name": "/application/database/password",
        "Type": "SecureString",
        "Value": "MySecurePassword123!",
        "Version": 1,
        "LastModifiedDate": "2023-09-01T12:00:00.000Z",
        "ARN": "arn:aws:ssm:us-east-1:123456789012:parameter/application/database/password"
      }
    }
    
  • Security Benefit: Allows secure storage and retrieval of sensitive information without hardcoding it in scripts.
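
In scripts, it is safer to fetch only the value into a variable and keep it out of logs. The `mask` helper below is a hypothetical convenience for logging, not an AWS feature:

```shell
#!/bin/bash
# Sketch: read a SecureString into a variable without exposing it in logs.
# mask() is a hypothetical logging helper, not an AWS feature.

mask() { printf '%s****' "${1:0:2}"; }

if command -v aws >/dev/null 2>&1; then
  DB_PASSWORD=$(aws ssm get-parameter \
    --name "/application/database/password" \
    --with-decryption \
    --query "Parameter.Value" --output text)
  # Log only a masked form; never echo the real value
  echo "Fetched password: $(mask "$DB_PASSWORD")"
else
  echo "aws CLI not found; skipping"
fi
```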

3. Advanced CloudWatch Commands

Creating a Dashboard

  • Command:
    aws cloudwatch put-dashboard --dashboard-name "MyAppMonitoring" --dashboard-body file://dashboard.json
    
  • What it does: Creates a custom CloudWatch dashboard using the configuration specified in dashboard.json.
  • Outcome: Your metrics are visualized in a dashboard for easier monitoring.
  • Operational Benefit: Helps visualize application health and performance at a glance.
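
A minimal dashboard.json can be generated inline and validated locally before the upload. The widget below is an illustrative sketch; the metric, instance ID, and region are assumptions:

```shell
#!/bin/bash
# Sketch: generate a one-widget dashboard body, validate it locally,
# then upload it. The metric and instance ID are placeholders.

cat > dashboard.json <<'EOF'
{
  "widgets": [
    {
      "type": "metric",
      "x": 0, "y": 0, "width": 12, "height": 6,
      "properties": {
        "metrics": [["AWS/EC2", "CPUUtilization", "InstanceId", "i-1234abcd"]],
        "period": 300,
        "stat": "Average",
        "region": "us-east-1",
        "title": "EC2 CPU"
      }
    }
  ]
}
EOF

# Catch JSON typos locally before calling the API
python3 -m json.tool dashboard.json > /dev/null

if command -v aws >/dev/null 2>&1; then
  aws cloudwatch put-dashboard \
    --dashboard-name "MyAppMonitoring" \
    --dashboard-body file://dashboard.json || echo "upload failed" >&2
fi
```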

Setting Up Composite Alarms

  • Command:
    aws cloudwatch put-composite-alarm \
        --alarm-name "HighCPUAndMemoryAlarm" \
        --alarm-rule '(ALARM("HighCPUAlarm") AND ALARM("HighMemoryAlarm"))'
    
  • What it does: Creates a composite alarm that triggers only when both CPU and memory alarms are in ALARM state.
  • Outcome: More intelligent alerting that reduces false positives.
  • Practical Example: Only get notified when both CPU and memory are high, indicating a real issue rather than a temporary spike.

4. Advanced RDS (Relational Database Service) Commands

Creating a Database Snapshot

  • Command:
    aws rds create-db-snapshot \
        --db-instance-identifier my-database \
        --db-snapshot-identifier my-database-snapshot-$(date +%Y-%m-%d)
    
  • What it does: Creates a snapshot of the RDS instance with a date-stamped name.
  • Outcome: A point-in-time backup of your database.
  • Data Protection Strategy: This can be scheduled regularly to ensure backups before major changes.
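
Paired with the date-stamped naming above, old snapshots can be pruned automatically. This sketch assumes the `…-snapshot-YYYY-MM-DD` suffix convention and GNU date:

```shell
#!/bin/bash
# Sketch: prune manual snapshots older than 7 days, assuming the
# date-stamped naming convention used above (...-snapshot-YYYY-MM-DD).

prune_snapshots() {
  local cutoff
  cutoff=$(date -d "7 days ago" +%Y-%m-%d)   # GNU date syntax

  aws rds describe-db-snapshots \
    --db-instance-identifier my-database \
    --snapshot-type manual \
    --query "DBSnapshots[*].DBSnapshotIdentifier" \
    --output text | tr '\t' '\n' | while read -r snap; do
      stamp=${snap##*snapshot-}            # recover the date suffix
      # ISO dates compare correctly as plain strings
      if [[ "$stamp" < "$cutoff" ]]; then
        echo "Deleting $snap"
        aws rds delete-db-snapshot --db-snapshot-identifier "$snap"
      fi
    done
}

if command -v aws >/dev/null 2>&1; then
  prune_snapshots
else
  echo "aws CLI not found; skipping"
fi
```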

Restoring a Database from Snapshot

  • Command:
    aws rds restore-db-instance-from-db-snapshot \
        --db-instance-identifier my-restored-db \
        --db-snapshot-identifier my-database-snapshot-2023-09-15
    
  • What it does: Creates a new RDS instance from a previous snapshot.
  • Outcome: A complete restoration of your database to the point when the snapshot was taken.
  • Disaster Recovery Example: After data corruption, you can rapidly restore your database to a previous good state.

5. ECS (Elastic Container Service) Commands

Updating a Service

  • Command:
    aws ecs update-service \
        --cluster my-cluster \
        --service my-service \
        --desired-count 5 \
        --force-new-deployment
    
  • What it does: Updates an ECS service to use 5 tasks and forces a new deployment.
  • Outcome: The service will scale to 5 containers and roll out a fresh deployment.
  • Zero-Downtime Deployment: This approach allows you to roll out new versions of your application without service interruption.

Running a Task

  • Command:
    aws ecs run-task \
        --cluster my-cluster \
        --task-definition my-task:3 \
        --count 1 \
        --launch-type FARGATE \
        --network-configuration "awsvpcConfiguration={subnets=[subnet-12345678],securityGroups=[sg-12345678]}"
    
  • What it does: Runs a one-off task in the ECS cluster using the Fargate launch type.
  • Outcome: A containerized task runs without requiring you to manage servers.
  • Maintenance Example: Perfect for running database migrations or one-time data processing jobs.

6. Route 53 Commands

Creating a DNS Record

  • Command:
    aws route53 change-resource-record-sets \
        --hosted-zone-id Z1D633PJN98FT9 \
        --change-batch '{
          "Changes": [{
            "Action": "CREATE",
            "ResourceRecordSet": {
              "Name": "www.example.com",
              "Type": "A",
              "TTL": 300,
              "ResourceRecords": [{ "Value": "192.0.2.1" }]
            }
          }]
        }'
    
  • What it does: Creates an A record pointing www.example.com to the IP address 192.0.2.1.
  • Outcome: The domain will resolve to the specified IP address.
  • Infrastructure Automation: Can be used when provisioning new servers to automatically update DNS records.
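
For automation, the change batch can be built from shell variables and validated locally before the API call. UPSERT (create-or-update) is often safer than CREATE for repeated runs; the zone ID, record name, and IP below are placeholders:

```shell
#!/bin/bash
# Sketch: point a DNS record at a freshly provisioned server.
# UPSERT creates the record or updates it if it already exists.

HOSTED_ZONE_ID="Z1D633PJN98FT9"   # placeholder zone ID
RECORD_NAME="www.example.com"
NEW_IP="192.0.2.1"

BATCH=$(cat <<EOF
{
  "Changes": [{
    "Action": "UPSERT",
    "ResourceRecordSet": {
      "Name": "$RECORD_NAME",
      "Type": "A",
      "TTL": 300,
      "ResourceRecords": [{ "Value": "$NEW_IP" }]
    }
  }]
}
EOF
)

# Validate the generated JSON locally before calling the API
echo "$BATCH" | python3 -m json.tool > /dev/null

if command -v aws >/dev/null 2>&1; then
  aws route53 change-resource-record-sets \
    --hosted-zone-id "$HOSTED_ZONE_ID" \
    --change-batch "$BATCH" || echo "DNS update failed" >&2
fi
```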

Checking Health Check Status

  • Command:
    aws route53 get-health-check-status --health-check-id 1234abcd-56ef-7890-abcd-1234567890ab
    
  • What it does: Returns the current status of a Route 53 health check.
  • Outcome: JSON output showing whether the monitored endpoint is healthy.
  • Monitoring Strategy: You can query this in a script to take automated actions if a service becomes unhealthy.

Automation with AWS CLI

One of the most powerful aspects of AWS CLI is the ability to automate repetitive tasks. This section explores how to build scripts and automate common workflows.

1. Basic Bash Scripting with AWS CLI

Simple Backup Script

Here’s a basic script to back up the content of an S3 bucket to another bucket:

#!/bin/bash
# Backup script for S3 bucket

SOURCE_BUCKET="source-bucket"
BACKUP_BUCKET="backup-bucket"
TIMESTAMP=$(date +%Y-%m-%d-%H-%M)

# Create a backup with timestamp
aws s3 sync s3://$SOURCE_BUCKET/ s3://$BACKUP_BUCKET/$TIMESTAMP/ \
  --exclude "temporary/*"

# Output results
echo "Backup of $SOURCE_BUCKET completed to $BACKUP_BUCKET/$TIMESTAMP/"
  • What it does: Synchronizes all files from one bucket to a timestamped folder in another bucket.
  • Practical Application: Schedule this script to run daily. Because each run syncs into a fresh timestamped prefix, every run produces a complete point-in-time copy rather than an incremental delta.

Instance Cleanup Script

This script finds stopped EC2 instances that were launched more than 30 days ago and terminates them:

#!/bin/bash
# Find and terminate long-stopped instances.
# Note: the EC2 API does not expose a stop timestamp directly, so this
# script uses LaunchTime as a proxy for age; the exact stop time only
# appears in the human-readable StateTransitionReason field.

# Get a list of stopped instances and their launch times
STOPPED_INSTANCES=$(aws ec2 describe-instances \
  --filters "Name=instance-state-name,Values=stopped" \
  --query "Reservations[*].Instances[*].[InstanceId,LaunchTime]" \
  --output text)

# Calculate cutoff date (30 days ago; -d requires GNU date)
CUTOFF_DATE=$(date -d "30 days ago" +%s)

# Process each instance
echo "$STOPPED_INSTANCES" | while read -r INSTANCE_ID LAUNCH_TIME; do
  # Convert launch time to seconds since epoch
  LAUNCH_EPOCH=$(date -d "$LAUNCH_TIME" +%s)

  # Check if older than cutoff
  if [ "$LAUNCH_EPOCH" -lt "$CUTOFF_DATE" ]; then
    echo "Terminating old instance: $INSTANCE_ID (launched $(date -d @$LAUNCH_EPOCH))"
    aws ec2 terminate-instances --instance-ids "$INSTANCE_ID"
  fi
done
  • What it does: Finds EC2 instances that have been stopped for a long time and terminates them.
  • Cost Optimization: Helps reduce costs by eliminating unused resources.

2. Creating JSON for Complex Commands

Some AWS CLI commands require complex JSON input. Here’s how to handle this:

Using a JSON File

For a CloudFormation stack with complex parameters:

# File: stack-params.json
{
  "StackName": "MyApplicationStack",
  "TemplateURL": "https://s3.amazonaws.com/my-bucket/template.yaml",
  "Parameters": [
    {
      "ParameterKey": "EnvironmentType",
      "ParameterValue": "Production"
    },
    {
      "ParameterKey": "InstanceType",
      "ParameterValue": "t3.large"
    }
  ],
  "Tags": [
    {
      "Key": "Department",
      "Value": "Engineering"
    }
  ]
}
  • Command:
    aws cloudformation create-stack --cli-input-json file://stack-params.json
    
  • Why This Approach: Makes complex commands more manageable and allows version control of configurations.

Using Inline JSON

For simpler cases, you can use inline JSON:

aws ec2 run-instances \
  --image-id ami-0abcdef1234567890 \
  --instance-type t2.micro \
  --tag-specifications '[
    {
      "ResourceType": "instance",
      "Tags": [
        {
          "Key": "Environment",
          "Value": "Development"
        },
        {
          "Key": "Project",
          "Value": "TestProject"
        }
      ]
    }
  ]'
  • When to Use: Good for one-off commands or when the JSON structure is relatively simple.

3. Scheduling AWS CLI Commands

Using Cron Jobs (Linux/macOS)

To run an S3 cleanup job every day at 2 AM:

# Add to crontab with: crontab -e
0 2 * * * /path/to/aws-s3-cleanup.sh >> /var/log/aws-cleanup.log 2>&1
  • How it works: The cron scheduler runs your script at the specified time.
  • Logging Best Practice: Always redirect output to a log file to capture any errors.

Using Task Scheduler (Windows)

On Windows, you can use Task Scheduler to run AWS CLI commands:

  1. Create a batch file (e.g., aws-backup.bat):
    @echo off
    aws s3 sync C:\Important-Files s3://my-backup-bucket/
    
  2. Open Task Scheduler and create a new task to run this batch file on your desired schedule.

4. Parameterizing Scripts

Make your scripts more flexible by accepting parameters:

#!/bin/bash
# Dynamic EC2 instance creator

# Process command line arguments
INSTANCE_TYPE=${1:-t2.micro}  # Default to t2.micro if not provided
AMI_ID=${2:-ami-0abcdef1234567890}  # Default AMI if not provided
KEY_NAME=${3:-my-key}  # Default key if not provided

# Launch instance
aws ec2 run-instances \
  --image-id $AMI_ID \
  --instance-type $INSTANCE_TYPE \
  --key-name $KEY_NAME \
  --tag-specifications "ResourceType=instance,Tags=[{Key=CreatedBy,Value=CLI-Script}]"

echo "Instance launched with type $INSTANCE_TYPE using AMI $AMI_ID"
  • Usage:
    ./create-instance.sh t3.large ami-1234567890abcdef my-production-key
    
  • Flexibility Benefit: You can reuse the same script for different environments or requirements.

Best Practices for AWS CLI

To maximize the effectiveness and security of your AWS CLI usage, follow these best practices:

1. Security Best Practices

Use IAM Roles for EC2 Instances

Instead of storing credentials on EC2 instances:

  • Command:
    aws ec2 associate-iam-instance-profile \
        --instance-id i-1234567890abcdef \
        --iam-instance-profile Name=EC2-S3-Access-Role
    
  • Security Benefit: Eliminates the need to store AWS credentials on the instance itself.

Regularly Rotate Access Keys

  • Commands:

    # Create new access key
    aws iam create-access-key --user-name MyUser
    
    # After updating credentials file, remove old key
    aws iam delete-access-key --user-name MyUser --access-key-id AKIAIOSFODNN7EXAMPLE
    
  • Security Benefit: Reduces the impact of leaked credentials.

Use Least Privilege Permissions

  • Create IAM policies that provide only the permissions needed:
    aws iam put-user-policy \
        --user-name DevUser \
        --policy-name DevS3ReadOnly \
        --policy-document '{
          "Version": "2012-10-17",
          "Statement": [{
            "Effect": "Allow",
            "Action": [
              "s3:Get*",
              "s3:List*"
            ],
            "Resource": [
              "arn:aws:s3:::development-bucket",
              "arn:aws:s3:::development-bucket/*"
            ]
          }]
        }'
    
  • Security Principle: Always limit permissions to only what is needed.

2. Performance Best Practices

Using the --no-paginate Option for Speed

  • Command:
    aws s3api list-objects --bucket my-large-bucket --no-paginate
    
  • Performance Benefit: Disables the CLI’s automatic pagination, so only a single API call is made and only the first page of results is returned; useful when you don’t need the complete list.

Using --dry-run for Testing

  • Command:
    aws ec2 start-instances --instance-ids i-1234567890abcdef --dry-run
    
  • What it does: Validates the command without actually executing it.
  • Testing Strategy: Test commands in non-production environments first, or use --dry-run where available.

Using Profiles for Multiple AWS Accounts

  • Configuration:
    aws configure --profile production
    aws configure --profile development
    
  • Usage:
    aws s3 ls --profile production
    
  • Organizational Benefit: Easily switch between different AWS accounts without logging out/in.
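
Instead of repeating --profile on every command, the AWS_PROFILE environment variable sets the profile for the whole shell session:

```shell
# Set the profile once for the current shell session
export AWS_PROFILE=production

# These now run against the production account's credentials:
#   aws s3 ls
#   aws ec2 describe-instances

# Unset to fall back to the default profile
unset AWS_PROFILE
```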

3. Error Handling in Scripts

Adding Error Checking to Scripts

Robust error handling in Bash:

#!/bin/bash
# Script with error handling

# Exit immediately if any unchecked command fails
set -e

# Test the command explicitly so we can print a helpful message;
# the "if !" form does not trigger the set -e exit
if ! aws s3 mb s3://my-new-bucket; then
  echo "Failed to create bucket. Exiting."
  exit 1
fi

# Continue with successful execution
echo "Bucket created successfully!"
  • Reliability Benefit: Scripts will fail fast rather than continuing after errors.

Using Conditional Execution

# Only run the second command if the first succeeds
aws ec2 stop-instances --instance-ids i-1234567890abcdef && \
aws ec2 modify-instance-attribute --instance-id i-1234567890abcdef --instance-type "{\"Value\":\"t3.large\"}"
  • Operational Safety: Ensures the second command only runs if the first is successful.

4. Documentation Best Practices

Adding Comments to Scripts

Well-documented script example:

#!/bin/bash
# Purpose: Daily backup of critical databases
# Author: AWS Administrator
# Last Updated: 2023-09-15
#
# This script creates RDS snapshots and copies them to another region
# for disaster recovery purposes.

# Configuration variables
SOURCE_REGION="us-east-1"
BACKUP_REGION="us-west-2"
DB_INSTANCES=("prod-db-1" "prod-db-2")
RETENTION_DAYS=7

# Function to create a snapshot
create_snapshot() {
  local db_instance=$1
  local timestamp=$(date +%Y-%m-%d-%H-%M)

  echo "Creating snapshot of $db_instance..."
  aws rds create-db-snapshot \
    --db-instance-identifier $db_instance \
    --db-snapshot-identifier "${db_instance}-${timestamp}" \
    --region $SOURCE_REGION
}

# Main execution
for db in "${DB_INSTANCES[@]}"; do
  create_snapshot $db
done

# Delete old snapshots
# ...rest of script...
  • Maintenance Value: Makes scripts easier to maintain and understand for future administrators.

Using AWS CLI’s Built-in Documentation

  • Command:
    aws ec2 describe-instances help
    
  • Learning Benefit: Quickly access detailed information about commands without leaving the terminal.

Troubleshooting AWS CLI

Even experienced users encounter issues with AWS CLI. Here are common problems and their solutions:

1. Common Error Messages

“Unable to locate credentials”

  • Potential Solutions:
    • Run aws configure to set up your credentials.
    • Check if ~/.aws/credentials file exists and has correct entries.
    • Verify environment variables AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY if you use them.
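
A quick way to narrow this down is to check the credential sources in roughly the order the CLI consults them, then ask STS which identity (if any) actually resolves:

```shell
#!/bin/bash
# Sketch: diagnose "Unable to locate credentials" errors by checking
# credential sources roughly in the CLI's precedence order.

if [ -n "${AWS_ACCESS_KEY_ID:-}" ]; then
  echo "Credentials supplied via environment variables"
elif [ -f "$HOME/.aws/credentials" ]; then
  echo "Shared credentials file found at ~/.aws/credentials"
else
  echo "No obvious credential source; run 'aws configure'"
fi

# Confirm the resolved account and user/role
if command -v aws >/dev/null 2>&1; then
  aws sts get-caller-identity || echo "STS call failed: no credentials resolved" >&2
fi
```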

“An error occurred (AccessDenied) when calling the XXX operation”

  • Potential Solutions:
    • Verify the IAM permissions associated with your credentials.
    • If using an IAM role, ensure the role has the necessary policies attached.
    • Check for resource-level permissions or conditions that might be restricting access.

2. Debugging AWS CLI Commands

Using --debug Flag

  • Command:
    aws s3 ls --debug
    
  • Troubleshooting Value: Shows detailed API calls, requests, and responses for better troubleshooting.

Setting Log Levels in Config

  • Configuration in ~/.aws/config:
    [profile myprofile]
    region = us-east-1
    output = json
    cli_history = enabled
    
  • Debugging Advantage: Enables history tracking to review previous commands for troubleshooting.

3. Handling Throttling and Retries

Working with AWS API Throttling

  • Strategy: Add a delay between API calls to avoid triggering throttling:
    #!/bin/bash
    # Loop through multiple S3 buckets with a delay
    for bucket in $(aws s3 ls | awk '{print $3}'); do
      echo "Processing $bucket"
      aws s3 ls s3://$bucket
      # Sleep for 1 second between calls
      sleep 1
    done
    
  • Operational Benefit: Helps avoid hitting API rate limits when processing many resources.
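
Beyond a fixed sleep, a small wrapper with exponential backoff handles intermittent throttling errors more gracefully. The retry() function is our own helper, not a CLI feature (the CLI also has built-in retry settings, covered next):

```shell
#!/bin/bash
# Sketch: retry a throttling-prone command with exponential backoff.
# retry() is our own helper, not an AWS CLI feature.

retry() {
  local max_attempts=$1; shift
  local attempt=1 delay=1
  until "$@"; do
    if [ "$attempt" -ge "$max_attempts" ]; then
      echo "Giving up after $attempt attempts: $*" >&2
      return 1
    fi
    echo "Attempt $attempt failed; retrying in ${delay}s..." >&2
    sleep "$delay"
    delay=$((delay * 2))   # 1s, 2s, 4s, ...
    attempt=$((attempt + 1))
  done
}

# Example: wrap a call that may be rate-limited
if command -v aws >/dev/null 2>&1; then
  retry 3 aws s3 ls || echo "still failing after retries" >&2
fi
```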

Configuring Retries

  • Configuration in ~/.aws/config:
    [default]
    region = us-east-1
    max_attempts = 5
    retry_mode = adaptive
    
  • Reliability Improvement: Automatically retries failed API calls with exponential backoff.

Conclusion

AWS CLI is a powerful tool that can significantly enhance your AWS management capabilities. From basic operations to complex automation, it offers flexibility and efficiency that the Management Console simply cannot match.

Key Takeaways from This Guide:

  1. Advanced Commands: Learn to leverage advanced operations for specific AWS services.
  2. Automation: AWS CLI’s true power lies in its ability to script and automate repetitive tasks.
  3. Best Practices: Follow security, performance, and documentation best practices to maximize effectiveness.
  4. Troubleshooting: Know how to diagnose and resolve common issues when they arise.

Next Steps in Your AWS CLI Journey:

  1. Explore the AWS CloudShell, a browser-based shell with AWS CLI pre-installed.
  2. Consider using AWS CLI inside CI/CD pipelines for infrastructure deployment.
  3. Look into the AWS SDKs if you need more programmatic control from languages like Python or Node.js.

By mastering AWS CLI, you’ve gained a valuable skill that not only makes your daily AWS tasks more efficient but also opens up new possibilities for infrastructure automation and management.
