A quick reference for core AWS services: EC2, S3, Lambda, DynamoDB, RDS, CloudFormation, IAM, and VPC.
# ── EC2 Instance Types ──
# General Purpose (balanced compute/memory)
t3.micro # 2 vCPU, 1 GiB RAM — free tier eligible
t3.small # 2 vCPU, 2 GiB RAM
t3.medium # 2 vCPU, 4 GiB RAM
m5.large # 2 vCPU, 8 GiB RAM
m5.xlarge # 4 vCPU, 16 GiB RAM
# Compute Optimized
c5.large # 2 vCPU, 4 GiB RAM
c5.xlarge # 4 vCPU, 8 GiB RAM
c5.4xlarge # 16 vCPU, 32 GiB RAM
# Memory Optimized
r5.large # 2 vCPU, 16 GiB RAM
r5.xlarge # 4 vCPU, 32 GiB RAM
x1e.xlarge # 4 vCPU, 122 GiB RAM

# ── Launch an EC2 Instance ──
# 1. Create a key pair
aws ec2 create-key-pair \
--key-name my-app-key \
--key-type rsa \
--key-format pem \
--query "KeyMaterial" \
--output text > my-app-key.pem
chmod 400 my-app-key.pem
# 2. Create a security group
aws ec2 create-security-group \
--group-name my-app-sg \
--description "Security group for web application"
# 3. Add inbound rules (0.0.0.0/0 is world-open; restrict SSH to a trusted CIDR in production)
aws ec2 authorize-security-group-ingress \
--group-name my-app-sg \
--protocol tcp \
--port 22 \
--cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress \
--group-name my-app-sg \
--protocol tcp \
--port 443 \
--cidr 0.0.0.0/0
# 4. Launch instance
aws ec2 run-instances \
--image-id ami-0c55b159cbfafe1f0 \
--instance-type t3.micro \
--key-name my-app-key \
--security-groups my-app-sg \
--user-data file://user-data.sh \
--tag-specifications \
"ResourceType=instance,Tags=[{Key=Name,Value=my-web-app}]"
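The `--query` flags in these commands filter server-side with JMESPath; the same extraction can be done client-side on the default JSON output. A minimal stdlib sketch (the sample response is abbreviated, but the field names follow the real EC2 API shape, `Reservations[].Instances[]`):

```python
import json

# Trimmed sample of what `aws ec2 describe-instances` returns
sample = json.loads("""
{
  "Reservations": [
    {"Instances": [
      {"InstanceId": "i-0abc123def456",
       "State": {"Name": "running"},
       "PublicIpAddress": "203.0.113.10"}
    ]}
  ]
}
""")

def summarize_instances(response):
    """Flatten Reservations[].Instances[] into (id, state, public_ip) tuples."""
    return [
        (i["InstanceId"], i["State"]["Name"], i.get("PublicIpAddress"))
        for r in response.get("Reservations", [])
        for i in r.get("Instances", [])
    ]

print(summarize_instances(sample))
```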
# 5. Describe instance
aws ec2 describe-instances \
--instance-ids i-0abc123def456 \
--query "Reservations[0].Instances[0].[InstanceId,State.Name,PublicIpAddress]"

#!/bin/bash
# ── EC2 User Data Script (runs as root on first boot) ──
set -eux
# Update system
yum update -y
# Install dependencies
yum install -y httpd python3
# Deploy application
cat > /var/www/html/index.html << 'EOF'
<!DOCTYPE html>
<html><body><h1>Hello from EC2!</h1></body></html>
EOF
# Start web server
systemctl start httpd
systemctl enable httpd

| Family | Use Case | Key Feature |
|---|---|---|
| T3/T3a | Dev/test, microservices | Burstable CPU credits |
| M5/M5a | General purpose apps | Balanced CPU/memory |
| C5/C5a | Batch processing, gaming | High CPU performance |
| R5/R5a | Databases, caching | High memory capacity |
| P4d | ML training, HPC | GPU (NVIDIA A100) |
| Inf1 | ML inference | AWS Inferentia chip |
| Mac M1 | macOS builds, CI/CD | Apple Silicon |

| Feature | Security Group | NACL |
|---|---|---|
| Layer | L4 (stateful) | L3/L4 (stateless) |
| Scope | Instance level | Subnet level |
| Rules | Allow only | Allow + Deny |
| Default | Deny all inbound | Allow all inbound/outbound |
| Return traffic | Auto-allowed | Must be explicit |
| Order | All rules evaluated | Rules numbered (priority) |
// ── Lambda: Node.js Handler ──
export const handler = async (event) => {
  // API Gateway proxy event
  const { httpMethod, path, body, queryStringParameters } = event;
  try {
    if (httpMethod === 'GET' && path === '/api/hello') {
      return {
        statusCode: 200,
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({
          message: 'Hello from Lambda!',
          timestamp: new Date().toISOString()
        })
      };
    }
    if (httpMethod === 'POST' && path === '/api/users') {
      const data = JSON.parse(body || '{}');
      // Validate input
      if (!data.name || !data.email) {
        return {
          statusCode: 400,
          body: JSON.stringify({ error: 'Name and email are required' })
        };
      }
      return {
        statusCode: 201,
        body: JSON.stringify({ id: Date.now(), ...data })
      };
    }
    return { statusCode: 404, body: JSON.stringify({ error: 'Not found' }) };
  } catch (err) {
    return {
      statusCode: 500,
      body: JSON.stringify({ error: err.message })
    };
  }
};

# ── Lambda: Python Handler ──
import json
import boto3
from datetime import datetime
dynamodb = boto3.resource('dynamodb')
table = dynamodb.Table('users-table')
def handler(event, context):
    """Lambda handler with DynamoDB integration"""
    http_method = event.get('httpMethod', '')
    path = event.get('path', '')
    try:
        if http_method == 'GET' and path == '/api/users':
            response = table.scan()
            return {
                'statusCode': 200,
                'headers': {'Content-Type': 'application/json'},
                'body': json.dumps({'users': response.get('Items', [])})
            }
        elif http_method == 'GET' and '/api/users/' in path:
            user_id = path.split('/')[-1]
            response = table.get_item(Key={'userId': user_id})
            if 'Item' in response:
                return {
                    'statusCode': 200,
                    'body': json.dumps(response['Item'])
                }
            return {'statusCode': 404, 'body': json.dumps({'error': 'User not found'})}
        elif http_method == 'POST' and path == '/api/users':
            # event['body'] is None (not absent) for bodyless requests, so `or` is needed
            body = json.loads(event.get('body') or '{}')
            item = {
                'userId': f"user_{datetime.now().timestamp()}",
                'name': body['name'],
                'email': body['email'],
                'createdAt': datetime.now().isoformat()
            }
            table.put_item(Item=item)
            return {
                'statusCode': 201,
                'body': json.dumps(item)
            }
        return {'statusCode': 404, 'body': json.dumps({'error': 'Not found'})}
    except Exception as e:
        return {'statusCode': 500, 'body': json.dumps({'error': str(e)})}

# ── Lambda CLI Commands ──
# Create Lambda function
aws lambda create-function \
--function-name my-api-handler \
--runtime nodejs20.x \
--handler index.handler \
--role arn:aws:iam::123456789012:role/lambda-execution-role \
--zip-file fileb://function.zip \
--timeout 30 \
--memory-size 256 \
--environment Variables={NODE_ENV=production}
# Update function code
aws lambda update-function-code \
--function-name my-api-handler \
--zip-file fileb://function.zip
# Update configuration
aws lambda update-function-configuration \
--function-name my-api-handler \
--timeout 60 \
--memory-size 512 \
--environment Variables={DB_HOST=my-db.example.com}
# Add concurrency limit
aws lambda put-function-concurrency \
--function-name my-api-handler \
--reserved-concurrent-executions 100
# List functions
aws lambda list-functions --max-items 50
# Invoke function (for testing)
aws lambda invoke \
--function-name my-api-handler \
--cli-binary-format raw-in-base64-out \
--payload '{"httpMethod":"GET","path":"/api/hello"}' \
response.json
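The `--payload` above mimics an API Gateway proxy event, so handlers can be exercised locally with the same event shape before deploying. A minimal sketch with a stand-in handler (this mirrors the GET /api/hello route from the Node.js example, not the DynamoDB-backed one):

```python
import json

def handler(event, context=None):
    """Stand-in handler mirroring the GET /api/hello route shown earlier."""
    if event.get("httpMethod") == "GET" and event.get("path") == "/api/hello":
        return {"statusCode": 200, "body": json.dumps({"message": "Hello from Lambda!"})}
    return {"statusCode": 404, "body": json.dumps({"error": "Not found"})}

# Build the same proxy event the CLI invoke sends
event = {"httpMethod": "GET", "path": "/api/hello"}
resp = handler(event)
print(resp["statusCode"], json.loads(resp["body"]))
```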
# Create Lambda layer
aws lambda publish-layer-version \
--layer-name my-shared-layer \
--description "Shared utilities" \
--zip-file fileb://layer.zip \
--compatible-runtimes nodejs20.x python3.12

# ── ALB: Target Groups & Listeners ──
# Create target group
aws elbv2 create-target-group \
--name my-app-tg \
--protocol HTTP \
--port 3000 \
--target-type instance \
--vpc-id vpc-12345678
# Create ALB
aws elbv2 create-load-balancer \
--name my-app-alb \
--subnets subnet-abc subnet-def \
--security-groups sg-12345678
# Create listener (HTTPS)
aws elbv2 create-listener \
--load-balancer-arn arn:aws:... \
--protocol HTTPS \
--port 443 \
--certificates CertificateArn=arn:... \
--default-actions Type=forward,TargetGroupArn=arn:...
# Register targets
aws elbv2 register-targets \
--target-group-arn arn:... \
--targets Id=i-abc123 Id=i-def456

# ── S3 Bucket Operations ──
# Create bucket
aws s3 mb s3://my-application-bucket
# Create bucket with versioning & encryption
aws s3api create-bucket \
--bucket my-secure-bucket \
--region us-east-1 \
--object-lock-enabled-for-bucket
# Enable versioning
aws s3api put-bucket-versioning \
--bucket my-secure-bucket \
--versioning-configuration Status=Enabled
# Enable default encryption (SSE-S3)
aws s3api put-bucket-encryption \
--bucket my-secure-bucket \
--server-side-encryption-configuration \
'{"Rules":[{"ApplyServerSideEncryptionByDefault":{"SSEAlgorithm":"AES256"}}]}'
# Upload files
aws s3 cp ./local-file.txt s3://my-bucket/path/to/file.txt
aws s3 cp ./dist/ s3://my-bucket/assets/ --recursive
aws s3 sync ./build/ s3://my-bucket/ --delete
# Download files
aws s3 cp s3://my-bucket/file.txt ./local-file.txt
aws s3 sync s3://my-bucket/ ./local-backup/ --delete
# List objects
aws s3 ls s3://my-bucket/
aws s3 ls s3://my-bucket/ --recursive --summarize
aws s3api list-objects-v2 --bucket my-bucket --max-items 100
# Delete objects
aws s3 rm s3://my-bucket/old-file.txt
aws s3 rm s3://my-bucket/temp/ --recursive
# Delete bucket (must be empty)
aws s3 rb s3://my-bucket --force

# ── S3 Bucket Policy ──
# Public read access for a website bucket
aws s3api put-bucket-policy \
--bucket my-website-bucket \
--policy '{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "PublicReadGetObject",
"Effect": "Allow",
"Principal": "*",
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::my-website-bucket/*"
}
]
}'
# Cross-account access
aws s3api put-bucket-policy \
--bucket my-app-data \
--policy '{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::123456789012:root"
},
"Action": [
"s3:GetObject",
"s3:PutObject",
"s3:ListBucket"
],
"Resource": [
"arn:aws:s3:::my-app-data",
"arn:aws:s3:::my-app-data/*"
]
}
]
}'

# ── S3 CORS Configuration (for put-bucket-cors) ──
{
"CORSRules": [
{
"AllowedHeaders": ["*"],
"AllowedMethods": ["GET", "PUT", "POST", "DELETE", "HEAD"],
"AllowedOrigins": ["https://my-app.example.com"],
"ExposeHeaders": ["ETag", "x-amz-request-id"],
"MaxAgeSeconds": 3600
}
]
}

# ── S3 Presigned URLs ──
# Generate presigned GET URL (valid for 1 hour)
aws s3 presign s3://my-bucket/secret-document.pdf --expires-in 3600
# Note: `aws s3 presign` only generates GET URLs; there is no --method flag.
# For PUT upload URLs, use an SDK (e.g. boto3 generate_presigned_url with 'put_object').
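Under the hood, presigning is SigV4 query-string signing. A self-contained sketch of that flow using only the stdlib (simplified: virtual-hosted-style URL, `host` as the only signed header, no special characters in the key; the bucket, key, and credentials are placeholders):

```python
import hashlib
import hmac
import urllib.parse
from datetime import datetime, timezone

def presign_get(bucket, key, access_key, secret_key, region, expires=3600, now=None):
    """Build a SigV4 presigned GET URL for s3://bucket/key."""
    now = now or datetime.now(timezone.utc)
    amz_date = now.strftime("%Y%m%dT%H%M%SZ")
    datestamp = now.strftime("%Y%m%d")
    host = f"{bucket}.s3.{region}.amazonaws.com"
    scope = f"{datestamp}/{region}/s3/aws4_request"
    params = {
        "X-Amz-Algorithm": "AWS4-HMAC-SHA256",
        "X-Amz-Credential": f"{access_key}/{scope}",
        "X-Amz-Date": amz_date,
        "X-Amz-Expires": str(expires),
        "X-Amz-SignedHeaders": "host",
    }
    # Canonical query string: sorted keys, URL-encoded
    qs = "&".join(
        f"{urllib.parse.quote(k, safe='')}={urllib.parse.quote(v, safe='')}"
        for k, v in sorted(params.items())
    )
    canonical = f"GET\n/{key}\n{qs}\nhost:{host}\n\nhost\nUNSIGNED-PAYLOAD"
    to_sign = ("AWS4-HMAC-SHA256\n" + amz_date + "\n" + scope + "\n"
               + hashlib.sha256(canonical.encode()).hexdigest())
    # Derive the signing key: AWS4<secret> -> date -> region -> service -> request
    k = b"AWS4" + secret_key.encode()
    for part in (datestamp, region, "s3", "aws4_request"):
        k = hmac.new(k, part.encode(), hashlib.sha256).digest()
    sig = hmac.new(k, to_sign.encode(), hashlib.sha256).hexdigest()
    return f"https://{host}/{key}?{qs}&X-Amz-Signature={sig}"

url = presign_get("my-bucket", "secret-document.pdf",
                  "AKIAEXAMPLE", "secretkey", "us-east-1")
print(url)
```

In practice an SDK handles this; the sketch is only to show what `presign` produces.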
# Using boto3 (Python)
python3 -c "
import boto3
s3 = boto3.client('s3')
url = s3.generate_presigned_url(
'get_object',
Params={'Bucket': 'my-bucket', 'Key': 'report.pdf'},
ExpiresIn=3600
)
print(url)
"

# ── S3 Lifecycle Configuration (for put-bucket-lifecycle-configuration) ──
{
"Rules": [
{
"ID": "MoveToIAAfter30Days",
"Status": "Enabled",
"Filter": { "Prefix": "logs/" },
"Transitions": [
{
"Days": 30,
"StorageClass": "STANDARD_IA"
},
{
"Days": 90,
"StorageClass": "GLACIER"
}
],
"Expiration": {
"Days": 365
}
},
{
"ID": "AbortIncompleteMultipartUploads",
"Status": "Enabled",
"Filter": { "Prefix": "uploads/" },
"AbortIncompleteMultipartUpload": {
"DaysAfterInitiation": 7
}
}
]
}

| Class | Availability | Min Duration | Use Case |
|---|---|---|---|
| S3 Standard | 99.99% | None | Frequently accessed data |
| S3 Intelligent-Tiering | 99.9% | None | Unknown/changing access patterns |
| S3 Standard-IA | 99.9% | 30 days | Infrequently accessed data |
| S3 One Zone-IA | 99.5% | 30 days | Re-creatable data, secondary backups |
| S3 Glacier Instant | 99.9% | 90 days | Archive with instant retrieval (ms) |
| S3 Glacier Flexible | 99.99% | 90 days | Archive with minutes-hours retrieval |
| S3 Glacier Deep Archive | 99.99% | 180 days | Long-term archive (12-48h retrieval) |
| S3 Express One Zone | 99.95% | None | Single AZ, highest performance |
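The Min Duration column matters for cost: objects deleted or transitioned before the minimum are billed for the remainder. A small sketch encoding those durations (the class names follow the API's StorageClass values; durations per AWS's published minimums):

```python
# Minimum storage durations (days); early deletion is billed to the minimum.
MIN_DURATION_DAYS = {
    "STANDARD": 0,
    "INTELLIGENT_TIERING": 0,
    "STANDARD_IA": 30,
    "ONEZONE_IA": 30,
    "GLACIER_IR": 90,
    "GLACIER": 90,
    "DEEP_ARCHIVE": 180,
}

def billable_days(storage_class, age_days):
    """Days actually billed if an object is deleted at age_days."""
    return max(age_days, MIN_DURATION_DAYS[storage_class])

print(billable_days("STANDARD_IA", 10))  # deleted after 10 days, billed for 30
```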

| Method | Key Management | Use Case |
|---|---|---|
| SSE-S3 (AES-256) | AWS managed | Default, simplest option |
| SSE-KMS | AWS KMS CMK | Audit trail, key rotation |
| SSE-C | Customer provided | Bring your own keys |
| Client-side | Application managed | Full control over encryption |
# ── EBS Volume Management ──
# Create EBS volume (gp3 — default, best price/performance)
aws ec2 create-volume \
--availability-zone us-east-1a \
--volume-type gp3 \
--size 100 \
--tag-specifications 'ResourceType=volume,Tags=[{Key=Name,Value=app-data}]'
# Create io2 volume (high-performance databases)
# (io2 throughput scales with provisioned IOPS; --throughput applies to gp3 only)
aws ec2 create-volume \
--availability-zone us-east-1a \
--volume-type io2 \
--size 500 \
--iops 16000
# Attach volume to instance
aws ec2 attach-volume \
--volume-id vol-0abc123 \
--instance-id i-0def456 \
--device /dev/sdf
# Create snapshot
aws ec2 create-snapshot \
--volume-id vol-0abc123 \
--description "Daily backup - $(date +%Y-%m-%d)"
# Create snapshot from running instance (consistent)
aws ec2 create-snapshot \
--volume-id vol-0abc123 \
--description "App data backup"
# List snapshots
aws ec2 describe-snapshots --owner-ids self --query "Snapshots[*].[SnapshotId,VolumeSize,State]" --output table
# Delete volume (must be detached)
aws ec2 delete-volume --volume-id vol-0abc123
# ── EBS Volume Types ──
# gp3: 3,000 IOPS, 125 MB/s included — best value
# io2: Up to 64,000 IOPS, 1,000 MB/s — critical workloads
# st1: Throughput-optimized (500 MB/s) — big data, logs
# sc1: Cold HDD — infrequently accessed data

# ── EFS (Elastic File System) ──
# Create file system
aws efs create-file-system \
--creation-token my-app-fs \
--performance-mode generalPurpose \
--throughput-mode bursting \
--encrypted
# Create mount target in a subnet
aws efs create-mount-target \
--file-system-id fs-0abc123 \
--subnet-id subnet-0def456 \
--security-groups sg-0789
# Mount on EC2 instance
# sudo mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576 \
# fs-0abc123.efs.us-east-1.amazonaws.com:/ /mnt/efs

# ── DynamoDB Table Creation ──
# Create table with on-demand capacity
aws dynamodb create-table \
--table-name users \
--attribute-definitions \
AttributeName=userId,AttributeType=S \
--key-schema \
AttributeName=userId,KeyType=HASH \
--billing-mode PAY_PER_REQUEST \
--table-class STANDARD
# Create table with provisioned capacity
aws dynamodb create-table \
--table-name orders \
--attribute-definitions \
AttributeName=orderId,AttributeType=S \
--key-schema \
AttributeName=orderId,KeyType=HASH \
--billing-mode PROVISIONED \
--provisioned-throughput ReadCapacityUnits=25,WriteCapacityUnits=25
# Add a Global Secondary Index (GSI)
aws dynamodb update-table \
--table-name orders \
--attribute-definitions AttributeName=customerId,AttributeType=S \
--global-secondary-index-updates '
[{
"Create": {
"IndexName": "customer-index",
"KeySchema": [{"AttributeName":"customerId","KeyType":"HASH"}],
"Projection": {"ProjectionType":"ALL"},
"ProvisionedThroughput": {
"ReadCapacityUnits": 10,
"WriteCapacityUnits": 5
}
}
}]'

# ── DynamoDB Item Operations ──
# Put item
aws dynamodb put-item \
--table-name users \
--item '{
"userId": {"S": "user_001"},
"email": {"S": "alice@example.com"},
"name": {"S": "Alice Johnson"},
"age": {"N": "30"},
"active": {"BOOL": true},
"createdAt": {"S": "2025-01-15T10:30:00Z"},
"tags": {"L": [{"S": "premium"}, {"S": "early-adopter"}]}
}'
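The `{"S": ...}`/`{"N": ...}` wrappers above are DynamoDB's attribute-value format. A minimal marshaller for the types used in this sheet (a sketch; SDKs ship full serializers for this):

```python
def to_dynamodb(value):
    """Marshal a Python value into DynamoDB attribute-value JSON."""
    if isinstance(value, bool):        # check bool before int: bool subclasses int
        return {"BOOL": value}
    if isinstance(value, str):
        return {"S": value}
    if isinstance(value, (int, float)):
        return {"N": str(value)}       # numbers are always sent as strings
    if isinstance(value, list):
        return {"L": [to_dynamodb(v) for v in value]}
    raise TypeError(f"unhandled type: {type(value).__name__}")

item = {k: to_dynamodb(v) for k, v in {
    "userId": "user_001", "age": 30, "active": True, "tags": ["premium"]
}.items()}
print(item)
```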
# Get item
aws dynamodb get-item \
--table-name users \
--key '{"userId": {"S": "user_001"}}'
# Update item (add attribute)
aws dynamodb update-item \
--table-name users \
--key '{"userId": {"S": "user_001"}}' \
--update-expression "SET #e = :email, #l = :lastLogin" \
--expression-attribute-names '{"#e": "email", "#l": "lastLogin"}' \
--expression-attribute-values '{":email": {"S": "new@example.com"}, ":lastLogin": {"S": "2025-06-01T08:00:00Z"}}' \
--return-values ALL_NEW
# Delete item
aws dynamodb delete-item \
--table-name users \
--key '{"userId": {"S": "user_001"}}'
# Query with filter
aws dynamodb query \
--table-name orders \
--index-name customer-index \
--key-condition-expression "customerId = :cid" \
--filter-expression "orderStatus = :status" \
--expression-attribute-values '{":cid": {"S": "cust_123"}, ":status": {"S": "SHIPPED"}}' \
--limit 20

# ── DynamoDB Batch Operations ──
# Batch write (up to 25 items per request)
aws dynamodb batch-write-item \
--request-items '{
"users": [
{
"PutRequest": {
"Item": {
"userId": {"S": "user_002"},
"name": {"S": "Bob Smith"},
"email": {"S": "bob@example.com"}
}
}
},
{
"PutRequest": {
"Item": {
"userId": {"S": "user_003"},
"name": {"S": "Carol White"},
"email": {"S": "carol@example.com"}
}
}
}
]
}'
# Batch get (up to 100 items or 16 MB per request)
aws dynamodb batch-get-item \
--request-items '{
"users": {
"Keys": [
{"userId": {"S": "user_001"}},
{"userId": {"S": "user_002"}}
],
"ProjectionExpression": "userId, #n, email",
"ExpressionAttributeNames": {"#n": "name"}
}
}'
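Since batch-write accepts at most 25 items per request, larger workloads must be chunked. A sketch of the chunking (table and item shapes are the hypothetical ones used above):

```python
def batch_write_chunks(table, items, batch_size=25):
    """Split puts into batch-write-item payloads of at most 25 items,
    the per-request limit noted above."""
    for i in range(0, len(items), batch_size):
        yield {table: [{"PutRequest": {"Item": it}} for it in items[i:i + batch_size]]}

items = [{"userId": {"S": f"user_{n:03d}"}} for n in range(60)]
payloads = list(batch_write_chunks("users", items))
print([len(p["users"]) for p in payloads])  # [25, 25, 10]
```

A production version would also retry any UnprocessedItems the service returns.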
# Scan entire table (use with caution on large tables)
aws dynamodb scan \
--table-name users \
--projection-expression "userId, #n, email" \
--expression-attribute-names '{"#n": "name"}' \
--limit 100

| Feature | On-Demand | Provisioned |
|---|---|---|
| Scaling | Automatic | Manual (or auto-scaling) |
| Pricing | Per request + storage | Per RCU/WCU + storage |
| Use case | Unpredictable workloads | Predictable, cost-optimized |
| Default quota | 40,000 RCU / 40,000 WCU per table (soft limit) | 40,000 RCU / 40,000 WCU per table (soft limit) |
| Burst behavior | Instantly serves up to 2x previous peak | 300 seconds of accrued burst capacity |
| Best for | New apps, sporadic traffic | Production with known patterns |
# ── DynamoDB: Enable TTL & Streams ──
# Enable TTL (auto-delete expired items)
aws dynamodb update-time-to-live \
--table-name sessions \
--attribute-name expiresAt
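TTL expects the expiry attribute (`expiresAt` above) to hold a Unix epoch timestamp in seconds, stored as a Number. A sketch of producing one (the session item shape is illustrative):

```python
from datetime import datetime, timedelta, timezone

def ttl_epoch(days_from_now):
    """Epoch seconds at which DynamoDB TTL should expire the item."""
    expires = datetime.now(timezone.utc) + timedelta(days=days_from_now)
    return int(expires.timestamp())

session_item = {
    "sessionId": {"S": "sess_123"},
    "expiresAt": {"N": str(ttl_epoch(7))},  # auto-deleted ~7 days from now
}
print(session_item["expiresAt"])
```

Note that TTL deletion is best-effort and can lag expiry by some time, so queries should still filter expired items.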
# Enable DynamoDB Streams
aws dynamodb update-table \
--table-name orders \
--stream-specification StreamEnabled=true,StreamViewType=NEW_AND_OLD_IMAGES
# Describe table info
aws dynamodb describe-table \
--table-name orders \
--query "Table.[TableName,TableStatus,BillingModeSummary,ProvisionedThroughput]"

# ── RDS (Relational Database Service) ──
# Create PostgreSQL instance (note: "admin" is a reserved master username on RDS for PostgreSQL)
aws rds create-db-instance \
--db-instance-identifier my-app-db \
--db-instance-class db.t3.medium \
--engine postgres \
--engine-version 16.1 \
--allocated-storage 100 \
--storage-type gp3 \
--master-username dbadmin \
--master-user-password MyStr0ngP@ssw0rd! \
--vpc-security-group-ids sg-0123456 \
--db-subnet-group-name my-db-subnet-group \
--backup-retention-period 7 \
--multi-az \
--no-publicly-accessible
# Create MySQL instance
aws rds create-db-instance \
--db-instance-identifier my-mysql-db \
--db-instance-class db.t3.medium \
--engine mysql \
--engine-version 8.0 \
--allocated-storage 50 \
--master-username admin \
--master-user-password MyStr0ngP@ssw0rd!
# Create read replica
aws rds create-db-instance-read-replica \
--db-instance-identifier my-db-replica \
--source-db-instance-identifier my-app-db
# Create snapshot
aws rds create-db-snapshot \
--db-instance-identifier my-app-db \
--db-snapshot-identifier my-db-snapshot-$(date +%Y%m%d)
# Modify instance (scale up)
aws rds modify-db-instance \
--db-instance-identifier my-app-db \
--db-instance-class db.r5.large \
--allocated-storage 500 \
--apply-immediately

| Engine | Versions | Best For |
|---|---|---|
| PostgreSQL | 13, 14, 15, 16 | Complex queries, extensions, JSONB |
| MySQL | 8.0 | Web apps, wide compatibility |
| MariaDB | 10.6, 10.11 | MySQL-compatible, community features |
| Oracle | 19c, 21c | Enterprise workloads |
| SQL Server | 2019, 2022 | .NET ecosystem integration |
| Aurora PostgreSQL | Postgres-compatible | 5x faster, auto-scaling storage |
| Aurora MySQL | MySQL-compatible | 5x faster, up to 15 replicas |

| Feature | Description |
|---|---|
| Multi-AZ | Automatic failover to standby in different AZ |
| Read Replicas | Up to 15 per source instance (MySQL, MariaDB, PostgreSQL) |
| Cross-Region Replicas | DR and reduced-latency reads |
| Automated Backups | Daily full + continuous transaction logs (up to 35 days) |
| Manual Snapshots | User-initiated, retained until deleted |
| Parameter Groups | Customize engine configuration |
# ── VPC Creation ──
# Create VPC with CIDR block (10.0.0.0/16 = 65,536 IPs)
aws ec2 create-vpc \
--cidr-block 10.0.0.0/16 \
--tag-specifications 'ResourceType=vpc,Tags=[{Key=Name,Value=my-app-vpc}]'
# Enable DNS support & hostnames
VPC_ID=$(aws ec2 describe-vpcs \
--filters Name=tag:Name,Values=my-app-vpc \
--query "Vpcs[0].VpcId" --output text)
aws ec2 modify-vpc-attribute \
--vpc-id $VPC_ID \
--enable-dns-support '{"Value": true}'
aws ec2 modify-vpc-attribute \
--vpc-id $VPC_ID \
--enable-dns-hostnames '{"Value": true}'
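The subnet CIDRs used below can be planned up front with the stdlib `ipaddress` module; carving the 10.0.0.0/16 VPC into /24s gives 256 candidate subnets (AWS reserves 5 addresses per subnet, leaving 251 usable in each /24):

```python
import ipaddress

vpc = ipaddress.ip_network("10.0.0.0/16")
subnets = list(vpc.subnets(new_prefix=24))

print(len(subnets))     # 256 possible /24s
print(str(subnets[1]))  # 10.0.1.0/24  (public subnet)
print(str(subnets[10])) # 10.0.10.0/24 (private subnet)
```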
# Create public subnet
aws ec2 create-subnet \
--vpc-id $VPC_ID \
--cidr-block 10.0.1.0/24 \
--availability-zone us-east-1a \
--tag-specifications 'ResourceType=subnet,Tags=[{Key=Name,Value=public-subnet-1a}]'
# Create private subnet
aws ec2 create-subnet \
--vpc-id $VPC_ID \
--cidr-block 10.0.10.0/24 \
--availability-zone us-east-1a \
--tag-specifications 'ResourceType=subnet,Tags=[{Key=Name,Value=private-subnet-1a}]'
# Create second AZ private subnet (for RDS Multi-AZ)
aws ec2 create-subnet \
--vpc-id $VPC_ID \
--cidr-block 10.0.11.0/24 \
--availability-zone us-east-1b \
--tag-specifications 'ResourceType=subnet,Tags=[{Key=Name,Value=private-subnet-1b}]'

# ── Internet Gateway & NAT Gateway ──
# Create and attach Internet Gateway
IGW_ID=$(aws ec2 create-internet-gateway \
--tag-specifications 'ResourceType=internet-gateway,Tags=[{Key=Name,Value=my-app-igw}]' \
--query "InternetGateway.InternetGatewayId" --output text)
aws ec2 attach-internet-gateway \
--internet-gateway-id $IGW_ID \
--vpc-id $VPC_ID
# Create NAT Gateway (in a public subnet, for private subnet egress)
ALLOC_ID=$(aws ec2 allocate-address \
--domain vpc \
--query "AllocationId" --output text)
aws ec2 create-nat-gateway \
--subnet-id subnet-0abc123 \
--allocation-id $ALLOC_ID \
--tag-specifications 'ResourceType=natgateway,Tags=[{Key=Name,Value=my-app-nat}]'
# ── Route Tables ──
# Create public route table (routes to IGW)
PUB_RT=$(aws ec2 create-route-table \
--vpc-id $VPC_ID \
--tag-specifications 'ResourceType=route-table,Tags=[{Key=Name,Value=public-rt}]' \
--query "RouteTable.RouteTableId" --output text)
aws ec2 create-route \
--route-table-id $PUB_RT \
--destination-cidr-block 0.0.0.0/0 \
--gateway-id $IGW_ID
# Associate the public subnet with the public route table
aws ec2 associate-route-table \
--route-table-id $PUB_RT \
--subnet-id subnet-0abc123
# Create private route table (routes to NAT)
PRI_RT=$(aws ec2 create-route-table \
--vpc-id $VPC_ID \
--tag-specifications 'ResourceType=route-table,Tags=[{Key=Name,Value=private-rt}]' \
--query "RouteTable.RouteTableId" --output text)
aws ec2 create-route \
--route-table-id $PRI_RT \
--destination-cidr-block 0.0.0.0/0 \
--nat-gateway-id nat-0abc123

# ── VPC Peering ──
# Request VPC peering connection
aws ec2 create-vpc-peering-connection \
--vpc-id vpc-11111111 \
--peer-vpc-id vpc-22222222 \
--peer-region us-west-2 \
--tag-specifications 'ResourceType=vpc-peering-connection,Tags=[{Key=Name,Value=peer-east-west}]'
# Accept the peering (in the accepter account)
aws ec2 accept-vpc-peering-connection \
--vpc-peering-connection-id pcx-0abc123def456
# Update route tables to route through peering
aws ec2 create-route \
--route-table-id rtb-111111 \
--destination-cidr-block 10.1.0.0/16 \
--vpc-peering-connection-id pcx-0abc123def456

| Subnet | CIDR Example | Use Case |
|---|---|---|
| Public | 10.0.1.0/24 | ALB, NAT Gateway, bastion host |
| Private (app) | 10.0.10.0/24 | EC2 app servers, ECS tasks |
| Private (data) | 10.0.20.0/24 | RDS, ElastiCache, no internet |
| Isolated | 10.0.30.0/24 | No IGW, no NAT — strict control |

| Aspect | Security Group | NACL |
|---|---|---|
| Level | Instance / ENI | Subnet |
| Stateful | Yes (auto-allows return) | No (must add both directions) |
| Rule type | Allow rules only | Allow and deny rules |
| Defaults | Inbound denied, outbound allowed | Default NACL allows all; custom NACLs deny all |
| Evaluation | All rules checked together | Lowest rule number wins |
| Best for | Instance-level traffic control | Subnet-level blanket rules |
# ── Route 53: Hosted Zones & Records ──
# Create hosted zone
aws route53 create-hosted-zone \
--name example.com \
--caller-reference "$(date +%s)"
# Create A record (alias to ALB)
aws route53 change-resource-record-sets \
--hosted-zone-id Z1234567890ABC \
--change-batch '{
"Changes": [{
"Action": "CREATE",
"ResourceRecordSet": {
"Name": "app.example.com",
"Type": "A",
"AliasTarget": {
"HostedZoneId": "Z35SXDOTRQ7X7K",
"DNSName": "my-app-alb-1234567890.us-east-1.elb.amazonaws.com",
"EvaluateTargetHealth": true
}
}
}]
}'
# Create CNAME record
aws route53 change-resource-record-sets \
--hosted-zone-id Z1234567890ABC \
--change-batch '{
"Changes": [{
"Action": "CREATE",
"ResourceRecordSet": {
"Name": "www.example.com",
"Type": "CNAME",
"TTL": 300,
"ResourceRecords": [{"Value": "app.example.com"}]
}
}]
}'
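The `--change-batch` JSON above follows one fixed shape, so it is easy to generate rather than hand-write. A sketch for simple (non-alias) records (the helper name is illustrative; UPSERT creates or updates in one call):

```python
import json

def change_batch(action, name, rtype, ttl, values):
    """Build the --change-batch JSON used above for simple (non-alias) records."""
    return {
        "Changes": [{
            "Action": action,  # CREATE | DELETE | UPSERT
            "ResourceRecordSet": {
                "Name": name,
                "Type": rtype,
                "TTL": ttl,
                "ResourceRecords": [{"Value": v} for v in values],
            }
        }]
    }

batch = change_batch("UPSERT", "www.example.com", "CNAME", 300, ["app.example.com"])
print(json.dumps(batch, indent=2))
```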
# Create MX record for email
aws route53 change-resource-record-sets \
--hosted-zone-id Z1234567890ABC \
--change-batch '{
"Changes": [{
"Action": "CREATE",
"ResourceRecordSet": {
"Name": "example.com",
"Type": "MX",
"TTL": 3600,
"ResourceRecords": [
{"Value": "10 mail.example.com"}
]
}
}]
}'

| Type | Purpose | Points To |
|---|---|---|
| A | IPv4 address | 1.2.3.4 |
| AAAA | IPv6 address | 2001:0db8::1 |
| CNAME | Alias to another domain | cdn.example.com |
| ALIAS | AWS resource (free apex) | ELB, CloudFront, S3 |
| MX | Mail exchange | 10 mail.example.com |
| TXT | Text (SPF, DKIM, verification) | "v=spf1 include:_spf..." |
| NS | Name servers | ns-xxx.awsdns-xx.com |
| SOA | Start of authority | Administrative info |
| SRV | Service locator | _sip._tcp.example.com |

| Policy | Use Case | Key Feature |
|---|---|---|
| Simple | Single resource | One record, no health check needed |
| Weighted | A/B testing, gradual rollout | Percentage-based traffic split |
| Latency | Global, multi-region | Route to lowest-latency region |
| Failover | Active-passive DR | Health check + automatic failover |
| Geolocation | Localization, compliance | Route based on user location |
| Multivalue | Random load balancing | Up to 8 healthy records returned |
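For weighted routing, each record's expected share of traffic is its weight divided by the sum of all weights. A quick sketch of that arithmetic (record names are illustrative):

```python
def weighted_split(weights):
    """Expected traffic share per record under weighted routing."""
    total = sum(weights.values())
    return {name: w / total for name, w in weights.items()}

# Gradual rollout: send 10% of traffic to the new stack
split = weighted_split({"blue": 90, "green": 10})
print(split)  # {'blue': 0.9, 'green': 0.1}
```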
# ── Route 53: Health Checks ──
# Create health check for endpoint
aws route53 create-health-check \
--caller-reference "$(date +%s)" \
--health-check-config '{
"Port": 443,
"Type": "HTTPS",
"ResourcePath": "/health",
"FullyQualifiedDomainName": "app.example.com",
"RequestInterval": 30,
"FailureThreshold": 3
}'
# Associate health check with failover record
aws route53 change-resource-record-sets \
--hosted-zone-id Z1234567890ABC \
--change-batch '{
"Changes": [{
"Action": "CREATE",
"ResourceRecordSet": {
"Name": "api.example.com",
"Type": "A",
"SetIdentifier": "primary",
"Failover": "PRIMARY",
"AliasTarget": {
"HostedZoneId": "Z35SXDOTRQ7X7K",
"DNSName": "my-alb.us-east-1.elb.amazonaws.com",
"EvaluateTargetHealth": true
},
"HealthCheckId": "12345678-1234-1234-1234-123456789012"
}
}]
}'

# ── IAM Policy Document (identity-based) ──
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "AllowS3ReadWriteForSpecificBucket",
"Effect": "Allow",
"Action": [
"s3:GetObject",
"s3:PutObject",
"s3:DeleteObject",
"s3:ListBucket"
],
"Resource": [
"arn:aws:s3:::my-app-bucket",
"arn:aws:s3:::my-app-bucket/*"
]
},
{
"Sid": "DenyUploadToProductionUnlessEncrypted",
"Effect": "Deny",
"Action": "s3:PutObject",
"Resource": "arn:aws:s3:::my-app-bucket/*",
"Condition": {
"StringNotEquals": {
"s3:x-amz-server-side-encryption": "aws:kms"
}
}
},
{
"Sid": "RestrictToSpecificIPRange",
"Effect": "Deny",
"Action": "*",
"Resource": "*",
"Condition": {
"NotIpAddress": {
"aws:SourceIp": [
"203.0.113.0/24",
"198.51.100.0/24"
]
}
}
}
]
}

| Element | Description | Example |
|---|---|---|
| Version | Policy language version | "2012-10-17" (current) |
| Statement | Container for permissions | Array of statement objects |
| Sid | Optional statement ID | "AllowS3Read" |
| Effect | Allow or Deny | "Allow" or "Deny" |
| Action | API actions allowed/denied | "s3:GetObject", "ec2:*" |
| Resource | ARNs the action applies to | "arn:aws:s3:::bucket/*" |
| Condition | Optional conditions | {"aws:SourceIp": "..."} |
| Principal | Who the policy applies to | In resource-based policies only |

| Condition Key | Operator | Use Case |
|---|---|---|
| aws:SourceIp | IpAddress / NotIpAddress | Restrict by IP range |
| aws:PrincipalOrgID | StringEquals | Restrict to organization |
| s3:prefix | StringLike | Restrict S3 path |
| aws:CurrentTime | DateGreaterThan | Time-based access |
| aws:MultiFactorAuthAge | NumericLessThan | Require recent MFA |
| aws:TagKeys | ForAnyValue:StringEquals | Require specific tags |
| ec2:ResourceTag/env | StringEquals | Resource tag-based access |
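The IpAddress/NotIpAddress operators above do CIDR membership tests, which can be reproduced locally with the stdlib to sanity-check a policy before deploying. A sketch using the IP ranges from the policy example (explicit Deny always overrides Allow in IAM evaluation):

```python
import ipaddress

def ip_in_ranges(source_ip, cidrs):
    """Evaluate an aws:SourceIp IpAddress-style condition:
    true when source_ip falls inside any listed CIDR range."""
    ip = ipaddress.ip_address(source_ip)
    return any(ip in ipaddress.ip_network(c) for c in cidrs)

allowed = ["203.0.113.0/24", "198.51.100.0/24"]  # ranges from the policy above
print(ip_in_ranges("203.0.113.42", allowed))  # True  -> NotIpAddress false, no Deny
print(ip_in_ranges("192.0.2.1", allowed))     # False -> NotIpAddress matches, Deny applies
```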
# ── IAM User, Group & Role Management ──
# Create group and attach policies
aws iam create-group --group-name developers
aws iam attach-group-policy \
--group-name developers \
--policy-arn arn:aws:iam::aws:policy/ReadOnlyAccess
# Create user and add to group
aws iam create-user --user-name dev-alice
aws iam add-user-to-group --user-name dev-alice --group-name developers
aws iam create-login-profile --user-name dev-alice --password 'TempPass123!' --password-reset-required
# Create access key for programmatic access
aws iam create-access-key --user-name dev-alice
# Create IAM role for EC2 (instance profile)
aws iam create-role \
--role-name ec2-app-role \
--assume-role-policy-document '{
"Version": "2012-10-17",
"Statement": [{
"Effect": "Allow",
"Principal": {"Service": "ec2.amazonaws.com"},
"Action": "sts:AssumeRole"
}]
}'
# Attach managed policy to role
aws iam attach-role-policy \
--role-name ec2-app-role \
--policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess
# Create instance profile and attach role
aws iam create-instance-profile --instance-profile-name ec2-app-profile
aws iam add-role-to-instance-profile \
--instance-profile-name ec2-app-profile \
--role-name ec2-app-role
# Create role for Lambda
aws iam create-role \
--role-name lambda-execution-role \
--assume-role-policy-document '{
"Version": "2012-10-17",
"Statement": [{
"Effect": "Allow",
"Principal": {"Service": "lambda.amazonaws.com"},
"Action": "sts:AssumeRole"
}]
}'
aws iam attach-role-policy \
--role-name lambda-execution-role \
--policy-arn arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole

# ── STS: Assume Role (Cross-Account Access) ──
# Assume role (returns temporary credentials)
aws sts assume-role \
--role-arn arn:aws:iam::123456789012:role/cross-account-role \
--role-session-name my-dev-session \
--duration-seconds 3600
# The response includes AccessKeyId, SecretAccessKey, SessionToken
# Use them to make API calls:
AWS_ACCESS_KEY_ID=ASIA... \
AWS_SECRET_ACCESS_KEY=... \
AWS_SESSION_TOKEN=FQoGZXIv... \
aws s3 ls s3://cross-account-bucket/
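Turning the assume-role response into those export statements is a small parsing step. A stdlib sketch (the sample response is abbreviated, but the Credentials field names match the STS API):

```python
import json

# Shape of the `aws sts assume-role` response (abbreviated sample)
response = json.loads("""
{
  "Credentials": {
    "AccessKeyId": "ASIAEXAMPLE",
    "SecretAccessKey": "secret",
    "SessionToken": "token",
    "Expiration": "2025-01-15T12:00:00Z"
  }
}
""")

def export_lines(resp):
    """Turn the Credentials block into shell export statements."""
    c = resp["Credentials"]
    return [
        f"export AWS_ACCESS_KEY_ID={c['AccessKeyId']}",
        f"export AWS_SECRET_ACCESS_KEY={c['SecretAccessKey']}",
        f"export AWS_SESSION_TOKEN={c['SessionToken']}",
    ]

print("\n".join(export_lines(response)))
```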
# Get caller identity (verify who you are)
aws sts get-caller-identity

# ── KMS (Key Management Service) ──
# Create KMS key
aws kms create-key \
--description "Encryption key for S3 objects" \
--key-usage ENCRYPT_DECRYPT \
--origin AWS_KMS \
--tags TagKey=Name,TagValue=s3-encryption-key
# Create alias for the key
aws kms create-alias \
--alias-name alias/s3-app-key \
--target-key-id 1234abcd-12ab-34cd-56ef-1234567890ab
# Encrypt data (CiphertextBlob is base64 encoded; decode before writing the binary file)
aws kms encrypt \
--key-id alias/s3-app-key \
--plaintext fileb://secret-data.txt \
--output text \
--query CiphertextBlob | base64 --decode > encrypted.bin
# Decrypt data (Plaintext is returned base64 encoded)
aws kms decrypt \
--ciphertext-blob fileb://encrypted.bin \
--output text \
--query Plaintext | base64 --decode > decrypted.txt
# Generate data key (for client-side encryption)
aws kms generate-data-key \
--key-id alias/s3-app-key \
--key-spec AES_256
# Enable automatic key rotation (annual)
aws kms enable-key-rotation \
--key-id 1234abcd-12ab-34cd-56ef-1234567890ab

# ── SSM Parameter Store & Secrets Manager ──
# SSM Parameter Store
# Create a parameter (String type)
aws ssm put-parameter \
--name "/my-app/database/host" \
--value "my-db.cluster-abc123.us-east-1.rds.amazonaws.com" \
--type String \
--description "RDS endpoint"
# Create SecureString (encrypted with KMS)
aws ssm put-parameter \
--name "/my-app/database/password" \
--value "MyStr0ngP@ssw0rd!" \
--type SecureString \
--description "RDS master password"
# Get parameter (SecureString auto-decrypts with IAM KMS permissions)
aws ssm get-parameter --name "/my-app/database/host"
aws ssm get-parameter --name "/my-app/database/password" --with-decryption
# List parameters under a path
aws ssm get-parameters-by-path --path "/my-app/" --recursive
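A common pattern is to load everything under a path into a flat config dict at application startup. A sketch against the get-parameters-by-path response shape (the sample values are the hypothetical ones used above):

```python
# Shape of `aws ssm get-parameters-by-path` output (abbreviated sample)
response = {
    "Parameters": [
        {"Name": "/my-app/database/host", "Value": "my-db.example.com"},
        {"Name": "/my-app/database/password", "Value": "s3cret"},
    ]
}

def to_config(resp, prefix="/my-app/"):
    """Strip the path prefix and build a flat config dict."""
    return {
        p["Name"][len(prefix):]: p["Value"]
        for p in resp["Parameters"]
        if p["Name"].startswith(prefix)
    }

config = to_config(response)
print(config)  # {'database/host': 'my-db.example.com', 'database/password': 's3cret'}
```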
# ── Secrets Manager ──
# Create a secret
aws secretsmanager create-secret \
--name my-app/db-credentials \
--description "Database credentials" \
--secret-string '{"username":"admin","password":"MyStr0ngP@ssw0rd!"}'
# Retrieve secret value
aws secretsmanager get-secret-value --secret-id my-app/db-credentials
# Update secret value
aws secretsmanager update-secret \
--secret-id my-app/db-credentials \
--secret-string '{"username":"admin","password":"NewP@ssw0rd!"}'
# Enable automatic rotation (with Lambda)
aws secretsmanager rotate-secret \
--secret-id my-app/db-credentials \
--rotation-lambda-arn arn:aws:lambda:us-east-1:123456789012:function:rotate-db-secret \
--rotation-rules AutomaticallyAfterDays=30

# ── AWS CLI Configuration ──
# Configure named profiles
aws configure --profile production
# AWS Access Key ID: AKIA...
# AWS Secret Access Key: ...
# Default region: us-east-1
# Default output: json
aws configure --profile staging
aws configure --profile dev
# Use profiles
aws s3 ls --profile production
export AWS_PROFILE=production
# Set MFA for CLI sessions
# Step 1: Get temporary credentials
aws sts get-session-token \
--serial-number arn:aws:iam::123456789012:mfa/alice \
--token-code 123456 \
--duration-seconds 43200
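Step 1 returns JSON with a Credentials object; a hypothetical sketch of extracting it into the step-2 export lines (the values below are made up, not real credentials):

```python
import json

# Mimics the shape of `aws sts get-session-token` output; values are fake.
response = json.loads("""
{"Credentials": {"AccessKeyId": "ASIAEXAMPLE",
                 "SecretAccessKey": "secret",
                 "SessionToken": "token",
                 "Expiration": "2024-01-01T12:00:00Z"}}
""")
creds = response["Credentials"]
for env, key in [("AWS_ACCESS_KEY_ID", "AccessKeyId"),
                 ("AWS_SECRET_ACCESS_KEY", "SecretAccessKey"),
                 ("AWS_SESSION_TOKEN", "SessionToken")]:
    print(f"export {env}={creds[key]}")
```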
# Step 2: Export the temporary credentials
export AWS_ACCESS_KEY_ID=ASIA...
export AWS_SECRET_ACCESS_KEY=...
export AWS_SESSION_TOKEN=FQoGZXIv...
# ── CloudFormation: VPC + EC2 + S3 + IAM Role ──
AWSTemplateFormatVersion: '2010-09-09'
Description: Production web application stack
Parameters:
Environment:
Type: String
Default: production
AllowedValues: [production, staging, development]
Description: Deployment environment
InstanceType:
Type: String
Default: t3.small
AllowedValues: [t3.micro, t3.small, t3.medium, m5.large]
Description: EC2 instance type
VpcCidr:
Type: String
Default: 10.0.0.0/16
Description: VPC CIDR block
KeyName:
Type: AWS::EC2::KeyPair::KeyName
Description: EC2 key pair name for SSH access
Mappings:
RegionMap:
us-east-1:
AMI: ami-0c55b159cbfafe1f0
us-west-2:
AMI: ami-0b898040803850657
eu-west-1:
AMI: ami-047bb4163c5061489
Conditions:
IsProduction: !Equals [!Ref Environment, production]
IsStaging: !Equals [!Ref Environment, staging]
Resources:
# ── VPC & Networking ──
VPC:
Type: AWS::EC2::VPC
Properties:
CidrBlock: !Ref VpcCidr
EnableDnsSupport: true
EnableDnsHostnames: true
Tags:
- Key: Name
Value: !Sub '${AWS::StackName}-vpc'
InternetGateway:
Type: AWS::EC2::InternetGateway
Properties:
Tags:
- Key: Name
Value: !Sub '${AWS::StackName}-igw'
AttachGateway:
Type: AWS::EC2::VPCGatewayAttachment
Properties:
VpcId: !Ref VPC
InternetGatewayId: !Ref InternetGateway
PublicSubnet:
Type: AWS::EC2::Subnet
Properties:
VpcId: !Ref VPC
CidrBlock: 10.0.1.0/24
AvailabilityZone: !Select [0, !GetAZs '']
MapPublicIpOnLaunch: true
Tags:
- Key: Name
Value: !Sub '${AWS::StackName}-public-subnet'
PrivateSubnet:
Type: AWS::EC2::Subnet
Properties:
VpcId: !Ref VPC
CidrBlock: 10.0.10.0/24
AvailabilityZone: !Select [0, !GetAZs '']
Tags:
- Key: Name
Value: !Sub '${AWS::StackName}-private-subnet'
PublicRouteTable:
Type: AWS::EC2::RouteTable
Properties:
VpcId: !Ref VPC
PublicRoute:
Type: AWS::EC2::Route
DependsOn: AttachGateway
Properties:
RouteTableId: !Ref PublicRouteTable
DestinationCidrBlock: 0.0.0.0/0
GatewayId: !Ref InternetGateway
SubnetRouteTableAssoc:
Type: AWS::EC2::SubnetRouteTableAssociation
Properties:
SubnetId: !Ref PublicSubnet
      RouteTableId: !Ref PublicRouteTable
  # ── IAM Role for EC2 ──
EC2Role:
Type: AWS::IAM::Role
Properties:
AssumeRolePolicyDocument:
Version: '2012-10-17'
Statement:
- Effect: Allow
Principal:
Service: ec2.amazonaws.com
Action: sts:AssumeRole
ManagedPolicyArns:
- arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess
- arn:aws:iam::aws:policy/CloudWatchLogsFullAccess
Tags:
- Key: Name
Value: !Sub '${AWS::StackName}-ec2-role'
EC2InstanceProfile:
Type: AWS::IAM::InstanceProfile
Properties:
Roles:
- !Ref EC2Role
# ── Security Group ──
WebSecurityGroup:
Type: AWS::EC2::SecurityGroup
Properties:
GroupDescription: Web server security group
VpcId: !Ref VPC
SecurityGroupIngress:
- IpProtocol: tcp
FromPort: 22
ToPort: 22
CidrIp: !If [IsProduction, '10.0.0.0/8', '0.0.0.0/0']
- IpProtocol: tcp
FromPort: 443
ToPort: 443
CidrIp: 0.0.0.0/0
Tags:
- Key: Name
Value: !Sub '${AWS::StackName}-web-sg'
# ── EC2 Instance ──
WebServer:
Type: AWS::EC2::Instance
Properties:
InstanceType: !Ref InstanceType
KeyName: !Ref KeyName
ImageId: !FindInMap [RegionMap, !Ref 'AWS::Region', AMI]
SubnetId: !Ref PublicSubnet
SecurityGroupIds:
- !Ref WebSecurityGroup
IamInstanceProfile: !Ref EC2InstanceProfile
UserData:
Fn::Base64: !Sub |
#!/bin/bash
set -eux
yum update -y
yum install -y httpd
echo "<h1>Hello from ${Environment}</h1>" > /var/www/html/index.html
systemctl start httpd
systemctl enable httpd
Tags:
- Key: Name
Value: !Sub '${AWS::StackName}-web-server'
- Key: Environment
Value: !Ref Environment
# ── S3 Bucket ──
AppBucket:
Type: AWS::S3::Bucket
Properties:
BucketName: !Sub '${AWS::StackName}-app-bucket-${AWS::AccountId}'
VersioningConfiguration:
Status: Enabled
PublicAccessBlockConfiguration:
BlockPublicAcls: true
BlockPublicPolicy: true
IgnorePublicAcls: true
RestrictPublicBuckets: true
BucketEncryption:
ServerSideEncryptionConfiguration:
- ServerSideEncryptionByDefault:
SSEAlgorithm: AES256
Outputs:
InstanceId:
Description: EC2 instance ID
Value: !Ref WebServer
Export:
Name: !Sub '${AWS::StackName}-instance-id'
WebsiteUrl:
Description: Public IP of the web server
Value: !Sub 'http://${WebServer.PublicIp}'
Export:
Name: !Sub '${AWS::StackName}-website-url'
BucketName:
Description: S3 bucket name
    Value: !Ref AppBucket
# ── CloudFormation Stack Management ──
# Create stack
aws cloudformation create-stack \
--stack-name my-app-production \
--template-body file://template.yaml \
--parameters \
ParameterKey=Environment,ParameterValue=production \
ParameterKey=InstanceType,ParameterValue=t3.small \
ParameterKey=KeyName,ParameterValue=my-key \
--capabilities CAPABILITY_NAMED_IAM \
--on-failure ROLLBACK \
--tags Key=Project,Value=my-app
# Create stack with change set (review before deploying)
aws cloudformation create-change-set \
--stack-name my-app-production \
--change-set-name my-update-$(date +%Y%m%d) \
--template-body file://template.yaml \
--capabilities CAPABILITY_NAMED_IAM
aws cloudformation describe-change-set \
--stack-name my-app-production \
--change-set-name my-update-$(date +%Y%m%d)
aws cloudformation execute-change-set \
--stack-name my-app-production \
--change-set-name my-update-$(date +%Y%m%d)
# Update existing stack
aws cloudformation update-stack \
--stack-name my-app-production \
--template-body file://template.yaml \
--capabilities CAPABILITY_NAMED_IAM
# List stacks
aws cloudformation list-stacks --stack-status-filter CREATE_COMPLETE UPDATE_COMPLETE
# Describe stack events (for troubleshooting)
aws cloudformation describe-stack-events \
--stack-name my-app-production \
--max-items 20
# Delete stack
aws cloudformation delete-stack --stack-name my-app-production
# Monitor stack creation
aws cloudformation wait stack-create-complete --stack-name my-app-production
| Function | Description | Example |
|---|---|---|
| !Ref | Return value of parameter/resource | !Ref InstanceType |
| !Sub | String substitution | !Sub "arn:aws:s3:::${Bucket}" |
| !Join | Join values with delimiter | !Join ['.', ['www', !Ref Domain]] |
| !Select | Select from list | !Select [0, !GetAZs ''] |
| !GetAtt | Resource attribute | !GetAtt WebServer.PublicIp |
| !FindInMap | Map lookup | !FindInMap [RegionMap, !Ref 'AWS::Region', AMI] |
| !If | Condition evaluation | !If [IsProd, ValA, ValB] |
| !GetAZs | List AZs in region | !GetAZs 'us-east-1' |
| !Base64 | Base64 encode | Fn::Base64: UserData |
| !ImportValue | Cross-stack reference | !ImportValue SharedVpcId |
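As a rough Python illustration of !Sub's semantics (not CloudFormation itself): each ${Name} reference is looked up in a mapping, which is also how pseudo parameters like AWS::AccountId resolve:

```python
import re

# Replace ${Name} references from a mapping, roughly like Fn::Sub does.
def fn_sub(template, values):
    return re.sub(r"\$\{([^}]+)\}", lambda m: values[m.group(1)], template)

arn = fn_sub("arn:aws:s3:::${AWS::StackName}-app-bucket-${AWS::AccountId}",
             {"AWS::StackName": "my-app", "AWS::AccountId": "123456789012"})
print(arn)  # arn:aws:s3:::my-app-app-bucket-123456789012
```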
# SAM template (extends CloudFormation)
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Resources:
MyFunction:
Type: AWS::Serverless::Function
Properties:
Handler: index.handler
Runtime: nodejs20.x
CodeUri: ./src
MemorySize: 256
Timeout: 30
Environment:
Variables:
TABLE_NAME: !Ref MyTable
Events:
ApiEvent:
Type: Api
Properties:
Path: /users
Method: GET
Policies:
- DynamoDBCrudPolicy:
TableName: !Ref MyTable
MyTable:
Type: AWS::Serverless::SimpleTable
Properties:
PrimaryKey:
Name: userId
        Type: String
# ── CDK Basics (TypeScript) ──
# The CDK uses programming languages to define infrastructure
# Install CDK
npm install -g aws-cdk
# Initialize a CDK project
cdk init app --language typescript
# Synthesize CloudFormation template
cdk synth
# Deploy stack
cdk deploy --require-approval never
# Diff changes
cdk diff
# Destroy stack
cdk destroy
Use --capabilities CAPABILITY_NAMED_IAM when deploying CloudFormation stacks that create IAM roles or policies. This confirms you are aware that the template can create/modify IAM resources. For production, use change sets to review changes before applying.
Use Parameters for deploy-time inputs and Mappings for region-specific lookups. Use Fn::Sub and !Ref to build ARNs and resource names dynamically.
# ── Lambda + API Gateway (REST API) ──
# Create REST API
API_ID=$(aws apigateway create-rest-api \
--name 'my-serverless-api' \
--description 'Serverless API with Lambda integration' \
--query 'id' --output text)
# Get root resource ID
ROOT_ID=$(aws apigateway get-resources \
--rest-api-id $API_ID \
--query "items[?path=='/'].id | [0]" --output text)
# Create /users resource
USERS_ID=$(aws apigateway create-resource \
--rest-api-id $API_ID \
--parent-id $ROOT_ID \
--path-part users \
--query 'id' --output text)
# Create GET method
aws apigateway put-method \
--rest-api-id $API_ID \
--resource-id $USERS_ID \
--http-method GET \
--authorization-type NONE
# Create Lambda integration
aws apigateway put-integration \
--rest-api-id $API_ID \
--resource-id $USERS_ID \
--http-method GET \
--type AWS_PROXY \
--integration-http-method POST \
--uri arn:aws:apigateway:us-east-1:lambda:path/2015-03-31/functions/arn:aws:lambda:us-east-1:123456789012:function:getUsers/invocations
# Grant API Gateway permission to invoke Lambda
aws lambda add-permission \
--function-name getUsers \
--statement-id apigateway-get-users \
--action lambda:InvokeFunction \
--principal apigateway.amazonaws.com \
--source-arn arn:aws:execute-api:us-east-1:123456789012:$API_ID/*/GET/users
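The --source-arn that scopes this permission follows a fixed shape; a small sketch composing it from its parts (values hypothetical):

```python
# execute-api ARNs: arn:aws:execute-api:{region}:{account}:{api-id}/{stage}/{method}{path}
def execute_api_arn(region, account, api_id, stage, method, path):
    return f"arn:aws:execute-api:{region}:{account}:{api_id}/{stage}/{method}{path}"

# '*' for stage grants invocation from any deployed stage of this API
arn = execute_api_arn("us-east-1", "123456789012", "a1b2c3", "*", "GET", "/users")
print(arn)  # arn:aws:execute-api:us-east-1:123456789012:a1b2c3/*/GET/users
```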
# Deploy API
aws apigateway create-deployment \
--rest-api-id $API_ID \
--stage-name prod
# Set up CORS
aws apigateway put-method \
--rest-api-id $API_ID \
--resource-id $USERS_ID \
--http-method OPTIONS \
--authorization-type NONE
# (CORS also needs a MOCK integration plus method/integration responses
#  that return the Access-Control-Allow-* headers)
# ── S3 Event Triggers for Lambda ──
# Add S3 event notification to Lambda
aws lambda create-function \
--function-name process-uploads \
--runtime python3.12 \
--handler index.handler \
--role arn:aws:iam::123456789012:role/lambda-execution-role \
--zip-file fileb://process-uploads.zip
# Grant S3 permission to invoke Lambda
aws lambda add-permission \
--function-name process-uploads \
--statement-id s3-trigger \
--action lambda:InvokeFunction \
--principal s3.amazonaws.com \
--source-arn arn:aws:s3:::my-upload-bucket \
--source-account 123456789012
# Configure S3 to send events to Lambda
aws s3api put-bucket-notification-configuration \
--bucket my-upload-bucket \
--notification-configuration '{
"LambdaFunctionConfigurations": [
{
"LambdaFunctionArn": "arn:aws:lambda:us-east-1:123456789012:function:process-uploads",
"Events": ["s3:ObjectCreated:*"],
"Filter": {
"Key": {
"FilterRules": [
{"Name": "prefix", "Value": "uploads/"},
{"Name": "suffix", "Value": ".jpg"}
]
}
}
}
]
}'
# ── Lambda: S3 Event Handler (Python) ──
import io
import urllib.parse

import boto3
from PIL import Image  # Pillow is not in the base runtime; ship it in the package or a layer
s3_client = boto3.client('s3')
dynamodb = boto3.resource('dynamodb')
table = dynamodb.Table('images')
def handler(event, context):
"""Process uploaded images: resize, save thumbnails, update DB"""
for record in event['Records']:
bucket = record['s3']['bucket']['name']
key = urllib.parse.unquote_plus(record['s3']['object']['key'])
try:
# Get image from S3
response = s3_client.get_object(Bucket=bucket, Key=key)
image_data = response['Body'].read()
            image = Image.open(io.BytesIO(image_data)).convert('RGB')  # JPEG cannot store alpha
# Create thumbnail
image.thumbnail((300, 300))
thumb_buffer = io.BytesIO()
image.save(thumb_buffer, format='JPEG', quality=85)
thumb_buffer.seek(0)
# Save thumbnail
thumb_key = f"thumbnails/{key.split('/')[-1]}"
s3_client.put_object(
Bucket=bucket,
Key=thumb_key,
Body=thumb_buffer.getvalue(),
ContentType='image/jpeg'
)
# Update DynamoDB
table.put_item(Item={
'imageId': key.split('/')[-1],
'originalKey': key,
'thumbnailKey': thumb_key,
'width': image.width,
'height': image.height,
'size': len(image_data),
                'requestId': context.aws_request_id
})
print(f"Processed: {key} -> {thumb_key}")
except Exception as e:
print(f"Error processing {key}: {str(e)}")
            raise  # re-raise so Lambda records the failure and retries per the event source
# ── SQS (Simple Queue Service) ──
# Create standard queue
aws sqs create-queue \
--queue-name my-app-queue \
--attributes \
DelaySeconds=0 \
MaximumMessageSize=262144 \
MessageRetentionPeriod=1209600 \
ReceiveMessageWaitTimeSeconds=20 \
VisibilityTimeout=60
# Create FIFO queue (exactly-once processing)
aws sqs create-queue \
--queue-name my-app-queue.fifo \
--attributes \
FifoQueue=true \
DeduplicationScope=messageGroup \
FifoThroughputLimit=perMessageGroupId \
ContentBasedDeduplication=true \
DelaySeconds=0 \
MessageRetentionPeriod=1209600
# Send message
aws sqs send-message \
--queue-url https://sqs.us-east-1.amazonaws.com/123456789012/my-app-queue \
--message-body '{"orderId": "12345", "action": "process"}' \
--delay-seconds 0
# Send FIFO message (requires MessageGroupId)
aws sqs send-message \
--queue-url https://sqs.us-east-1.amazonaws.com/123456789012/my-app-queue.fifo \
--message-body '{"orderId": "12345", "action": "process"}' \
--message-group-id order-12345 \
--message-deduplication-id order-12345-attempt-1
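With ContentBasedDeduplication=true, SQS derives the deduplication id from a SHA-256 hash of the message body; the same idea can generate explicit ids, as in this sketch:

```python
import hashlib

# Deterministic dedup id: identical bodies yield identical ids, so duplicates
# sent within the 5-minute deduplication interval are dropped by SQS.
def dedup_id(body: str) -> str:
    return hashlib.sha256(body.encode()).hexdigest()

a = dedup_id('{"orderId": "12345", "action": "process"}')
b = dedup_id('{"orderId": "12345", "action": "process"}')
print(a == b)  # True — same body, same id
```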
# Receive message (long polling)
aws sqs receive-message \
--queue-url https://sqs.us-east-1.amazonaws.com/123456789012/my-app-queue \
--max-number-of-messages 10 \
--wait-time-seconds 20 \
--visibility-timeout 60
# Delete message after processing
aws sqs delete-message \
--queue-url https://sqs.us-east-1.amazonaws.com/123456789012/my-app-queue \
--receipt-handle AQEB...
# ── SNS (Simple Notification Service) ──
# Create topic
TOPIC_ARN=$(aws sns create-topic \
--name my-app-notifications \
--query 'TopicArn' --output text)
# Subscribe email endpoint
aws sns subscribe \
--topic-arn $TOPIC_ARN \
--protocol email \
--notification-endpoint admin@example.com
# Subscribe Lambda endpoint
aws sns subscribe \
--topic-arn $TOPIC_ARN \
--protocol lambda \
--notification-endpoint arn:aws:lambda:us-east-1:123456789012:function:notification-handler
# Subscribe SQS queue (endpoint is the queue ARN, not the URL; the queue
# policy must also allow sns.amazonaws.com to SendMessage)
aws sns subscribe \
--topic-arn $TOPIC_ARN \
--protocol sqs \
--notification-endpoint arn:aws:sqs:us-east-1:123456789012:my-app-queue
# Publish message
aws sns publish \
--topic-arn $TOPIC_ARN \
--subject "Order Processing Complete" \
--message '{"orderId": "12345", "status": "completed", "total": 99.99}' \
--message-attributes \
eventType={DataType=String,StringValue=order.completed}
# Create FIFO topic (for exactly-once fanout)
aws sns create-topic \
--name my-app-events.fifo \
--attributes FifoTopic=true,ContentBasedDeduplication=true
# ── EventBridge (CloudWatch Events) ──
# Create event bus (custom)
aws events create-event-bus \
--name my-app-events
# Create rule (schedule — cron expression)
aws events put-rule \
--name daily-report \
--schedule-expression "cron(0 9 * * ? *)" \
--description "Generate daily report at 9 AM UTC"
# Create rule (event pattern)
aws events put-rule \
--name order-processor \
--event-pattern '{
"source": ["my.app.orders"],
"detail-type": ["OrderCreated"],
"detail": {
"amount": [{"numeric": [">", 100]}]
}
}'
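A much-simplified sketch of how EventBridge evaluates that pattern (real matching supports many more operators than shown here):

```python
# Each pattern field lists accepted values; numeric conditions compare
# against the event detail. Only '>' and '>=' are sketched.
def matches(event, pattern):
    if event["source"] not in pattern["source"]:
        return False
    if event["detail-type"] not in pattern["detail-type"]:
        return False
    op, threshold = pattern["detail"]["amount"][0]["numeric"]
    amount = event["detail"]["amount"]
    return amount > threshold if op == ">" else amount >= threshold

pattern = {"source": ["my.app.orders"], "detail-type": ["OrderCreated"],
           "detail": {"amount": [{"numeric": [">", 100]}]}}
event = {"source": "my.app.orders", "detail-type": "OrderCreated",
         "detail": {"amount": 250}}
print(matches(event, pattern))  # True
```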
# Add Lambda target to rule
aws events put-targets \
--rule daily-report \
--targets '[
{
"Id": "1",
"Arn": "arn:aws:lambda:us-east-1:123456789012:function:generate-report",
"InputTransformer": {
"InputPathsMap": {"time": "$.time"},
"InputTemplate": "{\"reportDate\": \"<time>\"}"
}
}
]'
# Grant EventBridge permission to invoke Lambda
aws lambda add-permission \
--function-name generate-report \
--statement-id eventbridge-daily-report \
--action lambda:InvokeFunction \
--principal events.amazonaws.com \
--source-arn arn:aws:events:us-east-1:123456789012:rule/daily-report
# ── Step Functions State Machine (ASL) ──
{
"Comment": "Order processing state machine",
"StartAt": "ValidateOrder",
"States": {
"ValidateOrder": {
"Type": "Task",
"Resource": "arn:aws:lambda:us-east-1:123456789012:function:validate-order",
"Next": "CheckValidation"
},
"CheckValidation": {
"Type": "Choice",
"Choices": [
{
"Variable": "$.valid",
"BooleanEquals": true,
"Next": "ProcessPayment"
},
{
"Variable": "$.valid",
"BooleanEquals": false,
"Next": "NotifyFailure"
}
]
},
"ProcessPayment": {
"Type": "Task",
"Resource": "arn:aws:lambda:us-east-1:123456789012:function:process-payment",
"Retry": [
{
"ErrorEquals": ["States.Timeout", "States.TaskFailed"],
"IntervalSeconds": 3,
"MaxAttempts": 3,
"BackoffRate": 2.0
}
],
"Next": "FulfillOrder"
},
"FulfillOrder": {
"Type": "Task",
"Resource": "arn:aws:lambda:us-east-1:123456789012:function:fulfill-order",
"Next": "NotifySuccess"
},
    "NotifySuccess": {
      "Type": "Task",
      "Resource": "arn:aws:states:::sns:publish",
      "Parameters": {
        "TopicArn": "arn:aws:sns:us-east-1:123456789012:my-app-notifications",
        "Message.$": "$",
        "Subject": "Order Fulfilled Successfully"
      },
      "End": true
    },
    "NotifyFailure": {
      "Type": "Task",
      "Resource": "arn:aws:states:::sns:publish",
      "Parameters": {
        "TopicArn": "arn:aws:sns:us-east-1:123456789012:my-app-notifications",
        "Message.$": "$",
        "Subject": "Order Validation Failed"
      },
      "End": true
    }
}
}
| Feature | Standard | FIFO |
|---|---|---|
| Ordering | Best effort | Strict first-in-first-out |
| Delivery | At-least-once | Exactly-once processing |
| Throughput | Nearly unlimited | 300 msg/s (3,000 with batching; more in high-throughput mode) |
| Message groups | N/A | Yes (parallel processing per group) |
| Name suffix | None | .fifo required |
| Use case | Decoupling, load leveling | Order processing, billing |
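The Retry block in the Step Functions state machine above waits IntervalSeconds on the first retry, then multiplies the wait by BackoffRate each attempt; a quick check of the resulting waits:

```python
# Wait before retry n (0-indexed) is IntervalSeconds * BackoffRate**n.
def retry_waits(interval, backoff, max_attempts):
    return [interval * backoff ** n for n in range(max_attempts)]

print(retry_waits(3, 2.0, 3))  # [3.0, 6.0, 12.0]
```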
| Pattern | Services | Use Case |
|---|---|---|
| API Backend | API GW + Lambda + DynamoDB | REST APIs, microservices |
| Event Processing | S3 + Lambda + SNS/SQS | File processing, ETL |
| Webhook Handler | API GW + SQS + Lambda | Async processing, retries |
| Orchestration | Step Functions + Lambda | Multi-step workflows |
| Real-time | DynamoDB Streams + Lambda | CQRS, materialized views |
| Scheduled Jobs | EventBridge + Lambda | Reports, cleanup, backups |
| Fan-out | SNS + Lambda/SQS | Multi-destination notifications |
Networking / Security
Security Groups operate at the instance (ENI) level and are stateful — return traffic is automatically allowed. They support only allow rules and evaluate all rules together.
NACLs operate at the subnet level and are stateless — you must explicitly allow both inbound and return traffic. They support allow and deny rules evaluated in numbered order (lowest number wins).
Best practice: Use Security Groups for instance-level control and NACLs as an extra defense layer for subnet-level blocking (e.g., blocking known bad IPs).
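A toy sketch of NACL rule evaluation (rule numbers, ports, and actions here are hypothetical): the lowest-numbered matching rule wins, with an implicit final deny:

```python
# Evaluate NACL-style rules: ascending rule-number order, first match wins.
def evaluate(rules, port):
    for rule in sorted(rules, key=lambda r: r["number"]):
        if rule["port"] == port or rule["port"] == "*":
            return rule["action"]
    return "DENY"  # implicit deny (the final * rule)

rules = [
    {"number": 100, "port": 443, "action": "ALLOW"},
    {"number": 200, "port": "*", "action": "DENY"},
]
print(evaluate(rules, 443))  # ALLOW (rule 100 matches first)
print(evaluate(rules, 22))   # DENY  (falls through to rule 200)
```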
Database / Architecture
Use DynamoDB when you need single-digit millisecond latency at any scale, simple key-value access patterns, predictable performance, and automatic scaling. Ideal for session management, shopping carts, real-time bidding, and gaming leaderboards.
Use RDS/Aurora when you need complex queries (JOINs, aggregations), ACID transactions across multiple tables, relational data modeling, or existing SQL expertise. Aurora delivers up to 5x the throughput of standard MySQL and 3x that of standard PostgreSQL, with storage that auto-scales to 128 TiB.
Security / Compliance
AWS is responsible for "Security of the Cloud" — the physical infrastructure, networking hardware, hypervisor, and managed service security (e.g., DynamoDB encryption at rest).
The customer is responsible for "Security in the Cloud" — configuring security groups, managing IAM policies, encrypting data, patching EC2 instances, and securing application code.
Key point: For managed services (Lambda, DynamoDB, S3), AWS manages more of the stack. For IaaS (EC2, EBS), the customer manages more, including OS patches and runtime security.
Networking / VPC
VPC peering connects two VPCs so their resources can communicate using private IP addresses, as if they were in the same network. Works across accounts and regions.
Limitations: (1) Peering is not transitive — if A peers with B and B peers with C, A cannot communicate with C. (2) Overlapping CIDR blocks are not supported. (3) You must update route tables in both VPCs to route through the peering connection. (4) DNS resolution of private IPs across peered VPCs must be enabled on the connection (the AllowDnsResolutionFromRemoteVpc option, set via modify-vpc-peering-connection-options).
Serverless / Performance
Cold starts occur when Lambda must provision a new execution environment. Causes include: first invocation after deployment, infrequent traffic, and scaling up from zero.
Mitigation strategies: provisioned concurrency for latency-sensitive paths; smaller deployment packages with fewer dependencies; initializing SDK clients and connections at module scope so warm invocations reuse them; higher memory settings (which also allocate more CPU and shorten init); SnapStart for Java runtimes.
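The module-scope initialization pattern can be sketched without AWS at all; the counter below stands in for an expensive boto3 client and shows it is built once per execution environment:

```python
# Fake "client factory" standing in for boto3.client(); counts constructions.
init_count = 0

def make_client():
    global init_count
    init_count += 1
    return object()

client = make_client()  # runs once, at cold start (module import)

def handler(event, context):
    return id(client)  # warm invocations reuse the module-level client

handler({}, None)
handler({}, None)
print(init_count)  # 1 — client built once despite two invocations
```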
Storage / S3
Since December 2020, S3 provides strong read-after-write consistency for all operations (PUT, DELETE, LIST) in all AWS Regions. This means a GET after a successful PUT returns the new object, a GET after an overwrite or DELETE reflects the change immediately, and LIST results include newly written objects right away.
This was a common interview topic when S3 had eventual consistency for overwrite-PUTs and DELETEs. Now all operations are strongly consistent — this is an important update to know.
Storage / Cost Optimization
S3 Standard: Frequently accessed data (websites, analytics). 99.99% availability, lowest latency.
S3 Standard-IA: Infrequently accessed but needs fast retrieval when accessed. 30-day minimum storage duration. Higher per-GB cost but lower retrieval cost.
S3 Glacier Instant Retrieval: Archive data accessed once per quarter. Millisecond retrieval.
S3 Glacier Flexible Retrieval: Long-term archive. Minutes to hours retrieval. 90-day minimum.
S3 Glacier Deep Archive: Longest-term, cheapest storage. 12-48 hour retrieval. 180-day minimum.
Use lifecycle rules to automatically transition objects between classes based on age or access patterns. Use S3 Intelligent-Tiering if access patterns are unpredictable.
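The effect of such a lifecycle rule can be sketched as a function of object age (the day thresholds below are hypothetical, not defaults):

```python
# Map object age in days to the storage class a typical lifecycle rule
# would have transitioned it to by now.
def storage_class(age_days):
    if age_days < 30:
        return "STANDARD"
    if age_days < 90:
        return "STANDARD_IA"
    if age_days < 365:
        return "GLACIER"
    return "DEEP_ARCHIVE"

print([storage_class(d) for d in (0, 45, 180, 400)])
# ['STANDARD', 'STANDARD_IA', 'GLACIER', 'DEEP_ARCHIVE']
```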
Architecture / Best Practices
The Well-Architected Framework consists of six pillars that help build secure, high-performing, resilient, and efficient infrastructure: Operational Excellence, Security, Reliability, Performance Efficiency, Cost Optimization, and Sustainability.
Use AWS Well-Architected Tool to review workloads against these pillars and identify improvements.
Database / High Availability
Multi-AZ is a high-availability feature. It maintains a synchronous standby replica in a different AZ. The standby is not accessible for read queries. During a primary failure, RDS automatically fails over to the standby with minimal downtime (typically 30-120 seconds). You do not manage the failover.
Read Replicas are for scaling read capacity. They are asynchronous, can be in the same or a different AZ/Region, and are accessible for read queries. You must manage connection routing in your application. They can be promoted to standalone databases. RDS supports up to 15 read replicas per source instance for MySQL, MariaDB, and PostgreSQL.
Use together: Multi-AZ for HA/failover + Read Replicas for read scaling. They serve different purposes and are complementary.
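Application-side routing for this combination can be sketched as follows; endpoint names are hypothetical:

```python
from itertools import cycle

# Writes go to the Multi-AZ primary endpoint; reads round-robin across
# replica endpoints.
PRIMARY = "my-db.cluster-abc.us-east-1.rds.amazonaws.com"
REPLICAS = ["replica-1.abc.us-east-1.rds.amazonaws.com",
            "replica-2.abc.us-east-1.rds.amazonaws.com"]
_replica_pool = cycle(REPLICAS)

def endpoint_for(query: str) -> str:
    is_read = query.lstrip().upper().startswith("SELECT")
    return next(_replica_pool) if is_read else PRIMARY

print(endpoint_for("SELECT * FROM users"))    # a replica endpoint
print(endpoint_for("INSERT INTO users ..."))  # the primary endpoint
```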
Security / IAM
Least privilege means granting only the minimum permissions required for a user, role, or service to perform its intended function.
Implementation strategies: start from no permissions and add only what the workload needs; grant specific actions (s3:GetObject) instead of wildcards (s3:*); scope Resource to exact ARNs rather than *; review findings from IAM Access Analyzer to remove unused access; prefer short-lived role credentials over long-term access keys.
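A policy shaped by these rules might look like this (bucket name and prefix hypothetical) — one action on one prefix rather than s3:* on *:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ReadAppUploadsOnly",
      "Effect": "Allow",
      "Action": ["s3:GetObject"],
      "Resource": "arn:aws:s3:::my-app-bucket/uploads/*"
    }
  ]
}
```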