Data Structures, Pub/Sub, Transactions, Lua, Clustering, Caching — a hands-on reference for Redis's in-memory data model.
Redis is an in-memory data structure store used as a database, cache, message broker, and streaming engine. It supports strings, lists, sets, sorted sets, hashes, streams, bitmaps, HyperLogLogs, and geospatial indexes.
| Data Type | Command Examples | Time Complexity | Use Cases |
|---|---|---|---|
| Strings | SET, GET, INCR, APPEND | O(1) | Caching, counters, locks, rate limiting |
| Lists | LPUSH, RPUSH, LRANGE, BLPOP | O(1) push/pop; O(N) range | Queues, stacks, recent items |
| Sets | SADD, SREM, SINTER, SUNION | O(1) add/remove; O(N) ops | Tags, unique items, relationships |
| Sorted Sets | ZADD, ZRANGE, ZRANGEBYSCORE | O(log N) add; O(log N + M) range | Leaderboards, rankings, rate limits |
| Hashes | HSET, HGET, HGETALL, HMSET | O(1) field ops | Objects, sessions, user profiles |
| Streams | XADD, XREAD, XREADGROUP | O(log N) add; O(N) read | Event sourcing, message queues |
| Bitmaps | SETBIT, GETBIT, BITCOUNT | O(1) | Feature flags, presence, analytics |
| HyperLogLog | PFADD, PFCOUNT, PFMERGE | O(1) add; O(1) count | Cardinality estimation (unique views) |
| Geospatial | GEOADD, GEORADIUS, GEOHASH | O(log N) | Location-based services, nearby search |
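Most commands in the table map one-to-one onto redis-py methods. A minimal tour, assuming a local server; the key names (user:*, lb:*, views:page1) are illustrative, not a fixed convention:

```python
def make_key(*parts):
    """Build a colon-delimited key, e.g. make_key('user', 1001, 'name')."""
    return ":".join(str(p) for p in parts)

def demo(r):
    # String: cache a value with a TTL
    r.set(make_key("user", 1001, "name"), "Alice", ex=3600)
    # Sorted set: leaderboard entry
    r.zadd("lb:global", {"alice": 1500})
    # Hash: object with independently updatable fields
    r.hset(make_key("user", 1001), mapping={"email": "alice@example.com"})
    # HyperLogLog: approximate unique counting
    r.pfadd("views:page1", "visitor-1", "visitor-2")
    return r.pfcount("views:page1")

if __name__ == "__main__":
    import redis
    print(demo(redis.Redis(decode_responses=True)))
```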
# Connect to Redis
redis-cli # localhost:6379
redis-cli -h 127.0.0.1 -p 6379 -a password
redis-cli --tls --cert /path/cert.pem # TLS connection
redis-cli -u redis://user:pass@host:6379/0
# Server info
PING # returns PONG
INFO # full server info
INFO server # server section only
INFO memory # memory stats
INFO replication # replication status
INFO clients # connected clients
DBSIZE # total keys in current DB
CLIENT LIST # list connected clients
CLIENT SETNAME myapp # name the connection
MONITOR # real-time command stream
SLOWLOG GET 10 # last 10 slow queries
DEBUG SLEEP 1 # pause server (debug only)
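Raw INFO output (as seen from redis-cli or the wire protocol) is a list of key:value lines under # section headers. A small parser sketch for captured output; note that redis-py's r.info() already returns a parsed dict, so this is only needed for raw text:

```python
def parse_info(raw: str) -> dict:
    """Parse raw INFO output (key:value lines, '#' section headers)."""
    stats = {}
    for line in raw.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and section headers
        key, _, value = line.partition(":")
        stats[key] = value
    return stats

if __name__ == "__main__":
    import redis
    # redis-py parses INFO for you; pass a section name to narrow it
    print(redis.Redis().info("memory").get("used_memory_human"))
```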
# Database operations
SELECT 0 # switch to DB 0 (default)
SELECT 1 # switch to DB 1
FLUSHDB # clear current database
FLUSHALL # clear ALL databases
ECHO "hello" # echo string
TIME # server unix timestamp + microseconds
Use a consistent key naming scheme: object-type:id:field. Example: user:1001:profile, cache:product:5523, session:abc123def. Avoid long keys (>1024 bytes). Always set TTL on cache keys.
# Key naming examples
SET user:1001:name "Alice"
SET user:1001:email "alice@example.com"
SET session:abc123def '{"user_id":1001,"role":"admin"}'
SET cache:api:users:page1 '[...]' EX 300
SET rate:ip:192.168.1.1:20240101 15 EX 86400
SET lock:resource:order:5523 "worker-7" NX EX 30
# Setting expiration
SET key value EX 60 # expire in 60 seconds
SET key value PX 60000 # expire in 60000 ms
SETEX key 60 value # shortcut: SET with EX
PSETEX key 60000 value # shortcut: SET with PX
# Managing TTL
TTL key # remaining seconds (-1=none, -2=not exists)
PTTL key # remaining milliseconds
EXPIRE key 300 # set 300s TTL on existing key
PEXPIRE key 300000 # set 300000ms TTL
EXPIREAT key 1709251200 # expire at Unix timestamp
PERSIST key # remove TTL, make persistent
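TTL's sentinel return values (-1 and -2) are easy to misread; a small helper that names them, sketched with redis-py:

```python
def describe_ttl(ttl: int) -> str:
    """Translate TTL's sentinel return values into words."""
    if ttl == -2:
        return "key does not exist"
    if ttl == -1:
        return "key exists but has no expiry"
    return f"expires in {ttl}s"

if __name__ == "__main__":
    import redis
    r = redis.Redis()
    r.set("tmp", "x", ex=60)
    print(describe_ttl(r.ttl("tmp")))
    print(describe_ttl(r.ttl("no-such-key")))  # key does not exist
```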
# Touch / update without changing value
TOUCH key # updates LRU/LFU info
EXISTS key # 1 if exists, 0 if not
DEL key1 key2 key3 # delete multiple keys
UNLINK key1 key2 # non-blocking delete (Redis 4.0+)
TYPE key # returns data type: string, list, set, etc.
OBJECT ENCODING key # internal encoding (raw, int, ziplist, etc.)
OBJECT IDLETIME key # seconds since last access
Never use KEYS in production! KEYS scans the entire keyspace and blocks the server. Use SCAN for incremental iteration — it's non-blocking and cursor-based.
# DANGEROUS - blocks server, O(N)
KEYS user:* # all keys matching user:*
KEYS *cache* # all keys containing "cache"
# SAFE - incremental scan, non-blocking
SCAN 0 MATCH user:* COUNT 100 # cursor=0, pattern, batch size
# Returns: 1) next_cursor 2) array of keys
# Keep calling with next cursor until cursor returns 0
# Scan example with full iteration
SCAN 0 # first call with cursor 0
SCAN 128 MATCH session:* COUNT 50 # resume from cursor 128
SCAN 256 MATCH cache:* TYPE string COUNT 200
# Other scan commands
HSCAN myhash 0 MATCH field:* COUNT 10 # scan hash fields
SSCAN myset 0 MATCH "pattern*" COUNT 10 # scan set members
ZSCAN myzset 0 MATCH "elem:*" COUNT 10 # scan sorted set members
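The cursor loop above can be wrapped in a generator; redis-py also ships scan_iter, which does the same thing. A sketch of the manual loop:

```python
def scan_all(client, match="*", count=100):
    """Yield all matching keys by following the SCAN cursor until it wraps to 0."""
    cursor = 0
    while True:
        cursor, keys = client.scan(cursor=cursor, match=match, count=count)
        yield from keys
        if cursor == 0:
            break

if __name__ == "__main__":
    import redis
    r = redis.Redis(decode_responses=True)
    for key in scan_all(r, match="user:*"):
        print(key)
    # Equivalent built-in: r.scan_iter(match="user:*", count=100)
```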
# Key renaming and moving
RENAME oldkey newkey # rename (overwrites target)
RENAMENX oldkey newkey # rename only if newkey doesn't exist
MOVE key 1 # move key to database 1
RANDOMKEY # return a random key
# Memory inspection
MEMORY USAGE key # bytes used by key
MEMORY DOCTOR # memory analysis report
MEMORY STATS # memory allocator stats
MEMORY PURGE # ask allocator to release memory
# Memory optimization tips
# 1. Use hashes with ziplist encoding for small objects (configurable)
# 2. Use 32-bit integers when possible (Redis auto-detects)
# 3. Avoid frequent creation/deletion of large keys
# 4. Use proper data structures (bitmap instead of set of 0/1 values)
# Serialization options
SET user:1001 '{"name":"Alice","age":30}' # JSON as string
HSET user:1001 name "Alice" age 30 # Hash (more efficient for updates)
HGETALL user:1001 # retrieve all fields
Strings are the most fundamental Redis data type. A Redis string can hold text, serialized JSON, binary data (images up to 512MB), or integers. When a string holds an integer, Redis supports atomic increment/decrement.
# SET variations
SET key value # basic set
SET key value EX 60 # with TTL
SET key value PX 60000 # with TTL in ms
SET key value NX # only if key does NOT exist
SET key value XX # only if key DOES exist
SET key value KEEPTTL # preserve existing TTL
SET key value GET # set and return old value (Redis 6.2+)
SETEX key 60 value # SET + EX shortcut
SETNX key value # SET if Not eXists (returns 1/0)
PSETEX key 60000 value # SET + PX shortcut
# GET variations
GET key # returns value or nil
GETDEL key # get then delete (Redis 6.2+)
GETEX key EX 60 # get and set new TTL (Redis 6.2+)
# Multiple keys
MSET k1 v1 k2 v2 k3 v3 # set multiple keys (atomic)
MGET k1 k2 k3 # get multiple keys
MSETNX k1 v1 k2 v2 # set multiple only if ALL don't exist
# Append and length
SET msg "Hello"
APPEND msg " World" # "Hello World"
STRLEN msg # 11
# Ranges (0-indexed, inclusive)
SET mykey "Hello, Redis!"
GETRANGE mykey 0 4 # "Hello"
GETRANGE mykey -6 -1 # "Redis!"
GETRANGE mykey 0 -1 # full string
SETRANGE mykey 7 "World" # "Hello, World!" (overwrite from pos 7)
# Get and set atomically
GETSET counter 0 # set new value, return old (deprecated)
# Use SET key value GET instead (Redis 6.2+)
# Atomic increment/decrement
SET page_views 100
INCR page_views # 101
INCRBY page_views 10 # 111
DECR page_views # 110
DECRBY page_views 5 # 105
INCRBYFLOAT temperature 0.5 # floating point increment
# Atomic counter — thread-safe across all clients
INCR user:1001:login_count
INCRBY daily:2024-01-15:orders 1
INCR article:42:views
| Command | Description | Return Value | Notes |
|---|---|---|---|
| SET key val [EX\|PX] [NX\|XX] | Set string value | OK or nil | Redis 6.2+ adds GET, KEEPTTL |
| GET key | Get value | Value or nil | nil if key does not exist |
| GETDEL key | Get and delete | Value or nil | Redis 6.2+ |
| GETEX key [EX sec] | Get and set TTL | Value or nil | Redis 6.2+ |
| MGET k1 k2 ... | Get multiple keys | Array of values | nil for missing keys |
| MSET k1 v1 k2 v2 ... | Set multiple keys | OK | Always succeeds |
| MSETNX k1 v1 ... | Set if none exist | 1 or 0 | All-or-nothing |
| INCR key | Increment by 1 | New value | Error if not integer |
| INCRBY key n | Increment by n | New value | Signed integer |
| INCRBYFLOAT key n | Float increment | New value | Double precision |
| APPEND key val | Append string | New length | Creates key if absent |
| STRLEN key | String length | Integer | 0 if key absent |
| GETRANGE key s e | Substring | Substring | Negative = from end |
| SETRANGE key off val | Overwrite substring | New length | Pads with zeros if needed |
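INCR plus EXPIRE is the classic fixed-window rate limiter. A sketch with redis-py; the key format and limits are illustrative:

```python
import time

def window_key(client_id, now, window=60):
    """Bucket time into fixed windows: rate:<id>:<window-start>."""
    bucket = int(now) - (int(now) % window)
    return f"rate:{client_id}:{bucket}"

def allow_request(client, client_id, limit=100, window=60):
    key = window_key(client_id, time.time(), window)
    pipe = client.pipeline()
    pipe.incr(key)            # atomic count within this window
    pipe.expire(key, window)  # window key cleans itself up
    count, _ = pipe.execute()
    return count <= limit

if __name__ == "__main__":
    import redis
    print(allow_request(redis.Redis(), "ip:192.168.1.1"))
```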
# Bitmaps are strings where each bit can be set/cleared
SETBIT user:1001:days 0 1 # set bit at offset 0 to 1
SETBIT user:1001:days 1 1 # set bit at offset 1 to 1
GETBIT user:1001:days 0 # 1
BITCOUNT user:1001:days # count bits set to 1
BITCOUNT user:1001:days 0 1 # count bits in range [0, 1]
BITPOS user:1001:days 1 # first bit set to 1
# Bitmap operations
BITOP AND dest k1 k2 # bitwise AND
BITOP OR dest k1 k2 # bitwise OR
BITOP XOR dest k1 k2 # bitwise XOR
BITOP NOT dest k1 # bitwise NOT
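A common bitmap use is daily-active-user tracking: one bitmap per day, one bit per user ID. Sketch with redis-py; the dau:* key format is illustrative:

```python
from datetime import date

def dau_key(d: date) -> str:
    """One bitmap per day, e.g. dau:2024-01-15."""
    return f"dau:{d.isoformat()}"

def mark_active(client, user_id: int, d: date):
    client.setbit(dau_key(d), user_id, 1)

def actives_on(client, d: date) -> int:
    return client.bitcount(dau_key(d))

if __name__ == "__main__":
    import redis
    r = redis.Redis()
    today = date.today()
    mark_active(r, 1001, today)
    print(actives_on(r, today))
    # Users active on BOTH of two days: BITOP AND, then BITCOUNT the result
    # r.bitop("AND", "dau:both", dau_key(day_a), dau_key(day_b))
```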
# BITFIELD — multi-bit integer operations (Redis 3.2+)
BITFIELD mykey INCRBY i5 0 1 # increment 5-bit signed int at offset 0
BITFIELD mykey INCRBY u8 16 1 # increment 8-bit unsigned int at offset 16
BITFIELD mykey GET u4 0 # get 4-bit unsigned int at offset 0
BITFIELD mykey SET u8 0 255 # set 8-bit unsigned int at offset 0
# Chain operations with OVERFLOW WRAP|SAT|FAIL
BITFIELD mykey OVERFLOW WRAP INCRBY i5 0 1 INCRBY i5 5 1
# Acquire lock (atomic)
SET lock:order:5523 "worker-7" NX EX 30
# Release lock (Lua script ensures atomicity)
EVAL "if redis.call('get', KEYS[1]) == ARGV[1] then return redis.call('del', KEYS[1]) else return 0 end" 1 lock:order:5523 worker-7
# Allow 100 requests per minute per user
MULTI
INCR rate:user:1001:minute
EXPIRE rate:user:1001:minute 60
EXEC
# Then check: if value > 100, reject
// Cache-aside pattern (application code)
async function getUser(id) {
  const cached = await redis.get(`user:${id}`);
  if (cached) return JSON.parse(cached);
  const user = await db.users.findById(id);
  await redis.setex(`user:${id}`, 3600, JSON.stringify(user));
  return user;
}
# More memory-efficient than JSON string for partial updates
import redis
r = redis.Redis()
# Store user object as hash
r.hset("user:1001", mapping={
"name": "Alice",
"email": "alice@example.com",
"age": 30,
"role": "admin"
})
r.expire("user:1001", 3600)
# Partial update (only changes one field)
r.hset("user:1001", "age", 31)
# Get all fields
user = r.hgetall("user:1001") # {b"name": b"Alice", ...}
Redis lists are linked lists of strings, ordered by insertion order. Redis sets are unordered collections of unique strings. Lists support O(1) push/pop at either end; sets support O(1) add, remove, and membership tests.
# Push operations (O(1))
LPUSH mylist "c" "b" "a" # ["a", "b", "c"] — push to head
RPUSH mylist "d" "e" "f" # ["a","b","c","d","e","f"] — push to tail
# Pop operations (O(1))
LPOP mylist # "a" — remove from head
RPOP mylist # "f" — remove from tail
LPOP mylist 2 # pop 2 items (Redis 6.2+)
RPOP mylist 2 # pop 2 items
# Block operations (wait if list is empty)
BLPOP mylist 5 # block up to 5s for left pop
BRPOP mylist 5 # block up to 5s for right pop
BLPOP list1 list2 list3 10 # check multiple lists, block 10s
# Returns: [list_name, value] or nil on timeout
# Access elements
LRANGE mylist 0 -1 # all elements
LRANGE mylist 0 4 # first 5 elements
LRANGE mylist -3 -1 # last 3 elements
LLEN mylist # list length
LINDEX mylist 0 # element at index 0
LINDEX mylist -1 # last element
# Modification
LINSERT mylist BEFORE "b" "x" # insert "x" before "b"
LINSERT mylist AFTER "b" "y" # insert "y" after "b"
LSET mylist 0 "z" # set element at index 0
LTRIM mylist 0 99 # keep only elements 0-99
LTRIM mylist 0 -1 # keep all (no-op)
LTRIM mylist 1 -1 # remove first element
# Move element between lists
RPOPLPUSH source dest # atomic right-pop + left-push
BRPOPLPUSH source dest 30 # blocking version (both deprecated in 6.2+; prefer LMOVE/BLMOVE)
| Command | Complexity | Description |
|---|---|---|
| LPUSH key val [val ...] | O(1) | Push values to head of list |
| RPUSH key val [val ...] | O(1) | Push values to tail of list |
| LPOP key [count] | O(1) | Pop from head (Redis 6.2+ count) |
| RPOP key [count] | O(1) | Pop from tail (Redis 6.2+ count) |
| LRANGE key start stop | O(S+N) | Get range (S=start offset, N=range) |
| LLEN key | O(1) | Get list length |
| LINDEX key index | O(N) | Get element by index |
| LINSERT key BEFORE\|AFTER pivot val | O(N) | Insert before/after pivot |
| LTRIM key start stop | O(N) | Trim to range |
| BLPOP key [key ...] timeout | O(1) | Blocking left pop |
| BRPOP key [key ...] timeout | O(1) | Blocking right pop |
| RPOPLPUSH src dest | O(1) | Atomic pop-push between lists |
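The blocking commands above combine into a small reliable-queue worker: BRPOPLPUSH parks each message in a per-worker processing list until it is acknowledged. A sketch; queue names and the handler are illustrative:

```python
import json

def enqueue(client, queue, payload):
    client.rpush(queue, json.dumps(payload))

def work_one(client, queue, processing, handler, timeout=5):
    """Process a single message; returns False on timeout."""
    raw = client.brpoplpush(queue, processing, timeout=timeout)
    if raw is None:
        return False  # timed out, queue was empty
    try:
        handler(json.loads(raw))
        client.lrem(processing, 1, raw)      # ack: drop from processing list
    except Exception:
        client.rpoplpush(processing, queue)  # push back for retry
        raise
    return True

if __name__ == "__main__":
    import redis
    r = redis.Redis(decode_responses=True)
    enqueue(r, "queue:orders", {"order_id": 1, "item": "widget"})
    work_one(r, "queue:orders", "queue:processing", print)
```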
# Producer: push to tail
RPUSH queue:orders '{"order_id":1,"item":"widget"}'
RPUSH queue:orders '{"order_id":2,"item":"gadget"}'
# Consumer: pop from head
RPOP queue:orders # {"order_id":1,"item":"widget"}
# Reliable queue: use BRPOPLPUSH to processing list
BRPOPLPUSH queue:orders queue:processing 30
# ... process the message ...
LREM queue:processing 1 "message" # remove after processing
# Or: RPOPLPUSH back to queue on failure
# Push and pop from same end
LPUSH stack:undo '{"action":"delete","data":{...}}'
LPUSH stack:undo '{"action":"edit","data":{...}}'
LPOP stack:undo # pop last action
# Basic operations
SADD myset "a" "b" "c" "d" # add members
SREM myset "a" # remove member
SMEMBERS myset # all members (unsorted)
SISMEMBER myset "a" # 1 if exists, 0 if not
SCARD myset # cardinality (size)
SPOP myset # remove and return random member
SPOP myset 2 # remove and return 2 random members
SRANDMEMBER myset # return random member (without removing)
SRANDMEMBER myset 3 # return 3 random members (no duplicates)
SRANDMEMBER myset -3 # return 3 members (with duplicates)
SMOVE src dst "member" # move member between sets
# Set operations
SADD set1 "a" "b" "c"
SADD set2 "b" "c" "d"
SUNION set1 set2 # {"a","b","c","d"} — union
SINTER set1 set2 # {"b","c"} — intersection
SDIFF set1 set2 # {"a"} — difference (in set1 not set2)
SUNIONSTORE dest set1 set2 # store union in dest
SINTERSTORE dest set1 set2 # store intersection in dest
SDIFFSTORE dest set1 set2 # store difference in dest
| Command | Complexity | Description |
|---|---|---|
| SADD key member ... | O(1) per member | Add members to set |
| SREM key member ... | O(1) per member | Remove members from set |
| SISMEMBER key member | O(1) | Check membership |
| SCARD key | O(1) | Set cardinality |
| SMEMBERS key | O(N) | All members |
| SPOP key [count] | O(1) | Random member (removes) |
| SRANDMEMBER key [count] | O(N) | Random member (keeps) |
| SUNION key ... | O(N) | Union of sets |
| SINTER key ... | O(N*M) | Intersection (min set) |
| SDIFF key ... | O(N) | Difference |
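Set algebra runs server-side, so membership questions like "mutual followers" need no data shipped to the client. A redis-py sketch; the user:*:followers key format is illustrative:

```python
def followers_key(user_id):
    return f"user:{user_id}:followers"

def follow(client, follower_id, followee_id):
    client.sadd(followers_key(followee_id), follower_id)

def mutual_followers(client, a, b):
    """SINTER computes the intersection on the server."""
    return client.sinter(followers_key(a), followers_key(b))

if __name__ == "__main__":
    import redis
    r = redis.Redis(decode_responses=True)
    follow(r, 7, 1001)
    follow(r, 7, 1002)
    print(mutual_followers(r, 1001, 1002))
```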
# Tag a post with categories
SADD post:42:tags "redis" "database" "nosql" "caching"
# All posts tagged "redis"
SADD tag:redis:posts 42 55 100 203
# Posts tagged with BOTH "redis" AND "database"
SINTER tag:redis:posts tag:database:posts
# Posts tagged with "redis" but NOT "sql"
SDIFF tag:redis:posts tag:sql:posts
# Add post to multiple tags atomically
SADD tag:redis:posts 300
SADD tag:caching:posts 300
# Scan a large set incrementally instead of SMEMBERS
SSCAN myset 0 MATCH pattern COUNT 100
Hashes are maps of string fields to string values — ideal for representing objects. Sorted sets map unique members to floating-point scores, keeping members ordered by score. Sorted sets are Redis's most powerful data structure for rankings, leaderboards, and priority queues.
# Set and get
HSET user:1001 name "Alice" email "alice@example.com" age 30
HGET user:1001 name # "Alice"
HGET user:1001 missing # nil
HMSET user:1001 name "Bob" role "editor" # multi-set (deprecated, use HSET)
HMGET user:1001 name email age # ["Bob", "alice@example.com", "30"]
HGETALL user:1001 # all fields and values
# Check and delete
HEXISTS user:1001 name # 1 (exists)
HDEL user:1001 email # 1 (deleted)
HKEYS user:1001 # ["name", "age", "role"]
HVALS user:1001 # ["Bob", "30", "editor"]
HLEN user:1001 # 3 (field count)
# Numeric operations
HINCRBY user:1001 age 1 # 31
HINCRBY user:1001 login_count 1 # atomic increment
HINCRBYFLOAT stats:revenue 99.99 # float increment
# Hash scan
HSCAN user:1001 0 MATCH "pref:*" COUNT 10
# Redis 4.0+ new commands
HSTRLEN user:1001 name # 3 (length of field value)
HSETNX user:1001 country "US" # set only if field doesn't exist
| Command | Complexity | Description |
|---|---|---|
| HSET key field val [f v ...] | O(1) per field | Set field(s) in hash |
| HGET key field | O(1) | Get field value |
| HGETALL key | O(N) | All fields and values |
| HMGET key field [field ...] | O(1) per field | Multiple field values |
| HDEL key field [field ...] | O(1) per field | Delete fields |
| HEXISTS key field | O(1) | Check field exists |
| HKEYS key | O(N) | All field names |
| HVALS key | O(N) | All field values |
| HLEN key | O(1) | Field count |
| HINCRBY key field n | O(1) | Increment field by integer |
| HINCRBYFLOAT key field n | O(1) | Increment field by float |
| HSETNX key field val | O(1) | Set if field not exists |
| HSCAN key cursor | O(1) per call | Incremental field scan |
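HINCRBY makes a hash a natural per-object stats record, and a pipeline keeps it to one round trip. A redis-py sketch; the key and field names are illustrative:

```python
import time

def stats_key(article_id):
    """Per-article stats hash, e.g. article:42:stats."""
    return f"article:{article_id}:stats"

def record_view(client, article_id):
    """Bump the views counter and stamp first_viewed exactly once."""
    pipe = client.pipeline()
    pipe.hincrby(stats_key(article_id), "views", 1)
    pipe.hsetnx(stats_key(article_id), "first_viewed", int(time.time()))
    views, _ = pipe.execute()
    return views

if __name__ == "__main__":
    import redis
    print(record_view(redis.Redis(), 42))
```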
# Add members with scores
ZADD leaderboard 1000 "player1" 2000 "player2" 1500 "player3"
ZADD leaderboard NX 3000 "player4" # only add if not exists
ZADD leaderboard XX CH 2500 "player1" # only update existing, change score
ZADD leaderboard GT 3200 "player4" # only update if new score > current
ZADD leaderboard LT 800 "player1" # only update if new score < current
# Score queries
ZSCORE leaderboard "player1" # 2500 (updated)
ZMSCORE leaderboard "player1" "player2" # [2500, 2000] (Redis 6.2+)
# Range queries
ZRANGE leaderboard 0 -1 # all members (ascending by score)
ZRANGE leaderboard 0 9 WITHSCORES # top 10 with scores
ZREVRANGE leaderboard 0 9 WITHSCORES # top 10 (highest first)
ZRANGEBYSCORE leaderboard 1000 2000 # scores between 1000-2000
ZRANGEBYSCORE leaderboard -inf +inf WITHSCORES # all with scores
ZREVRANGEBYSCORE leaderboard +inf 1000 # descending score range
ZCOUNT leaderboard 1000 2000 # count of members in score range
# Rank operations
ZRANK leaderboard "player1" # rank (0-indexed, ascending)
ZREVRANK leaderboard "player1" # rank (0-indexed, descending)
# Removal
ZREM leaderboard "player2" # remove member
ZREMRANGEBYRANK leaderboard 0 4 # remove bottom 5
ZREMRANGEBYSCORE leaderboard 0 500 # remove all with score <= 500
ZPOPMAX leaderboard # remove highest (Redis 5.0+)
ZPOPMAX leaderboard 3 # remove top 3
ZPOPMIN leaderboard 3 # remove bottom 3
# Lexicographic (when all scores are the same)
ZADD myindex 0 "apple" 0 "banana" 0 "cherry"
ZRANGE myindex 0 -1 # alphabetical
ZRANGEBYLEX myindex "[b" "[c" # range by lex value
# Aggregation
ZUNIONSTORE dest 2 zset1 zset2 WEIGHTS 1 2 AGGREGATE SUM
ZINTERSTORE dest 2 zset1 zset2 WEIGHTS 1 1 AGGREGATE MAX
| Command | Complexity | Description |
|---|---|---|
| ZADD key score member ... | O(log N) per member | Add/update members |
| ZSCORE key member | O(1) | Get member score |
| ZRANGE key start stop | O(log N + M) | Range by rank (ascending) |
| ZREVRANGE key start stop | O(log N + M) | Range by rank (descending) |
| ZRANGEBYSCORE key min max | O(log N + M) | Range by score |
| ZRANK key member | O(log N) | Member rank (ascending) |
| ZREVRANK key member | O(log N) | Member rank (descending) |
| ZREM key member ... | O(log N) per member | Remove members |
| ZCARD key | O(1) | Member count |
| ZINCRBY key increment member | O(log N) | Increment member score |
| ZUNIONSTORE dest N k ... | O(N log N) | Union with weights |
| ZINTERSTORE dest N k ... | O(N log N) | Intersection with weights |
| ZPOPMAX key [count] | O(log N) | Pop highest score(s) |
| ZPOPMIN key [count] | O(log N) | Pop lowest score(s) |
import redis
import time
r = redis.Redis(decode_responses=True)
class Leaderboard:
    def __init__(self, key="leaderboard"):
        self.key = key

    def update_score(self, player, delta):
        """Atomically increment player score"""
        r.zincrby(self.key, delta, player)

    def add_score(self, player, score):
        """Set absolute score for player"""
        r.zadd(self.key, {player: score})

    def top_n(self, n=10, with_scores=True):
        """Get top N players (highest first)"""
        return r.zrevrange(self.key, 0, n - 1, withscores=with_scores)

    def rank(self, player):
        """Get player rank (1-based, highest = #1)"""
        rank = r.zrevrank(self.key, player)
        return rank + 1 if rank is not None else None

    def score(self, player):
        """Get player score"""
        return r.zscore(self.key, player)

    def around_me(self, player, radius=3):
        """Show players around a given player"""
        rank = r.zrevrank(self.key, player)
        if rank is None:
            return []
        start = max(0, rank - radius)
        end = rank + radius
        return r.zrevrange(self.key, start, end, withscores=True)

    def page(self, page=1, per_page=20):
        """Paginated leaderboard"""
        start = (page - 1) * per_page
        end = start + per_page - 1
        return r.zrevrange(self.key, start, end, withscores=True)

# Usage
lb = Leaderboard("game:season1")
lb.add_score("Alice", 1000)
lb.add_score("Bob", 850)
lb.add_score("Charlie", 920)
lb.update_score("Alice", 50)  # Alice now 1050
print(lb.top_n(3))    # [(Alice, 1050), (Charlie, 920), (Bob, 850)]
print(lb.rank("Bob")) # 3
-- Sliding window rate limiter with sorted sets
-- KEYS[1] = rate limit key
-- ARGV[1] = window size in seconds
-- ARGV[2] = max requests allowed
-- ARGV[3] = current timestamp
-- ARGV[4] = unique request ID
local key = KEYS[1]
local window = tonumber(ARGV[1])
local limit = tonumber(ARGV[2])
local now = tonumber(ARGV[3])
local id = ARGV[4]
-- Remove expired entries
redis.call('ZREMRANGEBYSCORE', key, '-inf', now - window)
-- Count remaining
local count = redis.call('ZCARD', key)
if count < limit then
redis.call('ZADD', key, now, id)
redis.call('EXPIRE', key, window)
return limit - count - 1
else
return -1
end
# Use score as priority (lower = higher priority)
ZADD queue:tasks 1 "urgent-fix"
ZADD queue:tasks 3 "normal-feature"
ZADD queue:tasks 2 "bug-review"
ZADD queue:tasks 5 "nice-to-have"
# Get highest priority task
ZPOPMIN queue:tasks # ("urgent-fix", 1)
# Block for next task (BZPOPMIN, Redis 5.0+)
BZPOPMIN queue:tasks 5 # block 5s waiting for task
Redis Pub/Sub is a lightweight messaging system where publishers send messages to channels and subscribers receive them. It's fire-and-forget — messages not delivered to offline subscribers are lost. For persistent messaging, use Redis Streams.
# In terminal 1 — Subscribe to channels
SUBSCRIBE news alerts # subscribe to specific channels
# Once subscribed, the terminal only accepts SUB/PUB commands
PSUBSCRIBE news:* # subscribe with pattern matching
PSUBSCRIBE log.* error.* # subscribe to multiple patterns
# In terminal 2 — Publish messages
PUBLISH news "Breaking: Redis 8 released!" # returns 1 (1 subscriber)
PUBLISH alerts "Server CPU at 95%" # returns number of subscribers
# In terminal 1 — receives:
# 1) "message"
# 2) "news"
# 3) "Breaking: Redis 8 released!"
# Unsubscribe
UNSUBSCRIBE news # unsubscribe from specific channel
UNSUBSCRIBE # unsubscribe from all
PUNSUBSCRIBE news:* # unsubscribe from pattern
PUNSUBSCRIBE # unsubscribe from all patterns
# Pub/Sub info
PUBSUB CHANNELS # list active channels (with subscribers)
PUBSUB CHANNELS news:* # channels matching pattern
PUBSUB NUMSUB news alerts # subscriber counts per channel
PUBSUB NUMPAT # count of pattern subscriptions
# Subscribe to all channels starting with "order:"
PSUBSCRIBE order:*
# These messages would be received:
PUBLISH order:created '{"id":1}'
PUBLISH order:updated '{"id":1,"status":"shipped"}'
PUBLISH order:cancelled '{"id":2}'
# Pattern syntax (glob-style)
PSUBSCRIBE h?llo # ? = single char (hello, hallo, hxllo)
PSUBSCRIBE h*llo # * = zero or more chars (hllo, heeeello)
PSUBSCRIBE h[ae]llo # [ae] = one of the chars (hallo, hello)
import Redis from 'ioredis';
const sub = new Redis();
const pub = new Redis();
// Subscribe to events
sub.subscribe('notifications', (err, count) => {
  console.log('Subscribed to', count, 'channels');
});
sub.on('message', (channel, message) => {
  console.log('Received:', channel, message);
  // Handle notification...
});
// 'pmessage' fires for channels matched via psubscribe()
sub.on('pmessage', (pattern, channel, message) => {
  console.log('Pattern match:', pattern, channel, message);
});
// Publish events
await pub.publish('notifications', JSON.stringify({
  type: 'order_placed',
  orderId: 1234,
  userId: 5678,
}));
// Cleanup
sub.unsubscribe();
sub.quit();
pub.quit();
Streams were added in Redis 5.0 as a robust alternative to Pub/Sub. They provide persistence, consumer groups, message acknowledgment, and replayability.
# Adding entries to a stream (XADD)
XADD mystream * name "Alice" action "login"
# Returns: "1709251200000-0" (timestamp-ID)
XADD mystream * name "Bob" action "purchase" amount "99.99"
XADD mystream * name "Charlie" action "logout"
# Stream ID options
XADD mystream 0-1 name "first" # explicit ID (0-1 = minimum)
XADD mystream MAXLEN 1000 * name "x" # trim to last 1000 entries
# Reading from streams
XREAD COUNT 2 STREAMS mystream 0-0 # read from beginning, limit 2
XREAD COUNT 10 BLOCK 5000 STREAMS mystream $ # block 5s for new entries ($ = latest)
XREAD STREAMS mystream s1 s2 1709251200-0 # read multiple streams from IDs
# XRANGE — range query
XRANGE mystream - + # all entries
XRANGE mystream - + COUNT 10 # first 10 entries
XRANGE mystream 1709251200-0 + # from specific ID
XREVRANGE mystream + - COUNT 5 # last 5 entries (reverse)
# Stream length and trimming
XLEN mystream # number of entries
XTRIM mystream MAXLEN ~ 1000 # trim to ~1000 (approximate, efficient)
XTRIM mystream MINID ~ 1709251200-0 # trim entries older than ID
XDEL mystream 1709251200-0 # delete specific entry
XDEL mystream id1 id2 id3 # delete multiple entries
# Create a consumer group
XGROUP CREATE mystream mygroup $ # start from new messages ($)
XGROUP CREATE mystream mygroup 0-0 # start from beginning
XGROUP CREATE mystream mygroup $ MKSTREAM # create stream if needed
# Consumer group info
XINFO STREAM mystream # stream metadata
XINFO GROUPS mystream # list consumer groups
XINFO CONSUMERS mystream mygroup # consumers in a group
# Read as consumer group
XREADGROUP GROUP mygroup consumer1 COUNT 1 STREAMS mystream >
# XACK — acknowledge messages
XACK mystream mygroup 1709251200-0 1709251201-0 # returns count acknowledged
# XPENDING — check pending messages
XPENDING mystream mygroup # summary of pending entries
XPENDING mystream mygroup - + 10 # up to 10 pending entries with details
# XCLAIM — claim/reassign pending messages
XCLAIM mystream mygroup consumer2 30000 1709251200-0
# Reassign messages idle for >30s to consumer2
# XAUTOCLAIM — claim and return (Redis 6.2+)
XAUTOCLAIM mystream mygroup consumer2 30000 "-" COUNT 10
# Delete consumer group
XGROUP DESTROY mystream mygroup
import Redis from 'ioredis';
const redis = new Redis();
async function produceEvent(type, data) {
  // ioredis xadd takes flattened field/value pairs after the ID
  const id = await redis.xadd(
    'events', '*',
    'type', type,
    'data', JSON.stringify(data),
    'timestamp', Date.now().toString(),
  );
  console.log('Produced:', id);
}

async function consumeEvents(group, consumer) {
  // Create group if not exists
  try {
    await redis.xgroup('CREATE', 'events', group, '$', 'MKSTREAM');
  } catch (e) { /* BUSYGROUP = already exists */ }
  while (true) {
    const results = await redis.xreadgroup(
      'GROUP', group, consumer,
      'COUNT', '10', 'BLOCK', '5000',
      'STREAMS', 'events', '>'
    );
    if (!results) continue;
    for (const [stream, messages] of results) {
      for (const [id, fields] of messages) {
        try {
          await processMessage(fields);
          await redis.xack(stream, group, id);
        } catch (err) {
          console.error('Processing failed:', id, err);
          // Message stays in PEL for retry
        }
      }
    }
  }
}
Redis transactions group multiple commands into a single atomic operation. All commands in a transaction are executed sequentially without interruption. For complex logic, Lua scripts run atomically on the server side — they are the most powerful tool for multi-step operations.
# Basic transaction
MULTI
SET account:alice:balance 1000
SET account:bob:balance 500
DECRBY account:alice:balance 200
INCRBY account:bob:balance 200
EXEC
# Returns array of results: [OK, OK, 800, 700]
# Discard transaction
MULTI
SET key1 val1
SET key2 val2
DISCARD # abort, no commands executed
# Errors during queueing vs execution
MULTI
SET good_key value
INCR bad_string_key # queued even though wrong type
SET another_key value
EXEC
# Result: [OK, ERROR, OK]
# The INCR fails but the other commands still execute
# WATCH monitors keys for changes before EXEC
WATCH account:alice:balance
balance = GET account:alice:balance # e.g., 1000
# If another client changes account:alice:balance between WATCH and EXEC...
# ...the EXEC will return nil (transaction rejected)
MULTI
SET account:alice:balance (balance - 200)
SET account:bob:balance (bob_balance + 200)
EXEC
# Returns nil if watched key was modified → retry from WATCH
# Correct pattern with retry loop (pseudocode)
WATCH key
val = GET key
MULTI
SET key (new_val)
EXEC
# if EXEC returns nil → start over
# if EXEC returns [OK] → success
import redis
r = redis.Redis(decode_responses=True)
def transfer(from_acct, to_acct, amount, max_retries=3):
    for _ in range(max_retries):
        try:
            with r.pipeline() as pipe:
                while True:
                    try:
                        pipe.watch(from_acct)
                        sender_balance = int(pipe.get(from_acct) or 0)
                        if sender_balance < amount:
                            pipe.unwatch()
                            return False  # Insufficient funds
                        receiver_balance = int(pipe.get(to_acct) or 0)
                        pipe.multi()
                        pipe.set(from_acct, sender_balance - amount)
                        pipe.set(to_acct, receiver_balance + amount)
                        pipe.execute()
                        return True
                    except redis.WatchError:
                        continue  # Key modified, retry
        except redis.RedisError:
            return False
    return False  # Max retries exceeded
# EVAL — run Lua script directly
EVAL "return redis.call('SET', KEYS[1], ARGV[1])" 1 mykey myvalue
# script num_keys key arg
# EVALSHA — run script by SHA1 hash (more efficient)
SCRIPT LOAD "return redis.call('SET', KEYS[1], ARGV[1])"
# Returns: "a526e6b2a9e3c1a5..."
EVALSHA a526e6b2a9e3c1a5 1 mykey myvalue
# Script management
SCRIPT EXISTS sha1 [sha2 ...] # check if scripts are cached
SCRIPT FLUSH # clear script cache
SCRIPT KILL # kill running script (no write commands used)
Prefer EVALSHA in production via client libraries: they auto-load scripts and use the SHA cache. Redis runs Lua scripts atomically — no other commands run while a script executes.
| Rule | Description | Example |
|---|---|---|
| Atomic by default | Scripts block the server | Keep scripts fast (<5ms) |
| Pass keys via KEYS[] | Never hardcode keys in script | EVAL "..." 1 key1 val |
| Use ARGV[] for data | Pass values as arguments | ARGV[1], ARGV[2] |
| Return types | Use redis.status_reply(str) | For multi-bulk returns |
| Local variables | Use local for all vars | local x = 1 |
| No closures | Cannot access external state | All data via KEYS/ARGV |
| Script size limit | Max script: 64MB (Lua 5.1) | Split large scripts |
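Client libraries implement the EVALSHA-with-fallback pattern for you; in redis-py it is register_script. A sketch using the same compare-and-delete idiom shown earlier for locks:

```python
# Compare-and-delete: delete the key only if it still holds our token
CAS_DELETE = """
if redis.call('GET', KEYS[1]) == ARGV[1] then
    return redis.call('DEL', KEYS[1])
end
return 0
"""

def make_cas_delete(client):
    # register_script hashes the source once; calls go through EVALSHA
    # and transparently fall back to EVAL if the script cache was flushed
    return client.register_script(CAS_DELETE)

if __name__ == "__main__":
    import redis
    r = redis.Redis()
    cas_delete = make_cas_delete(r)
    r.set("lock:demo", "token-1")
    print(cas_delete(keys=["lock:demo"], args=["token-1"]))  # 1 = deleted
```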
-- Token bucket rate limiter
-- KEYS[1] = rate limit key
-- ARGV[1] = capacity (max tokens)
-- ARGV[2] = refill rate (tokens per second)
-- ARGV[3] = requested tokens
-- ARGV[4] = current time (seconds)
local key = KEYS[1]
local capacity = tonumber(ARGV[1])
local rate = tonumber(ARGV[2])
local requested = tonumber(ARGV[3])
local now = tonumber(ARGV[4])
local last_refill = tonumber(redis.call('HGET', key, 'last_refill') or now)
local tokens = tonumber(redis.call('HGET', key, 'tokens') or capacity)
-- Calculate refill
local elapsed = now - last_refill
tokens = math.min(capacity, tokens + (elapsed * rate))
if tokens >= requested then
tokens = tokens - requested
redis.call('HMSET', key, 'tokens', tokens, 'last_refill', now)
redis.call('EXPIRE', key, math.ceil(capacity / rate) + 1)
return 1 -- allowed
else
redis.call('HMSET', key, 'tokens', tokens, 'last_refill', now)
redis.call('EXPIRE', key, math.ceil(capacity / rate) + 1)
return 0 -- denied
end
-- Simple distributed lock with atomic acquire + check
-- KEYS[1] = lock key
-- ARGV[1] = unique lock token
-- ARGV[2] = TTL in milliseconds
local key = KEYS[1]
local token = ARGV[1]
local ttl = tonumber(ARGV[2])
-- Try to acquire lock
if redis.call('SET', key, token, 'NX', 'PX', ttl) then
return 1 -- lock acquired
end
-- Lock exists — check if it's ours (token matches)
local current = redis.call('GET', key)
if current == token then
-- Extend our own lock
redis.call('PEXPIRE', key, ttl)
return 2 -- lock extended
end
return 0 -- lock held by another process
import redis
import uuid
import time
r = redis.Redis()
class DistributedLock:
    def __init__(self, name, ttl_ms=30000):
        self.key = f"lock:{name}"
        self.ttl_ms = ttl_ms
        self.token = str(uuid.uuid4())

    def acquire(self, retries=3, delay=0.1):
        """Acquire lock with retry loop"""
        script = """
        if redis.call('SET', KEYS[1], ARGV[1], 'NX', 'PX', ARGV[2]) then
            return 1
        end
        return 0
        """
        for _ in range(retries):
            result = r.eval(script, 1, self.key, self.token, self.ttl_ms)
            if result == 1:
                return True
            time.sleep(delay)
        return False

    def release(self):
        """Release lock (only if we own it)"""
        script = """
        if redis.call('get', KEYS[1]) == ARGV[1] then
            return redis.call('del', KEYS[1])
        end
        return 0
        """
        r.eval(script, 1, self.key, self.token)

# Usage
lock = DistributedLock("order:5523", ttl_ms=30000)
if lock.acquire():
    try:
        # Critical section
        process_order(5523)
    finally:
        lock.release()  # Only releases if we still own it
-- Fair queue: move task from priority queue to in-progress
-- KEYS[1] = pending queue (sorted set)
-- KEYS[2] = in-progress hash
-- ARGV[1] = worker ID
-- ARGV[2] = timeout timestamp (when to reclaim)
local queue = KEYS[1]
local in_progress = KEYS[2]
local worker = ARGV[1]
local timeout = tonumber(ARGV[2])
-- Get highest priority (lowest score) task
local task = redis.call('ZPOPMIN', queue)
if #task == 0 then
return nil -- no tasks
end
local task_id = task[1]
redis.call('HSET', in_progress, task_id,
worker .. "|" .. timeout)
return task_id
Redis 7.0 introduced FUNCTION as a replacement for EVAL. Functions are named, versioned, and managed as libraries. They have better lifecycle management and can be loaded with FUNCTION LOAD and called with FCALL / FCALL_RO.
Redis supports multiple deployment topologies for scaling and high availability: single-node (development), master-replica replication, Redis Sentinel (HA), and Redis Cluster (sharding).
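A minimal sketch of the Functions workflow described above, assuming Redis 7.0+; the library name `mylib` and function name `myincr` are illustrative:

```shell
# Load a function library (the blob is Lua source with a shebang header)
redis-cli FUNCTION LOAD "#!lua name=mylib
redis.register_function('myincr', function(keys, args)
  return redis.call('INCR', keys[1])
end)"
FCALL myincr 1 counter   # invoke: FCALL <name> <numkeys> <keys...>
FUNCTION LIST            # inspect loaded libraries
FUNCTION DELETE mylib    # remove the library
```

Note that FCALL_RO only works for functions registered with the `no-writes` flag; `myincr` writes, so it must be called with FCALL.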
# Master configuration
# redis.conf
replicaof no one # this is the master
bind 0.0.0.0
port 6379
# Replica configuration
# redis.conf
replicaof 10.0.1.100 6379 # connect to master
replica-serve-stale-data yes # serve stale data when disconnected
replica-read-only yes # replicas are read-only by default
# Check replication status
INFO replication
# Returns: role:master, connected_slaves:2, ...
# Or on replica: role:slave, master_link_status:up
# Replication commands
REPLCONF # configure replication
ROLE # returns role info (master/replica/sentinel)
SYNC # force full resynchronization (internal)
PSYNC # partial resynchronization (internal)
| Feature | Description |
|---|---|
| Async replication | Replicas receive writes asynchronously from master |
| Read replicas | Replicas serve stale reads (eventual consistency) |
| Partial resync | PSYNC only transfers differences after disconnect |
| Replica priority | replica-priority controls failover order |
| Replica read-only | Replicas reject writes by default |
| Stale reads | replica-serve-stale-data controls whether a disconnected replica serves stale data |
# sentinel.conf — at least 3 Sentinel nodes recommended
port 26379
sentinel monitor mymaster 10.0.1.100 6379 2 # name, ip, port, quorum
sentinel down-after-milliseconds mymaster 5000 # consider down after 5s
sentinel failover-timeout mymaster 60000 # failover timeout 60s
sentinel parallel-syncs mymaster 1 # 1 replica at a time during failover
sentinel auth-pass mymaster yourpassword # master password
# Sentinel commands
sentinel ckquorum mymaster # check if quorum is reachable
sentinel failover mymaster # force failover
sentinel get-master-addr-by-name mymaster # get current master IP:port
sentinel master mymaster # master info + replicas + sentinels
sentinel replicas mymaster # list replicas
sentinel sentinels mymaster # list sentinel instances
# Cluster configuration
# redis.conf (per node)
cluster-enabled yes
cluster-config-file nodes.conf
cluster-node-timeout 5000
cluster-announce-ip 10.0.1.101 # public IP
cluster-announce-port 6379
# Create cluster (minimum 6 nodes: 3 masters + 3 replicas)
redis-cli --cluster create \
10.0.1.101:6379 10.0.1.102:6379 10.0.1.103:6379 \
10.0.1.104:6379 10.0.1.105:6379 10.0.1.106:6379 \
--cluster-replicas 1
# Cluster management
CLUSTER INFO # cluster state, slots, nodes
CLUSTER NODES # list all nodes with IDs and roles
CLUSTER SLOTS # slot ranges → node mapping
CLUSTER KEYSLOT "user:1001" # which slot this key hashes to
CLUSTER COUNTKEYSINSLOT 5460 # keys in a specific slot
CLUSTER GETKEYSINSLOT 5460 10 # sample keys from a slot
# Cluster operations
CLUSTER MEET 10.0.1.104 6379 # add node to cluster
CLUSTER REPLICATE <node-id> # set as replica of node-id
CLUSTER FAILOVER # manual failover (no data loss)
CLUSTER FORGET <node-id> # remove node from cluster
CLUSTER RESET [HARD|SOFT] # reset node
# Resharding
redis-cli --cluster reshard 10.0.1.101:6379 \
--cluster-from <src-id> \
--cluster-to <dst-id> \
--cluster-slots 1000
# Fix cluster
redis-cli --cluster fix 10.0.1.101:6379 # auto-repair
# Keys in {} are hashed by the content inside braces only
# This ensures related keys live on the same shard
# These keys all hash to the SAME slot (using "user1001"):
SET {user1001}:name "Alice"
SET {user1001}:email "alice@example.com"
SET {user1001}:balance 1000
HSET {user1001}:profile age 30 role "admin"
# This allows multi-key operations in cluster mode:
MGET {user1001}:name {user1001}:email
DEL {user1001}:name {user1001}:email
# Without hash tags, these would be on DIFFERENT shards:
# SET user1001:name "Alice" → slot calculated from "user1001:name"
# SET user1001:email "a@b.c" → slot calculated from "user1001:email"
# RDB Snapshots (point-in-time, fork-based)
save 900 1 # save after 900s if 1 change
save 300 10 # save after 300s if 10 changes
save 60 10000 # save after 60s if 10000 changes
save "" # disable RDB saves
rdbcompression yes # compress RDB with LZF
rdbchecksum yes # CRC64 checksum
dbfilename dump.rdb
dir /var/lib/redis # RDB file directory
# Manual RDB operations
SAVE # blocking save in the main process (avoid in production)
BGSAVE # non-blocking background save (fork + write in child)
BGREWRITEAOF # rewrite AOF file in background
LASTSAVE # unix timestamp of last save
# AOF (Append Only File) — every write logged
appendonly yes # enable AOF
appendfilename "appendonly.aof"
appendfsync everysec # fsync every second (recommended)
# appendfsync always # fsync every write (safest, slowest)
# appendfsync no # let OS decide (fastest, least safe)
# AOF rewrite
auto-aof-rewrite-percentage 100 # trigger rewrite when 2x size
auto-aof-rewrite-min-size 64mb # minimum size for rewrite
# Hybrid persistence (Redis 4.0+)
aof-use-rdb-preamble yes # RDB format + AOF delta (best of both)
# Loading on startup
# Redis loads: AOF if AOF is enabled (dump.rdb is ignored), otherwise RDB
# With aof-use-rdb-preamble: loads RDB base → replays AOF increment
| Feature | RDB Snapshots | AOF | Hybrid (RDB+AOF) |
|---|---|---|---|
| Persistence | Point-in-time | Every write | RDB base + AOF delta |
| File size | Compact | Large | Moderate |
| Recovery speed | Fast | Slow | Fast |
| Data safety | Can lose minutes | Can lose 1 second | Can lose 1 second |
| Performance impact | Fork-based (burst) | Continuous writes | Moderate |
| Best for | Backups, replication | Durability | Production (recommended) |
# Memory limits
maxmemory 4gb # max memory usage
maxmemory-policy allkeys-lru # eviction policy
# Eviction policies
# noeviction — return errors on writes (default)
# allkeys-lru — evict least recently used keys
# allkeys-lfu — evict least frequently used keys (Redis 4.0+)
# allkeys-random — evict random keys
# volatile-lru — evict LRU among keys with TTL
# volatile-lfu — evict LFU among keys with TTL (Redis 4.0+)
# volatile-random — evict random among keys with TTL
# volatile-ttl — evict keys with shortest TTL first
| Policy | Scope | Algorithm | Best For |
|---|---|---|---|
| allkeys-lru | All keys | Least Recently Used | General caching |
| allkeys-lfu | All keys | Least Frequently Used | Hot data caching |
| allkeys-random | All keys | Random | Simple caching |
| volatile-lru | Keys with TTL | LRU | Cache with mixed persistent data |
| volatile-lfu | Keys with TTL | LFU | Hot temporary data |
| volatile-ttl | Keys with TTL | Shortest TTL | Time-sensitive caching |
| noeviction | N/A | None (errors) | Database use (no data loss) |
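Redis does not keep an exact global LRU list; it samples `maxmemory-samples` keys (default 5) and evicts the best candidate among the sample. A rough sketch of that idea in Python — the function name and the `access_times` dict are illustrative, not a Redis API:

```python
import random

# Approximated LRU eviction: sample a few keys, evict the least recently
# used among the sample. `access_times` maps key -> last-access timestamp.
def evict_one(access_times: dict, samples: int = 5, rng=random) -> str:
    candidates = rng.sample(list(access_times), min(samples, len(access_times)))
    victim = min(candidates, key=access_times.get)  # oldest access loses
    del access_times[victim]
    return victim

keys = {"session:1": 100, "session:2": 250, "session:3": 50}
print(evict_one(keys, samples=3))  # session:3 (oldest access time)
```

Sampling trades exactness for O(1) memory overhead; raising `maxmemory-samples` makes eviction closer to true LRU at some CPU cost.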
# Runtime configuration
CONFIG GET maxmemory # get config value
CONFIG SET maxmemory 8gb # set config value (runtime)
CONFIG GET "save*" # get all save-related configs
CONFIG REWRITE # save current config to redis.conf
CONFIG RESETSTAT # reset runtime stats
# Common configs to inspect/tune
CONFIG GET maxmemory-policy
CONFIG GET timeout # client idle timeout
CONFIG GET tcp-keepalive
CONFIG GET databases # number of databases (default 16)
CONFIG GET requirepass # password
CONFIG GET cluster-enabled
# Client management
CLIENT SETNAME app-server-1
CLIENT LIST # all connected clients
CLIENT KILL ip:port # disconnect client
CLIENT PAUSE 10000 # pause all clients for 10s
CLIENT UNPAUSE # resume all clients
# Redis 8.0 key features
# 1. Improved multi-threaded I/O (default on)
# 2. Client-side caching improvements
# 3. Vector similarity search (RediSearch native)
# 4. New RESP3 protocol enhancements
# 5. ACL improvements (key-based permissions)
# ACL — Access Control List
ACL SETUSER developer on +@read +@connection ~* &*
ACL SETUSER app-writer on +@all -@dangerous ~app:* >strongpass # > sets a password; & is a channel pattern
ACL LIST # list all users
ACL WHOAMI # current user
ACL CAT # list command categories
ACL GETUSER developer # user details
# Key-based permissions
ACL SETUSER reporting on +GET +SCAN ~reporting:* ~analytics:*
Essential Redis interview questions covering core concepts, architecture decisions, and real-world patterns.
Redis is an in-memory data structure store supporting strings, lists, sets, hashes, sorted sets, streams, bitmaps, HyperLogLogs, and geospatial indexes. Key differences from memcached: richer data types (memcached stores only opaque strings), optional persistence (RDB/AOF), built-in replication and clustering, server-side Lua scripting, and pub/sub messaging.
Redis offers two persistence mechanisms, often used together: RDB (periodic point-in-time snapshots, compact and fast to reload) and AOF (a log of every write command, fsynced per the configured policy).
Reload priority: if both files exist, Redis prefers the AOF. Disable persistence entirely for pure caching.
Use SET key token NX PX ttl for atomic lock acquisition:
# Acquire: atomic SET with NX (only if not exists) + PX (TTL)
SET lock:resource:42 "unique-uuid-123" NX PX 30000
# Release: Lua script ensures only lock owner can release
EVAL "if redis.call('get', KEYS[1]) == ARGV[1] then return redis.call('del', KEYS[1]) else return 0 end" 1 lock:resource:42 "unique-uuid-123"
Why Lua for release? It prevents a race condition where the lock expires and another client acquires it between the GET and the DEL. The Lua script runs atomically.
Redis Cluster partitions data across 16384 hash slots. Each node owns a subset of slots; a key's slot is CRC16(key) mod 16384, and hash tags ({...}) force related keys into the same slot.
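The slot mapping can be reproduced outside Redis: the CRC16 variant is CCITT/XMODEM (poly 0x1021, init 0), and a non-empty `{...}` hash tag, if present, replaces the key for hashing. A minimal sketch; the function names are illustrative, not part of any Redis client API:

```python
# Sketch of Redis Cluster's key -> slot mapping (CRC16-XMODEM, then mod 16384).

def crc16(data: bytes) -> int:
    """CRC16-CCITT/XMODEM: polynomial 0x1021, initial value 0."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) & 0xFFFF if crc & 0x8000 else (crc << 1) & 0xFFFF
    return crc

def key_hash_slot(key: str) -> int:
    """If the key contains a non-empty {tag}, only the tag is hashed."""
    start = key.find('{')
    if start != -1:
        end = key.find('}', start + 1)
        if end != -1 and end != start + 1:  # ignore empty tags like "{}"
            key = key[start + 1:end]
    return crc16(key.encode()) % 16384

print(key_hash_slot("{user1001}:name") == key_hash_slot("{user1001}:email"))  # True
```

This is why `{user1001}:name` and `{user1001}:email` always land on the same shard: both hash only the tag `user1001`.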
Redis uses a single-threaded event loop based on epoll/kqueue. It processes commands sequentially: each loop iteration reads a ready command, executes it to completion, and writes the reply, so commands never interleave and no locking is needed. Since Redis 6, optional I/O threads can offload socket reads/writes, but command execution stays single-threaded.
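The loop can be illustrated with Python's `selectors` module — a toy echo handler standing in for command dispatch, not Redis's actual ae event-loop code:

```python
import selectors
import socket

# Toy single-threaded event loop: one selector multiplexes all connections,
# and each ready socket is handled to completion before the next one —
# which is why command execution needs no locks.
sel = selectors.DefaultSelector()
server_side, client_side = socket.socketpair()
sel.register(server_side, selectors.EVENT_READ)

client_side.sendall(b"PING")
for key, _ in sel.select(timeout=1):       # wait for readable sockets
    data = key.fileobj.recv(4096)          # read the "command"
    key.fileobj.sendall(b"+PONG " + data)  # execute it, write the reply

print(client_side.recv(4096))  # b'+PONG PING'
```

A real server would wrap the `select` call in a `while` loop and register new client sockets as they connect; the single-threaded principle is the same.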
Three common approaches with different trade-offs:
# 1. Fixed Window (simplest)
MULTI
INCR rate:user:1001:202401151430 # key = user + minute bucket
EXPIRE rate:user:1001:202401151430 59
EXEC
# Check: if result > 100, reject
# 2. Sliding Window (more accurate, uses sorted set)
# Each request adds current timestamp as a member
ZADD rate:user:1001 <timestamp> <request_id>
ZREMRANGEBYSCORE rate:user:1001 -inf <window_start>
ZCARD rate:user:1001
# If count > limit, reject
# 3. Token Bucket (smooth rate, best UX)
# Implemented with Lua script (see Section 6)
# Tokens refill at a constant rate; each request consumes tokens
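That token-bucket refill arithmetic can be mirrored in plain Python for illustration — a dict stands in for the Redis hash, and all names here are illustrative:

```python
# Pure-Python mirror of the Lua token bucket: tokens refill proportionally
# to elapsed time, capped at capacity; a request is allowed only if enough
# tokens remain, in which case they are consumed.
def token_bucket(state: dict, capacity: float, rate: float,
                 requested: float, now: float) -> bool:
    last = state.get("last_refill", now)
    tokens = state.get("tokens", capacity)
    tokens = min(capacity, tokens + (now - last) * rate)  # refill
    allowed = tokens >= requested
    if allowed:
        tokens -= requested
    state["tokens"], state["last_refill"] = tokens, now
    return allowed

state = {}
print([token_bucket(state, 5, 1.0, 1, 100.0) for _ in range(6)])
# [True, True, True, True, True, False] — bucket drains at t=100
print(token_bucket(state, 5, 1.0, 1, 102.0))  # True — 2 tokens refilled
```

The Lua version does the same computation atomically inside Redis, which is what makes it safe under concurrent clients.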