Redis Essentials Cheatsheet
Redis 7.4 from a PostgreSQL developer's perspective: data structures, patterns, and when to reach for a cache vs a database.
Quick Overview
Redis is an in-memory data store that doubles as a cache, message broker, and session store. Unlike PostgreSQL, it doesn't persist to disk by default (though it can), trades ACID guarantees for microsecond latency, and models data as typed values rather than rows and columns. You reach for Redis when PostgreSQL would work but isn't fast enough: repeated reads of the same hot data, rate limiting, leaderboards, real-time pub/sub, and ephemeral session state. Redis 7.4 is the current stable release.
# Install on macOS
brew install redis && brew services start redis
# Install on Ubuntu
sudo apt install redis-server && sudo systemctl start redis
Getting Started
# Verify Redis is running
redis-cli ping
# PONG
# Open the interactive CLI
redis-cli
# Connect to a remote instance
redis-cli -h hostname -p 6379 -a yourpassword
# Your first key/value — the "hello world" of Redis
SET greeting "hello"
GET greeting
# "hello"
# Keys expire — this is what makes Redis different from a database
SET session:abc123 "user:42" EX 3600 # expires in 1 hour
TTL session:abc123 # seconds remaining
If you’re coming from PostgreSQL: there are no tables, no schemas, no joins. Everything is a key mapped to a typed value. The key is a string; the value can be a string, list, set, sorted set, hash, stream, or bitmap.
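With no tables, the colon-delimited key does the work of "table + primary key + column". A tiny helper (hypothetical, not part of redis-py) keeps the naming consistent across a codebase:

```python
def make_key(*parts) -> str:
    """Join segments with ':', Redis's conventional namespace separator."""
    return ":".join(str(p) for p in parts)

# "user" table, primary key 42, "name" column becomes:
print(make_key("user", 42, "name"))  # user:42:name
```

The convention is just that, a convention: Redis treats the whole string as an opaque key, but tools like `SCAN MATCH "user:*"` and most GUIs assume colon-separated namespaces.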
Core Concepts
| Concept | PostgreSQL equivalent | What it means |
|---|---|---|
| Key | Primary key | Global string identifier for any value |
| TTL | No native equivalent | Automatic expiry after N seconds |
| Database (0-15) | Schema/database | Namespace separation — SELECT 1 to switch |
| Hash | Row in a table | Field-value pairs under one key |
| Sorted Set | Table with an index | Members scored by a float — ordered by score |
| Pub/Sub | LISTEN/NOTIFY | Fire-and-forget message channels |
Persistence modes:
- RDB (default): periodic snapshot to disk — fast restarts, some data loss on crash
- AOF: append-only log — durable, slower
- No persistence: pure cache — fastest, loses everything on restart
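These modes map to a few redis.conf directives; a minimal sketch (the values are illustrative, not tuned recommendations):

```
# RDB: snapshot if at least 1 key changed in the last 900 seconds
save 900 1

# AOF: log every write, fsync once per second
appendonly yes
appendfsync everysec

# Pure cache: disable both
# save ""
# appendonly no
```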
Essential Commands
Strings (the default type)
# Set and get
SET user:42:name "Alice"
GET user:42:name
# Atomic increment — perfect for counters and rate limiting
INCR page:views:homepage
INCRBY page:views:homepage 10
# Set only if key doesn't exist (NX) — useful for distributed locks
SET lock:job:1 "worker-1" NX EX 30
Hashes (like a table row)
# Store a user record
HSET user:42 name "Alice" email "alice@example.com" plan "pro"
# Get one field
HGET user:42 email
# Get all fields
HGETALL user:42
# Increment a numeric field atomically
HINCRBY user:42 login_count 1
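On the wire, HSET takes alternating field/value arguments. redis-py's `hset(name, mapping=...)` does this flattening for you; the flattening itself is one line of Python:

```python
from itertools import chain

user = {"name": "Alice", "email": "alice@example.com", "plan": "pro"}

# Becomes: HSET user:42 name Alice email alice@example.com plan pro
args = list(chain.from_iterable(user.items()))
print(args)  # ['name', 'Alice', 'email', 'alice@example.com', 'plan', 'pro']
```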
Lists (ordered, duplicates allowed)
# Push to head (left) or tail (right)
LPUSH jobs:queue "job:1001"
RPUSH jobs:queue "job:1002"
# Pop from head — blocking variant waits for new items
LPOP jobs:queue
BLPOP jobs:queue 5 # blocks up to 5 seconds
# Peek without popping
LRANGE jobs:queue 0 -1 # all items
LRANGE jobs:queue 0 9 # first 10 items
Sets (unordered, unique members)
# Track unique visitors
SADD page:visitors:2026-04-03 "user:42" "user:99"
SCARD page:visitors:2026-04-03 # count
# Set operations — useful for friend/follower graphs
SINTER follows:alice follows:bob # mutual follows
SUNION follows:alice follows:bob # everyone either follows
SDIFF follows:alice follows:bob # alice follows but bob doesn't
Sorted Sets (ordered by score)
# Leaderboard: member + score
ZADD leaderboard 9500 "alice" 8200 "bob" 11000 "carol"
# Top 3 players (highest score first)
ZREVRANGE leaderboard 0 2 WITHSCORES # or: ZRANGE leaderboard 0 2 REV WITHSCORES (6.2+)
# Alice's rank (0-indexed, lower = better)
ZREVRANK leaderboard "alice"
# Range by score
ZRANGEBYSCORE leaderboard 8000 10000 # or: ZRANGE leaderboard 8000 10000 BYSCORE (6.2+)
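Conceptually, a sorted set is a map from member to float score, kept ordered by score. The ZREVRANGE/ZREVRANK semantics above can be modeled in plain Python (Redis does this in O(log N) via a skiplist, not by re-sorting):

```python
scores = {"alice": 9500, "bob": 8200, "carol": 11000}

# ZREVRANGE leaderboard 0 2: members by descending score
ranked = sorted(scores, key=scores.get, reverse=True)
print(ranked[:3])             # ['carol', 'alice', 'bob']

# ZREVRANK leaderboard "alice": 0-indexed position from the top
print(ranked.index("alice"))  # 1
```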
Key management
# Check existence
EXISTS user:42
# Delete
DEL user:42 user:99 # accepts multiple keys
# Set expiry on an existing key
EXPIRE session:abc123 1800
# Scan keys matching a pattern (never use KEYS * in production)
SCAN 0 MATCH "user:*" COUNT 100
Common Patterns
Pattern 1: Cache-aside with PostgreSQL
You don’t replace PostgreSQL — you put Redis in front of it for hot reads.
import redis
import psycopg2
import json

r = redis.Redis(host="localhost", decode_responses=True)

def get_user(user_id: int) -> dict | None:
    cache_key = f"user:{user_id}"
    # Try cache first
    cached = r.get(cache_key)
    if cached:
        return json.loads(cached)
    # Miss — hit the database
    with psycopg2.connect("dbname=myapp user=postgres") as conn:
        with conn.cursor() as cur:
            cur.execute("SELECT id, name, email FROM users WHERE id = %s", (user_id,))
            row = cur.fetchone()
    if not row:
        return None
    user = {"id": row[0], "name": row[1], "email": row[2]}
    # Write to cache, expire in 5 minutes
    r.setex(cache_key, 300, json.dumps(user))
    return user
Pattern 2: Rate limiting with sliding window
import time

def is_rate_limited(user_id: int, limit: int = 100, window: int = 60) -> bool:
    key = f"ratelimit:{user_id}"
    now = time.time()
    window_start = now - window
    pipe = r.pipeline()
    # Remove requests outside the window
    pipe.zremrangebyscore(key, 0, window_start)
    # Add this request
    pipe.zadd(key, {str(now): now})
    # Count requests in window
    pipe.zcard(key)
    # Reset TTL
    pipe.expire(key, window)
    results = pipe.execute()
    request_count = results[2]
    return request_count > limit
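The ZREMRANGEBYSCORE + ZCARD pair is just timestamp arithmetic; the same window count in plain Python looks like this (a model of the logic, not a replacement for the atomic Redis version):

```python
def window_count(timestamps: list[float], now: float, window: int = 60) -> int:
    # Keep only entries newer than (now - window), then count them,
    # mirroring ZREMRANGEBYSCORE key 0 (now - window) followed by ZCARD.
    return sum(1 for t in timestamps if t > now - window)

now = 1000.0
hits = [now - 70, now - 30, now - 5, now]  # the first is outside a 60s window
print(window_count(hits, now))  # 3
```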
Pattern 3: Distributed lock
import uuid

def acquire_lock(resource: str, ttl: int = 30) -> str | None:
    lock_key = f"lock:{resource}"
    token = str(uuid.uuid4())
    # NX = only set if not exists, prevents race condition
    acquired = r.set(lock_key, token, nx=True, ex=ttl)
    return token if acquired else None

def release_lock(resource: str, token: str) -> bool:
    # Lua script ensures atomic check-and-delete
    script = """
    if redis.call('get', KEYS[1]) == ARGV[1] then
        return redis.call('del', KEYS[1])
    else
        return 0
    end
    """
    result = r.eval(script, 1, f"lock:{resource}", token)
    return bool(result)
Gotchas & Tips
Never use KEYS * in production. It blocks the entire Redis server while it scans. Use SCAN with a cursor instead — it’s slower but non-blocking.
Redis is single-threaded (for commands). One slow Lua script or a large SMEMBERS on a million-member set will block every other client. Keep operations fast; avoid large collection scans.
Hashes are cheaper than string keys for small objects. Storing user:42:name, user:42:email as separate string keys uses more memory than a single HSET user:42 name ... email .... Redis internally encodes small hashes as listpacks (ziplists before 7.0), a compact serialized format.
TTL doesn’t carry over on SET. If you SET a key that already has a TTL, the TTL is removed unless you pass KEEPTTL (new in Redis 6.0): SET user:42:name "Bob" KEEPTTL.
Pub/Sub messages are not persisted. If a subscriber is offline when a message is published, it’s lost. For durable messaging, use Redis Streams (XADD/XREAD) — added in Redis 5.0 and significantly improved in 7.x.
MULTI/EXEC is not rollback. Redis transactions batch commands and run them atomically, but if one command fails mid-transaction, the rest still execute. It prevents interleaving from other clients, not partial failure. For true conditional logic, use Lua scripts.
Memory is finite. Configure maxmemory and an eviction policy (allkeys-lru is a safe default for caches). Without it, Redis will keep accepting writes until the server runs out of RAM.
# Check memory usage
redis-cli INFO memory | grep used_memory_human
# See eviction policy
redis-cli CONFIG GET maxmemory-policy
Next Steps
- Official docs: redis.io/docs — the command reference is excellent; every command page has complexity and usage notes
- Redis University: university.redis.com — free courses, worth the RU101 fundamentals course
- Related: once you’re comfortable with Redis, look at Redis Streams for event sourcing and Redisson (Java) or redis-py for production client configuration
Source: z2h.fyi/cheatsheets/redis-cheatsheet — Zero to Hero cheatsheets for developers.