Redis is a backbone of the modern web stack — in-memory, sub-millisecond latency and a rich set of data types. From caching to session stores, rate limiting and pub/sub, one small server solves a surprising range of problems well. This article summarises the most common use cases and a production configuration.

Data Types

  • String — basic key-value
  • Hash — object (field-value map)
  • List — ordered list, push/pop from either end
  • Set — unique items
  • Sorted set — score-ordered set (leaderboard)
  • Stream — append-only log (Redis's take on Kafka)
  • Bitmap, HyperLogLog, Geo — specialised types
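A short redis-cli session shows a few of these in action (key names are illustrative; replies shown as Redis returns them):

```shell
127.0.0.1:6379> HSET user:1 name "Ada" age "36"
(integer) 2
127.0.0.1:6379> HGETALL user:1
1) "name"
2) "Ada"
3) "age"
4) "36"
127.0.0.1:6379> ZADD leaderboard 100 alice 80 bob
(integer) 2
127.0.0.1:6379> ZREVRANGE leaderboard 0 1 WITHSCORES
1) "alice"
2) "100"
3) "bob"
4) "80"
127.0.0.1:6379> SADD tags redis cache
(integer) 2
127.0.0.1:6379> SISMEMBER tags redis
(integer) 1
```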

Use Case 1: Cache

const Redis = require('ioredis');
const redis = new Redis();

async function getUser(id) {
    const cacheKey = `user:${id}`;
    const cached = await redis.get(cacheKey);
    if (cached) return JSON.parse(cached);

    const user = await db.users.findByPk(id);
    if (user) await redis.set(cacheKey, JSON.stringify(user), 'EX', 300);  // 5 min TTL; don't cache misses
    return user;
}

// Invalidation
async function updateUser(id, data) {
    const user = await db.users.update(id, data);
    await redis.del(`user:${id}`);  // drop the cache
    return user;
}

Use Case 2: Session Store

const session = require('express-session');
const RedisStore = require('connect-redis').default;

app.use(session({
    store: new RedisStore({ client: redis, prefix: 'sess:' }),
    secret: process.env.SESSION_SECRET,
    resave: false,
    saveUninitialized: false,
    cookie: { httpOnly: true, secure: true, maxAge: 7 * 24 * 3600 * 1000 }
}));

Use Case 3: Rate Limit

// Fixed-window rate limit: one counter per minute bucket
async function checkRateLimit(userId, maxPerMinute = 60) {
    const key = `rate:${userId}:${Math.floor(Date.now() / 60000)}`;
    const count = await redis.incr(key);
    if (count === 1) await redis.expire(key, 60);
    return count <= maxPerMinute;
}

// In an Express route handler:
if (!(await checkRateLimit(req.user.id))) {
    return res.status(429).json({ error: 'Too many requests' });
}
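The counter above resets at each minute boundary, so a burst can straddle two windows and briefly exceed the limit. A true sliding window can be built on a sorted set; a sketch (the function name, key prefix and injectable `client` parameter are mine, not part of the article's code):

```javascript
// Sliding-window limiter: each request is a sorted-set member scored by its
// timestamp, so the window slides continuously instead of resetting per minute.
async function slidingRateLimit(client, userId, maxPerWindow = 60, windowMs = 60000) {
    const key = `rate:sw:${userId}`;
    const now = Date.now();
    // Drop requests that have aged out of the window
    await client.zremrangebyscore(key, 0, now - windowMs);
    // Count what remains; refuse if the window is full
    if (await client.zcard(key) >= maxPerWindow) return false;
    // Record this request (random suffix keeps same-millisecond entries distinct)
    await client.zadd(key, now, `${now}:${Math.random()}`);
    await client.pexpire(key, windowMs);  // let idle keys expire on their own
    return true;
}
```

The four round-trips can be collapsed into one with `multi()`; the logic stays the same.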

Use Case 4: Pub/Sub

// Publisher
redis.publish('notifications', JSON.stringify({ userId: 42, msg: 'Hi' }));

// Subscriber (requires its own connection)
const sub = new Redis();
sub.subscribe('notifications');
sub.on('message', (channel, message) => {
    const data = JSON.parse(message);
    io.to(`user:${data.userId}`).emit('notification', data);
});

Use Case 5: Distributed Lock

// Atomic lock via SET NX EX
const crypto = require('crypto');

async function withLock(key, ttlSec, fn) {
    const token = crypto.randomUUID();  // unique per lock holder
    const acquired = await redis.set(key, token, 'NX', 'EX', ttlSec);
    if (!acquired) throw new Error('Lock busy');
    try {
        return await fn();
    } finally {
        // Release only our own token via Lua script
        await redis.eval(`
            if redis.call('get', KEYS[1]) == ARGV[1] then
                return redis.call('del', KEYS[1])
            else return 0 end
        `, 1, key, token);
    }
}

await withLock('job:daily-report', 300, generateReport);

Persistence: RDB vs AOF

Redis lives in memory — data is lost on restart unless you persist it. The options:

  • RDB (snapshot): periodic full dumps. Fast restart, small files. Last few minutes can be lost
  • AOF (append-only file): every command is logged. Safer, but files are bigger and restart slower
  • Hybrid: enable both; the usual production setup (out of the box only RDB is enabled)

# redis.conf
# snapshot after 1 change in 15 min
save 900 1
# ...or 10 changes in 5 min
save 300 10
# ...or 10000 changes in 1 min
save 60 10000
appendonly yes
# fsync once per second (good balance of safety and speed)
appendfsync everysec

Memory Management

# Maximum memory
maxmemory 2gb

# What to do when full: evict least-recently-used keys
maxmemory-policy allkeys-lru
# Alternatives: volatile-lru, allkeys-lfu, noeviction

# Find big keys
redis-cli --bigkeys

Cluster and Sentinel

  • Master-Replica (1 master + N replicas): scale reads by serving them from replicas
  • Redis Sentinel: automatic failover (replica promoted when master dies)
  • Redis Cluster: sharding across up to ~1000 nodes per cluster
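With ioredis, connecting through Sentinel is a constructor option rather than an application change; a minimal sketch (the hostnames and the master group name 'mymaster' are assumptions, taken from a typical sentinel.conf):

```javascript
const Redis = require('ioredis');

// ioredis asks the sentinels where the current master is and reconnects
// automatically after a failover.
const redis = new Redis({
    sentinels: [
        { host: 'sentinel-1', port: 26379 },
        { host: 'sentinel-2', port: 26379 },
        { host: 'sentinel-3', port: 26379 },
    ],
    name: 'mymaster',   // master group name from sentinel.conf
    role: 'master',     // or 'slave' to read from a replica
});
```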

Security

  • Redis has no authentication by default on port 6379; exposed to the internet, it is a disaster
  • Use requirepass for a password, and ACLs for per-user permissions
  • bind 127.0.0.1 or a firewall rule to allow only the app server
  • protected-mode yes — enabled by default, but verify
  • TLS connections (Redis 6+)
  • Rename dangerous commands: rename-command FLUSHALL ""
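ACLs (Redis 6+) replace the single shared password with per-user permissions. A sketch of locking an application user down to its own key prefix, in redis.conf syntax (the user name, password and `app:*` prefix are illustrative):

```
# Disable the open default user
user default off
# app may read/write keys matching app:* only; no admin commands
user app on >s3cret ~app:* +@read +@write -@admin
```

The same rules can be applied at runtime with ACL SETUSER and inspected with ACL LIST.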

Monitoring

redis-cli INFO stats
redis-cli INFO memory
redis-cli INFO replication
redis-cli SLOWLOG GET 10

# Prometheus exporter
docker run -p 9121:9121 oliver006/redis_exporter

Conclusion

Redis belongs next to every Node/Python/PHP app. Start with caching, then add sessions, rate limits and queues. A 2-4GB Redis instance comfortably carries most startups through their first two years.
