25 April 2024 · 15 min read

Redis in Enterprise: Caching Patterns and Pitfalls

Redis · Caching · Database · Architecture

Practical patterns for using Redis in enterprise applications. Cache invalidation strategies, cluster deployment, and common anti-patterns.



Redis is more than a cache, but caching is where most enterprises start. After implementing Redis across multiple large-scale applications—from e-commerce platforms handling 50,000 requests per second to real-time analytics dashboards—I've learned that the difference between Redis success and failure often comes down to understanding patterns and avoiding common pitfalls.

When to Cache (and When Not To)

Before diving into patterns, understand what benefits from caching:

Good Caching Candidates

  • Expensive database queries: Complex joins, aggregations
  • External API responses: Rate-limited third-party APIs
  • Computed results: Recommendations, search rankings
  • Session data: User authentication state
  • Configuration: Feature flags, settings

Poor Caching Candidates

  • Frequently changing data: Real-time stock prices
  • User-specific data with high cardinality: Every user's unique feed
  • Large objects: Videos, large files (use CDN instead)
  • Data requiring strong consistency: Financial transactions

Core Caching Patterns

Cache-Aside (Lazy Loading)

The most common pattern—application manages the cache directly:

async function getUserById(userId: string): Promise<User | null> {
  const cacheKey = `user:${userId}`;

  // Try cache first
  const cached = await redis.get(cacheKey);
  if (cached) {
    return JSON.parse(cached);
  }

  // Cache miss - fetch from database
  const user = await db.users.findById(userId);

  // Store in cache for future requests
  if (user) {
    await redis.setex(cacheKey, 3600, JSON.stringify(user)); // 1 hour TTL
  }

  return user;
}

async function updateUser(userId: string, data: Partial<User>): Promise<User> {
  // Update database
  const user = await db.users.update(userId, data);

  // Invalidate cache
  await redis.del(`user:${userId}`);

  return user;
}

Pros: Simple, only caches what's needed

Cons: Initial requests are slow (cache miss), potential for stale data

Write-Through

Write to cache and database simultaneously:

async function updateProduct(productId: string, data: Product): Promise<Product> {
  const cacheKey = `product:${productId}`;

  // Update database first
  const product = await db.products.update(productId, data);

  // Update cache with same data
  await redis.setex(cacheKey, 3600, JSON.stringify(product));

  return product;
}

Pros: Cache always consistent with database

Cons: Higher latency on writes, may cache data never read

Write-Behind (Write-Back)

Write to cache immediately, persist to database asynchronously:

async function recordPageView(pageId: string): Promise<void> {
  const cacheKey = `pageviews:${pageId}`;

  // Increment in Redis immediately
  await redis.incr(cacheKey);

  // Queue for database persistence
  await queue.add('persist-pageviews', { pageId }, {
    delay: 5000, // Batch writes every 5 seconds
    removeOnComplete: true
  });
}

// Background worker
async function persistPageViews(job: Job): Promise<void> {
  const { pageId } = job.data;
  const cacheKey = `pageviews:${pageId}`;

  // Get current count and reset
  const count = await redis.getset(cacheKey, '0');
  if (parseInt(count) > 0) {
    await db.pageViews.increment(pageId, parseInt(count));
  }
}

Pros: Very fast writes, handles spikes well

Cons: Risk of data loss if Redis fails before persistence
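If that loss window matters, one partial mitigation (beyond shorter batch intervals) is enabling Redis's append-only persistence so pending counters survive a restart. A hedged redis.conf sketch; whether the extra fsync cost is acceptable depends on the workload:

# redis.conf - append-only persistence narrows the write-behind loss window
appendonly yes
appendfsync everysec   # fsync roughly once per second; at most ~1s of acknowledged writes at risk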

Read-Through

Cache layer handles fetching from database:

// Using a caching library with read-through support
const cache = new CacheManager({
  store: redisStore,
  ttl: 3600,
  refreshThreshold: 300 // Refresh if TTL < 5 minutes
});

async function getProduct(productId: string): Promise<Product> {
  return cache.wrap(
    `product:${productId}`,
    () => db.products.findById(productId), // Fetch function
    { ttl: 3600 }
  );
}

Pros: Clean separation, automatic refresh

Cons: More complex setup, library dependency

Cache Invalidation Strategies

Phil Karlton said there are only two hard things in computer science: cache invalidation and naming things. He wasn't wrong.

Time-Based Expiration (TTL)

Simplest approach—let entries expire:

// Set with expiration
await redis.setex('product:123', 3600, JSON.stringify(product));

// Or set expiration separately
await redis.set('product:123', JSON.stringify(product));
await redis.expire('product:123', 3600);

When to use: Data that can be slightly stale, low update frequency

TTL Guidelines:

Data Type          TTL
Static content     24 hours
User profiles      15-60 minutes
Product catalog    5-15 minutes
Price/inventory    30-60 seconds
Session data       30 minutes, sliding (see the sketch below)
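A "sliding" TTL means each access pushes the expiration out again instead of letting it run down from the original write. A minimal sketch for session reads, assuming a hypothetical session:{id} key scheme:

const SESSION_TTL = 30 * 60; // 30 minutes

async function getSession(sessionId: string): Promise<Session | null> {
  const key = `session:${sessionId}`;
  const cached = await redis.get(key);
  if (!cached) return null;

  // Sliding expiration: each read pushes the TTL out another 30 minutes
  await redis.expire(key, SESSION_TTL);
  return JSON.parse(cached);
}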

Event-Driven Invalidation

Invalidate on data changes:

// Publisher (on data change)
async function updateProduct(productId: string, data: Product): Promise<void> {
  await db.products.update(productId, data);
  await redis.publish('product-updates', JSON.stringify({
    type: 'updated',
    productId
  }));
}

// Subscriber (cache invalidation service)
const subscriber = redis.duplicate();
await subscriber.subscribe('product-updates');

subscriber.on('message', async (channel, message) => {
  const event = JSON.parse(message);
  if (event.type === 'updated') {
    await redis.del(`product:${event.productId}`);
    // Also invalidate related list caches (e.g. product-list:category:*).
    // Note: DEL does not expand wildcards, so use pattern-based
    // invalidation (next section) or a tracked key set for those.
  }
});

Pattern-Based Invalidation

For related cache entries:

// Using Redis SCAN to find and delete matching keys
async function invalidateByPattern(pattern: string): Promise<number> {
  let cursor = '0';
  let deleted = 0;

  do {
    const [nextCursor, keys] = await redis.scan(cursor, 'MATCH', pattern, 'COUNT', 100);
    cursor = nextCursor;

    if (keys.length > 0) {
      await redis.del(...keys);
      deleted += keys.length;
    }
  } while (cursor !== '0');

  return deleted;
}

// Usage
await invalidateByPattern('user:123:*');          // All user 123 caches
await invalidateByPattern('product:*:inventory'); // All inventory caches

Warning: Pattern scanning can be expensive. Prefer explicit key tracking:

// Track related keys in a set
async function cacheWithTracking(
  mainKey: string,
  trackingKey: string,
  value: string,
  ttl: number
): Promise<void> {
  const pipeline = redis.pipeline();
  pipeline.setex(mainKey, ttl, value);
  pipeline.sadd(trackingKey, mainKey);
  pipeline.expire(trackingKey, ttl + 60);
  await pipeline.exec();
}

// Invalidate all tracked keys
async function invalidateTracked(trackingKey: string): Promise<void> {
  const keys = await redis.smembers(trackingKey);
  if (keys.length > 0) {
    await redis.del(...keys, trackingKey);
  }
}

Production Deployment

Redis Cluster Configuration

import Redis from 'ioredis';

const cluster = new Redis.Cluster([
  { host: 'redis-node-1', port: 6379 },
  { host: 'redis-node-2', port: 6379 },
  { host: 'redis-node-3', port: 6379 }
], {
  redisOptions: {
    password: process.env.REDIS_PASSWORD,
    tls: {} // Enable TLS in production
  },
  scaleReads: 'slave', // Read from replicas
  maxRedirections: 16,
  retryDelayOnClusterDown: 300
});

Key Design for Clustering

Redis Cluster distributes keys across hash slots. Keys that share a hash tag map to the same slot, and therefore to the same node:

// These keys may be on different nodes
'user:123'
'user:123:orders'
'user:123:preferences'

// These keys will be on the same node (hash tag: {user:123})
'{user:123}:profile'
'{user:123}:orders'
'{user:123}:preferences'

// Use hash tags for multi-key operations
await redis.mget(
  '{user:123}:profile',
  '{user:123}:orders'
);

Connection Pooling

const redis = new Redis({
  host: 'redis-primary',
  port: 6379,
  maxRetriesPerRequest: 3,
  retryStrategy: (times) => {
    if (times > 3) return null;          // Stop retrying
    return Math.min(times * 100, 3000);  // Back off, capped at 3 seconds
  },
  // Connection behavior settings
  lazyConnect: true,
  enableReadyCheck: true,
  connectTimeout: 10000
});

// Handle connection events
redis.on('error', (err) => {
  logger.error('Redis connection error', err);
});

redis.on('reconnecting', () => {
  logger.warn('Redis reconnecting');
});

Common Anti-Patterns

1. Using KEYS in Production

Bad: KEYS blocks Redis and scans all keys

// DON'T DO THIS
const keys = await redis.keys('user:*');

Good: Use SCAN for iteration

// DO THIS
async function* scanKeys(pattern: string) {
  let cursor = '0';
  do {
    const [nextCursor, keys] = await redis.scan(cursor, 'MATCH', pattern, 'COUNT', 100);
    cursor = nextCursor;
    for (const key of keys) yield key;
  } while (cursor !== '0');
}

2. No Eviction Policy

Bad: With no memory limit or eviction policy, Redis grows until writes fail or the OS kills the process

# redis.conf - no eviction (bad)
maxmemory-policy noeviction

Good: Configure appropriate eviction

# redis.conf - evict least recently used keys
maxmemory 4gb
maxmemory-policy allkeys-lru

3. Ignoring Serialization Overhead

Bad: Serializing complex objects on every request

// Expensive serialization
await redis.set('user:123', JSON.stringify(complexUserObject));
const user = JSON.parse(await redis.get('user:123'));

Good: Use Redis hashes for structured data

// Store as hash - no serialization needed
await redis.hset('user:123', {
  name: 'John',
  email: 'john@example.com',
  role: 'admin'
});

// Read only the fields you need
const [name, email] = await redis.hmget('user:123', 'name', 'email');

4. Thundering Herd Problem

Bad: Many requests hit database when cache expires

// All requests hit database at once
async function getData(): Promise<Data> {
  const cached = await redis.get('data');
  if (!cached) {
    return await expensiveDatabaseQuery(); // Thundering herd!
  }
  return JSON.parse(cached);
}

Good: Use locking or probabilistic early refresh

async function getDataWithLock(): Promise<Data> {
  const cacheKey = 'data';
  const lockKey = 'data:lock';

  const cached = await redis.get(cacheKey);
  if (cached) return JSON.parse(cached);

  // Try to acquire lock
  const acquired = await redis.set(lockKey, '1', 'EX', 10, 'NX');
  if (!acquired) {
    // Another process is fetching, wait and retry
    await delay(100);
    return getDataWithLock();
  }

  try {
    const data = await expensiveDatabaseQuery();
    await redis.setex(cacheKey, 3600, JSON.stringify(data));
    return data;
  } finally {
    await redis.del(lockKey);
  }
}
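The probabilistic early refresh option mentioned above avoids locks entirely: each reader occasionally recomputes the value before it expires, with the probability climbing as expiry approaches. A simplified heuristic sketch, reusing expensiveDatabaseQuery from the example above, not a tuned implementation:

async function getDataWithEarlyRefresh(): Promise<Data> {
  const cacheKey = 'data';
  const ttlSeconds = 3600;

  const cached = await redis.get(cacheKey);
  const remaining = await redis.ttl(cacheKey); // seconds until expiry, negative if missing

  // Refresh probability grows as expiry approaches (~0 when fresh, ->1 near expiry)
  const refreshEarly = remaining > 0 && Math.random() < Math.exp(-remaining / 60);

  if (cached && !refreshEarly) {
    return JSON.parse(cached);
  }

  const data = await expensiveDatabaseQuery();
  await redis.setex(cacheKey, ttlSeconds, JSON.stringify(data));
  return data;
}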

5. Caching Errors

Bad: Caching null results forever

const user = await db.users.findById(userId);
await redis.set(`user:${userId}`, JSON.stringify(user)); // null cached!

Good: Short TTL for negative caching or skip

const user = await db.users.findById(userId);
if (user) {
  await redis.setex(`user:${userId}`, 3600, JSON.stringify(user));
} else {
  // Short TTL for "not found" to prevent repeated queries
  await redis.setex(`user:${userId}:notfound`, 60, '1');
}
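The read path has to honor that marker, otherwise the negative cache does nothing. A minimal sketch using the same key scheme as above:

async function getUserWithNegativeCache(userId: string): Promise<User | null> {
  // Known "not found" - skip both cache and database
  if (await redis.exists(`user:${userId}:notfound`)) {
    return null;
  }

  const cached = await redis.get(`user:${userId}`);
  if (cached) return JSON.parse(cached);

  const user = await db.users.findById(userId);
  if (user) {
    await redis.setex(`user:${userId}`, 3600, JSON.stringify(user));
  } else {
    await redis.setex(`user:${userId}:notfound`, 60, '1');
  }
  return user;
}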

Monitoring and Observability

// Key metrics to track
const metrics = {
  'redis.hit_rate': hitCount / (hitCount + missCount),
  'redis.latency_p99': calculateP99(latencies),
  'redis.memory_usage': info.used_memory,
  'redis.evicted_keys': info.evicted_keys,
  'redis.connected_clients': info.connected_clients
};

// Health check
async function healthCheck(): Promise<boolean> {
  try {
    const start = Date.now();
    await redis.ping();
    const latency = Date.now() - start;
    return latency < 100; // Under 100ms is healthy
  } catch {
    return false;
  }
}
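The hitCount, missCount, and info values above have to come from somewhere; Redis reports them through the INFO command (keyspace_hits, keyspace_misses, used_memory, and so on). A minimal sketch of deriving hit rate with ioredis; parseInfo is a small helper defined here, not a library API:

// Derive cache hit rate from Redis INFO stats
function parseInfo(raw: string): Record<string, string> {
  const result: Record<string, string> = {};
  for (const line of raw.split('\r\n')) {
    const idx = line.indexOf(':');
    if (idx > 0) result[line.slice(0, idx)] = line.slice(idx + 1);
  }
  return result;
}

async function getHitRate(): Promise<number> {
  const stats = parseInfo(await redis.info('stats'));
  const hits = Number(stats.keyspace_hits ?? 0);
  const misses = Number(stats.keyspace_misses ?? 0);
  return hits + misses > 0 ? hits / (hits + misses) : 1;
}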

Key Takeaways

  1. Choose patterns wisely: Cache-aside for most cases, write-behind for high-write workloads
  2. Plan invalidation upfront: TTL is simple but stale; event-driven is accurate but complex
  3. Design keys for clustering: Use hash tags for multi-key operations
  4. Never use KEYS: Always use SCAN for pattern matching
  5. Configure eviction: Set maxmemory and appropriate eviction policy
  6. Prevent thundering herd: Use locking or probabilistic refresh
  7. Monitor everything: Hit rate, latency, memory usage, evictions

Redis is deceptively simple to start with but requires careful architecture for enterprise scale. The patterns that work for 100 requests per second often fail at 10,000. Plan for scale from the beginning.
