
How to Build a Distributed Rate Limiter with Redis and Node.js: Implementation Guide

Learn to build a scalable distributed rate limiter using Redis and Node.js. Covers Token Bucket, Sliding Window algorithms, Express middleware, and production optimization strategies.


Recently, while scaling an API service, I faced unexpected traffic spikes threatening system stability. Requests overwhelmed our infrastructure, causing cascading failures. That experience motivated me to implement robust rate limiting - not just locally, but across distributed nodes. Why Redis? Its atomic operations and speed make it ideal for coordinating request counts across instances. Let’s build this together.

First, we establish our foundation. Create the project directory and install essentials:

mkdir distributed-rate-limiter
cd distributed-rate-limiter
npm init -y
npm install redis express
npm install --save-dev typescript @types/node @types/express
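
Generate a TypeScript config to go with it:

npx tsc --init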

Our core interface defines rate limiting behavior:

// src/types/index.ts
export interface RateLimitResult {
  allowed: boolean;
  remaining: number;
  resetTime: Date;
}
export interface IRateLimiter {
  checkLimit(identifier: string): Promise<RateLimitResult>;
}
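
The examples that follow assume a shared node-redis v4 client; the module path and connection URL here are illustrative:

// src/redis-client.ts (URL is illustrative - point it at your Redis)
import { createClient } from 'redis';

export const redisClient = createClient({ url: 'redis://localhost:6379' });

redisClient.on('error', (err) => console.error('Redis client error:', err));

export async function connectRedis(): Promise<void> {
  if (!redisClient.isOpen) {
    await redisClient.connect();
  }
}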

For the token bucket algorithm, we use a Redis Lua script for atomic operations, handling token refill and consumption in a single round trip:

// src/rate-limiters/token-bucket.ts
const refillScript = `
  local key = KEYS[1]
  local capacity = tonumber(ARGV[1])
  local tokens = tonumber(ARGV[2])
  local interval = tonumber(ARGV[3])
  local now = tonumber(ARGV[4])
  
  -- Calculate tokens to add
  local bucket = redis.call('HMGET', key, 'tokens', 'last_refill')
  local current_tokens = tonumber(bucket[1]) or capacity
  local last_refill = tonumber(bucket[2]) or now
  local elapsed = math.max(0, now - last_refill)
  local tokens_to_add = math.floor(elapsed / interval * tokens)
  current_tokens = math.min(capacity, current_tokens + tokens_to_add)
  
  -- Check if request allowed
  local allowed = current_tokens >= 1
  if allowed then
    current_tokens = current_tokens - 1
    redis.call('HMSET', key, 'tokens', current_tokens, 'last_refill', now)
  end
  redis.call('EXPIRE', key, math.ceil(interval / 1000))
  return {allowed and 1 or 0, current_tokens}
`;
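
Here is one way to wrap the script in a class implementing IRateLimiter. This is a minimal sketch assuming the shared node-redis client from earlier; in production you'd typically load the script once with client.scriptLoad and call client.evalSha instead of resending it on every check:

// src/rate-limiters/token-bucket.ts (continued)
import { redisClient } from '../redis-client';
import { IRateLimiter, RateLimitResult } from '../types';

export class TokenBucketLimiter implements IRateLimiter {
  constructor(
    private capacity: number,     // maximum tokens per bucket
    private refillTokens: number, // tokens added per interval
    private intervalMs: number    // refill interval in milliseconds
  ) {}

  async checkLimit(identifier: string): Promise<RateLimitResult> {
    const now = Date.now();
    // The Lua script returns {allowed, remaining} atomically
    const [allowed, remaining] = (await redisClient.eval(refillScript, {
      keys: [`bucket:${identifier}`],
      arguments: [
        this.capacity.toString(),
        this.refillTokens.toString(),
        this.intervalMs.toString(),
        now.toString(),
      ],
    })) as [number, number];

    return {
      allowed: allowed === 1,
      remaining,
      resetTime: new Date(now + this.intervalMs),
    };
  }
}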

The sliding window approach offers greater precision. We track timestamps of recent requests:

// src/rate-limiters/sliding-window.ts
const windowScript = `
  local key = KEYS[1]
  local now = tonumber(ARGV[1])         -- current time in milliseconds
  local windowMs = tonumber(ARGV[2])    -- window length in milliseconds
  local maxRequests = tonumber(ARGV[3]) -- allowed requests per window
  local member = ARGV[4]                -- unique id so same-millisecond requests don't collide

  -- Remove timestamps that have fallen out of the window
  redis.call('ZREMRANGEBYSCORE', key, 0, now - windowMs)
  local currentCount = redis.call('ZCARD', key)

  if currentCount < maxRequests then
    redis.call('ZADD', key, now, member)
    redis.call('EXPIRE', key, math.ceil(windowMs / 1000))
    return {1, maxRequests - currentCount - 1}
  end
  return {0, 0}
`;
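
The invocation mirrors the token bucket; note the unique member passed as ARGV[4] (a UUID here, an illustrative choice) so that concurrent requests landing in the same millisecond each count:

// src/rate-limiters/sliding-window.ts (continued) - sketch of the invocation
import { randomUUID } from 'crypto';
import { redisClient } from '../redis-client';

export async function checkWindow(identifier: string, windowMs: number, maxRequests: number) {
  const now = Date.now();
  const [allowed, remaining] = (await redisClient.eval(windowScript, {
    keys: [`window:${identifier}`],
    arguments: [now.toString(), windowMs.toString(), maxRequests.toString(), randomUUID()],
  })) as [number, number];
  return { allowed: allowed === 1, remaining, resetTime: new Date(now + windowMs) };
}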

For Express integration, middleware becomes crucial. How might we handle varying limits per endpoint? Here’s a flexible solution:

// src/middleware/rate-limit-middleware.ts
import { Request, Response, NextFunction } from 'express';
import { IRateLimiter } from '../types';

export const rateLimitMiddleware = (
  limiter: IRateLimiter,
  identifierExtractor: (req: Request) => string
) => {
  return async (req: Request, res: Response, next: NextFunction) => {
    const id = identifierExtractor(req);
    const result = await limiter.checkLimit(id);
    
    if (!result.allowed) {
      return res.status(429).json({
        error: 'Too many requests',
        retryAfter: result.resetTime.toISOString()
      });
    }
    
    res.set('X-RateLimit-Remaining', result.remaining.toString());
    res.set('X-RateLimit-Reset', result.resetTime.toISOString());
    next();
  };
};
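
Wiring it into an app looks like this; the route, the limits, and the per-IP identifier are example choices:

// src/app.ts - example wiring
import express from 'express';
import { connectRedis } from './redis-client';
import { TokenBucketLimiter } from './rate-limiters/token-bucket';
import { rateLimitMiddleware } from './middleware/rate-limit-middleware';

const app = express();
const limiter = new TokenBucketLimiter(10, 10, 60_000); // roughly 10 requests/minute per client

app.use('/api', rateLimitMiddleware(limiter, (req) => req.ip ?? 'unknown'));
app.get('/api/resource', (_req, res) => { res.json({ ok: true }); });

connectRedis().then(() => app.listen(3000, () => console.log('Listening on :3000')));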

Handling Redis failures requires careful decisions. Should we block traffic or fail open? This fallback maintains functionality during outages:

// Error handling inside a limiter's checkLimit method
// (config holds the limiter's maxRequests/windowMs settings)
try {
  // Redis operations
} catch (error) {
  console.error('Redis failure:', error);
  return {
    allowed: true, // Fail open: prefer availability over strict limiting
    remaining: config.maxRequests,
    resetTime: new Date(Date.now() + config.windowMs)
  };
}

Performance optimization matters at scale. Consider these techniques:

  1. Pipeline batch operations (see the sketch after this list)
  2. Use Redis clusters for sharding
  3. Local caches with short TTLs
  4. Connection pooling
  5. Monitor memory usage patterns
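
On the first point: node-redis v4 pipelines commands issued in the same event-loop tick automatically, so batching can be as simple as firing checks concurrently. A minimal sketch, assuming the types from src/types:

// Concurrent checkLimit calls issued together are auto-pipelined by
// node-redis v4 into fewer round-trips
import { IRateLimiter, RateLimitResult } from './types';

async function checkMany(limiter: IRateLimiter, ids: string[]): Promise<RateLimitResult[]> {
  return Promise.all(ids.map((id) => limiter.checkLimit(id)));
}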

Testing validates our implementation. Use Artillery for load testing:

# load-test.yml
config:
  target: "http://localhost:3000"
  phases:
    - duration: 60
      arrivalRate: 100
scenarios:
  - flow:
      - get:
          url: "/api/resource"

In production deployment:

  • Set appropriate Redis memory policies
  • Enable persistence based on tolerance
  • Monitor Redis metrics closely
  • Implement circuit breakers (a sketch follows this list)
  • Establish alert thresholds
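
On the circuit-breaker point, here is a minimal sketch of the idea: count consecutive Redis failures and fail open while the breaker is tripped. The thresholds are illustrative:

// src/rate-limiters/circuit-breaker.ts - thresholds are illustrative
import { IRateLimiter, RateLimitResult } from '../types';

export class CircuitBreakerLimiter implements IRateLimiter {
  private failures = 0;
  private openedAt = 0;

  constructor(
    private inner: IRateLimiter,
    private maxFailures = 5,
    private cooldownMs = 30_000
  ) {}

  async checkLimit(identifier: string): Promise<RateLimitResult> {
    // While the breaker is open, skip Redis entirely and fail open
    if (this.failures >= this.maxFailures && Date.now() - this.openedAt < this.cooldownMs) {
      return { allowed: true, remaining: 0, resetTime: new Date(Date.now() + this.cooldownMs) };
    }
    try {
      const result = await this.inner.checkLimit(identifier);
      this.failures = 0; // a healthy call closes the breaker
      return result;
    } catch (error) {
      this.failures += 1;
      this.openedAt = Date.now();
      return { allowed: true, remaining: 0, resetTime: new Date(Date.now() + this.cooldownMs) };
    }
  }
}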

What surprises developers most? The token bucket’s burst handling versus sliding window’s precision. Both have valid use cases - choose based on your tolerance for brief spikes.

I encourage testing different configurations under simulated loads. You’ll discover nuances in behavior that documentation can’t capture. What happens when requests arrive in microbursts? How does geographic latency affect distributed coordination?

This implementation balances accuracy with performance. While not perfect, it provides a strong foundation you can extend. The Lua scripts ensure atomicity, while Redis expiration handles cleanup automatically. For most applications, this strikes the right balance.

Found this useful? Implement it in your next project and share your experience! If you improved the approach or found edge cases, comment below - let’s learn together. Like this guide if it saved you research time, and share it with your team.



