Crafting Scalable Event-Driven Microservices with NestJS, RabbitMQ, and Redis
Recently, while designing an e-commerce platform, I faced challenges with synchronous service communication. When the payment service slowed down during peak sales, it cascaded into order processing failures. This pain point led me to explore event-driven architectures - and I want to share how NestJS with RabbitMQ and Redis solved these problems elegantly.
Event-driven systems handle actions as discrete events. When a user places an order, we don’t lock services in synchronous chains. Instead, we emit an “order created” event. Interested services react independently. This isolation prevents system-wide failures. Why let one struggling service drag down others when they can work autonomously?
Let’s build our foundation:
nest new order-service --strict
nest new inventory-service --strict
nest new notification-service --strict
We share core event definitions across services:
// shared/events/order.events.ts
export interface OrderItem {
  productId: string;
  quantity: number;
}

export class OrderCreatedEvent {
  constructor(
    public readonly orderId: string,
    public readonly customerId: string,
    public readonly items: OrderItem[],
    public readonly totalAmount: number,
  ) {}
}
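One subtlety with shared event classes: they cross the broker as JSON, so consumers receive plain objects, not class instances. A quick standalone sketch of that round-trip (names match the shared definition above):

```typescript
interface OrderItem {
  productId: string;
  quantity: number;
}

class OrderCreatedEvent {
  constructor(
    public readonly orderId: string,
    public readonly customerId: string,
    public readonly items: OrderItem[],
    public readonly totalAmount: number,
  ) {}
}

// Serialization keeps the data but drops the prototype.
const event = new OrderCreatedEvent(
  'order-123',
  'user-456',
  [{ productId: 'prod-789', quantity: 2 }],
  199.99,
);
const revived = JSON.parse(JSON.stringify(event)) as OrderCreatedEvent;

console.log(revived.orderId); // 'order-123'
console.log(revived instanceof OrderCreatedEvent); // false: plain object now
```

This is why event classes work best as pure data carriers: any behavior you attach to them won't survive the trip through RabbitMQ.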
RabbitMQ acts as our central nervous system. Notice how we configure dead-letter handling:
// order-service/src/messaging/rabbitmq.config.ts
import { Transport, RmqOptions } from '@nestjs/microservices';

export const rabbitmqConfig: RmqOptions = {
  transport: Transport.RMQ,
  options: {
    urls: [process.env.RABBITMQ_URL ?? 'amqp://localhost:5672'],
    queue: 'orders_queue',
    queueOptions: {
      durable: true,
      arguments: {
        'x-dead-letter-exchange': 'dlx.exchange',
        'x-dead-letter-routing-key': 'failed_orders',
      },
    },
  },
};
When a consumer rejects a message without requeueing it, RabbitMQ routes it to the dead-letter exchange instead of dropping it on the floor. How many systems have you seen where failed messages just vanish?
Our order service emits events when orders change state:
// order-service/src/orders/orders.controller.ts
@Post()
async createOrder(@Body() createOrderDto: CreateOrderDto) {
  const order = await this.ordersService.create(createOrderDto);
  // Publish over RabbitMQ so other services can react;
  // this.client is an injected ClientProxy configured with the RMQ transport
  this.client.emit('order.created', new OrderCreatedEvent(
    order.id,
    order.customerId,
    order.items,
    order.totalAmount,
  ));
  return order;
}
The inventory service listens reactively:
// inventory-service/src/events/order-created.listener.ts
@EventPattern('order.created') // delivered via the RabbitMQ transport
async handleOrderCreated(event: OrderCreatedEvent) {
  const canReserve = await this.inventoryService.reserveItems(
    event.orderId,
    event.items,
  );
  if (canReserve) {
    this.client.emit('inventory.reserved', event);
  } else {
    this.client.emit('inventory.reservation_failed', event);
  }
}
Ever wondered what happens when inventory checks fail mid-process? We’ll cover that shortly.
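One way to reason about that failure path is as a pure state transition: an order only moves out of PENDING on the first inventory outcome it sees. A minimal sketch (status names are illustrative, not from the code above), easy to unit test without a broker:

```typescript
type OrderStatus = 'PENDING' | 'CONFIRMED' | 'FAILED';

// Pure transition function: which status an order moves to for each event.
function nextStatus(current: OrderStatus, eventName: string): OrderStatus {
  if (current !== 'PENDING') return current; // terminal states never move
  switch (eventName) {
    case 'inventory.reserved':
      return 'CONFIRMED';
    case 'inventory.reservation_failed':
      return 'FAILED';
    default:
      return current; // unknown events are ignored
  }
}

console.log(nextStatus('PENDING', 'inventory.reservation_failed')); // 'FAILED'
console.log(nextStatus('FAILED', 'inventory.reserved')); // 'FAILED': no un-failing
```

Keeping the decision pure and the event handler a thin wrapper around it makes the failure paths trivially testable.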
Redis solves two critical needs: caching product data to reduce database loads and managing distributed sessions:
// inventory-service/src/services/inventory.service.ts
async getProductStock(productId: string): Promise<number> {
  const cacheKey = `product:${productId}:stock`;
  const cachedStock = await this.redisService.get(cacheKey);
  if (cachedStock !== null) return parseInt(cachedStock, 10); // "0" is still a hit
  const dbStock = await this.queryInventoryDB(productId);
  await this.redisService.set(cacheKey, String(dbStock), 'EX', 300); // 5 min TTL
  return dbStock;
}
Notice the cache expiration? It’s a balance between freshness and performance.
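The cache-aside pattern above can be sketched without Redis at all; here an in-memory Map stands in for the cache so the TTL logic is visible (a toy stand-in, not the Redis client):

```typescript
// Minimal TTL cache mimicking Redis GET / SET ... EX semantics.
class TtlCache {
  private store = new Map<string, { value: string; expiresAt: number }>();

  set(key: string, value: string, ttlSeconds: number): void {
    this.store.set(key, { value, expiresAt: Date.now() + ttlSeconds * 1000 });
  }

  // Expired entries behave as if they never existed.
  get(key: string): string | null {
    const entry = this.store.get(key);
    if (!entry || Date.now() >= entry.expiresAt) {
      this.store.delete(key);
      return null;
    }
    return entry.value;
  }
}

const cache = new TtlCache();
let dbReads = 0;

function getStock(productId: string): number {
  const hit = cache.get(`product:${productId}:stock`);
  if (hit !== null) return parseInt(hit, 10);
  dbReads++; // stand-in for the real database query
  const stock = 42; // pretend this came from the database
  cache.set(`product:${productId}:stock`, String(stock), 300);
  return stock;
}

getStock('prod-789');
getStock('prod-789');
console.log(dbReads); // 1: the second read was served from cache
```

The shorter the TTL, the fresher the data and the more database reads you pay for; 300 seconds is a reasonable starting point for slow-moving stock counts.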
For event sourcing, we log every state change:
// order-service/src/events/order-event.sourcing.ts
export class OrderEventSourcing {
  constructor(@InjectRedis() private readonly redis: Redis) {}

  async recordEvent(orderId: string, eventType: string, payload: object) {
    const event = {
      timestamp: new Date().toISOString(),
      eventType,
      payload,
    };
    // RPUSH appends, so the list stays in chronological order for replay
    await this.redis.rpush(`order:${orderId}:events`, JSON.stringify(event));
  }
}
This audit trail helps reconstruct order histories - crucial for debugging complex flows.
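Reconstructing an order from its event list is just a fold over the events in chronological order. A hedged sketch of the replay side (event names and statuses are illustrative):

```typescript
interface StoredEvent {
  timestamp: string;
  eventType: string;
  payload: object;
}

// Replay oldest-first; each state-changing event overwrites the status.
function reconstructStatus(events: StoredEvent[]): string {
  let status = 'UNKNOWN';
  for (const e of events) {
    if (e.eventType === 'order.created') status = 'PENDING';
    else if (e.eventType === 'inventory.reserved') status = 'CONFIRMED';
    else if (e.eventType === 'inventory.reservation_failed') status = 'FAILED';
  }
  return status;
}

const history: StoredEvent[] = [
  { timestamp: '2024-01-01T10:00:00Z', eventType: 'order.created', payload: {} },
  { timestamp: '2024-01-01T10:00:01Z', eventType: 'inventory.reserved', payload: {} },
];
console.log(reconstructStatus(history)); // 'CONFIRMED'
```

In the real service the input would be the parsed entries from `order:{orderId}:events`; the replay logic itself stays this simple.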
Errors are inevitable. Our dead-letter queue configuration catches failures:
// shared/config/dlq.config.ts
// @RabbitSubscribe (from @golevelup/nestjs-rabbitmq) fits here: the DLQ
// consumer only listens, while @RabbitRPC would imply sending a reply.
@RabbitSubscribe({
  exchange: 'dlx.exchange',
  routingKey: 'failed_orders',
  queue: 'order_dlq',
})
async handleFailedOrder(message: Record<string, unknown>) {
  this.logger.error(`Processing failed order: ${message.orderId}`);
  await this.dlqService.analyzeFailure(message);
}
What’s your strategy for handling “poison messages” that repeatedly fail?
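One common guard builds on RabbitMQ's behavior of recording each dead-lettering in the message's `x-death` header: count prior deaths and stop retrying past a threshold. A sketch of just the decision logic (header shape simplified to the fields used):

```typescript
// Each x-death entry records how often the message died for one queue/reason.
interface XDeathEntry {
  count: number;
  queue: string;
  reason: string;
}

// Sum the death counts; past maxRetries, park the message for manual
// inspection instead of requeueing it yet again.
function shouldRetry(
  headers: Record<string, unknown>,
  maxRetries = 3,
): boolean {
  const deaths = (headers['x-death'] as XDeathEntry[] | undefined) ?? [];
  const attempts = deaths.reduce((sum, d) => sum + d.count, 0);
  return attempts < maxRetries;
}

console.log(shouldRetry({})); // true: never dead-lettered yet
```

The DLQ handler above could call `shouldRetry` first, republishing to the original queue on true and persisting the message for a human on false.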
Testing event flows requires simulating real-world conditions:
// inventory-service/test/inventory.events.spec.ts
it('should process inventory reservation', async () => {
  const mockEvent = new OrderCreatedEvent(
    'order-123',
    'user-456',
    [{ productId: 'prod-789', quantity: 2 }],
    199.99,
  );
  await publisher.publish('order.created', mockEvent);
  await new Promise(resolve => setTimeout(resolve, 500)); // allow async processing
  expect(inventoryService.reserveItems).toHaveBeenCalled();
});
That deliberate pause gives the asynchronous consumer time to finish before we assert; without it, the expectation would race the event handler.
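A fixed sleep either wastes time or flakes when processing runs long; polling until the condition holds is sturdier. A framework-agnostic helper (a sketch; the names are mine, not from any test library):

```typescript
// Poll a condition on a short interval until it holds or the timeout expires.
async function waitFor(
  condition: () => boolean,
  timeoutMs = 2000,
  intervalMs = 25,
): Promise<void> {
  const deadline = Date.now() + timeoutMs;
  while (!condition()) {
    if (Date.now() > deadline) {
      throw new Error(`waitFor: condition not met within ${timeoutMs}ms`);
    }
    await new Promise(resolve => setTimeout(resolve, intervalMs));
  }
}

// Usage idea: replace the fixed 500ms sleep in the spec above with
// something like: await waitFor(() => reserveSpy.mock.calls.length > 0);
```

The test then finishes as soon as the handler runs, and only slow runs pay the full timeout.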
In production, we monitor everything:
# Prometheus metrics endpoint
curl http://inventory-service:3000/metrics
# RabbitMQ management
docker run -d --rm --name rabbitmq -p 5672:5672 -p 15672:15672 rabbitmq:management
Dashboards showing event throughput and error rates become our early warning system.
Deploying? Consider this Docker Compose snippet:
services:
  rabbitmq:
    image: rabbitmq:management
    ports:
      - "5672:5672"
      - "15672:15672"
  redis:
    image: redis:alpine
    ports:
      - "6379:6379"
  order-service:
    build: ./order-service
    environment:
      RABBITMQ_URL: amqp://rabbitmq
      REDIS_URL: redis://redis
Kubernetes deployments follow similar patterns with managed cloud services.
This journey transformed how I build resilient systems. The isolation prevents cascading failures. The visibility through event sourcing simplifies debugging. The scalability comes naturally - just add more consumers.
What challenges have you faced with microservice communication? Share your experiences below! If this approach resonates with you, like and share this with your network. Let’s discuss in the comments - I’d love to hear how you implement event-driven patterns.