As I worked on scaling a traditional monolithic application recently, I faced constant bottlenecks in synchronous communication between components. Every new feature caused cascading failures across the system. That frustration led me to explore event-driven microservices as a solution. Today, I’ll walk you through building a robust architecture using NestJS, RabbitMQ, and Redis that handles real e-commerce demands. Ready to transform how your services communicate?
Our architecture features six core services coordinated through events. When a user places an order, the Order Service publishes an OrderCreatedEvent. RabbitMQ routes this to the Inventory Service for stock checks. Upon success, a PaymentRequestedEvent triggers the Payment Service. Finally, the Notification Service confirms the order completion. This asynchronous flow prevents system-wide failures when one service struggles.
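To keep the rest of the walkthrough concrete, here is the event vocabulary I'll use, expressed as a small shared constant. This is a minimal sketch: the ORDER_EVENTS object and the OrderCompleted name are conventions I'm introducing here, not something NestJS prescribes.

// order-events.constants.ts (we'll keep this in the shared common library)
export const ORDER_EVENTS = {
  ORDER_CREATED: 'OrderCreated',         // Order Service -> RabbitMQ -> Inventory Service
  PAYMENT_REQUESTED: 'PaymentRequested', // emitted after a successful stock check
  PAYMENT_FAILED: 'PaymentFailed',       // Payment Service -> Order Service (compensation)
  ORDER_COMPLETED: 'OrderCompleted',     // consumed by the Notification Service
} as const;

export type OrderEventName = (typeof ORDER_EVENTS)[keyof typeof ORDER_EVENTS];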
Let’s set up our monorepo foundation. I prefer a structured approach that keeps services independent yet manageable:
nest new ecommerce-microservices --package-manager npm
cd ecommerce-microservices
nest generate app api-gateway
nest generate app order-service
# Repeat for user, product, payment, notification services
nest generate library common
Install the essential packages:
npm install @nestjs/microservices amqplib amqp-connection-manager
npm install @nestjs/config @nestjs/event-emitter @nestjs/terminus ioredis
Why do you think we separate services while sharing common libraries? It balances autonomy with consistency.
For cross-service communication, we establish a shared event contract. In our common library:
// base-event.interface.ts
export interface DomainEvent<T = any> {
  id: string;
  timestamp: Date;
  eventType: string;
  aggregateId: string;
  payload: T;
}
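Concrete events are then just typed specializations of DomainEvent. Here's a quick sketch of the two we'll lean on later; the payload shapes are my own assumptions rather than a fixed schema:

// order-events.types.ts (also in the common library)
import { DomainEvent } from './base-event.interface';

export interface OrderItem {
  productId: string;
  quantity: number;
  price: number;
}

export type OrderCreatedEvent = DomainEvent<{
  id: string;
  userId: string;
  items: OrderItem[];
}>;

export type PaymentFailedEvent = DomainEvent<{
  orderId: string;
  reason: string;
}>;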
We then implement a base service class that handles event emission:
// base.service.ts
import { Injectable } from '@nestjs/common';
import { EventEmitter2 } from '@nestjs/event-emitter';
import { DomainEvent } from './base-event.interface';

@Injectable()
export abstract class BaseService {
  protected events: DomainEvent[] = [];

  constructor(protected readonly eventEmitter: EventEmitter2) {}

  protected addEvent(event: DomainEvent): void {
    this.events.push(event);
  }

  protected async commitEvents(): Promise<void> {
    const events = [...this.events];
    this.events = [];
    for (const event of events) {
      await this.eventEmitter.emitAsync(event.eventType, event);
    }
  }
}
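Within a single service, any provider can react to these in-process events through @nestjs/event-emitter's @OnEvent decorator (this requires EventEmitterModule.forRoot() in the service's root module). A small usage sketch; the audit listener and the @app/common alias (the default produced by nest generate library common) are my assumptions:

// order-audit.listener.ts
import { Injectable, Logger } from '@nestjs/common';
import { OnEvent } from '@nestjs/event-emitter';
import { DomainEvent } from '@app/common';

@Injectable()
export class OrderAuditListener {
  private readonly logger = new Logger(OrderAuditListener.name);

  @OnEvent('OrderCreated')
  handleOrderCreated(event: DomainEvent): void {
    // Runs in-process whenever commitEvents() emits an OrderCreated event.
    this.logger.log(`Audit: order ${event.aggregateId} created at ${event.timestamp.toISOString()}`);
  }
}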
RabbitMQ becomes our nervous system. Notice how we configure durable queues to survive restarts:
// rabbitmq.config.ts
import { RmqOptions, Transport } from '@nestjs/microservices';

export const getRabbitMQConfig = (queue: string): RmqOptions => ({
  transport: Transport.RMQ,
  options: {
    urls: [process.env.RABBITMQ_URL ?? 'amqp://localhost:5672'],
    queue,
    queueOptions: {
      durable: true,
      arguments: { 'x-message-ttl': 60000 },
    },
    prefetchCount: 10,
    noAck: false,
  },
});
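Each service attaches this transport at bootstrap. Here's a sketch of the Order Service's main.ts as a hybrid application (HTTP plus RabbitMQ); the order_queue name and port 3001 are my choices, and I'm assuming getRabbitMQConfig is exported from the shared library under the default @app/common alias:

// apps/order-service/src/main.ts
import { NestFactory } from '@nestjs/core';
import { getRabbitMQConfig } from '@app/common';
import { OrderServiceModule } from './order-service.module';

async function bootstrap() {
  const app = await NestFactory.create(OrderServiceModule);
  // Attach the RabbitMQ listener alongside the HTTP server.
  app.connectMicroservice(getRabbitMQConfig('order_queue'));
  await app.startAllMicroservices();
  await app.listen(3001);
}
bootstrap();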
In the Order Service, creating an order triggers our event chain:
// order.entity.ts
import { randomUUID } from 'crypto';
import { DomainEvent, OrderItem } from '@app/common';

export class Order {
  readonly events: DomainEvent[] = [];

  constructor(public id: string, public userId: string, public items: OrderItem[]) {}

  create(): void {
    this.events.push({
      id: randomUUID(),
      timestamp: new Date(),
      eventType: 'OrderCreated',
      aggregateId: this.id,
      payload: { id: this.id, userId: this.userId, items: this.items },
    });
  }
}
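The Order Service itself extends BaseService, so the event reaches in-process listeners and crosses the service boundary via RabbitMQ in one place. A sketch under a couple of assumptions: the ORDER_EVENTS_CLIENT token is registered with ClientsModule using the same RabbitMQ options, persistence is omitted, and the shared types come from @app/common:

// order.service.ts
import { Inject, Injectable } from '@nestjs/common';
import { EventEmitter2 } from '@nestjs/event-emitter';
import { ClientProxy } from '@nestjs/microservices';
import { randomUUID } from 'crypto';
import { BaseService, OrderItem } from '@app/common';
import { Order } from './order.entity';

@Injectable()
export class OrderService extends BaseService {
  constructor(
    eventEmitter: EventEmitter2,
    @Inject('ORDER_EVENTS_CLIENT') private readonly client: ClientProxy,
  ) {
    super(eventEmitter);
  }

  async createOrder(dto: { userId: string; items: OrderItem[] }): Promise<Order> {
    const order = new Order(randomUUID(), dto.userId, dto.items);
    order.create(); // records the OrderCreatedEvent on the aggregate
    for (const event of order.events) {
      this.addEvent(event);                     // in-process listeners (and our unit test) see it
      this.client.emit(event.eventType, event); // cross-service delivery via RabbitMQ
    }
    await this.commitEvents();
    return order;
  }
}

ClientProxy.emit is fire-and-forget, which is exactly what we want for events; request/response calls would use send instead.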
What happens if payment fails after inventory reservation? We implement sagas for transactional integrity. The Order Service orchestrates compensation actions:
// order.saga.ts
@Controller()
export class OrderSaga {
  @EventPattern('PaymentFailed')
  async handlePaymentFailedEvent(@Payload() event: PaymentFailedEvent) {
    // Compensate: release the reserved stock, then tell the user.
    await this.inventoryService.reverseInventory(event.payload.orderId);
    await this.notifyUser(`Payment failed for order ${event.payload.orderId}`);
  }
}
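On the other side of that exchange, the Payment Service publishes the failure event when a charge is rejected. A rough sketch; chargeCard and the PAYMENT_EVENTS_CLIENT token are placeholders, not a real payment API:

// payment.handler.ts (Payment Service)
import { Controller, Inject } from '@nestjs/common';
import { ClientProxy, EventPattern, Payload } from '@nestjs/microservices';
import { randomUUID } from 'crypto';
import { DomainEvent } from '@app/common';

@Controller()
export class PaymentHandler {
  constructor(@Inject('PAYMENT_EVENTS_CLIENT') private readonly client: ClientProxy) {}

  @EventPattern('PaymentRequested')
  async handlePaymentRequested(@Payload() event: DomainEvent): Promise<void> {
    try {
      await this.chargeCard(event.payload);
    } catch (err) {
      // Publish the compensation trigger consumed by the Order Service saga above.
      this.client.emit('PaymentFailed', {
        id: randomUUID(),
        timestamp: new Date(),
        eventType: 'PaymentFailed',
        aggregateId: event.aggregateId,
        payload: { orderId: event.aggregateId, reason: (err as Error).message },
      });
    }
  }

  private async chargeCard(payload: unknown): Promise<void> {
    // Placeholder for the real payment gateway call.
  }
}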
Redis solves two critical challenges: distributed caching and session management. In the Product Service:
// product.service.ts
async getProduct(id: string) {
  const cached = await this.redis.get(`product:${id}`);
  if (cached) return JSON.parse(cached);

  const product = await this.repository.findOne(id);
  await this.redis.set(`product:${id}`, JSON.stringify(product), 'EX', 60);
  return product;
}
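The this.redis client above is a plain ioredis instance exposed as a Nest provider. One way to wire it up (the REDIS_CLIENT token and the REDIS_HOST/REDIS_PORT variables are my own naming):

// redis.provider.ts
import { Provider } from '@nestjs/common';
import Redis from 'ioredis';

export const REDIS_CLIENT = 'REDIS_CLIENT';

export const redisProvider: Provider = {
  provide: REDIS_CLIENT,
  useFactory: () =>
    new Redis({
      host: process.env.REDIS_HOST ?? 'localhost',
      port: Number(process.env.REDIS_PORT ?? 6379),
    }),
};

Register redisProvider in the Product module and inject it with @Inject(REDIS_CLIENT).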
For monitoring, we implement health checks with @nestjs/terminus that report to our API gateway; its MicroserviceHealthIndicator can ping both RabbitMQ and Redis:
// health.controller.ts
@Get('health')
@HealthCheck()
check() {
  return this.health.check([
    () =>
      this.microservice.pingCheck<RmqOptions>('rabbitmq', {
        transport: Transport.RMQ,
        options: { urls: [process.env.RABBITMQ_URL ?? 'amqp://localhost:5672'] },
      }),
    () =>
      this.microservice.pingCheck<RedisOptions>('redis', {
        transport: Transport.REDIS,
        options: { host: process.env.REDIS_HOST ?? 'localhost', port: 6379 },
      }),
  ]);
}
Testing requires simulating our event bus. I use Jest mocks for unit tests:
// order.service.spec.ts
it('should publish OrderCreatedEvent', async () => {
  const emitSpy = jest.spyOn(eventEmitter, 'emitAsync');
  await orderService.createOrder(testOrder);
  expect(emitSpy).toHaveBeenCalledWith('OrderCreated', expect.any(Object));
});
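For that spy to work, the spec needs a lightweight testing module with the RabbitMQ client stubbed out. A sketch of the setup, assuming the OrderService shape and ORDER_EVENTS_CLIENT token from earlier:

// order.service.spec.ts (setup)
import { Test } from '@nestjs/testing';
import { EventEmitter2 } from '@nestjs/event-emitter';
import { OrderService } from './order.service';

let orderService: OrderService;
let eventEmitter: EventEmitter2;

beforeEach(async () => {
  const moduleRef = await Test.createTestingModule({
    providers: [
      OrderService,
      EventEmitter2, // a real emitter so emitAsync can be spied on
      { provide: 'ORDER_EVENTS_CLIENT', useValue: { emit: jest.fn() } }, // stubbed RabbitMQ client
    ],
  }).compile();

  orderService = moduleRef.get(OrderService);
  eventEmitter = moduleRef.get(EventEmitter2);
});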
Deployment becomes straightforward with Docker Compose:
# docker-compose.yml
services:
  rabbitmq:
    image: rabbitmq:3-management
  redis:
    image: redis:alpine
  order-service:
    build: ./apps/order-service
    environment:
      RABBITMQ_URL: amqp://rabbitmq:5672
      REDIS_HOST: redis
    depends_on:
      - rabbitmq
      - redis
Common pitfalls? I've learned to always set message TTLs (and a dead-letter exchange) in RabbitMQ so unconsumed messages don't pile up indefinitely. Also, set an explicit Redis maxmemory policy, such as allkeys-lru for a pure cache, to avoid out-of-memory crashes during traffic spikes. And instrument everything with OpenTelemetry; you'll thank me during a 3 AM outage.
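For the dead-letter side specifically, RabbitMQ can route expired or rejected messages to a dedicated exchange via queue arguments. A sketch extending the earlier config helper; the dlx exchange name and routing key are arbitrary choices:

// rabbitmq-dlx.config.ts
import { RmqOptions, Transport } from '@nestjs/microservices';

export const getRabbitMQConfigWithDLX = (queue: string): RmqOptions => ({
  transport: Transport.RMQ,
  options: {
    urls: [process.env.RABBITMQ_URL ?? 'amqp://localhost:5672'],
    queue,
    queueOptions: {
      durable: true,
      arguments: {
        'x-message-ttl': 60000,              // expire unconsumed messages after 60 seconds
        'x-dead-letter-exchange': 'dlx',     // route expired/rejected messages to this exchange
        'x-dead-letter-routing-key': 'dead', // picked up by whatever monitoring consumer you attach
      },
    },
    prefetchCount: 10,
    noAck: false,
  },
});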
Building this transformed how I approach distributed systems. The decoupling allows teams to deploy independently while maintaining system integrity. Have you encountered scenarios where event-driven approaches solved your architectural headaches? Share your experiences below! If this guide helped you, please like and share—it helps others discover practical microservices patterns. What topics should I cover next?