Ever found yourself wrestling with distributed systems? I certainly have. After hitting scalability walls with monolithic architectures, I turned to event-driven microservices. This approach transforms how systems communicate, replacing brittle direct calls with resilient, asynchronous events. Today, I’ll walk through building a production-ready e-commerce system using NestJS, RabbitMQ, and MongoDB.
Why these technologies? NestJS provides a structured TypeScript foundation, RabbitMQ ensures reliable messaging, and MongoDB’s flexibility suits diverse service needs. Together, they handle high loads while keeping services decoupled.
Let’s start with our core services: Users, Orders, Inventory, Payment, and Notifications. When a user places an order, the Order Service publishes an OrderCreatedEvent. The Inventory Service listens, checks stock, and replies with an InventoryReservedEvent or an InventoryReservationFailedEvent. Payments and notifications follow the same event-driven pattern.
Setting up is straightforward with Docker:
# docker-compose.yml
services:
  rabbitmq:
    image: rabbitmq:3-management
    ports: ["5672:5672", "15672:15672"]
  mongodb:
    image: mongo:6
    ports: ["27017:27017"]
Shared events keep services in sync without tight coupling. Notice how these events act as contracts between services:
// Shared event library
export class OrderCreatedEvent {
  constructor(
    public readonly orderId: string,
    public readonly userId: string,
    public readonly items: Array<{ productId: string; quantity: number }>,
  ) {}
}
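The reply events are the other half of the contract. For completeness, here is a minimal sketch of them; the reason field on the failure event is illustrative rather than a fixed part of the contract:

// Shared event library (companion events; fields beyond orderId are illustrative)
export class InventoryReservedEvent {
  constructor(public readonly orderId: string) {}
}

export class InventoryReservationFailedEvent {
  constructor(
    public readonly orderId: string,
    public readonly reason: string, // assumed field describing why reservation failed
  ) {}
}

export class OrderCancelledEvent {
  constructor(public readonly orderId: string) {}
}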
In the Order Service, creating an order triggers the event flow:
@Injectable()
export class OrderService {
  constructor(
    @InjectModel(Order.name) private readonly orderModel: Model<Order>, // Order: the Mongoose schema class
    private readonly eventEmitter: EventEmitter2,
  ) {}

  async createOrder(createOrderDto: CreateOrderDto) {
    // Save the order as PENDING before telling the rest of the system about it
    const order = await this.orderModel.create({ ...createOrderDto, status: 'PENDING' });
    this.eventEmitter.emit('order.created', new OrderCreatedEvent(order.id, order.userId, order.items));
    return order;
  }
}
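For context, a thin HTTP layer is all it takes to kick this flow off. A minimal sketch (the controller and route names are just illustrative):

// orders.controller.ts (illustrative): delegates straight to OrderService
@Controller('orders')
export class OrderController {
  constructor(private readonly orderService: OrderService) {}

  @Post()
  create(@Body() dto: CreateOrderDto) {
    return this.orderService.createOrder(dto);
  }
}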
What happens if inventory checks fail? The Inventory Service publishes failure events, triggering compensation logic:
// Inventory Service: consumes order.created from RabbitMQ
@EventPattern('order.created')
async handleOrderCreated(@Payload() event: OrderCreatedEvent) {
  try {
    // Try to reserve every line item; throws if any product is short on stock
    await this.reserveStock(event.items);
    this.eventEmitter.emit('inventory.reserved', new InventoryReservedEvent(event.orderId));
  } catch (error) {
    // Tell the Order Service to start compensation
    this.eventEmitter.emit('inventory.reservation.failed',
      new InventoryReservationFailedEvent(event.orderId, error.message));
  }
}
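On the Order Service side, a matching handler performs the compensation. A minimal sketch, assuming the order is simply flipped to a cancelled status (the status value and field names are illustrative):

// Order Service compensation: undo the pending order when reservation fails
@EventPattern('inventory.reservation.failed')
async handleReservationFailed(@Payload() event: InventoryReservationFailedEvent) {
  await this.orderModel.updateOne(
    { _id: event.orderId },
    { status: 'CANCELLED', cancellationReason: event.reason },
  );
}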
For distributed transactions, we implement the Saga pattern. Each service executes local transactions and emits events. If any step fails, compensating actions roll back previous steps. Ever wondered how e-commerce platforms handle payment failures after inventory reservation? Sagas solve this by triggering an OrderCancelledEvent to release reserved stock, as sketched below.
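Here is a rough sketch of that compensation chain. The payment.failed event name and the releaseStock helper are assumptions for illustration; the important part is that each service undoes only its own local work:

// Order Service: a payment failure cancels the order...
@EventPattern('payment.failed') // hypothetical event name
async handlePaymentFailed(@Payload() event: { orderId: string }) {
  await this.orderModel.updateOne({ _id: event.orderId }, { status: 'CANCELLED' });
  this.eventEmitter.emit('order.cancelled', new OrderCancelledEvent(event.orderId));
}

// Inventory Service: ...and the cancellation releases the reserved stock
@EventPattern('order.cancelled')
async handleOrderCancelled(@Payload() event: OrderCancelledEvent) {
  await this.releaseStock(event.orderId); // assumed helper that returns reserved items to stock
}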
Error handling is critical. RabbitMQ’s dead-letter queues capture failed messages for analysis:
// RabbitMQ module configuration
@Module({
  imports: [
    RabbitMQModule.forRoot(RabbitMQModule, {
      exchanges: [{ name: 'orders', type: 'topic' }],
      uri: 'amqp://admin:password@localhost:5672',
      connectionInitOptions: { wait: true },
      queues: [
        {
          name: 'inventory-queue',
          options: { deadLetterExchange: 'dead-letters' },
        },
      ],
    }),
  ],
})
export class AppModule {}
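With the dead-letter exchange in place, a small consumer can log or replay poisoned messages. A sketch using @golevelup/nestjs-rabbitmq's @RabbitSubscribe; the queue and routing-key names are my own choices, and the 'dead-letters' exchange would also need to be declared in the module config above:

// Dead-letter auditing (illustrative names)
@Injectable()
export class DeadLetterService {
  @RabbitSubscribe({
    exchange: 'dead-letters',
    routingKey: '#',
    queue: 'dead-letter-audit',
  })
  async handleDeadLetter(message: unknown) {
    // Log the failed message so it can be inspected and replayed later
    console.error('Dead-lettered message received', message);
  }
}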
Testing event flows? With Jest you can spy on the event emitter instead of standing up a real broker:
// Testing event emission
it('should publish OrderCreatedEvent on order creation', async () => {
  const emitSpy = jest.spyOn(eventEmitter, 'emit');
  await orderService.createOrder(mockOrderDto);
  expect(emitSpy).toHaveBeenCalledWith('order.created', expect.any(OrderCreatedEvent));
});
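It is worth covering the failure path too. A sketch, assuming reserveStock lives on the same class as the handler and that inventoryController and mockOrderCreatedEvent are prepared test fixtures:

// Testing the compensation path (fixture names are assumptions)
it('should publish InventoryReservationFailedEvent when stock is short', async () => {
  jest.spyOn(inventoryController, 'reserveStock').mockRejectedValue(new Error('out of stock'));
  const emitSpy = jest.spyOn(eventEmitter, 'emit');
  await inventoryController.handleOrderCreated(mockOrderCreatedEvent);
  expect(emitSpy).toHaveBeenCalledWith(
    'inventory.reservation.failed',
    expect.any(InventoryReservationFailedEvent),
  );
});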
Monitoring is non-negotiable. Tools like Prometheus track message latency, while distributed tracing pinpoints bottlenecks. How do you trace a request across five services? OpenTelemetry correlates events using unique IDs passed through message headers.
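A lightweight way to get that correlation without wiring up the full OpenTelemetry SDK is to stamp an ID into the message headers and read it back on the consumer. A sketch using @nestjs/microservices' RmqRecordBuilder and RmqContext; publishing through a ClientProxy and the header name are assumptions on my part, not the setup shown earlier:

// Producer side (e.g., in OrderService): stamp a correlation id onto the outgoing message
publishOrderCreated(event: OrderCreatedEvent, correlationId: string) {
  const record = new RmqRecordBuilder(event)
    .setOptions({ headers: { 'x-correlation-id': correlationId } })
    .build();
  this.client.emit('order.created', record); // this.client: an injected ClientProxy
}

// Consumer side: read the id back and attach it to logs and traces
@EventPattern('order.created')
async handleOrderCreated(@Payload() event: OrderCreatedEvent, @Ctx() context: RmqContext) {
  const headers = context.getMessage().properties.headers ?? {};
  this.logger.log(`Handling order ${event.orderId} [cid=${headers['x-correlation-id']}]`);
}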
For deployment, Kubernetes manages scaling:
# Kubernetes deployment for Order Service
apiVersion: apps/v1
kind: Deployment
metadata:
  name: order-service
spec:
  replicas: 3
  selector:
    matchLabels:
      app: order-service
  template:
    metadata:
      labels:
        app: order-service
    spec:
      containers:
        - name: order-service
          image: my-registry/order-service:latest
          env:
            - name: RABBITMQ_URL
              value: "amqp://rabbitmq"
Common pitfalls? Message ordering isn't guaranteed, so design handlers to cope with out-of-order and duplicate deliveries; idempotent handlers help here, as sketched below. Also, avoid shared databases: each service must own its data and talk to the others only via events.
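One way to get that resilience is to record which event IDs have already been handled and skip repeats. A minimal sketch, assuming events carry a unique eventId and processed IDs live in their own collection (both are assumptions, not part of the contract shown earlier):

// Idempotent consumer sketch: eventId and processedEventModel are assumptions
@EventPattern('order.created')
async handleOrderCreated(@Payload() event: OrderCreatedEvent & { eventId: string }) {
  const alreadySeen = await this.processedEventModel.findOne({ eventId: event.eventId });
  if (alreadySeen) {
    return; // duplicate delivery, safe to ignore
  }
  await this.reserveStock(event.items);
  // A unique index on eventId closes the race between the check above and this insert
  await this.processedEventModel.create({ eventId: event.eventId });
}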
This architecture shines in high-traffic scenarios. During peak sales, our system processed 5,000 orders/minute by scaling RabbitMQ consumers and MongoDB shards. The key? Start simple, add resilience patterns gradually, and monitor everything.
Found this useful? Try implementing a returns service using these patterns. Share your experiences below—I’d love to hear what challenges you faced! If this helped you, please like and share with your network.