The idea of building robust microservices has been on my mind ever since I witnessed how monolithic systems crumble under scale. Just last month, a production outage caused by tangled service dependencies cost our team three sleepless nights. That’s when I decided to implement an event-driven architecture using NestJS, RabbitMQ, and MongoDB - and today I’ll show you exactly how to do it. Stick around as I share practical patterns I’ve battle-tested in production.
When designing our e-commerce system, we needed independent services that could evolve separately. The core consists of four services: Orders, Inventory, Payments, and Notifications. Here’s how they interact: when an order is created, the inventory service reserves products. If successful, payment processing kicks in. Finally, notifications inform the customer. But what happens if inventory reservation fails midway? We’ll solve that.
Start by creating our workspace and services:
nest new order-service
nest new inventory-service
# Repeat for payment-service and notification-service
We need shared code between services. Create a shared library with base event classes:
// shared/src/events/base.event.ts
import { randomUUID } from 'crypto'; // Node's built-in UUID generator

export abstract class BaseEvent {
  readonly id: string = randomUUID();
  readonly eventType: string = this.constructor.name; // e.g. 'OrderCreatedEvent'
  readonly timestamp: Date = new Date();

  constructor(
    public readonly aggregateId: string,
    public readonly aggregateType: string
  ) {}
}
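The event bus and the listeners further down both depend on a small handler contract that the snippets assume but never show; here's a minimal sketch of it:
// shared/src/events/event-handler.interface.ts
// A minimal sketch of the contract the event bus and listeners assume
export interface EventHandler<T extends BaseEvent = BaseEvent> {
  handle(event: T): Promise<void>;
}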
Now for the messaging backbone. RabbitMQ handles our events with dead-letter queues for error handling:
// shared/src/event-bus/event-bus.service.ts
import { Injectable } from '@nestjs/common';
import { Channel, ConsumeMessage } from 'amqplib';
import { BaseEvent } from '../events/base.event';
import { EventHandler } from '../events/event-handler.interface';

@Injectable()
export class EventBusService {
  // Assumes an amqplib Channel is supplied by a custom provider at bootstrap
  constructor(private readonly channel: Channel) {}

  async publish(event: BaseEvent) {
    const message = JSON.stringify(event);
    // Derive the routing key from the event instead of hardcoding one key for everything
    this.channel.publish('events', event.eventType,
      Buffer.from(message),
      { persistent: true } // Ensure message survives restarts
    );
  }

  async subscribe(queue: string, handler: EventHandler) {
    await this.channel.assertQueue(queue, {
      durable: true,
      deadLetterExchange: 'events.dlx' // Route failed messages
    });
    await this.channel.bindQueue(queue, 'events', '#'); // Receive every event on the topic exchange
    await this.channel.consume(queue, async (msg: ConsumeMessage | null) => {
      if (!msg) return;
      try {
        const event = this.parseEvent(msg.content);
        await handler.handle(event);
        this.channel.ack(msg);
      } catch (error) {
        this.channel.nack(msg, false, false); // requeue=false: send to DLQ
      }
    });
  }

  private parseEvent(content: Buffer): BaseEvent {
    return JSON.parse(content.toString()) as BaseEvent;
  }
}
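The snippet assumes the 'events' topic exchange and its dead-letter counterpart already exist. A sketch of the one-time wiring inside the same service, using plain amqplib calls (the exchange names match the strings above; the parking-queue name is an assumption):
// shared/src/event-bus/event-bus.service.ts (continued sketch)
async onModuleInit() {
  // Topic exchange that every service publishes to
  await this.channel.assertExchange('events', 'topic', { durable: true });
  // Dead-letter exchange plus a parking queue for failed messages
  await this.channel.assertExchange('events.dlx', 'fanout', { durable: true });
  await this.channel.assertQueue('events.dead-letter', { durable: true });
  await this.channel.bindQueue('events.dead-letter', 'events.dlx', '');
}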
In our order service, we publish events like this:
// order-service/src/orders/orders.service.ts
async createOrder(orderDto: CreateOrderDto): Promise<Order> {
  const order = await this.ordersRepository.create(orderDto);
  const event = new OrderCreatedEvent(
    order.id,
    order.customerId,
    order.items,
    order.total
  );
  // Note: saving and publishing are two separate writes; if the process dies
  // between them the event is lost (an outbox pattern closes that gap)
  await this.eventBus.publish(event);
  return order;
}
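The article never shows OrderCreatedEvent itself, so here's a minimal sketch of how it might extend BaseEvent. The OrderItem shape is an assumption, and the class-validator decorators are there so the contract test near the end has something meaningful to check:
// shared/src/events/order-created.event.ts — a sketch; field shapes are assumptions
import { IsArray, IsNumber, IsString } from 'class-validator';

export interface OrderItem {
  productId: string;
  quantity: number;
}

export class OrderCreatedEvent extends BaseEvent {
  @IsString() readonly customerId: string;
  @IsArray() readonly items: OrderItem[];
  @IsNumber() readonly total: number;

  constructor(orderId: string, customerId: string, items: OrderItem[], total: number) {
    super(orderId, 'Order'); // aggregateId = the order's id
    this.customerId = customerId;
    this.items = items;
    this.total = total;
  }
}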
The inventory service then listens and reacts:
// inventory-service/src/listeners/order.listener.ts
@Injectable()
export class OrderCreatedListener implements EventHandler<OrderCreatedEvent> {
  constructor(private readonly inventoryService: InventoryService) {}

  async handle(event: OrderCreatedEvent) {
    for (const item of event.items) {
      await this.inventoryService.reserve(item.productId, item.quantity);
    }
    // What if we run out of stock mid-process? That's where the saga comes in.
  }
}
For reliable transactions across services, we use the saga pattern. When payment fails after inventory reservation, we trigger compensation:
// payment-service/src/sagas/order.saga.ts
async run(orderId: string) {
  try {
    const payment = await this.paymentService.process(orderId);
    await this.eventBus.publish(
      new PaymentProcessedEvent(orderId, payment.id)
    );
  } catch (error) {
    // Compensating action: broadcast the failure so other services can undo their work
    await this.eventBus.publish(
      new OrderCancelledEvent(orderId, 'Payment failed')
    );
    // Now inventory must revert its reservation (see the listener below)
  }
}
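Closing that loop, the inventory service subscribes to the cancellation event and releases whatever it reserved. A sketch, assuming a release() method mirroring reserve():
// inventory-service/src/listeners/order-cancelled.listener.ts — a sketch;
// release() is an assumed counterpart to reserve()
@Injectable()
export class OrderCancelledListener implements EventHandler<OrderCancelledEvent> {
  constructor(private readonly inventoryService: InventoryService) {}

  async handle(event: OrderCancelledEvent) {
    // Undo the reservation made when the order was created
    await this.inventoryService.release(event.aggregateId);
  }
}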
Event sourcing with MongoDB gives us an audit trail. We store all state changes as events:
// order-service/src/events/order.events.ts
import { Entity, ObjectIdColumn, Column, Index, CreateDateColumn } from 'typeorm';
import { ObjectId } from 'mongodb';

@Entity()
export class OrderEventEntity {
  @ObjectIdColumn()
  _id: ObjectId; // MongoDB's native primary key

  @Column()
  id: string; // The event's own UUID from BaseEvent

  @Index()
  @Column()
  aggregateId: string; // Indexed: rehydration queries filter on this field

  @Column()
  eventType: string;

  // MongoDB stores documents as BSON, so a plain column holds the payload
  // ('jsonb' is a Postgres column type and doesn't apply here)
  @Column()
  payload: Record<string, any>;

  @CreateDateColumn()
  timestamp: Date;
}
// Rehydrating order state
async getOrder(id: string): Promise<Order> {
  const events = await this.eventRepository.find({
    where: { aggregateId: id },
    order: { timestamp: 'ASC' } // Events must replay in the order they occurred
  });
  return events.reduce((order, event) => {
    return applyEvent(order, event); // Apply each event in sequence
  }, new Order());
}
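The reducer is where the real work happens. A minimal sketch of what applyEvent could look like; the Order fields and status names are assumptions:
// A sketch of the reducer; Order fields and status names are assumptions
function applyEvent(order: Order, event: OrderEventEntity): Order {
  switch (event.eventType) {
    case 'OrderCreatedEvent':
      return Object.assign(order, event.payload, { status: 'created' });
    case 'PaymentProcessedEvent':
      return Object.assign(order, { status: 'paid' });
    case 'OrderCancelledEvent':
      return Object.assign(order, { status: 'cancelled' });
    default:
      return order; // Unknown event types are skipped during replay
  }
}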
For monitoring, we correlate events using distributed tracing:
// Global interceptor for tracing
import { CallHandler, ExecutionContext, Injectable, Logger, NestInterceptor } from '@nestjs/common';
import { tap } from 'rxjs/operators';
import { v4 as uuidv4 } from 'uuid';

@Injectable()
export class TracingInterceptor implements NestInterceptor {
  private readonly logger = new Logger(TracingInterceptor.name);

  intercept(context: ExecutionContext, next: CallHandler) {
    const request = context.switchToHttp().getRequest();
    // Reuse the caller's trace ID when present; otherwise start a new trace
    const traceId = request.headers['x-trace-id'] || uuidv4();
    request.traceId = traceId; // Available to downstream handlers and outgoing events
    return next.handle().pipe(
      tap(() => {
        this.logger.log(`[${traceId}] Completed request`);
      })
    );
  }
}
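HTTP-level logging alone won't follow an order across queues, but the same ID can ride along on each AMQP message. A sketch against the publish() method from earlier (the header name simply mirrors the HTTP one):
// Sketch: propagate the trace ID on the message itself
this.channel.publish('events', event.eventType, Buffer.from(message), {
  persistent: true,
  headers: { 'x-trace-id': traceId } // Consumers read it back via msg.properties.headers
});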
Testing is crucial. We use contract tests to verify event schemas:
// shared/tests/event.contract.test.ts
import { validateSync } from 'class-validator';

test('OrderCreatedEvent schema', () => {
  const event = new OrderCreatedEvent(
    'order-123',
    'user-456',
    [{ productId: 'prod-789', quantity: 2 }],
    99.99
  );
  // validateSync only inspects fields carrying class-validator decorators,
  // so the event class must declare them (as in the sketch earlier)
  const errors = validateSync(event);
  expect(errors).toHaveLength(0); // Every decorated field holds a valid value
});
Deploying? Configure RabbitMQ for high availability (mirrored or quorum queues), and note that MongoDB transactions require a replica set. Expose health checks for every service:
// In each service's main.ts
app.enableShutdownHooks(); // Drain consumers and close connections cleanly on SIGTERM
// Each service then serves GET /health for its orchestrator's probes
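For the /health endpoint itself, @nestjs/terminus is the usual choice; a sketch that pings RabbitMQ (the broker URL is an assumption):
// health.controller.ts — a sketch using @nestjs/terminus
import { Controller, Get } from '@nestjs/common';
import { HealthCheck, HealthCheckService, MicroserviceHealthIndicator } from '@nestjs/terminus';
import { Transport } from '@nestjs/microservices';

@Controller('health')
export class HealthController {
  constructor(
    private readonly health: HealthCheckService,
    private readonly microservice: MicroserviceHealthIndicator
  ) {}

  @Get()
  @HealthCheck()
  check() {
    return this.health.check([
      () => this.microservice.pingCheck('rabbitmq', {
        transport: Transport.RMQ,
        options: { urls: ['amqp://localhost:5672'] } // Assumed local broker
      })
    ]);
  }
}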
This architecture has handled Black Friday traffic spikes without breaking. The true power lies in how services remain blissfully unaware of each other - they only care about events. Have you considered how you’d add a new recommendation service to this ecosystem?
What challenges have you faced with microservices? Share your experiences below! If this guide helped you, please like and share it with others facing similar architecture decisions. Your feedback fuels future content.