Crafting Event-Driven Microservices with NestJS, RabbitMQ, and MongoDB
As I designed complex systems for e-commerce platforms, I repeatedly faced challenges with tightly coupled services. Synchronous API calls created fragile dependencies that crumbled under load. That's when I turned to event-driven microservices, an approach that transformed how I build resilient systems. Let me show you how to implement this architecture using NestJS, RabbitMQ, and MongoDB.
Why choose this stack?
NestJS provides a structured TypeScript foundation for microservices. RabbitMQ acts as our central nervous system for message routing, while MongoDB offers flexible data storage per service. Together, they handle distributed operations gracefully.
Start with a workspace structure:
microservices-system/
├── shared/ # Reusable code
├── services/ # Individual microservices
├── api-gateway/
└── docker-compose.yml
Core Communication Setup
RabbitMQ connects our services through exchanges and queues. Here’s how we declare an event publisher:
// services/user-service/src/event-publisher.service.ts
import { Inject, Injectable } from '@nestjs/common';
import { ClientProxy } from '@nestjs/microservices';
import { lastValueFrom } from 'rxjs';
import { BaseEvent } from '../../../shared/events/base-event'; // adjust to your workspace layout

@Injectable()
export class EventPublisher {
  constructor(@Inject('EVENT_BUS') private client: ClientProxy) {}

  async publish(event: BaseEvent): Promise<void> {
    // emit() returns a cold Observable; lastValueFrom() subscribes and
    // resolves once the message is handed off (toPromise() is deprecated in RxJS 7+).
    await lastValueFrom(this.client.emit(event.eventType, event));
  }
}
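One wiring detail the snippet assumes: the 'EVENT_BUS' token has to be bound to a RabbitMQ client somewhere in the module tree. A minimal sketch using Nest's ClientsModule, where the queue name and URL fallback are placeholders:

// services/user-service/src/app.module.ts (sketch)
import { Module } from '@nestjs/common';
import { ClientsModule, Transport } from '@nestjs/microservices';
import { EventPublisher } from './event-publisher.service';

@Module({
  imports: [
    ClientsModule.register([
      {
        name: 'EVENT_BUS', // matches the @Inject('EVENT_BUS') token above
        transport: Transport.RMQ,
        options: {
          urls: [process.env.RABBITMQ_URL ?? 'amqp://localhost:5672'],
          queue: 'events', // placeholder queue name
        },
      },
    ]),
  ],
  providers: [EventPublisher],
  exports: [EventPublisher],
})
export class AppModule {}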
When a user registers, we publish an event:
// services/user-service/src/user.service.ts
import { v4 as uuidv4 } from 'uuid';

async createUser(dto: CreateUserDto): Promise<User> {
  const user = await this.userModel.create(dto);
  await this.eventPublisher.publish({
    eventType: 'USER_CREATED',
    eventId: uuidv4(), // unique id lets consumers deduplicate redeliveries
    aggregateId: user.id,
    timestamp: new Date(),
    data: {
      userId: user.id,
      email: user.email,
      firstName: user.firstName,
      lastName: user.lastName,
    },
  });
  return user;
}
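These snippets assume a shared event envelope. Here is a minimal sketch of that shape, with fields taken from the publish call above; the OrderCreatedEvent specialization is the one the payment listener below relies on:

// shared/events/base-event.ts (sketch)
export interface BaseEvent {
  eventType: string;    // routing pattern, e.g. 'USER_CREATED'
  eventId: string;      // unique per event; lets consumers deduplicate
  aggregateId: string;  // id of the entity the event describes
  timestamp: Date;
  data: Record<string, unknown>; // event-specific payload
}

// Concrete events narrow the envelope with a typed payload.
export interface OrderCreatedEvent extends BaseEvent {
  eventType: 'ORDER_CREATED';
  data: { orderId: string; totalAmount: number };
}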
Handling Distributed Transactions
How do we maintain consistency across services? The Saga pattern coordinates multi-step transactions through events. Consider an order flow:
- Order Service creates order → emits ORDER_CREATED
- Payment Service processes payment → emits PAYMENT_PROCESSED
- Inventory Service updates stock → emits STOCK_UPDATED
Each service listens for relevant events:
// services/payment-service/src/payment.listener.ts
import { Controller } from '@nestjs/common';
import { EventPattern, Payload } from '@nestjs/microservices';
import { OrderCreatedEvent } from '../../../shared/events/base-event'; // adjust to your workspace layout
import { PaymentService } from './payment.service';

@Controller()
export class PaymentListener {
  constructor(private paymentService: PaymentService) {}

  // The publisher emits the full envelope, so that's what arrives here.
  @EventPattern('ORDER_CREATED')
  async handleOrderCreated(@Payload() event: OrderCreatedEvent) {
    const result = await this.paymentService.processPayment(
      event.data.orderId,
      event.data.totalAmount,
    );
    // On success, publish a PAYMENT_PROCESSED event carrying the result
  }
}
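A saga also needs a compensation path for when a step fails. As a sketch (the PAYMENT_FAILED event and cancelOrder method are hypothetical, not part of the code above), the Order Service could roll back like this:

// services/order-service/src/order.listener.ts (sketch)
import { Controller } from '@nestjs/common';
import { EventPattern, Payload } from '@nestjs/microservices';
import { OrderService } from './order.service';

@Controller()
export class OrderListener {
  constructor(private orderService: OrderService) {}

  // Compensating action: cancel the order if payment fails downstream.
  @EventPattern('PAYMENT_FAILED')
  async handlePaymentFailed(@Payload() data: { orderId: string; reason: string }) {
    await this.orderService.cancelOrder(data.orderId, data.reason);
  }
}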
Data Management Strategy
Each service owns its MongoDB database. The User Service manages user data, while the Order Service handles orders. This isolation prevents brittle joins across services.
Define schemas with clear ownership:
// services/order-service/src/schemas/order.schema.ts
import { Prop, Schema, SchemaFactory } from '@nestjs/mongoose';

export type OrderItem = { productId: string; quantity: number; price: number };
export type OrderStatus = 'PENDING' | 'PAID' | 'CANCELLED'; // illustrative statuses

@Schema()
export class Order {
  @Prop({ required: true })
  userId: string; // Reference only - not a foreign key!

  @Prop([{ productId: String, quantity: Number, price: Number }])
  items: OrderItem[];

  @Prop({ default: 'PENDING' })
  status: OrderStatus;
}

export const OrderSchema = SchemaFactory.createForClass(Order);
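The flip side of this isolation: when the Order Service needs user data (say, an email for receipts), it keeps its own denormalized copy fed by events rather than calling the User Service synchronously. A sketch, where the CustomerListener and the Customer model are hypothetical additions:

// services/order-service/src/customer.listener.ts (sketch)
import { Controller } from '@nestjs/common';
import { EventPattern, Payload } from '@nestjs/microservices';
import { InjectModel } from '@nestjs/mongoose';
import { Model } from 'mongoose';

@Controller()
export class CustomerListener {
  constructor(@InjectModel('Customer') private customerModel: Model<any>) {}

  // Keep a local, read-only copy of just the user fields orders need.
  @EventPattern('USER_CREATED')
  async handleUserCreated(@Payload() event: { data: { userId: string; email: string } }) {
    await this.customerModel.create({ _id: event.data.userId, email: event.data.email });
  }
}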
Observability Essentials
Without centralized logging, troubleshooting becomes guesswork. Winston with Elasticsearch provides clarity:
// shared/logger/logger.module.ts
import * as winston from 'winston';
import { ElasticsearchTransport } from 'winston-elasticsearch';

const winstonElastic = new ElasticsearchTransport({
  clientOpts: { node: process.env.ELASTICSEARCH_URL }, // ES connection options live under clientOpts
});

export const logger = winston.createLogger({
  transports: [
    new winston.transports.Console(),
    winstonElastic,
  ],
});
In controllers:
import { Body, Controller, Logger, Post } from '@nestjs/common';

@Controller()
export class UserController {
  private readonly logger = new Logger(UserController.name);

  @Post()
  async createUser(@Body() dto: CreateUserDto) {
    this.logger.log(`Creating user ${dto.email}`);
    // ...
  }
}
Deployment Strategy
Docker Compose orchestrates our entire ecosystem:
# docker-compose.prod.yml
services:
  user-service:
    build: ./services/user-service
    environment:
      RABBITMQ_URL: amqp://rabbitmq
      MONGODB_URI: mongodb://mongodb/user-service
  rabbitmq:
    image: rabbitmq:3-management
  mongodb:
    image: mongo:6.0
    volumes:
      - mongodb_data:/data/db
volumes:
  mongodb_data:
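With that file in place, the whole stack comes up with one command:

docker compose -f docker-compose.prod.yml up --build -d

In a real deployment you'll likely also add depends_on and healthchecks so services wait for RabbitMQ and MongoDB to be ready.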
Critical Considerations
- Always use idempotent event handlers - what happens if you receive the same event twice? (See the sketch after this list.)
- Implement dead-letter queues for failed messages
- Version your events for backward compatibility
- Secure RabbitMQ with TLS and proper credentials
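On the first point: the eventId on the envelope makes deduplication cheap. A minimal sketch of an idempotent version of the earlier payment handler, assuming a hypothetical processedEventModel Mongo collection whose _id is the eventId:

// services/payment-service/src/payment.listener.ts (idempotency sketch)
@EventPattern('ORDER_CREATED')
async handleOrderCreated(@Payload() event: OrderCreatedEvent) {
  try {
    // Claim the event first: the unique _id index makes a second insert
    // with the same eventId fail with MongoDB duplicate-key error 11000.
    await this.processedEventModel.create({ _id: event.eventId });
  } catch (err: any) {
    if (err.code === 11000) return; // duplicate delivery - already handled
    throw err;
  }
  await this.paymentService.processPayment(event.data.orderId, event.data.totalAmount);
}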
I’ve deployed this pattern in production handling 10K+ events/minute. The true power? When payment processing failed during a flash sale, orders queued gracefully instead of crashing the system. Failed payments were re-attempted once dependencies recovered.
Final Tip: Start small. Implement one event flow between two services before scaling. Monitor queue depths and error rates religiously - they’re your first sign of trouble.
This approach transformed how I build systems. What challenges have you faced with microservices? Share your experiences below - I’d love to hear what solutions you’ve discovered. If this guide helped you, please like and share it with others who might benefit!