I’ve been thinking a lot about how modern applications need to handle increasing complexity while remaining responsive and reliable. Recently, I faced a situation where our traditional request-response architecture started showing its limitations under heavy load. That’s when I decided to explore event-driven microservices, and I want to share what I’ve learned about building resilient systems that can grow with your needs.
What if your services could communicate without knowing about each other’s existence? That’s the promise of event-driven architecture.
Let me show you how to build a complete system using NestJS, RabbitMQ, and MongoDB. We’ll create an e-commerce platform where services react to events rather than waiting for direct calls.
First, why choose this combination? NestJS provides a solid foundation with its modular architecture and dependency injection. RabbitMQ offers reliable message delivery, while MongoDB’s flexible document model fits perfectly with event-sourced systems.
Here’s a basic event structure to get us started:
```typescript
export interface BaseEvent {
  id: string;
  type: string;
  timestamp: Date;
  version: string;
  correlationId: string;
}

export interface UserCreatedEvent extends BaseEvent {
  type: 'USER_CREATED';
  data: {
    userId: string;
    email: string;
    firstName: string;
  };
}
```
Setting up our development environment is straightforward. We’ll use Docker Compose to run RabbitMQ and MongoDB:
```yaml
version: '3.8'
services:
  rabbitmq:
    image: rabbitmq:3-management
    ports:
      - "5672:5672"
      - "15672:15672"
  mongodb:
    image: mongo:latest
    ports:
      - "27017:27017"
```
Have you ever wondered how services stay in sync without direct communication? Events make this possible.
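Before we look at the user service, note that it injects an `EventBusService` that this article doesn't define. Here's one minimal sketch of what it might look like, assuming an amqplib-style channel (`assertExchange`, `publish`) and a routing key derived from the event type; the decorator and NestJS module wiring are omitted for brevity:

```typescript
// AmqpChannel mirrors the subset of amqplib's Channel API this sketch needs.
interface AmqpChannel {
  assertExchange(name: string, type: string, opts?: object): Promise<unknown>;
  publish(exchange: string, routingKey: string, content: Buffer, opts?: object): boolean;
}

export class EventBusService {
  constructor(private readonly channel: AmqpChannel) {}

  async publish(exchange: string, event: { type: string }): Promise<void> {
    // Declare the exchange idempotently, then publish the serialized event.
    await this.channel.assertExchange(exchange, 'topic', { durable: true });
    this.channel.publish(
      exchange,
      event.type.toLowerCase().replace(/_/g, '.'), // e.g. USER_CREATED -> user.created
      Buffer.from(JSON.stringify(event)),
      { persistent: true }, // survive a broker restart
    );
  }
}
```

Deriving the routing key from the event type is one convention among several; the point is that publishers never name their consumers.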
Creating our first service - the user service - demonstrates the pattern clearly. When a user registers, we publish an event that other services can react to:
```typescript
import { Injectable } from '@nestjs/common';
import { InjectModel } from '@nestjs/mongoose';
import { Model } from 'mongoose';
import { v4 as uuidv4 } from 'uuid';

@Injectable()
export class UserService {
  constructor(
    @InjectModel(User.name) private userModel: Model<User>,
    private eventBus: EventBusService,
  ) {}

  async createUser(createUserDto: CreateUserDto) {
    const user = await this.userModel.create(createUserDto);

    const event: UserCreatedEvent = {
      id: uuidv4(),
      type: 'USER_CREATED',
      timestamp: new Date(),
      version: '1.0',
      correlationId: uuidv4(),
      data: {
        userId: user._id.toString(),
        email: user.email,
        firstName: user.firstName,
      },
    };

    await this.eventBus.publish('user.events', event);
    return user;
  }
}
```
The order service listens for these events and maintains its own read model:
```typescript
@EventHandler('USER_CREATED')
async handleUserCreated(event: UserCreatedEvent) {
  await this.userReadModel.create({
    userId: event.data.userId,
    email: event.data.email,
    firstName: event.data.firstName,
  });
}
```
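The `@EventHandler` decorator above isn't part of NestJS itself; it stands for some dispatch mechanism. One common way to wire it, sketched here as a plain registry that the RabbitMQ consumer consults by the message's `type` field:

```typescript
// A sketch of event dispatch: handlers register per event type, and the
// consumer callback parses each message and routes it by type.
type Handler = (event: any) => Promise<void>;

export class EventDispatcher {
  private handlers = new Map<string, Handler[]>();

  register(type: string, handler: Handler): void {
    const list = this.handlers.get(type) ?? [];
    list.push(handler);
    this.handlers.set(type, list);
  }

  // Called with the parsed message body from channel.consume.
  async dispatch(event: { type: string }): Promise<void> {
    for (const handler of this.handlers.get(event.type) ?? []) {
      await handler(event);
    }
  }
}
```

A decorator-based version would scan classes at bootstrap and call `register` for each annotated method; the lookup logic stays the same.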
But what happens when a business process spans multiple services? That’s where saga patterns come into play.
Imagine a user placing an order. We need to reserve inventory, process payment, and update the order status. If any step fails, we need to compensate for previous actions:
```typescript
export class OrderSaga {
  async start(orderData: OrderData) {
    try {
      await this.reserveInventory(orderData);
      await this.processPayment(orderData);
      await this.completeOrder(orderData);
    } catch (error) {
      // Undo whatever already succeeded, then surface the failure to the caller.
      await this.compensate(orderData);
      throw error;
    }
  }
}
```
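The `compensate` method is left abstract above. One common way to implement it is to record each completed step together with its undo action, then run the undo actions in reverse order on failure; the `run`/`undo` pairs here are hypothetical stand-ins for calls like `reserveInventory` and its inverse:

```typescript
// A sketch of compensation: steps that completed are undone newest-first.
type Step = { run: () => Promise<void>; undo: () => Promise<void> };

export class CompensatingSaga {
  async execute(steps: Step[]): Promise<boolean> {
    const completed: Step[] = [];
    try {
      for (const step of steps) {
        await step.run();
        completed.push(step); // only record steps that actually succeeded
      }
      return true;
    } catch {
      // Compensate in reverse: the most recent action is undone first.
      for (const step of completed.reverse()) {
        await step.undo();
      }
      return false;
    }
  }
}
```

Reversing the order matters: releasing an inventory reservation before refunding a payment that depended on it could leave the system inconsistent.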
Error handling becomes crucial in distributed systems. RabbitMQ’s dead letter queues help us manage failed messages:
```typescript
async setupQueues() {
  await this.channel.assertExchange('dlx', 'direct');
  await this.channel.assertQueue('dead_letter_queue');
  // Without this binding, dead-lettered messages would be silently dropped.
  await this.channel.bindQueue('dead_letter_queue', 'dlx', 'dead_letter');

  await this.channel.assertQueue('order_events', {
    deadLetterExchange: 'dlx',
    deadLetterRoutingKey: 'dead_letter',
  });
}
```
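For a message to actually reach the dead letter queue, the consumer has to reject it without requeueing. A sketch of that consumer-side half, assuming amqplib-style `ack`/`nack` on the channel:

```typescript
// ConsumeChannel mirrors the subset of amqplib's Channel API used here.
interface ConsumeChannel {
  ack(msg: unknown): void;
  nack(msg: unknown, allUpTo: boolean, requeue: boolean): void;
}

export async function handleMessage(
  channel: ConsumeChannel,
  msg: { content: Buffer },
  handler: (event: any) => Promise<void>,
): Promise<void> {
  try {
    await handler(JSON.parse(msg.content.toString()));
    channel.ack(msg); // success: remove the message from the queue
  } catch {
    // requeue=false makes RabbitMQ re-route the message via the
    // queue's dead-letter exchange instead of redelivering it.
    channel.nack(msg, false, false);
  }
}
```

Requeueing with `true` instead would cause a hot redelivery loop for a poison message, which is exactly what the DLQ setup is meant to avoid.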
Monitoring distributed transactions requires careful instrumentation. We add correlation IDs to trace events across services:
```typescript
const event = {
  id: uuidv4(),
  type: 'ORDER_CREATED',
  correlationId: requestId,     // shared by every event in this business flow
  causationId: previousEventId, // the event that directly triggered this one
};
```
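This convention can be captured in a small helper: a follow-up event keeps the parent's `correlationId` and records the parent's `id` as its `causationId`. The `deriveEvent` name and the injected `makeId` (standing in for `uuidv4`) are illustrative, not from any library:

```typescript
interface TracedEvent {
  id: string;
  type: string;
  correlationId: string;
  causationId?: string;
}

export function deriveEvent(
  parent: TracedEvent,
  type: string,
  makeId: () => string,
): TracedEvent {
  return {
    id: makeId(),
    type,
    correlationId: parent.correlationId, // unchanged across the whole flow
    causationId: parent.id,              // points at the triggering event
  };
}
```

With this in place, filtering your logs by a single `correlationId` reconstructs the entire distributed transaction, and `causationId` gives you the exact parent-child chain.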
Testing event-driven systems presents unique challenges. We need to verify that events are published and handled correctly:
```typescript
describe('Order Service', () => {
  it('should publish ORDER_CREATED event', async () => {
    const order = await orderService.createOrder(testData);

    expect(eventBus.publish).toHaveBeenCalledWith(
      'order.events',
      expect.objectContaining({ type: 'ORDER_CREATED' }),
    );
  });
});
```
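That test assumes the event bus has been replaced with a test double. If you'd rather not reach for mocking utilities, an in-memory bus that simply records what was published works just as well; this `CapturingEventBus` is a hypothetical helper, not part of any framework:

```typescript
// An in-memory event bus for tests: records every published event so
// assertions can inspect exchanges and payloads directly.
export class CapturingEventBus {
  published: Array<{ exchange: string; event: { type: string } }> = [];

  async publish(exchange: string, event: { type: string }): Promise<void> {
    this.published.push({ exchange, event });
  }

  wasPublished(type: string): boolean {
    return this.published.some((p) => p.event.type === type);
  }
}
```

Inject it in place of the real `EventBusService` in your testing module, and the service under test needs no changes.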
Deployment brings everything together. Our Docker Compose file ensures all services start in the correct order with proper configuration:
```yaml
services:
  user-service:
    build: ./services/user-service
    depends_on:
      - rabbitmq
      - mongodb
    environment:
      RABBITMQ_URL: amqp://rabbitmq:5672
      MONGODB_URL: mongodb://mongodb:27017/users
  order-service:
    build: ./services/order-service
    depends_on:
      - rabbitmq
      - mongodb
    environment:
      RABBITMQ_URL: amqp://rabbitmq:5672
      MONGODB_URL: mongodb://mongodb:27017/orders
```
As your system grows, you’ll appreciate how easily you can add new services. Want to send email notifications? Just create a notification service that listens for relevant events.
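Such a notification service barely needs any code of its own: it subscribes to the existing `USER_CREATED` event and no other service changes at all. A sketch, where `sendWelcomeEmail` is a stand-in for whatever mailer integration you use:

```typescript
// A hypothetical notification handler: reacts to USER_CREATED events and
// delegates delivery to an injected mailer function.
export class NotificationHandler {
  constructor(
    private readonly sendWelcomeEmail: (to: string, name: string) => Promise<void>,
  ) {}

  async handleUserCreated(event: {
    data: { email: string; firstName: string };
  }): Promise<void> {
    await this.sendWelcomeEmail(event.data.email, event.data.firstName);
  }
}
```

The user service never learns that emails exist; that's the loose coupling paying off.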
Performance optimization becomes more straightforward too. You can scale individual services based on their specific load patterns without affecting the entire system.
Building this architecture has transformed how I think about system design. The loose coupling between services means teams can work independently, and the system can evolve naturally over time.
What challenges have you faced with microservices communication? I’d love to hear about your experiences and solutions. If you found this helpful, please share it with others who might benefit, and let me know in the comments what other architecture patterns you’d like to explore.