I’ve been thinking about modern system architecture lately, particularly how we can build applications that scale gracefully while remaining maintainable. The shift from monolithic applications to distributed systems presents both opportunities and challenges. This led me to explore event-driven microservices using NestJS, RabbitMQ, and MongoDB - a combination that offers remarkable flexibility and resilience for today’s demanding applications.
Event-driven architecture fundamentally changes how services communicate. Instead of services directly calling each other’s APIs, they emit events that other services can react to. This creates systems where components remain independent yet work together seamlessly. Have you considered how this approach might simplify your current system’s dependencies?
Let me show you how we can implement this pattern. First, we define our events in a shared module:
export class UserCreatedEvent extends BaseEvent {
  constructor(
    public readonly userId: string,
    public readonly email: string,
    public readonly firstName: string,
    public readonly lastName: string
  ) {
    super('UserCreated', userId);
  }
}
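The `BaseEvent` the snippet extends lives in that shared module. Its exact shape isn't shown here, so here is a minimal sketch of what it might look like; the field names (`eventType`, `aggregateId`, `occurredAt`) are assumptions for illustration:

```typescript
// Hypothetical base class for all domain events in the shared module.
// Field names are assumptions; adapt them to your own event contract.
export abstract class BaseEvent {
  public readonly eventType: string;
  public readonly aggregateId: string;
  public readonly occurredAt: Date;

  constructor(eventType: string, aggregateId: string) {
    this.eventType = eventType;
    this.aggregateId = aggregateId;
    // Timestamping at construction gives every event an ordering hint.
    this.occurredAt = new Date();
  }
}
```

Keeping the base class in a shared module means every service serializes events the same way, which matters once events cross service boundaries.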
Each service becomes responsible for its own domain. The user service handles user management, the order service processes orders, and the notification service manages communications. They don’t call each other directly but communicate through events. This separation allows each service to evolve independently. What happens when one service needs to be updated or scaled?
RabbitMQ serves as our message broker, ensuring reliable delivery between services. Here’s how we configure a service to publish events:
@Injectable()
export class UserService {
  constructor(
    @Inject('RABBITMQ_CLIENT') private readonly rabbitmqClient: ClientProxy,
    private readonly userRepository: UserRepository
  ) {}

  async createUser(createUserDto: CreateUserDto): Promise<User> {
    const user = await this.userRepository.create(createUserDto);
    const event = new UserCreatedEvent(
      user.id,
      user.email,
      user.firstName,
      user.lastName
    );
    // emit() returns an Observable, not a Promise, so awaiting it directly
    // does nothing; lastValueFrom (from 'rxjs') waits for the broker handoff.
    await lastValueFrom(this.rabbitmqClient.emit('user.created', event));
    return user;
  }
}
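The `@Inject('RABBITMQ_CLIENT')` token above assumes the client has been registered somewhere. One way to wire it up is with NestJS's `ClientsModule`; the queue name and URL below are illustrative assumptions:

```typescript
import { Module } from '@nestjs/common';
import { ClientsModule, Transport } from '@nestjs/microservices';

// Registers the 'RABBITMQ_CLIENT' token that UserService injects.
// Queue name and connection URL are assumptions for this sketch.
@Module({
  imports: [
    ClientsModule.register([
      {
        name: 'RABBITMQ_CLIENT',
        transport: Transport.RMQ,
        options: {
          urls: [process.env.RABBITMQ_URL ?? 'amqp://localhost:5672'],
          queue: 'user_events',
          queueOptions: { durable: true },
        },
      },
    ]),
  ],
  exports: [ClientsModule],
})
export class MessagingModule {}
```

Exporting the module lets any feature module import `MessagingModule` and inject the same client token.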
On the consuming side, services listen for relevant events:
@Controller()
export class NotificationController {
  constructor(private readonly notificationService: NotificationService) {}

  @EventPattern('user.created')
  async handleUserCreated(data: UserCreatedEvent) {
    await this.notificationService.sendWelcomeEmail(
      data.email,
      data.firstName
    );
  }
}
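For the `@EventPattern` handler to fire, the consuming service has to attach a RabbitMQ listener at bootstrap. A minimal sketch (the queue name mirrors the publishing side and is an assumption):

```typescript
import { NestFactory } from '@nestjs/core';
import { MicroserviceOptions, Transport } from '@nestjs/microservices';
import { AppModule } from './app.module';

// Bootstrap sketch: the hybrid app serves HTTP and consumes RabbitMQ events.
async function bootstrap() {
  const app = await NestFactory.create(AppModule);
  app.connectMicroservice<MicroserviceOptions>({
    transport: Transport.RMQ,
    options: {
      urls: [process.env.RABBITMQ_URL ?? 'amqp://localhost:5672'],
      queue: 'user_events', // must match the queue the publisher targets
    },
  });
  await app.startAllMicroservices();
  await app.listen(3000);
}
bootstrap();
```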
MongoDB provides excellent support for event sourcing patterns. We can store events as documents and use them to rebuild state:
@Schema()
export class Order {
  @Prop()
  orderId: string;

  @Prop()
  status: string;

  // Each state change is persisted alongside the aggregate.
  @Prop({ type: Array })
  events: Event[];

  static fromEvents(events: Event[]): Order {
    const order = new Order();
    events.forEach(event => order.apply(event));
    return order;
  }
}
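The `apply` method that `fromEvents` calls is where replay happens: each event mutates the aggregate's state, so folding over the log reconstructs the current state. A framework-free sketch (the event types `OrderCreated` and `OrderShipped` are assumptions for illustration):

```typescript
// Minimal event-replay sketch; event type names are hypothetical.
type OrderEvent = { type: string; orderId?: string };

class OrderAggregate {
  orderId = '';
  status = 'pending';
  events: OrderEvent[] = [];

  // Applies a single event, mutating the aggregate's state.
  apply(event: OrderEvent): void {
    switch (event.type) {
      case 'OrderCreated':
        this.orderId = event.orderId ?? this.orderId;
        this.status = 'created';
        break;
      case 'OrderShipped':
        this.status = 'shipped';
        break;
    }
    this.events.push(event);
  }

  // Rebuilds current state by replaying the full event log in order.
  static fromEvents(events: OrderEvent[]): OrderAggregate {
    const order = new OrderAggregate();
    events.forEach((e) => order.apply(e));
    return order;
  }
}
```

Because state is derived rather than stored as the source of truth, you can add new projections later by replaying the same log with different `apply` logic.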
Handling failures gracefully is crucial. We implement retry mechanisms and dead letter queues:
@Injectable()
export class OrderService {
  constructor(
    @Inject('RABBITMQ_CLIENT') private readonly rabbitmqClient: ClientProxy
  ) {}

  async processOrder(event: OrderCreatedEvent) {
    try {
      await this.fulfillOrder(event.orderId);
    } catch (error) {
      // Re-publish with an incremented retry count so a downstream consumer
      // can decide whether to retry or route the message to a dead letter queue.
      this.rabbitmqClient.emit('order.failed', {
        ...event,
        retryCount: (event['retryCount'] || 0) + 1
      });
    }
  }
}
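The dead letter queue itself is configured on the broker side through queue arguments. With the RMQ transport, one way to express this is in the consumer's connection options; the exchange and queue names here are assumptions:

```typescript
import { Transport, RmqOptions } from '@nestjs/microservices';

// Sketch: rejected or expired messages are routed to the 'dlx' exchange
// instead of being lost. All names are illustrative assumptions.
export const orderQueueOptions: RmqOptions = {
  transport: Transport.RMQ,
  options: {
    urls: [process.env.RABBITMQ_URL ?? 'amqp://localhost:5672'],
    queue: 'order_events',
    queueOptions: {
      durable: true,
      arguments: {
        'x-dead-letter-exchange': 'dlx',
        'x-dead-letter-routing-key': 'order_events.dead',
      },
    },
  },
};
```

A separate consumer bound to the dead letter exchange can then inspect parked messages, alert an operator, or replay them once the underlying fault is fixed.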
Monitoring distributed systems requires special attention. We add correlation IDs to trace requests across services:
@Injectable()
export class CorrelationIdService {
  private readonly correlationId = new AsyncLocalStorage<string>();

  runWithId<T>(id: string, fn: () => Promise<T>): Promise<T> {
    return this.correlationId.run(id, fn);
  }

  getId(): string | undefined {
    return this.correlationId.getStore();
  }
}
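In practice, a consumer wraps each event handler so the ID (assumed here to travel in the event payload as `correlationId`) is visible to everything the handler calls, without threading it through every function signature. A plain-Node sketch of the same pattern:

```typescript
import { AsyncLocalStorage } from 'async_hooks';

// Sketch: propagate a correlation ID from an event payload through all
// code the handler calls. The payload field name is an assumption.
const correlationStore = new AsyncLocalStorage<string>();

function handleWithCorrelation<T>(
  payload: { correlationId?: string },
  handler: () => T
): T {
  // Fall back to a placeholder when the publisher did not set an ID.
  const id = payload.correlationId ?? 'unknown';
  return correlationStore.run(id, handler);
}

// Any logger or downstream call inside the handler can read the ID.
function currentCorrelationId(): string | undefined {
  return correlationStore.getStore();
}
```

A logging interceptor can then stamp `currentCorrelationId()` on every log line, which is what makes a request traceable as it hops between services.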
Deployment becomes straightforward with Docker Compose:
version: '3.8'
services:
  rabbitmq:
    image: rabbitmq:3-management
    ports:
      - "5672:5672"
      - "15672:15672"
  mongo:
    image: mongo:6
    ports:
      - "27017:27017"
  user-service:
    build: ./services/user-service
    environment:
      - RABBITMQ_URL=amqp://rabbitmq:5672
      - MONGODB_URL=mongodb://mongo:27017/users
    depends_on:
      - rabbitmq
      - mongo
The beauty of this architecture lies in its adaptability. Services can be added, removed, or modified without disrupting the entire system. New features often require just adding new event handlers rather than modifying existing services. How might this approach change how you plan your next feature rollout?
Testing becomes more focused too. We can test each service in isolation:
describe('UserService', () => {
  it('should publish UserCreatedEvent when user is created', async () => {
    // of() comes from 'rxjs'; emit() must return an Observable.
    const rabbitmqClient = { emit: jest.fn().mockReturnValue(of(undefined)) };
    const userRepository = {
      create: jest.fn().mockResolvedValue({
        id: '1',
        email: 'jane@example.com',
        firstName: 'Jane',
        lastName: 'Doe'
      })
    };
    const service = new UserService(rabbitmqClient as any, userRepository as any);

    await service.createUser(testUserDto);

    expect(rabbitmqClient.emit).toHaveBeenCalledWith(
      'user.created',
      expect.any(UserCreatedEvent)
    );
  });
});
As systems grow, this architecture proves its worth. The loose coupling between services means teams can work independently, deployment pipelines remain simple, and the system can handle increasing loads by scaling individual components. The event log provides a complete audit trail, which is invaluable for debugging and compliance.
Building with event-driven microservices requires shifting our mindset from direct service calls to event flows. The initial setup might seem complex, but the long-term benefits in scalability, maintainability, and resilience make it worthwhile. Have you encountered situations where this architecture would have solved persistent problems in your projects?
I’d love to hear about your experiences with distributed systems. What challenges have you faced, and how did you overcome them? If you found this approach helpful, please share it with others who might benefit. Your comments and questions are always welcome - they help all of us learn and grow together in this ever-evolving field of software architecture.