I’ve been thinking about distributed systems a lot lately, particularly how we can build applications that scale gracefully while remaining maintainable. The challenge of coordinating multiple services while keeping them loosely coupled led me to explore event-driven architectures. What if we could build systems where services communicate through events rather than direct calls, creating more resilient and scalable applications?
Event-driven microservices with NestJS offer a compelling solution to modern distributed system challenges. By combining NestJS’s structured approach with RabbitMQ’s reliable messaging and MongoDB’s flexible data storage, we can create systems that handle complexity while remaining adaptable to change.
Let me show you how to set up the foundation. We’ll start with a shared events package that all our microservices can use:
export abstract class BaseEvent {
  public readonly eventId: string;
  public readonly eventType: string;
  public readonly timestamp: Date;

  constructor() {
    this.eventId = this.generateEventId();
    this.eventType = this.constructor.name;
    this.timestamp = new Date();
  }

  private generateEventId(): string {
    // slice(2, 11) takes the same 9 characters substr(2, 9) did,
    // without the deprecated String.prototype.substr
    return `${Date.now()}-${Math.random().toString(36).slice(2, 11)}`;
  }
}
This base event class gives us consistent event structure across all services. But why is consistent event structure so important in distributed systems?
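One practical payoff of a consistent envelope is that consumers can validate incoming messages before acting on them. Here is a small sketch of such a guard; the interface and function names are my own, not part of a shared package:

```typescript
// Sketch: runtime validation of the common event envelope on the consumer
// side. The field names mirror BaseEvent; the guard itself is hypothetical.
interface EventEnvelope {
  eventId: string;
  eventType: string;
  timestamp: string | Date; // Dates arrive as ISO strings after JSON transport
}

function isEventEnvelope(value: unknown): value is EventEnvelope {
  if (typeof value !== 'object' || value === null) return false;
  const e = value as Record<string, unknown>;
  return (
    typeof e.eventId === 'string' &&
    typeof e.eventType === 'string' &&
    (typeof e.timestamp === 'string' || e.timestamp instanceof Date)
  );
}
```

A consumer can then drop or dead-letter anything that fails the check instead of crashing mid-handler.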
Now let’s create concrete events for our user service:
export class UserCreatedEvent extends BaseEvent {
  constructor(
    public readonly userId: string,
    public readonly email: string,
    public readonly firstName: string,
    public readonly lastName: string
  ) {
    super();
  }
}
Setting up RabbitMQ as our message broker requires careful configuration. Here’s how we create a reusable RabbitMQ module:
@Module({})
export class RabbitMQModule {
  static forRoot(options: RabbitMQModuleOptions): DynamicModule {
    return {
      module: RabbitMQModule,
      imports: [
        ClientsModule.register([
          {
            name: options.name,
            transport: Transport.RMQ,
            options: {
              // RABBITMQ_URL is set per environment (see the Docker Compose file)
              urls: [process.env.RABBITMQ_URL ?? 'amqp://localhost:5672'],
              queue: options.queue,
              queueOptions: { durable: true },
            },
          },
        ]),
      ],
      exports: [ClientsModule],
    };
  }
}
The durable queue option ensures messages survive broker restarts, providing reliability for our system. How might message durability affect your application’s reliability?
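On the consuming side, each service connects to the same broker as a NestJS microservice. Here is one way the bootstrap wiring might look; the queue name, port, and `AppModule` path are assumptions for illustration:

```typescript
// Hypothetical main.ts for a consuming service (sketch, not the post's code).
// Connects the app to RabbitMQ so @EventPattern handlers receive messages.
import { NestFactory } from '@nestjs/core';
import { Transport, MicroserviceOptions } from '@nestjs/microservices';
import { AppModule } from './app.module';

async function bootstrap() {
  const app = await NestFactory.create(AppModule);
  app.connectMicroservice<MicroserviceOptions>({
    transport: Transport.RMQ,
    options: {
      urls: [process.env.RABBITMQ_URL ?? 'amqp://localhost:5672'],
      queue: 'order-service-queue', // assumed name
      queueOptions: { durable: true }, // must match the publisher's setting
    },
  });
  await app.startAllMicroservices();
  await app.listen(3000);
}
bootstrap();
```

Note that durability must be declared consistently: RabbitMQ rejects a queue declaration whose options differ from the existing queue's.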
Implementing the user service demonstrates event publishing in action:
@Injectable()
export class UserService {
  constructor(
    @Inject('RABBITMQ_CLIENT') private client: ClientProxy,
    @InjectModel(User.name) private userModel: Model<User>
  ) {}

  async createUser(createUserDto: CreateUserDto): Promise<User> {
    const user = new this.userModel(createUserDto);
    await user.save();

    const event = new UserCreatedEvent(
      user._id.toString(),
      user.email,
      user.firstName,
      user.lastName
    );
    await firstValueFrom(
      this.client.emit('event.UserCreatedEvent', event)
    );
    return user;
  }
}
Notice how the service saves the user first, then publishes the event. This order matters: what happens if we reverse these operations?
The order service shows event consumption patterns:
@Controller()
export class OrderController {
  @EventPattern('event.UserCreatedEvent')
  async handleUserCreated(event: UserCreatedEvent) {
    console.log(`Processing user creation for: ${event.email}`);
    // Update local user cache or projection
  }
}
This pattern allows services to react to events from other parts of the system without direct dependencies. Can you see how this reduces coupling between services?
Error handling becomes crucial in distributed systems. Here’s a simple retry mechanism:
async publishWithRetry(event: BaseEvent, maxRetries = 3): Promise<void> {
  for (let attempt = 1; attempt <= maxRetries; attempt++) {
    try {
      await firstValueFrom(this.client.emit(`event.${event.eventType}`, event));
      return;
    } catch (error) {
      if (attempt === maxRetries) throw error;
      // Exponential backoff: 2s, 4s, 8s, ...
      await this.delay(Math.pow(2, attempt) * 1000);
    }
  }
}

private delay(ms: number): Promise<void> {
  return new Promise((resolve) => setTimeout(resolve, ms));
}
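The backoff schedule itself can be factored into a pure helper, which makes it easy to unit-test in isolation. The cap below is my own addition, a common refinement to stop delays growing unboundedly:

```typescript
// Sketch: the exponential backoff schedule as a pure function.
// baseMs matches the 1000ms factor above; capMs is an assumed upper bound.
function backoffDelayMs(attempt: number, baseMs = 1000, capMs = 30_000): number {
  return Math.min(Math.pow(2, attempt) * baseMs, capMs);
}
```

With a cap in place, a long broker outage produces steady 30-second retries instead of hour-long gaps.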
Event sourcing adds another layer of reliability by storing all state changes as events:
@Schema()
export class EventStore {
  @Prop({ required: true })
  eventId: string;

  @Prop({ required: true })
  eventType: string;

  @Prop({ type: Object, required: true })
  eventData: any;

  @Prop({ required: true })
  aggregateId: string;

  @Prop({ required: true })
  timestamp: Date;
}
This approach lets us rebuild system state by replaying events, which is invaluable for debugging and analytics. Have you considered how event sourcing could simplify your debugging process?
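As a sketch of what replay looks like, here is a reducer that folds an aggregate's stored events back into current state. The shapes mirror the EventStore model, but the `UserState`, reducer, and the `UserDeactivatedEvent` type are illustrative assumptions:

```typescript
// Sketch: rebuilding one user's state by replaying its stored events in order.
interface StoredEvent {
  eventType: string;
  eventData: Record<string, unknown>;
  aggregateId: string;
  timestamp: Date;
}

interface UserState {
  email?: string;
  active: boolean;
}

function replayUser(events: StoredEvent[]): UserState {
  // Events must be applied in the order they occurred.
  const ordered = [...events].sort(
    (a, b) => a.timestamp.getTime() - b.timestamp.getTime()
  );
  return ordered.reduce<UserState>(
    (state, event) => {
      switch (event.eventType) {
        case 'UserCreatedEvent':
          return { ...state, email: event.eventData.email as string, active: true };
        case 'UserDeactivatedEvent': // hypothetical event type
          return { ...state, active: false };
        default:
          return state; // unknown event types are ignored, enabling schema evolution
      }
    },
    { active: false }
  );
}
```

The same fold, pointed at a different reducer, can produce read models or analytics projections from the identical event log.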
Monitoring our microservices requires collecting metrics from all services:
@Injectable()
export class MetricsService {
  private readonly httpRequests = new Counter({
    name: 'http_requests_total',
    help: 'Total HTTP requests',
    labelNames: ['method', 'route', 'status'],
  });

  incrementRequest(method: string, route: string, status: string) {
    this.httpRequests.inc({ method, route, status });
  }
}
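If the label mechanics are unfamiliar: each distinct label combination gets its own running count. The toy class below shows the idea; in the real service, prom-client's `Counter` does this internally, so this is purely an illustration:

```typescript
// Sketch: how a labelled counter aggregates. Illustration only; prom-client
// handles this (plus exposition format) in the actual MetricsService.
class LabelledCounter {
  private counts = new Map<string, number>();

  inc(labels: Record<string, string>): void {
    const key = JSON.stringify(labels); // one bucket per label combination
    this.counts.set(key, (this.counts.get(key) ?? 0) + 1);
  }

  get(labels: Record<string, string>): number {
    return this.counts.get(JSON.stringify(labels)) ?? 0;
  }
}
```

This is also why high-cardinality labels (user IDs, full URLs) are dangerous: every distinct combination becomes a separate time series.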
Deploying everything with Docker Compose ensures consistency across environments:
version: '3.8'
services:
  rabbitmq:
    image: rabbitmq:3-management
    ports:
      - "5672:5672"
      - "15672:15672"
  mongodb:
    image: mongo:7 # user-service depends on this service; image tag is an assumption
    ports:
      - "27017:27017"
  user-service:
    build: ./packages/user-service
    depends_on:
      - rabbitmq
      - mongodb
    environment:
      RABBITMQ_URL: amqp://rabbitmq:5672
      MONGODB_URL: mongodb://mongodb:27017/users
This architecture provides the foundation for building scalable, maintainable systems. The event-driven approach allows each service to evolve independently while maintaining clear communication channels.
What challenges have you faced with microservices communication? I’d love to hear about your experiences and solutions. If you found this helpful, please share it with others who might benefit, and feel free to leave comments with your thoughts or questions.