Recently, I found myself architecting a distributed system that needed to handle unpredictable traffic spikes while maintaining data consistency across services. That’s when event-driven microservices with NestJS became my solution. Let me share how this approach transformed our system’s resilience and scalability. If you’ve ever struggled with tightly coupled services or brittle integrations, you’ll appreciate what comes next.
Event-driven systems communicate through messages rather than direct calls. When something significant occurs—like an order creation—a service publishes an event. Other services react independently. This pattern helps prevent cascading failures. Why? Because services don’t wait for immediate responses. They handle events when ready. Our e-commerce example uses three services: Order, Inventory, and Notification. Each focuses on one business capability.
Setting up our workspace is straightforward. We create a monorepo structure:
mkdir ecommerce-microservices
cd ecommerce-microservices
mkdir -p shared/events services/{order,inventory,notification}-service
npm init -y
npm install @nestjs/common @nestjs/core @nestjs/microservices @nestjs/mongoose mongoose rxjs reflect-metadata amqplib amqp-connection-manager
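Each service is a standard NestJS application. One way to scaffold them (assuming the Nest CLI; any NestJS bootstrap works) is:
cd services && npx @nestjs/cli new order-service
Repeat for the inventory and notification services. Note that the RabbitMQ transport used later relies on the amqplib and amqp-connection-manager packages installed above.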
Events are the backbone of our architecture. They’re immutable records with unique identifiers. Here’s how we define a base event class:
// shared/events/base.event.ts
import { randomUUID } from 'crypto';

export abstract class BaseEvent {
  public readonly eventId: string;
  public readonly eventType: string;
  public readonly timestamp: Date;

  constructor(eventType: string) {
    this.eventId = randomUUID(); // stable identifier enables deduplication and tracing
    this.eventType = eventType;
    this.timestamp = new Date();
  }
}
Domain-specific events extend this base. Notice how they capture essential business context:
// shared/events/order.events.ts
import { BaseEvent } from './base.event';

// Line-item shape assumed for this example; adapt to your domain
export interface OrderItem {
  productId: string;
  quantity: number;
  price: number;
}

export class OrderCreatedEvent extends BaseEvent {
  constructor(
    public readonly orderId: string,
    public readonly userId: string,
    public readonly items: OrderItem[],
    public readonly totalAmount: number
  ) {
    super('OrderCreated');
  }
}
Infrastructure setup is simplified with Docker Compose. This single file provisions RabbitMQ and MongoDB:
# docker-compose.yml
services:
  rabbitmq:
    image: rabbitmq:3-management
    ports: ["5672:5672", "15672:15672"]
    environment:
      RABBITMQ_DEFAULT_USER: admin
      RABBITMQ_DEFAULT_PASS: password
  mongodb:
    image: mongo:6
    ports: ["27017:27017"]
    environment:
      MONGO_INITDB_ROOT_USERNAME: admin
      MONGO_INITDB_ROOT_PASSWORD: password
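With the file in place, a single command brings everything up in the background:
docker compose up -d
RabbitMQ’s management UI is then reachable at http://localhost:15672, using the credentials from the compose file.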
Now, let’s build the Order Service. After initializing NestJS, we configure MongoDB and RabbitMQ connections:
// order-service/src/config/database.config.ts
import { MongooseModuleOptions } from '@nestjs/mongoose';

export const getDatabaseConfig = (): MongooseModuleOptions => ({
  uri: 'mongodb://admin:password@localhost:27017/orders?authSource=admin'
});

// order-service/src/config/rabbitmq.config.ts
import { RmqOptions, Transport } from '@nestjs/microservices';

export const getRabbitMQConfig = (): RmqOptions => ({
  transport: Transport.RMQ,
  options: {
    urls: ['amqp://admin:password@localhost:5672'],
    queue: 'order_queue',
    queueOptions: { durable: true }
  }
});
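These factories get wired into the root module. The original doesn’t show this file, so treat the following as a sketch (module and path names are assumptions):
// order-service/src/app.module.ts (sketch)
import { Module } from '@nestjs/common';
import { MongooseModule } from '@nestjs/mongoose';
import { getDatabaseConfig } from './config/database.config';
import { OrdersModule } from './orders/orders.module';

@Module({
  imports: [
    // forRootAsync accepts a factory returning MongooseModuleOptions
    MongooseModule.forRootAsync({ useFactory: getDatabaseConfig }),
    OrdersModule,
  ],
})
export class AppModule {}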
Our Order schema includes critical fields like status tracking and item details. How might we extend this for refund scenarios?
// order-service/src/schemas/order.schema.ts
import { Prop, Schema, SchemaFactory } from '@nestjs/mongoose';
import { OrderItem, OrderItemSchema } from './order-item.schema'; // sub-schema assumed to exist

export enum OrderStatus { PENDING = 'PENDING', CONFIRMED = 'CONFIRMED', CANCELLED = 'CANCELLED' }

@Schema({ timestamps: true })
export class Order {
  @Prop({ required: true })
  userId: string;
  @Prop({ enum: OrderStatus, default: OrderStatus.PENDING })
  status: string;
  @Prop({ type: [OrderItemSchema] })
  items: OrderItem[];
  @Prop({ required: true })
  totalAmount: number; // read later when publishing OrderCreatedEvent
}

export const OrderSchema = SchemaFactory.createForClass(Order);
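The schema then gets registered in the feature module, again a sketch since the original omits this file:
// order-service/src/orders/orders.module.ts (sketch)
import { Module } from '@nestjs/common';
import { MongooseModule } from '@nestjs/mongoose';
import { Order, OrderSchema } from '../schemas/order.schema';
import { OrdersService } from './orders.service';
import { EventPublisher } from '../events/event-publisher.service';

@Module({
  imports: [MongooseModule.forFeature([{ name: Order.name, schema: OrderSchema }])],
  providers: [OrdersService, EventPublisher],
})
export class OrdersModule {}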
Event publishing is handled through a dedicated service. It abstracts RabbitMQ interactions:
// order-service/src/events/event-publisher.service.ts
import { Injectable } from '@nestjs/common';
import { ClientProxy, ClientProxyFactory } from '@nestjs/microservices';
import { BaseEvent } from '../../../shared/events/base.event'; // adjust path/alias to your layout
import { getRabbitMQConfig } from '../config/rabbitmq.config';

@Injectable()
export class EventPublisher {
  private client: ClientProxy;

  constructor() {
    this.client = ClientProxyFactory.create(getRabbitMQConfig());
  }

  publish(event: BaseEvent) {
    // emit() is fire-and-forget; nothing waits on a consumer response
    this.client.emit(event.eventType, event);
  }
}
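The client connects lazily on the first emit. If you’d rather surface broker problems at startup, one option (not part of the original design, just a sketch) is to connect eagerly via a lifecycle hook:
// Inside EventPublisher (sketch): the class would implement OnApplicationBootstrap from @nestjs/common
async onApplicationBootstrap() {
  try {
    await this.client.connect(); // ClientProxy.connect() returns a Promise
  } catch (err) {
    // the retry/backoff logic mentioned later would hook in here
    console.error('RabbitMQ connection failed at startup', err);
  }
}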
When an order is created, we publish the event like this:
// order-service/src/orders/orders.service.ts
async createOrder(dto: CreateOrderDto) {
  const order = await this.orderModel.create(dto);
  this.eventPublisher.publish(
    new OrderCreatedEvent(order.id, order.userId, order.items, order.totalAmount)
  );
  return order;
}
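A thin HTTP controller is all that’s needed to expose this; the original doesn’t include one, so names here are assumptions:
// order-service/src/orders/orders.controller.ts (sketch)
import { Body, Controller, Post } from '@nestjs/common';
import { CreateOrderDto } from './dto/create-order.dto';
import { OrdersService } from './orders.service';

@Controller('orders')
export class OrdersController {
  constructor(private readonly ordersService: OrdersService) {}

  @Post()
  create(@Body() dto: CreateOrderDto) {
    return this.ordersService.createOrder(dto);
  }
}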
This approach ensures inventory and notification services react independently. What happens if RabbitMQ goes down temporarily? We implement retry logic later. For now, consider how this separation allows each service to evolve independently. The Inventory Service doesn’t care about order details—only relevant product and quantity data.
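The consumer side isn’t built in this section, but a minimal sketch makes the flow concrete (file names and handler logic are assumptions). One caveat with NestJS’s RMQ transport: emit() publishes to the queue named in the client config, so a listener must consume 'order_queue' to see this event; fanning one event out to several services usually means separate queues or explicit exchange bindings.
// inventory-service/src/main.ts (sketch)
import { NestFactory } from '@nestjs/core';
import { MicroserviceOptions, Transport } from '@nestjs/microservices';
import { AppModule } from './app.module';

async function bootstrap() {
  const app = await NestFactory.createMicroservice<MicroserviceOptions>(AppModule, {
    transport: Transport.RMQ,
    options: {
      urls: ['amqp://admin:password@localhost:5672'],
      queue: 'order_queue', // must match the publisher's queue
      queueOptions: { durable: true },
    },
  });
  await app.listen();
}
bootstrap();

// inventory-service/src/inventory/inventory.controller.ts (sketch)
import { Controller } from '@nestjs/common';
import { EventPattern, Payload } from '@nestjs/microservices';
import { OrderCreatedEvent } from '../../../shared/events/order.events';

@Controller()
export class InventoryController {
  @EventPattern('OrderCreated') // matches the eventType used by EventPublisher
  handleOrderCreated(@Payload() event: OrderCreatedEvent) {
    for (const item of event.items) {
      console.log(`Reserving ${item.quantity} x ${item.productId} for order ${event.orderId}`);
    }
  }
}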
I’ve seen teams reduce integration bugs by 70% with this pattern. The initial setup requires thought, but the long-term payoff in maintainability is substantial. What challenges have you faced with distributed systems? Share your experiences below—I’d love to hear how others tackle these problems. If this approach resonates with you, please like and share this with colleagues wrestling with similar architecture decisions.