I’ve been designing scalable systems for years, and recently faced a challenge that made me rethink traditional approaches. We needed an e-commerce platform handling 10,000+ transactions per minute with zero downtime during peak sales. That’s when I fully committed to event-driven microservices. Today, I’ll walk you through building this architecture using NestJS, RabbitMQ, and MongoDB - the stack that solved our real-world scalability problems. Follow along and you’ll gain practical insights you can apply immediately.
Our system comprises four core services communicating through events: an API Gateway for routing, User Service for profiles, Order Service for transactions, and Notification Service for alerts. They’re decoupled but coordinated through message patterns.
Consider how events flow through the system:
// Event definition
export interface UserCreatedEvent extends BaseEvent {
  type: 'USER_CREATED';
  payload: {
    userId: string;
    email: string;
    name: string;
  };
}

// Emitting in User Service
await eventEmitter.emit({
  type: 'USER_CREATED',
  payload: newUser
});

// Handling in Notification Service
@EventHandler('USER_CREATED')
handleUserCreated(event: UserCreatedEvent) {
  this.emailService.sendWelcome(event.payload.email);
}
Notice how services remain unaware of each other? That’s the power of event-driven design. But how do we handle service failures without cascading crashes? We’ll cover that shortly.
First, our environment setup. Using Docker Compose ensures consistency across development and production:
# docker-compose.yml
services:
  rabbitmq:
    image: rabbitmq:3-management
    ports: ["5672:5672", "15672:15672"]
  mongodb:
    image: mongo:6.0
    ports: ["27017:27017"]
Run docker-compose up and we've got our backbone services ready. For shared utilities like event handling, we create reusable packages:
// Shared Event Emitter
import { Injectable } from '@nestjs/common';
import { ClientProxy, ClientProxyFactory, Transport } from '@nestjs/microservices';
import { lastValueFrom } from 'rxjs';

@Injectable()
export class EventEmitterService {
  private client: ClientProxy;

  constructor() {
    this.client = ClientProxyFactory.create({
      transport: Transport.RMQ,
      options: {
        urls: ['amqp://localhost:5672'],
        queue: 'events_queue',
        queueOptions: { durable: true }
      }
    });
  }

  async emit(event: BaseEvent) {
    // ClientProxy.emit() returns an Observable; convert it to a Promise so
    // callers can actually await delivery to the broker
    await lastValueFrom(this.client.emit(event.type, event));
  }
}
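That covers the publishing side. On the consuming side, each service connects to the same queue as a NestJS microservice. Here's a minimal bootstrap sketch, assuming the standard RMQ transport and the shared events_queue:

// Consuming-service bootstrap (sketch) - e.g. the Notification Service
import { NestFactory } from '@nestjs/core';
import { MicroserviceOptions, Transport } from '@nestjs/microservices';
import { AppModule } from './app.module';

async function bootstrap() {
  const app = await NestFactory.createMicroservice<MicroserviceOptions>(AppModule, {
    transport: Transport.RMQ,
    options: {
      urls: ['amqp://localhost:5672'],
      queue: 'events_queue',
      queueOptions: { durable: true } // must match the producer's settings
    }
  });
  await app.listen(); // starts consuming events from the queue
}
bootstrap();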
Now to our API Gateway - the entry point. It handles authentication, routing, and request validation:
// Gateway bootstrap
async function bootstrap() {
  const app = await NestFactory.create(AppModule);
  app.useGlobalPipes(new ValidationPipe());
  await app.listen(3000);
}
bootstrap();

// Auth Guard
@Injectable()
export class AuthGuard implements CanActivate {
  constructor(private jwtService: JwtService) {}

  canActivate(context: ExecutionContext): boolean {
    const request = context.switchToHttp().getRequest();
    try {
      const token = request.headers.authorization.split(' ')[1];
      request.user = this.jwtService.verify(token);
      return true;
    } catch {
      return false;
    }
  }
}
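Request validation is enforced by the global ValidationPipe, which needs decorated DTOs to check against. Here's a sketch of CreateUserDto using class-validator; the exact rules are assumptions, but the fields mirror the UserCreatedEvent payload:

// CreateUserDto sketch - the specific validation rules are illustrative assumptions
import { IsEmail, IsString, MinLength } from 'class-validator';

export class CreateUserDto {
  @IsEmail()
  email: string;

  @IsString()
  @MinLength(2)
  name: string;
}

Because the pipe is registered globally in the gateway bootstrap, every route that accepts this DTO rejects malformed bodies before they reach a service.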
What happens when a user registers? The User Service creates the profile then emits an event:
// User Service Controller
@Post()
async createUser(@Body() createUserDto: CreateUserDto) {
  const user = await this.userService.create(createUserDto);
  await this.eventEmitter.emit({
    type: 'USER_CREATED',
    payload: user
  });
  return user;
}
The Order Service, meanwhile, listens for order events while managing its own MongoDB database:
// Order Event Handler
@EventHandler('ORDER_PLACED')
async handleOrderPlaced(event: OrderPlacedEvent) {
  const order = event.payload;
  await this.orderModel.create({
    ...order,
    status: 'PROCESSING'
  });
  // Initiate payment flow
  this.paymentService.charge(order.totalAmount);
}
But what if payment fails? We implement compensating actions:
@EventHandler('PAYMENT_FAILED')
async handlePaymentFailed(event: PaymentFailedEvent) {
  await this.orderModel.updateOne(
    { id: event.payload.orderId },
    { status: 'FAILED', reason: event.payload.reason }
  );
  // Trigger inventory compensation
  this.eventEmitter.emit({
    type: 'INVENTORY_RESTOCK',
    payload: event.payload.items
  });
}
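For reference, the order-flow events can be typed the same way as UserCreatedEvent. The payload fields below are inferred from the handlers above; anything beyond orderId, reason, items, and totalAmount is an assumption:

// Order-flow event definitions (sketch) - mirrors UserCreatedEvent; the
// nested item shape is an assumption for illustration
export interface OrderPlacedEvent extends BaseEvent {
  type: 'ORDER_PLACED';
  payload: {
    orderId: string;
    items: Array<{ productId: string; quantity: number }>;
    totalAmount: number;
  };
}

export interface PaymentFailedEvent extends BaseEvent {
  type: 'PAYMENT_FAILED';
  payload: {
    orderId: string;
    reason: string;
    items: Array<{ productId: string; quantity: number }>;
  };
}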
For database operations, we use repository patterns with Mongoose:
// Order Repository
@Injectable()
export class OrderRepository {
  constructor(@InjectModel(Order.name) private orderModel: Model<OrderDocument>) {}

  async create(orderData: Partial<Order>): Promise<Order> {
    const order = new this.orderModel(orderData);
    return order.save();
  }

  async findById(id: string): Promise<Order | null> {
    return this.orderModel.findById(id).lean();
  }
}
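For the injected model to resolve, the Order schema has to be registered in the service's module. A minimal wiring sketch, assuming a standard @Schema()-decorated Order class in ./schemas/order.schema:

// order.module.ts wiring sketch - assumes Order/OrderSchema come from a
// standard @Schema()-decorated class in ./schemas/order.schema
import { Module } from '@nestjs/common';
import { MongooseModule } from '@nestjs/mongoose';
import { Order, OrderSchema } from './schemas/order.schema';
import { OrderRepository } from './order.repository';

@Module({
  imports: [
    MongooseModule.forRoot('mongodb://localhost:27017/orders'), // each service owns its database
    MongooseModule.forFeature([{ name: Order.name, schema: OrderSchema }])
  ],
  providers: [OrderRepository],
  exports: [OrderRepository]
})
export class OrderModule {}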
Error handling deserves special attention. We implement retry mechanisms with exponential backoff:
// Message consumption with retries and exponential backoff
async processMessage(message: any) {
  let attempts = 0;
  const maxAttempts = 5;

  while (attempts < maxAttempts) {
    try {
      await this.handle(message); // delegate to the actual business handler
      return; // success - stop retrying
    } catch (error) {
      attempts++;
      const delay = Math.pow(2, attempts) * 1000; // 2s, 4s, 8s, 16s, 32s
      await new Promise(resolve => setTimeout(resolve, delay));
    }
  }

  // All retries exhausted - park the message for manual inspection
  this.deadLetterQueue.push(message);
}
Testing requires simulating event flows. We use Jest for integration tests:
// Testing event flow
describe('Order Flow', () => {
  it('should complete order cycle', async () => {
    // Publish user created event
    await eventEmitter.emit(mockUserCreatedEvent);

    // Publish order placed event
    await eventEmitter.emit(mockOrderEvent);

    // Verify payment processed
    await waitForEvent('PAYMENT_PROCESSED');

    const order = await orderRepository.findById('order123');
    expect(order.status).toBe('COMPLETED');
  });
});
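The waitForEvent helper isn't part of Jest; one possible sketch, assuming the test harness republishes handled events on a local Node.js EventEmitter:

// waitForEvent sketch - assumes the test harness republishes handled events
// on a local Node.js EventEmitter named testBus
import { EventEmitter } from 'events';

export const testBus = new EventEmitter();

export function waitForEvent(type: string, timeoutMs = 5000): Promise<unknown> {
  return new Promise((resolve, reject) => {
    const timer = setTimeout(
      () => reject(new Error(`Timed out waiting for ${type}`)),
      timeoutMs
    );
    testBus.once(type, (event) => {
      clearTimeout(timer);
      resolve(event);
    });
  });
}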
For deployment, we containerize each service independently. Kubernetes manages scaling:
# Sample Dockerfile
FROM node:18-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
CMD ["npm", "run", "start:prod"]
Performance tuning includes the following (a configuration sketch follows the list):
- RabbitMQ prefetch counts
- MongoDB indexing
- Connection pooling
- Caching frequent queries
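To make the first two items concrete: the prefetch count lives in the RMQ consumer options, and indexes are declared on the Mongoose schema. A minimal sketch with illustrative values rather than our production numbers:

// Tuning sketch - the prefetch value and index fields are illustrative,
// not production-tuned numbers
import { Transport } from '@nestjs/microservices';
import { OrderSchema } from './schemas/order.schema'; // assumed location

export const consumerOptions = {
  transport: Transport.RMQ,
  options: {
    urls: ['amqp://localhost:5672'],
    queue: 'events_queue',
    prefetchCount: 20, // cap unacknowledged messages per consumer
    queueOptions: { durable: true }
  }
};

// MongoDB indexing on the Mongoose schema
OrderSchema.index({ userId: 1, createdAt: -1 }); // speeds up "orders by user" queries
OrderSchema.index({ status: 1 });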
Common pitfalls I’ve encountered (an idempotency sketch follows the list):
- Event duplication - Always make handlers idempotent
- Message ordering - Use consistent hashing in RabbitMQ
- Debugging complexity - Implement correlation IDs
- Database schema drift - Maintain strict version control
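To illustrate the first and third pitfalls, here's a sketch of an idempotent handler that also logs a correlation ID. The eventId and correlationId fields and the processedEventsModel are assumptions for illustration, not part of the BaseEvent shown earlier:

// Idempotent handler with correlation ID (sketch) - eventId, correlationId
// and processedEventsModel are illustrative additions
@EventHandler('ORDER_PLACED')
async handleOrderPlaced(event: OrderPlacedEvent & { eventId: string; correlationId: string }) {
  // Idempotency: skip events we've already recorded (a unique index on
  // eventId guards against concurrent redeliveries)
  if (await this.processedEventsModel.exists({ eventId: event.eventId })) {
    this.logger.warn(`[${event.correlationId}] duplicate event ${event.eventId} ignored`);
    return;
  }

  await this.orderModel.create({ ...event.payload, status: 'PROCESSING' });
  await this.processedEventsModel.create({ eventId: event.eventId });

  this.logger.log(`[${event.correlationId}] order ${event.payload.orderId} processed`);
}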
After implementing this architecture, our system handled Black Friday traffic with zero downtime. Services scaled independently, failures isolated gracefully, and new features deployed without system-wide restarts.
This approach transformed how we build resilient systems. What challenges are you facing with microservices? Share your experiences below - I’d love to hear what solutions you’ve implemented. If this guide helped you, please like and share it with other developers facing similar architecture decisions.