I’ve been thinking about microservices a lot lately. As systems grow, the traditional request-response model starts showing cracks. Tight coupling between services creates bottlenecks. A single failure can cascade through the entire system. That’s why I turned to event-driven architecture with NestJS, RabbitMQ, and TypeScript. Let me show you how to build resilient, scalable systems that can handle real-world complexity. This approach has transformed how I design distributed systems.
Why RabbitMQ? It’s battle-tested for reliable messaging. Combined with NestJS’s clean architecture and TypeScript’s type safety, we get a powerful foundation. We’ll create three core services: order processing, payment handling, and notifications. Each will react to events independently. How do we ensure these services understand each other without direct dependencies? TypeScript interfaces become our shared language.
First, let’s set up our environment. I prefer Docker for consistency:
# Create our project structure
nest new order-service
nest new payment-service
nest new notification-service
# Docker setup for infrastructure
docker-compose up -d
Our docker-compose.yml defines RabbitMQ and Redis containers. Notice the management interface on port 15672 - it's invaluable for debugging.
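Here's a minimal docker-compose.yml along those lines - a sketch, not the exact file; the image tags are assumptions and the credentials simply match the connection string used later:

# docker-compose.yml (illustrative)
services:
  rabbitmq:
    image: rabbitmq:3-management
    ports:
      - "5672:5672"     # AMQP
      - "15672:15672"   # management UI
    environment:
      RABBITMQ_DEFAULT_USER: admin
      RABBITMQ_DEFAULT_PASS: admin123
  redis:
    image: redis:7
    ports:
      - "6379:6379"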
Shared contracts are crucial. Without them, services become chaotic. Here’s how I define core events:
// Shared event base class
import { v4 as uuidv4 } from 'uuid';

export abstract class EventBase {
  readonly eventId: string;
  readonly timestamp: Date = new Date();

  constructor() {
    this.eventId = uuidv4();
  }
}

// Order-specific event (OrderItem is defined alongside these contracts in the shared library)
export class OrderCreatedEvent extends EventBase {
  constructor(
    public readonly orderId: string,
    public readonly items: OrderItem[]
  ) { super(); }
}
These contracts live in a shared library. Every service uses the same definitions. Ever tried modifying events across multiple services? TypeScript catches mismatches at compile time.
Connecting to RabbitMQ needs robust configuration. Here’s my production-tested setup:
// RabbitMQ config module
import { Module } from '@nestjs/common';
import * as amqp from 'amqplib';

@Module({
  providers: [
    {
      provide: 'RABBITMQ_CONNECTION',
      useFactory: async () => {
        const connection = await amqp.connect('amqp://admin:admin123@localhost');
        // Hand out a single long-lived channel for the whole service
        return connection.createChannel();
      }
    }
  ],
  exports: ['RABBITMQ_CONNECTION']
})
export class RabbitMQModule {}
The factory opens one connection and exposes a single long-lived channel. Notice we're not creating channels per request - that would overwhelm RabbitMQ.
For the order service, event publishing looks like this:
// In OrderService
async createOrder(orderData: CreateOrderDto) {
  const order = await this.ordersRepository.create(orderData);

  // Publish event
  this.eventPublisher.publish(new OrderCreatedEvent(
    order.id,
    order.items
  ));

  return order;
}
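The eventPublisher injected above isn't shown in the snippets; here's a minimal sketch of what it could look like, assuming the channel from RabbitMQModule and a topic exchange named orders (the exchange name, routing-key scheme, and import path are my assumptions):

// Hypothetical EventPublisher built on the shared channel
import { Inject, Injectable } from '@nestjs/common';
import { Channel } from 'amqplib';
import { EventBase } from './events'; // path assumed

@Injectable()
export class EventPublisher {
  constructor(@Inject('RABBITMQ_CONNECTION') private readonly channel: Channel) {}

  publish(event: EventBase, routingKey = event.constructor.name) {
    // Persistent messages survive a broker restart
    this.channel.publish(
      'orders',
      routingKey,
      Buffer.from(JSON.stringify(event)),
      { persistent: true }
    );
  }
}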
The payment service listens for this event. But what if payment fails? We need to handle that gracefully:
// Payment service handler
@EventHandler(OrderCreatedEvent)
async handleOrderCreated(event: OrderCreatedEvent) {
  try {
    const payment = await this.paymentProcessor.charge(event);
    this.eventPublisher.publish(new PaymentCompletedEvent(payment.id));
  } catch (error) {
    this.eventPublisher.publish(new PaymentFailedEvent(
      event.orderId,
      error.message
    ));
  }
}
Notice the try/catch? Without it, failures disappear silently. The notification service then handles both success and failure events.
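The notification handlers themselves aren't shown; a minimal sketch in the same style might look like this (the notifier methods and event field names are my assumptions):

// Notification service handlers (illustrative)
@EventHandler(PaymentCompletedEvent)
async handlePaymentCompleted(event: PaymentCompletedEvent) {
  await this.notifier.sendOrderConfirmation(event.paymentId); // field name assumed
}

@EventHandler(PaymentFailedEvent)
async handlePaymentFailed(event: PaymentFailedEvent) {
  await this.notifier.sendPaymentFailure(event.orderId, event.reason); // reason field assumed
}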
Dead letter queues save us during outages. Here’s how I configure them:
// Queue setup with DLQ
await channel.assertExchange('orders', 'topic', { durable: true });

// Dead-letter exchange and queue to catch failed messages (names here are illustrative)
await channel.assertExchange('orders.dlx', 'fanout', { durable: true });
await channel.assertQueue('orders.dlq', { durable: true });
await channel.bindQueue('orders.dlq', 'orders.dlx', '');

await channel.assertQueue('orders.payments', {
  durable: true,
  deadLetterExchange: 'orders.dlx'
});
When a consumer rejects a message without requeueing (or the message expires), RabbitMQ routes it to the DLX, where we can inspect it later. How many times have you lost critical messages during incidents?
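For completeness, here's roughly what the consuming side looks like - the retry bookkeeping is omitted, and the handler call is a stand-in for the payment handler shown earlier:

// A nack with requeue=false dead-letters the message to 'orders.dlx'
await channel.consume('orders.payments', async (msg) => {
  if (!msg) return;
  try {
    const event = JSON.parse(msg.content.toString());
    await handleOrderCreated(event);
    channel.ack(msg);
  } catch (err) {
    channel.nack(msg, false, false); // don't requeue -> goes to the DLX
  }
});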
Sagas manage distributed transactions. Consider an order flow:
// Order saga coordinator
import { Injectable } from '@nestjs/common';
import { ICommand, Saga, ofType } from '@nestjs/cqrs';
import { Observable } from 'rxjs';
import { map } from 'rxjs/operators';

@Injectable()
export class OrderSaga {
  @Saga()
  onOrderCreated = (events$: Observable<any>): Observable<ICommand> =>
    events$.pipe(
      ofType(OrderCreatedEvent),
      map(event => new ProcessPaymentCommand(event.orderId))
    );
}
This coordinates payment processing after order creation. Without sagas, we’d need complex callback chains.
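ProcessPaymentCommand and its handler aren't defined in the snippets above; a minimal @nestjs/cqrs sketch (names and wiring are my assumptions) would be:

// Command dispatched by the saga, and a handler skeleton for it (illustrative)
import { CommandHandler, ICommandHandler } from '@nestjs/cqrs';

export class ProcessPaymentCommand {
  constructor(public readonly orderId: string) {}
}

@CommandHandler(ProcessPaymentCommand)
export class ProcessPaymentHandler implements ICommandHandler<ProcessPaymentCommand> {
  async execute(command: ProcessPaymentCommand) {
    // Delegate to the payment-charging logic shown earlier (details omitted)
  }
}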
Testing event-driven systems requires special attention. I use this pattern:
// Event handler test
it('should process payment on order created', async () => {
  const event = new OrderCreatedEvent('order-123', []);

  await handler.handleOrderCreated(event);

  // The handler charges with the full event, so assert on it directly
  expect(mockPaymentProcessor.charge).toHaveBeenCalledWith(event);
});
We verify events trigger the right actions. Without these tests, debugging becomes nightmarish.
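The handler and mockPaymentProcessor wiring is left out above; one way to set it up with @nestjs/testing (class and token names are my assumptions) is:

// Test module wiring (illustrative)
import { Test } from '@nestjs/testing';

beforeEach(async () => {
  mockPaymentProcessor = { charge: jest.fn().mockResolvedValue({ id: 'pay-1' }) };

  const moduleRef = await Test.createTestingModule({
    providers: [
      PaymentEventHandler, // the class hosting handleOrderCreated - name assumed
      { provide: PaymentProcessor, useValue: mockPaymentProcessor },
      { provide: EventPublisher, useValue: { publish: jest.fn() } },
    ],
  }).compile();

  handler = moduleRef.get(PaymentEventHandler);
});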
For monitoring, I combine RabbitMQ’s management UI with OpenTelemetry. Tracing events across services reveals bottlenecks. Have you ever chased a phantom performance issue? Distributed tracing makes it tangible.
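A minimal tracing bootstrap for each service, assuming the standard OpenTelemetry Node packages (package choices and the exporter endpoint are assumptions, not part of the original setup):

// tracing.ts - start this before bootstrapping the Nest app
import { NodeSDK } from '@opentelemetry/sdk-node';
import { getNodeAutoInstrumentations } from '@opentelemetry/auto-instrumentations-node';
import { OTLPTraceExporter } from '@opentelemetry/exporter-trace-otlp-http';

const sdk = new NodeSDK({
  serviceName: 'order-service',
  traceExporter: new OTLPTraceExporter(),
  // Auto-instrumentation covers amqplib, so trace context follows messages across services
  instrumentations: [getNodeAutoInstrumentations()],
});

sdk.start();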
Common pitfalls? Message ordering catches many developers. RabbitMQ guarantees per-queue order, but across services? Use versioning in events. Another gotcha: overusing events for synchronous needs. Sometimes HTTP calls are simpler.
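On the versioning point, one lightweight option is a version field on the event itself, so consumers can branch on old shapes - a sketch, not a prescription:

// Versioned event sketch
export class OrderCreatedEventV2 extends EventBase {
  readonly version = 2;

  constructor(
    public readonly orderId: string,
    public readonly items: OrderItem[],
    public readonly currency: string // hypothetical new field that motivated the bump
  ) { super(); }
}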
Alternatives exist. Kafka excels at high-throughput logging. SQS works for AWS-centric shops. But RabbitMQ strikes the best balance for most cases. Its management interface and stability won me over.
Building this changed how I view distributed systems. The decoupling lets teams move faster. Failures become isolated. Scaling feels natural. I’m now applying this pattern to other projects.
If you’ve struggled with microservice coordination, try this approach. What challenges have you faced with distributed systems? Share your experiences below. Found this useful? Like and share to help others discover it. Let’s discuss in the comments!