I’ve been thinking about distributed systems lately. Not just in theory, but how we build them in practice. How do we ensure that when one service talks to another, they actually understand each other? That messages don’t get lost in translation? This led me down a path of combining NestJS, RabbitMQ, and Prisma to create something robust, type-safe, and genuinely reliable.
The core idea is simple: services should communicate through events, not direct calls. This loose coupling lets each service focus on its job without worrying about its neighbors. But how do we make sure an “order created” event from the order service is exactly what the inventory service expects? That’s where TypeScript and shared contracts come in.
Let’s start by defining what our events look like. We create a shared package that every service depends on. This package contains type definitions for every event that will travel through our system.
// In our shared package
export interface OrderCreatedEvent {
  eventType: 'ORDER_CREATED';
  data: {
    orderId: string;
    userId: string;
    items: Array<{
      productId: string;
      quantity: number;
    }>;
    totalAmount: number;
  };
}
This might seem like extra work, but have you ever spent hours debugging because two services had slightly different interpretations of the same data? This approach eliminates that problem completely. The compiler becomes your first line of defense.
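To scale this beyond a single event, the shared package can expose a discriminated union over eventType. The sketch below repeats OrderCreatedEvent and adds a hypothetical OrderCancelledEvent (not in the original package) to show how the compiler narrows the payload in each branch:

```typescript
interface OrderCreatedEvent {
  eventType: 'ORDER_CREATED';
  data: {
    orderId: string;
    userId: string;
    items: Array<{ productId: string; quantity: number }>;
    totalAmount: number;
  };
}

// Hypothetical second event, added here only to illustrate the union
interface OrderCancelledEvent {
  eventType: 'ORDER_CANCELLED';
  data: { orderId: string; reason: string };
}

// The eventType literal acts as the discriminant: inside each switch branch
// the compiler knows exactly which `data` shape it is dealing with
type AppEvent = OrderCreatedEvent | OrderCancelledEvent;

function describeEvent(event: AppEvent): string {
  switch (event.eventType) {
    case 'ORDER_CREATED':
      return `order ${event.data.orderId} created with ${event.data.items.length} item(s)`;
    case 'ORDER_CANCELLED':
      return `order ${event.data.orderId} cancelled: ${event.data.reason}`;
  }
}
```

If a new member is added to AppEvent but a consumer's switch doesn't handle it, the missing return path surfaces as a compile error, so every consumer is forced to take a position on every event.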
Now, let’s set up our message broker. RabbitMQ gives us the reliability we need with features like persistent messages and acknowledgments. In NestJS, we use the @nestjs/microservices package to create a clean abstraction.
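Before the controller can inject a ClientProxy, the client has to be registered against the broker. A minimal sketch, assuming a local RabbitMQ instance and a hypothetical 'RABBITMQ_SERVICE' injection token:

```typescript
import { Module } from '@nestjs/common';
import { ClientsModule, Transport } from '@nestjs/microservices';

@Module({
  imports: [
    ClientsModule.register([
      {
        name: 'RABBITMQ_SERVICE', // injection token used by the controller
        transport: Transport.RMQ,
        options: {
          urls: ['amqp://localhost:5672'],
          queue: 'orders',
          queueOptions: { durable: true }, // survive broker restarts
        },
      },
    ]),
  ],
})
export class OrdersModule {}
```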
// In our order service
@Controller()
export class OrdersController {
  constructor(
    private readonly ordersService: OrdersService,
    // Token name matches the ClientsModule registration
    @Inject('RABBITMQ_SERVICE') private readonly rabbitmqClient: ClientProxy,
  ) {}

  @Post()
  async createOrder(@Body() createOrderDto: CreateOrderDto) {
    const order = await this.ordersService.create(createOrderDto);
    const event: OrderCreatedEvent = {
      eventType: 'ORDER_CREATED',
      data: {
        orderId: order.id,
        userId: order.userId,
        items: order.items,
        totalAmount: order.totalAmount,
      },
    };
    this.rabbitmqClient.emit('order.created', event);
    return order;
  }
}
Notice how we’re using our shared OrderCreatedEvent interface. The compiler will complain if we try to send malformed data. This is type safety in action.
But what happens when the inventory service receives this event? Let’s look at that side.
// In our inventory service
// emit() on the producer side pairs with @EventPattern, not @MessagePattern
@EventPattern('order.created')
async handleOrderCreated(@Payload() event: OrderCreatedEvent) {
  try {
    await this.inventoryService.reserveItems(
      event.data.orderId,
      event.data.items,
    );
  } catch (error) {
    // Reservation failed: publish a compensating event instead of crashing
    const failedEvent: InventoryReservationFailedEvent = {
      eventType: 'INVENTORY_RESERVATION_FAILED',
      data: {
        orderId: event.data.orderId,
        reason: error.message,
      },
    };
    this.rabbitmqClient.emit('inventory.reservation_failed', failedEvent);
  }
}
This pattern creates a clear flow: order creation triggers inventory reservation, which might succeed or fail. Each outcome produces a new event that other services can react to.
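The failure path above references an InventoryReservationFailedEvent that would live in the same shared package. Its shape, inferred directly from the handler code:

```typescript
// In our shared package, alongside OrderCreatedEvent
export interface InventoryReservationFailedEvent {
  eventType: 'INVENTORY_RESERVATION_FAILED';
  data: {
    orderId: string;
    reason: string; // taken from the thrown error's message
  };
}
```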
Now, let’s talk about data persistence. Prisma gives us type-safe database access that perfectly complements our event-driven approach.
// Our Prisma schema for the order service

// Status values here are illustrative
enum OrderStatus {
  PENDING
  CONFIRMED
  CANCELLED
}

model Order {
  id        String      @id @default(cuid())
  userId    String
  status    OrderStatus
  total     Decimal
  createdAt DateTime    @default(now())
  items     OrderItem[]
}

model OrderItem {
  id        String  @id @default(cuid())
  orderId   String
  order     Order   @relation(fields: [orderId], references: [id])
  productId String
  quantity  Int
  price     Decimal
}
The beauty here is that our database operations become as type-safe as our events. Try to create an order without a userId? The compiler will stop you. Try to update an order with an invalid status? Again, caught at compile time.
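Concretely, here is what a type-checked write against that schema looks like. A sketch, assuming the generated Prisma client and an OrderStatus enum that includes PENDING; a typo in any field name or a wrong type would fail to compile:

```typescript
import { PrismaClient, OrderStatus } from '@prisma/client';

const prisma = new PrismaClient();

// Nested create: the order and its items are written atomically, and
// every field is checked against the generated types at compile time
async function createOrder(userId: string) {
  return prisma.order.create({
    data: {
      userId,
      status: OrderStatus.PENDING, // assumes the enum defines PENDING
      total: 42.5,
      items: {
        create: [{ productId: 'prod_1', quantity: 2, price: 21.25 }],
      },
    },
    include: { items: true }, // return the order together with its items
  });
}
```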
But what about errors? In distributed systems, things will fail. Services go down, networks hiccup, databases become unavailable. We need to handle these gracefully.
RabbitMQ’s dead letter exchanges are perfect for this. When a message can’t be processed after several attempts, we move it to a separate queue for manual inspection.
// Setting up a dead letter exchange
async setupDeadLetterQueue() {
  // A topic exchange lets the '#' binding catch every routing key
  await this.channel.assertExchange('dlx', 'topic');
  await this.channel.assertQueue('dead_letters');
  await this.channel.bindQueue('dead_letters', 'dlx', '#');

  // Main queue with DLX configuration; dead-lettered messages keep
  // their original routing key
  await this.channel.assertQueue('orders', {
    deadLetterExchange: 'dlx',
    messageTtl: 60000 // expire unprocessed messages after 1 minute
  });
}
This setup ensures that problematic messages don’t block our queues indefinitely. They get moved aside after a timeout, letting normal processing continue.
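On the consumer side, the piece that actually sends a message to the DLX is a negative acknowledgment with requeue disabled. A sketch using the same amqplib channel as the setup above, with a hypothetical handleOrder standing in for real processing:

```typescript
// Reject poison messages without requeueing so RabbitMQ routes them
// to the dead letter exchange configured on the 'orders' queue
await this.channel.consume('orders', async (msg) => {
  if (!msg) return;
  try {
    await handleOrder(JSON.parse(msg.content.toString()));
    this.channel.ack(msg);
  } catch (error) {
    // second arg: allUpTo = false, third arg: requeue = false
    this.channel.nack(msg, false, false);
  }
});
```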
Monitoring is another critical piece. How do we know if events are flowing correctly? We add metadata to every event.
interface EventMetadata {
  source: string;
  traceId: string;
  timestamp: Date; // serialized as an ISO string on the wire
}

// Enhanced event interface
interface EnhancedEvent extends BaseEvent {
  metadata: EventMetadata;
}
With this metadata, we can trace an event’s journey through the system. We can see which service created it, how long it took to process, and where any bottlenecks might be.
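A small helper (hypothetical, not part of any library here) makes stamping this metadata uniform across services:

```typescript
import { randomUUID } from 'crypto';

// Attach metadata to any outgoing event; the traceId either starts a new
// trace or is copied from the incoming event that caused this one
function withMetadata<T extends object>(event: T, source: string, traceId?: string) {
  return {
    ...event,
    metadata: {
      source,
      traceId: traceId ?? randomUUID(),
      timestamp: new Date(),
    },
  };
}

// Example: the order service tagging its outgoing event
const traced = withMetadata({ eventType: 'ORDER_CREATED' }, 'order-service');
```

Propagating the incoming traceId instead of generating a fresh one is what links the whole chain (order created, inventory reserved, and so on) into a single trace.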
Testing event-driven systems requires a different approach. We need to verify that events are published correctly and that handlers respond appropriately.
// Testing our order creation
it('should publish ORDER_CREATED event', async () => {
  // testOrder / testOrderData: fixtures defined elsewhere in the spec
  const ordersService = { create: jest.fn().mockResolvedValue(testOrder) };
  const rabbitmqClient = { emit: jest.fn() };
  const controller = new OrdersController(ordersService as any, rabbitmqClient as any);

  await controller.createOrder(testOrderData);

  expect(rabbitmqClient.emit).toHaveBeenCalledWith(
    'order.created',
    expect.objectContaining({
      eventType: 'ORDER_CREATED',
      data: expect.objectContaining({
        orderId: expect.any(String)
      })
    })
  );
});
These tests give us confidence that our events are shaped correctly and sent to the right places.
As I built this system, I kept thinking: how many production issues could we avoid if we caught type mismatches at compile time rather than runtime? The answer is probably “most of them.”
The combination of NestJS’s structured approach, RabbitMQ’s reliability, Prisma’s type safety, and TypeScript’s compile-time checks creates a foundation that’s both flexible and robust. We get the benefits of microservices without the common pitfalls.
But this is just the beginning. There are always more patterns to explore, more edge cases to handle, more optimizations to make. What interesting challenges have you faced in your distributed systems? How did you solve them?
If this approach resonates with you, I’d love to hear your thoughts. Share your experiences in the comments, and if you found this useful, pass it along to others who might benefit.