The other day, I was debugging a tangled web of API calls between services. One failed, and everything fell apart. That moment of cascade failure made me think: there has to be a cleaner, more resilient way for services to talk. That’s when the idea of events clicked. Instead of services yelling at each other and waiting for a reply, what if they just whispered what happened and moved on? This led me down the path of building microservices that communicate through events, with a strong emphasis on type safety from start to finish. If you’ve ever felt the pain of a distributed system failure due to a missed data field or a broken contract, this is for you. Let’s build something better.
Think of an event as a simple, immutable fact. “User X registered.” “Order Y was placed.” It’s a record of something that already occurred. Services that care about these facts can listen for them and act independently. This pattern is powerful. It lets each part of your system focus on its own job without needing intimate knowledge of others. What happens if the inventory service is temporarily down? In a traditional setup, the order would fail. In an event-driven system, the order can be placed, and the inventory update happens when that service is back online.
This loose coupling is the main draw. But to build it reliably, we need the right tools. I chose NestJS for its structured approach, RabbitMQ for dependable message delivery, and Prisma to ensure the data layer is just as type-safe as the application code. This combination catches errors at compile time, not in production.
Let’s start with the foundation: defining our events. With TypeScript, we can create clear contracts.
// A shared event definition
export class OrderCreatedEvent {
  constructor(
    public readonly orderId: string,
    public readonly userId: string,
    public readonly items: Array<{ productId: string; quantity: number }>,
    public readonly timestamp: Date,
  ) {}
}
Notice this is a plain class. It’s a data container with a strict shape. Any service publishing or consuming this event must adhere to this structure. This is our first layer of safety. How do we ensure these events are actually delivered, even if a service restarts? That’s where a message broker like RabbitMQ comes in.
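Before wiring up the broker, one caveat about these contracts: TypeScript types are erased at runtime, and JSON.parse returns untyped data, so nothing stops a malformed payload from slipping through on the consumer side. A minimal hand-rolled guard might look like this (a sketch; libraries such as zod or class-validator do this more thoroughly, and parseOrderCreated is a hypothetical helper name):

```typescript
// Consumer-side shape of OrderCreatedEvent after a JSON round trip.
// Note: JSON.stringify turns Date into an ISO string, so we revive it here.
interface OrderCreatedPayload {
  orderId: string;
  userId: string;
  items: Array<{ productId: string; quantity: number }>;
  timestamp: Date;
}

function parseOrderCreated(raw: string): OrderCreatedPayload {
  const data = JSON.parse(raw);
  if (
    typeof data?.orderId !== 'string' ||
    typeof data?.userId !== 'string' ||
    !Array.isArray(data?.items) ||
    typeof data?.timestamp !== 'string'
  ) {
    throw new Error('Malformed OrderCreatedEvent payload');
  }
  // Revive the timestamp into a real Date object
  return { ...data, timestamp: new Date(data.timestamp) };
}
```

A fuller version would also validate each element of items, but even this much turns a silent data corruption into a loud, early failure.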
Setting up a connection to RabbitMQ in a NestJS service is straightforward. We can encapsulate the logic in a dedicated module.
// A RabbitMQ service wrapper
import { connect, Channel, Connection } from 'amqplib';

export class MessageBusService {
  private connection: Connection;
  private channel: Channel;

  async connect(uri: string) {
    this.connection = await connect(uri);
    this.channel = await this.connection.createChannel();
  }

  async publish(exchange: string, routingKey: string, message: Buffer) {
    // A durable exchange plus persistent messages survive a broker restart
    await this.channel.assertExchange(exchange, 'topic', { durable: true });
    this.channel.publish(exchange, routingKey, message, { persistent: true });
  }
}
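A quick aside on the 'topic' exchange type used above: RabbitMQ matches each message's routing key against the binding patterns of the queues, where `*` stands for exactly one dot-separated word and `#` for zero or more. The matching happens inside the broker, but a toy re-implementation makes the semantics concrete (illustrative only, not part of the service code):

```typescript
// Illustrative re-implementation of RabbitMQ topic matching:
// '*' matches exactly one word, '#' matches zero or more words.
function topicMatches(pattern: string, routingKey: string): boolean {
  const p = pattern.split('.');
  const k = routingKey.split('.');

  const match = (pi: number, ki: number): boolean => {
    if (pi === p.length) return ki === k.length;
    if (p[pi] === '#') {
      // '#' can absorb zero or more of the remaining words
      for (let skip = ki; skip <= k.length; skip++) {
        if (match(pi + 1, skip)) return true;
      }
      return false;
    }
    if (ki === k.length) return false;
    return (p[pi] === '*' || p[pi] === k[ki]) && match(pi + 1, ki + 1);
  };

  return match(0, 0);
}
```

So a queue bound with `order.*` receives `order.created` and `order.cancelled` but not `order.created.v2`, while `order.#` would receive all three.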
The broker handles the routing. Our services only know about exchanges and routing keys, not about each other’s network locations. But what about the data going into our databases? This is where Prisma transforms the workflow. Its schema file is the single source of truth for your database shape.
// In an Order service's Prisma schema
model Order {
  id        String   @id @default(cuid())
  userId    String
  status    String
  total     Decimal
  createdAt DateTime @default(now())

  @@map("orders")
}
When you run prisma generate, it creates a TypeScript client that knows the exact shape of your Order model. Every database query is checked for type correctness. Imagine passing a number where the userId field expects a string; TypeScript will flag it immediately. This eliminates a whole category of runtime data errors.
Now, let’s bring it together in a service. An Order service receives an HTTP request to create an order. It saves the order to its own database using Prisma, then publishes an event—without waiting for anyone to acknowledge it.
// In an OrderService
async createOrder(dto: CreateOrderDto) {
  // 1. Type-safe DB operation with Prisma
  const order = await this.prisma.order.create({
    data: {
      userId: dto.userId,
      status: 'PENDING',
      total: dto.total,
    },
  });

  // 2. Publish a type-safe event
  const event = new OrderCreatedEvent(order.id, order.userId, dto.items, new Date());
  await this.messageBus.publish(
    'commerce',
    'order.created',
    Buffer.from(JSON.stringify(event)),
  );

  return order;
}
Meanwhile, an Inventory service is quietly listening on the order.created routing key. It consumes the event, parses the JSON back into a known TypeScript object, and tries to reserve the items. If something goes wrong—maybe stock is insufficient—it can publish a compensating event like InventoryReservationFailed. The Order service can then listen for that and update the order status accordingly. This is how we manage workflows across service boundaries.
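The inventory side of that flow can be sketched with plain in-memory state. This is a toy sketch: InventoryHandler and the event names other than OrderCreatedEvent are hypothetical, and a real service would read stock from its own database rather than a Map.

```typescript
// Hypothetical inventory handler: consumes order.created, reserves stock,
// and returns a compensating event when reservation is impossible.
type InventoryEvent =
  | { type: 'InventoryReserved'; orderId: string }
  | { type: 'InventoryReservationFailed'; orderId: string; reason: string };

class InventoryHandler {
  constructor(private stock: Map<string, number>) {}

  handleOrderCreated(event: {
    orderId: string;
    items: Array<{ productId: string; quantity: number }>;
  }): InventoryEvent {
    // Check every line item before mutating anything
    for (const item of event.items) {
      if ((this.stock.get(item.productId) ?? 0) < item.quantity) {
        return {
          type: 'InventoryReservationFailed',
          orderId: event.orderId,
          reason: `Insufficient stock for ${item.productId}`,
        };
      }
    }
    for (const item of event.items) {
      this.stock.set(item.productId, this.stock.get(item.productId)! - item.quantity);
    }
    return { type: 'InventoryReserved', orderId: event.orderId };
  }
}
```

In the real service, the returned event would be published back onto the bus, and the Order service would listen for it to move the order to CONFIRMED or FAILED.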
This approach raises interesting questions. For instance, how do you track a business process that now spans multiple autonomous services? You might rely on correlation IDs passed within events or implement a Saga pattern, where each step triggers the next through events. The key is that no single service is in charge of the entire transaction.
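Correlation IDs are usually carried in a thin envelope around the event payload, copied from each incoming message to every outgoing one so all the steps of one business transaction can be stitched together in logs. A minimal sketch, where the envelope shape and helper names are assumptions rather than any standard:

```typescript
import { randomUUID } from 'crypto';

// Hypothetical envelope: correlationId ties together every event
// that belongs to the same business transaction.
interface EventEnvelope<T> {
  correlationId: string;
  type: string;
  payload: T;
}

// The service that starts a flow mints a fresh correlation ID.
function startFlow<T>(type: string, payload: T): EventEnvelope<T> {
  return { correlationId: randomUUID(), type, payload };
}

// A service reacting to an event reuses the incoming correlation ID.
function continueFlow<T>(
  incoming: EventEnvelope<unknown>,
  type: string,
  payload: T,
): EventEnvelope<T> {
  return { correlationId: incoming.correlationId, type, payload };
}
```

Searching your centralized logs for one correlationId then reconstructs the entire saga, hop by hop, across service boundaries.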
Testing these flows is different from testing monolithic apps. You’ll want to test each service in isolation, mocking the message bus, and also perform integration tests with a real, lightweight broker. Focus on ensuring your service reacts correctly to the events it receives and publishes the expected events in response.
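For the isolation tests, the bus can be replaced with an in-memory fake that records everything published. A small sketch, assuming the MessageBusService publish signature shown earlier:

```typescript
// In-memory stand-in for the message bus, for unit tests. It records every
// publish so assertions can inspect exactly what the service emitted.
class FakeMessageBus {
  public published: Array<{ exchange: string; routingKey: string; body: string }> = [];

  async publish(exchange: string, routingKey: string, message: Buffer): Promise<void> {
    this.published.push({ exchange, routingKey, body: message.toString() });
  }
}
```

Inject this in place of the real MessageBusService when constructing the service under test, then assert that `published` contains an order.created entry with the expected payload.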
Building systems this way changes how you think about design. You start modeling based on domain events—the important things that happen in your business. The services and databases become secondary, organized around these events. It requires discipline, but the payoff is a system that is far more adaptable. You can add a new service that listens to existing events without modifying the publishers. You can scale a service that’s under heavy load by adding more instances that pull from the same queue.
Does this mean eventual consistency is always the right choice? Not necessarily. For features that need immediate, strong consistency, a direct API call might still be the simplest solution. The art is in choosing the right pattern for each interaction.
I’ve found that investing in this type-safe, event-driven foundation dramatically reduces bugs and makes complex systems easier to reason about. The compiler becomes your ally, and the message broker provides a reliable nervous system for your application. It turns a distributed system from a fragile house of cards into a robust, flexible organism.
I hope this walkthrough gives you a practical starting point. The shift in mindset is as important as the technology. Start small, model your core business events, and let the events guide your architecture. If you found this perspective helpful, please share it with a colleague who might be wrestling with microservice communication. Have you tried a similar approach? What challenges did you face? Let me know in the comments—I’d love to hear about your experiences.