I’ve been thinking a lot about how modern applications need to handle complexity while remaining maintainable. Lately, I’ve been exploring how to build systems that scale gracefully without sacrificing developer experience. That’s what led me to combine NestJS, RabbitMQ, and Prisma for type-safe event-driven microservices.
When you’re building distributed systems, how do you ensure that services communicate reliably while maintaining type safety across service boundaries? This challenge is what makes event-driven architecture so compelling.
Let me show you how I approach building these systems. We’ll create a simple e-commerce platform with user and order services that communicate through events.
First, let’s set up our shared types. This is crucial for maintaining consistency across services:
// shared/events/user-events.ts
export class UserCreatedEvent {
  constructor(
    public readonly userId: string,
    public readonly email: string,
    public readonly name: string,
    public readonly timestamp: Date
  ) {}
}

export class UserUpdatedEvent {
  constructor(
    public readonly userId: string,
    public readonly email?: string,
    public readonly name?: string
  ) {}
}
Notice how we’re using classes instead of plain objects. Interfaces and type aliases are erased at compile time, but classes survive into the emitted JavaScript, so consumers can use instanceof checks and attach validation at runtime alongside TypeScript’s compile-time safety.
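To make that concrete, here is a small sketch of what class-based events buy you on the consuming side. The `rehydrateUserCreated` helper is hypothetical (it isn’t part of NestJS or the shared package above), and the event class is repeated so the snippet is self-contained:

```typescript
// Repeated from shared/events/user-events.ts so this snippet stands alone.
export class UserCreatedEvent {
  constructor(
    public readonly userId: string,
    public readonly email: string,
    public readonly name: string,
    public readonly timestamp: Date
  ) {}
}

// Rebuild a class instance from the plain JSON that arrives off the wire.
// Because the event is a class, the result supports instanceof checks,
// which a plain object literal or interface cannot offer at runtime.
export function rehydrateUserCreated(raw: any): UserCreatedEvent {
  if (
    typeof raw?.userId !== "string" ||
    typeof raw?.email !== "string" ||
    typeof raw?.name !== "string"
  ) {
    throw new Error("Malformed user.created payload");
  }
  return new UserCreatedEvent(raw.userId, raw.email, raw.name, new Date(raw.timestamp));
}
```

A consumer can then reject malformed payloads at the boundary instead of failing deep inside business logic.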
Why do we need separate databases for each service? Because true microservices require data autonomy. Each service owns its data, preventing tight coupling that can cripple scalability.
Here’s how I set up the User service with Prisma:
// user-service/src/users/users.service.ts
@Injectable()
export class UsersService {
  constructor(
    private prisma: PrismaService,
    // The publisher must be injected here; it wraps the RabbitMQ client.
    private eventPublisher: EventPublisher
  ) {}

  async createUser(createUserDto: CreateUserDto) {
    const user = await this.prisma.user.create({
      data: {
        email: createUserDto.email,
        name: createUserDto.name,
        password: await hash(createUserDto.password, 12)
      }
    });

    // Publish event after successful creation
    const event = new UserCreatedEvent(
      user.id,
      user.email,
      user.name,
      new Date()
    );
    await this.eventPublisher.publish('user.created', event);

    return user;
  }
}
The beauty of this approach is that the User service doesn’t need to know who cares about new users. It simply announces the event and moves on.
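That decoupling is easy to demonstrate without any broker at all. The in-memory bus below is a hypothetical illustration of the pattern, not RabbitMQ or the article’s `EventPublisher` — the point is that the publisher never holds a reference to its subscribers:

```typescript
// Minimal in-memory sketch of publish/subscribe decoupling.
type Handler<T> = (event: T) => void;

class InMemoryEventBus {
  private handlers = new Map<string, Handler<unknown>[]>();

  subscribe<T>(pattern: string, handler: Handler<T>): void {
    const list = this.handlers.get(pattern) ?? [];
    list.push(handler as Handler<unknown>);
    this.handlers.set(pattern, list);
  }

  publish<T>(pattern: string, event: T): void {
    // Fire and forget: zero, one, or many consumers may react,
    // and the publisher neither knows nor cares which.
    for (const handler of this.handlers.get(pattern) ?? []) {
      handler(event);
    }
  }
}
```

Swapping this for RabbitMQ adds durability and cross-process delivery, but the publishing code keeps the same shape.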
But what happens when the Order service goes down? With durable queues and persistent messages, RabbitMQ keeps queued events on disk rather than losing them. When the service comes back online, it works through the backlog.
Setting up RabbitMQ in NestJS is straightforward:
// main.ts for any service
const app = await NestFactory.createMicroservice<MicroserviceOptions>(
  AppModule,
  {
    transport: Transport.RMQ,
    options: {
      urls: ['amqp://localhost:5672'],
      queue: 'user_queue',
      queueOptions: {
        durable: true // queue survives broker restarts
      }
    }
  }
);
await app.listen();
I’ve found that making events type-safe from the start pays dividends. Here’s how we handle event consumption:
// order-service/src/events/user-events.handler.ts
@Controller()
export class UserEventsHandler {
  constructor(private orderService: OrderService) {}

  @EventPattern('user.created')
  async handleUserCreated(@Payload() data: UserCreatedEvent) {
    // TypeScript knows the shape of data
    await this.orderService.createCustomerProfile({
      userId: data.userId,
      email: data.email,
      name: data.name
    });
  }
}
What about error handling? I implement retry logic with dead letter queues:
@EventPattern('user.created')
async handleUserCreated(@Payload() data: UserCreatedEvent) {
  try {
    await this.orderService.createCustomerProfile(data);
  } catch (error) {
    // Move to retry queue with exponential backoff
    await this.retryService.scheduleRetry('user.created', data, {
      maxAttempts: 3,
      backoffMs: 1000
    });
  }
}
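The `retryService` above is application code, but the backoff math behind it is worth spelling out. This helper is a sketch under one common assumption — delay doubles per attempt from a base of `backoffMs`, with a cap — and is not necessarily the article’s exact implementation:

```typescript
// Exponential backoff delay: base * 2^(attempt - 1), capped at maxMs.
export function backoffDelayMs(
  attempt: number,   // 1-based retry attempt number
  baseMs = 1000,     // matches backoffMs in the handler above
  maxMs = 30_000     // cap so delays do not grow without bound
): number {
  if (attempt < 1) throw new Error("attempt must be >= 1");
  return Math.min(baseMs * 2 ** (attempt - 1), maxMs);
}
```

So attempts 1, 2, 3 wait 1s, 2s, 4s; once a message exhausts `maxAttempts`, it should land in the dead letter queue for inspection rather than retry forever.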
Testing event-driven systems requires a different approach. I use Docker Compose to spin up test environments:
// order-service/src/events/user-events.handler.spec.ts
describe('UserEventsHandler', () => {
  beforeEach(async () => {
    await testApp.setup();
    await rabbitMQTestHelper.purgeQueues();
  });

  it('should create customer profile on user.created event', async () => {
    const event = new UserCreatedEvent(
      'user-123',
      '[email protected]',
      'Test User',
      new Date()
    );

    await testApp.emitEvent('user.created', event);

    await waitForExpect(async () => {
      const profile = await testApp.getCustomerProfile('user-123');
      expect(profile).toBeDefined();
    });
  });
});
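The test above assumes a broker and per-service databases are already running. A minimal Docker Compose sketch for that environment might look like the following — image tags, ports, and database names are my assumptions here, not the article’s actual compose file:

```yaml
version: "3.8"
services:
  rabbitmq:
    image: rabbitmq:3-management
    ports:
      - "5672:5672"    # AMQP
      - "15672:15672"  # management UI
  user-db:
    image: postgres:15
    environment:
      POSTGRES_DB: users
      POSTGRES_PASSWORD: postgres
  order-db:
    image: postgres:15
    environment:
      POSTGRES_DB: orders
      POSTGRES_PASSWORD: postgres
```

Note the two separate Postgres containers — one per service — which mirrors the data-autonomy rule discussed earlier.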
One question I often get: how do you handle schema evolution? I version events and maintain backward compatibility:
// shared/events/user-events-v2.ts
export class UserCreatedEventV2 {
  constructor(
    public readonly userId: string,
    public readonly email: string,
    public readonly name: string,
    public readonly timestamp: Date,
    public readonly metadata: Record<string, any> = {}
  ) {}
}
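One way to keep consumers on a single code path during such a migration is to “upcast” old event versions to the newest shape at the edge. The `upcastUserCreated` helper below is a hypothetical sketch of that idea (the V2 class is repeated so the snippet is self-contained):

```typescript
// Repeated from shared/events/user-events-v2.ts so this snippet stands alone.
export class UserCreatedEventV2 {
  constructor(
    public readonly userId: string,
    public readonly email: string,
    public readonly name: string,
    public readonly timestamp: Date,
    public readonly metadata: Record<string, unknown> = {}
  ) {}
}

interface UserCreatedV1Shape {
  userId: string;
  email: string;
  name: string;
  timestamp: string | Date;
}

// Convert a V1 payload to V2, defaulting the field V1 never carried.
export function upcastUserCreated(v1: UserCreatedV1Shape): UserCreatedEventV2 {
  return new UserCreatedEventV2(
    v1.userId,
    v1.email,
    v1.name,
    new Date(v1.timestamp),
    {} // V1 had no metadata, so start with an empty record
  );
}
```

With an upcaster per retired version, handlers only ever see the latest event shape, which keeps backward compatibility logic in one place.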
The real power comes from combining these tools. NestJS provides structure, Prisma ensures database type safety, and RabbitMQ handles reliable messaging. Together, they create a foundation that scales both technically and organizationally.
I’ve deployed this pattern in production and seen how it enables teams to work independently. The type safety catches errors early, while the event-driven nature provides fault tolerance.
What surprised me most was how much easier debugging became. With proper event logging, you can trace requests across service boundaries and understand exactly what happened when.
Remember that event-driven architecture isn’t just about technology—it’s about designing systems that mirror business processes. Events represent things that happened in your domain, making the code more expressive and maintainable.
I’d love to hear about your experiences with microservices. Have you tried similar approaches? What challenges did you face? Share your thoughts in the comments below, and if you found this useful, please like and share with your team.