I’ve been wrestling with microservices complexity on my latest project, and I want to share a solution that finally clicked. When services grow independently, type safety often suffers - one service updates an event payload and breaks three others. That frustration led me to build this architecture using NestJS, RabbitMQ, and Prisma. Let me show you how we can maintain type safety across service boundaries while keeping components decoupled.
Our system connects three services through events: Users, Orders, and Notifications. When a user registers, the User Service emits an event. The Order Service catches it to prepare a shopping profile, while Notifications sends a welcome email. All without direct service-to-service calls. Why does this matter? Because when the Order Service needs maintenance, users can still sign up uninterrupted.
Setting up our monorepo is straightforward with NPM workspaces. We keep shared code, like event definitions, in a shared package. This ensures all services speak the same language:
// package.json
{
  "workspaces": ["packages/*"],
  "scripts": {
    "dev": "concurrently \"npm run dev:user\" \"npm run dev:order\" ..."
  }
}
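Each service then declares the shared package as an ordinary dependency and npm links it from the workspace. A minimal sketch, assuming @shared/types is a real workspace package rather than just a TypeScript path alias (the post only shows the import):

// packages/order-service/package.json
{
  "name": "order-service",
  "dependencies": {
    "@shared/types": "*"
  }
}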
RabbitMQ acts as our central nervous system. With Docker, we spin it up quickly:
# docker-compose.yml
services:
  rabbitmq:
    image: rabbitmq:3.12-management
    ports:
      - "5672:5672"   # AMQP
      - "15672:15672" # management UI
Now, how do we make events bulletproof? Shared TypeScript interfaces prevent mismatched data:
// shared/src/events/user.events.ts
export class UserCreatedEvent {
  readonly eventType = 'user.created';

  constructor(
    public readonly userId: string,
    public readonly email: string,
    public readonly createdAt: Date
  ) {}
}
In the User Service, we publish after database operations:
// user-service/src/user.service.ts
async createUser(userDto: CreateUserDto) {
  // Persist first so consumers never hear about a user that doesn't exist
  const user = await this.prisma.user.create({ data: userDto });

  this.eventEmitter.emit(new UserCreatedEvent(
    user.id,
    user.email,
    user.createdAt
  ));

  return user;
}
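The post doesn't show what eventEmitter is, but since the Notification Service listener in the next snippet receives the event over RabbitMQ, I'll assume it is a thin wrapper that publishes to the broker. A minimal sketch, assuming @golevelup/nestjs-rabbitmq's AmqpConnection and using the event's eventType as the routing key on the user_events exchange:

// user-service/src/events/rabbit-event-emitter.ts (hypothetical wrapper, not in the original post)
import { Injectable } from '@nestjs/common';
import { AmqpConnection } from '@golevelup/nestjs-rabbitmq';

@Injectable()
export class RabbitEventEmitter {
  constructor(private readonly amqp: AmqpConnection) {}

  // Publish to the user_events topic exchange; the event's discriminant doubles as the routing key
  async emit(event: { eventType: string }): Promise<void> {
    await this.amqp.publish('user_events', event.eventType, event);
  }
}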
The Notification Service consumes with type guards:
// notification-service/src/events/user-created.listener.ts
@RabbitSubscribe({
  exchange: 'user_events',
  routingKey: 'user.created'
})
handleUserCreated(event: UserCreatedEvent) {
  // The payload arrives as plain JSON, so an instanceof check would never pass;
  // guard on the eventType discriminant instead
  if (event?.eventType !== 'user.created') {
    throw new Error('Invalid event type');
  }
  this.mailService.sendWelcome(event.email);
}
What happens when an event fails processing? RabbitMQ’s dead letter queues save us: messages that are rejected or repeatedly fail get re-routed to a separate queue for inspection. Dead-lettering is a queue-level setting, so we declare the dead-letter exchange in our NestJS module and point each consuming queue at it (the subscriber sketch below shows this):
// order-service/src/rabbitmq.config.ts
RabbitMQModule.forRoot(RabbitMQModule, {
  exchanges: [
    { name: 'order_events', type: 'topic' },
    // exchange that collects dead-lettered messages from the queues below
    { name: 'dead_letters', type: 'topic' }
  ]
})
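The queue itself opts into dead-lettering through the standard x-dead-letter-exchange argument, and with @golevelup/nestjs-rabbitmq a handler can return Nack(false) to send a failing message there instead of requeueing it. A sketch, where the queue name, import path, and the shopping-profile method are illustrative stand-ins:

// order-service/src/orders/user-created.listener.ts (illustrative)
import { Nack, RabbitSubscribe } from '@golevelup/nestjs-rabbitmq';
import { UserCreatedEvent } from '@shared/types';

export class UserCreatedListener {
  @RabbitSubscribe({
    exchange: 'user_events',
    routingKey: 'user.created',
    queue: 'order-service.user-created',
    queueOptions: {
      // raw AMQP queue argument: failed messages are re-routed to the dead_letters exchange
      arguments: { 'x-dead-letter-exchange': 'dead_letters' }
    }
  })
  async handleUserCreated(event: UserCreatedEvent) {
    try {
      await this.prepareShoppingProfile(event.userId);
    } catch {
      return new Nack(false); // reject without requeue, so the message dead-letters
    }
  }

  private async prepareShoppingProfile(userId: string): Promise<void> {
    // domain logic elided; illustrative only
  }
}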
Database operations use Prisma for end-to-end type safety. Notice how we share types between services without coupling:
// user-service/src/prisma/schema.prisma
model User {
  id        String   @id @default(uuid())
  email     String   @unique
  createdAt DateTime @default(now())
}
// order-service/src/orders/dto/create-order.dto.ts
import { IsUUID } from 'class-validator';
import { User } from '@shared/types';

export class CreateOrderDto {
  @IsUUID()
  userId: User['id']; // Shared type reference
}
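The post doesn't show the shared package itself, so here is one way it could look, an assumption on my part: a small hand-maintained contract interface rather than a re-export of Prisma's generated types, which is what keeps the Order Service decoupled from the User Service's schema:

// shared/src/types/index.ts (a sketch of the shared contract, not from the original post)
export interface User {
  id: string;       // mirrors the uuid primary key in the Prisma model
  email: string;
  createdAt: Date;
}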
For deployment, Docker Compose orchestrates everything. Each service runs in its own container, with RabbitMQ and the databases as separate services. We add health checks to ensure services start in order:
services:
  user-service:
    build: ./packages/user-service
    depends_on:
      rabbitmq:
        condition: service_healthy # only works if the rabbitmq service defines its own healthcheck
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:3000/health"]
Testing presents interesting challenges. How do we verify events without brittle integration tests? We unit-test listeners against mocked collaborators and reserve real-broker tests, via RabbitMQ test containers, for the few flows that need them. The listener test stays simple:
// notification-service/src/events/user-created.listener.spec.ts
test('sends welcome email on UserCreatedEvent', async () => {
  const mockMailService = { sendWelcome: jest.fn() };
  const listener = new UserCreatedListener(mockMailService as any);

  await listener.handleUserCreated(
    new UserCreatedEvent('user-123', '[email protected]', new Date())
  );

  expect(mockMailService.sendWelcome)
    .toHaveBeenCalledWith('[email protected]');
});
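For the handful of tests that do exercise a real broker, Testcontainers keeps things self-contained: spin up a disposable RabbitMQ, point the service at it, tear it down afterwards. A sketch using the generic testcontainers API; the RABBITMQ_URL variable and file name are assumptions:

// notification-service/test/rabbitmq.e2e-spec.ts (illustrative setup only)
import { GenericContainer, StartedTestContainer } from 'testcontainers';

let rabbit: StartedTestContainer;

beforeAll(async () => {
  // Start a throwaway broker for this test run
  rabbit = await new GenericContainer('rabbitmq:3.12-management')
    .withExposedPorts(5672)
    .start();
  process.env.RABBITMQ_URL = `amqp://${rabbit.getHost()}:${rabbit.getMappedPort(5672)}`;
}, 60_000);

afterAll(async () => {
  await rabbit.stop();
});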
Monitoring ties it all together. We track events flowing through the system with OpenTelemetry. Correlation IDs passed in event headers let us trace a user journey across services:
// shared/src/events/base.event.ts
import { randomUUID } from 'node:crypto';

export abstract class BaseEvent {
  readonly correlationId: string;

  constructor(correlationId?: string) {
    // Reuse an upstream correlation ID when there is one; otherwise start a new trace
    this.correlationId = correlationId ?? randomUUID();
  }
}
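To propagate that ID, the publisher attaches it as a message header (for example, passing { headers: { 'x-correlation-id': event.correlationId } } as the publish options in the emitter sketch earlier) and the consumer reads it back; @golevelup hands the raw amqplib ConsumeMessage to the handler as a second argument. A sketch, assuming UserCreatedEvent now extends BaseEvent and with the header name and logger as assumptions:

// notification-service/src/events/user-created.listener.ts (correlation-aware variant of the earlier listener)
import { Logger } from '@nestjs/common';
import { RabbitSubscribe } from '@golevelup/nestjs-rabbitmq';
import { ConsumeMessage } from 'amqplib';
import { UserCreatedEvent } from '@shared/types';

export class UserCreatedListener {
  private readonly logger = new Logger(UserCreatedListener.name);

  constructor(private readonly mailService: { sendWelcome(email: string): void }) {}

  @RabbitSubscribe({ exchange: 'user_events', routingKey: 'user.created' })
  handleUserCreated(event: UserCreatedEvent, amqpMsg: ConsumeMessage) {
    // Prefer the header set by the publisher; fall back to the ID carried in the payload
    const correlationId =
      amqpMsg.properties.headers?.['x-correlation-id'] ?? event.correlationId;
    this.logger.log(`user.created received (correlationId=${correlationId})`);
    this.mailService.sendWelcome(event.email);
  }
}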
This architecture scales beautifully. Last month, we added a Reward Service that listens to order events - no changes to existing components. The type safety prevented four potential field mismatch bugs during implementation. Have you considered how many integration errors might be hiding in your microservices?
Building this taught me that resilience comes from embracing boundaries. Services focus on their domains, events document contracts, and types enforce agreements. The result? Systems that evolve without breaking. If you implement this, start small - one producer, one consumer. You’ll quickly see the patterns emerge.
If this approach resonates with your challenges, share your thoughts below. What patterns have you used to keep microservices in sync? I’d love to hear about your experiences - leave a comment if you’ve implemented something similar or have questions about specific parts!