I’ve been thinking a lot about how modern applications need to handle massive scale while remaining responsive and reliable. After working on several projects that started as monoliths and struggled under load, I realized that event-driven microservices could solve many of these challenges. Today, I want to share a practical approach to building production-ready systems using NestJS, RabbitMQ, and Redis.
Why choose this stack? NestJS provides a solid foundation with its modular architecture and TypeScript support. RabbitMQ ensures reliable message delivery between services, while Redis handles caching and session management efficiently. Together, they create a robust system that can scale horizontally and handle failures gracefully.
Let me walk you through setting up a complete e-commerce system. We’ll have separate services for users, products, orders, and notifications, all communicating through events. Have you ever wondered how services can coordinate without tight coupling? Event-driven architecture makes this possible by decoupling producers and consumers of information.
First, we need to set up our development environment. I prefer using Docker Compose to manage infrastructure services. Here’s a basic setup:
# docker-compose.yml
services:
  rabbitmq:
    image: rabbitmq:3-management
    ports: ["5672:5672", "15672:15672"]
    environment:
      RABBITMQ_DEFAULT_USER: admin
      RABBITMQ_DEFAULT_PASS: password
    healthcheck:
      test: ["CMD", "rabbitmq-diagnostics", "ping"]
      interval: 10s
      retries: 5
  redis:
    image: redis:7-alpine
    ports: ["6379:6379"]
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 10s
      retries: 5
This gives us RabbitMQ for message brokering and Redis for caching. Notice how we’re using health checks to ensure services are ready before others depend on them. How do you typically handle service dependencies in your projects?
Now, let’s create our core microservices. I like starting with shared libraries to maintain consistency across services. Here’s a basic event interface:
// shared-libs/common/src/interfaces/base-event.interface.ts
export interface DomainEvent {
  id: string;
  timestamp: Date;
  eventType: string;
  aggregateId: string;
  data: any;
}
This ensures all events in our system follow the same structure. When building microservices, consistency in data contracts becomes crucial. Have you faced issues with inconsistent data formats between services?
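To make that contract concrete, here is a small factory sketch (the `createEvent` helper is my own naming, not from any library) that stamps the shared envelope fields so producers can't forget them:

```typescript
import { randomUUID } from 'crypto';

// Mirrors the shared DomainEvent interface above.
interface DomainEvent {
  id: string;
  timestamp: Date;
  eventType: string;
  aggregateId: string;
  data: any;
}

// Hypothetical factory: every event gets a unique id and a timestamp,
// so consumers can deduplicate and order messages consistently.
function createEvent(eventType: string, aggregateId: string, data: any): DomainEvent {
  return { id: randomUUID(), timestamp: new Date(), eventType, aggregateId, data };
}

const event = createEvent('user.created', 'user-123', { email: 'jane@example.com' });
```

Centralizing construction like this means a malformed event can only come from outside the system, never from one of your own producers.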
For the user service, we might implement event handlers like this:
// user-service/src/events/user-created.handler.ts
import { Controller } from '@nestjs/common';
import { EventPattern } from '@nestjs/microservices';
// DomainEvent and EventBusService come from the shared library and local providers.

@Controller()
export class UserCreatedHandler {
  constructor(private readonly eventBus: EventBusService) {}

  @EventPattern('user.created')
  async handleUserCreated(event: DomainEvent) {
    // Send welcome email
    await this.eventBus.emit('notification.send', {
      type: 'EMAIL',
      userId: event.aggregateId,
      template: 'welcome',
    });
  }
}
This demonstrates how events trigger other actions across the system. The user service doesn’t need to know about notification implementation details.
Now, let’s talk about RabbitMQ configuration. Reliability is key in production systems. Here’s how I set up durable queues:
// shared-libs/common/src/config/rabbitmq.config.ts
import { RmqOptions, Transport } from '@nestjs/microservices';

export const rabbitMQConfig: RmqOptions = {
  transport: Transport.RMQ,
  options: {
    urls: [process.env.RABBITMQ_URL ?? 'amqp://admin:password@localhost:5672'],
    queue: 'user_events',
    queueOptions: {
      durable: true,
      arguments: { 'x-queue-type': 'classic' },
    },
    noAck: false,
    prefetchCount: 1,
  },
};
Durable queues survive broker restarts, while prefetch controls how many messages a consumer handles simultaneously. What strategies do you use for message reliability?
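Prefetch and manual acknowledgment are only half of the story; on the consumer side I also retry transient failures before giving up. A minimal, framework-free sketch of retry with exponential backoff (the function name and defaults are illustrative):

```typescript
// Retry an async operation with exponential backoff between attempts.
// maxAttempts and baseDelayMs are illustrative defaults.
async function withRetry<T>(
  op: () => Promise<T>,
  maxAttempts = 3,
  baseDelayMs = 100,
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await op();
    } catch (err) {
      lastError = err;
      // Waits 100ms, 200ms, 400ms, ... between attempts.
      const delay = baseDelayMs * 2 ** attempt;
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
  throw lastError;
}
```

A consumer can wrap its processing logic in `withRetry` and only reject the message (routing it to a dead letter queue) once the attempts are exhausted.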
Redis plays a vital role in performance. I use it for caching frequently accessed data and managing user sessions:
// api-gateway/src/services/cache.service.ts
import { Injectable } from '@nestjs/common';
import Redis from 'ioredis';

@Injectable()
export class CacheService {
  constructor(private readonly redis: Redis) {}

  async getUserSession(userId: string): Promise<UserSession> {
    // Try the cache first; fall back to the source of truth on a miss.
    const cached = await this.redis.get(`session:${userId}`);
    if (cached) return JSON.parse(cached);
    // fetchUserSession loads the session from the database (omitted here).
    const session = await this.fetchUserSession(userId);
    // Cache for one hour so stale entries expire automatically.
    await this.redis.setex(`session:${userId}`, 3600, JSON.stringify(session));
    return session;
  }
}
This pattern reduces database load and improves response times. Notice the TTL (time-to-live) setting to prevent stale data.
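The cache-aside logic itself can be demonstrated without Redis at all. Here is a framework-free sketch using an in-memory map with per-entry TTL (all names here are illustrative):

```typescript
// Minimal cache-aside wrapper over any async loader, with per-entry TTL.
class TtlCache<V> {
  private store = new Map<string, { value: V; expiresAt: number }>();

  constructor(private readonly ttlMs: number) {}

  async getOrLoad(key: string, load: () => Promise<V>): Promise<V> {
    const entry = this.store.get(key);
    // Serve from the cache while the entry is still fresh.
    if (entry && entry.expiresAt > Date.now()) return entry.value;
    // Miss or expired: hit the source of truth, then cache the result.
    const value = await load();
    this.store.set(key, { value, expiresAt: Date.now() + this.ttlMs });
    return value;
  }
}
```

Swapping the map for Redis changes where the entries live, not the shape of the pattern.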
Service discovery and load balancing are essential for horizontal scaling. I implement health checks in all services:
// user-service/src/health/health.controller.ts
import { Controller, Get } from '@nestjs/common';

@Controller('health')
export class HealthController {
  @Get()
  check() {
    return {
      status: 'ok',
      timestamp: new Date().toISOString(),
      service: 'user-service',
    };
  }
}
Kubernetes or other orchestrators can use these endpoints to determine service health. How do you handle service discovery in dynamic environments?
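If you deploy to Kubernetes, these endpoints plug directly into liveness and readiness probes. A sketch of the relevant container spec fields (the port, paths, and timings are illustrative, not from a real deployment):

```yaml
# Excerpt from a hypothetical user-service Deployment spec.
livenessProbe:
  httpGet:
    path: /health
    port: 3000
  initialDelaySeconds: 10
  periodSeconds: 15
readinessProbe:
  httpGet:
    path: /health
    port: 3000
  periodSeconds: 5
```
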
Error handling requires careful planning. I implement retry mechanisms and dead letter queues:
// order-service/src/events/order-created.handler.ts
@EventPattern('order.created')
async handleOrderCreated(event: DomainEvent & { retryCount?: number }) {
  try {
    await this.processOrder(event.data);
  } catch (error) {
    // Re-publish to a failure stream with an incremented retry counter,
    // so a separate consumer can retry or route it to a dead letter queue.
    await this.eventBus.emit('order.failed', {
      ...event,
      error: (error as Error).message,
      retryCount: (event.retryCount ?? 0) + 1,
    });
  }
}
This ensures failed operations can be retried or investigated separately. What’s your approach to handling partial failures?
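The dead letter routing itself is configured on the queue. A sketch of the queue options (the exchange and routing key names are illustrative; the `x-` arguments are standard RabbitMQ queue arguments):

```typescript
// Queue options wiring rejected or expired messages to a dead letter exchange.
// RabbitMQ moves a message here when a consumer rejects it without requeue,
// or when the optional per-message TTL expires.
const orderQueueOptions = {
  durable: true,
  arguments: {
    'x-dead-letter-exchange': 'dlx',
    'x-dead-letter-routing-key': 'order_events.dead',
    // Optional: cap how long a message may wait before dead-lettering (ms).
    'x-message-ttl': 60000,
  },
};
```

A dedicated consumer on the dead letter queue can then alert, log, or replay messages after the underlying issue is fixed.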
Monitoring distributed systems requires correlation IDs. I add them to all events and requests:
// shared-libs/common/src/interceptors/correlation.interceptor.ts
import { CallHandler, ExecutionContext, Injectable, NestInterceptor } from '@nestjs/common';
import { v4 as uuidv4 } from 'uuid';

@Injectable()
export class CorrelationInterceptor implements NestInterceptor {
  intercept(context: ExecutionContext, next: CallHandler) {
    const request = context.switchToHttp().getRequest();
    // Reuse the caller's correlation id if present, otherwise mint a new one.
    request.correlationId = request.headers['x-correlation-id'] || uuidv4();
    return next.handle();
  }
}
This helps trace requests across service boundaries. Have you found tracing challenging in microservices?
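Propagating the id into emitted events matters just as much as stamping it on requests. A small framework-free helper sketch (the function and type names are my own):

```typescript
import { randomUUID } from 'crypto';

interface Correlated {
  correlationId?: string;
}

// Ensure an outgoing event carries a correlation id: reuse the incoming
// one when present, otherwise start a new trace.
function withCorrelation<T extends object>(payload: T, incoming?: Correlated): T & Correlated {
  return { ...payload, correlationId: incoming?.correlationId ?? randomUUID() };
}
```

Calling this at every publish site means a single id follows a request through every downstream event it triggers.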
Testing event-driven systems requires mocking external dependencies:
// user-service/src/events/user-created.handler.spec.ts
import { Test } from '@nestjs/testing';

// Minimal fixture matching the shared DomainEvent shape.
const mockUserCreatedEvent: DomainEvent = {
  id: 'evt-1',
  timestamp: new Date(),
  eventType: 'user.created',
  aggregateId: 'user-1',
  data: { email: 'jane@example.com' },
};

describe('UserCreatedHandler', () => {
  let handler: UserCreatedHandler;
  let eventBus: jest.Mocked<EventBusService>;

  beforeEach(async () => {
    const module = await Test.createTestingModule({
      providers: [
        UserCreatedHandler,
        { provide: EventBusService, useValue: { emit: jest.fn() } },
      ],
    }).compile();

    handler = module.get(UserCreatedHandler);
    eventBus = module.get(EventBusService);
  });

  it('should send welcome notification', async () => {
    await handler.handleUserCreated(mockUserCreatedEvent);
    expect(eventBus.emit).toHaveBeenCalledWith('notification.send', expect.any(Object));
  });
});
This ensures our event handlers work as expected without depending on other services.
For deployment, I use Docker with multi-stage builds:
# user-service/Dockerfile
FROM node:18-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

FROM node:18-alpine AS production
WORKDIR /app
ENV NODE_ENV=production
COPY package*.json ./
# Install production dependencies only, keeping the final image lean.
RUN npm ci --omit=dev
COPY --from=builder /app/dist ./dist
USER node
CMD ["node", "dist/main"]
This keeps production images lean and secure. What deployment strategies have worked well for you?
Common pitfalls include tight coupling through events and insufficient monitoring. I recommend using schema validation for events and implementing comprehensive logging. How do you prevent events from becoming a hidden coupling mechanism?
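Schema validation can be as simple as a guard that every consumer runs before touching a payload. A hand-rolled sketch (in practice I would reach for a library such as class-validator or zod):

```typescript
interface DomainEvent {
  id: string;
  timestamp: Date;
  eventType: string;
  aggregateId: string;
  data: any;
}

// Runtime guard: reject malformed events at the boundary instead of
// letting them fail deep inside business logic.
function isDomainEvent(value: unknown): value is DomainEvent {
  if (typeof value !== 'object' || value === null) return false;
  const v = value as Record<string, unknown>;
  return (
    typeof v.id === 'string' &&
    typeof v.eventType === 'string' &&
    typeof v.aggregateId === 'string' &&
    'timestamp' in v &&
    'data' in v
  );
}
```

Rejected events can be routed straight to a dead letter queue with the validation error attached, which makes producer bugs visible instead of silent.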
Building event-driven microservices has transformed how I approach system design. The decoupling and scalability benefits are substantial, though they require careful planning around reliability and monitoring. I hope this practical guide helps you build robust systems that can grow with your needs.
If you found these insights valuable, I’d love to hear about your experiences. Please share this article with colleagues who might benefit, and let me know in the comments what challenges you’ve faced with microservices. Your feedback helps me create better content for our community.