I’ve been working with microservices for years, and there’s one pattern that consistently proves its worth: event-driven architecture. Just last month, I faced a critical challenge in our e-commerce platform where synchronous services caused cascading failures during peak sales. That experience pushed me to build a better solution using NestJS, RabbitMQ, and MongoDB. If you’ve struggled with inter-service communication or data consistency, you’ll find this practical approach valuable.
Setting up our order processing microservice begins with foundational work. We start by creating the NestJS project and installing the essential packages, then wire up environment configuration, the MongoDB connection, and the event emitter. One thing to get right from the start: MongoDB only supports transactions against a replica set (or sharded cluster), so make sure MONGODB_URI points at one even in development - transactional writes are non-negotiable for production systems.
npm install @nestjs/mongoose mongoose @nestjs/config @nestjs/event-emitter @golevelup/nestjs-rabbitmq amqplib
// app.module.ts
import { Module } from '@nestjs/common';
import { ConfigModule } from '@nestjs/config';
import { MongooseModule } from '@nestjs/mongoose';
import { EventEmitterModule } from '@nestjs/event-emitter';
import { OrderModule } from './order/order.module'; // adjust to your project layout

@Module({
  imports: [
    // Make .env values (MONGODB_URI, RabbitMQ credentials, ...) available app-wide
    ConfigModule.forRoot({ isGlobal: true }),
    // The URI must point at a replica set for transactions to work
    MongooseModule.forRoot(process.env.MONGODB_URI, {
      useNewUrlParser: true, // legacy flags, ignored by recent Mongoose versions
      useUnifiedTopology: true,
    }),
    // In-process domain events such as 'order.created'
    EventEmitterModule.forRoot(),
    OrderModule,
  ],
})
export class AppModule {}
For our order domain model, we define a schema with critical features like optimistic locking. Why does this matter? Because in distributed systems, concurrent updates can corrupt data. The versioning field prevents that by rejecting stale updates.
// order.schema.ts
import { Prop, Schema, SchemaFactory } from '@nestjs/mongoose';

export enum OrderStatus {
  PENDING = 'pending',
  // ...the remaining lifecycle states your domain needs
}

@Schema({ timestamps: true })
export class Order {
  @Prop({ required: true, unique: true })
  orderId: string;

  @Prop({ required: true, enum: OrderStatus, default: OrderStatus.PENDING })
  status: OrderStatus;

  // Optimistic locking: bumped on every save so stale writes can be detected
  @Prop({ required: true, default: 1 })
  version: number;
}

export const OrderSchema = SchemaFactory.createForClass(Order);

OrderSchema.pre('save', function (next) {
  this.version = this.isNew ? 1 : this.version + 1;
  next();
});
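To make "rejecting stale updates" concrete, here is a minimal sketch of a version-guarded update. The repository class, method name, and use of ConflictException are illustrative additions, not part of the original service:

// order.repository.ts - illustrative sketch of a version-guarded update
import { ConflictException, Injectable } from '@nestjs/common';
import { InjectModel } from '@nestjs/mongoose';
import { Model } from 'mongoose';
import { Order, OrderStatus } from './order.schema';

@Injectable()
export class OrderRepository {
  constructor(@InjectModel(Order.name) private readonly orderModel: Model<Order>) {}

  async updateStatus(orderId: string, status: OrderStatus, expectedVersion: number) {
    const updated = await this.orderModel.findOneAndUpdate(
      // Matches only if nobody has changed the document since we read it
      { orderId, version: expectedVersion },
      { $set: { status }, $inc: { version: 1 } }, // bump the version atomically
      { new: true },
    );
    if (!updated) {
      // Another writer won the race: fail loudly instead of silently overwriting
      throw new ConflictException(`Order ${orderId} was modified concurrently`);
    }
    return updated;
  }
}

Because the version check and the increment happen inside a single findOneAndUpdate, there is no window for a second writer to sneak in between read and write.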
Now comes the interesting part: transaction handling. When creating orders, we wrap the entire operation in a MongoDB transaction. This ensures that either all steps succeed or everything rolls back. But what happens if the event emission fails after database commit? That’s where our RabbitMQ integration comes in.
// order.service.ts
async createOrder(dto: CreateOrderDto): Promise<Order> {
  const session = await this.orderModel.db.startSession();
  try {
    session.startTransaction();

    const order = new this.orderModel({ ...dto, status: OrderStatus.PENDING });
    const savedOrder = await order.save({ session });

    // Store the event in the same transaction (outbox-style): the domain change
    // and its event record commit or roll back together
    await this.eventService.storeEvent('ORDER_CREATED', savedOrder.toObject(), session);

    await session.commitTransaction();

    // Emit only after a successful commit so listeners never act on rolled-back data;
    // if this in-process emit fails, the stored event can still be relayed later
    this.eventEmitter.emit('order.created', savedOrder);

    return savedOrder;
  } catch (error) {
    // Only abort if the transaction is still open (the commit may already have succeeded)
    if (session.inTransaction()) {
      await session.abortTransaction();
    }
    throw error;
  } finally {
    session.endSession();
  }
}
For messaging, RabbitMQ provides reliability through features like acknowledgments and dead letter exchanges. We set up our consumers to automatically retry failed messages before moving them to a quarantine queue. How many retries are optimal? From experience, 3-5 attempts with exponential backoff work well for most scenarios; a sketch of how to count and cap those attempts follows the consumer below.
// rabbitmq.consumer.ts
import { RabbitSubscribe } from '@golevelup/nestjs-rabbitmq';

@RabbitSubscribe({
  exchange: 'orders',
  routingKey: 'order.created',
  queue: 'order-created-queue',
  queueOptions: {
    // Rejected or expired messages are routed here instead of being lost
    deadLetterExchange: 'dead-letters',
    messageTtl: 60000, // unprocessed messages expire after 60s
  },
})
async handleOrderCreated(message: any) {
  try {
    await this.inventoryService.reserveStock(message.items);
  } catch (error) {
    // Rethrowing nacks the message, which sends it to the dead-letter exchange
    throw new Error('Processing failed');
  }
}
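The consumer above only declares the dead letter exchange; the retry bound itself has to be enforced somewhere. One common approach, sketched below under the assumption that the dead-letter exchange feeds a TTL wait queue that routes messages back to the orders exchange, is to read the x-death header RabbitMQ appends each time a message is dead-lettered. MAX_RETRIES, the quarantine routing key, and the consumer class are illustrative, not the article's exact implementation:

// order-created.consumer.ts - illustrative retry-bounding sketch
import { Injectable } from '@nestjs/common';
import { AmqpConnection, Nack, RabbitSubscribe } from '@golevelup/nestjs-rabbitmq';
import { ConsumeMessage } from 'amqplib';

const MAX_RETRIES = 5;

@Injectable()
export class OrderCreatedConsumer {
  constructor(
    private readonly amqpConnection: AmqpConnection,
    // Stand-in for the article's InventoryService
    private readonly inventoryService: { reserveStock(items: unknown): Promise<void> },
  ) {}

  @RabbitSubscribe({
    exchange: 'orders',
    routingKey: 'order.created',
    queue: 'order-created-queue',
    queueOptions: { deadLetterExchange: 'dead-letters', messageTtl: 60000 },
  })
  async handleOrderCreated(message: any, amqpMsg: ConsumeMessage) {
    // The broker adds an x-death entry each time this message is dead-lettered,
    // so its count tells us how many delivery cycles have already failed
    const deaths = (amqpMsg.properties.headers?.['x-death'] as Array<{ count: number }>) ?? [];
    const attempts = deaths[0]?.count ?? 0;

    try {
      await this.inventoryService.reserveStock(message.items);
    } catch (error) {
      if (attempts >= MAX_RETRIES) {
        // Retries exhausted: park the message for manual inspection and ack the original
        await this.amqpConnection.publish('dead-letters', 'order.created.quarantine', message);
        return;
      }
      // Nack without requeue: the message enters the dead-letter cycle and comes back later
      return new Nack(false);
    }
  }
}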
Event sourcing complements our transactional approach by storing every state change as immutable events. This pattern provides an audit trail and enables temporal queries - crucial for debugging production issues. We implement it by appending events to a dedicated collection within the same transaction as domain changes.
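The order service above calls eventService.storeEvent(type, payload, session), which the original code does not show. Here is a minimal sketch of what that append-only event store might look like; the schema fields and class names are assumptions:

// event.schema.ts / event.service.ts - minimal event store sketch
import { Injectable } from '@nestjs/common';
import { InjectModel, Prop, Schema, SchemaFactory } from '@nestjs/mongoose';
import { ClientSession, Model } from 'mongoose';

@Schema({ timestamps: true, collection: 'events' })
export class DomainEvent {
  @Prop({ required: true })
  type: string; // e.g. 'ORDER_CREATED'

  @Prop({ type: Object, required: true })
  payload: Record<string, unknown>; // immutable snapshot of the aggregate at that moment
}

export const DomainEventSchema = SchemaFactory.createForClass(DomainEvent);

@Injectable()
export class EventService {
  constructor(@InjectModel(DomainEvent.name) private readonly eventModel: Model<DomainEvent>) {}

  // Appends the event inside the caller's transaction, so the domain write and
  // its event record are committed (or rolled back) together
  async storeEvent(type: string, payload: Record<string, unknown>, session: ClientSession) {
    return this.eventModel.create([{ type, payload }], { session });
  }
}

Replaying these records in timestamp order reconstructs any past state of an order, which is what makes the temporal queries and production debugging mentioned above possible.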
Testing requires special attention in distributed systems. We combine unit tests for business logic with contract tests for messaging interfaces. For critical paths like payment processing, we use chaos engineering principles by intentionally injecting failures during integration tests.
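As one concrete example, here is a Jest-style unit test sketch (Jest is NestJS's default runner) that checks the transactional behaviour of createOrder. The mock shapes and the assumed constructor order (orderModel, eventService, eventEmitter) are illustrative:

// order.service.spec.ts - unit-test sketch; mock shapes are assumptions
import { OrderService } from './order.service';

describe('OrderService.createOrder', () => {
  it('aborts the transaction and emits nothing when the event cannot be stored', async () => {
    const session = {
      startTransaction: jest.fn(),
      commitTransaction: jest.fn(),
      abortTransaction: jest.fn(),
      endSession: jest.fn(),
      inTransaction: jest.fn().mockReturnValue(true),
    };

    // Minimal stand-in for the Mongoose model: a constructor whose instances can save()
    const orderModel: any = jest.fn().mockImplementation((doc) => ({
      save: jest.fn().mockResolvedValue({ ...doc, toObject: () => doc }),
    }));
    orderModel.db = { startSession: jest.fn().mockResolvedValue(session) };

    const eventService = { storeEvent: jest.fn().mockRejectedValue(new Error('event store down')) };
    const eventEmitter = { emit: jest.fn() };

    const service = new OrderService(orderModel, eventService as any, eventEmitter as any);

    await expect(service.createOrder({ orderId: 'o-1', items: [] } as any)).rejects.toThrow();
    expect(session.abortTransaction).toHaveBeenCalled();
    expect(eventEmitter.emit).not.toHaveBeenCalled(); // nothing leaks from a failed transaction
  });
});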
Deployment involves containerization with Docker and orchestration via Kubernetes. Our health checks include RabbitMQ connection status and MongoDB ping endpoints. For monitoring, we expose custom metrics like message processing latency and event sourcing lag. Ever wondered how to trace a request across services? OpenTelemetry instrumentation provides that visibility.
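For the MongoDB side of those health checks, @nestjs/terminus provides a ready-made indicator; the RabbitMQ connection check would be a small custom indicator layered on top (omitted here). A sketch, assuming TerminusModule is registered in the app:

// health.controller.ts - sketch of the MongoDB health check with @nestjs/terminus
import { Controller, Get } from '@nestjs/common';
import { HealthCheck, HealthCheckService, MongooseHealthIndicator } from '@nestjs/terminus';

@Controller('health')
export class HealthController {
  constructor(
    private readonly health: HealthCheckService,
    private readonly mongoose: MongooseHealthIndicator,
  ) {}

  @Get()
  @HealthCheck()
  check() {
    return this.health.check([
      // Pings the default Mongoose connection; add a custom indicator for RabbitMQ
      () => this.mongoose.pingCheck('mongodb'),
    ]);
  }
}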
Performance optimization focuses on three areas: MongoDB indexing for frequent queries, RabbitMQ consumer prefetch limits to prevent overload, and event batching for high-volume writes. Remember to benchmark under realistic loads - local development performance often differs dramatically from production.
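Two of those knobs expressed in code (the values and the RABBITMQ_URI variable are illustrative starting points, and the config shape assumes @golevelup/nestjs-rabbitmq):

// Indexing and consumer backpressure - illustrative values, benchmark before adopting

// 1. Index the queries you actually run, e.g. recent orders filtered by status
OrderSchema.index({ status: 1, createdAt: -1 });

// 2. Cap unacknowledged messages per consumer via the RabbitMQ module config
//    (pass this object to RabbitMQModule.forRoot in the messaging module)
const rabbitConfig = {
  uri: process.env.RABBITMQ_URI,
  exchanges: [{ name: 'orders', type: 'topic' }],
  prefetchCount: 20, // each consumer holds at most 20 unacked messages at a time
};

For the third lever, event batching, grouping high-volume event writes into a single insertMany call per transaction is usually enough.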
Building this solution taught me valuable lessons. Transactional messaging demands careful design, and idempotency keys are essential for reliable processing. Monitoring isn’t optional - it’s your production safety net. The complete solution handles 500+ transactions per second on a single pod while maintaining data consistency.
What challenges have you faced with microservices? Share your experiences below. If this approach solved problems for you, consider sharing it with your team. Comments and feedback help improve these solutions for everyone. Let’s build more resilient systems together.