I’ve been thinking a lot about how modern applications handle complexity and scale. The shift from monolithic architectures to distributed systems isn’t just a trend—it’s a necessity for building resilient, scalable applications. Today, I want to share my approach to creating high-performance event-driven microservices using NestJS, Apache Kafka, and MongoDB.
Why focus on this particular stack? Each component brings something essential to the table. NestJS provides a structured framework that feels familiar to Angular developers, Apache Kafka ensures reliable message streaming, and MongoDB offers flexible document storage. Together, they form a powerful foundation for distributed systems.
Let me show you how to structure a typical e-commerce order processing system. We’ll create separate services for orders, inventory, payments, and notifications. Each service operates independently but communicates through events.
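The snippets that follow assume each service has registered a Kafka client through NestJS's ClientsModule. Here is a minimal registration sketch; the 'KAFKA_CLIENT' token, client ID, and broker address are illustrative assumptions, not fixed values:

// app.module.ts (sketch; token, clientId, and broker address are illustrative)
import { Module } from '@nestjs/common';
import { ClientsModule, Transport } from '@nestjs/microservices';

@Module({
  imports: [
    ClientsModule.register([
      {
        name: 'KAFKA_CLIENT', // injection token assumed by the snippets below
        transport: Transport.KAFKA,
        options: {
          client: { clientId: 'order-service', brokers: ['kafka:9092'] },
          consumer: { groupId: 'order-service-consumer' },
        },
      },
    ]),
  ],
})
export class AppModule {}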
Here’s how the flow works in practice:
// Order creation triggers a chain of events
async createOrder(orderData: CreateOrderDto) {
  // Persist the order first, so the event always references a stored record
  const order = await this.orderModel.create({
    ...orderData,
    status: 'PENDING',
  });

  // emit() returns a hot Observable, so the event is dispatched
  // without an explicit subscription or await
  this.kafkaClient.emit('order.created', {
    eventType: 'ORDER_CREATED',
    timestamp: new Date(),
    data: {
      orderId: order.orderId,
      items: order.items,
      totalAmount: order.totalAmount,
    },
  });

  return order;
}
Have you ever wondered how systems maintain consistency across multiple services? Event-driven architecture provides the answer through eventual consistency patterns. When an order is created, it doesn’t immediately confirm inventory or process payment. Instead, it publishes an event that other services consume.
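For instance, once the inventory service confirms stock, a payments service further down the chain can pick that up and publish its own result. Here is a hedged sketch of that link; the topic names follow the conventions in this post, and the PaymentsService API is an assumption:

// payments.controller.ts (illustrative continuation of the event chain)
import { Controller, Inject } from '@nestjs/common';
import { ClientKafka, EventPattern } from '@nestjs/microservices';

@Controller()
export class PaymentsController {
  constructor(
    private readonly paymentsService: PaymentsService, // assumed service
    @Inject('KAFKA_CLIENT') private readonly kafkaClient: ClientKafka,
  ) {}

  @EventPattern('inventory.reserved')
  async handleInventoryReserved(event: { data: { orderId: string } }) {
    // Charge the customer only after stock has been confirmed
    await this.paymentsService.charge(event.data.orderId);

    // Announce the result so the order and notification services can react
    this.kafkaClient.emit('payment.completed', {
      eventType: 'PAYMENT_COMPLETED',
      timestamp: new Date(),
      data: { orderId: event.data.orderId },
    });
  }
}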
Setting up our development environment requires careful planning. We use Docker Compose to manage our dependencies:
# docker-compose.yml
version: '3.8'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:latest
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181

  kafka:
    image: confluentinc/cp-kafka:latest
    depends_on: [zookeeper]
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      # Reachable from other containers on the compose network
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:9092
      # Required for a single broker; the default replication factor is 3
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1

  mongodb:
    image: mongo:5
    environment:
      MONGO_INITDB_ROOT_USERNAME: admin
      MONGO_INITDB_ROOT_PASSWORD: password
What happens when services need to share common types and interfaces? We create a shared library that all microservices can reference. This ensures consistency across our event definitions and data transfer objects.
// Shared event interface
export interface BaseEvent {
  eventType: string;
  timestamp: Date;
  data: any;
}

// Line items referenced by order events (field names are illustrative)
export interface OrderItem {
  productId: string;
  quantity: number;
  price: number;
}

// Specific event implementation narrows both eventType and data
export interface OrderCreatedEvent extends BaseEvent {
  eventType: 'ORDER_CREATED';
  data: {
    orderId: string;
    customerId: string;
    items: OrderItem[];
    totalAmount: number;
  };
}
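Because eventType is a string literal in each event, a discriminated union gives consumers type-safe handling. A small sketch; the InventoryReservedEvent shape mirrors the inventory.reserved payload emitted later, and the union name is an assumption:

// Illustrative sibling event matching the inventory.reserved payload
export interface InventoryReservedEvent extends BaseEvent {
  eventType: 'INVENTORY_RESERVED';
  data: { orderId: string; reservationId: string };
}

// A discriminated union lets TypeScript narrow on eventType
export type AppEvent = OrderCreatedEvent | InventoryReservedEvent;

export function isOrderCreated(event: AppEvent): event is OrderCreatedEvent {
  return event.eventType === 'ORDER_CREATED';
}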
The real power comes from how services react to these events. The inventory service listens for order creation events and attempts to reserve items:
// inventory.controller.ts
import { Controller } from '@nestjs/common';
import { EventPattern } from '@nestjs/microservices';

@Controller()
export class InventoryController {
  constructor(private readonly inventoryService: InventoryService) {}

  // Invoked for every message on the order.created topic
  @EventPattern('order.created')
  async handleOrderCreated(event: OrderCreatedEvent) {
    await this.inventoryService.reserveItems(event.data);
  }
}
But what about error handling and retries? Kafka’s consumer groups and commit strategies help us manage processing failures. We can configure our consumers to retry failed messages or send them to dead-letter queues for manual inspection.
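Here is a minimal sketch of that dead-letter path, written as a variant of the inventory handler; the 'order.created.dlq' topic name and the 'KAFKA_CLIENT' token are assumptions:

// Inventory handler with a dead-letter path (illustrative)
import { Controller, Inject } from '@nestjs/common';
import { ClientKafka, Ctx, EventPattern, KafkaContext, Payload } from '@nestjs/microservices';

@Controller()
export class InventoryController {
  constructor(
    private readonly inventoryService: InventoryService,
    @Inject('KAFKA_CLIENT') private readonly kafkaClient: ClientKafka,
  ) {}

  @EventPattern('order.created')
  async handleOrderCreated(@Payload() event: OrderCreatedEvent, @Ctx() context: KafkaContext) {
    try {
      await this.inventoryService.reserveItems(event.data);
    } catch (error) {
      // Park the failed message for manual inspection instead of
      // blocking the rest of the partition
      this.kafkaClient.emit('order.created.dlq', {
        originalTopic: context.getTopic(),
        partition: context.getPartition(),
        failedAt: new Date().toISOString(),
        reason: error.message,
        event,
      });
    }
  }
}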
Monitoring becomes crucial in distributed systems. We implement comprehensive logging and metrics collection:
// Adding structured logging
private readonly logger = new Logger(InventoryService.name);

async reserveItems(orderData: OrderCreatedEvent['data']) {
  this.logger.log(`Reserving items for order ${orderData.orderId}`);

  try {
    const reservation = await this.reservationModel.create({
      orderId: orderData.orderId,
      items: orderData.items,
      status: 'RESERVED',
    });

    // emit() dispatches without an explicit subscription or await
    this.kafkaClient.emit('inventory.reserved', {
      eventType: 'INVENTORY_RESERVED',
      timestamp: new Date(),
      data: { orderId: orderData.orderId, reservationId: reservation.id },
    });
  } catch (error) {
    this.logger.error(`Failed to reserve items: ${error.message}`);
    throw error;
  }
}
Deployment strategies matter too. We package each service in its own Docker container and use Kubernetes for orchestration. This allows us to scale individual services based on their specific load patterns.
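As a pared-down example of what that looks like per service, here is a manifest sketch in the same spirit as the compose file above; the image name, replica count, and resource limits are placeholders to adapt:

# inventory-deployment.yaml (illustrative; names and limits are placeholders)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: inventory-service
spec:
  replicas: 2
  selector:
    matchLabels:
      app: inventory-service
  template:
    metadata:
      labels:
        app: inventory-service
    spec:
      containers:
        - name: inventory-service
          image: registry.example.com/inventory-service:1.0.0
          env:
            - name: KAFKA_BROKERS
              value: kafka:9092
          resources:
            limits:
              memory: 256Mi
              cpu: 250m

Scaling the inventory service independently then becomes a one-line change to replicas (or an autoscaler target), without touching any other service.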
The beauty of this architecture lies in its flexibility. New services can join the ecosystem simply by subscribing to relevant events. Want to add a recommendation service? Just listen to order completion events and build customer profiles.
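Concretely, that new service can be as small as a single listener. A hedged sketch, where the 'order.completed' topic name and the RecommendationService API are assumptions:

// recommendations.controller.ts (illustrative)
import { Controller } from '@nestjs/common';
import { EventPattern } from '@nestjs/microservices';
import { OrderItem } from './events'; // shared-library path is assumed

@Controller()
export class RecommendationsController {
  constructor(private readonly recommendationService: RecommendationService) {}

  // No changes to any other service: just subscribe to the existing stream
  @EventPattern('order.completed')
  async handleOrderCompleted(event: { data: { customerId: string; items: OrderItem[] } }) {
    await this.recommendationService.updateProfile(event.data.customerId, event.data.items);
  }
}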
Testing event-driven systems requires a different approach. We use contract testing to verify that events maintain compatibility across service versions. This prevents breaking changes when we update individual services.
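In its simplest form, a contract test just pins the event shape that consumers rely on. A hedged Jest sketch; the import path is illustrative, and a dedicated tool such as Pact would replace the manual assertions:

// order-created.contract.spec.ts (illustrative)
import { OrderCreatedEvent } from './events'; // shared-library path is assumed

describe('order.created contract', () => {
  it('keeps the fields downstream consumers depend on', () => {
    const event: OrderCreatedEvent = {
      eventType: 'ORDER_CREATED',
      timestamp: new Date(),
      data: {
        orderId: 'order-123',
        customerId: 'customer-456',
        items: [{ productId: 'sku-1', quantity: 2, price: 9.99 }],
        totalAmount: 19.98,
      },
    };

    // Renaming or removing any of these keys is a breaking change
    expect(Object.keys(event.data).sort()).toEqual([
      'customerId',
      'items',
      'orderId',
      'totalAmount',
    ]);
  });
});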
How do we ensure data consistency across services? We implement idempotent handlers and use MongoDB’s transactions within service boundaries. While we can’t have ACID transactions across services, we can design our system to handle failures gracefully.
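One way to get idempotency with the tools already in this stack is to record each event's unique ID in a dedicated collection with a unique index, and skip anything seen before. A sketch under the assumption that events carry an eventId field; the processedEventModel collection is hypothetical:

// Inside InventoryService (illustrative; assumes events carry a unique eventId)
async reserveItemsOnce(eventId: string, orderData: OrderCreatedEvent['data']) {
  try {
    // processedEventModel has a unique index on eventId,
    // e.g. schema: { eventId: { type: String, unique: true } }
    await this.processedEventModel.create({ eventId });
  } catch (error) {
    if (error.code === 11000) {
      // MongoDB duplicate key: this event was already handled, so skip it
      return;
    }
    throw error;
  }

  await this.reserveItems(orderData);
}

Recording the ID before processing trades a possibly lost retry for a guarantee against double reservation; wrapping both writes in a MongoDB transaction within the service boundary tightens this further.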
The combination of NestJS’s dependency injection, Kafka’s reliable messaging, and MongoDB’s flexible data model creates a robust foundation. We get the benefits of microservices—scalability, independent deployment, and technology diversity—without sacrificing developer experience.
I encourage you to experiment with this architecture in your next project. Start with a simple event flow and gradually add complexity as needed. Remember that the goal isn’t to build the most complex system, but to create something that scales with your needs.
What challenges have you faced with microservices? I’d love to hear about your experiences and solutions. If you found this useful, please share it with others who might benefit from this approach. Your comments and feedback help improve future content.