I’ve been wrestling with complex systems that buckle under pressure. Scaling traditional architectures often feels like patching a leaky boat. That’s why event-driven approaches caught my attention - they promise resilience and adaptability. Today, I’ll share practical techniques using Node.js, EventStore, and TypeScript that transformed how I build systems. Follow along to implement these patterns yourself.
Event sourcing fundamentally changes how we track state. Instead of updating records, we capture every state change as immutable events. This creates a complete audit trail. CQRS complements this by separating read and write operations. Why struggle with complex queries slowing down your writes? This separation allows independent scaling. Consider an e-commerce system: product updates can stream to one service while order queries run on another.
// Simple event definition
class ProductCreatedEvent {
  constructor(
    public readonly id: string,
    public readonly name: string,
    public readonly price: number
  ) {}
}
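Later snippets type events as DomainEvent. Here's a minimal sketch of what that contract could look like - the interface and the OrderItem shape are my assumptions, not library types:

```typescript
// Sketch of the DomainEvent contract the later snippets assume.
// The interface itself is my assumption, not part of the EventStore client.
interface DomainEvent {
  readonly id: string;
}

interface OrderItem {
  sku: string;
  price: number;
}

// Events like this one satisfy the contract structurally.
class OrderCreatedEvent implements DomainEvent {
  constructor(
    public readonly id: string,
    public readonly items: OrderItem[]
  ) {}
}
```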
Setting up the environment is straightforward. I use Docker for EventStore - no complex installations. Here’s the docker-compose configuration I always start with:
services:
  eventstore:
    image: eventstore/eventstore:latest
    ports:
      - "1113:1113" # TCP protocol
      - "2113:2113" # HTTP API
Connecting to EventStore requires a reliable client. I wrap it with TypeScript interfaces for type safety. Notice how we handle metadata - it’s crucial for debugging distributed systems:
async appendEvent(stream: string, event: DomainEvent) {
  // jsonEvent comes from @eventstore/db-client; it wraps the payload
  // so appendToStream can persist it with its type and metadata intact.
  const serialized = jsonEvent({
    type: event.constructor.name,
    data: { ...event },
    metadata: { timestamp: new Date().toISOString() }
  });
  await this.client.appendToStream(stream, [serialized]);
}
Aggregates enforce business rules. When creating an order aggregate, we validate before emitting events. What happens when inventory runs out during checkout? The aggregate prevents invalid state transitions:
class Order extends AggregateRoot {
  create(command: CreateOrder) {
    if (!command.items.length) throw new Error("Empty order");
    this.apply(new OrderCreatedEvent(command.id, command.items));
  }
}
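The AggregateRoot base class is assumed above. One minimal way it could work is sketched below - method names like loadFromHistory and the on-handler routing convention are my assumptions, not a library API:

```typescript
// Minimal sketch of an event-sourced aggregate base class (my assumption):
// apply() records a new event and routes it to an on<EventName> method,
// so state is only ever derived from events.
abstract class AggregateRoot {
  readonly newEvents: object[] = []; // uncommitted events to persist

  // Rebuild state by replaying persisted events. Called from the subclass
  // constructor after super(), so field initializers have already run.
  protected loadFromHistory(history: object[]) {
    history.forEach(e => this.route(e));
  }

  protected apply(event: object) {
    this.newEvents.push(event);
    this.route(event);
  }

  private route(event: object) {
    const handler = (this as any)[`on${event.constructor.name}`];
    if (typeof handler === "function") handler.call(this, event);
  }
}

// Toy example: a counter aggregate rebuilt from its own events.
class Incremented {}

class Counter extends AggregateRoot {
  count = 0;

  constructor(history: object[] = []) {
    super();
    this.loadFromHistory(history);
  }

  increment() {
    this.apply(new Incremented());
  }

  onIncremented(_: Incremented) {
    this.count += 1;
  }
}
```

A subclass constructor that accepts history and calls loadFromHistory after super() matches the new Order(history) style used in the command-handler snippet.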
Command handlers bridge user actions to domain logic. They’re where validation lives. Notice how we load the aggregate’s event history before making decisions:
async handle(cmd: UpdateOrderCommand) {
  const history = await eventStore.readStream(`order-${cmd.id}`);
  const order = new Order(history); // Rebuild state from events
  order.updateAddress(cmd.address);
  await eventStore.appendEvents(order.newEvents);
}
Projections transform events into readable views. They’re the secret to fast queries. How quickly can you add a new reporting dashboard? With projections, it’s minutes not days:
class OrderSummaryProjection {
  onOrderCreated(event) {
    db.insert('order_summaries', {
      id: event.orderId,
      total: event.items.reduce((sum, item) => sum + item.price, 0)
    });
  }
}
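How do events actually reach projections? Here's a minimal in-memory dispatcher sketch - the class and its method names are mine; in production a catch-up subscription from EventStore would feed it:

```typescript
// In-memory sketch (my invention) of fanning stored events out to
// projection handlers to build read models.
type Handler = (event: any) => void;

class ProjectionDispatcher {
  private handlers = new Map<string, Handler[]>();

  on(eventType: string, handler: Handler) {
    const list = this.handlers.get(eventType) ?? [];
    list.push(handler);
    this.handlers.set(eventType, list);
  }

  dispatch(eventType: string, event: any) {
    (this.handlers.get(eventType) ?? []).forEach(h => h(event));
  }
}

// Usage: rebuild an order_summaries read model from a stream of events.
const summaries = new Map<string, number>();
const dispatcher = new ProjectionDispatcher();
dispatcher.on("OrderCreated", e =>
  summaries.set(
    e.orderId,
    e.items.reduce((sum: number, item: any) => sum + item.price, 0)
  )
);
dispatcher.dispatch("OrderCreated", {
  orderId: "o1",
  items: [{ price: 10 }, { price: 5 }]
});
```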
Sagas manage distributed transactions. They’re the conductors orchestrating complex workflows. When an order involves payment and inventory, how do you maintain consistency? Sagas coordinate without tight coupling:
class OrderSaga {
  async onOrderCreated(event) {
    await paymentService.charge(event.orderId, event.amount);
    await inventoryService.reserve(event.orderId, event.items);
  }
}
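The happy path above leaves a gap: if payment succeeds but the reservation fails, the customer's money is stuck. A hedged sketch of a compensating action - the refund method and the service shapes are my assumptions, mirroring the calls above:

```typescript
// Sketch of saga compensation (service interfaces are my assumptions):
// if the second step fails, undo the first before surfacing the error.
class OrderSagaWithCompensation {
  constructor(
    private payments: {
      charge(orderId: string, amount: number): Promise<void>;
      refund(orderId: string): Promise<void>;
    },
    private inventory: {
      reserve(orderId: string, items: object[]): Promise<void>;
    }
  ) {}

  async onOrderCreated(event: { orderId: string; amount: number; items: object[] }) {
    await this.payments.charge(event.orderId, event.amount);
    try {
      await this.inventory.reserve(event.orderId, event.items);
    } catch (err) {
      await this.payments.refund(event.orderId); // compensating action
      throw err;
    }
  }
}
```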
Schema evolution is inevitable. We handle breaking changes through versioning and upcasting. Can you change requirements without losing historical data? Event sourcing makes it possible:
function upcastV1ToV2(event) {
  // Add new field with default
  return { ...event, priority: event.priority || 'standard' };
}
Resilience comes from smart error handling. I implement retries with exponential backoff. What happens when a downstream service fails? We pause without crashing:
async processEvent(event) {
  const maxRetries = 3;
  for (let attempt = 1; attempt <= maxRetries; attempt++) {
    try {
      await handler.execute(event);
      return;
    } catch (error) {
      if (attempt === maxRetries) throw error; // surface after final attempt
      // Exponential backoff: 200ms, then 400ms
      await new Promise(res => setTimeout(res, 100 * 2 ** attempt));
    }
  }
}
Monitoring event flows is non-negotiable. I instrument event handlers to track processing times. Dashboards showing event throughput catch bottlenecks before users notice.
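One way to wire that up, as a sketch - the wrapper and the timings store are my invention; in practice the durations would go to your metrics backend rather than a map:

```typescript
// Sketch (my invention): wrap an event handler to record its duration,
// so a dashboard can aggregate per-handler processing times.
const timings: Record<string, number[]> = {};

function instrument<E>(name: string, handler: (event: E) => Promise<void>) {
  return async (event: E) => {
    const start = Date.now();
    try {
      await handler(event);
    } finally {
      // Record even on failure: slow failures matter too.
      (timings[name] ??= []).push(Date.now() - start);
    }
  };
}
```

Wrapping a projection handler is then one line: `const handler = instrument("orderSummary", raw)`.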
This approach has saved countless debugging hours. Replaying events to reproduce bugs is game-changing. The initial setup pays off when requirements inevitably shift.
I’d love to hear your experiences with distributed systems. What challenges have you faced? Share this guide if you found it useful, and comment below with your implementation stories!