I’ve been thinking a lot about building systems that can scale gracefully while maintaining complete auditability. Traditional approaches often leave us guessing about what happened to our data and when. That’s why I want to share how event sourcing with EventStoreDB and Node.js can transform how you build applications.
Have you ever wondered what your application’s state was exactly three weeks ago at 2:15 PM?
Event sourcing stores every state change as an immutable event sequence rather than just the current state. When something happens in your domain, you record it as a fact that can never be changed. This gives you a complete history of everything that’s occurred in your system.
Let me show you how to set this up. First, we’ll run EventStoreDB using Docker:
version: '3.8'
services:
  eventstore:
    image: eventstore/eventstore:22.10.0-buster-slim
    environment:
      - EVENTSTORE_INSECURE=true
    ports:
      - "2113:2113"
Connecting from Node.js is straightforward:
import { EventStoreDBClient } from '@eventstore/db-client';

// tls=false matches the insecure single-node setup above; use TLS in production.
const client = EventStoreDBClient.connectionString(
  'esdb://localhost:2113?tls=false'
);
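With a client in hand, writing and reading are just appends and reads against a stream. Here’s a minimal sketch using the client’s jsonEvent helper; the stream name and payload are illustrative:

import { jsonEvent } from '@eventstore/db-client';

// Append a fact to the order's stream; the event is never updated or deleted.
const created = jsonEvent({
  type: 'OrderCreated',
  data: { orderId: 'order-123', customerId: 'customer-123' },
});
await client.appendToStream('order-order-123', created);

// Read the stream back, oldest event first.
for await (const resolved of client.readStream('order-order-123')) {
  console.log(resolved.event?.type, resolved.event?.data);
}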
Now, let’s talk about domain events. These represent things that have happened in your business domain. Here’s how I define a base event structure:
import { randomUUID } from 'node:crypto';

export abstract class BaseDomainEvent {
  public readonly eventId: string;
  public readonly timestamp: Date;
  public readonly eventVersion: number = 1;

  constructor(
    public readonly aggregateId: string,
    public readonly aggregateType: string,
    public readonly eventType: string
  ) {
    this.eventId = randomUUID();
    this.timestamp = new Date();
  }
}
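A concrete event just extends the base and carries the data the rest of the system needs. Here’s a version of OrderCreatedEvent consistent with how it’s used below, along with an OrderItem shape I’m assuming for illustration:

// Assumed item shape; adjust to your domain.
export interface OrderItem {
  productId: string;
  price: number;
  quantity: number;
}

export class OrderCreatedEvent extends BaseDomainEvent {
  constructor(
    aggregateId: string,
    public readonly customerId: string,
    public readonly items: OrderItem[],
    public readonly totalAmount: number
  ) {
    super(aggregateId, 'Order', 'OrderCreated');
  }
}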
What happens when business requirements change and you need to modify your event structure?
That’s where event versioning comes in. Because stored events are immutable, you can’t rewrite them in place; instead, I handle schema evolution by including version numbers in each event and upcasting old events to the current shape as they’re read, typically when building projections.
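As a sketch of what such an upcaster can look like, here’s a hypothetical v1-to-v2 transformation applied before a projection consumes the event. The event shapes and field names are invented for illustration:

interface AddressChangedV1 {
  eventVersion: 1;
  address: string; // v1 stored the whole address as one string
}

interface AddressChangedV2 {
  eventVersion: 2;
  street: string;
  city: string;
}

// Upcast a v1 event to the v2 shape on read; v2 events pass through.
function upcastAddressChanged(
  event: AddressChangedV1 | AddressChangedV2
): AddressChangedV2 {
  if (event.eventVersion === 2) return event;
  const [street = '', city = ''] = event.address.split(',').map((s) => s.trim());
  return { eventVersion: 2, street, city };
}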
Aggregates are the heart of your domain logic. They protect business rules and emit events when state changes:
class Order extends AggregateRoot {
  private items: OrderItem[] = [];
  private status: OrderStatus = OrderStatus.Pending;

  createOrder(customerId: string, items: OrderItem[]) {
    // Record the decision as an event; apply() routes it to the handler below.
    this.apply(new OrderCreatedEvent(
      this.id,
      customerId,
      items,
      this.calculateTotal(items)
    ));
  }

  // Called both for new events and during replay from the event store.
  private onOrderCreatedEvent(event: OrderCreatedEvent) {
    this.items = event.items;
    this.status = OrderStatus.Created;
  }

  // Assumes the OrderItem shape defined above.
  private calculateTotal(items: OrderItem[]): number {
    return items.reduce((sum, item) => sum + item.price * item.quantity, 0);
  }
}
Notice how the aggregate reconstructs its state by applying events? This is crucial for event sourcing.
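If you haven’t seen the base class, here’s a minimal sketch of what AggregateRoot might look like. The on<EventType>Event naming convention for handlers is one common approach, not a requirement:

abstract class AggregateRoot {
  private uncommittedEvents: BaseDomainEvent[] = [];

  constructor(public readonly id: string) {}

  // Record a new event and update in-memory state.
  protected apply(event: BaseDomainEvent) {
    this.handle(event);
    this.uncommittedEvents.push(event);
  }

  // Rebuild state from stored events; nothing is re-persisted.
  loadFromHistory(events: BaseDomainEvent[]) {
    events.forEach((event) => this.handle(event));
  }

  getUncommittedEvents(): BaseDomainEvent[] {
    return this.uncommittedEvents;
  }

  // Dispatch to onOrderCreatedEvent, onOrderConfirmedEvent, and so on.
  private handle(event: BaseDomainEvent) {
    const handler = (this as any)[`on${event.eventType}Event`];
    handler?.call(this, event);
  }
}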
Building read models through projections lets you optimize queries without affecting write performance:
class OrderSummaryProjection {
  async handleOrderCreated(event: OrderCreatedEvent) {
    // Write a denormalized row optimized for reads (db is assumed to be a
    // Prisma-style client).
    await db.orders.create({
      data: {
        id: event.aggregateId,
        customerId: event.customerId,
        totalAmount: event.totalAmount,
        status: 'created'
      }
    });
  }
}
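To feed the projection, subscribe to the store and route each event to its handler. Here’s a sketch using a catch-up subscription to $all with a server-side type filter; the dispatch and checkpointing details are simplified:

import { eventTypeFilter, START } from '@eventstore/db-client';

const projection = new OrderSummaryProjection();

// Catch-up subscription: replays history from START, then follows live events.
const subscription = client.subscribeToAll({
  fromPosition: START,
  filter: eventTypeFilter({ prefixes: ['Order'] }),
});

for await (const resolved of subscription) {
  if (resolved.event?.type === 'OrderCreated') {
    // In production you would also checkpoint the event's position here.
    await projection.handleOrderCreated(resolved.event.data as any);
  }
}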
How do you handle complex business processes that span multiple aggregates?
Distributed sagas coordinate these processes. They listen for events and dispatch commands to maintain consistency across boundaries:
class OrderFulfillmentSaga {
  async handle(orderCreated: OrderCreatedEvent) {
    const paymentResult = await this.paymentService.charge(
      orderCreated.customerId,
      orderCreated.totalAmount
    );

    if (paymentResult.success) {
      await this.commandBus.dispatch(
        new ConfirmOrderCommand(orderCreated.aggregateId)
      );
    } else {
      // Compensate when payment fails (CancelOrderCommand is illustrative).
      await this.commandBus.dispatch(
        new CancelOrderCommand(orderCreated.aggregateId)
      );
    }
  }
}
Testing event-sourced systems requires a different approach. I focus on verifying that commands produce the correct events and that aggregates rebuild state properly:
describe('Order', () => {
  it('should create order with correct events', () => {
    // OrderItem shape assumed above; values are illustrative.
    const items: OrderItem[] = [{ productId: 'sku-1', price: 10, quantity: 2 }];
    const order = new Order('order-123');
    order.createOrder('customer-123', items);

    expect(order.getUncommittedEvents()).toContainEqual(
      expect.objectContaining({
        eventType: 'OrderCreated'
      })
    );
  });
});
In production, monitoring becomes essential. I track event throughput, projection lag, and saga completion rates. Setting up proper alerting helps catch issues before they affect users.
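As one concrete example, projection lag can be approximated by comparing the projection’s last checkpointed position with the head of the $all stream. A rough sketch, where loadCheckpoint is a hypothetical helper over your own checkpoint store:

import { BACKWARDS, END } from '@eventstore/db-client';

// Hypothetical: reads the projection's last processed position from storage.
declare function loadCheckpoint(): Promise<{ commit: bigint }>;

// Approximate lag as the gap between the store's head commit position and
// the projection's last processed position.
async function projectionLag(): Promise<bigint> {
  let headCommit = 0n;

  // Read the single newest event in $all to find the head position.
  for await (const resolved of client.readAll({
    direction: BACKWARDS,
    fromPosition: END,
    maxCount: 1,
  })) {
    headCommit = resolved.event?.position.commit ?? 0n;
  }

  const checkpoint = await loadCheckpoint();
  return headCommit - checkpoint.commit;
}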
The beauty of this approach reveals itself when debugging production issues. You can replay events to see exactly what led to a problematic state. This forensic capability has saved me countless hours during incident investigations.
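In code, that replay is just a fold over the stream: read the events and apply them to a fresh aggregate. A sketch, reusing loadFromHistory from the base class above (deserialization into concrete event classes is elided):

// Rebuild an Order's state from its stream for forensic inspection.
// The order-<id> stream naming convention is illustrative.
async function rehydrateOrder(orderId: string): Promise<Order> {
  const events: BaseDomainEvent[] = [];

  for await (const resolved of client.readStream(`order-${orderId}`)) {
    // A real implementation would map each payload to its event class here.
    events.push(resolved.event?.data as unknown as BaseDomainEvent);
  }

  const order = new Order(orderId);
  order.loadFromHistory(events);
  return order;
}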
Have you considered how event sourcing could change your approach to data consistency?
Remember that event sourcing isn’t a silver bullet. It adds complexity that might not be justified for simple CRUD applications. But for domains where audit trails, temporal queries, and integration with other systems matter, it’s transformative.
I’ve found that starting with a bounded context that clearly benefits from these patterns helps teams learn gradually. The investment in learning pays dividends as systems grow and requirements change.
What challenges do you anticipate when adopting event sourcing in your projects?
I’d love to hear about your experiences with distributed systems. If this approach resonates with you, please share your thoughts in the comments and pass this along to others who might benefit from these patterns.