I’ve been thinking about modern system design lately. How do we build applications that scale gracefully while maintaining data integrity? This question led me to explore event sourcing - an event-driven approach where every state change is captured as an immutable event. Today, I’ll share a practical guide to building such a system using Node.js, EventStore, and TypeScript. Let’s dive in.
When designing our e-commerce order system, we treat every state change as an event. Why is this powerful? Because we never lose history. An order’s journey from creation to fulfillment becomes a series of timestamped events we can replay anytime. Here’s how we define our core events:
```typescript
// Order event definitions
interface OrderCreated {
  eventType: 'OrderCreated';
  data: {
    customerId: string;
    items: { productId: string; quantity: number }[];
  };
}

interface OrderCancelled {
  eventType: 'OrderCancelled';
  data: { reason: string; cancelledAt: Date };
}
```
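Because `eventType` is a string literal on each interface, the events compose naturally into a discriminated union, and rebuilding state becomes a pure fold over that union. Here is a minimal sketch - `OrderEvent`, `OrderState`, and `applyEvent` are illustrative names, not part of any library, and I’m assuming an `OrderConfirmed` event alongside the two defined above:

```typescript
// Discriminated union over all order events (illustrative; extend as needed)
type OrderEvent =
  | { eventType: 'OrderCreated'; data: { customerId: string; items: { productId: string; quantity: number }[] } }
  | { eventType: 'OrderConfirmed'; data: Record<string, never> }
  | { eventType: 'OrderCancelled'; data: { reason: string; cancelledAt: Date } };

interface OrderState {
  status: 'PENDING' | 'CONFIRMED' | 'CANCELLED';
  customerId?: string;
  items?: { productId: string; quantity: number }[];
}

// Pure reducer: folds one event into the current state
function applyEvent(state: OrderState, event: OrderEvent): OrderState {
  switch (event.eventType) {
    case 'OrderCreated':
      return { ...state, ...event.data, status: 'PENDING' };
    case 'OrderConfirmed':
      return { ...state, status: 'CONFIRMED' };
    case 'OrderCancelled':
      return { ...state, status: 'CANCELLED' };
  }
}
```

A pure reducer like this is trivially unit-testable: feed it an event list with `reduce` and assert on the result.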
Setting up EventStore is straightforward. We connect using its Node.js client:
```typescript
// EventStore connection setup
import { EventStoreDBClient, jsonEvent } from '@eventstore/db-client';

const client = EventStoreDBClient.connectionString(
  'esdb://localhost:2113?tls=false'
);
```
How do we actually store events? We append them to streams representing aggregates like orders:
```typescript
// Appending events to a stream
const appendResult = await client.appendToStream(
  `order-${orderId}`,
  jsonEvent({
    type: 'OrderCreated',
    data: {
      customerId: 'user123',
      items: [{ productId: 'sku-42', quantity: 1 }], // example payload
    },
  })
);
```
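Appends can also guard against concurrent writers: EventStoreDB supports optimistic concurrency, where a write fails if the stream has moved past the revision the writer expected. The core idea is easy to picture with an in-memory sketch - `InMemoryStream` below is illustrative, not the real client:

```typescript
// Illustrative in-memory sketch of optimistic concurrency: an append succeeds
// only if the caller's expected revision matches the stream's current one.
class InMemoryStream<E> {
  private events: E[] = [];

  append(event: E, expectedRevision: number): void {
    const current = this.events.length - 1; // -1 means "no events yet"
    if (expectedRevision !== current) {
      throw new Error(`WrongExpectedVersion: expected ${expectedRevision}, got ${current}`);
    }
    this.events.push(event);
  }

  get revision(): number {
    return this.events.length - 1;
  }
}
```

Two writers who both read revision 0 cannot both append with `expectedRevision: 0` - the second one fails and must re-read, which is exactly the conflict we want surfaced.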
Reconstructing an order’s current state is simply a matter of replaying its event history:
```typescript
// Rebuilding aggregate state from events
async function getOrder(orderId: string) {
  const events = client.readStream(`order-${orderId}`);
  let order: Record<string, unknown> = { status: 'PENDING' };
  for await (const { event } of events) {
    switch (event?.type) {
      case 'OrderCreated':
        order = { ...order, ...(event.data as object) };
        break;
      case 'OrderConfirmed':
        order.status = 'CONFIRMED';
        break;
    }
  }
  return order;
}
```
But what happens when we have thousands of events? That’s where snapshots help. Periodically save the current state:
```typescript
// Creating snapshots
async function takeOrderSnapshot(order: { id: string }, currentVersion: number) {
  await client.appendToStream(
    `snapshot-order-${order.id}`,
    jsonEvent({
      type: 'OrderSnapshot',
      data: { ...order, version: currentVersion },
    })
  );
}
```
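With a snapshot in hand, rebuilding state no longer requires the full history: load the latest snapshot, then replay only the events recorded after it. A minimal, storage-agnostic sketch - `Snapshot` and `replayFrom` are illustrative names, not client APIs:

```typescript
// A snapshot pairs a materialized state with the stream revision it covers
interface Snapshot<S> {
  state: S;
  version: number;
}

// Replay only events newer than the snapshot, folding them into its state
function replayFrom<S, E>(
  snapshot: Snapshot<S>,
  events: { revision: number; event: E }[],
  apply: (state: S, event: E) => S
): S {
  return events
    .filter(({ revision }) => revision > snapshot.version)
    .reduce((state, { event }) => apply(state, event), snapshot.state);
}
```

In practice you would read the snapshot stream backwards for the latest entry, then read the order stream starting at `snapshot.version + 1`.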
For our read models, we project events into specialized views. How do we keep these updated? Through event handlers that react to changes:
```typescript
// Projecting to a MongoDB read model
// The subscription is an async iterable, so we consume it with for await
for await (const { event } of client.subscribeToAll()) {
  if (event?.type === 'OrderConfirmed') {
    await ordersCollection.updateOne(
      { id: event.data.orderId },
      { $set: { status: 'confirmed' } }
    );
  }
}
```
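A subscription that restarts from scratch would reprocess every event, so real projections persist a checkpoint: the position of the last event they handled. Here is a minimal sketch with an in-memory store - `CheckpointStore` and `project` are illustrative names, not part of the client:

```typescript
// Persist the last handled position so a restarted projector can resume
// instead of reprocessing the whole log (in-memory here for illustration).
interface CheckpointStore {
  load(): Promise<bigint | undefined>;
  save(position: bigint): Promise<void>;
}

class InMemoryCheckpointStore implements CheckpointStore {
  private position?: bigint;
  async load() { return this.position; }
  async save(position: bigint) { this.position = position; }
}

async function project(
  events: { position: bigint; type: string }[],
  store: CheckpointStore,
  handle: (e: { position: bigint; type: string }) => Promise<void>
) {
  const checkpoint = (await store.load()) ?? -1n;
  for (const event of events) {
    if (event.position <= checkpoint) continue; // already handled, skip
    await handle(event);
    await store.save(event.position); // commit after each event
  }
}
```

Saving the checkpoint after the handler makes redelivery possible on a crash between the two steps, so handlers should stay idempotent - the `updateOne` with `$set` above already is.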
Testing is crucial in event-driven systems. We verify our aggregates behave correctly:
```typescript
// Aggregate test example
test('Order cancels only when pending', () => {
  const order = new Order();
  order.create('user123', [{ productId: 'sku-42', quantity: 1 }]);
  order.confirm(); // changes status to CONFIRMED
  expect(() => order.cancel()).toThrowError(
    'Cannot cancel confirmed order'
  );
});
```
Performance matters. We implement backpressure handling in our event processors:
```typescript
// Controlled event processing
const processor = new EventProcessor({
  maxConcurrent: 5, // cap on in-flight handlers
  handleEvent: async (event) => {
    // Business logic here
  },
});
```
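`EventProcessor` here is our own abstraction rather than a library export. One minimal way such a bounded-concurrency processor could look, assuming handlers are async and ordering across slots does not matter:

```typescript
// Caps in-flight handlers at maxConcurrent; excess events wait for a slot.
class BoundedProcessor<E> {
  private active = 0;
  private queue: (() => void)[] = [];

  constructor(
    private maxConcurrent: number,
    private handleEvent: (event: E) => Promise<void>
  ) {}

  async process(event: E): Promise<void> {
    if (this.active >= this.maxConcurrent) {
      // No slot free: park this call until a running handler finishes
      await new Promise<void>(res => this.queue.push(res));
    }
    this.active++;
    try {
      await this.handleEvent(event);
    } finally {
      this.active--;
      this.queue.shift()?.(); // wake the next waiter, if any
    }
  }
}
```

Each completed handler releases exactly one waiter, so the in-flight count can never exceed `maxConcurrent` - the backpressure is the awaited `process` promise itself.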
What about errors? We add retries with exponential backoff:
```typescript
// Error handling with retries
async function handleWithRetry(event: unknown, maxAttempts = 3) {
  let attempt = 0;
  while (attempt < maxAttempts) {
    try {
      return await processEvent(event);
    } catch (err) {
      if (attempt === maxAttempts - 1) throw err; // out of retries: surface the error
      const delay = 2 ** attempt * 100; // 100ms, 200ms, 400ms...
      await new Promise(res => setTimeout(res, delay));
      attempt++;
    }
  }
}
```
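The same pattern generalizes nicely: injecting the sleep function makes the backoff schedule unit-testable without real delays. The `retry` helper below is an illustrative sketch, not a library function:

```typescript
// Generic retry with exponential backoff and an injectable sleep,
// so tests can record the delays instead of actually waiting.
async function retry<T>(
  fn: () => Promise<T>,
  maxAttempts = 3,
  sleep: (ms: number) => Promise<void> = ms => new Promise(r => setTimeout(r, ms))
): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      if (attempt >= maxAttempts - 1) throw err; // last attempt failed
      await sleep(2 ** attempt * 100); // 100ms, 200ms, 400ms...
    }
  }
}
```

Wrapping `processEvent` becomes `retry(() => processEvent(event))`, and the final error propagates to the caller instead of being swallowed.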
Throughout this journey, I’ve found event sourcing transforms how we think about data. Instead of overwriting state, we accumulate truth. Instead of guessing what changed, we know exactly when and why. The initial setup might feel complex, but the auditability and flexibility pay dividends.
Did you notice how events naturally document system behavior? That’s my favorite benefit - the event log becomes living documentation. What business processes could you clarify with this approach?
Now consider your current projects. Where would replayable history help? Where could separate read/write models boost performance? I encourage you to try a small event-sourced module in your next Node.js project.
If you found this guide helpful, please share it with your team. What challenges have you faced with distributed systems? Share your experiences in the comments - I’d love to continue the conversation!