I remember the first time I lost critical data in an application. It was a simple bug, but it corrupted the current state, and there was no way to know exactly what went wrong or how to get back to a valid point. That frustration sent me searching for a better way to build systems. I wanted a truth that couldn’t be lost. The search led me to Event Sourcing. Today, I want to guide you through building a system with Node.js, TypeScript, and EventStore that captures every change, creating a permanent, replayable record of your application’s history. Let’s build something where the past is always accessible.
Think of Event Sourcing like a detailed ledger for a business. Instead of just knowing the current bank balance, you have every single transaction listed in order. Your application’s state becomes the sum of all these recorded changes, or events. This approach changes how we think about data. Why store only the ‘what is’ when you can also store the ‘how it became’?
To start, we need a project. I’ll use Node.js and TypeScript for type safety, which is crucial when dealing with events. Here’s a basic setup to get us going.
mkdir event-sourcing-project
cd event-sourcing-project
npm init -y
npm install express typescript ts-node @eventstore/db-client uuid
npm install -D @types/node @types/express @types/uuid
Next, we configure TypeScript. I prefer strict mode to catch errors early.
// tsconfig.json
{
  "compilerOptions": {
    "target": "ES2020",
    "module": "commonjs",
    "strict": true,
    "outDir": "./dist"
  }
}
The heart of this system is the event store. I use EventStoreDB because it’s built for this job. You can run it easily with Docker. Have you ever considered how a database designed for sequences changes your code?
# docker-compose.yml snippet
services:
  eventstore:
    image: eventstore/eventstore:latest
    ports:
      - "2113:2113"
    environment:
      - EVENTSTORE_INSECURE=true
      # Needed later for category streams like $ce-Order
      - EVENTSTORE_RUN_PROJECTIONS=All
      - EVENTSTORE_START_STANDARD_PROJECTIONS=true
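With the container running, the Node client connects over gRPC. Here’s a minimal sketch, assuming the insecure local setup above.

// Connect to the local, insecure single-node instance
import { EventStoreDBClient } from '@eventstore/db-client';

const eventStore = EventStoreDBClient.connectionString`esdb://localhost:2113?tls=false`;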
With the store ready, we define what an event looks like in code. Events are immutable facts. For an e-commerce order system, an event might be “OrderCreated”. Notice how each event carries all the information about that specific change.
// A simple event definition
export class OrderCreatedEvent {
  constructor(
    public readonly orderId: string,
    public readonly customerId: string,
    public readonly occurredOn: Date
  ) {}
}
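The snippets below also use an OrderConfirmedEvent; it mirrors the shape above.

// Confirmation event, mirroring OrderCreatedEvent
export class OrderConfirmedEvent {
  constructor(
    public readonly orderId: string,
    public readonly occurredOn: Date
  ) {}
}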
But how do we manage the current state? We use aggregates. An aggregate, like an ‘Order’, is responsible for its own consistency. It applies events to update its internal state. This is where business logic lives. What rules must pass before an order can be confirmed?
// Part of an Order aggregate
class Order {
  private status: string = 'PENDING';
  private readonly uncommittedEvents: OrderConfirmedEvent[] = [];

  constructor(public readonly id: string) {}

  confirm() {
    if (this.status !== 'PENDING') {
      throw new Error('Order cannot be confirmed');
    }
    // Record and apply a confirmation event
    this.apply(new OrderConfirmedEvent(this.id, new Date()));
  }

  getUncommittedEvents(): OrderConfirmedEvent[] {
    return this.uncommittedEvents;
  }

  private apply(event: OrderConfirmedEvent) {
    // Update state based on the event, then track it for persistence
    if (event instanceof OrderConfirmedEvent) {
      this.status = 'CONFIRMED';
    }
    this.uncommittedEvents.push(event);
  }
}
Saving and loading aggregates involves an event repository. It reads all past events for an aggregate, replays them to rebuild the current state, and saves new events. This pattern ensures we always derive state from the event history. Can you see how debugging becomes a matter of replaying events?
// Basic repository pattern
import { EventStoreDBClient, jsonEvent } from '@eventstore/db-client';

async function save(aggregate: Order, eventStore: EventStoreDBClient) {
  const events = aggregate.getUncommittedEvents().map((event) =>
    jsonEvent({
      type: event.constructor.name,
      // JSON round-trip serializes Date fields to ISO strings
      data: JSON.parse(JSON.stringify(event)),
    })
  );
  // A single append keeps the aggregate's new events atomic
  await eventStore.appendToStream(`Order-${aggregate.id}`, events);
}
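Loading is the mirror image: read the stream and replay. Here’s a minimal sketch; applyHistorical is a hypothetical variant of apply() that updates state without tracking the event as uncommitted.

// Rebuild an aggregate by replaying its stream from the start
async function load(orderId: string, eventStore: EventStoreDBClient): Promise<Order> {
  const order = new Order(orderId);
  const events = eventStore.readStream(`Order-${orderId}`);
  for await (const resolved of events) {
    if (resolved.event) {
      // applyHistorical: like apply(), but without re-tracking the event
      order.applyHistorical(resolved.event.type, resolved.event.data);
    }
  }
  return order;
}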
For complex systems, I separate the commands that change state from the queries that read it. This is Command Query Responsibility Segregation (CQRS). It lets you scale reads and writes independently. Imagine a dashboard that needs fast data; you can build a separate, optimized read model just for that.
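A minimal sketch of that split might look like this, reusing the save and load helpers from above; queryDb stands in for whatever read store you choose.

// Write side: a command goes through the aggregate and the event store
async function confirmOrder(orderId: string, eventStore: EventStoreDBClient) {
  const order = await load(orderId, eventStore); // rebuild from history
  order.confirm();                               // enforce business rules
  await save(order, eventStore);                 // append the new events
}

// Read side: queries skip the aggregate entirely and hit a
// read-optimized store kept up to date by projections (see below).
// queryDb is a placeholder for a read database such as MongoDB.
async function getOrderSummary(orderId: string) {
  return queryDb.collection('orders_summary').findOne({ orderId });
}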
Read models are created by projections. A projection listens to events and updates a dedicated database table or view. For example, every time an OrderConfirmedEvent happens, a projection might update an ‘orders_summary’ table.
// A simple projection concept: follow the Order category stream
// ($ce-Order requires EventStoreDB's system projections to be running)
const subscription = eventStore.subscribeToStream('$ce-Order', {
  resolveLinkTos: true,
});
for await (const resolved of subscription) {
  if (resolved.event?.type === 'OrderConfirmedEvent') {
    const { orderId } = resolved.event.data as { orderId: string };
    // Update a read-optimized database
    await updateReadDatabase(orderId, { status: 'confirmed' });
  }
}
Over time, your event definitions might need to change. This is event versioning. You add new properties without breaking old events. I handle this by keeping event version numbers and writing upgrade logic when loading old events. It’s a bit of work, but it prevents data loss.
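To make that concrete, here is a hypothetical upcaster for an OrderCreated event whose second version gained a currency field; the names and the default value are illustrative.

// Hypothetical example: v2 of OrderCreated added a currency field
type OrderCreatedV1 = { orderId: string; customerId: string; occurredOn: string };
type OrderCreatedV2 = OrderCreatedV1 & { currency: string };

// Upgrade old event data to the current shape when loading
function upcastOrderCreated(
  data: OrderCreatedV1 | OrderCreatedV2,
  version: number
): OrderCreatedV2 {
  if (version < 2) {
    // v1 events predate the currency field; apply a safe default
    return { ...data, currency: 'USD' };
  }
  return data as OrderCreatedV2;
}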
Testing is straightforward. You can test aggregates by checking the events they produce given certain commands. I often write tests that say, “When I execute this command, I expect these events to be recorded.”
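With Node’s built-in test runner, such a test can look like this, reusing the Order aggregate from earlier.

// Given a pending order, when we confirm it,
// then an OrderConfirmedEvent should be recorded.
import { test } from 'node:test';
import assert from 'node:assert/strict';

test('confirming a pending order records an OrderConfirmedEvent', () => {
  const order = new Order('order-1');
  order.confirm();
  const events = order.getUncommittedEvents();
  assert.equal(events.length, 1);
  assert.ok(events[0] instanceof OrderConfirmedEvent);
});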
Performance can be a concern when streams grow long. EventStoreDB is efficient, but for aggregates with long event histories, I consider snapshotting. A snapshot captures the aggregate’s state at a point in time, so you don’t have to replay every event from the beginning on each load.
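Here is a sketch of how loading changes with a snapshot in place. The loadSnapshot helper and Order.fromSnapshot are hypothetical, but readStream’s fromRevision option is what makes the partial replay work.

// Replay only the events recorded after the latest snapshot.
// loadSnapshot and Order.fromSnapshot are hypothetical helpers;
// snapshot.revision is assumed to be the bigint stream revision
// the snapshot was taken at.
async function loadWithSnapshot(orderId: string, eventStore: EventStoreDBClient) {
  const snapshot = await loadSnapshot(`Order-${orderId}`);
  const order = snapshot ? Order.fromSnapshot(snapshot) : new Order(orderId);
  const events = eventStore.readStream(`Order-${orderId}`, {
    fromRevision: snapshot ? snapshot.revision + 1n : 'start',
  });
  for await (const resolved of events) {
    if (resolved.event) {
      order.applyHistorical(resolved.event.type, resolved.event.data);
    }
  }
  return order;
}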
A common mistake is putting too much data in events or making them too fine-grained. I stick to business-relevant changes. Another pitfall is not planning for schema evolution from the start. Start with versioning in mind.
I’ve built several systems this way, and the clarity it brings is remarkable. When a customer asks, “Why is my order in this state?” I can show them the exact sequence of events. It turns support from a guessing game into a factual review.
This journey from losing data to having a complete history has been rewarding. Event Sourcing with Node.js, TypeScript, and EventStore gives you a robust foundation for complex domains. I encourage you to try it on a small project. See how it feels to have your application’s memory set in stone.
If you found this guide helpful, please like and share it with others who might benefit. Have you tried Event Sourcing before? What challenges did you face? Let me know in the comments below—I’d love to hear your experiences and continue the conversation.