I’ve been thinking a lot about resilient systems lately. What happens when your database crashes mid-operation? How do you track complex business workflows? Traditional CRUD approaches often leave gaps in audit trails and struggle with temporal queries. That’s why I’ve shifted to event sourcing - capturing every state change as an immutable fact. Let me show you how to build a production-ready system using Node.js, TypeScript, and PostgreSQL.
Why event sourcing? Consider banking transactions. Instead of overwriting account balances, we record every deposit and withdrawal. This gives us perfect audit history and enables powerful temporal queries. But how do we implement this without performance penalties? Let’s find out.
First, our foundation: TypeScript and PostgreSQL. We’ll start with a clean setup:
npm init -y
npm install pg uuid zod fastify
npm install -D typescript @types/node ts-node-dev
Our tsconfig.json enables strict type checking - crucial for domain safety:
{
  "compilerOptions": {
    "target": "ES2022",
    "module": "commonjs",
    "strict": true,
    "esModuleInterop": true,
    "outDir": "dist"
  }
}
Now, the event store schema - the heart of our system:
CREATE EXTENSION IF NOT EXISTS "uuid-ossp";  -- provides uuid_generate_v4()

CREATE TABLE events (
  id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
  stream_id VARCHAR(255) NOT NULL,
  event_type VARCHAR(255) NOT NULL,
  event_data JSONB NOT NULL,
  version INTEGER NOT NULL,
  UNIQUE(stream_id, version)
);
Notice the unique constraint on (stream_id, version)? It enforces optimistic concurrency: two writers can never commit the same version of a stream. Ever wondered how financial systems prevent double-spending? This constraint is part of the answer.
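To see the constraint in action, here’s a minimal sketch (connection details and stream names are just placeholders): two writers both read version 0 and try to commit version 1 of the same stream, and PostgreSQL rejects the second with a unique-violation error (code 23505).

// concurrency-demo.ts - hypothetical demo of the (stream_id, version) constraint
import { Pool } from 'pg';

const pool = new Pool(); // assumes standard PG* environment variables

async function demo() {
  const insert = `INSERT INTO events(stream_id, event_type, event_data, version)
                  VALUES($1, $2, $3, $4)`;

  // Both "writers" saw version 0, so both try to write version 1
  await pool.query(insert, ['account-1', 'MoneyDeposited', JSON.stringify({ amount: 100 }), 1]);

  try {
    await pool.query(insert, ['account-1', 'MoneyDeposited', JSON.stringify({ amount: 50 }), 1]);
  } catch (err: any) {
    // 23505 = unique_violation: the losing writer must reload the stream and retry
    if (err.code === '23505') console.log('Conflicting write rejected');
  }
}

demo().finally(() => pool.end());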
Let’s model a bank account domain. Events become our source of truth:
// AccountEvents.ts
export class AccountCreated {
  constructor(
    public readonly accountId: string,
    public readonly owner: string,
    public readonly initialBalance: number
  ) {}
}

export class MoneyDeposited {
  constructor(
    public readonly accountId: string,
    public readonly amount: number,
    public readonly transactionId: string
  ) {}
}
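The aggregate below refers to an AccountEvent type that isn’t defined yet. One way to define it is a plain union; the MoneyWithdrawn event is an assumption here (the overdraft test later relies on something like it):

// AccountEvents.ts (continued) - MoneyWithdrawn and the union type are assumptions
export class MoneyWithdrawn {
  constructor(
    public readonly accountId: string,
    public readonly amount: number,
    public readonly transactionId: string
  ) {}
}

export type AccountEvent = AccountCreated | MoneyDeposited | MoneyWithdrawn;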
The aggregate root reconstructs state from events:
// AccountAggregate.ts
import { v4 as uuid } from 'uuid';
import { AccountCreated, MoneyDeposited, AccountEvent } from './AccountEvents';

class Account {
  private balance = 0;
  private uncommitted: AccountEvent[] = [];

  constructor(private events: AccountEvent[], private id: string) {
    this.applyEvents();
  }

  private applyEvents() {
    this.events.forEach(event => this.apply(event));
  }

  private apply(event: AccountEvent) {
    if (event instanceof AccountCreated) this.balance = event.initialBalance;
    if (event instanceof MoneyDeposited) this.balance += event.amount;
    // Other event handlers...
  }

  deposit(amount: number) {
    if (amount <= 0) throw new Error("Invalid amount");
    const event = new MoneyDeposited(this.id, amount, uuid());
    this.apply(event);
    this.uncommitted.push(event); // held until the repository persists the stream
    return event;
  }

  getUncommittedEvents() {
    return this.uncommitted;
  }
}
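Rehydration plus a new command then looks like this (a small sketch; in practice the history comes from the event store):

// Rehydrate from history, then issue a command
const history = [
  new AccountCreated('acc1', 'Alice', 0),
  new MoneyDeposited('acc1', 100, 'tx-1')
];
const account = new Account(history, 'acc1');

account.deposit(50);                          // applies and records a MoneyDeposited
console.log(account.getUncommittedEvents());  // -> the single new deposit event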
How do we persist these events? Our repository handles the heavy lifting:
// EventStoreRepository.ts
async append(streamId: string, events: DomainEvent[], expectedVersion?: number) {
  const client = await this.pool.connect();
  try {
    await client.query('BEGIN');

    const currentVersion = await this.getStreamVersion(client, streamId);
    if (expectedVersion !== undefined && currentVersion !== expectedVersion) {
      throw new ConcurrencyError(streamId, expectedVersion, currentVersion);
    }

    // Number new events after the current stream version; the unique
    // constraint on (stream_id, version) catches any race we miss here.
    let version = currentVersion;
    for (const event of events) {
      version += 1;
      await client.query(
        `INSERT INTO events(stream_id, event_type, event_data, version)
         VALUES($1, $2, $3, $4)`,
        [streamId, event.constructor.name, JSON.stringify(event), version]
      );
    }

    await client.query('COMMIT');
  } catch (err) {
    await client.query('ROLLBACK'); // never leave the transaction open
    throw err;
  } finally {
    client.release();
  }
}
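append() relies on a getStreamVersion helper that isn’t shown above; a minimal version (name and signature assumed) just reads the highest committed version for the stream:

// EventStoreRepository.ts (continued) - helper assumed by append(); PoolClient comes from 'pg'
private async getStreamVersion(client: PoolClient, streamId: string): Promise<number> {
  const result = await client.query(
    'SELECT COALESCE(MAX(version), 0) AS version FROM events WHERE stream_id = $1',
    [streamId]
  );
  return Number(result.rows[0].version);
}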
Notice the optimistic locking pattern? This prevents lost updates when multiple users modify the same account. But what happens when we have thousands of events? Rebuilding state from scratch becomes slow.
Enter snapshots - our performance saviors:
// SnapshotManager.ts
// Assumes Account also exposes getState()/getVersion() and a static
// fromSnapshot() factory, which aren't shown in the aggregate above.
async takeSnapshot(aggregateId: string) {
  const events = await eventStore.getEvents(aggregateId);
  const aggregate = new Account(events, aggregateId);
  const snapshot = {
    state: aggregate.getState(),
    version: aggregate.getVersion()
  };
  await this.storeSnapshot(aggregateId, snapshot);
}

async restore(aggregateId: string) {
  const snapshot = await this.loadSnapshot(aggregateId);
  const events = await eventStore.getEventsAfterVersion(aggregateId, snapshot.version);
  // Start from the snapshotted state, then replay only the newer events
  return Account.fromSnapshot(snapshot.state, events);
}
We store snapshots separately:
CREATE TABLE snapshots (
  stream_id VARCHAR(255) PRIMARY KEY,
  data JSONB NOT NULL,
  version INTEGER NOT NULL,
  created_at TIMESTAMPTZ DEFAULT NOW()
);
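The storeSnapshot and loadSnapshot calls above could be a simple upsert and lookup against that table; here’s a sketch (method names and the pool field are assumptions):

// SnapshotManager.ts (continued) - persistence helpers assumed above
private async storeSnapshot(streamId: string, snapshot: { state: unknown; version: number }) {
  await this.pool.query(
    `INSERT INTO snapshots(stream_id, data, version)
     VALUES($1, $2, $3)
     ON CONFLICT (stream_id) DO UPDATE SET data = $2, version = $3, created_at = NOW()`,
    [streamId, JSON.stringify(snapshot.state), snapshot.version]
  );
}

private async loadSnapshot(streamId: string) {
  const result = await this.pool.query(
    'SELECT data AS state, version FROM snapshots WHERE stream_id = $1',
    [streamId]
  );
  return result.rows[0]; // undefined if no snapshot yet; restore() would fall back to a full replay
}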
Now for projections - how do we support multiple read models?
// ProjectionHandler.ts
class AccountBalanceProjection {
  private balances: Map<string, number> = new Map();

  async handle(event: DomainEvent) {
    if (event instanceof MoneyDeposited) {
      const current = this.balances.get(event.accountId) || 0;
      this.balances.set(event.accountId, current + event.amount);
    }
    // Other event handlers...
  }

  getBalance(accountId: string) {
    return this.balances.get(accountId) || 0;
  }
}
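To fill a read model, you replay the log into the projection in order. A minimal rebuild might look like the sketch below; pool and deserialize are assumed helpers, and a production read model would usually persist to its own table with a checkpoint instead of an in-memory map:

// Rebuild the balance read model from the full event log
async function rebuildBalances(projection: AccountBalanceProjection) {
  const result = await pool.query(
    'SELECT event_type, event_data FROM events ORDER BY stream_id, version'
  );
  for (const row of result.rows) {
    // deserialize() maps the stored JSON back onto the event classes (assumed helper)
    await projection.handle(deserialize(row.event_type, row.event_data));
  }
}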
But what happens when we need to change an event’s structure? Versioning comes to the rescue:
// EventUpcaster.ts
// Note: schema_version is the event's *schema* revision, which has to be stored
// with the event (e.g. inside event_data or as an extra column) - it is not the
// stream position kept in the version column.
function upcast(event: any) {
  if (event.event_type === 'MoneyDeposited' && event.schema_version === 1) {
    return {
      ...event,
      event_data: {
        ...event.event_data,
        currency: event.event_data.currency || 'USD' // Default added in v2
      },
      schema_version: 2
    };
  }
  return event;
}
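Upcasting usually lives in the read path, so aggregates and projections only ever see the latest shape. Here’s a sketch, reusing the assumed deserialize helper from the projection example:

// EventStoreRepository.ts (continued) - read path that upcasts old rows on the fly
async getEvents(streamId: string) {
  const result = await this.pool.query(
    'SELECT * FROM events WHERE stream_id = $1 ORDER BY version',
    [streamId]
  );
  return result.rows
    .map(upcast)                                               // migrate old schemas
    .map(row => deserialize(row.event_type, row.event_data));  // back to event classes
}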
For testing, we use event-based assertions:
test('rejects overdraft', async () => {
  // withdraw() mirrors deposit() above and throws InsufficientFundsError on overdraft
  const account = new Account([], 'acc1');
  account.deposit(100);

  expect(() => account.withdraw(200))
    .toThrowError(InsufficientFundsError);

  const changes = account.getUncommittedEvents();
  expect(changes).toHaveLength(1); // Only the deposit was recorded
});
Performance tip: Batch event writes. Instead of inserting events one-by-one:
// Bulk insert instead of a per-event loop
await client.query(
  `INSERT INTO events(stream_id, event_type, event_data, version)
   SELECT * FROM unnest($1::text[], $2::text[], $3::jsonb[], $4::int[])`,
  [streamIds, eventTypes, eventDatas, versions]
);
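The four arrays just have to line up index by index. Building them from a batch might look like this (a sketch that continues append() from earlier, so streamId, events, and currentVersion are in scope):

// One array per column, in the same order as the new events
const streamIds  = events.map(() => streamId);
const eventTypes = events.map(e => e.constructor.name);
const eventDatas = events.map(e => JSON.stringify(e));
const versions   = events.map((_, i) => currentVersion + i + 1);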
Common pitfall? Forgetting idempotency in event handlers. What happens if we replay events? Handlers must tolerate duplicates. Always use event IDs for deduplication.
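One way to make a handler replay-safe is to record handled event IDs and skip duplicates. Here’s a sketch; the processed_events table and the handleOnce wrapper are assumptions, not part of the schema above:

// Deduplication sketch: apply an event only if its ID hasn't been seen before.
// In production, the insert and the apply() side effect should share one transaction.
async function handleOnce(eventId: string, apply: () => Promise<void>) {
  const result = await pool.query(
    'INSERT INTO processed_events(event_id) VALUES($1) ON CONFLICT DO NOTHING',
    [eventId]
  );
  if (result.rowCount === 0) return; // duplicate from a replay - skip
  await apply();
}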
Event sourcing shines for complex domains: financial systems, inventory management, workflow engines. But is it right for simple CRUD? Probably overkill. The sweet spot? When audit trails, temporal queries, or complex state transitions matter.
I’ve deployed this pattern in production for trading systems. The debugging superpower? Reproducing state at any point in time. Customer reported a balance discrepancy last Tuesday? Replay events up to that moment and inspect.
What about schema migrations? We never modify existing events. Instead, we add new events or use upcasters. Our event store remains append-only - that’s the golden rule.
Ready to implement? Start with a bounded context - inventory management or payment processing. You’ll gain confidence before tackling larger domains.
Found this useful? Share your event sourcing experiences in the comments! Like this post if you’d like a follow-up on scaling with Kafka projections.