I’ve been thinking a lot about how we track changes in critical systems lately. When building financial applications, traditional CRUD approaches often fall short - they overwrite history, lose audit trails, and struggle with complex state transitions. That’s what led me down the event sourcing path. Follow along as I share practical steps to build a robust event-sourced system using Node.js, TypeScript, and EventStore DB. You’ll gain tools to handle complex business domains with full auditability.
Let’s start with our foundation. Why capture every change as immutable events? Imagine being able to reconstruct your system’s state at any historical point. That level of transparency transforms how we debug and analyze systems. How might this change how you approach compliance requirements?
Project Setup Essentials
We begin with a clean TypeScript environment and EventStore running in Docker:
npm init -y
npm install @eventstore/db-client uuid zod
docker-compose up -d # Starts EventStore
Our tsconfig.json enables strict typing and decorator support - crucial for domain modeling. The Docker setup gives us a production-like event store locally in seconds.
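For reference, a minimal tsconfig.json along these lines is enough to get started; the exact option set beyond strict mode and the decorator flags is a matter of preference:
{
  "compilerOptions": {
    "target": "ES2022",
    "module": "commonjs",
    "strict": true,
    "experimentalDecorators": true,
    "emitDecoratorMetadata": true,
    "esModuleInterop": true,
    "rootDir": "src",
    "outDir": "dist"
  }
}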
Core Architecture Patterns
Here’s our project structure that separates concerns:
src/
├── domain/      # Business logic
├── application/ # Command/query handlers
├── api/         # REST endpoints
└── shared/      # Utilities
We define our base AggregateRoot class to handle event application:
import { randomUUID } from 'node:crypto';

interface DomainEvent {
  metadata: { eventId: string; eventType: string; timestamp: Date };
  data: Record<string, any>;
}

abstract class AggregateRoot {
  private _uncommittedEvents: DomainEvent[] = [];

  constructor(protected id: string) {}

  // Each aggregate decides how an event mutates its own state
  protected abstract apply(event: DomainEvent): void;

  protected addEvent(eventData: Record<string, any>, eventType: string): void {
    const event: DomainEvent = {
      metadata: { eventId: randomUUID(), eventType, timestamp: new Date() },
      data: eventData
    };
    this._uncommittedEvents.push(event);
    this.apply(event);
  }

  public loadFromHistory(events: DomainEvent[]): void {
    events.forEach(event => this.apply(event));
  }

  public getUncommittedEvents(): DomainEvent[] {
    return this._uncommittedEvents;
  }
}
This pattern ensures state changes originate from events. What would happen if we skipped this abstraction?
Concrete Domain Implementation
For a banking context, we define account events:
class Account extends AggregateRoot {
  private balance: number = 0;

  static open(accountId: string): Account {
    const account = new Account(accountId);
    account.addEvent({ accountId }, 'AccountOpened');
    return account;
  }

  deposit(amount: number): void {
    this.addEvent({ amount }, 'Deposited');
  }

  withdraw(amount: number): void {
    // Business invariant: never let the balance go negative
    if (amount > this.balance) {
      throw new Error('Insufficient funds');
    }
    this.addEvent({ amount }, 'Withdrawn');
  }

  protected apply(event: DomainEvent): void {
    switch (event.metadata.eventType) {
      case 'AccountOpened':
        this.id = event.data.accountId;
        break;
      case 'Deposited':
        this.balance += event.data.amount;
        break;
      case 'Withdrawn':
        this.balance -= event.data.amount;
        break;
    }
  }
}
Notice how state changes only happen through event application. How does this prevent invalid state transitions?
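A quick usage sketch, assuming the Account class above, shows state and the uncommitted event list always moving together:
const account = Account.open('acc_123');
account.deposit(250);

// Both the balance and the event list reflect the same two facts:
// AccountOpened followed by Deposited
console.log(account.getUncommittedEvents().map(e => e.metadata.eventType));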
EventStore Integration
Connecting to our event store:
import { EventStoreDBClient, jsonEvent } from '@eventstore/db-client';

const client = EventStoreDBClient.connectionString(
  'esdb://localhost:2113?tls=false'
);

async function saveEvents(
  streamId: string,
  events: DomainEvent[],
  expectedVersion: bigint
): Promise<void> {
  const serialized = events.map(event =>
    jsonEvent({
      type: event.metadata.eventType,
      data: event.data,
      // Metadata must be JSON-serializable, so the timestamp is stored as an ISO string
      metadata: { ...event.metadata, timestamp: event.metadata.timestamp.toISOString() }
    })
  );

  await client.appendToStream(streamId, serialized, {
    expectedRevision: expectedVersion
  });
}
We use optimistic concurrency control via expectedVersion to prevent lost updates. When versions mismatch - that is, another writer appended to the stream after we read it - EventStore rejects the append with a wrong-expected-version error, and we can reload the aggregate and retry the command.
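Reading events back is the mirror image. Here is a sketch of rehydrating an aggregate from its stream; the account-<id> stream naming and the metadata mapping are my own conventions, not something EventStore prescribes:
async function loadAccount(accountId: string): Promise<Account> {
  const history: DomainEvent[] = [];

  // Replay the stream from the beginning, mapping each recorded event
  // back into our DomainEvent shape before applying it
  for await (const resolved of client.readStream(`account-${accountId}`)) {
    if (!resolved.event) continue;
    history.push({
      metadata: {
        eventId: resolved.event.id,
        eventType: resolved.event.type,
        timestamp: new Date((resolved.event.metadata as any)?.timestamp)
      },
      data: resolved.event.data as Record<string, any>
    });
  }

  const account = new Account(accountId);
  account.loadFromHistory(history);
  return account;
}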
Projections for Read Models
We create real-time projections for fast queries:
// Continuous projection, written in EventStore's JavaScript projection syntax
fromStream('accounts')
  .when({
    $any: function (state, event) {
      // Link every account event into the 'account-summary' stream
      linkTo('account-summary', event);
    }
  });
This feeds into our read model:
// DepositedEvent is assumed to carry the event payload, its metadata, and the source streamId
class AccountSummaryProjection {
  // db stands in for whatever read-model store you use (document DB, SQL table, cache)
  async onDeposited(event: DepositedEvent): Promise<void> {
    await db.update('account_summaries', event.streamId, summary => {
      summary.balance += event.data.amount;
      summary.lastUpdated = event.metadata.timestamp;
    });
  }
}
Separating reads from writes lets us scale independently. How much latency can your business tolerate for read consistency?
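To keep that read model current, one option is a catch-up subscription that feeds events into the projection class. This is a sketch, reusing the client from earlier; the dispatch-by-event-type convention and the 'account-summary' stream name are assumptions carried over from above:
import { START } from '@eventstore/db-client';

async function runAccountSummarySubscriber(): Promise<void> {
  const projection = new AccountSummaryProjection();

  // resolveLinkTos follows the linkTo pointers back to the original events
  const subscription = client.subscribeToStream('account-summary', {
    fromRevision: START,
    resolveLinkTos: true
  });

  for await (const resolved of subscription) {
    if (!resolved.event) continue;
    if (resolved.event.type === 'Deposited') {
      await projection.onDeposited(resolved.event as any);
    }
  }
}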
Testing Strategy
We verify behavior through event assertions:
test('rejects overdraft', () => {
const account = Account.open('acc_123');
account.deposit(100);
expect(() => account.withdraw(200))
.toThrow('Insufficient funds');
const events = account.getUncommittedEvents();
expect(events).toHaveLength(2); // Only open + deposit
});
By testing emitted events rather than internal state, we focus on business outcomes.
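The same style works for replay tests: given a history of events, act, then assert the newly emitted events. A sketch, assuming the Account and DomainEvent definitions from earlier:
test('allows withdrawal after replaying deposits', () => {
  const history: DomainEvent[] = [
    { metadata: { eventId: '1', eventType: 'AccountOpened', timestamp: new Date() }, data: { accountId: 'acc_123' } },
    { metadata: { eventId: '2', eventType: 'Deposited', timestamp: new Date() }, data: { amount: 500 } }
  ];

  const account = new Account('acc_123');
  account.loadFromHistory(history);

  account.withdraw(200); // allowed, because the replayed balance is 500

  const events = account.getUncommittedEvents();
  expect(events).toHaveLength(1);
  expect(events[0].metadata.eventType).toBe('Withdrawn');
});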
Production Considerations
For deployment:
- Use persistent subscriptions for reliable event processing
- Implement exponential backoff in projection handlers (a minimal sketch follows this list)
- Monitor stream write latencies and projection lags
- Version your event schemas using parent-child streams
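On the backoff point, a minimal retry wrapper for projection handlers might look like the following; the retry count and base delay are placeholders to tune for your workload:
async function withBackoff<T>(handler: () => Promise<T>, maxRetries = 5, baseDelayMs = 100): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await handler();
    } catch (err) {
      if (attempt >= maxRetries) throw err;
      // Exponential backoff: 100ms, 200ms, 400ms, ...
      await new Promise(resolve => setTimeout(resolve, baseDelayMs * 2 ** attempt));
    }
  }
}

// Usage: await withBackoff(() => projection.onDeposited(event));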
What monitoring metrics would give you confidence in production?
I’ve walked you through key implementation details from events to projections. The real power emerges when you need to add new features - like generating quarterly statements from historical events. That’s when the investment pays off. If you found this useful, share it with colleagues facing similar architectural challenges. What other event sourcing topics would you like me to cover? Leave your thoughts in the comments below.