I’ve been thinking about robust data architectures lately. What if we could track every change in an application like a financial ledger? This curiosity led me to build an event sourcing system with Node.js and TypeScript. Let me show you how we can implement this powerful pattern together.
Setting up our environment is straightforward. We’ll create a TypeScript project with Express and EventStoreDB:
npm init -y
npm install express @eventstore/db-client uuid
npm install -D typescript @types/node
Our tsconfig.json enables decorators for clean domain modeling:
{
  "compilerOptions": {
    "experimentalDecorators": true,
    "emitDecoratorMetadata": true
  }
}
Why choose event sourcing? Imagine needing to reconstruct state after a failure or track how data changed over time. Traditional CRUD overwrites the previous state on every update, so that history is simply gone. With event sourcing, we store state changes as immutable events. Let’s define our base event structure:
// src/core/events.ts
export interface DomainEvent {
  id: string;
  aggregateId: string;
  eventType: string;
  version: number;
  timestamp: Date;
  data: Record<string, unknown>;
}
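To make that shape concrete, here’s what one event might look like as a plain object (the interface is repeated so the snippet stands alone, and the field values are purely illustrative):

```typescript
import { randomUUID } from 'crypto';

// Repeated from src/core/events.ts so this snippet is self-contained.
interface DomainEvent {
  id: string;
  aggregateId: string;
  eventType: string;
  version: number;
  timestamp: Date;
  data: Record<string, unknown>;
}

// A hypothetical UserRegistered event instance.
const userRegistered: DomainEvent = {
  id: randomUUID(),
  aggregateId: 'user-42',
  eventType: 'UserRegistered',
  version: 1,
  timestamp: new Date(),
  data: { email: 'alice@example.com' }
};
```

Every event carries its aggregate’s identity and a version, which is what makes streams replayable later.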
Running EventStoreDB locally is simple with Docker:
# docker-compose.yml
services:
  eventstore:
    image: eventstore/eventstore:latest
    ports:
      - "2113:2113"
    environment:
      - EVENTSTORE_INSECURE=true # dev only: disables TLS and authentication
Now for the core infrastructure. Our repository handles event persistence:
// src/core/repository.ts
import { EventStoreDBClient, jsonEvent } from '@eventstore/db-client';
import { DomainEvent } from './events';

class EventRepository {
  // tls=false suits a local dev instance running without certificates
  private client = EventStoreDBClient.connectionString`esdb://localhost:2113?tls=false`;

  async save(
    streamName: string,
    events: DomainEvent[],
    expectedVersion: bigint | 'no_stream' = 'no_stream' // 'no_stream' fits first writes
  ) {
    await this.client.appendToStream(
      streamName,
      events.map((event) => jsonEvent({ type: event.eventType, data: event })),
      { expectedRevision: expectedVersion }
    );
  }
}
Notice the expectedVersion parameter? That’s optimistic locking in action. What happens if two users try to update the same record simultaneously? This prevents data corruption by rejecting conflicting changes.
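EventStoreDB performs this check server-side, but the rule itself is simple enough to illustrate in memory. Here’s a toy sketch (the class and names are mine, not part of the library): an append is rejected unless the writer’s expected version matches the stream’s current version.

```typescript
// Toy in-memory illustration of optimistic concurrency control.
class InMemoryStream {
  private events: unknown[] = [];

  append(event: unknown, expectedVersion: number): void {
    // Reject the write if another writer got there first.
    if (expectedVersion !== this.events.length) {
      throw new Error(
        `Concurrency conflict: expected version ${expectedVersion}, ` +
        `stream is at ${this.events.length}`
      );
    }
    this.events.push(event);
  }

  get version(): number {
    return this.events.length;
  }
}

// Two writers both read version 0; only the first append succeeds.
const stream = new InMemoryStream();
stream.append({ type: 'UserRegistered' }, 0); // succeeds
let conflicted = false;
try {
  stream.append({ type: 'UserRenamed' }, 0); // stale expected version
} catch {
  conflicted = true;
}
```

The second writer then reloads the stream, reapplies its command against fresh state, and retries, which is the standard recovery path for this kind of conflict.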
Let’s model a user registration flow. First, our command:
class RegisterUserCommand {
  constructor(
    public readonly email: string,
    public readonly password: string
  ) {}
}
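The aggregate below emits a UserRegisteredEvent that we haven’t defined yet. One plausible implementation of our DomainEvent interface looks like this (the exact field choices here are an assumption, and the interface is repeated so the snippet runs standalone):

```typescript
import { randomUUID } from 'crypto';

// Repeated from src/core/events.ts so this snippet is self-contained.
interface DomainEvent {
  id: string;
  aggregateId: string;
  eventType: string;
  version: number;
  timestamp: Date;
  data: Record<string, unknown>;
}

class UserRegisteredEvent implements DomainEvent {
  readonly id = randomUUID();
  readonly eventType = 'UserRegistered';
  readonly timestamp = new Date();
  readonly data: Record<string, unknown>;

  constructor(
    readonly aggregateId: string,
    email: string,
    password: string,
    readonly version = 1
  ) {
    // In production, hash the password before it lands in an immutable event.
    this.data = { email, password };
  }
}
```

Because events are immutable and replayed forever, whatever goes into `data` is permanent; that’s why sensitive values like passwords deserve extra care here.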
Then our aggregate root that processes commands and emits events:
class UserAggregate {
  private pendingEvents: DomainEvent[] = [];

  // id is public so callers can address the aggregate's stream
  constructor(public readonly id: string, private version = 0) {}

  register(email: string, password: string) {
    this.emit(new UserRegisteredEvent(this.id, email, password));
  }

  private emit(event: DomainEvent) {
    this.pendingEvents.push(event);
    this.version++;
  }

  getUncommittedEvents() {
    return this.pendingEvents;
  }
}
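One piece the aggregate above doesn’t show is rehydration: rebuilding current state by replaying the stream’s past events in order. Here’s a minimal sketch of the idea (the `fromHistory` name and the event shapes are mine, not from the library):

```typescript
// Minimal event shape for the replay sketch.
interface UserEvent {
  eventType: string;
  data: Record<string, unknown>;
}

class RehydratableUser {
  email = '';
  isActive = false;
  version = 0;

  // Rebuild current state by applying every historical event in order.
  static fromHistory(history: UserEvent[]): RehydratableUser {
    const user = new RehydratableUser();
    for (const event of history) {
      user.apply(event);
    }
    return user;
  }

  private apply(event: UserEvent): void {
    switch (event.eventType) {
      case 'UserRegistered':
        this.email = event.data.email as string;
        this.isActive = true;
        break;
      case 'UserDeactivated':
        this.isActive = false;
        break;
    }
    this.version++;
  }
}
```

Loading an aggregate is then just: read the stream, call `fromHistory`, and the object is exactly where its events left it.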
When we save these events, EventStoreDB appends them to an immutable stream. But how do we query current state? That’s where projections come in:
// src/projections/user-projection.ts
interface User {
  email: string;
  isActive: boolean;
}

class UserProjection {
  private users: Map<string, User> = new Map();

  applyUserRegistered(event: UserRegisteredEvent) {
    this.users.set(event.aggregateId, {
      email: event.data.email as string,
      isActive: true
    });
  }

  getUser(id: string) {
    return this.users.get(id);
  }
}
Projections transform our event stream into read-optimized views. Need different data formats for various services? Create multiple projections from the same event stream.
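For instance, alongside the per-user lookup above, a reporting service might only care about aggregate counts. A hypothetical second projection fed by the very same UserRegistered events:

```typescript
// A second read model over the same event stream:
// instead of a per-user lookup, it counts signups per email domain.
class SignupsByDomainProjection {
  private counts: Map<string, number> = new Map();

  applyUserRegistered(event: { data: { email: string } }): void {
    const domain = event.data.email.split('@')[1] ?? 'unknown';
    this.counts.set(domain, (this.counts.get(domain) ?? 0) + 1);
  }

  countFor(domain: string): number {
    return this.counts.get(domain) ?? 0;
  }
}
```

Both projections stay independent: each consumes the stream at its own pace and can be rebuilt from scratch by replaying events from the beginning.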
For our REST API, we’ll expose simple endpoints:
// src/api.ts
import { v4 as uuid } from 'uuid';

app.post('/users', async (req, res) => {
  const command = new RegisterUserCommand(req.body.email, req.body.password);
  const user = new UserAggregate(uuid());
  user.register(command.email, command.password);
  await repository.save(`user-${user.id}`, user.getUncommittedEvents());
  res.status(201).send({ id: user.id });
});
What about schema changes? Event versioning handles evolving business requirements. When adding new fields, we can upcast old events:
function upcastV1Event(event: any): UserRegisteredEventV2 {
  return {
    ...event,
    data: {
      ...event.data,
      registrationDate: new Date('2020-01-01') // default for events written before the field existed
    }
  };
}
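Exercising the upcaster on a stored v1 event shows old records gaining the new field on read (the upcaster and a sketch of the v2 type are repeated here so the snippet runs standalone):

```typescript
// Assumed shape of the v2 event for this example.
interface UserRegisteredEventV2 {
  eventType: string;
  data: { email: string; registrationDate: Date };
}

// Same upcaster as above, repeated for a self-contained example.
function upcastV1Event(event: any): UserRegisteredEventV2 {
  return {
    ...event,
    data: {
      ...event.data,
      registrationDate: new Date('2020-01-01') // default for pre-field events
    }
  };
}

// A v1 event as it sits in the store: no registrationDate.
const v1Event = { eventType: 'UserRegistered', data: { email: 'old@example.com' } };
const v2Event = upcastV1Event(v1Event);
```

The stored events never change; upcasting happens on the read path, which is what keeps the log immutable while the schema evolves.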
Testing is crucial. We use Jest to verify our business logic:
test('user registration emits correct event', () => {
  const user = new UserAggregate('user-1');
  user.register('user@example.com', 'secure123');
  const events = user.getUncommittedEvents();
  expect(events[0].eventType).toBe('UserRegistered');
});
Performance optimization? EventStoreDB handles millions of events efficiently. For read-heavy applications, consider materialized views updated asynchronously.
Common pitfalls? Avoid putting business logic in projections - they should remain simple transformers. Also, carefully consider your stream partitioning strategy. Too many small streams can impact performance, while too few might cause contention.
I’ve seen event sourcing transform applications from opaque data black boxes into transparent, auditable systems. The initial complexity pays dividends in maintainability and business insight. What problems could you solve with complete historical data?
Try implementing this pattern in your next project. I’d love to hear about your experiences - share your thoughts in the comments below!