
Event Sourcing with Node.js, TypeScript, and EventStore: Complete Implementation Guide 2024

I’ve been thinking a lot lately about how we build systems that truly remember. Most applications treat data as a snapshot—a current state that overwrites what came before. But what if we could build software that never forgets? That’s why I want to share my experience with event sourcing.

Event sourcing changes how we think about application state. Instead of storing just the current data, we capture every change as an immutable event. These events become the single source of truth. Have you ever wondered what your system looked like yesterday at 3 PM? With event sourcing, you can know exactly.

Let me show you how to set this up with Node.js and TypeScript. We’ll use EventStoreDB as our event store—it’s built specifically for this pattern.

First, let’s create our project structure:

mkdir event-sourcing-app
cd event-sourcing-app
npm init -y
npm install typescript @eventstore/db-client uuid
npm install -D @types/node @types/uuid

Now, let’s define our core event interface:

interface DomainEvent {
  id: string;
  aggregateId: string;
  version: number;
  type: string;
  data: Record<string, any>;
  timestamp: Date;
}

Every state change in our system will be represented as one of these events. But how do we actually use these events to build current state?

The magic happens through aggregates. An aggregate is a cluster of domain objects that can be treated as a single unit. Here’s a simple example:

class UserAccount extends BaseAggregate {
  // Definite assignment (!) because these fields are set by event handlers, not the constructor.
  private email!: string;
  private status!: 'active' | 'suspended';

  constructor(id: string) {
    super(id);
  }

  createAccount(email: string) {
    this.apply(new AccountCreated(this.id, email));
  }

  private whenAccountCreated(event: AccountCreated) {
    this.email = event.email;
    this.status = 'active';
  }
}

Notice how we’re not directly setting the email—we’re applying an event. The event handler (whenAccountCreated) actually updates the state. This separation is crucial.
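
The snippet above leans on a BaseAggregate base class and an AccountCreated event that I haven't shown yet. Neither comes from a library; here is a minimal sketch of the shape the rest of this post assumes, using convention-based routing from an event's type to a when<Type> handler:

import { v4 as uuid } from 'uuid';

// One possible AccountCreated event: domain fields live both on the object
// (for the when-handlers) and in data (for serialization to the event store).
class AccountCreated implements DomainEvent {
  id = uuid();
  type = 'AccountCreated';
  version = 0;                      // assigned by the aggregate when applied
  timestamp = new Date();
  data: Record<string, any>;

  constructor(public aggregateId: string, public email: string) {
    this.data = { aggregateId, email };
  }
}

// A minimal BaseAggregate sketch: routes events to when<Type> handlers,
// tracks the stream revision, and queues uncommitted events for saving.
abstract class BaseAggregate {
  version = -1;                     // revision of the last applied event; -1 = new aggregate
  private uncommitted: DomainEvent[] = [];

  constructor(public readonly id: string) {}

  protected apply(event: DomainEvent): void {
    this.route(event);
    this.version++;
    event.version = this.version;   // per-stream sequence number
    this.uncommitted.push(event);
  }

  // Replays stored events to rebuild state without re-queuing them.
  loadFromHistory(events: DomainEvent[]): void {
    for (const event of events) {
      this.route(event);
      this.version++;
    }
  }

  getUncommittedEvents(): DomainEvent[] {
    return [...this.uncommitted];
  }

  markEventsAsCommitted(): void {
    this.uncommitted = [];
  }

  private route(event: DomainEvent): void {
    const handler = (this as any)[`when${event.type}`];
    if (typeof handler === 'function') handler.call(this, event);
  }
}

The version field starts at -1 and tracks the revision of the last applied event, which lines up with EventStoreDB's zero-based stream revisions and keeps the concurrency check later in this post simple.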

Now, let’s connect to EventStoreDB:

import { EventStoreDBClient } from '@eventstore/db-client';

const client = EventStoreDBClient.connectionString(
  'esdb://localhost:2113?tls=false'
);

Storing events is straightforward:

import { jsonEvent, ANY, NO_STREAM } from '@eventstore/db-client';

async function appendEvents(
  streamName: string,
  events: DomainEvent[],
  expectedRevision: bigint | typeof ANY | typeof NO_STREAM = ANY
) {
  // The client expects EventData built with jsonEvent(), not plain objects.
  const eventData = events.map(event =>
    jsonEvent({
      type: event.type,
      data: {
        ...event.data,
        eventId: event.id,
        timestamp: event.timestamp.toISOString()
      }
    })
  );

  // ANY skips the concurrency check; pass a revision to enforce it (see below).
  await client.appendToStream(streamName, eventData, { expectedRevision });
}

But what about reading events back? How do we reconstruct our aggregates?

async function loadAggregate(aggregateId: string, aggregate: BaseAggregate) {
  const history: DomainEvent[] = [];
  // readStream returns an async iterable of resolved events, not an array.
  for await (const { event } of client.readStream(`user-${aggregateId}`)) {
    if (event) history.push(toDomainEvent(aggregateId, event));
  }
  aggregate.loadFromHistory(history);
}
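
The loadAggregate function uses a small toDomainEvent helper. That name is mine, not part of @eventstore/db-client; it just maps a stored record back into the DomainEvent shape our aggregate handlers expect:

// Hypothetical helper: rehydrate a stored record into our DomainEvent shape.
// Domain fields are spread to the top level so handlers like whenAccountCreated
// can keep reading event.email.
function toDomainEvent(aggregateId: string, event: any): DomainEvent {
  const data = event.data as Record<string, any>;
  return {
    ...data,
    id: data.eventId,
    aggregateId,
    version: Number(event.revision),
    type: event.type,
    data,
    timestamp: new Date(data.timestamp)
  };
}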

This ability to rebuild state from events is powerful. It means we can create new read models or fix bugs by replaying events. Have you ever needed to add a new reporting feature that required historical data? Event sourcing makes this natural.

Projections are where things get interesting. They transform events into read-optimized views:

async function buildUserListView() {
  const users = new Map();

  // readAll also returns an async iterable; check the type before touching data.
  for await (const { event } of client.readAll()) {
    if (event && event.type === 'AccountCreated') {
      const data = event.data as Record<string, any>;
      users.set(data.aggregateId, {
        id: data.aggregateId,
        email: data.email,
        createdAt: data.timestamp
      });
    }
  }

  return Array.from(users.values());
}
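
Rebuilding the whole view on every request won't scale, so in practice you keep the read model updated continuously. Here is one way that might look with a catch-up subscription over all streams; treat it as a sketch, since a real projection also needs to persist a checkpoint so it can resume where it left off:

import { excludeSystemEvents, START } from '@eventstore/db-client';

// Sketch: keep the user list up to date from a live subscription.
async function runUserListProjection(users: Map<string, any>) {
  const subscription = client.subscribeToAll({
    fromPosition: START,
    filter: excludeSystemEvents()
  });

  for await (const { event } of subscription) {
    if (event && event.type === 'AccountCreated') {
      const data = event.data as Record<string, any>;
      users.set(data.aggregateId, {
        id: data.aggregateId,
        email: data.email,
        createdAt: data.timestamp
      });
    }
  }
}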

Concurrency control is important too. Event sourcing uses optimistic concurrency:

async function saveAggregate(aggregate: BaseAggregate) {
  const events = aggregate.getUncommittedEvents();
  if (events.length === 0) return;

  // Revision of the last event we loaded; NO_STREAM (imported earlier) if the aggregate is new.
  const committedRevision = aggregate.version - events.length;
  await appendEvents(
    `user-${aggregate.id}`,
    events,
    committedRevision < 0 ? NO_STREAM : BigInt(committedRevision)
  );
  aggregate.markEventsAsCommitted();
}

This ensures we don’t have conflicting changes. If someone else modified the aggregate since we loaded it, the append operation will fail.
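
In recent versions of @eventstore/db-client that failure surfaces as a WrongExpectedVersionError, which you can catch and retry. A minimal retry sketch (the retry policy itself is my own, not anything prescribed by the library):

import { WrongExpectedVersionError } from '@eventstore/db-client';

// Sketch: retry a unit of work a few times when an append hits a version conflict.
async function withRetry(attempts: number, work: () => Promise<void>) {
  for (let i = 0; i < attempts; i++) {
    try {
      await work();
      return;
    } catch (err) {
      if (!(err instanceof WrongExpectedVersionError) || i === attempts - 1) {
        throw err;
      }
      // Someone else appended first; loop again so work() can reload and retry.
    }
  }
}

Here work() should load the aggregate fresh, apply the command, and call saveAggregate, so every attempt operates on the latest state.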

Event versioning is another consideration. What happens when we need to change an event’s structure?

// Version 1 of our event
interface AccountCreatedV1 {
  email: string;
}

// Version 2 adds a username field
interface AccountCreatedV2 {
  email: string;
  username: string;
}

// We can write a migrator
function migrateV1ToV2(event: AccountCreatedV1): AccountCreatedV2 {
  return {
    email: event.email,
    username: event.email.split('@')[0]
  };
}
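
Migrations like this usually run when events are read back, before they reach an aggregate or projection. A hypothetical upcast step could look like this:

// Hypothetical upcaster: bring a stored payload up to the latest schema.
function upcast(type: string, data: Record<string, any>): Record<string, any> {
  if (type === 'AccountCreated' && data.username === undefined) {
    return { ...data, ...migrateV1ToV2(data as AccountCreatedV1) };
  }
  return data;
}

Calling upcast inside toDomainEvent (or the projection loop) means the rest of the code only ever sees the latest event shape.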

Testing event-sourced systems requires a different approach. We test that commands produce the correct events:

test('create account command produces AccountCreated event', () => {
  const command = new CreateAccount('[email protected]');
  const result = handleCommand(command);
  
  expect(result.events).toHaveLength(1);
  expect(result.events[0].type).toBe('AccountCreated');
});
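
If you'd rather not introduce a command bus just for tests, you can exercise the aggregate directly with the classes defined earlier; the expectations stay the same:

test('createAccount applies an AccountCreated event', () => {
  const account = new UserAccount('user-1');
  account.createAccount('[email protected]');

  const events = account.getUncommittedEvents();
  expect(events).toHaveLength(1);
  expect(events[0].type).toBe('AccountCreated');
});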

In production, you’ll want to consider snapshotting for aggregates with long event histories. Snapshots capture the state at a point in time, so you don’t need to replay all events:

async function saveSnapshot(aggregateId: string, version: number, state: any) {
  // jsonEvent (imported earlier) wraps the payload in the EventData format the client expects.
  await client.appendToStream(`snapshot-${aggregateId}`, [
    jsonEvent({
      type: 'Snapshot',
      data: { state, version }
    })
  ]);
}
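
Loading then works the other way around: read the most recent snapshot (the snapshot stream read backwards, one event), restore state from it, and replay only the events recorded after it. A sketch, assuming the saveSnapshot format above:

import { BACKWARDS, END } from '@eventstore/db-client';

// Sketch: fetch the latest snapshot for an aggregate, if one exists.
async function loadLatestSnapshot(aggregateId: string) {
  try {
    const read = client.readStream(`snapshot-${aggregateId}`, {
      direction: BACKWARDS,
      fromRevision: END,
      maxCount: 1
    });
    for await (const { event } of read) {
      if (event) return event.data as { state: any; version: number };
    }
  } catch {
    // No snapshot stream yet; fall back to replaying from the start.
  }
  return undefined;
}

From there you restore the aggregate's fields from state, set its version, and replay only the newer events by passing fromRevision: BigInt(version + 1) to readStream.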

Event sourcing isn’t just about technical implementation—it changes how we think about our domain. It forces us to model state changes explicitly. Every business decision becomes an event. Can you see how this might make your business logic clearer?

I’ve found that teams using event sourcing tend to have better discussions about their domain. The events become the language they use to describe what the system does.

Remember that event sourcing works particularly well with CQRS (Command Query Responsibility Segregation). The write side handles commands and produces events, while the read side consumes events to build query-optimized views.
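
To make that split concrete, the write side in this post boils down to small command handlers built from the functions we already have: load (or create) the aggregate, call a command method, and save the resulting events. A sketch:

// Sketch of write-side command handlers built from the pieces above.
async function handleCreateAccount(userId: string, email: string) {
  const account = new UserAccount(userId);
  account.createAccount(email);
  await saveAggregate(account);
}

// For commands against an existing account: load, mutate, save.
async function handleExistingAccountCommand(
  userId: string,
  execute: (account: UserAccount) => void
) {
  const account = new UserAccount(userId);
  await loadAggregate(userId, account);
  execute(account);                 // e.g. a hypothetical suspend() command method
  await saveAggregate(account);
}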

If you’re building systems where audit trails matter, where you need temporal queries, or where you want to be able to reconstruct state for debugging, event sourcing is worth considering. It does add complexity, but the benefits in traceability and flexibility can be significant.

What challenges have you faced with traditional data storage that event sourcing might help solve? I’d love to hear your thoughts in the comments below. If you found this helpful, please share it with others who might benefit from this approach.
