I’ve been thinking about how modern applications manage data changes, especially when we need a complete history of every action. That’s what brought me to event sourcing. It’s not just about where you are; it’s about how you got there. Let me show you how we can build this with Node.js, TypeScript, and PostgreSQL – a stack I’ve found remarkably effective for these systems.
Why does this matter? Imagine needing to trace every transaction in a banking app or track inventory changes in real-time. With traditional databases, you only see the current state. But what if you could replay every decision? That’s the power we’re harnessing today. How would your debugging improve if you could see every state change?
Let’s start with the foundation. Event sourcing stores state changes as immutable events. Instead of updating a balance directly, we record “Deposited $100” as an event. PostgreSQL excels here with transactional safety. Here’s how we structure events:
// Core event interface
interface DomainEvent {
  eventId: string;       // unique ID for this event
  aggregateId: string;   // which aggregate the event belongs to
  eventType: string;     // e.g. "MoneyDeposited"
  eventVersion: number;  // position in the aggregate's event stream
  timestamp: Date;
  data: Record<string, any>; // event-specific payload
}
Notice the eventVersion – it orders events within an aggregate’s stream and is crucial for handling schema changes over time. When we add new fields, versioning prevents breaking existing projections. Have you considered how your data models might evolve in five years?
Aggregates are our consistency boundaries. For a bank account, we create an aggregate root that enforces rules:
class BankAccount extends AggregateRoot {
  private balance: number = 0;

  deposit(amount: number) {
    if (amount <= 0) throw new Error("Invalid amount");
    // Record the intent; state only changes when the event is applied
    this.applyEvent("MoneyDeposited", { amount });
  }

  // Invoked for each MoneyDeposited event, both live and during replay
  private handleMoneyDeposited(event: DomainEvent) {
    this.balance += event.data.amount;
  }
}
See how the actual state change happens in handleMoneyDeposited? The deposit method just records the intent. This separation is vital. What business rules could you enforce this way?
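The AggregateRoot base class is left abstract above, so here’s a minimal sketch of how one might work – the handle-prefix dispatch convention and the uncommittedEvents buffer are my assumptions, not a fixed API:

// Minimal AggregateRoot sketch (assumed base class, not a library API)
import { randomUUID } from "crypto";

abstract class AggregateRoot {
  public uncommittedEvents: DomainEvent[] = [];
  private version = 0;

  constructor(public readonly id: string) {}

  // Create the event, apply it immediately, and queue it for persistence
  protected applyEvent(eventType: string, data: Record<string, any>) {
    const event: DomainEvent = {
      eventId: randomUUID(),
      aggregateId: this.id,
      eventType,
      eventVersion: this.version + 1,
      timestamp: new Date(),
      data,
    };
    this.apply(event);
    this.uncommittedEvents.push(event);
  }

  // Rebuild state from history; nothing gets queued for persistence
  replayEvents(events: DomainEvent[]) {
    for (const e of events) this.apply(e);
  }

  // Dispatch by convention: "MoneyDeposited" -> handleMoneyDeposited
  private apply(event: DomainEvent) {
    this.version = event.eventVersion;
    const handler = (this as any)[`handle${event.eventType}`];
    if (typeof handler === "function") handler.call(this, event);
  }
}

With this in place, calling account.deposit(100) leaves one MoneyDeposited event sitting in uncommittedEvents, ready for the save step we’ll get to shortly.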
Storing events requires careful serialization. Our PostgreSQL event store uses this schema:
CREATE TABLE events (
  event_id UUID PRIMARY KEY,
  aggregate_id UUID NOT NULL,
  event_type VARCHAR(100) NOT NULL,
  event_version INT NOT NULL,
  event_data JSONB NOT NULL,
  created_at TIMESTAMPTZ DEFAULT NOW()
);
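One practical addition I’d make here (my suggestion, not part of the original schema): a unique index that both speeds up loading an aggregate’s stream and lets the database itself reject two writers racing to append the same version – we’ll lean on that when we get to concurrency:

-- Suggested: serves per-aggregate reads and enforces one event per version
CREATE UNIQUE INDEX idx_events_aggregate_version
  ON events (aggregate_id, event_version);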
We use JSONB for efficient querying. When rebuilding an account’s state, we fetch all events for that aggregate ID and replay them:
async function getAccount(accountId: string) {
  // Fetch the full event stream and replay it to rebuild current state
  const events = await eventStore.getEvents(accountId);
  const account = new BankAccount(accountId);
  account.replayEvents(events);
  return account;
}
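The eventStore here is left abstract. As a rough sketch of what getEvents might look like with the pg driver (the pool setup and row mapping are my assumptions):

// Hypothetical getEvents implementation using the pg driver
import { Pool } from "pg";

const pool = new Pool(); // reads connection settings from PG* env vars

async function getEvents(aggregateId: string): Promise<DomainEvent[]> {
  const result = await pool.query(
    `SELECT event_id, aggregate_id, event_type, event_version, event_data, created_at
       FROM events
      WHERE aggregate_id = $1
      ORDER BY event_version ASC`,
    [aggregateId]
  );
  return result.rows.map((row) => ({
    eventId: row.event_id,
    aggregateId: row.aggregate_id,
    eventType: row.event_type,
    eventVersion: row.event_version,
    timestamp: row.created_at,
    data: row.event_data, // JSONB comes back as a parsed object
  }));
}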
Concurrency is handled through optimistic locking. When saving events, we check that the latest event version matches what we expect. If not, someone else modified the state concurrently:
async function saveEvents(events: DomainEvent[]) {
  // Optimistic locking: the first new event must directly follow the stream
  const currentVersion = await getLatestVersion(events[0].aggregateId);
  if (currentVersion !== events[0].eventVersion - 1) {
    throw new ConcurrencyError("Conflict detected");
  }
  // Proceed with insertion
}
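One caveat worth flagging: a read-then-check like this can still race between the SELECT and the INSERT. The unique index suggested earlier closes that gap at the database level – a concurrent writer hits PostgreSQL’s unique_violation (error code 23505), which we can translate into the same ConcurrencyError. A rough sketch of the insertion step with the pg driver, reusing the pool from the getEvents sketch:

// Sketch: the unique index turns a racing writer into a unique violation
async function insertEvents(events: DomainEvent[]) {
  const client = await pool.connect();
  try {
    await client.query("BEGIN");
    for (const e of events) {
      await client.query(
        `INSERT INTO events (event_id, aggregate_id, event_type, event_version, event_data)
         VALUES ($1, $2, $3, $4, $5)`,
        [e.eventId, e.aggregateId, e.eventType, e.eventVersion, e.data]
      );
    }
    await client.query("COMMIT");
  } catch (err: any) {
    await client.query("ROLLBACK");
    if (err.code === "23505") {
      throw new ConcurrencyError("Conflict detected"); // unique_violation
    }
    throw err;
  } finally {
    client.release();
  }
}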
Projections transform events into read models. For an account balance view:
class BalanceProjection {
  balances: Map<string, number> = new Map();

  processEvent(event: DomainEvent) {
    if (event.eventType === "MoneyDeposited") {
      // Fold the deposit into the running balance for this account
      const current = this.balances.get(event.aggregateId) || 0;
      this.balances.set(event.aggregateId, current + event.data.amount);
    }
    // Handle other event types...
  }
}
We can rebuild projections anytime by replaying events. This becomes invaluable when fixing bugs or adding new views. How much easier would audits be with this approach?
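The rebuild itself is just a loop over the store. A quick sketch, assuming a hypothetical getAllEvents helper that returns every event in global order:

// Hypothetical rebuild: replay the whole store through a fresh projection
async function rebuildBalances(): Promise<BalanceProjection> {
  const projection = new BalanceProjection();
  const events = await getAllEvents(); // assumed: all events, ordered by created_at
  for (const event of events) projection.processEvent(event);
  return projection;
}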
Performance optimizations include:
- Snapshots: Periodically save aggregate state (see the sketch after this list)
- Batched event loading
- Separate read/write databases
- Caching frequent projections
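Snapshots deserve a closer look, since they’re the biggest lever here. Every N events we persist the aggregate’s current state; on load we start from the snapshot and replay only the tail. A sketch – loadSnapshot, restoreFrom, and getEventsAfter are all assumed helpers, not shown earlier:

// Sketch: load from a snapshot, then replay only the newer events
async function getAccountWithSnapshot(accountId: string) {
  const snapshot = await loadSnapshot(accountId); // assumed helper
  const account = new BankAccount(accountId);
  if (snapshot) account.restoreFrom(snapshot);    // assumed method
  const tail = await getEventsAfter(accountId, snapshot?.version ?? 0);
  account.replayEvents(tail);
  return account;
}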
For error handling, we use compensating actions. Because stored events are immutable, we never delete or edit a bad one; instead, if a withdrawal fails downstream after its event was stored, we append a compensating event – say, “WithdrawalFailed” – that reverses its effect.
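Inside BankAccount, that pair might look like this (the method and event names are illustrative):

// Illustrative compensating methods inside BankAccount
reverseWithdrawal(amount: number, reason: string) {
  // Record the compensation as a new event; the original stays in the log
  this.applyEvent("WithdrawalFailed", { amount, reason });
}

private handleWithdrawalFailed(event: DomainEvent) {
  this.balance += event.data.amount; // restore the funds
}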
Testing strategies focus on:
- Command validation
- Event correctness
- Idempotency in replay
- Concurrency scenarios
test("Rejects negative deposit", () => {
const account = BankAccount.open("123", 100);
expect(() => account.deposit(-50)).toThrow();
});
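For replay idempotency, one useful check is that rebuilding from the same history always lands on the same state. Building on the AggregateRoot sketch above (getBalance is an assumed accessor):

// Sketch: replaying identical history must yield identical state
test("Replaying the same history yields the same state", () => {
  const source = new BankAccount("123");
  source.deposit(100);
  source.deposit(50);

  const rebuilt = new BankAccount("123");
  rebuilt.replayEvents(source.uncommittedEvents);

  expect(rebuilt.getBalance()).toBe(150); // assumed getBalance accessor
});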
The real beauty emerges in complex systems. We can:
- Analyze historical trends
- Debug by replaying events
- Implement undo/redo features
- Migrate systems with zero downtime
Event sourcing does add complexity. It shines when:
- Audit trails are critical
- You need temporal queries
- Multiple representations of data exist
- The domain has complex business rules
What problems could this solve in your current projects? I’d love to hear about your implementation challenges. If this guide helped, please share it with others who might benefit. Your comments and experiences enrich our community’s knowledge – let’s keep the conversation going!