I’ve been working on scalable systems for years, and the challenge of maintaining data consistency across distributed services kept nagging at me. That’s what led me to explore event-driven architectures with Node.js, EventStore, and Docker. In this article, I’ll walk you through building a robust system that handles complex workflows while keeping everything in sync.
Event sourcing changes how we think about data storage. Instead of just saving the current state, we record every change as an immutable event. This approach gives us a complete history of all actions, making it perfect for systems where audit trails and data integrity matter. Have you ever needed to reconstruct past states or debug complex user journeys? Event sourcing makes this possible.
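To make that concrete, here's a minimal sketch of state reconstruction: the current state is nothing more than a fold over the event history. The event shapes here are simplified placeholders, not the full definitions we'll use later.
// Rebuild an order's current state by replaying its events in order
type OrderEvent =
  | { type: 'OrderCreated'; total: number }
  | { type: 'ItemAdded'; amount: number }
  | { type: 'OrderCancelled' };

interface OrderState {
  total: number;
  status: 'open' | 'cancelled';
}

function replay(events: OrderEvent[]): OrderState {
  return events.reduce<OrderState>((state, event) => {
    switch (event.type) {
      case 'OrderCreated':
        return { total: event.total, status: 'open' };
      case 'ItemAdded':
        return { ...state, total: state.total + event.amount };
      case 'OrderCancelled':
        return { ...state, status: 'cancelled' };
      default:
        return state;
    }
  }, { total: 0, status: 'open' });
}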
CQRS takes this further by separating read and write operations. Commands handle state changes, while queries fetch data. This separation allows us to scale each part independently and optimize performance based on specific needs. Why force one database to handle both heavy writes and complex reads when they can be specialized?
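In code, the separation boils down to two narrow interfaces; this is an illustrative sketch rather than something a framework imposes.
// Write side: commands change state by appending events
interface CommandHandler<TCommand> {
  handle(command: TCommand): Promise<void>;
}

// Read side: queries hit a denormalized read model and never touch the event store directly
interface QueryHandler<TQuery, TResult> {
  execute(query: TQuery): Promise<TResult>;
}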
Let me show you how to set up a practical order management system. We’ll start with the project structure and dependencies.
mkdir event-driven-orders
cd event-driven-orders
npm init -y
Our package.json pulls in the essentials: the EventStore client for event handling, Express for the HTTP layer, uuid for identifiers, and Joi for validation. We're writing the application in TypeScript for better type safety and a smoother development experience, so the compiler and type definitions live in devDependencies (not shown here).
{
  "dependencies": {
    "node-eventstore-client": "^0.2.19",
    "express": "^4.18.2",
    "uuid": "^9.0.0",
    "joi": "^17.9.2"
  }
}
Docker simplifies our infrastructure setup. Here’s a compose file that spins up EventStore, Redis for caching, and PostgreSQL for read models.
services:
  eventstore:
    image: eventstore/eventstore:23.6.0-buster-slim
    environment:
      - EVENTSTORE_INSECURE=true
      - EVENTSTORE_ENABLE_EXTERNAL_TCP=true # exposes the legacy TCP interface used by our client library
    ports:
      - "1113:1113" # TCP client protocol
      - "2113:2113" # HTTP API and admin UI
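The same file can declare the Redis and PostgreSQL services mentioned above; here's a sketch with placeholder credentials meant only for local development.
  # added under the same services: key as eventstore
  redis:
    image: redis:7-alpine
    ports:
      - "6379:6379"

  postgres:
    image: postgres:15-alpine
    environment:
      - POSTGRES_USER=orders
      - POSTGRES_PASSWORD=orders # placeholder credentials, local development only
      - POSTGRES_DB=read_models
    ports:
      - "5432:5432"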
The core of our system revolves around events and commands. Every action starts as a command that gets validated before producing events. How do we ensure these events are properly structured and versioned?
interface BaseEvent {
  eventId: string;
  eventType: string;
  aggregateId: string;
  timestamp: Date;
  data: any;
}

interface OrderCreatedEvent extends BaseEvent {
  eventType: 'OrderCreated';
  data: {
    customerId: string;
    items: OrderItem[];
  };
}

// Referenced above; the exact shape is an assumption for this example
interface OrderItem {
  productId: string;
  quantity: number;
  price: number;
}
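Here's one way commands and events meet, using the joi and uuid packages from our dependencies. The schema and helper names are my own, so treat this as a sketch rather than a fixed convention.
import Joi from 'joi';
import { v4 as uuidv4 } from 'uuid';

const createOrderSchema = Joi.object({
  customerId: Joi.string().uuid().required(),
  items: Joi.array().items(
    Joi.object({
      productId: Joi.string().required(),
      quantity: Joi.number().integer().min(1).required(),
      price: Joi.number().positive().required(),
    })
  ).min(1).required(),
});

// Validate the incoming command, then turn it into an OrderCreatedEvent
function handleCreateOrder(command: unknown): OrderCreatedEvent {
  const { error, value } = createOrderSchema.validate(command);
  if (error) {
    throw new Error(`Invalid CreateOrder command: ${error.message}`);
  }
  return {
    eventId: uuidv4(),
    eventType: 'OrderCreated',
    aggregateId: uuidv4(), // identifier of the new order
    timestamp: new Date(),
    data: { customerId: value.customerId, items: value.items },
  };
}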
Connecting to EventStore requires careful error handling and connection management. I’ve found that wrapping the client in a service class makes the code more maintainable and testable.
class EventStoreService {
  private connection: EventStoreNodeConnection;

  async saveEvents(streamName: string, events: BaseEvent[]): Promise<void> {
    // createEventData (not shown) wraps each domain event in the client's JSON event envelope
    const eventDataArray = events.map(event => this.createEventData(event));
    // ExpectedVersion.Any skips optimistic concurrency checks; pass an explicit version to detect conflicting writes
    await this.connection.appendToStream(streamName, ExpectedVersion.Any, eventDataArray);
  }
}
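The connection itself is opened once at startup. This is only a sketch against the client's TCP API; the endpoint matches the compose file above, and reconnection behaviour is left to the client's settings.
import * as esClient from 'node-eventstore-client';

// Open a single connection at startup and surface failures early
async function connectToEventStore() {
  const connection = esClient.createConnection({}, 'tcp://localhost:1113');

  connection.on('closed', reason => {
    console.error(`EventStore connection closed: ${reason}`);
  });

  await connection.connect();
  // Wait until the client reports it is actually connected
  await new Promise<void>(resolve => connection.once('connected', () => resolve()));
  return connection;
}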
Event handlers process these events and update read models. For instance, when an order is confirmed, we might update a PostgreSQL table for fast querying and cache the result in Redis. What happens when multiple services need to react to the same event?
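Before answering that, here's roughly what a single projector for the order-confirmed case might look like. Note that pg and ioredis aren't in the dependency list above, and the table and key names are illustrative.
import { Pool } from 'pg';
import Redis from 'ioredis';

// Assumed setup: connection strings come from the environment
const pool = new Pool({ connectionString: process.env.DATABASE_URL });
const redis = new Redis(process.env.REDIS_URL ?? 'redis://localhost:6379');

async function onOrderConfirmed(event: BaseEvent): Promise<void> {
  const { aggregateId, timestamp } = event;

  // Update the relational read model used for queries
  await pool.query(
    'UPDATE orders SET status = $1, confirmed_at = $2 WHERE order_id = $3',
    ['confirmed', timestamp, aggregateId]
  );

  // Cache the latest view for fast lookups, with a one-hour TTL
  const result = await pool.query('SELECT * FROM orders WHERE order_id = $1', [aggregateId]);
  await redis.set(`order:${aggregateId}`, JSON.stringify(result.rows[0]), 'EX', 3600);
}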
Persistent subscriptions in EventStore ensure reliable event delivery to multiple consumers. This pattern enables different parts of our system to react to events without tight coupling.
async setupPersistentSubscription(stream: string, group: string): Promise<void> {
  // The subscription group must already exist (created once via createPersistentSubscription
  // or the admin UI) before any consumer can connect to it.
  await this.connection.connectToPersistentSubscription(
    stream,
    group,
    this.handleEvent,
    this.handleSubscriptionDropped
  );
}
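Continuing the EventStoreService class, the two callbacks might look like this. The exact client types are left out, and the sketch assumes auto-acknowledgement stays enabled, so no explicit acknowledge call is shown.
// eventAppeared callback: parse the JSON payload and hand it to a projector
private handleEvent(subscription: any, resolvedEvent: any): void {
  const recorded = resolvedEvent.event;
  if (!recorded || !recorded.isJson) return;

  const payload = JSON.parse(recorded.data.toString());
  console.log(`Received ${recorded.eventType} on ${recorded.eventStreamId}`, payload);
  // ...dispatch to the appropriate read-model projector here...
}

// subscriptionDropped callback: log and let a supervisor reconnect with backoff
private handleSubscriptionDropped(subscription: any, reason: string, error?: Error): void {
  console.error(`Persistent subscription dropped (${reason})`, error);
}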
Deployment with Docker ensures consistency across environments. We package our Node.js application alongside EventStore and other dependencies. How do we handle database migrations and ensure the system starts in the correct order? In practice, Compose health checks with depends_on conditions control startup order, and read-model migrations run as a one-off step before the API starts accepting traffic.
Event versioning is crucial for long-lived systems. When business requirements change, we need to handle multiple event versions gracefully. I add version numbers to events and use upcasting to transform old events to new formats.
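An upcaster can sit right where events are deserialized. This sketch assumes, purely as an example, that version 1 of OrderCreated lacked a currency field that version 2 introduces.
interface VersionedEvent extends BaseEvent {
  version: number;
}

// Normalize legacy OrderCreated events to the current shape before handlers see them
function upcastOrderCreated(event: VersionedEvent): VersionedEvent {
  if (event.eventType !== 'OrderCreated' || event.version >= 2) {
    return event;
  }
  return {
    ...event,
    version: 2,
    data: {
      ...event.data,
      currency: 'USD', // assumed default for events written before the field existed
    },
  };
}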
Eventual consistency means that read models might lag behind write operations. We handle this by designing user interfaces that don’t assume immediate consistency and implementing retry mechanisms for failed event processing.
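On the retry side, a small exponential backoff wrapper around event processing goes a long way. This is a generic sketch, not tied to any particular library.
// Retry a flaky async operation with exponential backoff: 100ms, 200ms, 400ms, ...
async function withRetry<T>(fn: () => Promise<T>, attempts = 5, baseDelayMs = 100): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      await new Promise(resolve => setTimeout(resolve, baseDelayMs * 2 ** i));
    }
  }
  throw lastError;
}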
Testing this architecture requires mocking external dependencies and verifying event sequences. I use Jest for unit tests and Docker Compose for integration testing, ensuring our event handlers work correctly under various scenarios.
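A unit test can treat the command handler as a pure function from command to events. This sketch exercises the hypothetical handleCreateOrder helper from earlier.
// Jest: assert that a valid command produces the expected event
describe('handleCreateOrder', () => {
  it('emits an OrderCreated event for a valid command', () => {
    const event = handleCreateOrder({
      customerId: '0a8f8a46-4a47-4a24-93a6-1d9be6e2a6a5',
      items: [{ productId: 'sku-42', quantity: 2, price: 9.99 }],
    });

    expect(event.eventType).toBe('OrderCreated');
    expect(event.data.items).toHaveLength(1);
    expect(event.eventId).toBeDefined();
  });
});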
Monitoring and logging are essential for production systems. We need to track event processing times, error rates, and system health. Winston helps with structured logging, while metrics can be exported to monitoring tools.
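A structured logger takes only a few lines once winston is added to the dependencies.
import winston from 'winston';

// Structured JSON logs so processing times and error rates are machine-readable
const logger = winston.createLogger({
  level: 'info',
  format: winston.format.combine(winston.format.timestamp(), winston.format.json()),
  transports: [new winston.transports.Console()],
});

logger.info('event_processed', { eventType: 'OrderCreated', durationMs: 12 });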
Building this system taught me valuable lessons about designing for failure and planning for scale. Every component should be independently deployable and resilient to outages.
What surprised me most was how event sourcing simplifies debugging. With a complete event history, I can replay scenarios and identify exactly where things went wrong.
I’d love to hear about your experiences with distributed systems. Have you faced similar challenges with data consistency? What patterns worked best for your use cases?
If this article helped you understand event-driven architectures, please share it with your team or colleagues. Your comments and feedback help improve future content, so don’t hesitate to join the conversation below.