Recently, I faced a critical challenge in our e-commerce platform—services were tightly coupled, making changes risky and deployments painful. Scaling individual components felt like solving a jigsaw puzzle blindfolded. That frustration sparked my journey into type-safe event-driven systems. Today, I’ll share how we transformed our architecture using TypeScript and RabbitMQ. Stick around—you might find solutions to problems you didn’t know you had.
Event-driven architecture fundamentally changes how services communicate. Instead of direct API calls, services emit events when state changes occur. Others listen and react. This approach eliminates brittle dependencies. But how do we prevent event chaos? TypeScript’s type system becomes our guardrail. Consider this event definition:
// src/types/events.ts
export interface OrderCreatedData {
  customerId: string;
  items: Array<{
    productId: string;
    quantity: number;
    price: number;
  }>;
}

export interface EventTypeMap {
  'order.created': OrderCreatedData;
}

export type TypedDomainEvent<T extends keyof EventTypeMap> = {
  eventType: T;
  data: EventTypeMap[T];
  version: 1;
  timestamp: Date;
};
Notice how TypedDomainEvent enforces strict event shapes. Attempt to emit an order.created event with a missing customerId? TypeScript blocks it during compilation. This catches errors before runtime. But why stop at static types? Let’s make our runtime just as safe.
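The compiler can’t see payloads that arrive over the wire, though, so at the consuming edge we validate messages before trusting their shape. Here is a minimal hand-written type guard as a sketch; the isOrderCreatedData and parseOrderCreated helpers are illustrative, and a schema validation library would serve the same purpose:

// src/types/guards.ts (illustrative sketch)
import type { OrderCreatedData } from './events';

export function isOrderCreatedData(value: unknown): value is OrderCreatedData {
  if (typeof value !== 'object' || value === null) return false;
  const v = value as Record<string, unknown>;
  if (typeof v.customerId !== 'string') return false;

  const items = v.items;
  if (!Array.isArray(items)) return false;

  return items.every((item) => {
    if (typeof item !== 'object' || item === null) return false;
    const i = item as Record<string, unknown>;
    return (
      typeof i.productId === 'string' &&
      typeof i.quantity === 'number' &&
      typeof i.price === 'number'
    );
  });
}

// Reject malformed messages before they ever reach a handler
export function parseOrderCreated(raw: string): OrderCreatedData {
  const parsed = JSON.parse(raw);
  if (!isOrderCreatedData(parsed?.data)) {
    throw new Error('order.created payload failed runtime validation');
  }
  return parsed.data;
}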
For event storage, we implemented version-aware streams:
// src/events/event-store.ts
// DomainEvent is the base event shape (eventType, data, timestamp)
// carrying a per-stream, monotonically increasing version number.
class EventStore {
  private streams: Map<string, DomainEvent[]> = new Map();

  async append(streamId: string, newEvent: DomainEvent): Promise<void> {
    const existingEvents = this.streams.get(streamId) || [];
    const lastVersion = existingEvents.slice(-1)[0]?.version || 0;

    if (newEvent.version !== lastVersion + 1) {
      throw new Error(`Version conflict: Expected ${lastVersion + 1}, got ${newEvent.version}`);
    }

    this.streams.set(streamId, [...existingEvents, newEvent]);
  }
}
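To make the conflict check concrete, here is a usage sketch (the stream ID and payloads are made up for illustration):

// Illustrative usage of the event store
const store = new EventStore();

await store.append('order-42', {
  eventType: 'order.created',
  version: 1,
  data: { customerId: 'c-1', items: [{ productId: 'p-1', quantity: 2, price: 9.99 }] },
  timestamp: new Date(),
});

// A second writer appending with a stale version is rejected:
await store.append('order-42', {
  eventType: 'order.paid',
  version: 1, // expected 2 -> throws "Version conflict: Expected 2, got 1"
  data: { orderId: 'order-42' },
  timestamp: new Date(),
});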
Version conflicts prevent data corruption when multiple services modify the same entity. But what about distributing these events reliably? This is where RabbitMQ shines.
RabbitMQ handles message delivery guarantees and routing. Our implementation uses topic exchanges for precise routing:
// src/events/event-bus.ts
import type { Channel, ConsumeMessage } from 'amqplib';

class RabbitMQEventBus {
  // Channel from an established connection (assuming the standard amqplib client)
  constructor(private channel: Channel) {}

  async publish(event: DomainEvent): Promise<void> {
    const message = Buffer.from(JSON.stringify(event));
    this.channel.publish(
      'domain_events',
      event.eventType, // e.g., 'order.paid'
      message,
      { persistent: true } // Survive broker restarts
    );
  }

  async subscribe(eventType: string, handler: (event: any) => void): Promise<void> {
    const queue = await this.channel.assertQueue('', { exclusive: true });
    await this.channel.bindQueue(queue.queue, 'domain_events', eventType);
    await this.channel.consume(queue.queue, (msg: ConsumeMessage | null) => {
      if (msg) {
        handler(JSON.parse(msg.content.toString()));
        this.channel.ack(msg);
      }
    });
  }
}
Notice the persistent: true flag? That ensures events survive broker restarts. But what happens when a handler fails? We added dead-letter exchanges for automatic retries. Failed messages route to a quarantine queue after three attempts.
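The retry plumbing itself is plain RabbitMQ topology rather than application code. Below is a sketch of how such wiring can look with amqplib; the exchange and queue names, the five-second retry delay, and reading the attempt count from the x-death header are illustrative choices, not a description of our exact production setup:

// Illustrative retry/DLX wiring (names and limits are examples)
import type { Channel, ConsumeMessage } from 'amqplib';

async function setupRetryTopology(channel: Channel): Promise<void> {
  await channel.assertExchange('domain_events', 'topic', { durable: true });
  await channel.assertExchange('domain_events.retry', 'topic', { durable: true });

  // Work queue: rejected messages dead-letter into the retry exchange
  await channel.assertQueue('order.handlers', {
    durable: true,
    deadLetterExchange: 'domain_events.retry',
  });

  // Retry queue: holds messages for 5s, then dead-letters them back to the main exchange
  await channel.assertQueue('order.handlers.retry', {
    durable: true,
    messageTtl: 5000,
    deadLetterExchange: 'domain_events',
  });
  await channel.bindQueue('order.handlers.retry', 'domain_events.retry', '#');

  // Quarantine queue for messages that exhausted their attempts
  await channel.assertQueue('domain_events.quarantine', { durable: true });
}

// Consumer side: dead-letter on failure, quarantine after three attempts
function handleWithRetry(
  channel: Channel,
  msg: ConsumeMessage,
  handler: (event: unknown) => void
): void {
  try {
    handler(JSON.parse(msg.content.toString()));
    channel.ack(msg);
  } catch {
    const deaths = (msg.properties.headers?.['x-death'] as Array<{ count: number }>) ?? [];
    const attempts = deaths[0]?.count ?? 0;
    if (attempts >= 3) {
      // Give up: park the raw message for manual inspection
      channel.sendToQueue('domain_events.quarantine', msg.content, { persistent: true });
      channel.ack(msg);
    } else {
      // nack without requeue dead-letters the message into the retry exchange
      channel.nack(msg, false, false);
    }
  }
}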
For complex workflows like order processing, we combined this with CQRS:
// Order aggregate
// Assumes EventTypeMap also registers 'order.paid' with a payload carrying orderId
class Order {
  private state: 'pending' | 'paid' | 'shipped' = 'pending';

  applyOrderPaid(event: TypedDomainEvent<'order.paid'>): void {
    if (this.state !== 'pending') throw new Error('Invalid state transition');
    this.state = 'paid';

    // Update read model
    orderProjection.update(event.data.orderId, { status: 'paid' });
  }
}
The aggregate root enforces business rules during state transitions, while projections update query-optimized views. This separation allows scaling reads independently from writes.
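For completeness, the orderProjection used in the aggregate above is just such a query-optimized view. A minimal in-memory sketch (the OrderSummary fields and storage are assumptions, not our production schema):

// Illustrative read-model projection
interface OrderSummary {
  orderId: string;
  status: 'pending' | 'paid' | 'shipped';
}

class OrderProjection {
  private summaries: Map<string, OrderSummary> = new Map();

  // Called by aggregates/handlers when events are applied
  update(orderId: string, changes: Partial<OrderSummary>): void {
    const current = this.summaries.get(orderId) ?? { orderId, status: 'pending' };
    this.summaries.set(orderId, { ...current, ...changes });
  }

  // The query side reads from this view, never from the event stream directly
  getById(orderId: string): OrderSummary | undefined {
    return this.summaries.get(orderId);
  }
}

const orderProjection = new OrderProjection();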
Event versioning proved crucial for backward compatibility. When we modified the order.created payload, we handled both formats:
// Versioned payload shapes (illustrative: v2 adds a currency field)
interface V1OrderCreated extends OrderCreatedData { version: 1; }
interface V2OrderCreated extends OrderCreatedData { version: 2; currency: string; }

function migrateV1ToV2(event: V1OrderCreated): V2OrderCreated {
  return {
    ...event,
    currency: 'USD', // New field, defaulted for legacy events
    version: 2
  };
}
New consumers process v2 events directly, while existing ones still handle v1 during the transition. Zero-downtime migrations became possible.
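The trick is that migration happens at the consumer boundary: each consumer upcasts whatever version it receives before handling it, so handlers only ever see the latest shape. A sketch of that step, reusing the types from the migration above (eventBus is assumed to be a RabbitMQEventBus instance):

// Illustrative upcasting at the consumer boundary
function toLatest(event: V1OrderCreated | V2OrderCreated): V2OrderCreated {
  return event.version === 2 ? event : migrateV1ToV2(event);
}

eventBus.subscribe('order.created', (raw) => {
  const event = toLatest(raw);
  // Handlers only ever deal with the v2 shape
  console.log(`order.created v${event.version}, currency ${event.currency}`);
});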
Throughout this journey, I learned that type safety isn’t just about preventing bugs—it’s about enabling fearless evolution. When events are contracts, services can evolve independently. Need to change billing logic? Modify its handlers without touching orders or inventory.
This architecture now processes thousands of events per second in our production environment. Deployment nightmares vanished. Scaling feels like turning a dial rather than rebuilding engines mid-flight. But what surprised me most? How domain events became our system’s living documentation. Each event tells a story of business decisions captured in code.
If you’ve struggled with microservice coordination or data consistency, give this pattern a try. Start small—a single event type, two services. You’ll quickly see the benefits compound. Have you encountered similar challenges in your projects? What solutions worked for you? Share your thoughts below—I read every comment. And if this helped you, consider sharing it with a colleague who might benefit.