I’ve been thinking a lot about how modern applications can handle complexity while staying maintainable. Recently, I worked on a system where tightly coupled components made every change painful. That experience pushed me toward event-driven architecture, which lets a system grow organically while keeping its parts independent. But I wanted more than loose coupling: I needed type safety and reliability across distributed services. That’s why I combined TypeScript, EventEmitter2, and Redis Streams. TypeScript catches event misuse at compile time, and Redis Streams carry those events durably across instances.
Have you ever struggled with debugging events in a large codebase? TypeScript’s type system can prevent many common errors. Let’s start by defining our events with strict types. This makes the system predictable and self-documenting.
interface UserEvent {
  id: string;
  type: 'user.created' | 'user.updated' | 'user.deleted';
  timestamp: string; // ISO 8601 — a Date object would not survive the JSON round-trip through Redis
  payload: {
    userId: string;
    email?: string;
    name?: string;
  };
}
With this structure, any misuse of event data gets caught early. I use EventEmitter2 for local event handling because it supports wildcards and namespaces, and it integrates smoothly with TypeScript once you define event maps. One catch: wildcard matching must be enabled explicitly when constructing the emitter.
import { EventEmitter2 } from 'eventemitter2';

// Wildcard listeners like 'user.*' only fire when wildcard mode is on
const emitter = new EventEmitter2({ wildcard: true, delimiter: '.' });

emitter.on('user.*', (event: UserEvent) => {
  console.log(`Handling ${event.type} for user ${event.payload.userId}`);
});
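For example, emitting a concrete event name reaches the wildcard listener (the IDs and email here are just placeholders):

emitter.emit('user.created', {
  id: 'evt-1',
  type: 'user.created',
  timestamp: new Date().toISOString(),
  payload: { userId: 'user-42', email: 'ada@example.com' },
});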
But what happens when your application scales beyond a single process? That’s where Redis Streams come in. They provide a persistent, ordered log of events. Each service can read from streams without losing messages, even during failures.
Imagine a scenario where a user signs up, and multiple services need to react. With Redis Streams, we publish events once and let consumers process them at their own pace.
import Redis from 'ioredis';

const redis = new Redis();

async function publishEvent(stream: string, event: UserEvent) {
  // '*' tells Redis to auto-generate a monotonically increasing stream ID
  await redis.xadd(stream, '*', 'event', JSON.stringify(event));
}
Error handling is crucial here. If a service crashes mid-processing, its unacknowledged messages stay in the consumer group’s pending list, so it can pick them up again on restart instead of losing them. On top of that, I implement retry logic with exponential backoff for transient issues.
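Here’s a minimal sketch of such a retry helper; the attempt count and delays are illustrative defaults, not tuned values:

async function withRetry<T>(fn: () => Promise<T>, maxAttempts = 5): Promise<T> {
  for (let attempt = 1; ; attempt++) {
    try {
      return await fn();
    } catch (error) {
      if (attempt >= maxAttempts) throw error;
      // Exponential backoff: 100ms, 200ms, 400ms, ... capped at 5 seconds
      const delay = Math.min(100 * 2 ** (attempt - 1), 5000);
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
}

Wrapping a handler in withRetry gives a flaky downstream call a few chances before the message is left pending for a later reclaim.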
How do you ensure that events are processed in order? Redis Streams maintain order, but consumers must acknowledge processing. Here’s a simple consumer loop:
async function consumeEvents(stream: string, group: string, consumer: string) {
  // The group must exist before reading, e.g.:
  // await redis.xgroup('CREATE', stream, group, '$', 'MKSTREAM');
  while (true) {
    const results = (await redis.xreadgroup(
      'GROUP', group, consumer,
      'BLOCK', 1000,
      'STREAMS', stream, '>'
    )) as [string, [string, string[]][]][] | null;
    if (results) {
      for (const [, messages] of results) {
        for (const [id, fields] of messages) {
          // ioredis returns fields as a flat [name, value, ...] array,
          // so the JSON payload sits at index 1, after the 'event' field name
          const event = JSON.parse(fields[1]) as UserEvent;
          try {
            await handleEvent(event); // your application's dispatch logic
            await redis.xack(stream, group, id);
          } catch (error) {
            // No XACK on failure: the message stays pending for a retry or reclaim
            console.error(`Failed to process event ${id}:`, error);
          }
        }
      }
    }
  }
}
Event sourcing becomes powerful when you can replay events to rebuild state. For instance, if a bug corrupts data, you can reprocess events from a past point. I store events in Redis with metadata like version and aggregate ID.
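Replay then becomes a range read over the stream. Here’s a minimal sketch; for large streams you’d page through with COUNT rather than pulling everything at once:

async function replayEvents(
  stream: string,
  applyEvent: (event: UserEvent) => Promise<void>
) {
  // XRANGE from '-' to '+' walks the entire log, oldest entry first
  const entries = await redis.xrange(stream, '-', '+');
  for (const [, fields] of entries) {
    await applyEvent(JSON.parse(fields[1]) as UserEvent);
  }
}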
What about testing? I write unit tests for event handlers and integration tests for the full flow. Mocking Redis in tests helps verify behavior without external dependencies.
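As a sketch of the unit-test side (assuming Jest, and refactoring the publisher to accept an injected client so a mock can stand in for ioredis):

type StreamClient = { xadd: (...args: string[]) => Promise<string | null> };

function makePublisher(client: StreamClient) {
  return (stream: string, event: UserEvent) =>
    client.xadd(stream, '*', 'event', JSON.stringify(event));
}

test('publishes the serialized event', async () => {
  const xadd = jest.fn().mockResolvedValue('1-0');
  const publish = makePublisher({ xadd });
  const event: UserEvent = {
    id: 'evt-1',
    type: 'user.created',
    timestamp: '2024-01-01T00:00:00.000Z',
    payload: { userId: 'user-42' },
  };
  await publish('users', event);
  expect(xadd).toHaveBeenCalledWith('users', '*', 'event', JSON.stringify(event));
});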
In one project, I built a notification system that sends emails and updates dashboards. Events like 'user.created' trigger multiple actions. TypeScript ensures that each handler receives the correct payload structure.
Here’s a type-safe way to register handlers:
type EventHandlers = {
  'user.created': (event: UserEvent) => Promise<void>;
  'user.updated': (event: UserEvent) => Promise<void>;
  'user.deleted': (event: UserEvent) => Promise<void>;
};

function registerHandler<T extends keyof EventHandlers>(
  event: T,
  handler: EventHandlers[T]
) {
  emitter.on(event, handler);
}
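Registration then reads naturally, and a typo in the event name fails to compile. (sendWelcomeEmail is a hypothetical helper, not code from earlier.)

registerHandler('user.created', async (event) => {
  await sendWelcomeEmail(event.payload.userId); // hypothetical helper
});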
Performance monitoring is key. I use metrics to track event throughput and latency, and Redis itself provides commands like XLEN and XPENDING to inspect stream lengths and consumer backlog.
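For example, with ioredis (the stream and group names are whatever you used when setting up the consumer):

async function streamStats(stream: string, group: string) {
  const length = await redis.xlen(stream); // total entries in the stream
  const pending = await redis.xpending(stream, group); // summary of unacknowledged messages
  console.log({ stream, group, length, pending });
}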
Have you considered how event-driven systems affect database design? I often use CQRS, separating read and write models. Commands change the write model, the resulting events feed projections, and queries hit optimized read stores.
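As a sketch of one such projection (the Redis-hash read model here is an assumption, not a prescribed schema):

// Projection: events keep a denormalized read model up to date
async function projectUser(event: UserEvent) {
  const key = `user:view:${event.payload.userId}`;
  switch (event.type) {
    case 'user.created':
    case 'user.updated':
      // Store only the fields the read side actually queries
      await redis.hset(key, {
        email: event.payload.email ?? '',
        name: event.payload.name ?? '',
      });
      break;
    case 'user.deleted':
      await redis.del(key);
      break;
  }
}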
Security is another aspect: I validate event payloads at the service boundary and attach correlation IDs so a request can be traced across services, which pays off in both auditing and debugging.
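A hand-rolled type guard is enough to sketch the validation idea; in production I’d reach for a schema library such as zod:

function isUserEvent(value: unknown): value is UserEvent {
  const e = value as Partial<UserEvent> | null;
  return (
    !!e &&
    typeof e.id === 'string' &&
    typeof e.timestamp === 'string' &&
    typeof e.type === 'string' &&
    ['user.created', 'user.updated', 'user.deleted'].includes(e.type) &&
    typeof e.payload?.userId === 'string'
  );
}

Running each JSON.parse result through this guard before dispatch means a malformed message gets rejected instead of crashing a handler.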
In conclusion, combining TypeScript’s type safety with EventEmitter2’s flexibility and Redis Streams’ durability creates robust systems. It reduces bugs and makes scaling straightforward. I encourage you to try this approach in your next project. If you found this helpful, please like, share, and comment with your experiences or questions. Let’s learn together!