I’ve been thinking a lot about how modern applications handle complex workflows. You know those systems where one action triggers multiple downstream processes? Like when an order placement sets off payment processing, inventory updates, and customer notifications. That’s exactly what led me to explore production-ready event-driven systems using Node.js, Redis Streams, and TypeScript. When designed well, this approach keeps systems decoupled yet coordinated. Stick around - I’ll show you how to build this properly, and you might find some patterns you can apply to your own projects. If this resonates, I’d appreciate your thoughts in the comments later.
Setting up our environment begins with creating a solid foundation. We start by initializing a Node.js project and installing the essential packages. Redis becomes our backbone for event streaming, while TypeScript ensures type safety. Here’s how we bootstrap the project:
```bash
npm init -y
npm install redis ioredis uuid express helmet cors compression
npm install -D typescript @types/node @types/express nodemon
```
Our TypeScript configuration (`tsconfig.json`) enforces strict typing and modern JavaScript features. The project structure organizes events, services, and utilities logically - keeping publishers, consumers, and domain logic separated but connected.
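If you want a concrete starting point, a minimal `tsconfig.json` along these lines covers the strict mode and modern target we rely on (the exact values are a suggestion, not a requirement):

```json
{
  "compilerOptions": {
    "target": "ES2022",
    "module": "commonjs",
    "strict": true,
    "esModuleInterop": true,
    "outDir": "dist",
    "rootDir": "src",
    "skipLibCheck": true
  },
  "include": ["src/**/*"]
}
```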
For event publishing, we first define our domain events using TypeScript interfaces. This gives us compile-time safety and clear contracts:
```typescript
// events.ts: BaseEvent carries the fields every event shares (shape assumed here)
export interface BaseEvent {
  id: string;
  type: string;
  timestamp: Date;
}

// OrderCreatedEvent example
export interface OrderCreatedEvent extends BaseEvent {
  type: 'order.created';
  data: {
    orderId: string;
    customerId: string;
    items: Array<{ productId: string; quantity: number }>;
  };
}

export type DomainEvent = OrderCreatedEvent; // union grows as more event types are added
```
Our Redis connection handler ensures reliability with production-grade settings:
```typescript
// redis.ts
import Redis from 'ioredis';

export class RedisConnection {
  private static instance: Redis;

  static getInstance(): Redis {
    if (!RedisConnection.instance) {
      RedisConnection.instance = new Redis({
        host: process.env.REDIS_HOST,
        port: Number(process.env.REDIS_PORT) || 6379,
        // Back off between connection attempts instead of hammering the server
        retryStrategy: (times) => Math.min(times * 100, 2000),
        maxRetriesPerRequest: 3,
        // Reconnect when a failover leaves us talking to a read-only replica
        reconnectOnError: (err) => err.message.includes('READONLY'),
      });
    }
    return RedisConnection.instance;
  }
}
```
The publisher class handles event serialization and stream management. Notice how we publish to both type-specific and general streams - this becomes crucial for monitoring later. How might we extend this for different priority levels?
```typescript
// publisher.ts (method on the publisher class; uuidv4 is imported from 'uuid')
async publishEvent(event: Omit<DomainEvent, 'id' | 'timestamp'>): Promise<string> {
  const fullEvent = { ...event, id: uuidv4(), timestamp: new Date() };
  const payload = JSON.stringify(fullEvent);
  // Type-specific stream that consumers subscribe to
  const entryId = await this.redis.xadd(`events:${event.type}`, '*', 'event', payload);
  // General stream used for monitoring (the name 'events:all' is our own convention)
  await this.redis.xadd('events:all', '*', 'event', payload);
  return entryId as string;
}
```
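For illustration, wiring this up from an order endpoint could look like the following. The `EventPublisher` constructor shape is an assumption; the payload follows the `OrderCreatedEvent` interface above:

```typescript
const publisher = new EventPublisher(RedisConnection.getInstance());

const entryId = await publisher.publishEvent({
  type: 'order.created',
  data: {
    orderId: 'ord_1001',
    customerId: 'cust_42',
    items: [{ productId: 'sku_7', quantity: 2 }],
  },
});
console.log(`order.created published as stream entry ${entryId}`);
```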
For consumers, Redis Consumer Groups enable load balancing and failure recovery. Each consumer instance processes events independently while tracking its own progress. What happens if a consumer crashes mid-processing? Its unacknowledged entries stay in the group’s pending list, where another consumer can claim and reprocess them.
```typescript
// consumer.ts
const redis = RedisConnection.getInstance();
const groupName = 'order-processors';
const streamKey = 'events:order.created';

// Create the group once; Redis throws BUSYGROUP if it already exists
try {
  await redis.xgroup('CREATE', streamKey, groupName, '0', 'MKSTREAM');
} catch { /* group already exists */ }

while (true) {
  const results = (await redis.xreadgroup(
    'GROUP', groupName, 'consumer-1',
    'COUNT', '10', 'BLOCK', 5000,
    'STREAMS', streamKey, '>'
  )) as [string, [string, string[]][]][] | null;
  if (!results) continue; // BLOCK timed out with nothing new
  for (const [, entries] of results) {
    for (const [eventId, fields] of entries) {
      const event = JSON.parse(fields[1]); // fields = ['event', '<json>']
      await processEvent(event); // domain-specific handler
      await redis.xack(streamKey, groupName, eventId); // ack only after success
    }
  }
}
```
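To recover entries left pending by a crashed consumer, a periodic pass can claim anything that has sat idle too long and reprocess it. A minimal sketch, assuming Redis 6.2+ for XAUTOCLAIM and reusing the names from the consumer above:

```typescript
// Claim entries that have been pending for more than 60 seconds, whoever owned them
const [, claimed] = (await redis.xautoclaim(
  streamKey, groupName, 'consumer-1', 60000, '0'
)) as [string, [string, string[]][]];

for (const [eventId, fields] of claimed) {
  await processEvent(JSON.parse(fields[1]));
  await redis.xack(streamKey, groupName, eventId);
}
```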
Error handling requires multiple safety nets. We implement retries with exponential backoff and dead-letter queues for poison messages:
```typescript
// error handling (handleEventWithRetry lives on the consumer class)
const MAX_RETRIES = 5; // tune to your workload

async handleEventWithRetry(event: DomainEvent, attempt = 1): Promise<void> {
  try {
    await processEvent(event);
  } catch (error) {
    if (attempt >= MAX_RETRIES) {
      // Poison message: park it in the dead-letter queue for inspection
      await this.sendToDeadLetterQueue(event, error);
      return;
    }
    // Exponential backoff: 2s, 4s, 8s, ... then retry
    await new Promise((resolve) => setTimeout(resolve, 2 ** attempt * 1000));
    await this.handleEventWithRetry(event, attempt + 1);
  }
}
```
Event sourcing patterns give us an audit trail and state reconstruction capabilities. By storing every state change as an immutable event, we can rebuild system state at any point:
```typescript
// OrderService applying events
class OrderService {
  async applyEvent(event: OrderEvent): Promise<void> {
    // Replay the order's history to get its current state
    const order = await this.rebuildState(event.orderId);
    // Validate and compute the next state from the incoming event
    const newState = applyBusinessRules(order, event);
    // Persist the immutable event first, then the derived state
    await this.eventStore.appendEvent(event);
    await this.stateStore.saveState(newState);
  }
}
```
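What `rebuildState` does is the heart of that claim, so here is a sketch of one way to write it. The `getEventsForOrder` helper, the `Order` type, and `emptyOrder` are illustrative names, not part of the article’s code:

```typescript
private async rebuildState(orderId: string): Promise<Order> {
  // Replay the order's history: fold each stored event into the state in sequence
  const history = await this.eventStore.getEventsForOrder(orderId);
  return history.reduce(
    (state, event) => applyBusinessRules(state, event),
    emptyOrder(orderId) // the initial state before any events were recorded
  );
}
```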
Monitoring becomes critical in production. We track event throughput, processing latency, and error rates. Redis’ built-in commands help:
```bash
# Check stream length
XLEN events:order.created

# Inspect consumer group lag
XINFO GROUPS events:order.created
```
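The same numbers can be pulled from Node for dashboards or alerts. A small sketch using ioredis; the metric shape and the 60-second interval are arbitrary choices:

```typescript
// Periodically report stream depth and the consumer group's unacknowledged backlog
setInterval(async () => {
  const depth = await redis.xlen('events:order.created');
  const pending = await redis.xpending('events:order.created', 'order-processors');
  const backlog = (pending as [number, ...unknown[]])[0]; // summary form: [count, minId, maxId, consumers]
  console.log(JSON.stringify({ stream: 'events:order.created', depth, backlog }));
}, 60_000);
```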
For testing, we verify both happy paths and failure scenarios. Our test harness publishes events and validates outcomes:
```typescript
// Test example: publishOrderCreatedEvent and getStreamEvents come from our test harness
test('order creates payment event', async () => {
  await publishOrderCreatedEvent(testOrder);
  const paymentEvents = await getStreamEvents('events:payment.processed');
  expect(paymentEvents).toHaveLength(1);
});
```
Deploying to production requires thinking about persistence policies, scaling consumers horizontally, and securing Redis connections. We set memory limits and enable TLS encryption between services, and we run Redis Cluster with replicas so there is no single point of failure.
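As one concrete piece of that, turning on TLS is a small change to the client options; ioredis passes the `tls` object straight through to Node’s `tls.connect`. Host, port, and password here are placeholders for your environment:

```typescript
import Redis from 'ioredis';

// Authenticated, TLS-encrypted connection for production
const redis = new Redis({
  host: process.env.REDIS_HOST,
  port: Number(process.env.REDIS_PORT) || 6380,
  password: process.env.REDIS_PASSWORD,
  tls: { servername: process.env.REDIS_HOST },
});
```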
Common pitfalls include under-provisioning Redis memory, ignoring consumer lag, and mishandling idempotency. Best practices we follow (two of them are sketched in code right after the list):
- Version events from day one
- Enforce idempotent processing
- Monitor consumer group lag
- Set stream max length policies
- Use correlation IDs for tracing
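Two of those practices in code, as promised above: trimming streams on write, and skipping events we have already handled. The cap of 100000 entries and the 24-hour TTL are illustrative values, and `payload` and `event` come from the publisher and consumer shown earlier:

```typescript
// Cap the stream as we publish; '~' lets Redis trim lazily, which is much cheaper
await redis.xadd('events:order.created', 'MAXLEN', '~', 100000, '*', 'event', payload);

// Idempotent processing: record each handled event id; NX means "only set if absent"
const firstTime = await redis.set(`processed:${event.id}`, '1', 'EX', 86400, 'NX');
if (firstTime === null) return; // we have seen this event before, so skip it
```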
Building this changed how I view distributed systems. The decoupling allows teams to work independently while maintaining system integrity. What challenges have you faced with event-driven systems? Share your experiences below - I’d love to hear what approaches worked for you. If this guide helped, please like and share it with others who might benefit.