I’ve been thinking a lot about building resilient systems lately. When you work on applications that need to handle high traffic while staying responsive, traditional request-response patterns often fall short. That’s why I’ve turned to event-driven architectures - they allow different parts of a system to communicate asynchronously, making everything more robust and scalable. Today, I’ll walk you through creating a type-safe event system using TypeScript, Node.js, and Redis Streams.
Why Redis Streams? They provide persistent storage for events and support consumer groups, making it possible to scale horizontally. Plus, they offer acknowledgment mechanisms to ensure reliable processing. Let me show you how to implement this properly.
First, we set up our project:
npm init -y
npm install ioredis zod winston
npm install -D typescript @types/node
Our tsconfig.json establishes strict type checking:
{
  "compilerOptions": {
    "target": "ES2020",
    "module": "commonjs",
    "strict": true,
    "esModuleInterop": true
  }
}
The foundation is defining our event schemas. I use Zod for runtime validation that aligns with TypeScript’s static types. This dual-layer validation catches errors early:
// events/schemas.ts
import { z } from 'zod';

const BaseEventSchema = z.object({
  id: z.string().uuid(),
  timestamp: z.number(),
  type: z.string(),
});

const UserCreatedEventSchema = BaseEventSchema.extend({
  type: z.literal('user.created'),
  data: z.object({
    userId: z.string(),
    email: z.string().email()
  })
});

export const EventSchema = z.discriminatedUnion('type', [
  UserCreatedEventSchema
  // Other events...
]);

// Static type inferred from the runtime schema
export type Event = z.infer<typeof EventSchema>;
Notice how we use a discriminated union? This allows TypeScript to infer the exact event structure based on the type field. Ever tried debugging mismatched event formats? This prevents those headaches.
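Here's a minimal sketch of what that narrowing looks like in practice - routeEvent is just an illustrative helper, not part of the system:

// Illustrative only: shows how the discriminant narrows the payload type
import type { Event } from './schemas';

function routeEvent(event: Event) {
  switch (event.type) {
    case 'user.created':
      // In this branch TypeScript knows data has userId and email
      console.log(`Sending welcome email to ${event.data.email}`);
      break;
  }
}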
For publishing events, we create a dedicated class:
// events/publisher.ts
import Redis from 'ioredis';
import { randomUUID } from 'crypto';
import { EventSchema } from './schemas';

export class EventPublisher {
  constructor(private redis: Redis) {}

  async publish(eventType: string, data: object) {
    const event = {
      id: randomUUID(),
      timestamp: Date.now(),
      type: eventType,
      data
    };
    const validated = EventSchema.parse(event); // Runtime validation

    await this.redis.xadd(
      `events:${eventType}`,
      '*',
      'payload',
      JSON.stringify(validated)
    );
  }
}
What happens if validation fails? Zod throws an error before the event reaches Redis, protecting our consumers from malformed data.
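Here's a rough usage sketch - the connection settings and payload are placeholders:

// Hypothetical wiring; host, port and the payload are just examples
import Redis from 'ioredis';
import { EventPublisher } from './events/publisher';

async function main() {
  const redis = new Redis({ host: 'localhost', port: 6379 });
  const publisher = new EventPublisher(redis);
  try {
    await publisher.publish('user.created', { userId: '42', email: 'ada@example.com' });
  } catch (err) {
    // A ZodError here means the payload never reached Redis
    console.error('Invalid event payload', err);
  }
}

main().catch(console.error);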
Consuming events requires careful handling. We use Redis consumer groups for parallel processing:
// events/consumer.ts
import Redis from 'ioredis';
import { EventSchema, type Event } from './schemas';

export class EventConsumer {
  constructor(private redis: Redis, private groupName: string) {}

  // The consumer group must exist before XREADGROUP will work
  async ensureGroup(streamKey: string) {
    try {
      await this.redis.xgroup('CREATE', streamKey, this.groupName, '0', 'MKSTREAM');
    } catch {
      // BUSYGROUP means the group already exists - safe to ignore
    }
  }

  async processEvents(streamKey: string) {
    await this.ensureGroup(streamKey);
    while (true) {
      const events = (await this.redis.xreadgroup(
        'GROUP', this.groupName, 'consumer-1',
        'BLOCK', 2000,
        'STREAMS', streamKey, '>'
      )) as [string, [string, string[]][]][] | null;
      if (!events) continue;

      for (const [, messages] of events) {
        for (const [id, payload] of messages) {
          try {
            const rawEvent = JSON.parse(payload[1]);
            const event = EventSchema.parse(rawEvent); // Validate before handling
            await this.handleEvent(event);
            await this.redis.xack(streamKey, this.groupName, id);
          } catch (err) {
            // Park failing or malformed events for later inspection
            await this.redis.xadd('dead-letter-queue', '*', 'payload', payload[1]);
          }
        }
      }
    }
  }

  private async handleEvent(event: Event) {
    // Dispatch to your domain logic here
  }
}
Notice the dead-letter queue? When processing fails, we move problematic events there for later inspection without blocking the main stream. How often have you seen systems fail because one bad event halted everything?
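Wiring the consumer up looks roughly like this - the group and stream names are just examples:

// Hypothetical startup; group and stream names are placeholders
import Redis from 'ioredis';
import { EventConsumer } from './events/consumer';

const consumer = new EventConsumer(new Redis(), 'user-service');
consumer.processEvents('events:user.created').catch((err) => {
  console.error('Consumer stopped unexpectedly', err);
  process.exit(1);
});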
For replaying events, we leverage Redis’ XRANGE:
// Another method on EventConsumer
async replayEvents(streamKey: string, fromId = '-', toId = '+') {
  const events = await this.redis.xrange(streamKey, fromId, toId);
  for (const [, payload] of events) {
    const event = EventSchema.parse(JSON.parse(payload[1])); // Re-validate on replay
    await this.handleEvent(event);
  }
}
This is invaluable when debugging - reproduce issues by replaying exact event sequences. Have you ever needed to recreate a specific user journey?
Monitoring is crucial. Here's what I track (there's an XPENDING sketch after the list):
- Pending events (XPENDING)
- Consumer lag
- Processing times
- Dead letter queue size
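For the pending-entries check, XPENDING's summary form is usually enough. A quick sketch - the stream and group names are assumptions:

// Hypothetical monitoring check; adjust stream/group names to your setup
import Redis from 'ioredis';

async function reportPending(redis: Redis, streamKey: string, groupName: string) {
  // Summary form returns the total pending count, min/max IDs and per-consumer counts
  const summary = await redis.xpending(streamKey, groupName);
  console.log(`Pending entries on ${streamKey}:`, summary);
}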
We also add structured logging:
import winston from 'winston';

const logger = winston.createLogger({
  format: winston.format.combine(
    winston.format.timestamp(),
    winston.format.json()
  ),
  transports: [new winston.transports.Console()]
});
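As a rough example of how this might wrap event handling - withLogging is a hypothetical helper, not part of the consumer above:

// Hypothetical helper: times a handler and emits a structured log line
import type { Logger } from 'winston';
import type { Event } from './events/schemas';

async function withLogging(logger: Logger, event: Event, handler: (e: Event) => Promise<void>) {
  const start = Date.now();
  await handler(event);
  logger.info('event processed', {
    type: event.type,
    id: event.id,
    durationMs: Date.now() - start
  });
}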
When testing, I focus on the following (a schema-validation test sketch follows the list):
- Event schema validation
- Consumer failure scenarios
- Order guarantees
- Performance under load
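For schema validation, a test might look like this - I'm assuming a Jest-style runner and the file layout from earlier:

// Hypothetical test, assuming Jest-style globals (describe/it/expect)
import { EventSchema } from '../events/schemas';

describe('EventSchema', () => {
  it('rejects a user.created event with an invalid email', () => {
    const badEvent = {
      id: '3f2c8e1a-9b7d-4c2e-8f1a-5d6e7a8b9c0d',
      timestamp: Date.now(),
      type: 'user.created',
      data: { userId: '42', email: 'not-an-email' }
    };
    expect(() => EventSchema.parse(badEvent)).toThrow();
  });
});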
For optimization (a batching sketch follows the list):
- Batch processing
- Connection pooling
- Parallel consumers
- Smart partitioning
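For batching, ioredis pipelines let you queue many XADDs and send them in one round trip. A sketch, assuming the events are produced elsewhere:

// Hypothetical batch publish using an ioredis pipeline
import Redis from 'ioredis';
import { EventSchema, type Event } from './events/schemas';

async function publishBatch(redis: Redis, streamKey: string, events: Event[]) {
  const pipeline = redis.pipeline();
  for (const event of events) {
    // Validate each event, then queue it; the whole batch goes out in a single round trip
    pipeline.xadd(streamKey, '*', 'payload', JSON.stringify(EventSchema.parse(event)));
  }
  await pipeline.exec();
}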
Could we use Kafka instead? Absolutely - but Redis Streams provide an excellent balance for many applications without Kafka’s operational complexity.
I hope this practical walkthrough helps you build more resilient systems. Event-driven architectures have transformed how I design applications, making them more responsive and maintainable. If you found this useful, please share it with others who might benefit. What challenges have you faced with event-driven systems? Let me know in the comments!