I’ve been thinking a lot about building resilient systems lately. What happens when services fail? How do we ensure messages aren’t lost? These questions led me to Redis Streams - a powerful solution for event-driven architectures. Today, I’ll walk you through creating a high-performance microservice using Fastify, Redis Streams, and TypeScript. Let’s build something robust together.
Setting up our project requires key dependencies. We start with a fresh TypeScript environment:
```bash
npm init -y
npm install fastify @fastify/redis ioredis zod
npm install -D typescript @types/node
```
Our `tsconfig.json` establishes strict type checking:

```json
{
  "compilerOptions": {
    "target": "ES2022",
    "module": "NodeNext",
    "strict": true,
    "outDir": "./dist"
  }
}
```
Why use Redis Streams instead of traditional pub/sub? For starters, streams persist messages and support consumer groups. This means if a service restarts, it won’t miss events. How many times have you lost critical messages during deployments?
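To make the persistence point concrete, here's a quick sketch (assuming an ioredis client and the `user_events` stream we'll build up below): entries are still readable after every consumer has seen them.

```typescript
import Redis from 'ioredis';

const redis = new Redis(); // connects to localhost:6379 by default

// Pub/sub messages vanish at delivery; stream entries persist until trimmed
console.log(await redis.xlen('user_events'));                         // entries still stored
console.log(await redis.xrange('user_events', '-', '+', 'COUNT', 5)); // replay the oldest five
```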
Defining our event types with Zod ensures validation:
```typescript
// events.ts
import { z } from 'zod';

// Common envelope shared by every event on the stream
export const BaseEventSchema = z.object({
  id: z.string().uuid(),
  type: z.string(),
  timestamp: z.number()
});

// Concrete event: narrows `type` to a literal and adds a typed payload
export const UserEventSchema = BaseEventSchema.extend({
  type: z.literal('user.created'),
  data: z.object({ userId: z.string(), email: z.string().email() })
});
```
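A nice side effect: the schemas double as our TypeScript types, so validation and typing never drift apart. A small sketch (`incoming` stands in for a hypothetical untrusted payload):

```typescript
import { z } from 'zod';
import { UserEventSchema } from './events';

// Derive the static type straight from the schema
export type UserEvent = z.infer<typeof UserEventSchema>;

declare const incoming: unknown; // hypothetical untrusted payload

// safeParse rejects malformed payloads without throwing
const result = UserEventSchema.safeParse(incoming);
if (result.success) {
  const event: UserEvent = result.data; // fully typed from here on
}
```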
The core Redis service handles event publishing:
```typescript
// redis-stream.service.ts
import Redis from 'ioredis';

export class StreamService {
  constructor(private redis: Redis) {}

  async publish(stream: string, event: Record<string, unknown>): Promise<string> {
    // XADD takes a flat field/value list, so stringify non-string values
    // to make sure nested objects survive the round trip
    const serialized = Object.entries(event).flatMap(([key, value]) =>
      [key, typeof value === 'string' ? value : JSON.stringify(value)]
    );
    // '*' asks Redis to assign the entry id (ms timestamp + sequence)
    const id = await this.redis.xadd(stream, '*', ...serialized);
    return id as string; // only null with NOMKSTREAM, which we don't use
  }
}
```
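From the caller's side it looks like this (a sketch; the stream name and payload are illustrative):

```typescript
import Redis from 'ioredis';
import { randomUUID } from 'node:crypto';
import { StreamService } from './redis-stream.service';

const streams = new StreamService(new Redis());
const entryId = await streams.publish('user_events', {
  id: randomUUID(),
  type: 'user.created',
  timestamp: Date.now(),
  data: { userId: 'u42', email: 'ada@example.com' }
});
console.log(entryId); // e.g. "1700000000000-0"
```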
For event producers, we integrate with Fastify routes:
```typescript
// producer.ts
import { FastifyInstance } from 'fastify';
import { randomUUID } from 'node:crypto';
import { UserEventSchema } from './events';

export async function userRoutes(app: FastifyInstance) {
  app.post('/users', async (request, reply) => {
    // Validate and stamp the event before it touches the stream
    const event = UserEventSchema.parse({
      id: randomUUID(),
      type: 'user.created',
      timestamp: Date.now(),
      data: request.body
    });
    await app.streamService.publish('user_events', event);
    reply.code(202).send({ status: 'queued' });
  });
}
```
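One wiring detail the route glosses over: `app.streamService` is not built into Fastify. We decorate the instance ourselves and augment the type; a minimal sketch (the file layout and decorator name are our own convention):

```typescript
// app.ts
import Fastify from 'fastify';
import Redis from 'ioredis';
import { StreamService } from './redis-stream.service';
import { userRoutes } from './producer';

// Teach TypeScript about the decorator
declare module 'fastify' {
  interface FastifyInstance {
    streamService: StreamService;
  }
}

const app = Fastify({ logger: true });
app.decorate('streamService', new StreamService(new Redis()));
app.register(userRoutes);

await app.listen({ port: 3000 });
```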
Now, what about consumers? Here’s where consumer groups shine. They allow parallel processing while tracking progress:
```typescript
// consumer.ts
import Redis from 'ioredis';

async function processEvents() {
  const redis = new Redis();
  try {
    await redis.xgroup('CREATE', 'user_events', 'my_group', '$', 'MKSTREAM');
  } catch (err) {
    // BUSYGROUP just means the group already exists (safe to ignore)
    if (!String(err).includes('BUSYGROUP')) throw err;
  }
  while (true) {
    const results = (await redis.xreadgroup(
      'GROUP', 'my_group', 'consumer1',
      'COUNT', '10', 'BLOCK', '2000',
      'STREAMS', 'user_events', '>'
    )) as [string, [string, string[]][]][] | null;
    if (!results) continue; // BLOCK timed out with nothing new
    for (const [, entries] of results) {
      for (const [id, fields] of entries) {
        await handleEvent(redis, id, fields);
        // Acknowledge only after the handler finishes (or dead-letters)
        await redis.xack('user_events', 'my_group', id);
      }
    }
  }
}
```
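One failure mode this loop doesn't cover: a consumer that crashes after reading but before acknowledging leaves its entries stuck in the pending list. On Redis 6.2+ the `XAUTOCLAIM` command lets a healthy consumer adopt entries that have been idle too long. A sketch, assuming a recent ioredis and an arbitrary 60-second idle threshold:

```typescript
// Reassign entries unacknowledged for over 60s to this consumer
const [nextCursor, reclaimed] = (await redis.xautoclaim(
  'user_events', 'my_group', 'consumer1', 60000, '0'
)) as [string, [string, string[]][]];

for (const [id, fields] of reclaimed) {
  await handleEvent(redis, id, fields);
  await redis.xack('user_events', 'my_group', id);
}
```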
Error handling is critical. We implement dead-letter queues for failed messages:
```typescript
async function handleEvent(redis: Redis, id: string, fields: string[]) {
  try {
    // Business logic goes here
  } catch (error) {
    // Route the failed message to a dead-letter stream for later inspection
    await redis.xadd(
      'dead_letters', '*',
      'originalId', id,
      'fields', JSON.stringify(fields),
      'error', error instanceof Error ? error.message : String(error)
    );
  }
}
```
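Because the dead-letter queue is itself just a stream, inspecting it later needs no extra tooling:

```typescript
// Page through failed messages from an admin script or a REPL
const failed = await redis.xrange('dead_letters', '-', '+', 'COUNT', 50);
for (const [id, fields] of failed) {
  console.log(id, fields); // decide whether to replay or discard
}
```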
For monitoring, Redis offers the `XINFO` command, which lets us track consumer lag:

```
> XINFO GROUPS user_events
1) 1) "name"
   2) "my_group"
   3) "consumers"
   4) (integer) 3
   5) "pending"
   6) (integer) 12
```

Here `pending` counts entries that were delivered to a consumer but not yet acknowledged; a steadily growing number means consumers are falling behind.
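The same numbers are available programmatically, which is handy for exporting a lag metric. A sketch; the reply is one flat field/value array per group:

```typescript
const groups = (await redis.xinfo('GROUPS', 'user_events')) as (string | number)[][];
for (const group of groups) {
  // e.g. ['name', 'my_group', 'consumers', 3, 'pending', 12, ...]
  const pending = group[group.indexOf('pending') + 1];
  console.log(`${group[1]} pending: ${pending}`);
}
```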
Performance optimization? Consider these:
- Batch processing with `COUNT`
- Non-blocking acknowledgments, batched through a pipeline (see the sketch after this list)
- Connection pooling
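For the acknowledgment point, pipelining XACKs trades one round trip per message for one per batch. A sketch, where `ids` is a hypothetical array of entry ids from a processed batch:

```typescript
// Flush all acknowledgments for a batch in a single round trip
const pipeline = redis.pipeline();
for (const id of ids) {
  pipeline.xack('user_events', 'my_group', id);
}
await pipeline.exec();
```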
Testing strategies include integration tests that publish a fixture event and assert the consumer picked it up:

```typescript
// test.consumer.ts
test('processes user events', async () => {
  await publishTestEvent();  // helper: XADDs a fixture event to the stream
  await waitForConsumer();   // helper: polls until the group has no lag
  expect(processedEvents).toContainEqual(
    expect.objectContaining({ type: 'user.created' })
  );
});
```
Before deployment, remember:
- Set memory limits with `MAXLEN` (sketched below)
- Configure persistent storage (AOF, RDB, or both)
- Monitor consumer group lag
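On the memory point, `MAXLEN` with the `~` modifier trims approximately, which is far cheaper than an exact cap. A sketch holding the stream near one million entries:

```typescript
// '~' lets Redis trim lazily in whole macro-nodes (approximate cap)
await redis.xadd(
  'user_events', 'MAXLEN', '~', '1000000', '*',
  'type', 'user.created'
);
```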
I’ve found this architecture handles 10,000+ events per second on modest hardware. What could you build with this foundation?
If this helped you, share it with your team! Comments? I’d love to hear about your implementation challenges.