Here’s my perspective on building event-driven systems with NestJS and Redis, drawn from practical experience:
I’ve seen too many systems crumble under load because of tight coupling. That’s why event-driven architecture caught my attention - it solves real-world scalability challenges. When services communicate through events rather than direct calls, you gain resilience. Let me show you how we can implement this properly.
First, we establish our foundation. We’ll use Redis as our event store - it’s fast, persistent, and supports the patterns we need. Our core infrastructure starts with defining what an event actually is:
// Base event structure
import { v4 as uuidv4 } from 'uuid';

export abstract class DomainEvent {
  public readonly id: string;
  public readonly occurredAt: Date;

  constructor(
    public readonly aggregateId: string,
    public readonly eventType: string,
    public readonly data: any
  ) {
    this.id = uuidv4();
    this.occurredAt = new Date();
  }
}
Why does this matter? Because strong typing prevents entire categories of errors. Notice how we’re capturing the exact moment something happened - this becomes crucial for debugging later. How often have you struggled to reproduce timing-related bugs?
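To make that concrete, here is a sketch of how a specific event might extend the base class. The generic payload type is my tweak over the `any` above, and Node's built-in randomUUID stands in for uuidv4 so the snippet runs on its own:

```typescript
import { randomUUID } from "node:crypto";

// Same shape as the base class above, with one tweak: a generic
// payload type instead of `any`, so each event documents exactly
// what consumers receive. randomUUID stands in for uuidv4.
abstract class DomainEvent<TData> {
  public readonly id: string;
  public readonly occurredAt: Date;

  constructor(
    public readonly aggregateId: string,
    public readonly eventType: string,
    public readonly data: TData
  ) {
    this.id = randomUUID();
    this.occurredAt = new Date();
  }
}

// A concrete event: the payload type is part of the contract
class UserCreatedEvent extends DomainEvent<{ email: string; createdAt: Date }> {
  constructor(aggregateId: string, data: { email: string; createdAt: Date }) {
    super(aggregateId, "UserCreatedEvent", data);
  }
}

const event = new UserCreatedEvent("user-1", {
  email: "[email protected]",
  createdAt: new Date(),
});
```

With the payload typed, a handler that misspells `event.data.email` fails at compile time instead of in production.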
Now let’s connect to Redis:
// Redis configuration
import Redis from 'ioredis';

const redis = new Redis({
  host: process.env.REDIS_HOST,
  port: parseInt(process.env.REDIS_PORT ?? '6379', 10),
  retryStrategy: (times) => Math.min(times * 100, 3000)
});
This configuration handles network blips gracefully. The retry strategy prevents cascading failures during temporary outages. Ever had a single timeout bring down your entire system? We’re avoiding that from the start.
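To see exactly what that strategy does, the same formula can be exercised in isolation:

```typescript
// The backoff formula from the retryStrategy above: the delay grows
// by 100 ms per reconnection attempt and is capped at 3 seconds.
const retryDelay = (times: number): number => Math.min(times * 100, 3000);

const delays = [1, 5, 30, 100].map(retryDelay);
// linear growth until the 3-second ceiling, then flat
```

So attempt 1 waits 100 ms, attempt 5 waits 500 ms, and every attempt from 30 onward waits the full 3 seconds rather than hammering a struggling server.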
For actual event storage, we use multiple indexes:
// Saving events with multiple access paths
async saveEvent(event: DomainEvent) {
  // Redis hashes store flat strings, so serialize the Date and the
  // nested payload explicitly instead of spreading the raw object
  await redis.multi()
    .hset(`events:${event.id}`, {
      id: event.id,
      aggregateId: event.aggregateId,
      eventType: event.eventType,
      occurredAt: event.occurredAt.toISOString(),
      data: JSON.stringify(event.data)
    })
    .zadd(`aggregate:${event.aggregateId}`, event.occurredAt.getTime(), event.id)
    .zadd(`type:${event.eventType}`, event.occurredAt.getTime(), event.id)
    .exec();
}
Notice we’re storing events by ID, by aggregate (like a user), and by event type. This lets us retrieve events through different lenses later. What if you need to replay all events for a specific user? Or find every “order_created” event? The indexes make it efficient.
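To make the layout concrete, here is the same key scheme expressed as plain data. This is a sketch only; `indexKeysFor` is a hypothetical helper for illustration, not part of the store:

```typescript
interface EventRecord {
  id: string;
  aggregateId: string;
  eventType: string;
  occurredAt: Date;
}

// The three write paths from saveEvent, as data: one hash for the
// payload and two sorted-set entries scored by timestamp, which is
// what makes time-ordered range queries over each index cheap.
function indexKeysFor(event: EventRecord) {
  const score = event.occurredAt.getTime();
  return {
    hashKey: `events:${event.id}`,
    aggregateIndex: { key: `aggregate:${event.aggregateId}`, score, member: event.id },
    typeIndex: { key: `type:${event.eventType}`, score, member: event.id },
  };
}

const keys = indexKeysFor({
  id: "evt-1",
  aggregateId: "user-42",
  eventType: "order_created",
  occurredAt: new Date(0),
});
```

Replaying one user's history is then a ZRANGE over `aggregate:user-42` followed by an HGETALL per returned event id; finding every "order_created" event is the same walk over `type:order_created`.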
Now let’s publish events from our services:
// Publishing in a service
async createUser(userDto: CreateUserDto) {
  const user = await this.usersRepository.create(userDto);

  // Publish after successful creation
  this.eventBus.publish(new UserCreatedEvent(user.id, {
    email: user.email,
    createdAt: user.createdAt
  }));

  return user;
}
The key here? We’re publishing after the database commit succeeds. Never before. This prevents consumers from acting on events that didn’t actually persist. How many times have you seen systems where events fire but the transaction rolls back?
On the consumption side, we need reliability:
// Event handler with retries
@EventHandler(UserCreatedEvent)
async handleUserCreated(event: UserCreatedEvent) {
  try {
    await this.mailService.sendWelcomeEmail(event.data.email);
  } catch (error) {
    // Exponential backoff: up to 3 attempts, starting from a 1000 ms delay
    await this.retryService.scheduleRetry(event, 3, 1000);
  }
}
This pattern handles transient failures gracefully. If the email service is down, we’ll retry with increasing delays. We’re also limiting retry attempts - after three failures, we’d move the event to a dead-letter queue for investigation.
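The retryService itself isn't shown above, but its scheduling decision can be sketched as a pure function. The names and the doubling base here are assumptions, not the article's actual implementation:

```typescript
type RetryDecision =
  | { action: "retry"; delayMs: number }
  | { action: "dead-letter" };

// Decide what happens to a failed event: retry with exponential
// backoff (baseDelayMs * 2^attempt) until maxAttempts is reached,
// then hand the event to the dead-letter queue for investigation.
function nextStep(attempt: number, maxAttempts: number, baseDelayMs: number): RetryDecision {
  if (attempt >= maxAttempts) return { action: "dead-letter" };
  return { action: "retry", delayMs: baseDelayMs * 2 ** attempt };
}

// Mirroring scheduleRetry(event, 3, 1000): attempts 0, 1, 2 wait
// 1, 2, and 4 seconds; the fourth failure is dead-lettered.
const first = nextStep(0, 3, 1000);
const last = nextStep(3, 3, 1000);
```

Keeping this decision pure makes it trivial to unit-test the backoff schedule without any queue infrastructure.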
For complex workflows, we implement CQRS:
// Separating commands and queries
async updateUserEmail(command: UpdateEmailCommand) {
  // Command side - validate and update
  const user = await this.usersRepository.get(command.userId);
  const oldEmail = user.email; // capture before mutating, or the old value is lost
  user.updateEmail(command.newEmail);
  await this.usersRepository.save(user);

  // Publish event
  this.eventBus.publish(new UserEmailUpdatedEvent(user.id, {
    oldEmail,
    newEmail: command.newEmail
  }));
}

// Query handler
@QueryHandler(GetUserByEmail)
async handleGetUserByEmail(query: GetUserByEmail) {
  // Read from optimized read model
  return this.userReadModel.findByEmail(query.email);
}
By separating writes from reads, we optimize each path independently. The write side focuses on consistency, while reads can use denormalized data tailored for specific queries. Have you ever had reporting queries slow down your core transactions? This pattern fixes that.
Testing is critical in event-driven systems. We verify behavior by checking emitted events:
// Testing event emission
it('should publish UserCreatedEvent on registration', async () => {
  await userService.register('[email protected]');

  expect(eventBusSpy).toHaveBeenCalledWith(expect.objectContaining({
    eventType: 'UserCreatedEvent',
    data: expect.objectContaining({ email: '[email protected]' })
  }));
});
We’re not just testing function outputs - we’re verifying the right events get published. This catches situations where code executes but fails to notify other parts of the system.
Performance optimizations come last. We use Redis pipelining for bulk operations:
// Bulk event saving
async saveEvents(events: DomainEvent[]) {
  const pipeline = redis.pipeline();

  events.forEach(event => {
    pipeline.hset(`events:${event.id}`, {
      id: event.id,
      aggregateId: event.aggregateId,
      eventType: event.eventType,
      occurredAt: event.occurredAt.toISOString(),
      data: JSON.stringify(event.data)
    });
    pipeline.zadd(`aggregate:${event.aggregateId}`, event.occurredAt.getTime(), event.id);
  });

  await pipeline.exec();
}
This reduces roundtrips when saving multiple events. For read-heavy systems, we’d add Redis replicas. But remember: optimize only after measuring. Premature optimization creates complexity without benefit.
Throughout this journey, I’ve found that the biggest pitfalls are human, not technical. Teams forget that events are immutable facts - you can’t “edit” past events. You can only publish compensating events. This mental shift is crucial.
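A tiny sketch of that idea, with event names invented for illustration: the mistaken change stays in the log forever, and the fix is appended after it rather than edited in.

```typescript
// Events are append-only facts. To undo a mistaken email change,
// append a compensating event; never mutate history.
const log: { eventType: string; data: Record<string, string> }[] = [
  { eventType: "UserEmailUpdated", data: { oldEmail: "[email protected]", newEmail: "[email protected]" } },
  // The fix is a new fact, not an edit of the entry above:
  { eventType: "UserEmailUpdateReverted", data: { restoredEmail: "[email protected]" } },
];

// Current state is always a fold over the full, unmodified log
const currentEmail = log.reduce<string | null>((email, e) => {
  if (e.eventType === "UserEmailUpdated") return e.data.newEmail ?? email;
  if (e.eventType === "UserEmailUpdateReverted") return e.data.restoredEmail ?? email;
  return email;
}, null);
```

Because the bad entry survives, an audit can still see both that the mistake happened and exactly when it was corrected.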
What surprised me most? How event-driven systems make debugging easier. With a complete event log, we can replay any user’s journey exactly as it happened. No more guessing what led to that bug.
If you’ve struggled with tangled microservices or unpredictable scaling, try this approach. Redis provides the backbone, TypeScript ensures correctness, and NestJS glues it together elegantly. What challenges are you facing that event-driven architecture might solve?
Found this useful? Share it with your team and let me know your thoughts in the comments - I’ll respond to every question.