I’ve been thinking about distributed systems lately, especially how to build resilient applications that handle high loads. Why? Because modern software demands more than just functionality—it requires architectures that scale gracefully under pressure. That’s what led me to explore event-driven patterns with NestJS, Redis Streams, and Bull Queue. If you’ve ever struggled with tightly coupled services or unpredictable traffic spikes, you’ll appreciate this approach. Let’s build something powerful together.
Setting up our project is straightforward. We start with a fresh NestJS installation and key dependencies:
nest new event-driven-app
npm install @nestjs/bull bull redis ioredis @nestjs/config
Our project structure organizes functionality by domain, with shared infrastructure components. Notice how we isolate event handling from business logic? This separation becomes crucial when systems grow. Have you considered how your current project’s structure affects maintainability?
// src/config/configuration.ts
export default () => ({
  redis: {
    host: process.env.REDIS_HOST || 'localhost',
    // Fall back to the default Redis port when the env var is unset
    port: parseInt(process.env.REDIS_PORT || '6379', 10),
  },
  queues: {
    orders: { name: 'ordersQueue' },
    notifications: { name: 'notificationsQueue' },
  },
});
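To make this configuration visible to Bull, one option (a sketch using the standard @nestjs/config and @nestjs/bull APIs; the module layout is an assumption) is to wire both together in the root module:

```typescript
// src/app.module.ts (sketch): loads the configuration above and
// points Bull at the same Redis instance via ConfigService.
import { Module } from '@nestjs/common';
import { ConfigModule, ConfigService } from '@nestjs/config';
import { BullModule } from '@nestjs/bull';
import configuration from './config/configuration';

@Module({
  imports: [
    ConfigModule.forRoot({ load: [configuration], isGlobal: true }),
    BullModule.forRootAsync({
      imports: [ConfigModule],
      inject: [ConfigService],
      useFactory: (config: ConfigService) => ({
        redis: {
          host: config.get<string>('redis.host'),
          port: config.get<number>('redis.port'),
        },
      }),
    }),
  ],
})
export class AppModule {}
```

With forRootAsync, every registered queue shares one Redis connection config instead of repeating host and port per queue.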
Redis Streams form our event backbone. They provide persistent, ordered message storage with consumer groups—perfect for replaying events or adding new subscribers. Here’s how we serialize events:
// src/common/events/base.event.ts
import { v4 as uuid } from 'uuid';

export abstract class BaseEvent {
  public readonly id: string = uuid();
  public readonly timestamp: Date = new Date();

  constructor(
    public readonly aggregateId: string,
    public readonly type: string,
    public readonly data: Record<string, any>,
  ) {}
}
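As a quick illustration, here is a concrete subclass (a hypothetical OrderCreatedEvent; the snippet repeats BaseEvent and uses Node's built-in crypto.randomUUID in place of the uuid package so it runs standalone):

```typescript
import { randomUUID } from 'crypto';

// Standalone copy of BaseEvent so this sketch runs on its own;
// randomUUID stands in for the uuid package here.
abstract class BaseEvent {
  public readonly id: string = randomUUID();
  public readonly timestamp: Date = new Date();

  constructor(
    public readonly aggregateId: string,
    public readonly type: string,
    public readonly data: Record<string, any>,
  ) {}
}

// Hypothetical concrete event: the order ID doubles as the aggregate ID.
class OrderCreatedEvent extends BaseEvent {
  constructor(orderId: string, data: Record<string, any>) {
    super(orderId, 'order.created', data);
  }
}

const event = new OrderCreatedEvent('order-42', { total: 99.5 });
console.log(event.type); // 'order.created'
```

Each subclass fixes the type string, so consumers can route on event.type without inspecting the payload.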
When publishing events, we use a simple but effective pattern:
// src/infrastructure/event-publisher.service.ts
import { Injectable } from '@nestjs/common';
import Redis from 'ioredis';
import { BaseEvent } from '../common/events/base.event';

@Injectable()
export class EventPublisher {
  constructor(private readonly redis: Redis) {}

  async publish(stream: string, event: BaseEvent): Promise<void> {
    // '*' lets Redis assign an auto-generated, monotonically increasing entry ID
    await this.redis.xadd(stream, '*', 'event', JSON.stringify(event));
  }
}
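On the consuming side, the 'event' field comes back as a JSON string. A small helper (hypothetical, mirroring the publisher above; note that Date fields arrive as ISO strings after the JSON round trip) restores it:

```typescript
// Shape of a serialized BaseEvent after JSON round-tripping.
// The timestamp arrives as an ISO string, not a Date object.
interface StoredEvent {
  id: string;
  aggregateId: string;
  type: string;
  data: Record<string, any>;
  timestamp: string;
}

// Hypothetical helper: parses the 'event' field written by EventPublisher.
function deserializeEvent(json: string): StoredEvent {
  const event = JSON.parse(json) as StoredEvent;
  if (!event.id || !event.type) {
    throw new Error('Malformed event payload');
  }
  return event;
}

const wire = JSON.stringify({
  id: 'abc-123',
  aggregateId: 'order-42',
  type: 'order.created',
  data: { total: 10 },
  timestamp: new Date().toISOString(),
});
const restored = deserializeEvent(wire);
console.log(restored.type); // 'order.created'
```

Validating the payload at the boundary keeps malformed entries from propagating into business logic.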
Consuming events requires careful planning. How do we ensure no event gets lost during failures? Consumer groups acknowledge messages only after successful processing; on the Bull side, a job counts as acknowledged when its handler resolves:
// src/modules/orders/order-consumer.service.ts
import { Process, Processor } from '@nestjs/bull';
import { Job } from 'bull';

@Processor('orders')
export class OrderConsumer {
  @Process('orderEvents')
  async handleEvent(job: Job) {
    // Bull marks the job completed when this handler resolves and
    // failed (triggering a retry) when it throws, so no manual
    // moveToCompleted/moveToFailed calls are needed here.
    await this.processOrder(job.data);
  }

  private async processOrder(event: Record<string, any>) {
    // order-handling business logic lives here
  }
}
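The Bull processor above covers queued jobs; on the Redis Streams side, ioredis's xreadgroup returns a nested array shape that has to be unpacked before each entry can be acknowledged with xack. A sketch of that unpacking (the reply shape is ioredis's; the helper and stream names are hypothetical):

```typescript
// ioredis XREADGROUP reply shape:
// [[streamName, [[entryId, [field1, value1, ...]], ...]], ...]
type XReadGroupReply = [string, [string, string[]][]][];

// Unpacks stream entries into (entryId, parsed event) pairs.
function parseStreamEntries(
  reply: XReadGroupReply,
): { entryId: string; event: any }[] {
  const results: { entryId: string; event: any }[] = [];
  for (const [, entries] of reply) {
    for (const [entryId, fields] of entries) {
      // fields is a flat [key, value, key, value, ...] array
      const idx = fields.indexOf('event');
      if (idx !== -1) {
        results.push({ entryId, event: JSON.parse(fields[idx + 1]) });
      }
    }
  }
  return results;
}

// After processing an entry successfully, the consumer would call:
//   await redis.xack(stream, group, entryId);
// Unacknowledged entries stay in the pending list and can be
// reclaimed by another consumer after a crash.

const reply: XReadGroupReply = [
  ['orders:events', [['1-0', ['event', '{"type":"order.created"}']]]],
];
const parsed = parseStreamEntries(reply);
console.log(parsed[0].event.type); // 'order.created'
```

The acknowledge-after-processing discipline is what guarantees at-least-once delivery: a consumer that dies mid-job leaves the entry pending rather than lost.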
Bull Queue handles background processing. Notice the retry strategy—exponential backoff is essential for transient errors:
// src/queues/orders.queue.ts
import { BullModule } from '@nestjs/bull';
import { Module } from '@nestjs/common';

@Module({
  imports: [
    BullModule.registerQueue({
      name: 'orders',
      redis: { host: 'localhost', port: 6379 },
      defaultJobOptions: {
        attempts: 3,
        backoff: { type: 'exponential', delay: 1000 },
      },
    }),
  ],
})
export class OrdersQueueModule {}
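To build intuition for what attempts: 3 with a 1000 ms exponential backoff means in practice, here is a rough illustration of the resulting wait schedule (an approximation for intuition only, not Bull's exact internal formula):

```typescript
// Illustrative only: approximates an exponential backoff schedule
// where each retry waits twice as long as the previous one.
function backoffDelays(attempts: number, baseDelayMs: number): number[] {
  const delays: number[] = [];
  // With N attempts there are at most N - 1 retries, hence N - 1 waits.
  for (let retry = 0; retry < attempts - 1; retry++) {
    delays.push(baseDelayMs * 2 ** retry);
  }
  return delays;
}

console.log(backoffDelays(3, 1000)); // [1000, 2000]
```

Spacing retries out like this gives a flaky downstream service time to recover instead of hammering it at a fixed interval.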
For dead letter scenarios, we add safety nets:
// src/queues/notifications.queue.ts
import { InjectQueue, OnQueueFailed, Processor } from '@nestjs/bull';
import { Job, Queue } from 'bull';

@Processor('notifications')
export class NotificationProcessor {
  // 'notifications-dlq' is a separately registered queue acting as our dead letter store
  constructor(
    @InjectQueue('notifications-dlq') private readonly deadLetterQueue: Queue,
  ) {}

  @OnQueueFailed()
  async handleFailure(job: Job, error: Error) {
    // Only park the job once every configured attempt has been exhausted
    if (job.attemptsMade >= (job.opts.attempts ?? 1)) {
      await this.deadLetterQueue.add(job.data);
    }
  }
}
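For this to work, the dead letter queue itself has to be registered; a minimal sketch, assuming a queue name of 'notifications-dlq':

```typescript
// Sketch: registerQueue accepts multiple queue definitions,
// so the DLQ can live beside the main queue in one module.
import { BullModule } from '@nestjs/bull';
import { Module } from '@nestjs/common';

@Module({
  imports: [
    BullModule.registerQueue(
      { name: 'notifications' },
      { name: 'notifications-dlq' }, // hypothetical dead letter queue name
    ),
  ],
})
export class NotificationsQueueModule {}
```

Keeping failed payloads in a separate queue lets you inspect and replay them deliberately instead of losing them to automatic retries.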
Monitoring is non-negotiable. We integrate Bull Dashboard for real-time insights:
// src/main.ts
import { NestFactory } from '@nestjs/core';
import { getQueueToken } from '@nestjs/bull';
import { createBullBoard } from '@bull-board/api';
import { BullAdapter } from '@bull-board/api/bullAdapter';
import { ExpressAdapter } from '@bull-board/express';
import { AppModule } from './app.module';

async function bootstrap() {
  const app = await NestFactory.create(AppModule);
  const serverAdapter = new ExpressAdapter();
  serverAdapter.setBasePath('/queues');
  const ordersQueue = app.get(getQueueToken('orders'));
  createBullBoard({ queues: [new BullAdapter(ordersQueue)], serverAdapter });
  app.use('/queues', serverAdapter.getRouter());
  await app.listen(3000);
}
bootstrap();
Testing event flows requires simulating failures. We use Jest to verify our resilience. Since Bull drives retries, the handler's job is simply to rethrow on failure:
// test/orders.consumer.spec.ts
describe('OrderConsumer', () => {
  it('rethrows so Bull can retry the job', async () => {
    mockProcessOrder.mockRejectedValueOnce(new Error('transient failure'));
    const job = { data: mockEvent, attemptsMade: 0 } as Job;

    await expect(consumer.handleEvent(job)).rejects.toThrow('transient failure');
    expect(mockProcessOrder).toHaveBeenCalledTimes(1);
  });
});
In production, we consider alternatives like Kafka or RabbitMQ. But Redis Streams offer simplicity with persistence, ideal when you’re already using Redis for caching. How might different message brokers affect your operational costs?
This architecture shines in e-commerce systems. Imagine processing payments while updating inventory and sending confirmation emails—all without blocking the user. The events flow through dedicated channels, with each service focusing on its specialty.
What challenges have you faced with distributed systems? I’d love to hear about your experiences. If this approach resonates with you, share it with your team. Like, comment, or reach out if you’d like the full code repository!