I recently rebuilt our notification system at work after users reported delays during peak traffic. The bottleneck? Our server was handling everything synchronously. That’s when I turned to BullMQ and Redis for background processing. Today, I’ll show you how to implement this powerful combination to make your applications more resilient.
Task queues help manage heavy workloads by processing jobs asynchronously. Think of sending bulk emails or generating reports - these shouldn’t block user interactions. Why BullMQ specifically? It provides persistence through Redis, scales horizontally, and offers fine-grained job control. Did you know you can prioritize urgent tasks while delaying less critical ones?
First, let’s set up our environment. You’ll need Node.js 18+ and Redis 6+. Initialize your project with:
npm init -y
npm install bullmq
npm install -D typescript @types/node
The examples below are TypeScript, hence the dev dependencies. BullMQ ships with its own Redis client (ioredis), so there’s no separate redis package to install.
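If you don’t already have a Redis server running locally, one quick way to get one (assuming Docker is installed) is:
docker run -d --name redis -p 6379:6379 redis:7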
Configure the Redis connection settings properly - this is crucial for production. Because BullMQ workers rely on blocking Redis commands, maxRetriesPerRequest must be set to null:
// src/config/redis.ts
export const redisConfig = {
  host: process.env.REDIS_HOST || 'localhost',
  port: parseInt(process.env.REDIS_PORT || '6379', 10),
  password: process.env.REDIS_PASSWORD,
  // Required by BullMQ: blocking connections must not retry individual commands
  maxRetriesPerRequest: null
};
Now, let’s create a base queue class to avoid repetition:
// src/queues/base-queue.ts
import { Job, JobsOptions, Queue, Worker } from 'bullmq';
import { redisConfig } from '../config/redis';

export class BaseQueue<T> {
  readonly queue: Queue<T>;
  protected worker?: Worker<T>;

  constructor(queueName: string) {
    this.queue = new Queue<T>(queueName, { connection: redisConfig });
  }

  // Sensible retry defaults; callers can pass extra options (priority, delay, ...)
  async addJob(jobName: string, data: T, opts: JobsOptions = {}): Promise<void> {
    await this.queue.add(jobName, data, {
      attempts: 3,
      backoff: { type: 'exponential', delay: 2000 },
      ...opts
    });
  }

  processJobs(handler: (job: Job<T>) => Promise<void>): Worker<T> {
    this.worker = new Worker<T>(this.queue.name, handler, {
      connection: redisConfig,
      concurrency: 5
    });
    return this.worker;
  }

  // Close the worker first so in-flight jobs finish, then the queue connection
  async close(): Promise<void> {
    await this.worker?.close();
    await this.queue.close();
  }
}
Here’s how you’d implement an email queue:
// src/queues/email-queue.ts
import { Job } from 'bullmq';
import { BaseQueue } from './base-queue';

interface EmailData {
  recipient: string;
  subject: string;
  content: string;
}

export class EmailQueue extends BaseQueue<EmailData> {
  constructor() {
    super('email-processing');
    this.processJobs(this.sendEmail.bind(this));
  }

  private async sendEmail(job: Job<EmailData>): Promise<void> {
    const { recipient, subject, content } = job.data;
    // Your email sending logic here (SMTP client or provider SDK)
    console.log(`Sent "${subject}" to ${recipient}`);
  }
}

// A shared instance so the usage and monitoring code below reuse one queue
export const emailQueue = new EmailQueue();
// Usage - e.g. from an API route or service
import { emailQueue } from './queues/email-queue';

emailQueue.addJob('welcome-email', {
  recipient: '[email protected]',
  subject: 'Welcome!',
  content: 'Thanks for joining'
});
Notice how we’ve set automatic retries with exponential backoff? This handles temporary failures like network blips. But what about permanent failures? We should monitor those separately.
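One way to keep an eye on permanently failed jobs is BullMQ’s QueueEvents class. The sketch below only logs failures for the 'email-processing' queue defined above - swap the console.error for whatever alerting you use, and remember that jobs left in the failed set (queue.getFailed()) are the ones that exhausted their retries:
// Minimal failure listener (file placement is up to you)
import { QueueEvents } from 'bullmq';
import { redisConfig } from './config/redis';

const emailEvents = new QueueEvents('email-processing', { connection: redisConfig });

emailEvents.on('failed', ({ jobId, failedReason }) => {
  // Replace with your alerting or metrics integration
  console.error(`Email job ${jobId} failed: ${failedReason}`);
});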
For job priorities, simply add the priority option:
// High-priority password reset email (in BullMQ, lower numbers run first)
emailQueue.addJob('password-reset', data, { priority: 1 });
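The same options object covers delayed work too, which is how you’d push the less critical tasks mentioned earlier out of peak hours. The delay option (in milliseconds) is standard BullMQ; digestData here is just a placeholder payload:
// Low-priority job that should run an hour from now
emailQueue.addJob('weekly-digest', digestData, { delay: 60 * 60 * 1000 });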
To monitor your queues, use the Bull Board dashboard:
npm install @bull-board/api @bull-board/express express
// src/monitoring.ts
import express from 'express';
import { createBullBoard } from '@bull-board/api';
import { BullMQAdapter } from '@bull-board/api/bullMQAdapter';
import { ExpressAdapter } from '@bull-board/express';
import { emailQueue } from './queues/email-queue';

const app = express();

// Attach the dashboard to the Express app under /admin/queues
const serverAdapter = new ExpressAdapter();
serverAdapter.setBasePath('/admin/queues');

createBullBoard({
  queues: [new BullMQAdapter(emailQueue.queue)],
  serverAdapter
});

app.use('/admin/queues', serverAdapter.getRouter());
app.listen(3000);
In production, always:
- Use separate Redis instances for queues and cache
- Monitor memory usage with redis-cli info memory
- Scale workers horizontally using processes or containers
- Set up alerts for stuck jobs (a minimal check is sketched below)
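Here’s a minimal sketch of that last check. The threshold and polling interval are arbitrary assumptions, and the console.warn stands in for a real alerting integration:
// Hypothetical backlog check for the email queue
import { emailQueue } from './queues/email-queue';

const WAITING_THRESHOLD = 1000; // assumed limit - tune for your workload

setInterval(async () => {
  const counts = await emailQueue.queue.getJobCounts('wait', 'active', 'failed');
  if (counts.wait > WAITING_THRESHOLD) {
    console.warn(`Email queue backlog: ${counts.wait} waiting, ${counts.failed} failed`);
  }
}, 60_000);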
Common pitfalls? Forgetting to close workers and connections during shutdown can leave jobs stranded in the active state. Always implement graceful termination:
process.on('SIGTERM', async () => {
  await emailQueue.close(); // closes the worker first, then the queue connection
  process.exit(0);
});
While alternatives like RabbitMQ exist, BullMQ’s Redis foundation provides simplicity with powerful features, and the Bull Board dashboard gives immediate visibility with very little setup.
I’ve seen 40% faster response times since implementing this pattern. Give it a try in your next project! If you found this helpful, share it with your team or leave a comment about your experience with task queues.