Building distributed systems can be challenging, but when I needed to offload resource-intensive tasks from my web application’s main thread, task queues became essential. Imagine processing thousands of images or sending emails without slowing down user interactions. That’s what we’ll build today using BullMQ, Redis, and TypeScript: a production-ready system that scales. Stick with me!
Setting up our environment starts with a clean foundation. We create a new project and install core dependencies like BullMQ and Redis. TypeScript brings type safety, while Express handles our monitoring dashboard. Here’s how I structure my project:
npm init -y
npm install bullmq ioredis express @bull-board/api @bull-board/express
npm install -D typescript @types/node
My tsconfig.json ensures strict typing and modern JavaScript features:
{
  "compilerOptions": {
    "target": "ES2020",
    "module": "commonjs",
    "strict": true,
    "esModuleInterop": true,
    "outDir": "./dist"
  }
}
For Redis configuration, I establish one robust connection shared by queues and workers. Note that BullMQ requires maxRetriesPerRequest to be null on connections it uses for blocking commands, and will refuse to start otherwise:
// src/config/redis.ts
import { Redis } from 'ioredis';

export const redis = new Redis({
  host: process.env.REDIS_HOST || 'localhost',
  port: Number(process.env.REDIS_PORT) || 6379,
  // BullMQ requires null here for its blocking connections
  maxRetriesPerRequest: null
});
Why prioritize type safety? Defining job interfaces prevents runtime errors. Here’s how I structure email jobs:
// src/types/jobs.ts
export interface EmailJobData {
  id: string;
  to: string;
  subject: string;
  template: string;
  priority: 'low' | 'high';
}
Creating our first queue takes minutes. This email queue handles failures with exponential backoff:
// src/queues/email-queue.ts
import { Queue } from 'bullmq';
import { redis } from '../config/redis';
import { EmailJobData } from '../types/jobs';

export const emailQueue = new Queue<EmailJobData>('email', {
  connection: redis,
  defaultJobOptions: {
    attempts: 3,
    backoff: { type: 'exponential', delay: 2000 }
  }
});
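To make the retry timing concrete: with the exponential type, BullMQ spaces retries as the base delay times 2^(attemptsMade - 1). A pure sketch of that schedule:

```typescript
// Sketch of BullMQ's built-in exponential backoff schedule:
// wait = baseMs * 2^(attemptsMade - 1)
export function backoffDelay(baseMs: number, attemptsMade: number): number {
  return baseMs * 2 ** (attemptsMade - 1);
}

// With the queue options above, the three attempts wait
// 2000 ms, 4000 ms, and 8000 ms before retrying.
```

So a transient failure gets progressively more breathing room instead of hammering the downstream service at a fixed interval.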
Adding jobs feels natural with TypeScript’s autocompletion:
await emailQueue.add('welcome-email', {
  id: 'user_123',
  to: '[email protected]',
  subject: 'Welcome!',
  template: 'welcome',
  priority: 'high'
});
Workers bring our queues to life. Notice how I handle different priorities:
// src/workers/email-worker.ts
import { Worker } from 'bullmq';
import { redis } from '../config/redis';
import { sendEmail } from '../services/email';
import { EmailJobData } from '../types/jobs';

const worker = new Worker<EmailJobData>('email', async job => {
  if (job.name === 'welcome-email') {
    await sendEmail(job.data);
  }
}, { connection: redis, concurrency: 5 });

worker.on('completed', job => {
  console.log(`Sent email to ${job.data.to}`);
});
What happens when jobs fail? BullMQ retries automatically according to the attempts and backoff options we set on the queue, so the failed handler only needs to log – calling job.retry() here would double up on the built-in retries. Note that job can be undefined if its data was removed:
worker.on('failed', (job, err) => {
  console.error(`Job ${job?.id} failed: ${err.message}`);
});
For delayed tasks like reminder emails, scheduling is straightforward:
await emailQueue.add(
  'reminder-email',
  { /* data */ },
  { delay: 24 * 3600 * 1000 } // 24 hours
);
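When a reminder should land at a specific moment rather than a fixed offset, I convert the target Date into the millisecond delay BullMQ expects. This is a small hypothetical helper; past dates are clamped to zero so the job runs immediately:

```typescript
// Hypothetical helper: millisecond delay from `now` until `target`,
// clamped to 0 so dates in the past run immediately.
export function delayUntil(target: Date, now: Date = new Date()): number {
  return Math.max(0, target.getTime() - now.getTime());
}
```

It would be used as { delay: delayUntil(remindAt) } in the job options.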
Monitoring is crucial. I build a simple dashboard with Express:
// src/monitoring/dashboard.ts
import express from 'express';
import { createBullBoard } from '@bull-board/api';
import { BullMQAdapter } from '@bull-board/api/bullMQAdapter';
import { ExpressAdapter } from '@bull-board/express';
import { emailQueue } from '../queues/email-queue';

const serverAdapter = new ExpressAdapter();
serverAdapter.setBasePath('/queues');
createBullBoard({ queues: [new BullMQAdapter(emailQueue)], serverAdapter });

const app = express();
app.use('/queues', serverAdapter.getRouter());
app.listen(3000);
Rate limiting prevents resource overload. In BullMQ the limiter is configured on the Worker rather than the Queue; here’s how I throttle image processing (assume processImage is that queue’s processor function):
const imageWorker = new Worker('image-processing', processImage, {
  connection: redis,
  limiter: { max: 10, duration: 1000 } // at most 10 jobs per second
});
Error logging captures critical details without cluttering main logic:
// src/utils/logger.ts
import fs from 'fs';
import { Job } from 'bullmq';

export const jobLogger = {
  error: (job: Job, error: Error) => {
    fs.appendFileSync('errors.log',
      `[${new Date().toISOString()}] Job ${job.id} failed: ${error.stack}\n`
    );
  }
};
Testing queues requires simulating real conditions. I use Jest for worker tests, with sendEmail mocked via jest.mock. Attach the completion listener before enqueueing, so the event can’t fire before we start waiting for it:
// tests/email-worker.test.ts
test('processes welcome email', async () => {
  const completed = new Promise(resolve => worker.once('completed', resolve));
  await emailQueue.add('welcome-email', mockData);
  await completed;
  expect(sendEmail).toHaveBeenCalled();
});
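One failure mode in tests like this is a worker that never completes, hanging Jest until its global timeout. A small hypothetical helper bounds the wait explicitly so the test fails fast with a clear message:

```typescript
import { EventEmitter } from 'node:events';

// Hypothetical test helper: resolve with the event's payload when
// `emitter` fires `event`, or reject after `ms` milliseconds.
export function waitForEvent(
  emitter: EventEmitter,
  event: string,
  ms = 5000
): Promise<unknown> {
  return new Promise((resolve, reject) => {
    const timer = setTimeout(
      () => reject(new Error(`timed out waiting for "${event}"`)),
      ms
    );
    emitter.once(event, value => {
      clearTimeout(timer);
      resolve(value);
    });
  });
}
```

In the test above, `await completed` would become `await waitForEvent(worker, 'completed', 5000)`.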
Docker simplifies deployment. My docker-compose.yml includes Redis:
services:
  redis:
    image: redis:alpine
    ports:
      - "6379:6379"
Production optimizations include resilient error handling and graceful shutdown, so in-flight jobs finish before the process exits:
worker.on('error', err => {
  console.error('Worker error', err);
});

process.on('SIGTERM', async () => {
  await worker.close(); // waits for active jobs to finish
  await redis.quit();
  process.exit(0);
});
Common pitfalls? Always validate job data. I learned this the hard way:
const validateEmailJob = (data: any): data is EmailJobData => {
  return !!data.to && !!data.subject;
};
Task queues transformed how I build scalable systems. They handle everything from PDF generation to data synchronization without blocking users. What asynchronous challenges are you facing in your projects? Share your experiences below – I’d love to hear how you implement queues. If this helped you, pass it along to other developers!