I’ve been thinking a lot about what happens when an application gets popular. You build a feature, like sending a welcome email, and it works perfectly. Then, a hundred people sign up at once. Suddenly, your main application is stuck waiting for emails to send, and everything grinds to a halt. The user who just clicked “purchase” sees a spinning wheel because the system is busy generating a PDF receipt for someone else.
This is where the magic of a task queue comes in. Instead of doing the hard work immediately, your app can just say, “Hey, I have a job to do,” drop it into a line, and immediately get back to responding to the user. Another part of your system, completely separate, picks jobs from that line and processes them quietly in the background. It’s like having a super-efficient kitchen crew while you’re the waiter who just needs to take orders.
Why BullMQ, Redis, and TypeScript? Because together, they form a system that is both incredibly strong and a pleasure to work with. BullMQ is a Node.js library that makes dealing with job queues straightforward. Redis is its powerhouse—a lightning-fast data store that keeps track of every job. TypeScript is our safety net, ensuring we don’t accidentally send an image where an email should go. You get performance, reliability, and fewer mistakes.
Let’s start by setting up our project. We’ll create a new directory and install our key tools. You’ll also need a Redis server running locally — the examples below assume the default `localhost:6379`. Think of this as gathering our ingredients before we start cooking.
npm init -y
npm install bullmq ioredis express dotenv zod
npm install -D typescript @types/node ts-node
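If you don’t have a tsconfig.json yet, a minimal one gets ts-node working against a src/ layout. The exact compiler options here are my suggestion, not a requirement — adjust target and paths to taste.

```json
{
  "compilerOptions": {
    "target": "ES2020",
    "module": "commonjs",
    "strict": true,
    "esModuleInterop": true,
    "rootDir": "src",
    "outDir": "dist"
  },
  "include": ["src"]
}
```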
Next, we define what a “job” actually is in our system. Using TypeScript and Zod for validation, we can be crystal clear. This prevents a lot of “why is this broken?” moments later.
// src/types/jobs.ts
import { z } from 'zod';
export const EmailJobSchema = z.object({
  to: z.string().email(),
  subject: z.string(),
  body: z.string(),
});
export type EmailJobData = z.infer<typeof EmailJobSchema>;
export type JobData = EmailJobData; // We can add more types here later
Now, how do we actually get jobs into the line? We need a queue manager. This is the part of your app that says, “New job here!” and BullMQ, using Redis, handles the rest. Notice how we specify how many times to try a failed job and how long to wait between tries. This is what makes the system robust.
// src/queue/manager.ts
import { Queue } from 'bullmq';
import Redis from 'ioredis';
import { JobData } from '../types/jobs';
// maxRetriesPerRequest: null tells ioredis not to cap command retries,
// which BullMQ expects so it can manage reconnection behavior itself.
const connection = new Redis({ host: 'localhost', port: 6379, maxRetriesPerRequest: null });
export class QueueManager {
  private emailQueue: Queue;

  constructor() {
    this.emailQueue = new Queue('email', { connection });
  }

  async addEmailJob(data: JobData) {
    await this.emailQueue.add('send-email', data, {
      attempts: 3,
      backoff: { type: 'exponential', delay: 2000 },
    });
    console.log('Email job added to the queue.');
  }
}
But what good is a line of jobs if no one is there to do the work? Enter the worker. This is a separate process that constantly asks, “Is there a job for me?” When it finds one, it executes the task. This separation is the core idea.
// src/worker/email.worker.ts
import { Worker } from 'bullmq';
import Redis from 'ioredis';
import { EmailJobSchema } from '../types/jobs';
// BullMQ requires maxRetriesPerRequest: null on connections used by workers.
const connection = new Redis({ host: 'localhost', port: 6379, maxRetriesPerRequest: null });
const worker = new Worker(
  'email',
  async (job) => {
    // Validate the payload before doing any work — this is why we imported the schema.
    // A malformed job should fail loudly here, not send a broken email.
    const data = EmailJobSchema.parse(job.data);
    console.log(`Processing job ${job.id}: Sending email to ${data.to}`);
    // In reality, you would call your email service here, like SendGrid or Nodemailer.
    // Let's simulate a delay:
    await new Promise((resolve) => setTimeout(resolve, 1000));
    console.log(`Email to ${data.to} sent successfully.`);
  },
  { connection }
);
worker.on('completed', (job) => {
  console.log(`Job ${job.id} is done!`);
});

worker.on('failed', (job, err) => {
  console.error(`Job ${job?.id} failed with error:`, err.message);
});
Now, what if you have different kinds of jobs? Some are urgent, like charging a credit card. Others can wait, like cleaning up old log files. How do you handle that priority? This is where job options become powerful. You can assign a priority level, delay a job until later, or ensure it runs at a specific time.
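As a sketch of what those options look like, here is a hypothetical mapping from task type to the options object you would pass as the third argument to `queue.add(name, data, options)`. The `JobOptions` type below mirrors only a small subset of BullMQ’s real `JobsOptions`; the task names are made up for illustration.

```typescript
// A subset of BullMQ's JobsOptions, declared locally so this sketch
// runs standalone. Pass an object like this to queue.add(name, data, opts).
type JobOptions = { priority?: number; delay?: number; attempts?: number };

const jobOptions: Record<'charge-card' | 'cleanup-logs' | 'reminder', JobOptions> = {
  // Urgent: in BullMQ, a lower priority number means more urgent (1 is highest).
  'charge-card': { priority: 1, attempts: 5 },
  // Can wait until the queue is quiet.
  'cleanup-logs': { priority: 10 },
  // Delayed: not runnable until one hour from now (delay is in milliseconds).
  'reminder': { delay: 60 * 60 * 1000 },
};
```

In the real app you would then write something like `await queue.add('charge-card', { orderId }, jobOptions['charge-card'])`.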
You might wonder, how do I know what’s happening in my queues? Is the email queue backed up? Are jobs failing repeatedly? For that, we need a way to peek inside. This is where a simple admin dashboard or API comes in handy. You can build one with Express to list jobs, see their status, or even retry failed ones manually.
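For example, BullMQ’s `queue.getJobCounts()` returns a number per job state. A small helper (hypothetical — the thresholds and message format are my choice) can turn that into the summary your Express endpoint returns:

```typescript
// Mirrors the shape returned by BullMQ's queue.getJobCounts(),
// declared locally so this sketch runs standalone.
type JobCounts = {
  waiting: number;
  active: number;
  completed: number;
  failed: number;
  delayed: number;
};

// Hypothetical health check: failed jobs need attention first,
// then a long waiting list signals the workers can't keep up.
function queueHealth(name: string, counts: JobCounts): string {
  if (counts.failed > 0) {
    return `${name}: UNHEALTHY (${counts.failed} failed jobs need attention)`;
  }
  if (counts.waiting > 100) {
    return `${name}: BACKED UP (${counts.waiting} jobs waiting)`;
  }
  return `${name}: OK`;
}
```

In an Express route you might fetch the counts with `await emailQueue.getJobCounts('waiting', 'active', 'completed', 'failed', 'delayed')` and respond with `queueHealth('email', counts)`.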
Scaling up is the next logical step. What happens when one worker isn’t enough? You simply start another one. BullMQ and Redis ensure that the same job isn’t given to two workers. You can run these workers on the same machine or spread them across different servers. This is how you handle real growth.
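To see why duplicate processing can’t happen, here is a toy in-memory model of the guarantee BullMQ gets from Redis: claiming a job removes it from the shared list in one atomic step, so two workers can never receive the same job. The real atomicity comes from Redis commands and Lua scripts; this sketch only illustrates the idea.

```typescript
// Toy model: claim() atomically removes a job, so no two workers
// ever receive the same one. In production, Redis provides this
// atomicity for BullMQ; this in-memory version is illustrative only.
class ToyQueue<T> {
  private jobs: T[] = [];
  add(job: T): void { this.jobs.push(job); }
  claim(): T | undefined { return this.jobs.shift(); }
}

// Several "workers" drain the queue; between them they process
// every job exactly once, with no overlaps.
function drain<T>(queue: ToyQueue<T>, workerCount: number): T[][] {
  const processed: T[][] = Array.from({ length: workerCount }, () => []);
  let job: T | undefined;
  for (let i = 0; (job = queue.claim()) !== undefined; i++) {
    processed[i % workerCount].push(job);
  }
  return processed;
}
```

Within a single worker process you can also raise throughput with BullMQ’s `concurrency` option — for example, `new Worker('email', processor, { connection, concurrency: 5 })` processes up to five jobs at once.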
Think about failures for a moment. A third-party API goes down for two minutes. Without a retry mechanism, your job would simply fail and stay failed. But with the settings we defined earlier, the system will patiently try again, waiting longer each time: two seconds before the first retry, then four before the second. This simple feature turns a fragile process into a resilient one.
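The arithmetic behind the waits is worth seeing once. Assuming BullMQ’s built-in exponential strategy, the delay before retry n is roughly the base delay doubled each time:

```typescript
// BullMQ's built-in "exponential" backoff waits roughly
// baseDelayMs * 2^(attempt - 1) before each retry (attempt is 1-based).
function exponentialBackoff(baseDelayMs: number, attempt: number): number {
  return baseDelayMs * 2 ** (attempt - 1);
}

// With our earlier settings ({ attempts: 3, backoff: { type: 'exponential', delay: 2000 } }),
// the two retries wait 2000 ms and then 4000 ms.
```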
So, you’ve got jobs going in, workers processing them, and a way to monitor everything. The final piece is integrating this into your main application. Your Express route doesn’t send the email; it just uses the QueueManager to add a job and instantly responds to the client. The user gets a fast “Welcome!” message, and the email arrives a second later.
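Here is a sketch of that pattern with Express’s types stubbed out so it stands alone. In the real app you would use `express.Request`/`express.Response` and the `QueueManager` from earlier; the `Enqueuer` interface, the handler shape, and the 202 status code (“accepted for processing”) are my choices for illustration.

```typescript
// The route's only job: enqueue, then respond fast.
// Enqueuer stands in for QueueManager; a worker sends the email later.
interface EmailPayload { to: string; subject: string; body: string }
interface Enqueuer { addEmailJob(data: EmailPayload): Promise<void> }

function makeSignupHandler(queues: Enqueuer) {
  return async (payload: EmailPayload) => {
    await queues.addEmailJob(payload); // milliseconds, not seconds
    return { status: 202, message: 'Welcome! Check your inbox shortly.' };
  };
}
```

Wiring it into Express would look like `app.post('/signup', async (req, res) => { const out = await handler(req.body); res.status(out.status).json(out); })`.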
Building this changes how you think about application structure. It encourages you to break your work into small, independent jobs. It makes your application faster for users and much easier to manage and scale as a developer. The peace of mind knowing that a temporary slowdown in one service won’t crash your entire site is priceless.
What’s the first type of slow task you would move to a background job in your current project? Share your thoughts below. If this guide helped you see the structure behind reliable systems, please consider liking and sharing it. Let me know in the comments what other backend architecture topics you’d like me to cover next.