Redis-Based Queues: BullMQ (Node.js) and Laravel Queues
Running work outside the HTTP request-response cycle is how you build responsive apps. Send the welcome email, generate the PDF, call the slow third-party API — all after the user has already seen the success page. This guide covers two mature Redis-backed queue systems — BullMQ for Node.js and Laravel Queues for PHP — with production patterns for retries, scheduling, priorities, and monitoring.
Why queue, not cron?
Cron is scheduled ("run every hour"). Queues are event-driven ("user just registered — send welcome email").
Queues give you:
- Retries with backoff — if the third-party API is temporarily down, retry in 30s, then 2m, then 10m
- Concurrency control — run 4 workers in parallel to process faster, or 1 to stay under a rate limit
- Priorities — "critical payment confirmation" jumps ahead of "daily digest email"
- Delays — "send reminder in 24 hours"
- Rate limiting — "never exceed 10 SMS per second"
Cron can do none of these. For anything event-driven, queues are the right tool.
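To make "retries with backoff" concrete: BullMQ's built-in exponential strategy waits roughly baseDelay × 2^(attempt − 1) before each retry. A tiny helper (illustrative only, not a queue API) that computes such a schedule:

```javascript
// Exponential backoff schedule: delayMs, 2*delayMs, 4*delayMs, ...
// Returns the wait (in ms) before each retry attempt (1-based).
function backoffSchedule(attempts, delayMs) {
  return Array.from({ length: attempts }, (_, i) => delayMs * 2 ** i);
}

// e.g. 3 retries with a 30-second base delay:
// backoffSchedule(3, 30000) → [30000, 60000, 120000]
```

The point: each retry waits longer than the last, so a flapping third-party API gets breathing room instead of a hammering.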
Why Redis?
Redis is in-memory, fast, and has the right data structures (lists, sorted sets). Both BullMQ and Laravel Queues use Redis as the broker. With Redis's AOF persistence turned on, you get durability — if the server crashes, pending jobs survive.
You need a VPS for this — Redis requires a long-running daemon. See our VPS plans.
Install Redis on Ubuntu VPS:
```
sudo apt update
sudo apt install redis-server
sudo systemctl enable redis-server
sudo systemctl start redis-server
redis-cli ping   # should reply PONG
```

Enable AOF persistence by editing /etc/redis/redis.conf:

```
appendonly yes
appendfsync everysec
```

Then `sudo systemctl restart redis-server`.
BullMQ (Node.js)
BullMQ is the modern evolution of the older bull package. Retries with backoff, delays, priorities, cron-like schedules — all built into one library.
Install
```
npm install bullmq ioredis
```

Producer — add jobs to the queue
```javascript
// src/queues/email.queue.js
import { Queue } from 'bullmq';

const connection = {
  host: process.env.REDIS_HOST || 'localhost',
  port: Number(process.env.REDIS_PORT || 6379),
};

export const emailQueue = new Queue('email', { connection });

// Add a job anywhere in your app
export async function queueWelcomeEmail(userId) {
  await emailQueue.add(
    'welcome-email',  // job name
    { userId },       // job data — gets JSON-serialised
    {
      attempts: 3,
      backoff: { type: 'exponential', delay: 5000 },
    }
  );
}
```

In your user-registration handler:
```javascript
import { queueWelcomeEmail } from './queues/email.queue.js';

app.post('/register', async (req, res) => {
  const user = await createUser(req.body);
  await queueWelcomeEmail(user.id); // queued, returns immediately
  res.json({ success: true });      // user sees success fast
});
```

Worker — process jobs from the queue
```javascript
// src/workers/email.worker.js
import { Worker } from 'bullmq';
import { sendEmail } from '../lib/email.js';
import { prisma } from '../lib/prisma.js';

const connection = {
  host: process.env.REDIS_HOST || 'localhost',
  port: Number(process.env.REDIS_PORT || 6379),
};

new Worker(
  'email',
  async (job) => {
    switch (job.name) {
      case 'welcome-email': {
        const user = await prisma.user.findUnique({ where: { id: job.data.userId } });
        if (!user) throw new Error(`User ${job.data.userId} not found`);
        await sendEmail(user.email, 'Welcome!', `Hi ${user.name}...`);
        return { sent: true };
      }
      case 'password-reset':
        // handle other job types
        break;
    }
  },
  {
    connection,
    concurrency: 5, // process up to 5 jobs in parallel
    limiter: {
      max: 100,       // max 100 jobs per 60 seconds
      duration: 60000,
    },
  }
);

console.log('Email worker started');
```

Run it as a separate process (kept alive by PM2):

```
pm2 start src/workers/email.worker.js --name email-worker
```

Advanced features
Delays:
```javascript
await emailQueue.add('reminder', { userId }, { delay: 24 * 60 * 60 * 1000 }); // 24h
```

Priorities:
```javascript
await emailQueue.add('urgent', data, { priority: 1 }); // lower number = higher priority
await emailQueue.add('bulk', data, { priority: 10 });
```

Repeatable / cron-like:
```javascript
await emailQueue.add(
  'daily-digest',
  {},
  { repeat: { pattern: '0 9 * * *' } } // every day at 9 AM
);
```

Rate limiting:
```javascript
const worker = new Worker('email', processor, {
  limiter: { max: 10, duration: 1000 }, // 10 jobs per second max
});
```

Monitoring — Bull Board UI
```
npm install @bull-board/api @bull-board/express
```

```javascript
import { createBullBoard } from '@bull-board/api';
import { BullMQAdapter } from '@bull-board/api/bullMQAdapter.js';
import { ExpressAdapter } from '@bull-board/express';
import { emailQueue } from './queues/email.queue.js';

const serverAdapter = new ExpressAdapter();
serverAdapter.setBasePath('/admin/queues');

createBullBoard({
  queues: [new BullMQAdapter(emailQueue)],
  serverAdapter,
});

app.use('/admin/queues', requireAdminAuth, serverAdapter.getRouter());
```

Visit /admin/queues for a UI showing pending / active / completed / failed jobs, with retry and delete buttons. Protect it with admin auth — the board exposes job data, which may include sensitive info.
Laravel Queues
Laravel ships with a queue abstraction. Just configure Redis as the connection, write Job classes, dispatch them.
Configure
In .env:
```
QUEUE_CONNECTION=redis
REDIS_HOST=127.0.0.1
REDIS_PASSWORD=null
REDIS_PORT=6379
```

Install a Redis client — either the phpredis PHP extension, or the pure-PHP predis package via Composer:

```
composer require predis/predis
```

Create a Job
```
php artisan make:job SendWelcomeEmail
```

```php
<?php

namespace App\Jobs;

use App\Models\User;
use App\Mail\WelcomeEmail;
use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;
use Illuminate\Queue\InteractsWithQueue;
use Illuminate\Queue\SerializesModels;
use Illuminate\Support\Facades\Mail;

class SendWelcomeEmail implements ShouldQueue {
    use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;

    public $tries = 3;              // retry attempts
    public $timeout = 60;           // kill if longer than 60 seconds
    public $backoff = [10, 30, 90]; // seconds between retries

    public function __construct(public User $user) {}

    public function handle(): void {
        Mail::to($this->user->email)->send(new WelcomeEmail($this->user));
    }

    public function failed(\Throwable $e): void {
        // Called after all retries exhausted — log, alert, compensate
        \Log::error("Welcome email permanently failed for user {$this->user->id}: {$e->getMessage()}");
    }
}
```

Dispatch
```php
use App\Jobs\SendWelcomeEmail;

public function register(Request $request) {
    $user = User::create($request->validated());

    SendWelcomeEmail::dispatch($user)
        ->delay(now()->addMinutes(5)); // optional delay
        // ->onQueue('high');          // optional priority queue

    return response()->json(['user' => $user]);
}
```

Run the worker
```
php artisan queue:work redis --tries=3 --sleep=3 --timeout=60
```

In production, run it via Supervisord (see our Supervisord guide) so the worker survives crashes and auto-restarts.
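A minimal Supervisord program sketch for the worker above — the program name, paths, and user are illustrative and need adjusting to your server:

```ini
[program:laravel-worker]
command=php /var/www/app/artisan queue:work redis --tries=3 --sleep=3 --timeout=60
numprocs=2                                    ; two worker processes
process_name=%(program_name)s_%(process_num)02d
autostart=true
autorestart=true
stopwaitsecs=90                               ; let the current job finish on stop
user=www-data
stdout_logfile=/var/log/laravel-worker.log
redirect_stderr=true
```

Note `stopwaitsecs` is set above the job `timeout`, so a graceful stop never kills a job mid-flight.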
Laravel Horizon — the nice monitoring UI
```
composer require laravel/horizon
php artisan horizon:install
php artisan horizon
```

Visit /horizon — a dashboard showing job metrics, failed jobs, and worker status.
Running multiple priority queues
With QUEUE_CONNECTION=redis already set, run the worker with a list of queues in priority order:

```
php artisan queue:work redis --queue=high,default,low
```

Dispatch to a specific queue:

```php
SendUrgentEmail::dispatch($user)->onQueue('high');
```

Queue patterns worth knowing
Email sending — the classic use case
Instead of sending inside the HTTP request:
```php
// Slow — blocks the user's browser for 2 seconds
Mail::to($user)->send(new WelcomeEmail($user));

// Fast — returns immediately, email sends async
SendWelcomeEmail::dispatch($user);
```

The user sees a response in under 100 ms regardless of SMTP speed.
Webhook fan-out
One source event → many async tasks:
```php
// When an order is paid, run the follow-up steps as a chain
OrderPaid::dispatch($order)        // updates order status
    ->chain([
        new SendReceiptEmail($order),
        new UpdateInventory($order),
        new CalculateCommissions($order),
        new NotifyWarehouse($order),
    ]);
```

A chain runs its steps sequentially and stops if one fails. When the steps are independent, dispatch each job separately (or use `Bus::batch`) instead — then each retries on its own and failures don't cascade.
Rate-limited external APIs
If you send via MSG91 (1000 SMS/sec limit) but want to queue 50,000 SMS at once, configure the worker:
```javascript
// BullMQ
new Worker('sms', processor, {
  limiter: { max: 900, duration: 1000 }, // stay slightly under the limit
});
```

In Laravel, attach the `RateLimited('sms')` job middleware (returned from the job's `middleware()` method), with a matching `RateLimiter` defined.
Image processing with progress
Long-running jobs can report progress:
```javascript
// BullMQ worker
await job.updateProgress(25);
// ... more work
await job.updateProgress(50);
```

The front-end polls the job's progress to show a progress bar.
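The reporting loop can be factored into a small helper — a sketch with illustrative names, where in a real BullMQ worker the reporter would be `(p) => job.updateProgress(p)`:

```javascript
// Process items one at a time, reporting an integer percentage after each.
// `handleItem` does the real work; `reportProgress` receives values 0–100.
async function processWithProgress(items, handleItem, reportProgress) {
  for (let i = 0; i < items.length; i++) {
    await handleItem(items[i]);
    const percent = Math.round(((i + 1) / items.length) * 100);
    await reportProgress(percent);
  }
}
```

Inside a worker this would look like `await processWithProgress(images, resizeImage, (p) => job.updateProgress(p));`.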
Production operational concerns
Worker deployment + restart
When you deploy new code, old workers are still running old code. Restart them:
- BullMQ: `pm2 reload email-worker`
- Laravel: `php artisan queue:restart` (signals running workers to finish the current job and exit; Supervisord restarts them with the new code)
Graceful shutdown
Jobs mid-execution shouldn't be killed. Both BullMQ and Laravel handle SIGTERM properly — workers finish the current job before exiting. Allow 60–120 seconds for graceful shutdown in your process manager config.
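If you want an explicit shutdown hook (for cleanup, logging, or a hard deadline), a minimal sketch — `installGracefulShutdown` is an illustrative name, and the injectable `exit` parameter exists only so the helper is testable; BullMQ's `worker.close()` really does wait for the active job:

```javascript
// Install SIGTERM/SIGINT handling for a BullMQ-style worker: stop taking new
// jobs, wait for the active one via worker.close(), force-exit past a deadline.
function installGracefulShutdown(worker, { timeoutMs = 90000, exit = (c) => process.exit(c) } = {}) {
  const handler = async () => {
    const deadline = setTimeout(() => exit(1), timeoutMs); // give up eventually
    if (deadline.unref) deadline.unref(); // don't keep the process alive for it
    await worker.close(); // waits for the in-flight job to finish
    exit(0);
  };
  process.once('SIGTERM', handler);
  process.once('SIGINT', handler);
  return handler;
}
```

Call `installGracefulShutdown(worker)` right after constructing the Worker, and pair `timeoutMs` with a matching `kill_timeout` (PM2) or `stopwaitsecs` (Supervisord).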
Failed jobs
Jobs that exceed attempts land in a "failed" store. Review periodically:
- BullMQ: query via `queue.getFailed()` or the Bull Board UI
- Laravel: `php artisan queue:failed` to list, `queue:retry` to reprocess
Don't ignore failed jobs — each represents work that didn't happen. Set up alerts.
Redis memory
Jobs pile up if workers are slower than producers. Set a Redis maxmemory so the box doesn't swap, but keep maxmemory-policy at noeviction for a queue Redis — an eviction policy like allkeys-lru can silently delete job keys and lose work. Alert on memory growth and scale workers instead.
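A sketch of the relevant lines in /etc/redis/redis.conf — the 2 GB cap is an arbitrary example, and noeviction is what BullMQ's documentation recommends for queue data:

```
maxmemory 2gb                  # cap Redis memory (example value)
maxmemory-policy noeviction    # don't silently evict queue keys
appendonly yes                 # survive restarts without losing jobs
```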
Common pitfalls
- Large payloads in job data. Don't pass file contents or massive JSON through the queue. Pass a DB id / S3 key and let the worker fetch.
- No retry policy. Transient failures become permanent. Always set `attempts: 2` or higher, with backoff.
- Silent failures. The `failed` event / `failed()` method must log somewhere you actually check. Review daily.
- Worker crashes mid-job. Use pm2 / Supervisord to auto-restart. The job becomes "stalled" in BullMQ; the stall detector reassigns it.
- Forgetting `php artisan queue:restart` on deploy. Workers run old code until you signal them.
- Running many workers but hitting the DB connection limit. `concurrency: 10` means up to 10 jobs in flight, each potentially holding a DB connection. Size your DB pool accordingly.
- Duplicate job submission. User double-clicks → two jobs queued → email sent twice. Use idempotency keys (a deterministic `jobId` like `email-welcome-${userId}`) in BullMQ to deduplicate.
- Queueing to a Redis that isn't persistent. The default Redis config without AOF loses jobs on a crash. Enable `appendonly yes`.
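On the duplicate-submission pitfall: BullMQ ignores an `add()` whose `jobId` matches a job still in the queue, so a deterministic id deduplicates for you. A sketch of a small options builder (the function name is illustrative):

```javascript
// Build add() options with a deterministic jobId so the same user can't be
// queued twice while the first welcome-email job still exists in the queue.
function welcomeEmailJobOptions(userId) {
  return {
    jobId: `email-welcome-${userId}`, // duplicate adds with this id are ignored
    attempts: 3,
    backoff: { type: 'exponential', delay: 5000 },
  };
}

// Usage:
// await emailQueue.add('welcome-email', { userId }, welcomeEmailJobOptions(userId));
```

Once the job completes and is removed, the same id can be used again — so this guards against double-clicks, not against re-sends days later.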
Frequently asked questions
Can I run queues on shared hosting?
No — queues need a long-running Redis daemon. Use our VPS plans.
What's the difference between BullMQ and Bull?
Bull is the older library. BullMQ is the rewrite — better types, better worker model, actively maintained. Use BullMQ for new projects.
How many workers should I run?
Start with concurrency equal to the number of CPUs. For CPU-bound work (image resize, encryption), match the CPU count. For I/O-bound work (API calls, SMTP), go higher — 10–50 concurrent jobs per CPU isn't unusual.
Can I share one Redis instance across multiple apps?
Yes — use the prefix option to namespace. Each app's queue names stay separate. But monitor memory; heavy apps may need their own Redis.
What about SQS / Google Pub/Sub?
Cloud-native queues are great for high-volume, multi-region workloads. Laravel has an `sqs` driver. BullMQ is Redis-specific; if you need SQS in Node, look at @aws-sdk/client-sqs.
Should I use Redis Streams instead?
BullMQ uses Redis Streams under the hood for some features. You don't typically interact with Streams directly unless building a custom queue from scratch — use BullMQ / Laravel Queues and inherit their battle-tested patterns.
Can a job enqueue another job?
Yes. From inside the worker handler, dispatch another job. Common pattern: "process order" job dispatches "send email" + "update inventory" jobs in parallel.
Questions on scaling queues for your Node.js or Laravel app? [email protected] — we help VPS customers configure Redis + worker processes as standard support.