# BullMQ vs Sidekiq vs Celery — Picking a Job Queue in 2026
**TL;DR:** Every production app needs a background job queue — for email sending, PDF generation, webhooks, scheduled tasks. This decision guide compares BullMQ (Node.js), Sidekiq (Ruby), Celery (Python), Laravel Horizon (PHP), Temporal, and AWS SQS by throughput, reliability, tooling, and operational burden on a DomainIndia VPS.
## The six choices
| Queue | Language | Backend | Throughput/VPS | Maturity |
|---|---|---|---|---|
| BullMQ | Node.js | Redis | 10K+ jobs/sec | 6+ years |
| Sidekiq | Ruby | Redis | 10K+ jobs/sec | 12+ years |
| Celery | Python | Redis/RabbitMQ | 5K+ jobs/sec | 16+ years |
| Laravel Queues / Horizon | PHP | Redis/DB | 3-5K jobs/sec | 8+ years |
| Temporal | Any (SDK) | Postgres/Cassandra | 1K+ workflows/sec | 5+ years |
| AWS SQS + Lambda | Any | AWS-managed | Auto-scales | Proven |
## Decision tree
**1. What's your primary language?**
- Node.js → BullMQ
- Ruby/Rails → Sidekiq (the de facto standard; Solid Queue if you want DB-backed)
- Python → Celery (dominant) or RQ (simpler)
- PHP → Laravel Horizon
- Polyglot → Temporal or SQS
**2. Do you need durable workflows (multi-step, retries, human approval)?**
- Yes → Temporal (best-in-class for workflows)
- No → standard queue works
**3. Are you on DomainIndia shared hosting or VPS?**
- Shared → DB-backed queue (Laravel's built-in, Solid Queue for Rails)
- VPS → Redis-backed (BullMQ, Sidekiq, Celery)
## BullMQ — Node.js champion
An evolution of the `bull` package, rewritten in TypeScript and battle-tested at scale.
**Strengths:**
- Excellent TypeScript types
- Rich features (rate limiting, priorities, backoff, repeat, groups)
- Bull Board UI for monitoring
- Redis-backed — widely available
**Weaknesses:**
- Redis-only (no other backend)
- Some edge cases in repeat jobs
**Example:**
```typescript
import { Queue, Worker } from 'bullmq';
import IORedis from 'ioredis';

const connection = new IORedis({ maxRetriesPerRequest: null });
const emailQueue = new Queue('email', { connection });

// Enqueue with 3 attempts and exponential backoff (1s, 2s, 4s)
await emailQueue.add('welcome', { userId: 42 }, {
  attempts: 3,
  backoff: { type: 'exponential', delay: 1000 },
});

// Worker process
new Worker('email', async (job) => {
  console.log(`Sending welcome to user ${job.data.userId}`);
  await sendEmail(job.data.userId);
}, { connection, concurrency: 10 });
```
## Sidekiq — Ruby default
12 years old, still the gold standard. See our [Sidekiq article](https://domainindia.com/support/kb/sidekiq-background-jobs-rails-domainindia-vps).
**Strengths:**
- Rock-solid reliability
- Rich ecosystem (sidekiq-cron, sidekiq-unique-jobs, sidekiq-batch)
- Official Pro/Enterprise tiers for advanced features
- Excellent web UI built in
**Weaknesses:**
- Ruby-only
- Some powerful features behind paid tiers
## Celery — Python standard
```python
from celery import Celery

app = Celery('myapp', broker='redis://localhost:6379/0')

@app.task(bind=True, max_retries=3)
def send_email(self, user_id):
    try:
        do_send(user_id)
    except Exception as exc:
        # Re-enqueue in 60s, up to max_retries times
        raise self.retry(exc=exc, countdown=60)

# Call
send_email.delay(42)
```
**Strengths:**
- Multi-backend (Redis, RabbitMQ, Amazon SQS)
- Beat scheduler built in
- Massive ecosystem
- Works with Django/Flask/FastAPI seamlessly
**Weaknesses:**
- Configuration complexity — lots of knobs
- Debugging stack traces can be tricky
- Breaking changes between versions
**Lighter alternative — RQ:**
```python
from rq import Queue
from redis import Redis
q = Queue(connection=Redis())
q.enqueue(send_email, 42)
```
Simpler when you don't need Celery's features.
## Laravel Horizon (PHP)
```php
class SendWelcomeEmail implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;

    public function __construct(public User $user) {}

    public function handle(): void
    {
        Mail::to($this->user)->send(new WelcomeMail());
    }
}

// Dispatch from anywhere in the app
SendWelcomeEmail::dispatch($user)->onQueue('emails');
```
**Strengths:**
- Laravel-native — zero config for new apps
- Horizon dashboard for monitoring + tagging
- Supports Redis, DB, SQS, Beanstalk
**Weaknesses:**
- Lower throughput than Sidekiq/BullMQ at scale
- PHP-only
## Temporal — workflows, not queues
Different paradigm. Define workflows as code; Temporal guarantees durability.
```typescript
import { proxyActivities, defineSignal, setHandler, condition } from '@temporalio/workflow';
import type * as activities from './activities';

const { validatePayment, reserveInventory, shipOrder } =
  proxyActivities<typeof activities>({ startToCloseTimeout: '5 minutes' });
const paymentConfirmed = defineSignal('payment-confirmed');

export async function processOrder(orderId: string): Promise<void> {
  let confirmed = false;
  setHandler(paymentConfirmed, () => { confirmed = true; });
  await validatePayment(orderId);   // each activity is retried individually
  await reserveInventory(orderId);
  // Pause for up to 30 days waiting for an external signal
  await condition(() => confirmed, '30 days');
  await shipOrder(orderId);
}
```
**Strengths:**
- Multi-step workflows with pause/resume
- Automatic state persistence
- Time-travel debugging
- Language-agnostic (Go, Java, TypeScript, Python, PHP)
**Weaknesses:**
- Operational complexity (Postgres/Cassandra cluster for state)
- Overkill for simple "send email" jobs
- Learning curve
Best for: multi-step business processes (order fulfillment, onboarding flows).
## Throughput benchmark (approximate)
On a 4 GB DomainIndia VPS, single worker process, simple "hash some bytes" job:
| Queue | Jobs/sec (single worker) | Jobs/sec (10 workers) |
|---|---|---|
| BullMQ | 2,000 | 18,000 |
| Sidekiq | 2,500 | 22,000 |
| Celery | 1,200 | 9,000 |
| Laravel | 800 | 6,000 |
| Temporal | 500 | 3,500 (activities are more expensive) |
For most apps, throughput isn't the bottleneck. Developer experience + operational maturity matters more.
## Feature comparison
| Feature | BullMQ | Sidekiq | Celery | Laravel |
|---|---|---|---|---|
| Rate limiting | Yes | Pro only | Plugin | Manual |
| Scheduled jobs | Built-in | sidekiq-cron | Beat | Built-in |
| Retries with backoff | Yes | Yes | Yes | Yes |
| Priority queues | Yes | Yes | Yes | Yes |
| Unique jobs | Via option | Plugin | Plugin | Built-in |
| Batches (callback when N done) | Yes | Pro only | Canvas (groups) | Batches |
| Monitoring UI | Bull Board | Built-in | Flower | Horizon |
| Dead-letter / failed jobs | Yes | Yes | Yes | Yes |
## Choose by pain-point
**"I just need to send emails in background":**
- Laravel: built-in queues
- Rails: `deliver_later` + Sidekiq
- Node: BullMQ
- Django: Celery with Redis
**"I need scheduled tasks (cron-like)":**
- BullMQ: repeatable jobs via the `repeat` option on `queue.add()`
- Sidekiq: `sidekiq-cron` gem
- Celery: Beat
- Laravel: `schedule()` method
**"I need to orchestrate multi-step business flows":**
- Temporal — purpose-built
**"I'm on serverless (AWS Lambda, Vercel)":**
- SQS + Lambda
- Upstash QStash (HTTP-based queue, serverless-friendly)
**"I want zero ops":**
- Upstash Redis + BullMQ (managed Redis, free tier)
- Cloudflare Queues + Workers
- AWS SQS + Lambda
## Patterns that apply everywhere
### Idempotency
Jobs can run twice (retry after timeout). Make them idempotent:
```python
def send_welcome(user_id):
    # Check if already sent
    if db.email_log.exists(user_id=user_id, type='welcome'):
        return
    do_send(user_id)
    db.email_log.create(user_id=user_id, type='welcome')
```
### Timeout
Every job should have a max runtime. Crashes + infinite loops are real.
BullMQ (unlike the legacy Bull) has no per-job `timeout` option, so enforce the deadline inside the processor (`processJob` stands in for your handler):

```typescript
// BullMQ: race the handler against a 30s deadline
await Promise.race([
  processJob(job),
  new Promise((_, reject) => setTimeout(() => reject(new Error('job timed out')), 30_000)),
]);
```
### Retry with exponential backoff
```python
# Celery: autoretry with true exponential backoff and jitter
@app.task(bind=True, max_retries=5, autoretry_for=(Exception,),
          retry_backoff=True, retry_backoff_max=600, retry_jitter=True)
def process(self, data):
    ...
```
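The schedule these options produce is easy to compute by hand; a minimal helper showing the standard pattern (exponential growth, a cap, and optional full jitter — not any library's exact formula):

```python
import random

def backoff_delay(attempt, base=1.0, cap=300.0, jitter=False):
    """Delay before retry N (1-based): base * 2^(N-1) seconds, capped at `cap`.
    With jitter=True, pick uniformly in [0, delay] to avoid thundering herds."""
    delay = min(cap, base * 2 ** (attempt - 1))
    return random.uniform(0, delay) if jitter else delay
```

With the defaults, attempts 1 through 5 wait 1, 2, 4, 8, and 16 seconds.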
### Dead-letter queue
When job fails all retries, move to DLQ instead of vanishing. Investigate manually.
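A toy sketch of the pattern, with an in-memory list standing in for the Redis list or DB table a real DLQ would use:

```python
MAX_ATTEMPTS = 3
dead_letter = []  # in production: a Redis list or DB table you inspect by hand

def run_with_dlq(job, handler):
    """Try a job up to MAX_ATTEMPTS times; park it in the DLQ instead of dropping it."""
    last_error = None
    for attempt in range(1, MAX_ATTEMPTS + 1):
        try:
            return handler(job)
        except Exception as exc:
            last_error = exc  # real workers would sleep with backoff here
    dead_letter.append({'job': job, 'error': repr(last_error), 'attempts': MAX_ATTEMPTS})
```

Keeping the original payload and last error alongside the job makes manual replay straightforward.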
## Monitoring essentials
Whatever queue you pick, monitor:
- **Queue depth** — backlog size over time
- **Processing rate** — jobs/sec
- **Failure rate** — failed vs succeeded
- **Age of oldest pending job** — are we keeping up?
- **Dead-letter count** — something's wrong
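These reduce to a handful of numbers per scrape; a sketch of computing them from a snapshot of the queue (field names are made up for illustration):

```python
import time

def queue_metrics(pending, processed, failed, now=None):
    """pending: list of jobs, each a dict with 'enqueued_at' (epoch seconds)."""
    now = time.time() if now is None else now
    oldest = max((now - j['enqueued_at'] for j in pending), default=0.0)
    total = processed + failed
    return {
        'depth': len(pending),
        'oldest_pending_s': oldest,
        'failure_rate': failed / total if total else 0.0,
    }
```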
See our [Observability article](https://domainindia.com/support/kb/production-observability-prometheus-grafana-loki-vps) for the monitoring stack.
## Common pitfalls
- **Non-idempotent jobs** — at-least-once delivery means jobs can run twice; guard side effects
- **No timeouts** — one hung job can stall a worker indefinitely
- **Unbounded retries** — a poison job retried forever clogs the queue; cap attempts and use a dead-letter queue
- **Redis without persistence** — enable AOF or jobs vanish on restart
- **Workers on shared hosting** — long-running processes need a VPS
## FAQ
**Q: Can I run the queue on shared hosting?**
Limited — it needs long-running workers. On our cPanel shared hosting, Node.js via "Setup Node.js App" can host BullMQ with 1 worker. For production: VPS.

**Q: How many workers on a 2 GB VPS?**
Depends on the job type. CPU-heavy: 2-4 workers. I/O-heavy: 10-20. Start small, monitor CPU/RAM, scale.

**Q: Do I need a separate queue server?**
Not for small apps — run Redis on the same VPS. For fault isolation at scale: a separate Redis VPS.

**Q: What if Redis crashes with jobs in flight?**
Redis AOF persistence keeps jobs across restarts. For critical jobs, add your own DB-level "pending" table that you mark complete after success.

**Q: Workflow engine vs queue?**
Queue = short isolated tasks. Workflow = multi-step process. A queue can emulate a workflow via chaining; workflow engines are purpose-built for it.
Run production queues on a DomainIndia VPS.
Order VPS