Supervisord: Running Python, PHP, and Node Processes on a VPS
PM2 is the Node.js favourite. Supervisord is the cross-language equivalent — a process manager that keeps Python, PHP, Ruby, and shell-script workers alive on a Linux server. Widely used for Laravel queue workers, Celery workers, and custom long-running daemons. This guide covers install, config, auto-restart on crash, log rotation, and the common patterns.
Why a process manager?
Anything you run with `python worker.py &` dies when:
- You close the SSH session
- The server reboots
- The process crashes (unhandled exception, OOM)
- You run `killall python` by accident
A process manager owns the lifecycle — starts on boot, restarts on crash, logs stdout / stderr, exposes a control interface.
Supervisord's niche: language-agnostic, simple configuration, clean multi-process grouping. Mature and widely deployed.
Install
Ubuntu / Debian:
```bash
sudo apt install supervisor
```

AlmaLinux / CentOS:

```bash
sudo dnf install supervisor
```

Verify the service:

```bash
sudo systemctl status supervisor    # Debian
sudo systemctl status supervisord   # RHEL family
```

Enable on boot:

```bash
sudo systemctl enable supervisor
```

Configure your first program
Supervisord reads program configs from /etc/supervisor/conf.d/*.conf (Debian) or /etc/supervisord.d/*.ini (RHEL). Create one file per program:
```ini
# /etc/supervisor/conf.d/laravel-worker.conf
[program:laravel-worker]
process_name=%(program_name)s_%(process_num)02d
command=php /home/myapp/artisan queue:work redis --sleep=3 --tries=3 --max-time=3600
directory=/home/myapp
autostart=true
autorestart=true
user=myappuser
numprocs=4
redirect_stderr=true
stdout_logfile=/var/log/laravel-worker.log
stdout_logfile_maxbytes=10MB
stdout_logfile_backups=5
stopwaitsecs=3600
```

Breakdown of the key fields:
- `process_name` — unique name per spawned process (`%(process_num)02d` gives `00`, `01`, `02`, `03` suffixes)
- `command` — the exact command to run, with full path to the executable
- `directory` — working directory
- `autostart=true` — start when supervisord starts
- `autorestart=true` — restart if it exits (for any reason)
- `user` — run as this non-root user (never run workers as root)
- `numprocs=4` — spawn 4 instances for concurrency
- `redirect_stderr=true` — merge stderr into stdout (easier log handling)
- `stdout_logfile` — where to write logs
- `stdout_logfile_maxbytes` / `stdout_logfile_backups` — log rotation
- `stopwaitsecs=3600` — on stop, wait up to 1 hour for graceful shutdown (don't kill mid-job)
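That `stopwaitsecs` grace period only helps if the worker actually handles SIGTERM. A minimal sketch of the pattern in Python — a hypothetical worker loop, not part of Laravel's `queue:work` (which already does this internally):

```python
# Hypothetical Python worker showing the shutdown pattern stopwaitsecs
# relies on: trap SIGTERM, finish the current job, then exit.
import signal

shutting_down = False

def request_shutdown(signum, frame):
    # Don't exit here -- just set a flag; the loop stops between jobs.
    global shutting_down
    shutting_down = True

signal.signal(signal.SIGTERM, request_shutdown)

def process_jobs(jobs):
    """Process jobs until the queue is empty or SIGTERM arrives."""
    completed = []
    for job in jobs:
        if shutting_down:
            break                      # stop between jobs, never mid-job
        completed.append(job.upper())  # stand-in for real work
    return completed
```

On `supervisorctl stop`, supervisord sends SIGTERM, the flag flips, and the loop exits cleanly after the current job — well within `stopwaitsecs`.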
Apply the config
```bash
sudo supervisorctl reread   # scan for new config files
sudo supervisorctl update   # apply changes
sudo supervisorctl status   # see all programs
```

You should see:
```
laravel-worker:laravel-worker_00   RUNNING   pid 12345, uptime 0:00:05
laravel-worker:laravel-worker_01   RUNNING   pid 12346, uptime 0:00:05
laravel-worker:laravel-worker_02   RUNNING   pid 12347, uptime 0:00:05
laravel-worker:laravel-worker_03   RUNNING   pid 12348, uptime 0:00:05
```

Day-to-day commands
```bash
sudo supervisorctl status                         # list all programs + status
sudo supervisorctl start laravel-worker:*         # start all instances
sudo supervisorctl stop laravel-worker:*          # stop all
sudo supervisorctl restart laravel-worker:*       # graceful restart
sudo supervisorctl tail -f laravel-worker         # tail stdout
sudo supervisorctl tail -f laravel-worker stderr  # tail stderr
```

The `*` wildcard handles grouped processes — `laravel-worker:*` hits all 4 instances at once.
Group multiple programs
If your app has several different worker types, group them for atomic start/stop:
```ini
# /etc/supervisor/conf.d/myapp-workers.conf
[program:email-worker]
command=php /home/myapp/artisan queue:work redis --queue=emails
user=myappuser
autostart=true
autorestart=true
stdout_logfile=/var/log/myapp/email-worker.log

[program:image-worker]
command=php /home/myapp/artisan queue:work redis --queue=images
; numprocs > 1 requires %(process_num) in process_name
process_name=%(program_name)s_%(process_num)02d
user=myappuser
autostart=true
autorestart=true
numprocs=2
stdout_logfile=/var/log/myapp/image-worker.log

[program:payment-worker]
command=php /home/myapp/artisan queue:work redis --queue=payments --timeout=300
user=myappuser
autostart=true
autorestart=true
stopwaitsecs=300
stdout_logfile=/var/log/myapp/payment-worker.log

[group:myapp]
programs=email-worker,image-worker,payment-worker
```

Now you can manage the whole group:
```bash
sudo supervisorctl restart myapp:*
sudo supervisorctl status myapp:*
```

Environment variables — secure handling
Two options, both with trade-offs.
Option 1: the `environment=` directive
```ini
[program:myapp-worker]
command=php worker.php
environment=
    DB_PASSWORD="secret123",
    APP_KEY="base64:...",
    REDIS_PASSWORD="another-secret"
```

Problem: the secrets sit in plain text in a config file under `/etc/` (often world-readable), and the process environment is exposed via `/proc/PID/environ` to root and the process owner.
Option 2: source from env file (preferred)
```ini
[program:myapp-worker]
command=/bin/bash -c "source /home/myapp/.env && exec php worker.php"
user=myappuser
```

Here the env file has restricted permissions:

```bash
sudo chown myappuser:myappuser /home/myapp/.env
sudo chmod 600 /home/myapp/.env
```

Only `myappuser` (and root) can read the env file. `ps` doesn't show the expanded values — it shows the literal `/bin/bash -c "source ..."`.
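For a Python worker you can skip the `bash -c` wrapper and read the file in-process. A minimal sketch, assuming simple `KEY=value` lines — no escaping or variable interpolation (use `python-dotenv` for anything fancier):

```python
def load_env(path):
    """Parse simple KEY=value lines into a dict.

    Skips blank lines and # comments; strips surrounding double
    quotes from values. No escaping or interpolation support.
    """
    env = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            key, sep, value = line.partition("=")
            if not sep:
                continue  # ignore malformed lines without '='
            env[key.strip()] = value.strip().strip('"')
    return env
```

The same `chmod 600` logic applies: the file is readable only because the worker runs as its owner.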
Log rotation — built-in + logrotate
Supervisord has built-in log rotation:
```ini
stdout_logfile_maxbytes=10MB
stdout_logfile_backups=5
```

At 10 MB the active log rotates to `.log.1`; supervisord keeps 5 archives (`.log.1` through `.log.5`) and deletes anything older. Total disk footprint is capped at 60 MB.
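The cap is simple arithmetic — one active file plus the backups, each bounded by `stdout_logfile_maxbytes`. As a sketch:

```python
def max_log_footprint_mb(maxbytes_mb, backups):
    # The active log and each numbered backup can each reach maxbytes.
    return maxbytes_mb * (1 + backups)

print(max_log_footprint_mb(10, 5))  # 60
```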
To hand rotation to the system's logrotate instead, set `stdout_logfile_maxbytes=0` (so the two mechanisms don't rotate the same file) and add:
```
# /etc/logrotate.d/myapp
/var/log/myapp/*.log {
    daily
    rotate 14
    compress
    missingok
    notifempty
    copytruncate
}
```

`copytruncate` is important — it copies the log file and truncates the original in place, so logrotate never needs to signal supervisord to reopen the file.
Running as a non-root user
Every program config should have user=. Running workers as root is a security risk — if the worker is compromised, the attacker has root on the VPS.
```ini
user=myappuser
```

The user must exist:

```bash
sudo adduser --system --group --no-create-home myappuser
```

File permissions: the worker user must be able to read the app code and write to the log directories and any upload/storage dirs.
Supervisord vs systemd
Modern Linux has systemd, which also manages services. Overlap is significant. When to pick which:
| Choose Supervisord if... | Choose systemd if... |
|---|---|
| Multi-process grouping (`numprocs=N`) matters | Single-process services |
| You want to manage many small workers | You want tight integration with OS services |
| You want `supervisorctl` control | You prefer `systemctl` |
| Simpler config syntax | You need socket activation, scheduled runs, etc. |
For Laravel queue workers specifically, Supervisord is the canonical recommendation — the Laravel docs show Supervisord config, not systemd. For a single long-running Node service, systemd is fine.
Don't mix — pick one process manager per server and stick with it.
Common patterns
Laravel queue worker
Covered in detail above. Key points:
- `--max-time=3600` in the artisan command — the worker exits on its own every hour and Supervisord restarts it, which contains slow memory leaks
- `numprocs=N` for N parallel workers
- A grace period of `stopwaitsecs=3600` so `queue:restart` doesn't kill a worker mid-job
Python Celery worker
```ini
[program:celery-worker]
command=/home/myapp/venv/bin/celery -A myapp worker --loglevel=info --concurrency=4
directory=/home/myapp
user=myappuser
autostart=true
autorestart=true
stdout_logfile=/var/log/myapp/celery-worker.log
stopwaitsecs=600
```

Celery has its own concurrency model — use `--concurrency` in Celery with `numprocs=1` in Supervisord, or vice versa. Don't multiply both.
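The effective parallelism is the product of the two knobs, which is worth checking before a deploy — an illustrative sketch:

```python
def total_concurrent_tasks(numprocs, concurrency):
    # supervisord spawns numprocs Celery masters; each master runs
    # `concurrency` child workers.
    return numprocs * concurrency

print(total_concurrent_tasks(4, 4))  # 16 -- likely too many for a small VPS
print(total_concurrent_tasks(1, 4))  # 4  -- numprocs=1, tune Celery only
```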
Node.js script (if not using PM2)
```ini
[program:nodeapp]
command=/usr/bin/node /home/myapp/server.js
directory=/home/myapp
user=myappuser
autostart=true
autorestart=true
environment=NODE_ENV="production",PORT="3000"
stdout_logfile=/var/log/myapp/node.log
```

For Node.js specifically, PM2 offers better ergonomics (cluster mode, zero-downtime reload). Use PM2 for Node, Supervisord for everything else.
Long-running Python ETL script
```ini
[program:etl]
command=/home/etl/venv/bin/python /home/etl/pipeline.py
directory=/home/etl
user=etl
autostart=true
autorestart=true
; must run 30s before it counts as "started successfully"
startsecs=30
; give up after 3 failed start attempts
startretries=3
stopsignal=TERM
stopwaitsecs=60
stdout_logfile=/var/log/etl.log
```

`startsecs` prevents rapid-fail loops. If the process crashes within 30 seconds of starting, the attempt isn't counted as "started" — after `startretries` such crashes, supervisord marks the program FATAL and stops trying.
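That startup accounting can be sketched as a toy model — this mimics supervisord's behaviour, not its real implementation, and ignores the exponential backoff delay between retries:

```python
def classify_start_attempts(uptimes, startsecs=30, startretries=3):
    """Return a supervisord-style state after a sequence of run
    durations (in seconds) for one program."""
    failures = 0
    for uptime in uptimes:
        if uptime >= startsecs:
            return "RUNNING"   # survived startsecs: counted as started
        failures += 1          # crashed too fast: one failed attempt
        if failures >= startretries:
            return "FATAL"     # too many quick crashes: give up
    return "BACKOFF"           # still retrying, with increasing delay

print(classify_start_attempts([2, 3, 50]))  # RUNNING
print(classify_start_attempts([1, 2, 2]))   # FATAL
```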
Troubleshooting
Process keeps flapping (start → crash → start → crash)
```bash
sudo supervisorctl status
```

If the status is BACKOFF or FATAL, the program is crashing within `startsecs`. Read the log:

```bash
sudo supervisorctl tail -f laravel-worker
```

Common root causes: permissions (a file the worker needs is chmod 600 and owned by someone else), missing env vars, a syntax error in the code, a dependency not installed in the venv.
Logs fill the disk
You disabled log rotation, or left the defaults too high. Revisit `stdout_logfile_maxbytes` and `stdout_logfile_backups`, and run `df -h /var/log` to check disk usage.
Config changes not picked up
`supervisorctl update` applies added, changed, and removed config files, restarting only the affected programs. For a drastic reconfigure, restart the daemon: `sudo systemctl restart supervisor`.
Can't connect to supervisorctl socket
sudo supervisorctlGives "unix:///var/run/supervisor.sock no such file". Usually means supervisor isn't running:
sudo systemctl start supervisorProcesses run twice on config change
Run `reread` then `update` — don't forget both. `reread` finds the new config; `update` applies it. Skipping `update` leaves the new config unapplied while the old version keeps running.
Common pitfalls
- Running as root. Always set `user=` — workers should have the minimum permissions they need.
- Shared log files across programs. Each program writes to its own log; merging causes log interleaving.
- Forgetting `autorestart=true`. Crashes don't recover without it.
- `stopwaitsecs` too short. For jobs that take minutes (large batch, long API call), a 10-second `stopwaitsecs` kills mid-job on restart.
- Hardcoding paths. Use absolute paths everywhere — supervisord runs with a minimal environment, so `/usr/local/bin/node` may not be on PATH.
- Missing `directory=`. The worker runs from `/` and relative paths break.
- A single log file for all `numprocs` workers. Log rotation gets confused; use `%(process_num)02d` in the log file name for per-process logs.
Frequently asked questions
Can Supervisord run on shared hosting?
No. Requires root and long-lived processes. Only VPS.
How do I know if a job completed vs is hanging?
Supervisord doesn't track "job completion" — it tracks process liveness. For job-level observability, your queue system (Laravel Horizon, Bull Board) is the tool.
What's the difference between `stop` and sending SIGKILL?
`stop` sends the configured `stopsignal` (SIGTERM by default), waits `stopwaitsecs`, then sends SIGKILL if the process is still running. supervisorctl has no `kill` command — to send SIGKILL immediately, use `supervisorctl signal KILL <name>`. Always prefer `stop`.
Can I reload config without restarting processes?
`reread` + `update` only restarts programs whose config changed; unchanged programs keep running. For a total config reload use `supervisorctl reload`, which stops and restarts supervisord itself — all programs restart.
Should I use Supervisord or Docker Compose for multi-service apps?
If all services are containers: Docker Compose. If you're running bare processes on a VPS: Supervisord. You can also combine — Supervisord inside a container for multi-process containers (generally not recommended; prefer one process per container).
Can I monitor Supervisord remotely?
Yes — enable the HTTP interface in /etc/supervisor/supervisord.conf:
```ini
[inet_http_server]
port=127.0.0.1:9001
username=admin
password=secret
```

Access via http://localhost:9001 (or over an SSH tunnel). There's a web UI with basic controls.
Is Supervisord still actively maintained?
Yes. Stable releases continue. It's not rapidly adding features because there's not much more to add — the scope is narrow and well-defined.
Need help setting up Supervisord for your specific app? [email protected] — we help VPS customers with worker deployment as part of standard support.