
Automated Website Backups with Cron + rclone (S3 / Backblaze B2 / Wasabi)

By Domain India Team
9 min read · 22 Apr 2026

In this article

  • 1. The 3-2-1 rule
  • 2. Choosing off-site storage
  • 3. Install rclone
  • 4. Configure rclone for your provider
  • 5. The backup script


"I have JetBackup" is not a backup strategy. JetBackup on our cPanel plans is excellent, but the backups live on the same hosting account — if that account is compromised or the server has a hardware failure, they go with everything else. Real backups are off-site, on a different provider, encrypted, and tested. This guide shows the cron + rclone pattern that does all four.

The 3-2-1 rule

Generally-accepted backup strategy:

  • 3 copies of your data
  • 2 different storage media / providers
  • 1 copy off-site

For a typical web app: the live data (copy 1), daily JetBackup inside cPanel (copy 2, same media), and a weekly off-site dump to S3-compatible storage (copy 3, different media, off-site). This satisfies 3-2-1.

Choosing off-site storage

All five listed here are S3-compatible — same API, same rclone commands, just different hostnames.

Provider          | Storage cost                   | Egress cost                            | Notes
Backblaze B2      | $0.006 / GB / mo               | 3× storage price free, then $0.01 / GB | Cheapest for pure archival
Wasabi            | $0.0069 / GB / mo              | $0 (no egress fees ever)               | Best if you restore frequently or need regular test-restores
AWS S3            | $0.023 / GB / mo (S3 Standard) | $0.09 / GB                             | Most expensive, most integrated if you're already on AWS
Cloudflare R2     | $0.015 / GB / mo               | $0 egress                              | Good if already in the Cloudflare ecosystem
Self-hosted MinIO | Your VPS disk cost             | Your VPS egress                        | If you have another DI VPS for redundancy

Recommendation: Backblaze B2 for the lowest monthly cost, Wasabi if you expect to restore / verify often. Both have free tiers to test.

Install rclone

rclone is the Swiss-army knife of cloud storage — a single binary that talks to every major provider.

On cPanel with SSH:

bash
curl https://rclone.org/install.sh | sudo bash

On DirectAdmin with SSH — same command.

On our VPS hosting plans — same command.

On plans without SSH — you can't run rclone locally. Either use a JetBackup → S3 destination (if your cPanel plan supports it), or back up via a different machine that pulls from your hosting account.
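If you have SSH but no root (common on shared plans, where sudo is unavailable), you can still install rclone into your home directory from the official static binary. A sketch, assuming a 64-bit Linux server:

```shell
# Download the official static rclone binary into ~/bin (no root needed)
cd ~
curl -sO https://downloads.rclone.org/rclone-current-linux-amd64.zip
unzip -q rclone-current-linux-amd64.zip
mkdir -p ~/bin
cp rclone-*-linux-amd64/rclone ~/bin/
chmod +x ~/bin/rclone

# Most cPanel shells already have ~/bin on PATH; confirm it runs
~/bin/rclone version
```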

Verify install:

bash
rclone version

Configure rclone for your provider

Interactive configuration:

bash
rclone config

You'll be prompted:

  1. n for new remote
  2. Name: b2 (or whatever — used later)
  3. Storage type: Backblaze B2 (or Wasabi / S3 — pick the right one)
  4. Account: your B2 Account ID (or IAM user access key)
  5. Key: your B2 Application Key (or IAM secret key)
  6. Other options: accept defaults
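If you'd rather not walk through the prompts (for example when scripting the setup), the same remote can be created in one non-interactive command. A sketch with placeholder credentials, not real keys:

```shell
# Create the "b2" remote non-interactively (placeholders, not real keys)
rclone config create b2 b2 account=YOUR_KEY_ID key=YOUR_APPLICATION_KEY

# For Wasabi or AWS, the type is "s3" with a provider and endpoint, e.g.:
rclone config create wasabi s3 provider=Wasabi \
    access_key_id=YOUR_KEY secret_access_key=YOUR_SECRET \
    endpoint=s3.wasabisys.com
```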

Test the config:

bash
rclone lsd b2:
# Should list your buckets, or return empty if you have none yet

Create a bucket (once, from the provider's dashboard) — e.g., yoursite-backups. Pick a distinctive name: some providers (AWS S3, Backblaze B2) require bucket names to be globally unique, while others (e.g., Cloudflare R2) scope them per account.
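You can also create the bucket from the shell instead of the dashboard, since rclone's mkdir works against any configured remote:

```shell
rclone mkdir b2:yoursite-backups   # create the bucket
rclone lsd b2:                     # confirm it now appears
```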

The backup script

Create /home/youruser/bin/backup.sh:

bash
#!/bin/bash
set -euo pipefail

# Configuration
DOMAIN="yourdomain.com"
BACKUP_DIR="/home/$USER/tmp-backup"
REMOTE="b2:yoursite-backups"           # rclone remote:bucket
RETENTION_DAYS=30
DB_NAME="yoursite_db"
DB_USER="yoursite_user"
DB_PASS=""                             # real value comes from the env file below

# Pull credentials from a chmod-600 env file so the password never sits
# in the script itself or leaks into cron's email output
source /home/$USER/.backup-env

TIMESTAMP=$(date +%Y%m%d-%H%M%S)
mkdir -p "$BACKUP_DIR"

# 1. Dump the database
echo "[$(date)] Dumping database..."
mysqldump \
    --user="$DB_USER" \
    --password="$DB_PASS" \
    --single-transaction \
    --quick \
    --lock-tables=false \
    --default-character-set=utf8mb4 \
    "$DB_NAME" | gzip > "$BACKUP_DIR/db-$TIMESTAMP.sql.gz"

# 2. Tar the site files
echo "[$(date)] Archiving files..."
tar czf "$BACKUP_DIR/files-$TIMESTAMP.tar.gz" \
    -C /home/$USER \
    public_html \
    --exclude='public_html/wp-content/cache' \
    --exclude='public_html/wp-content/uploads/cache' \
    --exclude='*.log'

# 3. Optional: encrypt both
# If BACKUP_PASSWORD is set, pipe through openssl
if [ -n "${BACKUP_PASSWORD:-}" ]; then
    echo "[$(date)] Encrypting..."
    for f in "$BACKUP_DIR/db-$TIMESTAMP.sql.gz" "$BACKUP_DIR/files-$TIMESTAMP.tar.gz"; do
        openssl enc -aes-256-cbc -salt -pbkdf2 \
            -in "$f" \
            -out "$f.enc" \
            -pass "pass:$BACKUP_PASSWORD"
        rm "$f"
    done
fi

# 4. Upload to off-site storage
echo "[$(date)] Uploading to $REMOTE..."
rclone copy "$BACKUP_DIR/" "$REMOTE/$(date +%Y/%m)/" --log-file="/home/$USER/backup.log"

# 5. Clean up local temp files
rm -rf "$BACKUP_DIR"

# 6. Remove off-site backups older than retention
echo "[$(date)] Pruning backups older than $RETENTION_DAYS days..."
rclone delete "$REMOTE/" --min-age "${RETENTION_DAYS}d"

# 7. Optional: ping monitoring service
if [ -n "${HEARTBEAT_URL:-}" ]; then
    curl -fsS --retry 3 "$HEARTBEAT_URL" > /dev/null
fi

echo "[$(date)] Backup complete."

Make it executable:

bash
chmod 700 /home/$USER/bin/backup.sh

Create the env file with credentials:

bash
cat > /home/$USER/.backup-env << 'EOF'
export DB_PASS=your-actual-db-password
export BACKUP_PASSWORD=optional-encryption-password
export HEARTBEAT_URL=https://hc-ping.com/your-uuid-here
EOF

chmod 600 /home/$USER/.backup-env

Schedule with cron

Edit your crontab (see our Cron Expression Guide if you need a refresher):

# Daily backup at 3:30 AM (off-peak)
30 3 * * * /home/youruser/bin/backup.sh >> /home/youruser/backup.log 2>&1

On cPanel: add via cPanel → Cron Jobs.

The time matters: pick off-peak (2–4 AM) to avoid impacting your site's performance if the backup is large.
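On VPS or SSH-only setups you can append the job without opening an editor. A sketch (paths match the script above):

```shell
# Append the backup job to the current crontab non-interactively
( crontab -l 2>/dev/null; \
  echo '30 3 * * * /home/youruser/bin/backup.sh >> /home/youruser/backup.log 2>&1' ) | crontab -

# Verify it took
crontab -l | grep backup.sh
```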

Retention strategy

A sensible retention schedule:

  • Daily backups for 7 days (1 per day)
  • Weekly backups for 4 weeks (1 per week)
  • Monthly backups for 12 months (1 per month)

Implementing this properly requires one of two approaches:

  1. Simple approach: keep daily backups for 30 days and accept the extra storage cost. The script above does this.
  2. Advanced approach: tag each backup with its frequency (daily / weekly / monthly) and prune by age within each tag. Dedicated backup tools (Borg, Restic) support this natively via keep-daily / keep-weekly style policies.

For most small-to-medium sites, option 1 is fine. 30 daily backups × 500 MB each = 15 GB = roughly $0.09 / month on B2. Not worth over-engineering.
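That estimate is easy to re-run for your own numbers: copies retained × size per copy × the provider's per-GB rate from the table above.

```shell
# 30 daily copies × 0.5 GB each × $0.006/GB/mo (Backblaze B2 rate)
awk 'BEGIN { printf "$%.2f / month\n", 30 * 0.5 * 0.006 }'
# prints: $0.09 / month
```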

Test-restore monthly

A backup you've never restored is not a backup — it's a hope. Test restore every month:

  1. Pick a random recent backup
  2. Download both pieces: rclone copy b2:yoursite-backups/2026/04/ . --include "*20260415-033000*"
  3. Extract to a staging subdomain: tar xzf files-20260415-033000.tar.gz
  4. Restore the database dump: gunzip -c db-20260415-033000.sql.gz | mysql staging_db
  5. Load the staging subdomain in a browser — data intact?

If restore fails, you found the problem before you needed the backup in anger. Fix it, re-verify.
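Before a full test-restore you can cheaply confirm the downloaded archives aren't truncated or corrupted: gzip -t reads the whole stream and verifies the CRC without extracting. A minimal helper (the function name check_backup is ours):

```shell
# Verify gzip/tar.gz integrity of each given archive without extracting it
check_backup() {
    local f status=0
    for f in "$@"; do
        if gzip -t "$f" 2>/dev/null; then
            echo "OK:      $f"
        else
            echo "CORRUPT: $f"
            status=1
        fi
    done
    return $status
}

# Usage after downloading with rclone copy:
#   check_backup db-20260415-033000.sql.gz files-20260415-033000.tar.gz
```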

Encryption considerations

The openssl enc -aes-256-cbc in the script above encrypts the files before upload. Without encryption, anyone who steals your storage credentials can read your data. With encryption, they get ciphertext.

Trade-offs:

  • With encryption: strictly more secure. But losing the password means losing the backup — store the password in a password manager or a different secret store.
  • Without encryption: backup storage relies on the provider's access controls. B2 and S3 both have strong access controls — access key + bucket policy — so this is reasonable for most.
  • rclone crypt remote: a middle path — rclone encrypts on upload and decrypts on download transparently, filenames included, so the script needs no openssl step.

For critical data (healthcare, finance), use encryption. For general content sites, a proper credential-managed B2 bucket is sufficient.
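If you go the rclone crypt route mentioned above, the encrypted remote simply wraps the existing one. A sketch (the remote name b2crypt and the passphrase are placeholders; --obscure tells rclone to obscure plaintext passwords for its config file):

```shell
# Wrap the existing b2 bucket in a transparently-encrypting remote
rclone config create b2crypt crypt \
    remote=b2:yoursite-backups/encrypted \
    password=YOUR_PASSPHRASE \
    --obscure

# Point the script's REMOTE at it; files (and, by default, filenames)
# are encrypted client-side before upload
rclone copy ./backups/ b2crypt:
```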

Alternatives — when cron + rclone is not the right tool

  • JetBackup on cPanel with remote destinations. Our cPanel plans include JetBackup; on higher tiers you can add an S3-compatible destination from JetBackup directly, no script needed. Less flexible but zero-maintenance.
  • Borg + borgmatic. Deduplicated, encrypted, rolling backups. Used for backing up VPS servers. More setup than rclone but much smaller incremental backups.
  • Restic. Similar to Borg with a cloud-native design. Works well with B2, S3, Wasabi.
  • AWS Backup. If your infrastructure is already in AWS, use this — integrated across EC2, RDS, EFS.
  • Database-specific tools. MariaBackup (for MariaDB), pg_basebackup (for PostgreSQL). Better than mysqldump for very large databases.

Common pitfalls

  1. DB password in the crontab line. Visible to every user on the system via ps -ef. Put it in a file with chmod 600.
  2. No retention policy. Backups accumulate forever, costs grow, and data you were supposed to have deleted under your retention policy lingers. Set a retention period and implement it in the script.
  3. Storing the encryption password next to the backup. If someone steals the bucket, they also steal the password. Keep them separate — encrypted backups in B2, password in your password manager.
  4. Never testing restore. You'll find out the hard way that your mysqldump was missing a table, or your tar excluded a critical directory.
  5. Running backups during peak hours. 500 MB backup at 10 AM slows your site. Schedule for 2–4 AM.
  6. Backing up only files or only database, not both. WordPress and most CMSes need both. Files alone are useless without the DB.
  7. Missing uploads / user-generated content. If public_html/wp-content/uploads/ has your files and you excluded public_html/wp-content/, you've backed up nothing useful. Review excludes carefully.
  8. No monitoring. A cron that silently stops firing leaves you backup-less without warning. Use Healthchecks.io or similar to alert on missing heartbeats.

Frequently asked questions

How big should my backup be?

Varies wildly. Typical WordPress site with a medium database and a year of uploads: 500 MB to 2 GB. Laravel SaaS with 10,000 users: 1–5 GB. Large forum with image uploads: 10 GB+. Monitor the first few runs to estimate.

Will backup impact my site's performance?

Yes, briefly. mysqldump --single-transaction doesn't lock tables, but disk I/O spike is noticeable for 1–10 minutes. Schedule at 3 AM to minimise user impact.

Do I need to back up daily if I have JetBackup?

JetBackup gives you same-server backups — excellent for "oops I deleted a file" recovery. Off-site backups handle "server compromised / hardware failed" scenarios. Different purposes; do both.

Can I back up my Domain India VPS this way?

Yes, identically. Our VPS plans give you full root SSH, so rclone runs without issue. You can even replicate this setup to back up multiple sites from one VPS.

What if my backup script fails silently?

That's why step 7 (the heartbeat ping) matters. Healthchecks.io (free for personal use, open-source option) expects a ping every day — if it doesn't arrive, it emails you. Without this, your cron could silently die for weeks.
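Healthchecks.io also supports a /fail endpoint, so the script can report failures actively instead of waiting for a missed ping. A sketch of the pattern, added near the top of backup.sh (assumes HEARTBEAT_URL is set as in the env file above):

```shell
# With set -e active, any failing command fires the ERR trap, which
# pings the /fail endpoint before the script exits
trap 'curl -fsS --retry 3 "${HEARTBEAT_URL}/fail" > /dev/null || true' ERR
```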

How do I restore only a single file or table?

Download the backup: rclone copy b2:bucket/2026/04/files-20260415.tar.gz . — then tar xzf files-20260415.tar.gz path/to/specific/file. For database table: extract DB dump, grep for the table definition + data rows, or restore the full DB to a staging environment and copy over just the needed tables.
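The "grep for the table" step can be made precise: mysqldump marks each table's section with a "-- Table structure for table `name`" header, so one awk pass can slice out a single table's schema and rows. A sketch (the helper name extract_table is ours; a slice restored on its own may still need the SET statements from the top of the dump):

```shell
# Print one table's section (CREATE TABLE + INSERTs) from a full dump
extract_table() {
    local dump="$1" table="$2"
    awk -v t="$table" '
        /^-- Table structure for table `/ {   # table boundary marker
            inside = ($0 ~ "`" t "`")         # start at our table, stop at the next
        }
        inside                                # print lines while inside our table
    ' "$dump"
}

# Usage:
#   gunzip -c db-20260415-033000.sql.gz > dump.sql
#   extract_table dump.sql wp_users > wp_users.sql
```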

Is cron reliable enough for backups?

For most single-server setups — yes. For mission-critical: complement cron with a monitoring service that alerts if the expected backup file doesn't appear on schedule. Cron jobs that silently fail are the #1 backup failure mode.


Need help setting up off-site backups for your Domain India hosting? [email protected] — we help customers configure JetBackup remote destinations or the cron + rclone pattern as a standard support request.
