
Useful Shell for Code Development: Safe Defaults & Patterns

6 min read · Published 4 Mar 2026 · Updated 17 Apr 2026 · 498 views


Article Summary (TL;DR)

  • Start every script with #!/usr/bin/env bash + set -euo pipefail + safe IFS.

  • Guard variables (${VAR:?} to require, ${VAR:-default} to default), trap cleanups, and prefer atomic writes.

  • Use shellcheck/shfmt to keep scripts robust; jq/yq/rg/fd for daily work.

  • Structure your workflows with a Makefile; tag builds with Git; deploy safely with rsync --dry-run.

  • Know the gotchas of -euo pipefail and how to intentionally allow/handle failures.


1) Why these shell practices matter

Shell scripts glue your build, test, package, and deploy steps. Good defaults prevent subtle bugs, failed pipelines, and data loss. This article provides copy-paste-ready snippets you can adapt to any stack (Node, PHP, Python, Go, Java, etc.).


2) The safest starting line

#!/usr/bin/env bash
set -euo pipefail
IFS=$'\n\t'

What each option means

  • #!/usr/bin/env bash -- Resolves bash from your environment for portability.

  • set -e (errexit) -- Exit immediately when any simple command fails.

  • set -u (nounset) -- Treat unset variables as errors.

  • set -o pipefail -- In a | b | c, the pipeline's exit status is that of the rightmost command that exits non-zero (by default only c's status counts).

  • IFS=$'\n\t' -- Word splitting occurs only on newline and tab (not space), avoiding many filename bugs.
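To see the pipefail option in action, here is a minimal, self-contained sketch (the if-wrappers just capture each pipeline's exit status without tripping errexit):

```shell
# Without pipefail (the default), a pipeline's status is that of the LAST
# command, so the failure of `false` is hidden here.
set +o pipefail
if false | true; then status_without=0; else status_without=$?; fi

# With pipefail, the pipeline reports the rightmost non-zero exit status.
set -o pipefail
if false | true; then status_with=0; else status_with=$?; fi
set +o pipefail

echo "without pipefail: $status_without, with pipefail: $status_with"
```

Running this prints `without pipefail: 0, with pipefail: 1` -- the same pipeline, two different verdicts, which is exactly why CI scripts want pipefail on.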

When these options can surprise you (and how to handle)

# Commands that may fail but should not abort the script
grep -q "pattern" file.txt || true

# Guard optional env vars when nounset is on
: "${ENV:=dev}" # set default silently
PORT="${PORT:-8080}" # or assign to a new var

# Pipelines whose first step may fail intentionally
grep -r "TODO" src | wc -l || true

# Explicit conditional check is clearer with -e
if ! command_that_may_fail; then
 echo "handling failure"
fi

Tip: Prefer if ! cmd; then ... fi over cmd || ... in complex scripts, because -e can interact with || in non-obvious ways.
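A concrete sketch of that tip, using a hypothetical count_todos helper: grep exits 1 on zero matches, which is expected here, so the failure is absorbed inside the helper and the caller uses an explicit if-block.

```shell
set -e

# Hypothetical helper: grep -c prints the match count but exits 1
# when there are no matches -- that is not an error for our purposes.
count_todos() {
  grep -c "TODO" "$1" || true
}

# Explicit if-block: easy to audit, and errexit stays predictable.
if ! n="$(count_todos /dev/null)"; then
  n=0
fi
echo "todos: $n"
```

Here /dev/null has no matches, so the script prints `todos: 0` and keeps running instead of dying under set -e.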


3) A production-ready Bash template (drop-in)

#!/usr/bin/env bash
set -euo pipefail
IFS=$'\n\t'

SCRIPT_DIR="$(cd -- "$(dirname -- "${BASH_SOURCE[0]}")" && pwd)"
readonly SCRIPT_DIR

log() { printf '[%(%F %T)T] %s\n' -1 "$*" >&2; }
die() { log "ERROR: $*"; exit 1; }
usage(){ cat <<'EOF'
Usage: build.sh [-v] [-e dev|staging|prod] [--] [extra args...]

Options:
 -v verbose (set -x)
 -e <env> environment (default: dev)

Examples:
 ./build.sh -v -e staging
EOF
}

verbose=false
env="dev"

while getopts ":ve:h" opt; do
 case "$opt" in
 v) verbose=true ;;
 e) env="$OPTARG" ;;
 h) usage; exit 0 ;;
 \?) die "unknown option: -$OPTARG" ;;
 :) die "option -$OPTARG requires an argument" ;;
 esac
done
shift $((OPTIND-1))

$verbose && set -x

tmp="$(mktemp -d)"
cleanup(){ rm -rf -- "$tmp"; }
trap cleanup EXIT INT TERM

need(){ command -v "$1" >/dev/null || die "missing: $1"; }
need git; need jq

main() {
 log "environment: $env"
 # your build/test steps go here...
 # e.g., npm ci && npm test or go test ./... or composer install
}

main "$@"

4) Filesystem & safety patterns you'll use daily

  • Create directories idempotently

    mkdir -p dist/logs
    
  • Safe temp files/dirs

    tdir="$(mktemp -d)"; trap 'rm -rf -- "$tdir"' EXIT
    
  • Readable dir jumps

    pushd backend >/dev/null
    go build ./...
    popd >/dev/null
    
  • Atomic writes

    tmp="$(mktemp)"; generate >"$tmp" && mv "$tmp" output.json
    
  • Never rm -rf on empty vars

    TARGET="${TARGET:?set TARGET path}"; rm -rf -- "$TARGET"
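Why the atomic-write pattern above matters: readers never observe a half-written output file, because a rename within one filesystem is atomic. A self-contained sketch (generate is a hypothetical stand-in for your real producer):

```shell
set -euo pipefail

# Scratch directory, cleaned up on exit.
workdir="$(mktemp -d)"
trap 'rm -rf -- "$workdir"' EXIT

# Hypothetical stand-in for a real generator.
generate() { printf '{"ok": true}\n'; }

# Write to a temp file first, then rename into place atomically.
tmp="$(mktemp -p "$workdir")"
generate > "$tmp" && mv -- "$tmp" "$workdir/output.json"

cat "$workdir/output.json"
```

If generate crashes halfway, the temp file is discarded and the previous output.json (if any) is untouched.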
    

5) Test, lint, and verify quickly

  • Parallel tests

    find packages -maxdepth 1 -type d -name 'pkg-*' -print0 \
     | xargs -0 -n1 -P"$(nproc)" -I{} bash -lc 'cd "{}" && npm test'
    
  • Format + lint shell

    shfmt -w .
    shellcheck -x script.sh
    
  • JSON/YAML sanity

    jq . package.json >/dev/null
    yq . docker-compose.yml >/dev/null
    
  • Hash & verify

    sha256sum file.tar.gz
    gpg --verify artifact.asc artifact
    

6) Search, filter, transform (the power trio)

  • ripgrep (fast grep)

    rg -n "TODO|FIXME" --glob '!dist' .
    
  • fd (friendly find)

    fd -e ts -x sed -i 's/var /let /g'
    
  • awk/sed (surgical edits)

    awk -F, 'NR>1 {sum+=$3} END{print sum}' report.csv
    sed -i 's/API_URL=.*/API_URL=https:\/\/api.example.com/' .env
    

7) Moving code & artifacts safely

  • rsync deploy (dry-run first!)

    rsync -avz --delete --dry-run dist/ user@server:/var/www/app/
    
  • curl with strict failures

    curl --fail --location --silent --show-error \
     -H "Authorization: Bearer $TOKEN" \
     -o artifact.tgz "$URL"
    

8) Git commands that belong in scripts

git fetch --all --prune
git rev-parse --short HEAD
git diff --name-only origin/main...HEAD
git tag -a "v$(date +%Y.%m.%d.%H%M)" -m "CI tag" && git push --tags

Pre-commit hook suggestion

#!/usr/bin/env bash
set -euo pipefail
shfmt -d .
shellcheck -x scripts/*.sh

Make it executable:

chmod +x .git/hooks/pre-commit

9) A simple Makefile that works with any stack

# NOTE: recipe lines must be indented with a literal TAB character.
.PHONY: all deps build test lint clean

NODE = npm
all: deps lint test build

deps:
 $(NODE) ci

build:
 $(NODE) run build

test:
 $(NODE) test -- --ci

lint:
 $(NODE) run lint

clean:
 rm -rf dist coverage

Then your CI only runs: make.


10) Docker & Compose one-liners

  • Rebuild without cache (force env changes)

    docker compose build --no-cache web
    
  • Up with logs, then follow API only

    docker compose up -d
    docker compose logs -f api
    
  • Exec with a clean env

    docker compose exec -e "NODE_ENV=production" web bash
    

11) tmux mini-kit (for long builds)

tmux new -s build
# inside: run your long task
# detach: Ctrl+b then d
tmux attach -t build

12) Robust script utilities (copy/paste)

Require tools & versions

need(){ command -v "$1" >/dev/null || die "missing: $1"; }
need git; need jq

Retry with exponential backoff

retry() {
 local tries=${1:-5} delay=1
 shift
 for ((i=1;i<=tries;i++)); do
 "$@" && return 0
 sleep "$delay"; delay=$((delay*2))
 done
 return 1
}
# usage: retry 5 curl --fail "$URL"

With timeout (kills on hang)

timeout 30s bash -lc 'npm ci && npm test'

13) Pitfalls checklist with -euo pipefail

  • Prefer if ! cmd; then ... fi for conditional flows; avoid relying on cmd || ... in pipelines.

  • Use ${VAR:-} when reading envs that may be empty; use ${VAR:?message} to enforce presence with a helpful error.

  • Wrap globs: rm -rf -- "${OUT_DIR:?must set OUT_DIR}"/* to avoid catastrophic deletes.

  • Read lines safely: mapfile -t arr < <(cmd) avoids word-splitting bugs.

  • Don't hide errors with bare || true unless you also log why.


14) Quick self-test

shellcheck -x your_script.sh
shfmt -w your_script.sh

15) Copy-ready header for any new script

#!/usr/bin/env bash
set -euo pipefail
IFS=$'\n\t'
# --- script metadata ---
# name: <script-name>
# usage: <how to run>
# deps: <commands/tools>
# desc: <oneline purpose>

Final note

These patterns are stack-agnostic and fit well into CI, local dev, and production maintenance. For hosting-specific workflows (e.g., packaging PHP for cPanel/DirectAdmin/Webuzo or containerized Node/Go deployments), you can layer these snippets directly into your existing scripts and Makefiles.
