
By the DomainIndia Engineering Team · 24 Apr 2026 · 5 min read
# Lightweight Kubernetes with k3s on DomainIndia VPS (Single & Multi-Node)
**TL;DR:** k3s is a production-grade Kubernetes distribution that runs on a single VPS with 1 GB RAM — perfect for learning K8s or running real workloads without the overhead of full Kubernetes. This guide covers single-node setup, multi-node cluster, Helm, Ingress with SSL, and persistent storage on DomainIndia VPS.
## Why k3s instead of full Kubernetes

Standard Kubernetes (kubeadm) needs 2 GB+ RAM per node just for the control plane. k3s strips down to the essentials:

- Single binary (~60 MB)
- Built-in SQLite (no etcd for single-node)
- Built-in load balancer, Ingress (Traefik), local storage provisioner
- Works on a 512 MB RAM VPS (though 2 GB is the practical minimum)

Same `kubectl`, same YAML, same Helm charts. Production-ready — k3s powers AWS EKS Anywhere, Rancher, and edge deployments.

## When to use k3s
| Use case | Good fit? |
| --- | --- |
| Learning Kubernetes | Excellent |
| Small production app (1-5 services) | Great |
| Edge / IoT deployments | Designed for it |
| Large-scale microservices (100+ pods) | Upgrade to full K8s or managed (EKS, GKE) |
For most DomainIndia VPS customers, k3s is enough. Only upgrade to full K8s when you outgrow it.

## Single-node setup

Order a 4 GB+ DomainIndia VPS (AlmaLinux 9 or Ubuntu 22.04 recommended).
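Before step 1, it's worth confirming the VPS actually clears the 2 GB practical minimum mentioned above. A quick Linux-only check (a sketch; the 2048 MB threshold mirrors this guide's recommendation, not a hard k3s limit):

```shell
#!/bin/sh
# Total memory in MB, read from /proc/meminfo (Linux only)
mem_kb=$(awk '/MemTotal/ {print $2}' /proc/meminfo)
mem_mb=$((mem_kb / 1024))
echo "Total RAM: ${mem_mb} MB"

# 2 GB is the practical minimum this guide recommends for k3s
if [ "$mem_mb" -ge 2048 ]; then
  echo "OK: enough RAM for a comfortable k3s node"
else
  echo "WARN: k3s may start, but expect memory pressure"
fi
```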
1. SSH in as root.
2. Disable firewalld temporarily (we'll reconfigure it after): `systemctl stop firewalld`
3. Install k3s: `curl -sfL https://get.k3s.io | sh -`
4. Verify: `sudo kubectl get nodes`
5. Copy the kubeconfig (`/etc/rancher/k3s/k3s.yaml`) to your laptop.
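For step 5, note that the kubeconfig k3s writes on the server points at `127.0.0.1`, so after copying it down you must swap in the VPS IP. A minimal sketch of that rewrite, using a sample file and the documentation IP `203.0.113.10` as a stand-in for your VPS:

```shell
# Stand-in for the relevant lines of /etc/rancher/k3s/k3s.yaml
cat > k3s-sample.yaml <<'EOF'
apiVersion: v1
clusters:
- cluster:
    server: https://127.0.0.1:6443
EOF

# On a real laptop, first: scp root@VPS_IP:/etc/rancher/k3s/k3s.yaml ~/.kube/config
# Then point the kubeconfig at the VPS instead of localhost:
sed -i 's/127\.0\.0\.1/203.0.113.10/' k3s-sample.yaml

grep 'server:' k3s-sample.yaml
# prints the rewritten line: server: https://203.0.113.10:6443
```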
> **Info:** By default k3s runs as root. For multi-tenant use, install rootless k3s or use RBAC to restrict namespaces.
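The RBAC route can be sketched as a Role scoped to a single namespace plus a RoleBinding for the user. The `staging` namespace and `dev-team` user below are illustrative names, not part of a default k3s install:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: staging-editor
  namespace: staging
rules:
  - apiGroups: ["", "apps"]
    resources: ["pods", "services", "deployments"]
    verbs: ["get", "list", "watch", "create", "update", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: dev-team-staging
  namespace: staging
subjects:
  - kind: User
    name: dev-team
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: staging-editor
  apiGroup: rbac.authorization.k8s.io
```

Apply with `kubectl apply -f`; a user whose credentials map to `dev-team` can then manage workloads in `staging` but touch nothing cluster-wide.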

## Deploy your first app

Create `app.yaml`:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello
spec:
  replicas: 2
  selector:
    matchLabels: { app: hello }
  template:
    metadata:
      labels: { app: hello }
    spec:
      containers:
        - name: hello
          image: nginxdemos/hello:latest
          ports:
            - containerPort: 80
          resources:
            limits:
              memory: 128Mi
              cpu: 100m
---
apiVersion: v1
kind: Service
metadata:
  name: hello
spec:
  selector: { app: hello }
  ports:
    - port: 80
      targetPort: 80
```

Apply:

```bash
kubectl apply -f app.yaml
kubectl get pods
# NAME                     READY   STATUS    RESTARTS   AGE
# hello-7c5cbb5d4f-abc12   1/1     Running   0          10s
# hello-7c5cbb5d4f-def34   1/1     Running   0          10s
```

## Ingress with SSL (Traefik + Let's Encrypt)

k3s ships with the Traefik ingress controller. Configure it to obtain Let's Encrypt certs automatically. Edit `/var/lib/rancher/k3s/server/manifests/traefik-config.yaml`:

```yaml
apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  name: traefik
  namespace: kube-system
spec:
  valuesContent: |-
    ports:
      web:
        redirectTo:
          port: websecure
    additionalArguments:
      - --certificatesresolvers.letsencrypt.acme.email=admin@yourcompany.com
      - --certificatesresolvers.letsencrypt.acme.storage=/data/acme.json
      - --certificatesresolvers.letsencrypt.acme.tlschallenge=true
```

Wait 30 seconds for Traefik to reload. Then define your Ingress:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello
  annotations:
    traefik.ingress.kubernetes.io/router.entrypoints: websecure
    traefik.ingress.kubernetes.io/router.tls.certresolver: letsencrypt
spec:
  rules:
    - host: hello.yourcompany.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: hello
                port: { number: 80 }
  tls:
    - hosts: [hello.yourcompany.com]
```

Point DNS `hello.yourcompany.com → your VPS IP`. Traefik obtains the SSL cert on first visit.
## Multi-node cluster

For HA or scaling beyond one VPS:

### Control plane (first node)

Install k3s and note the join token:

```bash
# On VPS 1 (master)
curl -sfL https://get.k3s.io | sh -
sudo cat /var/lib/rancher/k3s/server/node-token
# prints: K10xxxxx::server:abc123...
```

### Worker nodes

```bash
# On VPS 2, 3, ... (workers); replace <MASTER_IP> with the first node's IP
curl -sfL https://get.k3s.io | K3S_URL=https://<MASTER_IP>:6443 K3S_TOKEN=K10xxxxx::server:abc123... sh -
```

Verify from the master:

```bash
sudo kubectl get nodes
# NAME       STATUS   ROLES                  AGE
# master     Ready    control-plane,master   1h
# worker-1   Ready    <none>                 30s
# worker-2   Ready    <none>                 30s
```

### HA control plane (optional)

For >99% uptime, run three control-plane nodes backed by either embedded etcd (`--cluster-init`) or an external datastore (PostgreSQL, MySQL, or etcd):

```bash
# External PostgreSQL datastore; replace the credentials and host
curl -sfL https://get.k3s.io | sh -s - server --datastore-endpoint="postgres://k3s:PASSWORD@DB_HOST:5432/k3s"
```

Then join the other masters with the same token.

## Persistent storage

k3s ships with `local-path-provisioner`, which provisions HostPath volumes automatically:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes: [ReadWriteOnce]
  storageClassName: local-path
  resources:
    requests: { storage: 5Gi }
```

For multi-node clusters that need shared storage, use Longhorn (k3s-friendly distributed storage):

```bash
kubectl apply -f https://raw.githubusercontent.com/longhorn/longhorn/v1.6.0/deploy/longhorn.yaml
```

## Helm charts

Install anything from Helm Hub (Redis, PostgreSQL, Prometheus, Grafana):

```bash
curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
helm repo add bitnami https://charts.bitnami.com/bitnami
helm install redis bitnami/redis --set auth.password=yourpass
# Use in your deployment:
# REDIS_URL=redis://:yourpass@redis-master:6379/0
```

## Monitoring with Prometheus + Grafana

One-liner install:

```bash
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm install monitoring prometheus-community/kube-prometheus-stack \
  --set grafana.service.type=LoadBalancer \
  --set prometheus.prometheusSpec.resources.requests.memory=512Mi
```

Expose Grafana via Ingress and fetch the admin password:

```bash
kubectl get secret monitoring-grafana -o jsonpath="{.data.admin-password}" | base64 -d
```

## Backups

Back up the datastore and PVs regularly. For single-node k3s, `/var/lib/rancher/k3s/server/db/state.db` is your state — cron a tar + rclone copy to S3:

```bash
0 3 * * * tar czf /tmp/k3s-backup.tgz /var/lib/rancher/k3s/server/db /var/lib/rancher/k3s/storage && rclone copy /tmp/k3s-backup.tgz remote:k3s-backups/
```

See our [Automated Backups guide](https://domainindia.com/support/kb/automated-backups-cron-rclone-s3).

## Security hardening

- Change the k3s API port from 6443 if exposed, and firewall it to trusted IPs only
- Use Network Policies to restrict pod-to-pod traffic
- Enable RBAC (default in k3s); don't give users cluster-admin
- Scan images for CVEs: `trivy image your-app:latest`
- Keep k3s updated: `curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION=v1.30.X+k3s1 sh -`

## FAQ
**Q: Can I use the same kubectl commands I learned for EKS/GKE?**

Yes — 95%+ compatible. Some features (CSI drivers, cloud-specific integrations) differ.

**Q: Does k3s work with public Docker images?**

Yes — it pulls from Docker Hub, GHCR, GCR, etc. For private registries, add image pull secrets.
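For the private-registry case, the usual pattern is a `kubernetes.io/dockerconfigjson` Secret referenced via `imagePullSecrets`. A sketch; the registry URL, secret name, and app name below are illustrative:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: private-app
spec:
  replicas: 1
  selector:
    matchLabels: { app: private-app }
  template:
    metadata:
      labels: { app: private-app }
    spec:
      imagePullSecrets:
        - name: regcred   # created with: kubectl create secret docker-registry regcred ...
      containers:
        - name: app
          image: registry.example.com/team/app:latest
```

The `regcred` secret itself is created once per namespace with `kubectl create secret docker-registry`, supplying your registry URL and credentials.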

**Q: Can I migrate from k3s to full K8s later?**

Yes — kubectl manifests transfer. You may need to adjust storage classes and your ingress controller choice.

**Q: How many nodes can k3s handle?**

Up to ~50 nodes tested. For more than that, consider RKE2 or upstream Kubernetes.

**Q: k3s vs MicroK8s vs minikube?**

k3s: production-ready, minimal overhead. MicroK8s: Ubuntu-friendly, snap-based. minikube: dev/test only — don't run it in production.

Kubernetes on a DomainIndia VPS — start lean with k3s.


Still need help? Submit a support ticket