Tesslate OpenSail

This guide walks you from an empty machine to a working OpenSail cluster on minikube. You will end up with the full production storage stack (btrfs CSI driver plus Volume Hub plus CAS to MinIO) and the same Ingress topology we run in AWS, but entirely on http://localhost.
For the faster inner-loop dev flow without Kubernetes, use the Docker Setup guide instead. The K8s path is for contributors who need to reproduce storage, ingress, or snapshot bugs.
Every kubectl command in this guide includes --context=tesslate. That is a hard project rule: background cronjobs and other processes can flip your active context mid-session, so the context must be pinned on every call. Never use kubectl config use-context or any context-switching helper.
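If you find yourself typing the flag constantly, a thin wrapper keeps the context pinned without ever switching it. This is a local convenience only; the `kt` name is hypothetical and not part of the project:

```shell
# Hypothetical convenience wrapper: hard-codes the pinned context on every call.
# This is NOT a context switch; the active kubeconfig context never changes.
kt() {
  kubectl --context=tesslate "$@"
}

# Usage: kt get pods -n tesslate
```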

1. What you will run

OpenSail workloads

Backend, frontend, worker, Postgres, and Redis in the tesslate namespace. Same manifests the cloud runs.

NGINX Ingress

Installed via the minikube ingress addon. Answers on localhost and *.localhost when minikube tunnel is running.

btrfs CSI + Volume Hub

Per-node btrfs subvolumes, instant snapshot-clone, and CAS sync to MinIO. Mirrors the AWS storage stack one-for-one.

MinIO

S3-compatible object store in the minio-system namespace. Backs content-addressable project snapshots.
| Component           | Namespace     | Purpose                                        |
| ------------------- | ------------- | ---------------------------------------------- |
| OpenSail backend    | tesslate      | FastAPI orchestrator                           |
| OpenSail frontend   | tesslate      | React UI served by NGINX                       |
| PostgreSQL          | tesslate      | Primary database                               |
| Redis               | tesslate      | Pub/sub and task queue                         |
| MinIO               | minio-system  | S3-compatible object store                     |
| btrfs CSI driver    | kube-system   | Per-node subvolumes and snapshots              |
| Volume Hub          | kube-system   | Volume orchestrator, cache placement, S3 sync  |
| Snapshot controller | kube-system   | VolumeSnapshot CRDs                            |
| NGINX Ingress       | ingress-nginx | HTTP routing                                   |
Profile name is tesslate. Every --context=tesslate flag below refers to this profile.

2. Prerequisites

minikube 1.33+

brew install minikube, choco install minikube, or download from minikube.sigs.k8s.io.

kubectl 1.29+

brew install kubectl or choco install kubernetes-cli.

Docker 24.x

Docker Desktop on macOS/Windows, or docker-ce on Linux. Used as the minikube driver.

System resources

  • CPU: 4 cores available to Docker
  • RAM: 8 GB available to Docker
  • Disk: 40 GB free for the minikube VM

btrfs requirement

The btrfs CSI driver needs btrfs inside the minikube VM. The Docker driver’s base image already ships with btrfs-progs and the driver auto-creates its pool at /mnt/tesslate-pool inside the node. No host filesystem changes are required when using --driver docker. If you switch to kvm2 or hyperkit, make sure the guest image has btrfs-progs and a mountable btrfs partition.

Hosts file

Add these entries:
  • Linux / macOS: /etc/hosts
  • Windows: C:\Windows\System32\drivers\etc\hosts
127.0.0.1 localhost
127.0.0.1 minio.localhost
Project container URLs follow http://<slug>-<container>.localhost and are resolved by NGINX Ingress when minikube tunnel --profile tesslate is running. Modern browsers resolve *.localhost to 127.0.0.1 automatically; if yours does not, add the specific project host to hosts too.
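As a quick sanity check, the preview hostname can be composed from the slug and container name. The helper below is hypothetical, not part of the repo; it only reproduces the URL format described above:

```shell
# Hypothetical helper: compose the project preview URL this guide describes.
preview_url() {
  local slug="$1" container="$2"
  printf 'http://%s-%s.localhost/\n' "$slug" "$container"
}

preview_url my-app-k3x8n2 frontend   # http://my-app-k3x8n2-frontend.localhost/
```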

3. Start the cluster

minikube start \
  --profile tesslate \
  --cpus 4 \
  --memory 8g \
  --disk-size 40g \
  --driver docker \
  --addons ingress \
  --addons storage-provisioner \
  --addons metrics-server
The ingress addon installs NGINX Ingress in the ingress-nginx namespace. In a second terminal, start the tunnel with minikube tunnel --profile tesslate; it exposes the Ingress controller on 127.0.0.1, so keep that terminal open while you use the cluster.

4. Install snapshot controller and btrfs CSI driver

OpenSail depends on Kubernetes VolumeSnapshot resources for hibernation and project timeline. Install the CRDs and controller first, then build and deploy the btrfs CSI driver.
Step 1: Install the snapshot controller CRDs

SNAP_VERSION=v8.2.0
CRD_BASE="https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/${SNAP_VERSION}/client/config/crd"
CTRL_BASE="https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/${SNAP_VERSION}/deploy/kubernetes/snapshot-controller"

kubectl --context=tesslate apply -f ${CRD_BASE}/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
kubectl --context=tesslate apply -f ${CRD_BASE}/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
kubectl --context=tesslate apply -f ${CRD_BASE}/snapshot.storage.k8s.io_volumesnapshots.yaml

kubectl --context=tesslate apply -f ${CTRL_BASE}/rbac-snapshot-controller.yaml
kubectl --context=tesslate apply -f ${CTRL_BASE}/setup-snapshot-controller.yaml
Step 2: Build and load the btrfs CSI image

docker build -t tesslate-btrfs-csi:latest -f services/btrfs-csi/Dockerfile services/btrfs-csi/
minikube -p tesslate image load tesslate-btrfs-csi:latest
Step 3: Configure CSI credentials

The minikube overlay at services/btrfs-csi/overlays/minikube/ ships an example secrets file. Copy it first:
cp services/btrfs-csi/overlays/minikube/csi-credentials.example.yaml \
   services/btrfs-csi/overlays/minikube/csi-credentials.yaml
Edit csi-credentials.yaml only if you change the MinIO admin password. The default value matches the example MinIO secret you will edit in the next section. Both secrets must agree.
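A quick way to confirm the two files agree is to compare the password value directly. The sketch below uses hypothetical helper names, only understands flat `key: value` lines, and assumes both files carry a MINIO_ROOT_PASSWORD key; the real key names in the CSI credentials file may differ:

```shell
# Sketch: extract a flat `key: value` scalar from a YAML file (no nesting support).
yaml_get() {
  sed -n "s/^[[:space:]]*$2:[[:space:]]*//p" "$1" | head -n1
}

# Sketch: succeed only when both files carry the same non-empty password.
passwords_match() {
  local a b
  a=$(yaml_get "$1" MINIO_ROOT_PASSWORD)
  b=$(yaml_get "$2" MINIO_ROOT_PASSWORD)
  [ -n "$a" ] && [ "$a" = "$b" ]
}
```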
Step 4: Deploy the CSI driver and Volume Hub

kubectl --context=tesslate apply -k services/btrfs-csi/overlays/minikube
kubectl --context=tesslate rollout status daemonset/tesslate-btrfs-csi-node -n kube-system --timeout=120s
kubectl --context=tesslate rollout status deployment/tesslate-volume-hub -n kube-system --timeout=120s
This installs:
  • tesslate-btrfs-csi-node DaemonSet (one pod per node)
  • tesslate-volume-hub Deployment (Hub plus CSI provisioner / snapshotter sidecars)
  • tesslate-image-precache DaemonSet (pre-pulls the devserver image on every node)

What Volume Hub does

Volume Hub is the storageless orchestrator that sits above the per-node btrfs CSI driver. It exposes a gRPC API at tesslate-volume-hub.kube-system.svc:9750 and the backend talks to it through orchestrator/app/services/hub_client.py.
| RPC          | What it does                                                                    |
| ------------ | ------------------------------------------------------------------------------- |
| CreateVolume | Pick a node with capacity, create an empty or template-cloned subvolume          |
| EnsureCached | Guarantee a volume is present on the node where a pod is about to be scheduled   |
| TriggerSync  | Push CAS content to MinIO when a project hibernates                              |
| DeleteVolume | Remove the subvolume and any S3 objects it owns                                  |
The minikube overlay patches imagePullPolicy: Never so the locally loaded tesslate-btrfs-csi:latest image is always used, even if a registry rebuild changes the :latest digest upstream.

5. Configure secrets

Each cluster secret has a *.example.yaml under k8s/overlays/minikube/secrets/. Copy and edit every file.
Step 1: Copy the example secrets

cd k8s/overlays/minikube/secrets

cp app-secrets.example.yaml      app-secrets.yaml
cp postgres-secret.example.yaml  postgres-secret.yaml
cp s3-credentials.example.yaml   s3-credentials.yaml
cp minio-credentials.example.yaml minio-credentials.yaml

cd -
Step 2: Fill in required values

| File                   | Key                                      | Notes                                                                               |
| ---------------------- | ---------------------------------------- | ----------------------------------------------------------------------------------- |
| app-secrets.yaml       | SECRET_KEY                               | python -c "import secrets; print(secrets.token_hex(32))"                             |
| app-secrets.yaml       | INTERNAL_API_SECRET                      | Same generator; must match ORCHESTRATOR_INTERNAL_SECRET in tesslate-btrfs-csi-config |
| app-secrets.yaml       | DATABASE_URL                             | postgresql+asyncpg://tesslate_user:<password>@postgres:5432/tesslate_dev             |
| app-secrets.yaml       | LITELLM_API_BASE, LITELLM_MASTER_KEY     | Your LiteLLM proxy or OpenAI-compatible endpoint                                     |
| postgres-secret.yaml   | POSTGRES_PASSWORD                        | Must match the password embedded in DATABASE_URL                                     |
| s3-credentials.yaml    | S3_ACCESS_KEY_ID, S3_SECRET_ACCESS_KEY   | Must match MINIO_ROOT_USER / MINIO_ROOT_PASSWORD                                     |
| minio-credentials.yaml | MINIO_ROOT_USER, MINIO_ROOT_PASSWORD     | MinIO admin credentials                                                              |
Leave OAuth, Stripe, and SMTP blank unless you need those flows during local development. The backend boots fine with empty values.
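Both random values can be generated straight from the shell with the same generator the table gives; capture them into variables for pasting, and remember INTERNAL_API_SECRET must also be copied into ORCHESTRATOR_INTERNAL_SECRET in tesslate-btrfs-csi-config:

```shell
# Generate the two independent 64-hex-character secrets
SECRET_KEY=$(python3 -c "import secrets; print(secrets.token_hex(32))")
INTERNAL_API_SECRET=$(python3 -c "import secrets; print(secrets.token_hex(32))")

echo "${#SECRET_KEY}"   # 64
```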
Step 3 (optional): Llama API secret for seeded apps

The seeded crm-demo and nightly-digest apps reference a cluster secret called llama-api-credentials. Without it, those pods fail to start.
kubectl --context=tesslate -n tesslate create secret generic llama-api-credentials \
  --from-literal=api_key='<your-llama-api-key>'
OAuth callbacks in the example secrets point at http://localhost/api/auth/<provider>/callback. Providers that enforce HTTPS callbacks will not work against pure minikube. Use Cloudflare Tunnel (k8s/overlays/minikube/cloudflare-tunnel/) if you need to exercise them.

6. Deploy OpenSail

Step 1: Build and load application images

docker build -t tesslate-backend:latest    -f orchestrator/Dockerfile          orchestrator/
docker build -t tesslate-frontend:latest   -f app/Dockerfile.prod              app/
docker build -t tesslate-devserver:latest  -f orchestrator/Dockerfile.devserver .

minikube -p tesslate image load tesslate-backend:latest
minikube -p tesslate image load tesslate-frontend:latest
minikube -p tesslate image load tesslate-devserver:latest
The minikube overlay sets K8S_DEVSERVER_IMAGE=tesslate-devserver:latest and K8S_IMAGE_PULL_POLICY=Never, so the devserver image must already be inside the node before any user project starts.
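Before deploying, it is worth confirming all three tags actually made it into the node. The helper below is a sketch (hypothetical name) that scans the output of minikube image ls:

```shell
# Sketch: read `minikube -p tesslate image ls` output on stdin and report
# any of the three required local tags that are missing.
check_images() {
  local list img missing=0
  list=$(cat)
  for img in tesslate-backend:latest tesslate-frontend:latest tesslate-devserver:latest; do
    printf '%s\n' "$list" | grep -q "$img" || { echo "missing: $img" >&2; missing=1; }
  done
  return "$missing"
}

# Usage: minikube -p tesslate image ls | check_images && echo "all images loaded"
```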
Step 2: Deploy MinIO

kubectl --context=tesslate apply -k k8s/overlays/minikube/minio
kubectl --context=tesslate wait --for=condition=ready pod -l app=minio -n minio-system --timeout=180s
The init job creates two buckets: tesslate-projects (used by the backend) and tesslate-btrfs-snapshots (used by the CSI driver’s CAS sync).
Step 3: Apply the OpenSail overlay

kubectl --context=tesslate apply -k k8s/overlays/minikube
This pulls in everything from k8s/base/ (namespace, backend, frontend, Postgres, Redis, Ingress, security, Volume Hub references) and applies the minikube-specific patches (local images, imagePullPolicy: Never, HTTP-only Ingress, single replicas).
Step 4: Wait for rollouts

kubectl --context=tesslate rollout status deployment/postgres          -n tesslate --timeout=180s
kubectl --context=tesslate rollout status deployment/redis             -n tesslate --timeout=120s
kubectl --context=tesslate rollout status deployment/tesslate-backend  -n tesslate --timeout=300s
kubectl --context=tesslate rollout status deployment/tesslate-frontend -n tesslate --timeout=180s
The backend runs Alembic migrations on startup. If the first pod fails with a migration error it will retry; give it a minute before investigating.
If all four rollouts succeed, OpenSail is live. Keep the minikube tunnel terminal open.

7. Access the app

With the tunnel running, NGINX Ingress answers on localhost. No port-forwarding needed for normal use.
| URL                                    | What it serves                                                                |
| -------------------------------------- | ----------------------------------------------------------------------------- |
| http://localhost/                      | Frontend                                                                      |
| http://localhost/api/                  | Backend API                                                                   |
| http://<slug>-<container>.localhost/   | User project preview (for example, http://my-app-k3x8n2-frontend.localhost)   |
| http://minio.localhost/                | MinIO S3 API (after the Ingress rule or via port-forward below)               |
Without the tunnel, use port-forwards:
kubectl --context=tesslate port-forward -n tesslate svc/tesslate-frontend-service 5000:80
kubectl --context=tesslate port-forward -n tesslate svc/tesslate-backend-service  8000:8000
kubectl --context=tesslate port-forward -n minio-system svc/minio 9001:9001
OpenSail does not run Traefik in Kubernetes. Traefik is only the Docker Compose dev mode router; NGINX Ingress serves the same role on minikube and in AWS.

8. Seed the database

BACKEND_POD=$(kubectl --context=tesslate get pods -n tesslate \
  -l app=tesslate-backend -o jsonpath='{.items[0].metadata.name}')

for script in \
  seed_marketplace_bases.py \
  seed_marketplace_agents.py \
  seed_opensource_agents.py \
  seed_skills.py \
  seed_themes.py \
  seed_mcp_servers.py \
  seed_community_bases.py
do
  kubectl --context=tesslate cp "scripts/seed/$script" "tesslate/${BACKEND_POD}:/tmp/$script"
  kubectl --context=tesslate exec -n tesslate "$BACKEND_POD" -- python "/tmp/$script"
done
Individual failures are non-fatal; the remaining scripts still run. Every seed script is idempotent, so it is safe to re-run.

9. Create a project

From http://localhost/, sign up and create a project. Behind the scenes the backend:
  1. Creates a namespace proj-<uuid> with a NetworkPolicy isolating it from other project namespaces.
  2. Asks Volume Hub to pick a node with capacity and provision a btrfs subvolume, cloning from the template snapshot if one exists.
  3. Creates a PVC bound to storage class tesslate-btrfs. PVC size defaults to K8S_PVC_SIZE=5Gi.
  4. Creates one Deployment and Service per container declared in the project’s .tesslate/config.json. Multiple containers are kept on the same node via pod affinity so they share the volume without cross-node traffic.
  5. Adds an Ingress rule per exposed container on http://<slug>-<container>.localhost.
Inspect what landed:
kubectl --context=tesslate get ns | grep proj-
NS=<proj-uuid>
kubectl --context=tesslate get all,pvc,ingress -n $NS
kubectl --context=tesslate describe pod -n $NS -l app=frontend

10. Snapshots and hibernation

OpenSail keeps up to K8S_MAX_SNAPSHOTS_PER_PROJECT=5 VolumeSnapshot objects per project as a rolling timeline. Idle projects hibernate after K8S_HIBERNATION_IDLE_MINUTES=10 minutes: the backend calls Volume Hub TriggerSync to push the CAS content to MinIO, then tears down the compute pod while keeping the volume cached on its node.
NS=<proj-uuid>
kubectl --context=tesslate get volumesnapshots     -n $NS
kubectl --context=tesslate get volumesnapshotcontents
kubectl --context=tesslate get pvc                 -n $NS
When you click Start on a hibernated project, Volume Hub takes the fast path if the cached subvolume is still on its node; otherwise it peer-transfers the volume from another node or restores it from MinIO.
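The idle rule can be sketched as a tiny predicate. This is a hypothetical helper for illustration; the real check lives in the backend and is driven by K8S_HIBERNATION_IDLE_MINUTES:

```shell
# Sketch: true when the project has been idle for at least `idle_minutes`.
should_hibernate() {
  local last_activity_epoch="$1" now_epoch="$2" idle_minutes="${3:-10}"
  [ $(( (now_epoch - last_activity_epoch) / 60 )) -ge "$idle_minutes" ]
}

should_hibernate 0 600 && echo "hibernate"   # exactly 10 minutes idle -> hibernate
```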

11. Common commands

Always include --context=tesslate.
kubectl --context=tesslate get pods -n tesslate
kubectl --context=tesslate get pods -A | grep proj-
kubectl --context=tesslate get events -n tesslate --sort-by='.lastTimestamp'
On Windows Git Bash, prefix kubectl and docker exec calls with MSYS_NO_PATHCONV=1 so paths are not mangled.
Use kubectl delete pod instead of kubectl rollout restart when swapping an image with the same tag. Rollout restart can leave a pod running on stale image layers; deleting the pod guarantees the replacement container starts from the image currently loaded on the node.

12. Teardown

./k8s/scripts/minikube/teardown.sh
Destroying the cluster loses all user project data. PVCs survive a minikube stop / minikube start cycle; they do not survive minikube delete.

13. Troubleshooting

Symptom: btrfs CSI node pods fail to start. The minikube VM image must have btrfs tools available. Use --driver docker (confirmed working) and check the init container:
kubectl --context=tesslate describe pod -n kube-system -l app=tesslate-btrfs-csi-node
kubectl --context=tesslate logs       -n kube-system -l app=tesslate-btrfs-csi-node -c init-btrfs
If you changed the driver, the safest fix is to recreate the cluster with the Docker driver.
Symptom: no matches for kind "VolumeSnapshot" in version "snapshot.storage.k8s.io/v1". Re-run section 4.
kubectl --context=tesslate get crds | grep snapshot.storage.k8s.io
kubectl --context=tesslate get pods  -n kube-system -l app=snapshot-controller
Symptom: http://localhost/ or a project preview URL does not load.
  1. Confirm minikube tunnel --profile tesslate is still running in another terminal.
  2. kubectl --context=tesslate get ingress -A | grep proj- and verify the rule exists.
  3. kubectl --context=tesslate describe ingress -n proj-<uuid> and look for controller errors.
If your OS does not resolve *.localhost automatically, add the specific host to /etc/hosts pointing at 127.0.0.1.
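To check the hosts-file fallback quickly, a small grep helper works. It is hypothetical, and note the hostname is matched as a regex, so dots match loosely:

```shell
# Sketch: does the given hosts file pin this name to 127.0.0.1?
host_pinned() {
  local name="$1" hosts_file="${2:-/etc/hosts}"
  grep -Eq "^127\.0\.0\.1[[:space:]]+$name([[:space:]]|\$)" "$hosts_file"
}

# Usage: host_pinned my-app-k3x8n2-frontend.localhost || echo "add it to hosts"
```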
Symptom: pods stuck in ErrImageNeverPull or ImagePullBackOff. Local images were not loaded into the minikube node, or the tag drifted.
minikube -p tesslate image ls | grep tesslate
minikube -p tesslate image load tesslate-backend:latest
kubectl --context=tesslate rollout restart deployment/tesslate-backend -n tesslate
Symptom: a pod is crash-looping. Inspect it and its previous container logs:
kubectl --context=tesslate describe pod <pod> -n <namespace>
kubectl --context=tesslate logs <pod> -n <namespace> --previous
Common causes: wrong DATABASE_URL in app-secrets.yaml, Postgres not ready yet, INTERNAL_API_SECRET mismatch between tesslate-app-secrets and tesslate-btrfs-csi-config, or missing llama-api-credentials for seeded apps.
To reset the database from scratch, delete the Postgres PVC and pod, then restart the backend:
kubectl --context=tesslate delete pvc postgres-pvc -n tesslate
kubectl --context=tesslate delete pod -l app=postgres -n tesslate
kubectl --context=tesslate rollout restart deployment/tesslate-backend -n tesslate
The orchestrator re-runs Alembic migrations on the next boot and you can re-seed the database.

Next steps

AWS Production

Deploy OpenSail to EKS with NLB ingress, cert-manager, and Cloudflare DNS.

Docker Setup

Faster inner-loop dev without Kubernetes. Same code paths, different orchestrator.

Publishing Apps

Package and publish an app to the Tesslate marketplace from your local cluster.

Architecture

High-level tour of how the orchestrator, agents, and runtimes fit together.

Getting help

Discord

Real-time help from the Tesslate community.

GitHub

Source, issues, and release notes.

Email

Direct support at [email protected].