
Documentation Index

Fetch the complete documentation index at: https://docs.tesslate.com/llms.txt

Use this file to discover all available pages before exploring further.

Tesslate OpenSail

General

OpenSail is an open platform for building, running, and sharing AI-powered software. You describe a job, and OpenSail helps turn it into a working agent, app, scheduled automation, webhook handler, or MCP tool. Agents write code, use connected tools, remember context, and run in sandboxed environments.

Key capabilities:
  • Natural-language agent coding with a real IDE (Monaco, terminal, git)
  • Multi-container projects with live preview
  • btrfs-based workspaces with instant snapshots and forking
  • 22+ deployment targets and 6+ messaging channels
  • Marketplace for agents, skills, MCP connectors, and installable apps
  • Runs on any model via LiteLLM
OpenSail is open source under the Apache 2.0 license. You can:
  • Use it for personal or commercial projects
  • Self-host on your own infrastructure
  • Modify, fork, and redistribute the source
You only pay for:
  • Infrastructure (if you deploy to a cloud)
  • AI API usage (or avoid it entirely with free local models via Ollama or vLLM)
  • Optional credit purchases on the hosted service at tesslate.com
| Path | Who runs it | Data location | When to pick |
| --- | --- | --- | --- |
| Cloud (tesslate.com) | Tesslate | Tesslate infrastructure | Zero setup, managed updates |
| Desktop | You, on your machine | Local disk under OPENSAIL_HOME | Single user, fully offline capable |
| Self-hosted | You, on your infra | Your servers | Team deployments, data sovereignty, air-gapped |
The desktop app can also pair to a cloud instance (ours or your own self-hosted) to get remote sandboxed compute while keeping projects local.
Commercial use is explicitly allowed. Apache 2.0 permits paid products, client work, selling hosting services, and proprietary extensions. There are no restrictions on users, revenue, or business type.

Models and AI

All model calls route through LiteLLM, so anything LiteLLM supports works out of the box. That includes:
  • Anthropic (Claude Sonnet, Opus, Haiku)
  • OpenAI (GPT-4, GPT-4o, reasoning models)
  • Google (Gemini Pro, Flash)
  • DeepSeek, Qwen, Mistral, Meta, Moonshot, MiniMax, Z.AI, xAI
  • OpenRouter as a meta-provider
  • Self-hosted via Ollama, vLLM, or any OpenAI-compatible endpoint
Configure available models with LITELLM_DEFAULT_MODELS. See the model management guide.
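The exact value format of LITELLM_DEFAULT_MODELS is an assumption here (LiteLLM-style model names in a comma-separated list); the model management guide has the authoritative syntax:

```shell
# Hypothetical .env sketch; model names follow LiteLLM conventions.
LITELLM_DEFAULT_MODELS="claude-sonnet-4,gpt-4o,ollama/qwen2.5-coder"
```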
You have three options for model access:
  1. BYOK: attach your own key from OpenAI, Anthropic, OpenRouter, Groq, Together, DeepSeek, Fireworks, or anyone else. Pay your provider directly.
  2. Local models: run Ollama or vLLM for zero AI cost and full offline operation.
  3. Hosted credits on tesslate.com.
OpenSail can run fully offline. Two supported paths:
  • Desktop with Ollama: the Tauri app talks to a local model server over HTTP. No network required.
  • Self-hosted with a local LiteLLM pointed at vLLM, Ollama, or any OpenAI-compatible endpoint.
Features that require network access (marketplace, cloud pairing, external deploys) degrade gracefully when offline.
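A sketch of the self-hosted offline path. The wiring is an assumption: a LiteLLM proxy fronts a local Ollama server, and OpenSail's LITELLM_API_BASE points at the proxy. Model name and port are placeholders:

```shell
# Commands shown as comments; the env line is the piece OpenSail reads.
#   ollama pull qwen2.5-coder                         # fetch a local model
#   litellm --model ollama/qwen2.5-coder --port 4000  # LiteLLM proxy in front of it
LITELLM_API_BASE="http://localhost:4000"   # OpenSail talks to the proxy
```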

Deployment

Kubernetes is not required. There are three supported modes, selected by DEPLOYMENT_MODE:
  • desktop: SQLite + local subprocess or Docker runtime. Best for single users.
  • docker: Docker Compose with Postgres, Redis, Traefik. Best for single-server installs and dev.
  • kubernetes: per-project namespaces, btrfs CSI, Volume Hub. Best for multi-tenant production.
Start on Docker, migrate to Kubernetes when you need isolation, hibernation, or horizontal scaling.
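The mode switch itself is a single variable; a minimal .env sketch (the mode names are the documented ones):

```shell
# One variable selects the runtime shape.
DEPLOYMENT_MODE=docker   # or: desktop | kubernetes
```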
OpenSail runs anywhere that can run Docker or Kubernetes. Tested paths:
  • Local: laptop, home server, dev box
  • Cloud: AWS (EC2/EKS), GCP (GCE/GKE), Azure (VM/AKS), DigitalOcean, Hetzner, Linode
  • On-prem: company datacenter, private cloud, air-gapped networks
See AWS production and Kubernetes on Minikube.
Custom domains are supported. Set APP_DOMAIN=opensail.example.com, APP_PROTOCOL=https, and COOKIE_DOMAIN=.opensail.example.com. Projects become {container}.{project-slug}.opensail.example.com under Kubernetes and {container}.localhost under Docker.

TLS is handled by Traefik + Let’s Encrypt (Docker) or cert-manager + Cloudflare DNS (Kubernetes).
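Collected as a .env fragment (values taken directly from the settings above):

```shell
APP_DOMAIN=opensail.example.com
APP_PROTOCOL=https
COOKIE_DOMAIN=.opensail.example.com
# Resulting project URLs:
#   Kubernetes: {container}.{project-slug}.opensail.example.com
#   Docker:     {container}.localhost
```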
  • Minimum (Docker): 8 GB RAM, 10 GB disk, Docker Engine + Compose v2
  • Recommended dev: 16 GB RAM, 20 GB disk
  • Production Kubernetes: two or more btrfs-capable nodes with 4 vCPU / 8 GB RAM each, NGINX Ingress, managed Postgres, managed Redis, S3 bucket
  • Desktop: 4 GB RAM, 2 GB disk, macOS 12+, Windows 10+, or modern Linux

Storage and workspaces

Each project lives on a btrfs subvolume managed by the OpenSail btrfs CSI driver and the Volume Hub orchestrator. Subvolumes support instant snapshot-clone, which enables:
  • Fork a running workspace in seconds
  • Roll back to any snapshot in the per-project timeline (up to 5)
  • Hibernate and restore a full multi-container project atomically
Object persistence uses content-addressed storage on S3 (or MinIO in dev). See the architecture page.
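Content-addressed storage keys each object by the hash of its bytes, so identical content dedupes to a single key. A minimal illustration of the idea, not OpenSail's actual key scheme:

```shell
# The object's key is the SHA-256 of its bytes; storing the same bytes
# twice yields the same key, so the store holds only one copy.
printf 'hello world' > obj.bin
key=$(sha256sum obj.bin | cut -d' ' -f1)
mkdir -p cas
cp obj.bin "cas/$key"
echo "$key"
```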
Projects auto-hibernate after K8S_HIBERNATION_IDLE_MINUTES of inactivity (default 10). The Volume Hub triggers an S3 CAS sync, then the pods are torn down. On next use, EnsureCached brings the volume back: fast path if still cached on a node, otherwise peer-transfer from another node, otherwise restore from S3.

Multi-container projects hibernate and restore atomically because they share a volume and pod affinity pins them to one node.
There is no lock-in. Every project is a git repo inside its workspace: push to any remote, clone elsewhere, or copy the files out with kubectl cp or docker cp. Apps can also be exported as a CAS bundle via the publish pipeline.
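Exporting needs nothing proprietary; any standard git remote works. A self-contained sketch, with a local bare repo standing in for GitHub/GitLab:

```shell
# "work" plays the workspace repo; "backup.git" plays the remote.
git init -q work
git -C work -c user.email=you@example.com -c user.name=you \
    commit -q --allow-empty -m "workspace snapshot"
git init -q --bare backup.git
git -C work remote add backup ../backup.git
git -C work push -q backup HEAD:main
```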

Privacy and security

When self-hosted: everything stays on your infrastructure. No telemetry. The only network calls are to AI providers you configure and to marketplaces or deployment targets you connect.

On tesslate.com: projects live on Tesslate infrastructure. Prompts go to the configured AI provider. We do not sell or share customer data.
Agents cannot touch anything outside their sandbox:
  • Docker/K8s: agents run inside the project container and can only see files in that container
  • Desktop local runtime: agents see only the project directory under OPENSAIL_HOME (or a symlinked adopted folder)
  • Shell commands run in the project container
  • Tools gated by .tesslate/permissions.json require approval for sensitive actions
Credentials are protected at rest:
  • Passwords: bcrypt hashed in Postgres
  • OAuth tokens, API keys, channel credentials, deployment credentials: Fernet encrypted at rest
  • 2FA codes: Argon2 hashed, 5-attempt cap, short TTL
  • Desktop tsk_ keys: Tauri Stronghold vault (encrypted on disk)
See Authentication.
Self-hosted is neutral: compliance depends on your infrastructure. The hosted service at tesslate.com follows SOC 2 Type II controls. Contact [email protected] for a compliance pack.

Pricing and billing

Self-hosted: you pay your own cloud and AI bills. OpenSail itself is free.

Hosted (tesslate.com): tiered subscriptions plus optional credit packs. Stripe handles checkout. Marketplace creators get paid via Stripe Connect on a configurable creator/platform revenue split.

For self-hosted Stripe setup, configure STRIPE_SECRET_KEY, STRIPE_WEBHOOK_SECRET, and tier price IDs. See the billing guide.
Agent runs are bounded by AGENT_MAX_COST, AGENT_MAX_COST_PER_RUN, and AGENT_MAX_ITERATIONS. Container tiers are bounded by COMPUTE_MAX_CONCURRENT_PODS. MCP installs per user cap at MCP_MAX_SERVERS_PER_USER. All are configurable.
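As a .env sketch: the variable names are the documented ones, but the values (and their units) are illustrative assumptions, not shipped defaults:

```shell
# Illustrative guardrail values; tune to your own budget and cluster.
AGENT_MAX_COST=5.00
AGENT_MAX_COST_PER_RUN=0.50
AGENT_MAX_ITERATIONS=25
COMPUTE_MAX_CONCURRENT_PODS=10
MCP_MAX_SERVERS_PER_USER=20
```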

Building and publishing

Build your project in a workspace, then publish via the Architecture Panel. The publisher:
  1. Serializes the workspace to a manifest and CAS bundle
  2. Runs the staged approval pipeline (stage0 to stage3) for public listings
  3. Creates an immutable AppVersion
Private and team installs skip the public listing gate. See Publishing apps.
Configure a channel under Settings -> Channels. Each channel (Slack, Telegram, Discord, WhatsApp, Signal) stores Fernet-encrypted credentials. The Gateway v2 runner maintains a persistent connection, routes messages to the right agent, and delivers responses back to the platform.

Schedules can fire at cron intervals with per-schedule delivery targets. See Communication gateways.
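Schedule timing uses standard five-field cron syntax; a few illustrative expressions (attaching them to a delivery target is configured per schedule in the UI):

```shell
# Fields: minute hour day-of-month month day-of-week
#   30 8 * * 1-5    -> weekdays at 08:30
#   0 */6 * * *     -> every six hours
#   0 0 1 * *       -> first of each month at midnight
```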
The platform ships React/TypeScript bases, plus community bases for Next.js, Vite + FastAPI, Vite + Go, Django, Rails, Laravel, .NET, Flutter, Expo, and more. Agents can generate code in any language; containers run whatever the base specifies.
You can start from existing code or a template. Options:
  • Git import: paste a GitHub/GitLab/Bitbucket URL
  • Base template: pick a pre-wired stack
  • Describe: let the agent scaffold the stack
  • Desktop: adopt any existing folder via symlink or marker file

Troubleshooting

If the live preview is blank or stale, refresh the preview panel. Check the dev server container logs (docker compose logs <container> or kubectl logs -n proj-<uuid> <pod>). If hot reload is broken on mounted volumes, set CHOKIDAR_USEPOLLING=true.
If model calls fail, verify LITELLM_API_BASE is reachable and LITELLM_MASTER_KEY is correct. Check that the model selector shows valid entries. Look at the orchestrator logs for LiteLLM errors and per-user budget rejections.
If {container}.localhost URLs don't resolve, check that Traefik is running (docker compose ps). On systems that do not auto-resolve *.localhost, add 127.0.0.1 studio.localhost to your hosts file.
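On Linux and macOS the hosts entry can be appended in one line (requires sudo):

```shell
echo "127.0.0.1 studio.localhost" | sudo tee -a /etc/hosts
```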
If project pods won't start on Kubernetes, run kubectl describe pod -n proj-<uuid> and check the events. Common causes: missing K8S_DEVSERVER_IMAGE, a PVC still provisioning, or a wrong image pull secret. Verify the Volume Hub is healthy in kube-system.

Getting help

Docs

Full documentation.

GitHub

Source and issue tracker.

Discord

Community support.

Email

Direct support.