
General Questions

What is Tesslate Studio?

Tesslate Studio is an open-source, AI-powered development platform that helps you build full-stack web applications using natural language. It can be self-hosted, giving you complete control over your development environment and data.

Key features:
  • AI code generation from natural language prompts
  • Live preview with hot module replacement
  • Self-hosted or cloud-hosted at tesslate.com
  • Container isolation for every project
  • Support for 100+ AI models via LiteLLM
  • Multi-container projects (frontend, backend, database)
  • Marketplace for agents and project templates

Is Tesslate Studio free?

Yes. Tesslate Studio is 100% open source under the Apache 2.0 license. You can:
  • Use it for free (personal or commercial)
  • Self-host on your own infrastructure
  • Modify and customize the source code
  • Fork and distribute it
You only pay for:
  • Infrastructure costs (if deploying to cloud providers)
  • AI API usage (OpenAI, Anthropic, etc.) or use free local models with Ollama
  • Optional credit purchases on the hosted platform

Why self-host Tesslate Studio?

Data Sovereignty:
  • Your code never leaves your infrastructure (when self-hosted)
  • Complete control over data storage
  • No vendor lock-in
Cost Control:
  • No per-seat pricing
  • Pay only for infrastructure and AI APIs
  • Use free local models via Ollama for zero AI costs
Customization:
  • Fork and modify the entire platform
  • Create custom agents with your own system prompts
  • Add proprietary features and integrations
Self-Hosting:
  • Deploy anywhere that runs Docker (laptop, cloud, on-prem)
  • Works offline with local models
  • No internet dependency for core features

Do I need to pay for AI API access?

Not necessarily. You have three options:
1. Bring Your Own API Keys (BYOK)
  • Use OpenAI, Anthropic, Google Gemini, OpenRouter, and more
  • Pay your provider directly
2. Local Models (Free)
  • Install Ollama or LM Studio
  • Use free models like Llama, Mistral, CodeLlama
  • 100% offline, zero AI costs
3. Hybrid Approach
  • Use local models for quick iteration
  • Use cloud models for complex tasks
  • Switch models per conversation
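
As a sketch of the local-model option, a self-hosted instance can be pointed at a local Ollama server through its OpenAI-compatible endpoint. The LITELLM_API_BASE variable is the one mentioned in the troubleshooting section; treat the exact values as illustrative and check your instance's configuration reference:

```
# .env (illustrative values, not a definitive configuration)
# Ollama serves an OpenAI-compatible API on port 11434 by default.
LITELLM_API_BASE=http://localhost:11434/v1
```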

Can I use Tesslate Studio commercially?

Yes. The Apache 2.0 license allows:
  • Commercial use
  • Building paid products with it
  • Using it for client work
  • Selling hosting services
  • Creating proprietary extensions
No restrictions on:
  • Number of users
  • Revenue generated
  • Type of business

Technical Questions

What is the tech stack?

Frontend:
  • React 19, TypeScript, Vite 7
  • Tailwind CSS for styling
  • Monaco Editor (the same engine as VS Code)
  • XYFlow for architecture diagrams
Backend:
  • FastAPI (Python 3.11)
  • PostgreSQL with asyncpg
  • SQLAlchemy ORM with Pydantic schemas
  • LiteLLM for AI model routing
Infrastructure:
  • Docker / Docker Compose (development)
  • Kubernetes with Kustomize (production)
  • Traefik (reverse proxy in Docker mode)
  • NGINX Ingress (routing in Kubernetes mode)
  • AWS S3 / MinIO for project storage
  • Terraform for infrastructure as code

What are the system requirements?

Minimum (Docker mode):
  • 8GB RAM
  • 10GB disk space
  • Docker Desktop (or Docker Engine on Linux)
  • Modern browser (Chrome, Firefox, Safari, Edge)
Recommended:
  • 16GB RAM
  • 20GB disk space
  • Fast internet (for cloud AI API calls)
For Kubernetes production:
  • 2+ nodes with 4 vCPUs and 8GB RAM each
  • Persistent volume provisioner (e.g., AWS EBS)
  • NGINX Ingress Controller
Supported Operating Systems:
  • Windows 10/11 (with Docker Desktop or WSL2)
  • macOS 12+
  • Linux (Ubuntu, Debian, Fedora, etc.)

Can I run Tesslate Studio without Docker?

Not recommended, but possible. Docker provides:
  • Easy setup with a single docker compose up command
  • Container isolation for user projects
  • Consistent environments across machines
  • Simplified networking with Traefik
Without Docker, you need to manually:
  • Install and configure PostgreSQL
  • Set up Traefik or NGINX as a reverse proxy
  • Run Node.js dev servers for each project
  • Manage project isolation yourself
For most users, Docker is much simpler. See the local development guide if you need native setup.

Which AI models are supported?

Tesslate Studio supports 100+ AI models through LiteLLM:

Cloud Providers:
  • OpenAI (GPT-4o, GPT-4, GPT-3.5)
  • Anthropic (Claude Sonnet, Claude Haiku, Claude Opus)
  • Google (Gemini Pro, Gemini Flash)
  • Azure OpenAI
  • OpenRouter (access to dozens of models)
  • Cohere, Together AI, and many more
Local Models:
  • Ollama (Llama, Mistral, CodeLlama, Qwen, etc.)
  • LM Studio
  • Any OpenAI-compatible API endpoint
You can configure which models are available, set per-model pricing, and even add custom OpenRouter models.

Does Tesslate Studio send my code or data to external servers?

No (when self-hosted). Everything runs on your infrastructure:
  • Code stays on your machine/server
  • Database is local to your instance
  • No telemetry or tracking
  • No external dependencies except AI APIs (if you choose to use them)
On the hosted platform (tesslate.com):
  • Your projects are stored on our infrastructure
  • AI prompts go to the configured AI provider
  • We do not share or sell your data

What AI agents are available?

Tesslate Studio includes four AI agent types, each designed for different workflows:
  • StreamAgent: quick UI tasks, fast iteration. Single LLM call with streaming response.
  • IterativeAgent: multi-step tasks, complex logic. Loop of LLM calls with tool execution between each.
  • ReActAgent: reasoning-heavy tasks. Explicit Reason-then-Act cycle for each step.
  • TesslateAgent: general purpose (default). Full-featured agent with native function calling, planning mode, subagents, and context compaction.
You can switch agents per conversation based on what you are working on.
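
The iterative call-tool-call cycle described above can be sketched as follows. Everything here (function names, message format, the stub model) is hypothetical and only illustrates the loop, not Tesslate Studio's actual agent API:

```python
# Illustrative sketch of an IterativeAgent-style loop: call the model,
# execute any requested tool, feed the result back, repeat until done.
from typing import Callable

TOOLS: dict[str, Callable[[str], str]] = {
    "read_file": lambda path: f"<contents of {path}>",  # stubbed tool
}

def run_agent(call_llm: Callable[[list[str]], dict], task: str, max_steps: int = 5) -> str:
    history = [task]
    for _ in range(max_steps):
        step = call_llm(history)            # model decides: use a tool or finish
        if step["type"] == "final":
            return step["text"]
        tool_result = TOOLS[step["tool"]](step["arg"])
        history.append(tool_result)         # tool output becomes new context
    return "max steps reached"

# A canned "model" that asks for one tool call, then answers.
def fake_llm(history: list[str]) -> dict:
    if len(history) == 1:
        return {"type": "tool", "tool": "read_file", "arg": "app.tsx"}
    return {"type": "final", "text": "done after " + history[-1]}

print(run_agent(fake_llm, "Fix the header"))
```

The ReActAgent variant would add an explicit reasoning string to each step before acting; the loop structure stays the same.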

Deployment Questions


Where can I deploy Tesslate Studio?

Anywhere that runs Docker:

Local:
  • Your laptop/desktop
  • Home server
  • Development machines
Cloud:
  • AWS (EC2, EKS, ECS)
  • Google Cloud (GKE, Compute Engine)
  • Azure (AKS, Virtual Machines)
  • DigitalOcean, Hetzner, Linode, etc.
On-Premises:
  • Company datacenter
  • Private cloud
  • Air-gapped networks (with local AI models via Ollama)
Production (Kubernetes):
  • AWS EKS with EBS persistent volumes
  • Minikube for local Kubernetes development
  • Any managed Kubernetes service

What is the difference between Docker mode and Kubernetes mode?

Tesslate Studio supports two deployment modes, controlled by the DEPLOYMENT_MODE environment variable:
  • Use case: local development and small teams (Docker) vs. production and scaling (Kubernetes)
  • Routing: Traefik (*.localhost) vs. NGINX Ingress (*.yourdomain.com)
  • Isolation: Docker networks vs. per-project Kubernetes namespaces
  • Storage: local filesystem vs. Persistent Volume Claims (PVC)
  • Snapshots: not available in Docker mode; EBS VolumeSnapshots for project versioning in Kubernetes mode
  • Setup: docker compose up vs. kubectl apply -k
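
As a minimal sketch, switching modes is a matter of setting DEPLOYMENT_MODE before starting; the variable name comes from the text above, but the accepted values here are an assumption:

```
# .env (illustrative — verify the accepted values in the deployment guide)
DEPLOYMENT_MODE=docker        # local development
# DEPLOYMENT_MODE=kubernetes  # production
```

Then start with docker compose up or kubectl apply -k accordingly.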

Can I use a custom domain?

Yes. Configure it in your environment:
APP_DOMAIN=studio.yourcompany.com
APP_PROTOCOL=https
Projects will be available at:
  • studio.yourcompany.com (main app)
  • container.project-slug.studio.yourcompany.com (user projects in K8s mode)
See the Deployment Guide for SSL setup with Traefik (Docker) or cert-manager (Kubernetes).

Can I use an external PostgreSQL database?

Yes. Set DATABASE_URL in your environment:
DATABASE_URL=postgresql+asyncpg://user:pass@your-db-host:5432/tesslate
Supported:
  • AWS RDS PostgreSQL
  • Google Cloud SQL
  • Azure Database for PostgreSQL
  • Any managed or self-hosted PostgreSQL 13+

How does project storage work in Kubernetes mode?

In Kubernetes mode, Tesslate uses the S3 Sandwich pattern for cost-efficient project storage:
  1. Hydration: When a project starts, files are downloaded from S3 to a local PVC
  2. Runtime: The dev server uses fast local disk I/O
  3. Dehydration: When a project hibernates, files are uploaded back to S3
With EBS VolumeSnapshots (on AWS EKS), projects can also keep persistent block storage that survives pod restarts, with up to 5 snapshots per project for version history.
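
As an illustration of the S3 Sandwich lifecycle only (not the actual implementation, which talks to real S3/MinIO and PVCs), the three steps can be sketched with in-memory dictionaries:

```python
# Illustrative sketch: a dict stands in for S3, another for the local PVC.
# Function names are hypothetical.
s3 = {"proj-123/app.tsx": "export default function App() {}"}

def hydrate(project: str) -> dict:
    """On project start: copy files from (fake) S3 to fast local storage."""
    return {k: v for k, v in s3.items() if k.startswith(project + "/")}

def dehydrate(project: str, local: dict) -> None:
    """On hibernation: upload the (possibly edited) files back to S3."""
    s3.update(local)

local_pvc = hydrate("proj-123")                # 1. hydration
local_pvc["proj-123/app.tsx"] += " // edited"  # 2. runtime: fast local I/O
dehydrate("proj-123", local_pvc)               # 3. dehydration

print(s3["proj-123/app.tsx"])
```

The payoff of this pattern is that S3 is only billed for storage at rest, while the dev server always works against fast local disk.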

How much does it cost to run?

Small Team (1-10 users):
  • 2 vCPUs, 8GB RAM, 50GB SSD
  • Estimated cloud cost: $40-80/month
Medium Team (10-50 users):
  • 4 vCPUs, 16GB RAM, 100GB SSD
  • Estimated cloud cost: $80-160/month
Large Team (50+ users):
  • 8+ vCPUs, 32GB+ RAM, 200GB+ SSD
  • Consider multi-node Kubernetes with load balancing
  • Estimated cloud cost: $200+/month
Costs vary by cloud provider and region.

Usage Questions

What languages and frameworks are supported?

Primary Focus:
  • React + TypeScript (frontend)
  • JavaScript/TypeScript (full-stack)
Backend Options (via multi-container projects):
  • Python (FastAPI, Flask, Django)
  • Go
  • Node.js (Express, Next.js)
  • Rust, Ruby, PHP, and more through community bases
AI agents can generate code in any language. Templates and bases provide pre-configured stacks for the most popular combinations.

Can I import an existing project?

Yes. There are multiple methods:
1. GitHub/GitLab/Bitbucket Import
  • Connect your Git provider account
  • Import any public or private repository
  • Tesslate creates an isolated container and installs dependencies
2. From a Base Template
  • Choose from 60+ community bases (Next.js, Go, Rust, Django, Laravel, Rails, Flutter, .NET, etc.)
  • Start with a configured stack and customize from there
3. Git Clone via Terminal
  • Create a blank project
  • Use the integrated terminal to git clone any repository

Does Tesslate Studio support team collaboration?

Currently, Tesslate Studio is designed for individual users with isolated projects.

Workaround:
  • Use GitHub for collaboration
  • Each team member uses their own Tesslate instance
  • Share work via Git commits and pull requests
Planned Features:
  • Real-time collaboration
  • Team workspaces
  • Shared projects

How do I export my project?

There are multiple options:
1. GitHub Integration (recommended)
  • Push to a GitHub repository
  • Clone the repository anywhere else
2. Git Clone
  • Projects are Git repositories inside their containers
  • Use the Git panel to push to a remote
3. Container File Copy
docker cp <container-id>:/app ./my-project
Or in Kubernetes:
kubectl cp proj-<uuid>/<pod-name>:/app ./my-project

What happens when I delete a project?

When you delete a project:
  1. The database record is removed
  2. The container (Docker or Kubernetes pod/namespace) is stopped and deleted
  3. All project files are removed from local storage and S3
This is permanent and cannot be undone.

Best practices:
  • Push to GitHub before deleting
  • Download important files locally
  • In Kubernetes mode, snapshots are soft-deleted and retained for 30 days before permanent removal

Does Tesslate Studio work offline?

Partially.

What works offline:
  • Core application UI
  • Code editor (Monaco)
  • Live preview
  • Project management
  • AI generation (if using Ollama with local models)
What requires internet:
  • Cloud AI models (OpenAI, Anthropic, etc.)
  • GitHub/GitLab/Bitbucket integration
  • npm/pip package installs
  • External deployments (Vercel, Netlify)
For full offline usage: Install Ollama and use local models.

Troubleshooting

The live preview is not loading. What should I do?

Try these steps:
  1. Refresh the preview: Click the refresh button in the preview panel
  2. Check for errors: Open browser console (F12) and look for errors
  3. Restart the dev server: Go to Settings and restart the development server
  4. Check container logs:
    • Docker: docker compose logs <container-name>
    • Kubernetes: kubectl logs -n proj-<uuid> <pod-name> -c dev-server
  5. Clear browser cache: Hard refresh with Ctrl+Shift+R (or Cmd+Shift+R on Mac)

AI generation is failing. What could be wrong?

Common causes:
1. Invalid API key
  • Check your API key configuration in Settings
  • Verify the key with your AI provider
  • For self-hosted: check the LITELLM_API_BASE and related environment variables
2. Rate limiting
  • You have hit API rate limits from your provider
  • Wait a moment and try again, or switch to a different model
3. Model not available
  • The selected model might be deprecated or unavailable
  • Try a different model from the chat selector
  • Check LiteLLM compatibility
4. Insufficient credits
  • Check your credit balance in the billing page
  • Purchase additional credits or switch to a BYOK model

I cannot reach studio.localhost. How do I fix it?

Solutions:
  1. Check Docker is running: Open Docker Desktop and verify status
  2. Check containers are up: docker compose ps (all should show “Up”)
  3. Add hosts entry if needed:
    • Mac/Linux: Add 127.0.0.1 studio.localhost to /etc/hosts
    • Windows: Add the same line to C:\Windows\System32\drivers\etc\hosts
  4. Restart everything: docker compose down && docker compose up -d
  5. Check Traefik: Verify Traefik is running and routing correctly

My project container will not start. What should I check?

For user project containers:
  1. Check the devserver image exists:
    • Docker: docker images | grep tesslate-devserver
    • Minikube: minikube -p tesslate ssh -- docker images | grep tesslate-devserver
  2. Check pod events (Kubernetes): kubectl describe pod -n proj-<uuid>
  3. Check PVC is bound (Kubernetes): kubectl get pvc -n proj-<uuid>
  4. ImagePullBackOff: The devserver image is not available. Build and load it.
  5. Check environment variables: Ensure K8S_DEVSERVER_IMAGE is set correctly

Security and Privacy

How secure is Tesslate Studio?

Security features:
  • JWT authentication with short-lived access tokens and refresh tokens
  • Bcrypt password hashing
  • Fernet symmetric encryption for stored credentials (OAuth tokens, API keys)
  • CSRF protection on all state-changing requests
  • Container isolation per project (Docker networks or Kubernetes namespaces with NetworkPolicy)
  • Agent command validation and audit logging
  • HTTPS/TLS support in production (Traefik with Let’s Encrypt or cert-manager)
  • Optional email-based two-factor authentication
Best practices:
  • Use strong passwords and a strong SECRET_KEY
  • Enable HTTPS in production
  • Keep Docker and Kubernetes updated
  • Perform regular database backups
  • Use separate credentials for development and production

Can AI agents access files outside my project?

No. AI agents are sandboxed:
  • Can only access files within the project’s container
  • Cannot access the host file system
  • Cannot access other project containers
  • Shell commands execute inside the project container only
  • Dangerous operations require user approval before execution

How are secrets and credentials stored?

Database (PostgreSQL):
  • User passwords: bcrypt hashed
  • Git provider tokens: Fernet encrypted
  • Deployment credentials: Fernet encrypted
  • API keys: Fernet encrypted
  • 2FA codes: argon2 hashed, limited attempts, auto-expire
Environment variables (.env):
  • SECRET_KEY: Used for JWT signing (keep secure, never commit)
  • ENCRYPTION_KEY: Used for Fernet encryption of stored credentials
  • AI provider keys: Stored in environment, not in database
Best practices:
  • Use strong, random values for SECRET_KEY and ENCRYPTION_KEY
  • Never commit .env files to Git
  • Rotate keys regularly
  • Use different keys for development and production
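
Strong SECRET_KEY and ENCRYPTION_KEY values can be generated with the Python standard library. A Fernet key is 32 random bytes in URL-safe base64, so the sketch below produces compatible material; you can equally use Fernet.generate_key() from the cryptography package or openssl rand:

```python
# Generate strong random keys suitable for SECRET_KEY / ENCRYPTION_KEY.
# A Fernet key is 32 random bytes encoded as URL-safe base64.
import base64
import secrets

def generate_key(num_bytes: int = 32) -> str:
    return base64.urlsafe_b64encode(secrets.token_bytes(num_bytes)).decode()

secret_key = generate_key()
encryption_key = generate_key()
print(len(base64.urlsafe_b64decode(secret_key)))  # 32 raw bytes
```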

Getting Help

Documentation

Browse the full documentation

GitHub Discussions

Ask questions and share ideas

GitHub Issues

Report bugs and request features

Email Support

Direct support for urgent issues

Still Have Questions?

Ask in Discussions

Can’t find your answer? Ask the community in GitHub Discussions.