Overview
Tesslate Studio is an AI-powered web application builder with a multi-tier, container-based architecture. Users describe what they want in natural language, an AI agent writes the code, and the platform handles isolated containerized deployment for each project. This page explains how the system is structured, how data flows between components, how security is enforced, and what technologies power each layer. Whether you are evaluating the platform, planning a deployment, or troubleshooting an issue, this document gives you the full picture.

High-Level Architecture
- Docker Mode (Development)
- Kubernetes Mode (Production)
All services run on a single host machine via Docker Desktop.

Request routing (Traefik reverse proxy):

| Incoming Path | Routed To | Port |
|---|---|---|
| /api/*, /ws/* | Orchestrator | 8000 |
| /* (everything else) | App (React frontend) | 5173 |

Running services:

| Service | Port | Role |
|---|---|---|
| Traefik | 80, 443, 8080 | Reverse proxy, *.localhost subdomain routing |
| App | 5173 | React frontend (Vite dev server) |
| Orchestrator | 8000 | FastAPI backend (business logic, AI agents) |
| PostgreSQL | 5432 | Database |
| User Projects | Dynamic | One container per project, created on demand |

Storage: Local filesystem at orchestrator/users/{user_id}/{project_slug}/

Core Components
Orchestrator (FastAPI Backend)
The orchestrator is the central brain of Tesslate Studio. It handles all business logic, coordinates between the frontend and containers, and manages the AI agent system.

Responsibilities:

| Area | Details |
|---|---|
| Authentication | JWT tokens (7-day access, 14-day refresh), OAuth 2.0 (GitHub, Google), HTTP-only cookies with CSRF protection |
| Project Management | CRUD operations, file tree navigation, project metadata |
| Container Orchestration | Start/stop containers, health checks, log streaming, namespace lifecycle |
| AI Agent System | Agent instantiation, tool execution, LLM calls via LiteLLM, streaming responses |
| File Operations | Read/write/edit files in project containers (local filesystem in Docker, pod exec in K8s) |
| Git Operations | Clone, commit, push, pull, branch management |
| External Deployment | Build and deploy to Vercel, Netlify, Cloudflare |
| Billing | Stripe subscriptions, usage tracking, creator payouts |
| Component | Technology | Purpose |
|---|---|---|
| Framework | FastAPI | Modern async Python web framework |
| Language | Python 3.11 | Backend programming language |
| ORM | SQLAlchemy 2.x | Database ORM with async support |
| DB Driver | asyncpg | Async PostgreSQL driver |
| Authentication | fastapi-users | User auth and OAuth integration |
| AI Gateway | LiteLLM | Multi-model AI proxy (OpenAI, Anthropic, custom) |
| K8s Client | kubernetes (Python) | Kubernetes API interaction |
| S3 Client | boto3 | Object storage (AWS S3, MinIO, DO Spaces) |
| Payments | Stripe SDK | Subscription billing |
Frontend (React Application)
The frontend is a single-page application that provides the development workspace: code editor, live preview, chat interface, file browser, and project dashboard.

Technology stack:

| Component | Technology | Purpose |
|---|---|---|
| Framework | React 19 | UI component library |
| Language | TypeScript 5.x | Type-safe JavaScript |
| Build Tool | Vite 5.x | Fast dev server and bundler |
| Styling | Tailwind CSS 3.x | Utility-first CSS framework |
| Code Editor | Monaco Editor | VSCode-based code editing |
| State | React Context | Auth, project, and agent state |
| Routing | React Router 6.x | Client-side navigation |
| HTTP Client | Axios | API communication |
| Streaming | EventSource (SSE) | Real-time agent responses |
- Monaco Editor with IntelliSense, syntax highlighting, and multi-file support
- Live Preview via embedded iframe showing user project output in real time
- Chat UI for conversational interaction with AI agents, with streamed responses
- File Browser for navigating, creating, renaming, and deleting project files
- Marketplace for browsing and installing agents and templates
Database (PostgreSQL)
PostgreSQL stores all persistent application data. Schema management is handled by Alembic migrations that run automatically on startup.

Key tables:

| Table | Purpose |
|---|---|
| users | User accounts, OAuth tokens, subscriptions |
| projects | Project metadata, owner, slug, settings |
| project_snapshots | EBS VolumeSnapshot records for versioning/timeline |
| containers | Individual services per project (frontend, backend, db) |
| container_connections | Dependency graph between containers |
| chats | Chat sessions with AI agents |
| messages | Individual chat messages |
| marketplace_agents | Pre-built AI agents |
| deployments | External deployment records |
| deployment_credentials | OAuth tokens for Vercel/Netlify/Cloudflare |
| shell_sessions | Persistent bash sessions |
| kanban_tasks | Project task management |
- Async connection pooling via SQLAlchemy
- Indexed foreign keys (project_id, user_id)
- Lazy loading for related objects
- No blocking on database I/O (fully async)
Container Runtime
The container runtime provides isolated development environments for each user project. The orchestrator automatically selects the correct backend based on DEPLOYMENT_MODE.
- Docker Mode (Local Development)
- Kubernetes Mode (Production)
| Component | Technology | Purpose |
|---|---|---|
| Runtime | Docker Desktop | Run containers on local machine |
| Orchestration | Docker Compose | Multi-container project management |
| Routing | Traefik | Reverse proxy with *.localhost routing |
| Storage | Local filesystem | Direct volume mounts to orchestrator/users/ |
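In Docker mode the routing rules are typically declared as Traefik labels on each service. A minimal docker-compose sketch under that assumption (service names, ports, and label values are illustrative, not Studio's actual compose file):

```yaml
services:
  orchestrator:
    build: ./orchestrator
    labels:
      - "traefik.enable=true"
      # API and WebSocket traffic goes to the backend on port 8000
      - "traefik.http.routers.api.rule=PathPrefix(`/api`) || PathPrefix(`/ws`)"
      - "traefik.http.services.api.loadbalancer.server.port=8000"
  app:
    build: ./app
    labels:
      - "traefik.enable=true"
      # Everything else goes to the Vite dev server on port 5173
      - "traefik.http.routers.app.rule=PathPrefix(`/`)"
      - "traefik.http.services.app.loadbalancer.server.port=5173"
```

Traefik prioritizes longer (more specific) rules, so `/api` and `/ws` win over the catch-all `/` route.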
AI Agent System
The agent system enables AI-driven code generation within user projects. Agents receive user requests, call LLMs, execute tools (file operations, shell commands), and stream results back to the frontend.

| Component | Technology | Purpose |
|---|---|---|
| LLM Gateway | LiteLLM | Unified interface for multiple AI model providers |
| Streaming | Server-Sent Events | Real-time agent responses to the frontend |
| Tool System | Custom Python tools | File ops, bash, sessions, web fetch, task tracking |
| Agent Loop | Custom streaming agent | Iterative tool-calling loop with LLM |
- OpenAI: GPT-4, GPT-3.5
- Anthropic: Claude 3.5 Sonnet, Claude 3 Opus
- Custom: Qwen, DeepSeek, any LiteLLM-compatible endpoint
| Tool | Purpose |
|---|---|
| read_write.py | Read/write files in the project |
| edit.py | Edit specific sections of a file |
| bash.py | Execute shell commands in the project container |
| session.py | Persistent shell sessions |
| fetch.py | HTTP requests for web content |
| todos.py | Task planning and tracking |
| metadata.py | Query project information |
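Conceptually, each tool module exposes a named callable the agent loop can dispatch to when the LLM requests it. A minimal registry sketch (all names here are illustrative stand-ins for the real tool modules listed above):

```python
from typing import Callable

# Map of tool name -> callable, populated by the @tool decorator.
TOOLS: dict[str, Callable[..., str]] = {}

def tool(name: str):
    """Register a callable under the name the LLM will reference."""
    def register(fn: Callable[..., str]) -> Callable[..., str]:
        TOOLS[name] = fn
        return fn
    return register

@tool("read_file")
def read_file(path: str) -> str:
    # A real implementation would read from the project container.
    return f"<contents of {path}>"

@tool("bash")
def bash(command: str) -> str:
    # A real implementation would exec inside the container.
    return f"$ {command}"

def execute(name: str, **kwargs: str) -> str:
    """Dispatch a tool call returned by the LLM."""
    if name not in TOOLS:
        return f"error: unknown tool {name!r}"
    return TOOLS[name](**kwargs)
```

Unknown tool names return an error string rather than raising, so the agent loop can hand the failure back to the LLM as a tool result.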
Data Flow
User Request Lifecycle
Agent Chat Flow
User sends a message
The user types a request in the chat UI (for example, “Add a dark mode toggle”).
Frontend opens SSE connection
The frontend sends POST /api/chat/stream with the message and agent ID. The response is a Server-Sent Events stream.
Orchestrator creates the agent
The agent/factory.py module loads the agent configuration (system prompt, tools, model) from the database.
Agent loop executes
The agent/stream_agent.py module runs an iterative loop:
- Call the LLM with the system prompt, conversation history, and available tools
- If the LLM returns tool calls (e.g., write_file, bash), execute them in the project container
- Stream tool execution events to the frontend in real time
- Call the LLM again with the tool results
- Repeat until the LLM produces a final response
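The iterative loop above can be sketched with a stubbed LLM. Everything below is illustrative: the real loop lives in agent/stream_agent.py and streams events over SSE rather than returning a string.

```python
def agent_loop(llm, execute_tool, messages: list[dict], max_turns: int = 10) -> str:
    """Alternate LLM calls and tool executions until a final answer."""
    for _ in range(max_turns):
        reply = llm(messages)
        if not reply.get("tool_calls"):
            return reply["content"]  # no tools requested: final response
        for call in reply["tool_calls"]:
            result = execute_tool(call["name"], call["args"])
            # Feed each tool result back so the next LLM call can see it.
            messages.append({"role": "tool", "name": call["name"], "content": result})
    return "error: max turns exceeded"


# Stubbed LLM: first requests a tool call, then produces a final answer.
replies = iter([
    {"tool_calls": [{"name": "write_file", "args": {"path": "App.tsx"}}]},
    {"tool_calls": [], "content": "Dark mode toggle added."},
])
fake_llm = lambda messages: next(replies)
fake_tool = lambda name, args: f"{name} ok"

print(agent_loop(fake_llm, fake_tool, [{"role": "user", "content": "Add dark mode"}]))
```

The `max_turns` cap is a common safeguard against an LLM that keeps requesting tools indefinitely.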
Container Start Flow (Kubernetes)
Orchestrator creates namespace
A new namespace proj-{uuid} is created with labels for the project and user.
PVC is provisioned
A 10Gi PVC is created. If a VolumeSnapshot exists from a previous session, the PVC is created from the snapshot (EBS lazy-loads data on access).
File manager pod starts
A file manager pod starts immediately (under 10 seconds). It handles file operations and git commands.
Dev containers start
Dev container Deployments, Services, and Ingress resources are created. Pod affinity ensures all containers land on the same node to share the RWO PVC.
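A PVC restored from a previous session's snapshot can be sketched as follows (the names, namespace, and storage class are illustrative, not taken from Studio's manifests):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: project-data
  namespace: proj-1234
spec:
  accessModes: ["ReadWriteOnce"]   # RWO is why all pods must share one node
  storageClassName: ebs-sc
  resources:
    requests:
      storage: 10Gi
  dataSource:                      # restore from the last session's snapshot
    apiGroup: snapshot.storage.k8s.io
    kind: VolumeSnapshot
    name: project-data-snap
```

Omitting `dataSource` yields a blank volume, which matches the first-start case where no snapshot exists yet.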
Security Architecture
Authentication and Authorization
Mechanisms:

- JWT Tokens (Bearer authentication)
  - Access tokens: 7-day expiry
  - Refresh tokens: 14-day expiry
  - Sent in the Authorization: Bearer {token} header
- HTTP-Only Cookies (Session authentication)
  - Secure, HttpOnly, SameSite=Lax
  - Domain-scoped for subdomain access
  - CSRF protection via double-submit cookie pattern
- OAuth 2.0 (Third-party login)
  - GitHub and Google providers
  - Authorization Code Grant flow
  - State token validation to prevent CSRF
CORS and Content Security Policy
CORS (Dynamic middleware):
- Allowed origins: localhost:*, *.localhost, APP_DOMAIN, *.APP_DOMAIN
- Credentials enabled (cookies, auth headers)
- Methods: GET, POST, PUT, DELETE, PATCH, OPTIONS
Credential Encryption
| Data | Encryption Method |
|---|---|
| GitHub tokens | Fernet encryption (derived from SECRET_KEY) |
| Deployment OAuth tokens | Fernet encryption (DEPLOYMENT_ENCRYPTION_KEY) |
| User passwords | bcrypt hashing (via fastapi-users) |
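A Fernet key is 32 bytes, url-safe base64-encoded. One way such a key might be derived from SECRET_KEY is to hash the secret; this is an illustrative sketch only, and Studio's exact derivation may differ:

```python
import base64
import hashlib

def derive_fernet_key(secret: str) -> bytes:
    """Derive a Fernet-shaped key from an application secret.

    SHA-256 yields exactly 32 bytes; url-safe base64 is the encoding
    the Fernet API expects for its key argument.
    """
    digest = hashlib.sha256(secret.encode("utf-8")).digest()
    return base64.urlsafe_b64encode(digest)
```

The derived key would then be passed to `cryptography.fernet.Fernet(key)` to encrypt and decrypt stored tokens.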
Container Isolation (Kubernetes)
- Namespace Isolation
- NetworkPolicy
- Pod Affinity
Each project gets a dedicated namespace (proj-{uuid}) with:
- Resource quotas per namespace (CPU, memory, storage)
- RBAC rules preventing cross-namespace access
- Automatic cleanup on project deletion (delete namespace cascades to all resources)
SSL/TLS
| Environment | Method |
|---|---|
| Docker (local) | HTTP only (*.localhost); no SSL needed |
| Kubernetes (Minikube) | HTTP only; no TLS configured |
| Kubernetes (AWS EKS) | Wildcard certificate (*.yourdomain.com) via cert-manager + Let’s Encrypt, DNS-01 challenge through Cloudflare |
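With cert-manager, the wildcard certificate is usually declared as a Certificate resource whose issuer answers DNS-01 challenges through Cloudflare. A sketch (resource names, namespace, and issuer are illustrative):

```yaml
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: wildcard-cert
  namespace: tesslate
spec:
  secretName: wildcard-tls        # TLS secret the Ingress will reference
  dnsNames:
    - "yourdomain.com"
    - "*.yourdomain.com"
  issuerRef:
    name: letsencrypt-dns01       # ClusterIssuer holding a Cloudflare API token
    kind: ClusterIssuer
```

DNS-01 is required here because Let's Encrypt only issues wildcard certificates through DNS challenges, never HTTP-01.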
Secrets Management
Kubernetes secrets:

- tesslate-app-secrets: API keys, OAuth credentials, domain config
- postgres-secret: Database credentials
- s3-credentials: S3/MinIO access keys
- Secrets are never stored in Git
- Minikube: generated from .env.minikube via generate-secrets.sh
- AWS EKS: created via kubectl create secret or managed by Terraform
Scalability
Horizontal Scaling
| Component | Scalable? | Notes |
|---|---|---|
| Orchestrator (backend) | Yes | Stateless design; all state is in PostgreSQL. Multiple replicas supported via K8s Deployment. |
| Frontend | Yes | Static build served by NGINX; easily replicated or served via CDN. |
| PostgreSQL | Partially | Single instance by default. Read replicas planned for the future. Connection pooling handles high concurrency. |
| User project pods | Yes | Each project runs in its own namespace; cluster autoscaler adds nodes as needed. |
Non-Blocking Design
All long-running operations are designed to be non-blocking:
- Project creation: Background task for container setup; the API returns immediately.
- Deployments: Async build process; webhook on completion.
- Agent chat: Streaming responses (no wait for full LLM completion).
- File operations: Async I/O throughout.
- Snapshots: VolumeSnapshot creation returns immediately; frontend polls for ready status.
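The return-immediately-then-poll pattern can be sketched with asyncio. This is an illustrative toy (the real backend uses FastAPI background tasks and persists status in PostgreSQL):

```python
import asyncio

STATUS: dict[str, str] = {}

async def provision_project(project_id: str) -> None:
    """Long-running container setup, run in the background."""
    STATUS[project_id] = "creating"
    await asyncio.sleep(0.01)        # stand-in for container startup work
    STATUS[project_id] = "ready"

async def create_project(project_id: str) -> dict:
    # Kick off provisioning but return to the caller immediately.
    asyncio.create_task(provision_project(project_id))
    await asyncio.sleep(0)           # let the task record its initial status
    return {"id": project_id, "status": STATUS.get(project_id, "pending")}

async def main() -> None:
    resp = await create_project("p1")
    print(resp["status"])            # response arrives before provisioning ends
    while STATUS["p1"] != "ready":   # the frontend would poll like this
        await asyncio.sleep(0.005)
    print(STATUS["p1"])

asyncio.run(main())
```

The caller gets an answer in microseconds while the slow work continues on the event loop, which is exactly the behavior described for project creation and snapshots above.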
Load Balancing
- Kubernetes: NGINX Ingress Controller with round-robin distribution. Session affinity is not required because the backend is stateless.
- Docker: Traefik automatically discovers and routes to containers.
Container Images
| Image | Dockerfile | Base | Purpose | Approximate Size |
|---|---|---|---|---|
| tesslate-backend | orchestrator/Dockerfile | python:3.11-slim | FastAPI orchestrator | ~1.5 GB |
| tesslate-frontend | app/Dockerfile.prod | node:20-alpine (build), nginx:alpine (runtime) | React UI served via NGINX | ~200 MB |
| tesslate-devserver | orchestrator/Dockerfile.devserver | node:20-alpine | Universal dev environment for user projects | ~1.2 GB |

The tesslate-devserver image includes:
- Node.js 20 + npm + Bun
- Python 3 + pip
- Go + Air (hot reload)
- Git, git-lfs, curl, bash, tmux
- Pre-cached npm packages (Vite, React, TypeScript, ESLint, Tailwind)
External Integrations
Authentication Providers
| Provider | Purpose | Flow |
|---|---|---|
| GitHub | User login, repository import | OAuth Authorization Code Grant |
| Google | User login | OAuth Authorization Code Grant |
| GitLab | Repository import | OAuth Authorization Code Grant |
| Bitbucket | Repository import | OAuth Authorization Code Grant |
Deployment Providers
| Provider | Purpose |
|---|---|
| Vercel | Frontend hosting (OAuth + API, git push auto-deploy) |
| Netlify | Frontend hosting (OAuth + API, git push auto-deploy) |
| Cloudflare Pages | Frontend hosting (direct upload API) |
| Cloudflare Workers | Serverless backend (wrangler CLI) |
Payment Provider
| Provider | Features |
|---|---|
| Stripe | Recurring subscriptions, one-time credit purchases, creator payouts (Stripe Connect), usage-based billing |
Storage Providers
| Provider | Use Case | Protocol |
|---|---|---|
| AWS S3 | Production project storage | S3 API (boto3) |
| DigitalOcean Spaces | Production project storage | S3 API (boto3) |
| MinIO | Local/development S3-compatible storage | S3 API (boto3) |
Architecture Principles
Container-per-project isolation
Every project runs in its own isolated environment (Docker container or Kubernetes namespace). This ensures no conflicts between projects, independent dependency management, resource limits, and clean teardown (deleting a container or namespace removes everything).
Non-blocking operations
User requests never block on long-running tasks. Project creation, deployments, and agent chat all run asynchronously. The frontend polls for status or receives streamed updates.
Bring your own models
Tesslate Studio uses LiteLLM as a universal AI gateway, supporting 100+ providers. You can use OpenAI, Anthropic, Google, self-hosted open-source models, or any LiteLLM-compatible endpoint. There is no vendor lock-in.
Self-hosted, data sovereign
When self-hosted, all data stays on your infrastructure. The only external calls are to AI model providers (if you choose cloud models) and optional OAuth/deployment providers. You have complete control over your data.
Stateless backend
The orchestrator is fully stateless. All persistent state lives in PostgreSQL or the container runtime. This means you can run multiple orchestrator replicas behind a load balancer without any special coordination.
Infrastructure as Code
All Kubernetes manifests use Kustomize with base/overlay separation. AWS infrastructure is provisioned with Terraform. Every resource is reproducible and version-controlled.
Health Checks and Monitoring
Endpoints
| Endpoint | Purpose |
|---|---|
| GET /health | Backend liveness (returns 200 if alive) |
| GET /api/config | Public configuration (deployment mode, app domain) |
Kubernetes Probes
| Probe | Path | Interval | Purpose |
|---|---|---|---|
| Startup | /health | 5s (24 failures = 2 min max) | Allow time for boot and migrations |
| Liveness | /health | 10s | Restart pod if it becomes unresponsive |
| Readiness | /health | 5s | Remove from load balancer if unhealthy |
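The table above translates into standard probe stanzas on the orchestrator Deployment; a sketch (the container port is assumed to be 8000, per the services table):

```yaml
startupProbe:
  httpGet: { path: /health, port: 8000 }
  periodSeconds: 5
  failureThreshold: 24      # 24 x 5s = up to 2 minutes for boot + migrations
livenessProbe:
  httpGet: { path: /health, port: 8000 }
  periodSeconds: 10         # restart the pod if it stops responding
readinessProbe:
  httpGet: { path: /health, port: 8000 }
  periodSeconds: 5          # removed from the load balancer while failing
```

The startup probe suppresses the liveness probe until it succeeds, so slow Alembic migrations don't trigger restart loops.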
Application Logging
- Format: %(asctime)s - %(name)s - %(levelname)s - %(message)s
- Level: configurable via the LOG_LEVEL environment variable
- Key loggers: app.main, app.services.orchestration.kubernetes_orchestrator, app.agent.stream_agent, app.routers.*
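The format string is standard `logging`-module syntax; wiring it up might look like this (an illustrative sketch of the configuration, not Studio's exact startup code):

```python
import logging
import os

# LOG_LEVEL accepts standard level names: DEBUG, INFO, WARNING, ERROR.
logging.basicConfig(
    level=os.environ.get("LOG_LEVEL", "INFO"),
    format="%(asctime)s - %(name)s - %(levelname)s - %(message)s",
)

logger = logging.getLogger("app.main")
logger.info("orchestrator started")
```

Child loggers such as `app.agent.stream_agent` inherit this configuration automatically through the `app` logger hierarchy.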
Future Enhancements
| Enhancement | Description |
|---|---|
| Distributed task queue | Move background tasks to Celery + Redis for reliability |
| Prometheus + Grafana | Metrics collection, dashboards, and alerting |
| PostgreSQL read replicas | Improved read performance and high availability |
| CDN integration | CloudFlare CDN for frontend assets; faster global access |
| Multi-region deployment | K8s clusters in multiple regions with geo-routing |
| Cluster autoscaler | HPA for backend pods; node autoscaler for K8s nodes |
Next Steps
Quickstart
Get running locally with Docker Compose in minutes
Configuration Reference
Every environment variable, explained and categorized
Deployment Guide
Deploy to production with Docker Compose, Kubernetes, or AWS EKS
Developer Guide
Explore the system internals and API reference