System Overview
Tesslate Studio is built as a self-hosted, container-based AI development platform that creates isolated environments for each project. This architecture provides security, scalability, and complete data sovereignty. It runs in one of two modes:
- Docker Mode (Development)
- Kubernetes Mode (Production)
Core Components
Orchestrator (Backend)
FastAPI Backend
The orchestrator is the brain of Tesslate Studio, built with FastAPI (Python). It is responsible for:
- User Authentication & Authorization: JWT-based auth with refresh tokens
- Project Management: Create, update, delete projects
- Agent Execution: Run AI agents and manage their lifecycle
- Container Orchestration: Spin up Docker containers for each project
- Database Operations: Store user data, projects, agents, and settings
- API Gateway: Expose REST API for frontend consumption
Key technologies:
- FastAPI: Modern async Python web framework
- SQLAlchemy: ORM for database operations
- Alembic: Database migrations
- Docker SDK: Container management via Docker API
- LiteLLM: Unified AI model gateway
Core services:
- AuthService: User authentication and session management
- ProjectService: Project CRUD operations
- AgentService: AI agent execution and management
- ContainerService: Docker container lifecycle management
- GitHubService: GitHub OAuth and repository operations
Frontend (React)
React 19 + Vite 7
Modern single-page application built with the latest React and Vite. It is responsible for:
- User Interface: Dashboard, project editor, chat interface
- Live Preview: Real-time application preview with HMR
- Code Editor: Monaco editor integration (VSCode engine)
- Chat Interface: Real-time AI agent communication
- File Management: File tree, file operations, save/load
Key technologies:
- React 19: Component-based UI framework
- Vite 7: Lightning-fast build tool and dev server
- TypeScript: Type-safe JavaScript
- Tailwind CSS: Utility-first CSS framework
- Zustand: State management
- React Query: Server state management
- Monaco Editor: VSCode editor engine
- WebSocket: Real-time agent communication
Key views:
- Project Dashboard: View and manage all projects
- Code Editor: Full-featured editor with syntax highlighting
- Live Preview: Browser-based preview with subdomain routing
- Chat Interface: Conversational AI agent interaction
- Marketplace: Browse and install agents and templates
Database (PostgreSQL)
PostgreSQL 15+
Production-grade relational database for all persistent data.
Core tables:
- Users
- Projects
- Agents
- Agent Logs
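The tables above map naturally onto SQLAlchemy models. A minimal sketch follows; only `status` and `container_id` are mentioned elsewhere on this page, so every other column name here is an illustrative assumption, not the actual schema.

```python
# Hypothetical sketch of the ORM models behind the core tables.
# Column names other than status/container_id are assumptions.
from sqlalchemy import Column, DateTime, ForeignKey, Integer, String, func
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class User(Base):
    __tablename__ = "users"
    id = Column(Integer, primary_key=True)
    email = Column(String, unique=True, nullable=False)
    password_hash = Column(String, nullable=False)  # bcrypt hash, never plaintext

class Project(Base):
    __tablename__ = "projects"
    id = Column(Integer, primary_key=True)
    owner_id = Column(Integer, ForeignKey("users.id"), nullable=False)
    name = Column(String, nullable=False)
    status = Column(String, default="initializing")  # e.g. "initializing", "active"
    container_id = Column(String, nullable=True)     # Docker container ID once created
    created_at = Column(DateTime, server_default=func.now())
```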
Reverse Proxy (Traefik)
Traefik
Cloud-native reverse proxy for routing and load balancing.
- Subdomain Routing: Route project-name.studio.localhost to the correct container
- SSL Termination: Handle HTTPS/TLS in production
- Load Balancing: Distribute traffic across services
- Service Discovery: Automatically discover Docker containers
- HTTP/2 Support: Modern protocol support
Routing rules:
- studio.localhost → Frontend (React app)
- studio.localhost/api → Orchestrator (FastAPI)
- {project}.studio.localhost → Project container
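Traefik's Docker provider discovers project containers through labels. A sketch of the labels the orchestrator might attach is below; the label keys follow Traefik's documented convention, but the router name and port (Vite's default 5173) are assumptions about Studio's configuration.

```python
# Build Traefik routing labels for a project container.
# Label keys follow Traefik's Docker provider convention; the router
# name and port here are illustrative assumptions.
def traefik_labels(project_slug: str, port: int = 5173) -> dict:
    host = f"{project_slug}.studio.localhost"
    return {
        "traefik.enable": "true",
        f"traefik.http.routers.{project_slug}.rule": f"Host(`{host}`)",
        f"traefik.http.services.{project_slug}.loadbalancer.server.port": str(port),
    }
```

Attaching these labels at container creation is what lets Traefik route `{project}.studio.localhost` without any manual configuration.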
Data Flow
Project Creation Flow
1. User Initiates
User clicks “Create Project” in frontend and selects a template.
2. API Request
Frontend sends a POST request to /api/projects.
3. Database Record
Orchestrator creates a project record in PostgreSQL with status: "initializing".
4. Container Creation
ContainerService spins up a new Docker container:
- Pull base image (node:18-alpine)
- Create container with project files
- Attach to Traefik network
- Set container labels for routing
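The four sub-steps above can be sketched with the Docker SDK (docker-py). To keep this self-contained, the sketch builds the keyword arguments for `client.containers.run()`; the network name, command, and labels are assumptions based on this page, not Studio's exact values.

```python
# Assemble kwargs for docker-py's client.containers.run().
# Network name, command, and label values are illustrative assumptions.
def container_spec(project_slug: str) -> dict:
    return {
        "image": "node:18-alpine",                     # pulled base image
        "command": "sh -c 'npm install && npm run dev'",
        "detach": True,
        "name": f"project-{project_slug}",
        "network": "traefik-net",                      # attach to Traefik network
        "labels": {"traefik.enable": "true"},          # labels for subdomain routing
        "working_dir": "/app",
    }

# In the orchestrator (requires a Docker daemon):
# import docker
# client = docker.from_env()
# client.images.pull("node:18-alpine")
# container = client.containers.run(**container_spec("my-todo-app"))
```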
5. Development Server
Inside container, Vite dev server starts:
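A sketch of the command involved; binding to 0.0.0.0 is what makes the server reachable through Traefik, but the exact flags and port are assumptions.

```python
# Command the orchestrator might exec inside the project container to start
# the Vite dev server. Flags and port are illustrative assumptions.
DEV_SERVER_CMD = "npm run dev -- --host 0.0.0.0 --port 5173"

# With docker-py, fire-and-forget inside the running container:
# container.exec_run(DEV_SERVER_CMD, detach=True)
```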
6. Update Database
Orchestrator updates project:
- container_id: Docker container ID
- status: "active"
- url: http://my-todo-app.studio.localhost
7. Response to Frontend
Orchestrator sends success response:
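An assumed shape for that payload, based on the fields updated in the previous step (the id value and field names are illustrative):

```python
# Hypothetical success payload returned to the frontend after creation.
creation_response = {
    "id": 42,
    "name": "my-todo-app",
    "status": "active",
    "url": "http://my-todo-app.studio.localhost",
}
```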
AI Agent Execution Flow
1. User Sends Message
User types in chat: “Add a dark mode toggle”
2. WebSocket Connection
Frontend establishes WebSocket connection to orchestrator.
3. Agent Selection
Frontend includes agent type in request:
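An assumed shape for that WebSocket message; the agent name and field names are illustrative, not Studio's actual protocol:

```python
# Hypothetical chat message sent over the WebSocket.
chat_request = {
    "agent": "builder",            # which agent should handle the request (assumed name)
    "project_id": 42,
    "content": "Add a dark mode toggle",
}
```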
4. Agent Initialization
Orchestrator loads agent configuration:
- Retrieves system prompt from database
- Loads agent tools (file_read, file_write, etc.)
- Initializes LiteLLM client with user’s model
5. LLM Call
Agent sends prompt to AI model via LiteLLM:
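A sketch of that call. To stay self-contained it builds the arguments for `litellm.completion()`; the model string follows LiteLLM's provider/model convention, with an illustrative value in the usage comment.

```python
# Build the arguments for litellm.completion(). The message roles are
# standard; the example model string below is an illustrative assumption.
def completion_request(system_prompt: str, user_message: str, model: str) -> dict:
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_message},
        ],
        "stream": True,   # yield chunks for the WebSocket relay
    }

# In the orchestrator:
# import litellm
# stream = litellm.completion(
#     **completion_request(agent_prompt, "Add a dark mode toggle", "ollama/llama3"))
```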
6. Stream Response
As AI generates code, orchestrator streams chunks to frontend via WebSocket:
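A sketch of that relay loop. Chunk access follows the OpenAI-style objects LiteLLM yields; the JSON field names sent to the frontend are assumptions.

```python
async def relay(stream, websocket):
    """Forward streamed completion chunks to the frontend over a WebSocket
    (e.g. a fastapi.WebSocket). Chunk shape follows the OpenAI-style objects
    LiteLLM yields; the outgoing field names are illustrative assumptions."""
    for chunk in stream:
        delta = chunk.choices[0].delta.content
        if delta:  # skip keep-alive/empty chunks
            await websocket.send_json({"type": "chunk", "content": delta})
    await websocket.send_json({"type": "done"})
```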
7. File Operations
When agent detects code blocks, it auto-saves files to project container:
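One way to push a file into a running container is docker-py's `put_archive()`, which takes a tar stream. A sketch, with the target path `/app` as an assumption:

```python
# Build an in-memory tar archive containing a single file, suitable for
# docker-py's container.put_archive(). The /app target path is assumed.
import io
import tarfile

def tar_bytes(filename: str, content: str) -> bytes:
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w") as tar:
        data = content.encode()
        info = tarfile.TarInfo(name=filename)
        info.size = len(data)
        tar.addfile(info, io.BytesIO(data))
    return buf.getvalue()

# In the orchestrator:
# container.put_archive("/app", tar_bytes("src/DarkModeToggle.tsx", generated_code))
```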
8. Live Preview Update
Vite HMR detects file change and updates browser preview instantly.
9. Log to Database
Orchestrator logs the operation to the agent_command_logs table.
Security Architecture
Authentication Flow
1. User Login
User submits email/password to /api/auth/login.
2. Password Verification
Orchestrator verifies password using bcrypt:
3. Token Generation
Generate JWT access token (15min expiry) and refresh token (7 days):
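A sketch of the access-token half using PyJWT; the secret, claim names, and algorithm are assumptions (the refresh token follows the same pattern with a 7-day expiry):

```python
# Sign a short-lived JWT access token with PyJWT.
# SECRET_KEY, claim names, and HS256 are illustrative assumptions.
from datetime import datetime, timedelta, timezone
import jwt  # PyJWT

SECRET_KEY = "change-me"  # loaded from the environment in practice

def make_access_token(user_id: int) -> str:
    payload = {
        "sub": str(user_id),
        "exp": datetime.now(timezone.utc) + timedelta(minutes=15),  # 15min expiry
    }
    return jwt.encode(payload, SECRET_KEY, algorithm="HS256")
```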
4. Store Refresh Token
Refresh token is stored in the refresh_tokens table with revocation support.
5. Return Tokens
Send both tokens to frontend:
6. Frontend Storage
Frontend stores:
- Access token: Memory only (Zustand store)
- Refresh token: HTTPOnly cookie (secure)
Container Isolation
Each project runs in an isolated Docker container:
- Network Isolation
- Resource Limits
- File System Isolation
- Command Validation
Containers are on separate Docker networks and can’t communicate with each other unless explicitly allowed.
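Resource limits are applied per container at creation time. A sketch of the relevant docker-py `containers.run()` kwargs; the specific limit values are illustrative defaults, not Studio's actual settings.

```python
# Per-container resource limits, as kwargs for docker-py's
# client.containers.run(). The values shown are illustrative assumptions.
def resource_limits() -> dict:
    return {
        "mem_limit": "512m",          # hard memory cap
        "nano_cpus": 1_000_000_000,   # 1 CPU, in units of 1e-9 CPUs
        "pids_limit": 256,            # cap process count inside the container
    }

# client.containers.run(image, **resource_limits(), network="project-net", ...)
```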
Credential Encryption
Sensitive data is encrypted at rest.
Scalability Patterns
Horizontal Scaling
While the current version runs on a single host, the architecture supports horizontal scaling:
- Stateless Orchestrator
- Database Connection Pooling
- Load Balancer
The FastAPI orchestrator is stateless and can be replicated: all state lives in PostgreSQL or Docker containers.
Technology Stack Summary
Backend
- FastAPI - Async Python web framework
- SQLAlchemy - ORM and query builder
- PostgreSQL - Relational database
- Docker SDK - Container management
- LiteLLM - AI model gateway
Frontend
- React 19 - UI framework
- Vite 7 - Build tool and dev server
- TypeScript - Type safety
- Tailwind CSS - Styling
- Zustand - State management
- Monaco Editor - Code editor
Infrastructure
- Docker - Containerization
- Traefik - Reverse proxy
- PostgreSQL - Database
- Node.js - Project runtimes
AI Integration
- OpenAI - GPT models
- Anthropic - Claude models
- Google - Gemini models
- Ollama - Local LLMs
- LiteLLM - Unified gateway
Architecture Principles
Container-per-project
Each project runs in its own isolated Docker container, ensuring:
- No conflicts between projects
- Independent dependency management
- Resource isolation and limits
- Easy cleanup (delete container = delete project)
Subdomain Routing
Clean URLs for each project:
- studio.localhost - Main application
- my-app.studio.localhost - User’s project
- Easy sharing and bookmarking
- Professional development experience
Bring Your Own Models
No vendor lock-in for AI:
- Support for 100+ AI providers via LiteLLM
- Use local models (Ollama) for free
- Switch models per project or agent
- Control your AI costs
Self-Hosted
Complete infrastructure control:
- Deploy anywhere (laptop, cloud, on-prem)
- Your data never leaves your infrastructure
- No external dependencies (except AI APIs if chosen)
- Customize and extend as needed