System Overview

Tesslate Studio is built as a self-hosted, container-based AI development platform that creates isolated environments for each project. This architecture provides security, scalability, and complete data sovereignty.
┌─────────────────────────────────────────────────────┐
│  Your Machine / Your Cloud / Your Datacenter       │
├─────────────────────────────────────────────────────┤
│                                                     │
│  ┌──────────────────────────────────────────┐     │
│  │  Tesslate Studio (You control this)     │     │
│  │                                           │     │
│  │  • FastAPI Orchestrator (Python)         │     │
│  │  • React Frontend (TypeScript)           │     │
│  │  • PostgreSQL Database                    │     │
│  │  • AI Agent Marketplace                   │     │
│  └───────────┬──────────────────────────────┘     │
│              │                                      │
│              ▼                                      │
│  ┌──────────────────────────────────────────┐     │
│  │  Project Containers (Isolated)           │     │
│  │                                           │     │
│  │  todo-app.studio.localhost               │     │
│  │  dashboard.studio.localhost              │     │
│  │  prototype.studio.localhost              │     │
│  └──────────────────────────────────────────┘     │
│                                                     │
│  ┌──────────────────────────────────────────┐     │
│  │  Your AI Models (You choose)             │     │
│  │                                           │     │
│  │  • OpenAI GPT-5 (API)                    │     │
│  │  • Anthropic Claude (API)                │     │
│  │  • Local LLMs via Ollama                 │     │
│  │  • Or any LiteLLM-compatible provider    │     │
│  └──────────────────────────────────────────┘     │
└─────────────────────────────────────────────────────┘

Core Components

Orchestrator (Backend)

FastAPI Backend

The orchestrator is the brain of Tesslate Studio, built with FastAPI (Python).
Responsibilities:
  • User Authentication & Authorization: JWT-based auth with refresh tokens
  • Project Management: Create, update, delete projects
  • Agent Execution: Run AI agents and manage their lifecycle
  • Container Orchestration: Spin up Docker containers for each project
  • Database Operations: Store user data, projects, agents, and settings
  • API Gateway: Expose REST API for frontend consumption
Technology Stack:
  • FastAPI: Modern async Python web framework
  • SQLAlchemy: ORM for database operations
  • Alembic: Database migrations
  • Docker SDK: Container management via Docker API
  • LiteLLM: Unified AI model gateway
Key Services:
  • AuthService: User authentication and session management
  • ProjectService: Project CRUD operations
  • AgentService: AI agent execution and management
  • ContainerService: Docker container lifecycle management
  • GitHubService: GitHub OAuth and repository operations
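
For orientation, here is a minimal sketch of how an orchestrator endpoint might tie these services together; the route, request model, and comments are illustrative assumptions, not Tesslate Studio's actual code:
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class ProjectCreate(BaseModel):
    name: str
    template: str = "vite-react"

@app.post("/api/projects")
async def create_project(payload: ProjectCreate):
    # In the real orchestrator, ProjectService would persist the record here
    # and ContainerService would start the project's Docker container.
    return {"name": payload.name, "status": "initializing"}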

Frontend (React)

React 19 + Vite 7

Modern single-page application built with the latest React and Vite.
Responsibilities:
  • User Interface: Dashboard, project editor, chat interface
  • Live Preview: Real-time application preview with HMR
  • Code Editor: Monaco editor integration (VSCode engine)
  • Chat Interface: Real-time AI agent communication
  • File Management: File tree, file operations, save/load
Technology Stack:
  • React 19: Component-based UI framework
  • Vite 7: Lightning-fast build tool and dev server
  • TypeScript: Type-safe JavaScript
  • Tailwind CSS: Utility-first CSS framework
  • Zustand: State management
  • React Query: Server state management
  • Monaco Editor: VSCode editor engine
  • WebSocket: Real-time agent communication
Key Features:
  • Project Dashboard: View and manage all projects
  • Code Editor: Full-featured editor with syntax highlighting
  • Live Preview: Browser-based preview with subdomain routing
  • Chat Interface: Conversational AI agent interaction
  • Marketplace: Browse and install agents and templates

Database (PostgreSQL)

PostgreSQL 15+

Production-grade relational database for all persistent data.
Schema Overview:
  • Users
  • Projects
  • Agents
  • Agent Logs
The users table:
- id (uuid, primary key)
- email (unique)
- username (unique)
- hashed_password
- is_active
- is_admin
- created_at
- updated_at
Stores user accounts with authentication details.
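
As a rough illustration, the fields above could map to a SQLAlchemy 2.0 model along these lines (the actual model in the codebase may differ):
import uuid
from datetime import datetime
from sqlalchemy import Boolean, DateTime, String
from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column

class Base(DeclarativeBase):
    pass

class User(Base):
    __tablename__ = "users"

    id: Mapped[uuid.UUID] = mapped_column(primary_key=True, default=uuid.uuid4)
    email: Mapped[str] = mapped_column(String, unique=True)
    username: Mapped[str] = mapped_column(String, unique=True)
    hashed_password: Mapped[str]
    is_active: Mapped[bool] = mapped_column(Boolean, default=True)
    is_admin: Mapped[bool] = mapped_column(Boolean, default=False)
    created_at: Mapped[datetime] = mapped_column(DateTime, default=datetime.utcnow)
    updated_at: Mapped[datetime] = mapped_column(DateTime, default=datetime.utcnow, onupdate=datetime.utcnow)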

Reverse Proxy (Traefik)

Traefik

Cloud-native reverse proxy for routing and load balancing.
Responsibilities:
  • Subdomain Routing: Route project-name.studio.localhost to correct container
  • SSL Termination: Handle HTTPS/TLS in production
  • Load Balancing: Distribute traffic across services
  • Service Discovery: Automatically discover Docker containers
  • HTTP/2 Support: Modern protocol support
Routing Rules:
  • studio.localhost → Frontend (React app)
  • studio.localhost/api → Orchestrator (FastAPI)
  • {project}.studio.localhost → Project container
Configuration:
# Traefik discovers routes via Docker labels
labels:
  - "traefik.enable=true"
  - "traefik.http.routers.app.rule=Host(`studio.localhost`)"
  - "traefik.http.services.app.loadbalancer.server.port=5173"

Data Flow

Project Creation Flow

1. User Initiates

User clicks “Create Project” in the frontend and selects a template.

2. API Request

Frontend sends POST request to /api/projects:
{
  "name": "My Todo App",
  "template": "vite-react",
  "description": "A simple todo application"
}

3. Database Record

Orchestrator creates project record in PostgreSQL with status: "initializing".

4. Container Creation

ContainerService spins up a new Docker container (a rough sketch follows this list):
  • Pull base image (node:18-alpine)
  • Create container with project files
  • Attach to Traefik network
  • Set container labels for routing
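
A hedged sketch of this step with the Docker SDK for Python; the image, network name, slug, and label scheme are assumptions for illustration:
import docker

client = docker.from_env()
slug = "my-todo-app"  # derived from the project name

container = client.containers.run(
    "node:18-alpine",
    command="sh -c 'npm install && npm run dev -- --host 0.0.0.0 --port 3000'",
    detach=True,
    working_dir="/app",
    network="traefik-network",  # assumed shared network that Traefik watches
    labels={
        "traefik.enable": "true",
        f"traefik.http.routers.{slug}.rule": f"Host(`{slug}.studio.localhost`)",
        f"traefik.http.services.{slug}.loadbalancer.server.port": "3000",
    },
)
print(container.id)  # stored on the project record in a later step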

5. Development Server

Inside the container, the Vite dev server starts:
npm run dev -- --host 0.0.0.0 --port 3000

6. Update Database

Orchestrator updates project:
  • container_id: Docker container ID
  • status: “active”
  • url: http://my-todo-app.studio.localhost

7. Response to Frontend

Orchestrator sends success response:
{
  "id": "uuid",
  "name": "My Todo App",
  "url": "http://my-todo-app.studio.localhost",
  "status": "active"
}

AI Agent Execution Flow

1. User Sends Message

User types in chat: “Add a dark mode toggle”

2. WebSocket Connection

Frontend establishes WebSocket connection to orchestrator.
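
As a simplified sketch (the endpoint path and message handling are assumptions, not the documented API), the orchestrator side of this connection could look like:
from fastapi import FastAPI, WebSocket

app = FastAPI()

async def run_agent(request: dict):
    # Stand-in for the real agent pipeline (LiteLLM call, tool execution, etc.)
    yield f"Received: {request.get('message', '')}"

@app.websocket("/ws/agent")
async def agent_socket(websocket: WebSocket):
    await websocket.accept()
    while True:
        # Each message carries the user's prompt plus agent and project IDs
        request = await websocket.receive_json()
        async for chunk in run_agent(request):
            await websocket.send_json({"type": "chunk", "content": chunk})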

3. Agent Selection

Frontend includes agent type in request:
{
  "message": "Add a dark mode toggle",
  "agent_id": "stream-builder-id",
  "project_id": "project-uuid"
}

4. Agent Initialization

Orchestrator loads agent configuration:
  • Retrieves system prompt from database
  • Loads agent tools (file_read, file_write, etc.)
  • Initializes LiteLLM client with user’s model

5. LLM Call

Agent sends prompt to AI model via LiteLLM:
response = await litellm.acompletion(
    model="gpt-5o",
    messages=[
        {"role": "system", "content": agent.system_prompt},
        {"role": "user", "content": user_message}
    ],
    stream=True
)

6. Stream Response

As the AI model generates code, the orchestrator streams chunks to the frontend via WebSocket:
{"type": "chunk", "content": "```jsx\n"}
{"type": "chunk", "content": "const DarkMode"}
{"type": "chunk", "content": "Toggle = () => {\n"}

7. File Operations

When the agent detects code blocks, it auto-saves the files to the project container:
await container_service.write_file(
    container_id=project.container_id,
    path="/app/src/components/DarkModeToggle.jsx",
    content=generated_code
)

8. Live Preview Update

Vite HMR detects file change and updates browser preview instantly.

9. Log to Database

Orchestrator logs operation to agent_command_logs:
await db.execute(
    insert(AgentCommandLog).values(
        agent_id=agent.id,
        project_id=project.id,
        command="write_file",
        output="DarkModeToggle.jsx created",
        status="success"
    )
)

Security Architecture

Authentication Flow

1. User Login

User submits email/password to /api/auth/login.

2. Password Verification

Orchestrator verifies password using bcrypt:
# Registration hashes with a fresh salt; login compares against the stored hash
password_hash = bcrypt.hashpw(password.encode(), bcrypt.gensalt())
verified = bcrypt.checkpw(password.encode(), user.hashed_password)

3. Token Generation

Generate JWT access token (15min expiry) and refresh token (7 days):
from datetime import datetime, timedelta

now = datetime.utcnow()

access_token = jwt.encode(
    {"sub": str(user.id), "exp": now + timedelta(minutes=15)},
    SECRET_KEY,
    algorithm="HS256"
)

refresh_token = jwt.encode(
    {"sub": str(user.id), "exp": now + timedelta(days=7)},
    SECRET_KEY,
    algorithm="HS256"
)

4. Store Refresh Token

Refresh token stored in refresh_tokens table with revocation support.

5. Return Tokens

Send both tokens to frontend:
{
  "access_token": "eyJ...",
  "refresh_token": "eyJ...",
  "token_type": "bearer"
}

6. Frontend Storage

Frontend stores:
  • Access token: Memory only (Zustand store)
  • Refresh token: HTTPOnly cookie (secure)
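
To illustrate how the access token could then be checked on each API request, here is a hedged sketch of a FastAPI dependency using PyJWT; the secret handling and claim names mirror the snippet above but are not the exact implementation:
import jwt  # PyJWT
from fastapi import Depends, HTTPException
from fastapi.security import OAuth2PasswordBearer

SECRET_KEY = "change-me"  # placeholder; loaded from configuration in practice
oauth2_scheme = OAuth2PasswordBearer(tokenUrl="/api/auth/login")

async def get_current_user_id(token: str = Depends(oauth2_scheme)) -> str:
    try:
        payload = jwt.decode(token, SECRET_KEY, algorithms=["HS256"])
    except jwt.PyJWTError:
        raise HTTPException(status_code=401, detail="Invalid or expired token")
    return payload["sub"]  # the user id placed in the token at login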

Container Isolation

Each project runs in an isolated Docker container:
  • Network Isolation
  • Resource Limits
  • File System Isolation
  • Command Validation
Containers are on separate Docker networks and can’t communicate with each other unless explicitly allowed.
networks:
  - project-network-{project_id}
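
For example (values are placeholders, not Tesslate Studio's actual defaults), the Docker SDK can attach each project to its own network and cap its resources:
import docker

client = docker.from_env()
project_id = "1234"  # placeholder

# One bridge network per project keeps containers from reaching each other
network = client.networks.create(f"project-network-{project_id}", driver="bridge")

container = client.containers.run(
    "node:18-alpine",
    command="sleep infinity",
    detach=True,
    network=network.name,
    mem_limit="512m",           # cap memory
    nano_cpus=1_000_000_000,    # roughly one CPU core
)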

Credential Encryption

Sensitive data is encrypted at rest:
from cryptography.fernet import Fernet

# GitHub tokens, API keys encrypted before storage
cipher_suite = Fernet(ENCRYPTION_KEY)
encrypted_token = cipher_suite.encrypt(github_token.encode())

# Decrypted only when needed
decrypted_token = cipher_suite.decrypt(encrypted_token).decode()

Scalability Patterns

Horizontal Scaling

While the current version runs on a single host, the architecture supports horizontal scaling:
  • Stateless Orchestrator
  • Database Connection Pooling
  • Load Balancer
The FastAPI orchestrator is stateless and can be replicated:
services:
  orchestrator:
    image: tesslate/orchestrator:latest
    deploy:
      mode: replicated
      replicas: 3
All state is in PostgreSQL or Docker containers.

Technology Stack Summary

Backend

  • FastAPI - Async Python web framework
  • SQLAlchemy - ORM and query builder
  • PostgreSQL - Relational database
  • Docker SDK - Container management
  • LiteLLM - AI model gateway

Frontend

  • React 19 - UI framework
  • Vite 7 - Build tool and dev server
  • TypeScript - Type safety
  • Tailwind CSS - Styling
  • Zustand - State management
  • Monaco Editor - Code editor

Infrastructure

  • Docker - Containerization
  • Traefik - Reverse proxy
  • PostgreSQL - Database
  • Node.js - Project runtimes

AI Integration

  • OpenAI - GPT models
  • Anthropic - Claude models
  • Google - Gemini models
  • Ollama - Local LLMs
  • LiteLLM - Unified gateway

Architecture Principles

Each project runs in its own isolated Docker container, ensuring:
  • No conflicts between projects
  • Independent dependency management
  • Resource isolation and limits
  • Easy cleanup (delete container = delete project)
Clean URLs for each project:
  • studio.localhost - Main application
  • my-app.studio.localhost - User’s project
  • Easy sharing and bookmarking
  • Professional development experience
No vendor lock-in for AI:
  • Support for 100+ AI providers via LiteLLM
  • Use local models (Ollama) for free
  • Switch models per project or agent
  • Control your AI costs
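
For instance, switching to a local model is just a different model string through the same LiteLLM call (the model name is an example; any Ollama model you have pulled works):
import asyncio
import litellm

async def main():
    # Same call shape as the cloud providers, but served by a local Ollama instance
    response = await litellm.acompletion(
        model="ollama/llama3",
        messages=[{"role": "user", "content": "Say hello"}],
    )
    print(response.choices[0].message.content)

asyncio.run(main())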
Complete infrastructure control:
  • Deploy anywhere (laptop, cloud, on-prem)
  • Your data never leaves your infrastructure
  • No external dependencies (except AI APIs if chosen)
  • Customize and extend as needed
