Overview
Every user interaction in Tesslate Studio follows a consistent request/response pattern that flows through the frontend, orchestrator, database, and (optionally) the container runtime. This page documents the lifecycle of each major flow: general API requests, agent chat, file operations, container management, Git operations, deployments, and streaming patterns. If you are new to the codebase, start with the General API Request Flow to understand the common pattern, then explore the specific flows relevant to your work.

General API Request Flow
All user interactions follow the same lifecycle.

Frontend sends request
The React app sends an HTTP or WebSocket request to the Orchestrator.
Authentication is included via an Authorization: Bearer {jwt} header or a session cookie.

Orchestrator validates auth
FastAPI middleware decodes the JWT token, verifies the user session, and checks permissions (RBAC).
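The validation step can be sketched with stdlib primitives. This is an illustrative HS256 verifier, not the actual middleware; the SECRET key and function names are hypothetical.

```python
import base64, hashlib, hmac, json, time

SECRET = b"demo-secret"  # hypothetical signing key; the real one comes from config

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def b64url_decode(s: str) -> bytes:
    return base64.urlsafe_b64decode(s + "=" * (-len(s) % 4))

def sign(header_payload: str) -> str:
    return b64url(hmac.new(SECRET, header_payload.encode(), hashlib.sha256).digest())

def make_token(claims: dict) -> str:
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = b64url(json.dumps(claims).encode())
    return f"{header}.{payload}.{sign(f'{header}.{payload}')}"

def validate_bearer(authorization: str) -> dict:
    """Decode and verify an 'Authorization: Bearer {jwt}' header value."""
    scheme, _, token = authorization.partition(" ")
    if scheme != "Bearer":
        raise ValueError("expected Bearer scheme")
    header, payload, sig = token.split(".")
    if not hmac.compare_digest(sig, sign(f"{header}.{payload}")):
        raise ValueError("bad signature")
    claims = json.loads(b64url_decode(payload))
    if claims.get("exp", 0) < time.time():
        raise ValueError("token expired")
    return claims
```

In the real orchestrator this runs as a FastAPI dependency, with RBAC checks applied after the claims are decoded.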
Perform operation
Depending on the request type, the Orchestrator delegates to the appropriate subsystem:
- File operation: Container filesystem (direct in Docker, pod exec in K8s)
- Container operation: Docker Compose or Kubernetes API
- AI chat: LiteLLM proxy to OpenAI/Anthropic
- Deployment: Vercel/Netlify/Cloudflare API
Build response
The Orchestrator assembles the JSON response from the operation result and database state.
Return to frontend
The response is sent back to the React app over the same HTTP connection (or as SSE/WebSocket events for streaming).
Request Flow Diagram
Agent Chat Flow
The agent chat is the most complex data flow, involving LLM calls, tool execution, and real-time streaming to the frontend via Server-Sent Events (SSE).

User types a message
The user enters a message in the chat UI (e.g., “Create a React component for a todo list”).
Frontend opens SSE connection
The frontend sends POST /api/chat/stream with { project_id, message, chat_id } and opens an EventSource for streaming.

Load chat history
The chat router loads previous messages from the database and builds conversation context.
Create agent instance
agent/factory.py instantiates a StreamAgent with the appropriate system prompt, available tools (read_file, write_file, bash_exec, etc.), and LLM model.

Agent execution loop
The StreamAgent enters a loop:

- Call the LLM with system prompt + conversation history
- If the LLM returns tool calls, execute them (e.g., write_file, bash_exec)
- Stream each tool execution event to the frontend
- Call the LLM again with tool results
- Repeat until the LLM produces a final text response
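The loop above can be sketched with a stubbed LLM and an in-memory tool table. This is not the real StreamAgent (which streams events over SSE and talks to LiteLLM); the message shapes and the fake_llm stub are illustrative.

```python
# Minimal sketch of the agent execution loop with a stubbed LLM.
# Real tool results and streamed events are replaced by plain dicts.

def fake_llm(messages):
    # Stubbed model: request one tool call, then produce a final answer.
    if not any(m["role"] == "tool" for m in messages):
        return {"tool_calls": [{"name": "write_file",
                                "args": {"path": "Todo.jsx", "content": "..."}}]}
    return {"content": "Created Todo.jsx with a basic todo-list component."}

TOOLS = {"write_file": lambda path, content: f"wrote {path}"}

def run_agent(user_message, llm=fake_llm):
    messages = [{"role": "system", "content": "You are a coding agent."},
                {"role": "user", "content": user_message}]
    events = []  # stands in for the SSE stream to the frontend
    while True:
        reply = llm(messages)
        for call in reply.get("tool_calls", []):
            result = TOOLS[call["name"]](**call["args"])
            events.append({"type": "tool", "name": call["name"], "result": result})
            messages.append({"role": "tool", "content": result})
        if "content" in reply:
            events.append({"type": "text", "content": reply["content"]})
            return events
```

Each iteration either executes tool calls (feeding results back into the conversation) or terminates with the model's final text, mirroring the loop described above.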
Agent Tool Execution Example
User prompt: “Create a React component for a todo list”

Available Agent Tools
| Tool | File | Purpose |
|---|---|---|
| read_file / write_file | agent/tools/file_ops/read_write.py | Read and write files in the project |
| patch_file / multi_edit | agent/tools/file_ops/edit.py | Edit specific file sections |
| bash_exec | agent/tools/shell_ops/bash.py | Execute shell commands |
| shell_exec / shell_open | agent/tools/shell_ops/session.py | Persistent shell sessions |
| web_fetch | agent/tools/web_ops/fetch.py | HTTP requests for web content |
| todos | agent/tools/planning_ops/todos.py | Task planning and tracking |
| get_project_info | agent/tools/project_ops/metadata.py | Query project information |
File Operations Flow
File reads and writes differ depending on deployment mode. In Docker mode, the orchestrator accesses the filesystem directly. In Kubernetes mode, it executes commands inside the file-manager pod.

- Read File
- Write File
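The mode switch can be sketched as a single dispatch function. The mount root, pod name, and function name here are hypothetical; the real implementation lives in the orchestrator's project routers and K8s service.

```python
import subprocess
from pathlib import Path

def read_project_file(project_id: str, rel_path: str,
                      mode: str = "docker", root: str = "/workspaces") -> str:
    """Sketch of the Docker-vs-Kubernetes file-read dispatch."""
    if mode == "docker":
        # Docker mode: project volumes are mounted into the orchestrator,
        # so files are read straight off the local filesystem.
        return (Path(root) / project_id / rel_path).read_text()
    # Kubernetes mode: exec `cat` inside the project's file-manager pod.
    out = subprocess.run(
        ["kubectl", "exec", "-n", f"proj-{project_id}", "deploy/file-manager",
         "--", "cat", rel_path],
        capture_output=True, text=True, check=True)
    return out.stdout
```

Writes follow the same shape, with the Kubernetes branch piping content into the pod instead of reading from it.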
Container Operations Flow
Container start and stop operations are non-blocking. The Orchestrator returns immediately and the frontend polls for status updates.

Start Project Containers
Validation and background task
The Orchestrator validates auth, checks that the project is not already running, queues a background task for container setup, and returns { "status": "starting" } immediately.

Background task executes (Kubernetes mode)

- Create namespace (proj-{uuid})
- Create PVC (shared storage, e.g. 10Gi RWO)
- Restore from VolumeSnapshot if hibernated (or hydrate from S3 for legacy projects)
- Create file-manager pod (always running)
- For each container: create Deployment + Service + Ingress
- Create NetworkPolicy for isolation
- Update project status in database to “running”
- Return container URLs
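The ordering of the objects created above can be sketched as a plan builder. The object shapes below are simplified stand-ins, not full Kubernetes manifests, and the function name is illustrative.

```python
def build_start_manifests(project_uuid: str, containers: list) -> list:
    """Sketch: assemble, in order, the K8s objects the background task creates."""
    ns = f"proj-{project_uuid}"
    objs = [
        {"kind": "Namespace", "name": ns},
        {"kind": "PersistentVolumeClaim", "name": "workspace", "namespace": ns,
         "spec": {"accessModes": ["ReadWriteOnce"], "storage": "10Gi"}},
        {"kind": "Pod", "name": "file-manager", "namespace": ns},  # always running
    ]
    for c in containers:
        # Each project container gets a Deployment, Service, and Ingress.
        objs += [{"kind": kind, "name": c, "namespace": ns}
                 for kind in ("Deployment", "Service", "Ingress")]
    objs.append({"kind": "NetworkPolicy", "name": "isolate", "namespace": ns})
    return objs
```

Deleting the namespace on stop cascades to every object in this plan, which is why hibernation only needs a VolumeSnapshot before teardown.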
Stop Project Containers
Background task: dehydrate and delete
- Create VolumeSnapshot from PVC (under 5 seconds)
- Wait for snapshot readiness
- Delete namespace (cascades to all resources: Deployments, Services, Ingress, PVC, NetworkPolicy)
- Update project status to “hibernated”
Git Operations Flow
Clone Repository
Commit and Push
Deployment Flow (External Providers)
External deployments to Vercel, Netlify, or Cloudflare follow a consistent non-blocking pattern.

User initiates deployment
Frontend sends POST /api/deployments with provider name, project ID, and configuration.

Retrieve OAuth credentials
The Orchestrator decrypts the user’s stored DeploymentCredential for the chosen provider.

Background build and deploy
- Build the project locally (e.g., npm run build)
- Push to Git if needed (create/update GitHub repo)
- Call provider API to create deployment
- Poll provider API until deployment status is “READY”
- Save deployment record to database
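The polling step can be sketched as a small wait loop with an injected status function standing in for the provider API call (Vercel/Netlify/Cloudflare each name statuses slightly differently; "READY" is used here for illustration).

```python
import time

def wait_until_ready(get_status, interval=2.0, timeout=300.0, sleep=time.sleep):
    """Poll a provider's deployment-status API until it reports READY.

    `get_status` is a zero-arg callable wrapping the provider API call;
    `sleep` is injectable so the loop is testable without real delays.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = get_status()
        if status == "READY":
            return True
        if status == "ERROR":
            raise RuntimeError("deployment failed")
        sleep(interval)
    raise TimeoutError("deployment did not become READY in time")
```

Once the loop returns, the background task persists the deployment record (URL, provider ID, status) to the database.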
WebSocket and SSE Streaming Patterns
Tesslate Studio uses two streaming mechanisms for real-time communication, alongside simple polling for status checks.

- Server-Sent Events (Agent Chat)
- Polling (Status Checks)
- WebSocket (Bidirectional)
Backend (FastAPI):

Frontend (EventSource):

Use cases: Agent chat streaming, build output streaming.
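The server side of SSE reduces to emitting correctly framed text events. A minimal sketch of the wire format (function names are illustrative; the real endpoint wraps a generator like this in a FastAPI StreamingResponse with media_type="text/event-stream"):

```python
import json

def sse_format(event: str, data: dict) -> str:
    """Serialize one Server-Sent Event frame as EventSource expects:
    an `event:` line, a `data:` line, and a blank-line terminator."""
    return f"event: {event}\ndata: {json.dumps(data)}\n\n"

def agent_event_stream(events):
    """Generator yielding agent events as SSE frames, ending with `done`."""
    for ev in events:
        yield sse_format(ev["type"], ev)
    yield sse_format("done", {})
```

On the frontend, `new EventSource(url)` with per-type listeners (e.g. for "tool", "text", "done") consumes these frames as they arrive.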
Performance Optimizations
Non-Blocking Operations
Long-running operations return immediately and execute in the background. The frontend polls for status.
Database Query Optimization
Use selectinload() to prevent N+1 queries and load related objects in a single query.
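The difference is in the number of round-trips: the N+1 pattern issues one query per parent row, while selectinload() issues a single batched SELECT ... IN query for all children. A library-free sketch that counts the queries each approach would emit (table and column names are illustrative):

```python
QUERIES = []  # stand-in for a query log

def fetch_children_one_by_one(parent_ids):
    """N+1 pattern: one SELECT per parent row."""
    out = {}
    for pid in parent_ids:
        QUERIES.append(f"SELECT * FROM files WHERE project_id = {pid}")
        out[pid] = []
    return out

def fetch_children_batched(parent_ids):
    """What selectinload() emits: one SELECT ... IN for all parents."""
    QUERIES.append("SELECT * FROM files WHERE project_id IN (%s)"
                   % ", ".join(map(str, parent_ids)))
    return {pid: [] for pid in parent_ids}
```

With N parents, the first approach costs N queries (plus the initial parent query); the second always costs one.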
Streaming vs. Polling Decision Guide
| Pattern | When to Use | Example |
|---|---|---|
| SSE (Server-Sent Events) | Unidirectional, real-time data from server | Agent chat responses |
| Polling | Simple status checks, stateless | Container startup status |
| WebSocket | Bidirectional, real-time communication | Live terminal, shell sessions |
Key Source Files
| File | Purpose |
|---|---|
| orchestrator/app/routers/projects.py | Project CRUD, file operations, container lifecycle |
| orchestrator/app/routers/chat.py | Agent chat and streaming |
| orchestrator/app/routers/git.py | Git operations |
| orchestrator/app/routers/deployments.py | External deployments |
| orchestrator/app/agent/stream_agent.py | Streaming AI agent implementation |
| orchestrator/app/agent/factory.py | Agent creation from config |
| orchestrator/app/agent/tools/ | Agent tool implementations |
| orchestrator/app/services/orchestration/kubernetes_orchestrator.py | K8s container management |
| orchestrator/app/services/s3_manager.py | S3 hydration/dehydration |