Documentation Index
Fetch the complete documentation index at: https://docs.tesslate.com/llms.txt
Use this file to discover all available pages before exploring further.

The agent runner
OpenSail ships one first-class agent runner: tesslate-agent. It is a Python package vendored in the monorepo at packages/tesslate-agent/ and also published standalone. One class (TesslateAgent), 33 tools across 8 categories, LiteLLM for model access, and ATIF v1.4 trajectory recording.
There is no longer a StreamAgent, IterativeAgent, or ReActAgent. All of the earlier runners were replaced by this single one.
33 built-in tools
File ops, shell, navigation, git, memory, web, planning, delegation.
Any LiteLLM model
OpenAI, Anthropic, DeepSeek, Qwen, Gemini, Mistral, xAI, Z.AI, local Ollama, any OpenAI-compatible endpoint.
Progressive persistence
Every step streams to the database as it happens. Pods can die mid-run; you can resume later.
Context compaction
At 80 percent of the model window, the agent compacts older messages with a cheap model and keeps going.
The agent loop
Everything happens inside TesslateAgent.run(user_request, context). It is an async generator that yields events and never touches the filesystem directly. Every tool call goes through the ToolRegistry.
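The loop can be pictured as a minimal sketch. The shape below (an async generator yielding event dicts, with tool calls routed through a registry) follows the description above, but the event fields and the ToolRegistry methods are illustrative assumptions, not the package's real API:

```python
import asyncio
from typing import AsyncIterator

class ToolRegistry:
    """Stand-in for the real registry: every tool call goes through here."""

    def __init__(self):
        self._tools = {"read_file": lambda path: f"<contents of {path}>"}

    def call(self, name, **kwargs):
        return self._tools[name](**kwargs)

async def run(user_request: str, registry: ToolRegistry) -> AsyncIterator[dict]:
    # Hypothetical event shapes; the real runner defines its own types.
    yield {"type": "step_start", "request": user_request}
    # The model would decide which tool to call; hard-coded for the sketch.
    result = registry.call("read_file", path="README.md")
    yield {"type": "tool_result", "tool": "read_file", "output": result}
    yield {"type": "final", "answer": "done"}

async def main():
    return [event async for event in run("summarize README", ToolRegistry())]

events = asyncio.run(main())
```

Because run() is an async generator, callers stream events as they happen rather than waiting for the run to finish.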
Full protocol, tool schemas, edit modes, and quirks live in the agent repo: DOCS.md on GitHub.
Tool catalog
33 tools across 8 categories. The registry is fixed; extensions happen through skills and MCP bridges, not new tool modules.
- File operations (8)
- Shell commands (8)
- Git (4)
- Memory (2)
- Web (2)
- Planning (1)
- Delegation (5)
read_file, write_file, read_many_files, patch_file, multi_edit, apply_patch, view_image, file_undo. Surgical edits use patch_file with fuzzy matching. Atomic multi-file changes use multi_edit. apply_patch takes unified diffs. file_undo rolls back the most recent edit.

OpenSail extends this set with platform tools for project control, graph editing (apply_setup_config), and kanban (TSK-NNNN refs). The core 33 ship with every install.

Subagents and delegation
An agent can spawn subagents for focused subtasks. Each subagent gets its own system prompt, its own tool subset, and its own budget. The parent agent sees subagent output and decides what to do next. Typical patterns:
- Divide and conquer: a frontend agent and a backend agent run in parallel inside the same workspace
- Specialist lookup: spawn a doc-writer subagent to generate changelog entries
- Verification: spawn a reviewer subagent to audit the main agent's diff before finalizing

Delegation is exposed through the task, wait_agent, send_message_to_agent, close_agent, and list_agents tools. Subagents run in the same workspace, so file edits are visible to everyone.
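The verification pattern might look like the following sequence of tool calls. The tool names come from the list above, but the argument names (system_prompt, tools, budget) are assumptions about the schema, not the documented one:

```python
# Hypothetical delegation sequence: spawn a reviewer, wait for it, clean up.
spawn = {
    "tool": "task",
    "args": {
        "name": "reviewer",
        "system_prompt": "Audit the diff for bugs and style issues.",
        "tools": ["read_file", "shell"],   # restricted tool subset
        "budget": {"max_steps": 20},       # independent budget
    },
}
wait = {"tool": "wait_agent", "args": {"agent": "reviewer"}}
cleanup = {"tool": "close_agent", "args": {"agent": "reviewer"}}

sequence = [spawn, wait, cleanup]
```

Because the reviewer shares the workspace, it audits the parent's actual files rather than a copy.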
Skills and progressive disclosure
Skills are reusable capabilities (writing style, code review checklist, deployment playbook, research methodology). They attach to any agent. Skills load progressively:
- The lightweight catalog (name + description) is injected into the agent's context
- The full body is pulled on demand, only when the agent calls load_skill

Some skills, such as project-architecture, use live marker tokens ({{MARKER}}) that get resolved at load time with the current config schema, service catalog, and URL patterns, so skill bodies never drift out of sync with the platform.
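Progressive disclosure can be sketched in a few lines. The storage shape and the resolve step below are assumptions for illustration, not the real skill loader:

```python
import re

# Hypothetical skill store: a catalog entry plus a lazily loaded body.
SKILLS = {
    "code-review": {
        "description": "Checklist for reviewing diffs.",
        "body": "1. Read the diff.\n2. Check tests.",
    },
    "project-architecture": {
        "description": "Current platform layout.",
        "body": "Services: {{SERVICE_CATALOG}}",
    },
}

def catalog():
    # Only name + description are injected into the agent's context up front.
    return [{"name": n, "description": s["description"]} for n, s in SKILLS.items()]

def load_skill(name, live_values):
    # The full body is fetched on demand; {{MARKER}} tokens resolve at load time.
    body = SKILLS[name]["body"]
    return re.sub(r"\{\{(\w+)\}\}", lambda m: live_values[m.group(1)], body)
```

Resolving markers at load time, rather than baking values into the body, is what keeps skills from drifting as the platform changes.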
MCP integration
The agent supports MCP (Model Context Protocol) natively. Installed MCP servers appear as native tools in the agent’s registry. Tool schemas are cached in Redis (MCP_TOOL_CACHE_TTL, default 300 seconds) and refreshed on OAuth or server update.
The MCP bridge also surfaces:
- Citations returned by MCP tools, rendered as CitationCard in chat
- Re-auth prompts, which trigger a yellow ReauthBanner linking to /settings/connectors
- Structured outputs per the MCP spec (2025-06-18 and later)
Context compaction
When a session crosses 80 percent of the active model’s context window, the agent compacts older messages with a cheap model (compaction_summary_model, usually a small, fast model). Compaction preserves tool outputs by reference and keeps the most recent turns verbatim. The agent continues without hitting a wall.
Multi-hour runs are normal. Compaction is automatic and costless from the user’s perspective.
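The threshold check reduces to a few lines. In this sketch, summarize() stands in for a call to the cheap compaction_summary_model, and the number of verbatim recent turns is an assumption:

```python
CONTEXT_WINDOW = 128_000   # tokens; depends on the active model
COMPACT_AT = 0.8           # the documented 80 percent trigger
KEEP_RECENT = 6            # recent turns kept verbatim (count is an assumption)

def summarize(messages):
    # Placeholder for the cheap compaction_summary_model call.
    return {"role": "system", "content": f"[summary of {len(messages)} older messages]"}

def maybe_compact(messages, used_tokens):
    if used_tokens < COMPACT_AT * CONTEXT_WINDOW:
        return messages  # plenty of headroom: do nothing
    older, recent = messages[:-KEEP_RECENT], messages[-KEEP_RECENT:]
    return [summarize(older)] + recent
```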
Progressive persistence
Every agent step (AgentStep) streams to the database as it happens. The row goes in before the next tool call. If the pod dies, the browser closes, or the network hiccups, you can come back later and see the full trajectory.
Resuming picks up from the last persisted step. No re-execution, no duplicate tool calls.
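The write-before-next-call discipline is the whole trick. A minimal sketch with SQLite (the real AgentStep columns are assumptions):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE agent_steps (task_id TEXT, idx INTEGER, tool TEXT, output TEXT)")

def persist_step(task_id, idx, tool, output):
    # The row is committed BEFORE the next tool call runs, so a dead pod
    # never loses a completed step.
    db.execute("INSERT INTO agent_steps VALUES (?, ?, ?, ?)",
               (task_id, idx, tool, output))
    db.commit()

def resume_index(task_id):
    # Resuming starts after the last persisted step: no re-execution,
    # no duplicate tool calls.
    row = db.execute("SELECT MAX(idx) FROM agent_steps WHERE task_id = ?",
                     (task_id,)).fetchone()
    return 0 if row[0] is None else row[0] + 1
```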
Edit modes and approval
Edit mode controls how much autonomy the agent has. Switch it any time from the chat input.
- Ask (default)
- Allow
- Plan

Dangerous tools (writes, shell, git push) trigger an ApprovalRequestCard in chat with three buttons: Allow Once, Allow All, Stop. Read-only tools run without prompting. Permission policies live in .tesslate/permissions.json per project. Each capability (shell, network, git push, file write, process spawn) has an allow, deny, or ask policy with optional always-allow persistence.
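Policy resolution might look like the sketch below. The capability names come from the list above, but the JSON field names and the resolution order are assumptions, not the documented schema of .tesslate/permissions.json:

```python
import json

# Hypothetical per-project policy file contents.
PERMISSIONS = json.loads("""
{
  "shell": "ask",
  "network": "allow",
  "git_push": "deny",
  "file_write": "ask",
  "process_spawn": "ask"
}
""")

def decide(capability, always_allowed=frozenset()):
    # "Allow All" on the ApprovalRequestCard would add a capability to the
    # always-allow set, persisting the grant for the session.
    if capability in always_allowed:
        return "allow"
    return PERMISSIONS.get(capability, "ask")  # unknown capabilities default to ask
```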
Orchestration backends
tesslate-agent has a pluggable orchestration layer. LocalOrchestrator ships in the open-source repo. OpenSail registers Docker, Kubernetes, and desktop-local backends via OrchestratorFactory.register(mode, cls), so the same agent code runs against subprocesses, containers, or pods without change.
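The registration pattern is a small factory. Only OrchestratorFactory.register(mode, cls) appears in the text; the create() method and the LocalOrchestrator internals below are illustrative assumptions:

```python
class OrchestratorFactory:
    _backends = {}

    @classmethod
    def register(cls, mode, backend_cls):
        # OpenSail calls this for "docker", "kubernetes", "desktop-local", etc.
        cls._backends[mode] = backend_cls

    @classmethod
    def create(cls, mode, **kwargs):
        return cls._backends[mode](**kwargs)

class LocalOrchestrator:
    """Sketch of the open-source default: runs commands as subprocesses."""

    def __init__(self, workdir="."):
        self.workdir = workdir

    def exec(self, cmd):
        return f"ran {cmd!r} as a subprocess in {self.workdir}"

OrchestratorFactory.register("local", LocalOrchestrator)
orch = OrchestratorFactory.create("local", workdir="/tmp")
```

The agent code only talks to the factory, so swapping subprocesses for containers or pods never touches the agent loop.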
Running an agent
From the chat UI, pick an agent and type. From the external API, call POST /api/external/agent/invoke with your tsk_ API key and subscribe to SSE events at /api/external/agent/events/{task_id}. From the CLI (inside the tesslate-agent repo), run tesslate-agent run --task "..." --workdir . --output trajectory.json.
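An external invocation might be assembled like this. The two endpoint paths come from the text above; the host, payload fields, and auth header shape are assumptions:

```python
import json
import urllib.request

BASE = "https://opensail.example.com"  # placeholder host
API_KEY = "tsk_..."                    # your real tsk_ key

def build_invoke_request(agent, task):
    # Hypothetical request body; check the external API docs for real fields.
    body = json.dumps({"agent": agent, "task": task}).encode()
    return urllib.request.Request(
        f"{BASE}/api/external/agent/invoke",
        data=body,
        headers={"Authorization": f"Bearer {API_KEY}",
                 "Content-Type": "application/json"},
        method="POST",
    )

def events_url(task_id):
    # Subscribe here with an SSE client to stream events as the agent runs.
    return f"{BASE}/api/external/agent/events/{task_id}"

req = build_invoke_request("coder", "add a healthcheck endpoint")
```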
Every execution produces an ATIF v1.4 trajectory recording (trajectory.json). Use it for replay, debugging, and benchmark analysis.

Related
Full tool reference
Every tool, every parameter, every quirk.
Chat interface
How agents stream to the chat panel.
Marketplace
Install specialized agents, skills, and MCP servers.
External agent API
Invoke agents from outside the UI with tsk_ keys.