

Tesslate OpenSail

Why customize

An agent in OpenSail is a tesslate-agent instance plus a system prompt, a bound skill catalog, a bound MCP connector set, a model preference, and an approval policy. Customization is how you turn a generic coding agent into a specialist: a Tailwind UI builder, a FastAPI API author, a security reviewer, a product-ops bot that lives in Slack.

Encode standards

Lock in coding conventions, frameworks, and patterns once, reuse forever

Scope capabilities

Restrict tools, skills, and MCP connectors so the agent does one thing well

Ship it

Publish to your team or the public marketplace as an immutable agent version

Keep control

Approval policies travel with the agent, so dangerous actions still gate the same way

The five levers

The system prompt is the first lever: the character sheet for the agent, covering role, tone, rules, tech stack, and anti-patterns. System prompts support runtime markers like {mode_instructions}, {project_name}, and {git_branch} that are substituted before each model call. The remaining levers, the skill catalog, the MCP connector set, the model preference, and the approval policy, are each covered below.

Creating an agent

Fork an existing one

The fastest path. You start with a working system prompt and adjust.
  1. Find a base: Open the Marketplace or the Library and find an agent close to what you want.
  2. Fork: Click Fork. A copy lands in your Library under your ownership.
  3. Edit the prompt: Rewrite the system prompt to match your conventions. Keep {mode_instructions} somewhere in the prompt so edit-mode behavior stays consistent.
  4. Attach skills and MCPs: Go to the agent’s detail page. Add skills from the Library tab. Add MCP connectors from the MCP tab.
  5. Pick a model: Set a default model. Choose something faster for quick iteration, something stronger for reasoning-heavy tasks.
  6. Test: Open a test project, run the agent against a representative task, and iterate on the prompt.

Create from scratch

  1. Open the Library: Go to the Library, then the Agents tab, and click Create New Agent.
  2. Name and metadata: Give it a slug, a display name, an icon, a category, and a short description. These power marketplace search.
  3. System prompt: Write the prompt. Keep it focused: one role, one stack, one set of rules.
  4. Skills: Attach the skills this agent needs. Leave off anything speculative: skills are lazy-loaded and only cost context window on use.
  5. MCP connectors: Attach the connectors this agent must have. Connectors are per-user, so installers attach their own credentials after install.
  6. Model and policy: Set the default model and the default approval policy.
  7. Save and test: Save, then open a project and run through real tasks.

Writing system prompts

A good system prompt has five sections: role, guidelines, stack, constraints, and runtime markers.

Role. One line: who the agent is and what it does.

    You are an expert TypeScript and React developer. You build accessible, responsive components using Tailwind and React Hook Form.

Guidelines. How the agent should work.

    Always:
    - Read the target file before editing
    - Use named exports
    - Type every prop and return value
    - Follow the patterns already in the project

Stack. Preferred libraries.

    Stack:
    - React 18+ with TypeScript strict mode
    - Tailwind CSS, no CSS-in-JS
    - React Hook Form + Zod
    - React Query for server state

Constraints. What to avoid.

    Avoid:
    - Class components
    - Default exports
    - Hardcoded colors outside the Tailwind palette
    - Monolithic components; split above 200 lines

Runtime markers. Dynamic context injected at each call.

    {mode_instructions}

    Project: {project_name}
    Branch: {git_branch}
    Available tools: {tool_list}

Common markers

| Marker | Substituted with |
| --- | --- |
| {mode_instructions} | Full text for the current edit mode (allow, ask, plan) |
| {mode} | The mode name only |
| {project_name} | Current project display name |
| {project_description} | Project description |
| {project_path} | Workspace path inside the container |
| {git_branch} | Current branch |
| {tool_list} | Comma-separated tool names available this session |
| {user_name} | The signed-in user’s display name |
| {timestamp} | ISO timestamp at call time |
Always keep {mode_instructions} in your prompt. It guarantees the agent still honors Plan Mode and Ask Before Edit even if your other instructions conflict.
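As a rough illustration, marker substitution amounts to a single template pass before each model call. The marker names below come from the table above; the rendering function itself is a hypothetical sketch, not OpenSail's actual implementation.

```python
import re

# Hypothetical sketch of runtime-marker substitution. Only the marker names
# are from the docs; the function and its behavior are assumptions.
def render_prompt(template: str, context: dict[str, str]) -> str:
    """Replace {marker} tokens with context values; leave unknown markers intact."""
    return re.sub(
        r"\{(\w+)\}",
        lambda m: context.get(m.group(1), m.group(0)),
        template,
    )

system_prompt = (
    "{mode_instructions}\n"
    "Project: {project_name}\n"
    "Branch: {git_branch}"
)
rendered = render_prompt(system_prompt, {
    "mode_instructions": "Ask Before Edit: request approval for every file write.",
    "project_name": "storefront",
    "git_branch": "feature/checkout",
})
```

Leaving unknown markers intact (rather than raising) mirrors the safest template behavior: a typo in a marker name degrades gracefully instead of breaking the prompt.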

Binding skills

Skills are the unit of portable know-how in OpenSail. Each skill is a markdown document with a name, a short description, and a body. The agent only sees the name and description at session start. When it decides it needs one, it calls load_skill and the full body is injected into the turn. Three sources feed the catalog:
  1. Built-in skills shipped with the platform (for example, project-architecture). Available to every agent.
  2. Skills you installed on this agent via the Library. You attach them explicitly.
  3. Project-file skills discovered at .agents/skills/SKILL.md in the workspace. Any agent working in that project sees them.
For authoring skills, see /guides/skills.
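The lazy-loading flow described above can be sketched in a few lines. Only the name/description/body split and the load-on-demand behavior come from the docs; the class and function names here are illustrative stand-ins.

```python
from dataclasses import dataclass

# Illustrative sketch of skill lazy-loading. The name/description/body split
# and load-on-demand flow are from the docs; everything else is assumed.
@dataclass
class Skill:
    name: str
    description: str
    body: str  # full markdown, injected only when load_skill is called

catalog = {
    "project-architecture": Skill(
        name="project-architecture",
        description="How this codebase is laid out and where things go.",
        body="# Project architecture\n...full guidance...",
    ),
}

def session_preview(skills: dict[str, Skill]) -> list[str]:
    """What the agent sees at session start: names and descriptions only."""
    return [f"{s.name}: {s.description}" for s in skills.values()]

def load_skill(skills: dict[str, Skill], name: str) -> str:
    """Called by the agent mid-turn; returns the full body for injection."""
    return skills[name].body
```

This is why attaching speculative skills is cheap but not free: every attached skill adds a preview line at session start, while the body costs context only on load.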

Binding MCP connectors

MCP (Model Context Protocol) is how OpenSail bridges third-party tool servers into the agent’s tool registry. You install a connector once on your account (with your credentials, encrypted at rest), then attach it to any agent. When the agent starts a session, the worker:
  1. Looks up every AgentMcpAssignment for the agent
  2. Loads the user’s UserMcpConfig for each (with credentials)
  3. Connects to each MCP server over streamable HTTP
  4. Bridges the server’s tools, resources, and prompts into the agent’s ToolRegistry
From the agent’s perspective, connector tools look identical to built-in tools. They get used the same way. For connector setup and credential management, see /guides/connectors-mcp.
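The four startup steps can be sketched as a loop over the agent's assignments. The step order mirrors the docs, but every function and data shape below is a hypothetical stand-in for the worker's internals, with stub data in place of real lookups and network calls.

```python
# Hypothetical sketch of the session-startup bridging loop. Step order is
# from the docs; all names and shapes are illustrative stand-ins.

def fetch_mcp_assignments(agent_id: str) -> list[dict]:
    # Step 1 stand-in: look up every AgentMcpAssignment for the agent.
    return [{"server": "github-mcp"}]

def fetch_user_mcp_config(user_id: str, assignment: dict) -> dict:
    # Step 2 stand-in: load the user's UserMcpConfig with credentials.
    return {"server": assignment["server"], "token": "decrypted-secret"}

def connect_streamable_http(config: dict) -> dict:
    # Step 3 stand-in: connect over streamable HTTP and list server tools.
    return {"tools": [{"name": "github.create_issue"}]}

def start_session(agent_id: str, user_id: str) -> dict:
    registry: dict = {}
    for assignment in fetch_mcp_assignments(agent_id):
        config = fetch_user_mcp_config(user_id, assignment)
        server = connect_streamable_http(config)
        for tool in server["tools"]:
            # Step 4: bridged tools land in the same registry as built-ins,
            # which is why they look identical to the agent.
            registry[tool["name"]] = tool
    return registry
```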

Approval policy

Every agent carries a default edit mode. Users can override per session, but your default matters because it sets the expected behavior.
| Default | When to pick it |
| --- | --- |
| Ask Before Edit | Any agent that writes files or runs shell commands. Safest default. |
| Allow All Edits | Agents meant for trusted contexts (personal projects, sandboxed environments). |
| Plan Mode | Agents that should only plan and never execute. Useful for review agents, security scanners, architects. |
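Conceptually, the three modes reduce to a gate the worker consults before any write or shell action. The mode semantics come from the table above; the gate function, its mode strings, and its return values are assumptions for illustration.

```python
# Illustrative gate for the three edit modes. Mode semantics are from the
# docs; the function shape and string values are assumed.
def gate_action(mode: str, action: str) -> str:
    """Return what the worker should do with a file-write or shell action."""
    if mode == "plan":
        return "block"      # Plan Mode: plan only, never execute
    if mode == "ask":
        return "ask_user"   # Ask Before Edit: surface for approval
    if mode == "allow":
        return "execute"    # Allow All Edits: run immediately
    raise ValueError(f"unknown mode: {mode}")
```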

Publishing to the marketplace

  1. Polish: Clear system prompt, representative test runs, working skill and MCP bindings.
  2. Describe: Fill in the marketplace description, category, and tags. Add example prompts that show the agent at its best.
  3. Choose visibility: Private (just you), team-only, or public. Public listings go through the approval pipeline.
  4. Set pricing: Free, one-time purchase, subscription, or API-metered. Creator payouts are 90% to you, 10% to the platform. See /guides/billing.
  5. Submit: Submit for review. The approval pipeline runs automated security and manifest checks, then a sandbox evaluation, then a human review.
  6. Iterate: Publish updates as new immutable versions. Installers can pin, auto-update, or update manually per their policy.
Public marketplace agents are visible to everyone. Do not hardcode secrets or environment-specific paths into system prompts. Use runtime markers for project-specific context.

The MarketplaceAgent model

Agents are stored as MarketplaceAgent rows with item_type="agent". The relevant columns for customization:
| Field | Purpose |
| --- | --- |
| slug | Stable identifier |
| name, description, icon, category, tags | Marketplace metadata |
| system_prompt | The prompt body (markdown with runtime markers) |
| tools | Optional allowlist of tool names; null means all available tools |
| model | Default LiteLLM model id |
| approval_mode | Default edit mode |
| visibility | Private, team, or public |
| price | Free, one-time, subscription, or API-metered |
| forkable | Whether installers can fork the agent |
| is_builtin | Reserved for platform-shipped agents; not user-writable |
Attachments live in adjacent tables:
  • AgentSkillAssignment binds a skill to an agent
  • AgentMcpAssignment binds an installed MCP server to an agent

Testing your agent

  1. Use a test project: Make a dedicated workspace. Clone a typical project you’d use this agent on.
  2. Run representative tasks: Not just “hello world”. Run the actual tasks users will throw at this agent.
  3. Review every tool call: Put the agent in Ask Before Edit. Review what it wants to do. Note where it picks the wrong tool or the wrong file.
  4. Check for drift: Does it follow your guidelines or ignore them under pressure? Add explicit rules for the failure modes.
  5. Stress-test the model: Long-horizon tasks reveal whether your prompt holds up. Run multi-step workflows and watch for off-pattern code.

Best practices

Focused agents outperform generalists. “React + Tailwind UI specialist” beats “full-stack developer” on UI tasks and vice versa.
Every time the agent does something wrong, add a rule that prevents it. Every time it does something right, generalize it into the prompt. Prompts are versioned like code.
A review agent does not need bash_exec. A docs writer does not need multi_edit. Narrow tools reduce drift and lower risk.
LLMs follow examples better than abstractions. Include a short code snippet in your system prompt showing the pattern you want.
Ship new agents in Ask Before Edit. Let users opt into Allow All once they trust the behavior.

Next steps

Using Agents

The user-side guide to running agents in chat

Skills

Author and publish reusable agent skills

Connectors (MCP)

Wire external tools into any agent

Model Management

Choose models, BYOK, or self-host