Why Customize Agents?
Custom agents let you encode your preferences, frameworks, coding standards, and workflows into a reusable AI assistant. Instead of repeating instructions in every chat, you bake them into the agent’s system prompt and configuration.

Enforce Standards
Ensure consistent coding style, naming conventions, and patterns
Specialize Behavior
Focus the agent on specific frameworks, languages, or domains
Control Tool Access
Restrict which tools the agent can use for security or focus
Share with Others
Publish to the marketplace for your team or the community
What You Can Customize
- System Prompt
- Agent Type
- AI Model
- Tool Permissions
- Metadata
The core instructions that define agent behavior. This is the single most important customization.
- Role definition (what the agent is)
- Coding style and standards
- Framework and library preferences
- Behavioral guidelines and constraints
- Dynamic markers for runtime context
Creating a Custom Agent
Method 1: Fork an Existing Agent
The easiest path. Start with a working agent and modify it.

Find an Open-Source Agent
Browse your library or the marketplace for an open-source agent that is close to what you want.
Customize the System Prompt
Edit the system prompt to match your preferences. Add your coding standards, framework choices, and behavioral guidelines.
Forking is the recommended approach for beginners. You start with a proven system prompt and modify it incrementally.
Method 2: Create from Scratch
Build a fully custom agent.

Choose Agent Type
Select the architecture: StreamAgent, IterativeAgent, ReActAgent, or TesslateAgent. This determines how the agent processes requests.
Write System Prompt
Craft the system prompt that defines the agent’s behavior. See the section below for guidance.
Configure Tools
Choose which tools the agent can access. Leave empty for full access, or specify a list.
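As a hypothetical sketch, an agent configuration might leave the tools field empty for full access or set it to an explicit list (the field name comes from this page; the JSON shape is an assumption):

```json
{
  "agent_type": "StreamAgent",
  "tools": ["read_file", "write_file", "patch_file"]
}
```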
Writing System Prompts
The system prompt is the most critical part of agent customization. It defines the agent’s role, capabilities, constraints, and style.

Prompt Structure
A well-structured system prompt includes five sections:

1. Role Definition
Define who the agent is:
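For example, a role definition might read like this (the wording is illustrative, not a required format):

```text
You are a senior React developer specializing in accessible,
production-quality user interfaces. You write TypeScript by default
and explain your changes briefly as you make them.
```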
2. Capabilities and Guidelines
What the agent should do and how:
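A sketch of a guidelines section (illustrative rules, not platform requirements):

```text
- Build components with React function components and hooks
- Prefer small, composable components over large monoliths
- Add prop types and JSDoc comments to every exported component
- When a request is ambiguous, ask one clarifying question before coding
```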
3. Technology Stack
Specific libraries and preferences:
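A sketch of a stack section (the specific library choices here are examples, not defaults):

```text
- UI: React 18 + TypeScript
- Styling: Tailwind CSS (no inline style objects)
- Forms: react-hook-form with zod validation
- Data fetching: TanStack Query
```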
4. Constraints
What NOT to do:
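A sketch of a constraints section (illustrative rules):

```text
- Do NOT introduce new dependencies without asking
- Do NOT use class components or deprecated lifecycle methods
- Do NOT leave TODO comments in generated code
- Never hard-code API keys or other secrets
```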
5. Dynamic Markers
Runtime placeholders that get substituted automatically:
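For example, a prompt footer using markers might look like this (the surrounding wording is illustrative; the markers themselves are listed in the table below):

```text
{mode_instructions}

Project: {project_name} ({project_description})
Current branch: {git_branch}
Available tools: {tool_list}
```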
Available Markers
System prompts support dynamic markers that are replaced at runtime with actual values:

| Marker | Replaced With | Example |
|---|---|---|
| `{mode}` | Current edit mode | `allow`, `ask`, `plan` |
| `{mode_instructions}` | Full mode-specific behavior instructions | [FULL EDIT MODE] You have full access... |
| `{project_name}` | Project display name | My E-Commerce App |
| `{project_description}` | Project description | React e-commerce application |
| `{project_path}` | Container project path | `/app` |
| `{git_branch}` | Current git branch | `feature/auth` |
| `{tool_list}` | Comma-separated available tools | `read_file, write_file, bash_exec` |
| `{timestamp}` | Current ISO timestamp | `2025-01-15T10:30:00` |
| `{user_name}` | User’s display name | Jane Smith |
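Conceptually, substitution is a simple find-and-replace over the prompt before it is sent to the model. The helper name and plain string replacement below are assumptions for illustration, not the platform’s actual implementation:

```python
# Hypothetical sketch: replace each {marker} placeholder with its
# runtime value. The real platform performs an equivalent step.

def substitute_markers(prompt: str, context: dict) -> str:
    """Replace each {marker} placeholder with its runtime value."""
    for marker, value in context.items():
        prompt = prompt.replace("{" + marker + "}", str(value))
    return prompt

template = "Project: {project_name} (branch: {git_branch}). Tools: {tool_list}."
print(substitute_markers(template, {
    "project_name": "My E-Commerce App",
    "git_branch": "feature/auth",
    "tool_list": "read_file, write_file, bash_exec",
}))
# -> Project: My E-Commerce App (branch: feature/auth). Tools: read_file, write_file, bash_exec.
```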
Mode Instructions
The `{mode_instructions}` marker is especially important. It injects behavior-specific instructions based on the current edit mode:
- Allow Mode
- Ask Mode
- Plan Mode
Always include `{mode_instructions}` in your system prompt. This ensures your agent respects the user’s chosen edit mode regardless of what other instructions say.

Example System Prompts
- Tailwind Specialist
- Backend API Developer
- Full-Stack Builder
Configuring Tool Access
By default, agents have access to all tools in the global registry. You can restrict this for focused or security-conscious agents.

Restricting to Specific Tools
Specify a tool list to limit what the agent can do.

Custom Tool Descriptions
You can override tool descriptions and examples for your agent. This helps the LLM understand how the tools should be used in your specific context.

Advanced: The Factory Pattern
For advanced users, understanding the factory pattern helps when building integrations or debugging agent creation.

How Agent Creation Works
The `create_agent_from_db_model` function is the central point for creating agents:
- Validates that the agent has a system prompt
- Looks up the agent type in `AGENT_CLASS_MAP` (StreamAgent, IterativeAgent, ReActAgent, TesslateAgent)
- Creates a scoped tool registry based on the agent’s `tools` field (or uses the global registry)
- Applies custom tool configurations if specified
- Instantiates the correct agent class with the system prompt, tools, and model adapter
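The steps above can be sketched as follows. The function name and `AGENT_CLASS_MAP` come from this page; the class body, dict-shaped agent model, and registry-as-dict are assumptions made for the sketch, not the platform’s actual code:

```python
# Illustrative sketch of the agent factory flow (not the real implementation).

class StreamAgent:
    def __init__(self, system_prompt, tools):
        self.system_prompt = system_prompt
        self.tools = tools

AGENT_CLASS_MAP = {"StreamAgent": StreamAgent}  # the real map also holds the other three types

def create_agent_from_db_model(agent_model, global_registry, tools_override=None):
    # 1. Validate that the agent has a system prompt.
    if not agent_model.get("system_prompt"):
        raise ValueError("agent has no system prompt")
    # 2. Look up the agent class for the declared type.
    agent_cls = AGENT_CLASS_MAP[agent_model["agent_type"]]
    # 3. Build the tool registry: override > scoped list > global.
    if tools_override is not None:
        registry = tools_override
    elif agent_model.get("tools"):
        registry = {name: global_registry[name] for name in agent_model["tools"]}
    else:
        registry = global_registry
    # 4. Instantiate the agent with its prompt and tools.
    return agent_cls(agent_model["system_prompt"], registry)

global_tools = {"read_file": object(), "write_file": object(), "bash_exec": object()}
agent = create_agent_from_db_model(
    {"agent_type": "StreamAgent",
     "system_prompt": "You are a helpful developer. {mode_instructions}",
     "tools": ["read_file"]},
    global_tools,
)
```

Note how the scoping priority from the next section falls out of the `if`/`elif`/`else` chain in step 3.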
Tool Registry Scoping Priority
When the factory creates a tool registry, it follows this priority:

- `tools_override` (highest): A pre-configured registry passed directly (used for view-scoped tools)
- `agent_model.tools`: A list of tool names from the database. The factory creates a scoped registry containing only those tools.
- Global registry (lowest): If no tool restriction is specified, the full registry is used.
View-Scoped Tools
Different frontend views provide different tool sets:

- Code view: Standard file and shell tools (`read_file`, `write_file`, `patch_file`, `bash_exec`, etc.)
- Graph view: Container management tools (`graph_start_container`, `graph_add_connection`, etc.)
Registering Custom Agent Types
For developers extending the platform, new agent types can be registered at runtime.

Selecting AI Models
For open-source agents, you can choose which LLM powers the agent:

- GPT-4
- Claude
- Qwen
- OpenRouter Models
Strengths: Excellent reasoning, strong TypeScript knowledge, good architecture decisions
Best for: Complex features, API integration, business logic, debugging
Cost: Higher
Model adapters handle the differences between providers. Tesslate supports OpenAI, Anthropic, and other LLM providers through LiteLLM, which provides a unified interface for all models.
Testing Custom Agents
Test Simple Requests
Start with basic tasks: “Create a Button component” or “List all files in src/”
Test Complex Scenarios
Try multi-step tasks: “Add authentication with JWT tokens, including login form, protected routes, and token refresh”
Verify Tool Usage
Check that the agent uses the correct tools and follows your system prompt guidelines.
Test Edge Cases
Try unusual requests to see how the agent handles ambiguity or conflicting instructions.
What to Check
Follows Guidelines
Does the agent use the libraries and patterns specified in your prompt?
Code Quality
Is the generated code clean, typed, and well-structured?
Consistency
Does the agent produce similar quality across different requests?
Error Handling
Does the agent handle failures gracefully and retry when appropriate?
Publishing to the Marketplace
Share your custom agent with the community:

Polish the Agent
Ensure the system prompt is clear, the agent handles common tasks well, and behavior is consistent.
Write Documentation
Create a detailed description: what the agent does, when to use it, which technologies it specializes in, and example use cases.
Set Details
Choose a category, add tags, set pricing (Free or Paid), and optionally upload screenshots.
Best Practices
Focus on One Specialty
An agent that does one thing well outperforms a generic one. A “React Form Builder” agent will produce better forms than a “General Purpose Developer” agent.
Always Include Mode Markers
Include `{mode_instructions}` in every system prompt. This ensures your agent respects edit mode settings, which is critical for user safety.

Scope Tool Access for Security
A documentation agent does not need `bash_exec`. A read-only analyzer does not need `write_file`. Restrict tools to what the agent actually needs.

Test with Real Tasks
Do not just test with toy examples. Use your agent for real work and iterate the prompt based on actual results.
Iterate the Prompt
Start simple, add rules as you find issues, and remove instructions that cause confusion or contradictions. Prompt engineering is iterative.
Provide Examples in the Prompt
If you want the agent to follow a specific pattern, include a code example directly in the system prompt. LLMs follow examples more reliably than abstract instructions.
Next Steps
Agent Types
Understand the four agent architectures
Using Agents
Practical guide to prompting and tool calls
Agent Marketplace
Browse and publish agents
AI Agents Overview
Core concepts and architecture