# Agent Configuration
Agents are the primary actors in the Aurite framework, responsible for executing tasks by interacting with tools and models. The agent configuration defines an agent's identity, its capabilities, and its behavior.
An agent configuration is a JSON or YAML object with a `type` field set to `"agent"`.
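For instance, a minimal agent definition in YAML might look like the following sketch (all component names here are illustrative placeholders, not built-in values):

```yaml
# Minimal agent definition (names are hypothetical examples)
type: agent
name: example-agent
description: A minimal example agent.
llm_config_id: my-llm          # references a separately defined llm component
mcp_servers:
  - my-tool-server             # references a separately defined mcp_server component
```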
## Configuration Location
Agent configurations can be placed in any directory specified in your project's `.aurite` file (e.g., `config/agents/`, `shared/agents/`). The framework will discover them automatically.
## Schema
The `AgentConfig` schema defines the structure of an agent configuration. The available fields are listed below, grouped by category for clarity.
These fields define the fundamental properties of the agent.
| Field | Type | Required | Description |
|---|---|---|---|
| `name` | `string` | Yes | A unique identifier for the agent. This name is used to reference the agent in workflows and commands. |
| `description` | `string` | No | A brief, human-readable description of what the agent does. |
| `llm_config_id` | `string` | No | The name of an `llm` component to use. This is the recommended way to assign an LLM, allowing for reusable configurations. |
| `system_prompt` | `string` | No | The primary system prompt for the agent. This can be overridden by the `system_prompt` in the `llm` block. |
| `mcp_servers` | `list[string]` | No (defaults to `[]`) | A list of `mcp_server` component names this agent can use. The agent gains access to all tools, prompts, and resources from these servers. |
| `config_validation_schema` | `dict` | No | A JSON schema for validating agent-specific configurations. |
These fields control which tools and resources the agent can access.
| Field | Type | Default | Description |
|---|---|---|---|
| `exclude_components` | `list[string]` | `None` | A list of component names (tools, prompts, resources) to explicitly exclude, even if provided by allowed `mcp_servers`. |
| `auto` | `boolean` | `false` | If `true`, an LLM dynamically selects the most appropriate `mcp_servers` at runtime based on the user's prompt. |
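As an illustration, the following fragment grants an agent access to a server while excluding one of its components (the server and component names are hypothetical):

```json
{
  "type": "agent",
  "name": "browsing-agent",
  "mcp_servers": ["web-server"],
  "exclude_components": ["delete_page"]
}
```

Here the agent can use everything `web-server` provides except the `delete_page` tool.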
These fields control the Large Language Model that powers the agent's reasoning.
| Field | Type | Default | Description |
|---|---|---|---|
| `model` | `string` | `None` | Override the model name (e.g., `"gpt-3.5-turbo"`). |
| `temperature` | `float` | `None` | Override the sampling temperature for the agent's LLM. |
| `max_tokens` | `integer` | `None` | Override the maximum token limit for responses. |
| `system_prompt` | `string` | `None` | Provide a more specific system prompt for this agent. |
| `api_base` | `string` | `None` | Custom API endpoint base URL for the LLM provider. |
| `api_key` | `string` | `None` | Custom API key for the LLM provider. |
| `api_version` | `string` | `None` | Custom API version for the LLM provider. |
| (other fields) | various | `None` | Any other provider-specific parameters supported by the LLM configuration. |
### LLM Overrides

Agent configurations can include LLM override fields (see the table above). These values replace the corresponding values in the LLM configuration referenced by `llm_config_id`, allowing agent-specific customization while still using a shared LLM configuration.
These fields fine-tune how the agent executes its tasks.
| Field | Type | Default | Description |
|---|---|---|---|
| `max_iterations` | `integer` | `50` | The maximum number of conversational turns before the agent stops automatically. This is a safeguard to prevent infinite loops. |
| `include_history` | `boolean` | `None` | If `true`, the entire conversation history is included in each turn. If `false` or `None`, the agent is stateless and sees only the latest message. |
## Configuration Examples
Here are some practical examples of agent configurations.
A basic agent that uses a centrally-defined LLM and has access to a set of tools.
```json
{
  "type": "agent",
  "name": "code-refactor-agent",
  "description": "An agent that helps refactor Python code by using static analysis tools.",
  "mcp_servers": ["pylint-server", "file-system-server"],
  "llm_config_id": "claude-3-opus",
  "system_prompt": "You are an expert Python programmer. You will be given a file and your goal is to refactor it to improve readability and performance.",
  "max_iterations": 10
}
```
This agent uses a base LLM configuration but overrides the model and temperature for its specific task.
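A sketch of such a configuration follows (the agent name, `llm_config_id`, and model values are illustrative). The `model` and `temperature` fields here replace the corresponding values from the referenced LLM configuration:

```json
{
  "type": "agent",
  "name": "creative-writer-agent",
  "description": "An agent for creative writing tasks.",
  "llm_config_id": "gpt-4-base",
  "model": "gpt-4-turbo",
  "temperature": 0.9,
  "system_prompt": "You are a celebrated novelist. Write vivid, engaging prose."
}
```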
This agent is configured to be stateful (`include_history` is `true`), allowing it to maintain context across multiple turns.
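A possible configuration for such a stateful agent (all names here are illustrative):

```json
{
  "type": "agent",
  "name": "support-chat-agent",
  "description": "A conversational support agent that remembers earlier turns.",
  "llm_config_id": "claude-3-opus",
  "mcp_servers": ["ticketing-server"],
  "include_history": true,
  "max_iterations": 20
}
```

With `include_history` enabled, each turn receives the full conversation so far rather than only the latest message.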
This agent includes a custom validation schema to ensure its configuration adheres to specific rules.
```json
{
  "type": "agent",
  "name": "data-validation-agent",
  "description": "An agent that validates data formats.",
  "mcp_servers": ["data-validator-server"],
  "llm_config_id": "gpt-3.5-turbo",
  "config_validation_schema": {
    "type": "object",
    "properties": {
      "input_format": { "type": "string" },
      "output_format": { "type": "string" }
    },
    "required": ["input_format", "output_format"]
  }
}
```