Documentation for the praisonaiagents.agent.agent module
The main class representing an AI agent with specific role, goal, and capabilities.
name: str
- Name of the agent
role: str
- Role of the agent
goal: str
- Goal the agent aims to achieve
backstory: str
- Background story of the agent
instructions: Optional[str] = None
- Direct instructions that override role, goal, and backstory when provided
llm: str | Any | None = 'gpt-4o'
- Language model to use
tools: List[Any] | None = None
- List of tools available to the agent
function_calling_llm: Any | None = None
- LLM for function calling
max_iter: int = 20
- Maximum iterations
max_rpm: int | None = None
- Maximum requests per minute
max_execution_time: int | None = None
- Maximum execution time
memory: bool = True
- Enable memory
verbose: bool = True
- Enable verbose output
allow_delegation: bool = False
- Allow task delegation
step_callback: Any | None = None
- Callback for each step
cache: bool = True
- Enable caching
system_template: str | None = None
- System prompt template
prompt_template: str | None = None
- Prompt template
response_template: str | None = None
- Response template
allow_code_execution: bool | None = False
- Allow code execution
max_retry_limit: int = 2
- Maximum retry attempts
respect_context_window: bool = True
- Respect context window size
code_execution_mode: Literal['safe', 'unsafe'] = 'safe'
- Code execution mode
embedder_config: Dict[str, Any] | None = None
- Embedder configuration
knowledge: List[str] | None = None
- List of knowledge sources (file paths, URLs, or text)
knowledge_config: Dict[str, Any] | None = None
- Configuration for knowledge processing
use_system_prompt: bool | None = True
- Use system prompt
markdown: bool = True
- Enable markdown
self_reflect: bool = True
- Enable self reflection
max_reflect: int = 3
- Maximum reflections
min_reflect: int = 1
- Minimum reflections
reflect_llm: str | None = None
- LLM for reflection
stream: bool = True
- Enable streaming responses from the language model
guardrail: Optional[Union[Callable[['TaskOutput'], Tuple[bool, Any]], str]] = None
- Validation for outputs
handoffs: Optional[List[Union['Agent', 'Handoff']]] = None
- Agents for task delegation
base_url: Optional[str] = None
- Base URL for custom LLM endpoints
reasoning_steps: int = 0
- Number of reasoning steps to extract
Alternative to role/goal/backstory. Provide concise instructions for the agent’s behaviour and purpose.
Enable streaming responses from the agent. When True, responses are streamed in real-time.
Output validation for the agent. Can be a function or natural language description of requirements.
List of agents this agent can hand off tasks to. Enables agent-to-agent delegation.
Custom LLM client instance. Overrides the default client configuration.
Advanced LLM configuration options. Merged with default configuration.
Mark agent as human-controlled. Useful for human-in-the-loop workflows.
Callback function for processing streamed chunks. Called when chunks are merged.
Knowledge base for the agent. Can be file paths, URLs, or Knowledge instance.
Named knowledge sources to use. References pre-configured knowledge collections.
Output reasoning steps. When True, agent explains its thought process.
Custom API endpoint URL. Override default provider endpoints.
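The guardrail signature documented above accepts either a natural-language string or a callable that receives a 'TaskOutput' and returns Tuple[bool, Any]. A minimal sketch of such a callable (the function name and the length rule are illustrative, not part of the library):

```python
from typing import Any, Tuple

def min_length_guardrail(task_output: Any) -> Tuple[bool, Any]:
    """Illustrative guardrail: reject outputs shorter than 50 characters.

    Returns (passed, result): on failure, the second element carries
    a message the agent can use to retry.
    """
    # TaskOutput objects typically expose the text on an attribute;
    # fall back to str() so the sketch also works on plain strings.
    text = getattr(task_output, "raw", None) or str(task_output)
    if len(text) < 50:
        return False, "Output too short; please elaborate."
    return True, task_output
```

The returned tuple lets the agent distinguish a validated result from a retry instruction.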
chat(self, prompt, temperature=0.2, tools=None, output_json=None)
- Chat with the agent
achat(self, prompt, temperature=0.2, tools=None, output_json=None)
- Async version of chat method
clean_json_output(self, output: str) → str
- Clean and extract JSON from response text
clear_history(self)
- Clear chat history
execute_tool(self, function_name, arguments)
- Execute a tool dynamically based on the function name and arguments
_achat_completion(self, response, tools)
- Async version of _chat_completion method
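clean_json_output is described above as extracting JSON from response text. An illustrative re-implementation of that idea (not the library's actual code) shows the typical steps: strip markdown fences, then fall back to the first brace-delimited span:

```python
import json
import re

def clean_json_output_sketch(output: str) -> str:
    """Illustrative sketch of extracting JSON from an LLM response."""
    # Remove ```json ... ``` fences if the model wrapped its answer.
    fenced = re.search(r"```(?:json)?\s*(.*?)```", output, re.DOTALL)
    if fenced:
        output = fenced.group(1)
    # Otherwise take the first {...} span, which usually holds the payload.
    match = re.search(r"\{.*\}", output, re.DOTALL)
    return match.group(0).strip() if match else output.strip()
```
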
The Agent class provides async support through the following methods:
achat
: Async version of the chat method for non-blocking communication
_achat_completion
: Internal async method for handling chat completions
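The non-blocking pattern achat enables can be shown with a stand-in stub, so the snippet runs without the package or an API key; substitute a real Agent in practice (StubAgent and its echo reply are hypothetical):

```python
import asyncio

class StubAgent:
    """Hypothetical stand-in mimicking the achat signature documented above."""
    async def achat(self, prompt, temperature=0.2, tools=None, output_json=None):
        await asyncio.sleep(0)  # simulate non-blocking I/O
        return f"echo: {prompt}"

async def main() -> str:
    agent = StubAgent()
    # await keeps the event loop free while the agent works.
    return await agent.achat("Hello!")

print(asyncio.run(main()))
```
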
Example usage:
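A sketch of constructing and chatting with an Agent, assuming the praisonaiagents package is installed and an LLM API key (e.g. OPENAI_API_KEY) is configured; parameter names follow the reference above, while the role/goal values and the prompt are illustrative:

```python
from praisonaiagents import Agent

# Build an agent from the documented parameters; self-reflection is on
# by default (self_reflect=True, max_reflect=3).
agent = Agent(
    name="Researcher",
    role="Senior Research Analyst",
    goal="Summarize recent AI developments",
    backstory="An analyst who distills technical news into short briefs.",
    llm="gpt-4o",
)

# chat() sends a prompt and returns the agent's response.
response = agent.chat("Give me a three-bullet summary of agentic AI.")
print(response)
```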