Output validation and quality assurance for tasks
Guardrails provide output validation and quality assurance for agent tasks, ensuring results meet specified criteria before being accepted.
Guardrails ensure task outputs meet quality and safety criteria through two complementary mechanisms: function guardrails and LLM guardrails.
Function guardrails provide programmatic validation:
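As an illustrative, framework-agnostic sketch, a function guardrail can be an ordinary callable that inspects the raw output and returns a success flag plus either the (possibly transformed) result or an error message. The validate_json_output name below is hypothetical; the tuple shape follows the convention described later in this section, but your framework's exact signature may differ.

```python
import json
from typing import Any, Tuple

def validate_json_output(task_output: str) -> Tuple[bool, Any]:
    """Function guardrail: accept the output only if it parses as JSON."""
    try:
        parsed = json.loads(task_output)
    except json.JSONDecodeError as exc:
        # (False, error message) signals a failed validation.
        return False, f"Output is not valid JSON: {exc}"
    # (True, result) accepts the output; `parsed` is the transformed result.
    return True, parsed
```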
LLM guardrails use natural language for validation:
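An LLM guardrail states the acceptance criteria in plain language and delegates the judgement to a model. The sketch below assumes nothing about a particular framework: ask_llm is a placeholder for whatever chat-completion call you have available, and the PASS/FAIL verdict format is an arbitrary choice for the example.

```python
from typing import Any, Callable, Tuple

def make_llm_guardrail(criteria: str,
                       ask_llm: Callable[[str], str]) -> Callable[[str], Tuple[bool, Any]]:
    """Build a guardrail that asks an LLM whether the output meets `criteria`."""
    def guardrail(task_output: str) -> Tuple[bool, Any]:
        prompt = (
            f"Validation criteria: {criteria}\n\n"
            f"Candidate output:\n{task_output}\n\n"
            "Reply with PASS if the output satisfies the criteria, "
            "otherwise reply with FAIL followed by a one-line reason."
        )
        verdict = ask_llm(prompt).strip()
        if verdict.upper().startswith("PASS"):
            return True, task_output
        # The model's explanation becomes the error message fed into a retry.
        return False, verdict
    return guardrail

# Usage (ask_llm is whatever completion function you already have):
# no_pii = make_llm_guardrail("The text must not contain personal data", ask_llm=my_llm_call)
```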
Configure retry behaviour for failed validations:
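The retry budget is usually configured alongside the task itself. The snippet below uses a hypothetical GuardedTask dataclass purely to illustrate the idea; in a real framework this is typically a max_retries parameter on the task or guardrail configuration.

```python
from dataclasses import dataclass
from typing import Any, Callable, Tuple

@dataclass
class GuardedTask:
    """Hypothetical task definition pairing a guardrail with a retry budget."""
    description: str
    guardrail: Callable[[str], Tuple[bool, Any]]
    max_retries: int = 3  # how many times to re-run the task after a failed validation

task = GuardedTask(
    description="Summarise the quarterly report in under 200 words",
    guardrail=lambda out: (True, out) if len(out.split()) <= 200
              else (False, "Summary exceeds 200 words"),
    max_retries=2,  # a small budget is usually enough for a simple, cheap-to-rerun task
)
```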
Combine multiple validation criteria:
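One straightforward way to combine criteria is to chain guardrails so that each check receives the (possibly transformed) result of the previous one and the first failure short-circuits. The chain_guardrails helper below is an illustrative sketch, not part of any particular library.

```python
from typing import Any, Callable, Sequence, Tuple

Guardrail = Callable[[Any], Tuple[bool, Any]]

def chain_guardrails(guardrails: Sequence[Guardrail]) -> Guardrail:
    """Run guardrails in order; stop at the first failure."""
    def combined(task_output: Any) -> Tuple[bool, Any]:
        result = task_output
        for check in guardrails:
            success, result = check(result)  # each check sees the previous result
            if not success:
                return False, result  # here `result` holds the error message
        return True, result
    return combined

# Usage: combined = chain_guardrails([validate_json_output, check_required_fields])
```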
Access detailed validation results:
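If a bare pass/fail is not enough, the validation step can record what happened for later inspection. The ValidationResult structure below is a hypothetical example of the kind of detail worth capturing, such as the error message, the number of retries used, and which checks failed.

```python
from dataclasses import dataclass, field
from typing import Any, List, Optional

@dataclass
class ValidationResult:
    """Hypothetical record of a guardrail run, kept for inspection after the task."""
    success: bool
    output: Any = None
    error: Optional[str] = None
    retries_used: int = 0
    failed_checks: List[str] = field(default_factory=list)

result = ValidationResult(success=False,
                          error="Summary exceeds 200 words",
                          retries_used=3,
                          failed_checks=["word_count"])
if not result.success:
    print(f"Validation failed after {result.retries_used} retries: {result.error}")
```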
Typical use cases include ensuring generated content is safe and appropriate, validating analysis results and reports, checking that generated code is safe and functional, and meeting regulatory and policy requirements.
Both function guardrails and LLM guardrails follow the same execution flow:
1. The agent executes the task and produces output.
2. The guardrail validates the output.
3. If validation fails, the task is retried (up to max_retries times).
4. If validation passes, the output is accepted and returned.
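A minimal sketch of that loop, assuming run_agent stands in for whatever actually executes the task and guardrail follows the tuple convention described below:

```python
from typing import Any, Callable, Tuple

def run_with_guardrail(run_agent: Callable[[], str],
                       guardrail: Callable[[str], Tuple[bool, Any]],
                       max_retries: int = 3) -> Any:
    """Execute a task, validate its output, and retry while validation fails."""
    last_error: Any = None
    for _ in range(max_retries + 1):         # first attempt plus max_retries retries
        output = run_agent()                 # 1. agent produces output
        success, result = guardrail(output)  # 2. guardrail validates it
        if success:
            return result                    # 4. accepted output is returned
        last_error = result                  # 3. otherwise keep the error and retry
    raise RuntimeError(f"Validation failed after {max_retries} retries: {last_error}")
```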
Function guardrails return a tuple of (success, result), where:

success: whether validation passed
result: the modified output on success, or an error message on failure
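For example, a guardrail might clean up the output and enforce a length limit, returning the trimmed text on success and a human-readable error on failure (the function name and limit below are illustrative):

```python
from typing import Any, Tuple

def trim_and_check_length(task_output: str, limit: int = 500) -> Tuple[bool, Any]:
    """Return (True, modified output) on success or (False, error message) on failure."""
    cleaned = task_output.strip()
    if len(cleaned) > limit:
        return False, f"Output is {len(cleaned)} characters; the limit is {limit}"
    return True, cleaned  # the trimmed text replaces the raw task output
```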
Set max_retries based on task complexity.