What Are Policies?

Policies are the rules that define which AI requests require human review. Velatir's AI-powered policy engine automatically evaluates every request against your active policies, determining compliance status, risk level, and whether human intervention is required.

How Policy Evaluation Works

When an AI request comes in, Velatir's policy engine:
  1. Analyzes the Request - Examines function name, arguments, AI explanation, and metadata
  2. Applies All Active Policies - Runs the request through every enabled policy
  3. Generates Assessments - Creates detailed evaluation for each policy
  4. Determines Actions - Decides auto-approval vs. human review based on results
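
As a rough illustration of this pipeline, the sketch below shows what an incoming request might carry and how it could be run through the active policies. The field names (`function_name`, `arguments`, `explanation`, `metadata`) and the `policy.assess` call are illustrative assumptions, not Velatir's actual schema or SDK.

```python
# Hypothetical shape of an incoming AI request; field names are
# illustrative assumptions, not Velatir's actual schema.
request = {
    "function_name": "send_customer_email",
    "arguments": {"recipient": "jane@example.com", "template": "refund_notice"},
    "explanation": "Customer requested a refund confirmation email.",
    "metadata": {"agent_id": "support-bot-7", "environment": "production"},
}

def evaluate(request, active_policies):
    """Run the request through every enabled policy and collect assessments."""
    assessments = [policy.assess(request) for policy in active_policies]
    # If any policy asks for human intervention, the request is held for review.
    needs_review = any(
        a["recommendation"] == "human_intervention_required" for a in assessments
    )
    return assessments, needs_review
```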

Policy Assessment Output

For each policy, the AI evaluator returns:

Compliance Status

Compliant: Request meets policy requirements
Non-Compliant: Request violates policy or needs review

Risk Level

Low: Minimal risk, standard logging
Medium: Some risk, may need disclaimers
High: Significant risk, human oversight required
Critical: Major violation, should be blocked
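
One way to picture the four levels is as an ordered scale mapped to escalating responses. The enum below is a sketch of that ordering, not Velatir's internal representation, and the cutoff in `requires_human` is an assumption for illustration.

```python
from enum import IntEnum

class RiskLevel(IntEnum):
    # Ordered so that higher values mean stricter handling.
    LOW = 1       # minimal risk, standard logging
    MEDIUM = 2    # some risk, may need disclaimers
    HIGH = 3      # significant risk, human oversight required
    CRITICAL = 4  # major violation, should be blocked

def requires_human(level: RiskLevel) -> bool:
    """Sketch: treat HIGH and above as needing human oversight."""
    return level >= RiskLevel.HIGH
```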

Confidence Score

0.0 - 1.0: How confident the AI is in its assessment
Higher scores mean more certain evaluations

Recommendation

Auto-Approve: Safe to proceed automatically
Human Intervention Required: Needs human approval
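
Putting the three signals together, a decision rule might look like the sketch below: auto-approve only when the request is compliant, the risk is acceptable, and the evaluator is sufficiently confident. The 0.8 confidence threshold and the risk cutoff are assumed values for illustration, not documented Velatir defaults.

```python
def decide(compliant: bool, risk_level: str, confidence: float) -> str:
    """Illustrative decision rule combining the assessment outputs.

    The 0.8 confidence threshold and the low/medium risk cutoff are
    assumptions for this sketch, not documented Velatir defaults.
    """
    if compliant and risk_level in ("low", "medium") and confidence >= 0.8:
        return "auto_approve"
    return "human_intervention_required"
```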

Assessment Details

Every policy evaluation includes:
  • Reason - Single clear sentence explaining the decision
  • Tags - Categorization labels (e.g., #PersonalData, #FinancialData)
  • Policy Version - Exact version used for the assessment
  • Evaluation Timestamp - When the assessment was performed
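
Taken together, a single policy assessment might serialize to something like the record below. The structure, field names, and values are illustrative rather than Velatir's exact output format.

```python
# Illustrative assessment record; structure and field names are assumptions,
# not Velatir's exact output format.
assessment = {
    "policy_id": "data-privacy",
    "policy_version": "2.3.0",
    "compliance_status": "non_compliant",
    "risk_level": "high",
    "confidence": 0.92,
    "recommendation": "human_intervention_required",
    "reason": "The request includes a customer email address without a documented consent record.",
    "tags": ["#PersonalData"],
    "evaluated_at": "2024-05-14T09:32:11Z",
}
```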

Built-in Policy Types

Advanced Features

Policy Conflict Detection

The system automatically detects when policies might conflict and suggests resolutions.
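
As a simplified picture of what a conflict means, the sketch below flags pairs of policies whose assessments of the same request point in opposite directions. Velatir's actual detection logic is more involved; this only illustrates the idea, using the assessment record shape assumed above.

```python
from itertools import combinations

def find_conflicts(assessments):
    """Flag policy pairs whose recommendations for the same request disagree.

    `assessments` is a list of dicts with "policy_id" and "recommendation"
    keys, as in the record sketched above. Purely illustrative.
    """
    conflicts = []
    for a, b in combinations(assessments, 2):
        if a["recommendation"] != b["recommendation"]:
            conflicts.append((a["policy_id"], b["policy_id"]))
    return conflicts
```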

What-If Simulations

Test how changes to policies would affect historical requests before implementing them.
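
Conceptually, a what-if simulation replays past requests against a proposed policy change and reports which outcomes would differ, along the lines of the sketch below. The `current_assess` and `proposed_assess` callables stand in for real policy evaluation; this is a conceptual sketch, not Velatir's simulation API.

```python
def simulate(historical_requests, current_assess, proposed_assess):
    """Compare outcomes of current vs. proposed policies over past requests.

    Both assess arguments are callables mapping a request to a
    recommendation string; purely illustrative.
    """
    changed = []
    for request in historical_requests:
        before, after = current_assess(request), proposed_assess(request)
        if before != after:
            changed.append({"request": request, "before": before, "after": after})
    return changed
```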

Industry Templates

Pre-built policy sets for common industries and compliance frameworks.