Overview

The Bias & Fairness policy detects AI interactions that may involve profiling, discrimination, or unfair treatment based on protected attributes, helping ensure equitable AI decision-making.

What It Detects

Protected Attributes

Decisions based on race, gender, age, religion, sexual orientation, or disability

Algorithmic Bias

AI systems showing systematic unfairness against specific groups
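
One common way systematic unfairness is quantified is the equal-opportunity difference: the gap in true-positive rates between groups. The sketch below is an illustration of that metric using hypothetical predictions and group labels, not this policy's internal scoring.

    # Equal-opportunity difference: gap in true-positive rates between
    # two groups. All data and group labels here are hypothetical.

    def true_positive_rate(predictions, labels):
        """Fraction of truly positive cases that were predicted positive."""
        positives = [(p, l) for p, l in zip(predictions, labels) if l == 1]
        return sum(p for p, _ in positives) / len(positives)

    def equal_opportunity_difference(predictions, labels, groups, a, b):
        """TPR of group a minus TPR of group b; 0 means equal treatment."""
        def group_tpr(g):
            pairs = [(p, l) for p, l, grp
                     in zip(predictions, labels, groups) if grp == g]
            preds, labs = zip(*pairs)
            return true_positive_rate(preds, labs)
        return group_tpr(a) - group_tpr(b)

    # Hypothetical screening model: qualified candidates (label 1) from
    # group B are approved less often than equally qualified ones from A.
    preds  = [1, 1, 1, 0, 1, 0, 0, 0]
    labels = [1, 1, 0, 0, 1, 1, 1, 0]
    groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
    print(equal_opportunity_difference(preds, labels, groups, "A", "B"))  # ~0.67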

Discriminatory Profiling

Automated categorization that may lead to unfair treatment

Disparate Impact

Outcomes that disproportionately affect protected groups
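
Disparate impact is commonly operationalized as the ratio of favorable-outcome rates between groups; under the "four-fifths rule" used in U.S. employment-selection guidelines, a ratio below 0.8 is a conventional red flag. A minimal sketch with hypothetical counts:

    # Disparate impact ratio: a protected group's selection rate divided
    # by the reference group's. Below 0.8 is the conventional red flag.
    # The applicant counts below are hypothetical.

    def disparate_impact_ratio(selected, total, selected_ref, total_ref):
        """Selection rate of one group relative to the reference group."""
        return (selected / total) / (selected_ref / total_ref)

    # 30 of 100 group-B applicants advanced, versus 50 of 100 from group A.
    ratio = disparate_impact_ratio(30, 100, 50, 100)
    print(f"ratio = {ratio:.2f}")            # ratio = 0.60
    print("flag" if ratio < 0.8 else "ok")   # flag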

Assessment Criteria

The AI evaluates each request for potential bias and fairness issues, grouped into three risk tiers; an illustrative sketch of how such a tier mapping might be encoded follows the lists below:

High Risk Scenarios

  • Employment screening and hiring decisions
  • Credit scoring and financial services
  • Healthcare treatment recommendations
  • Criminal justice risk assessments
  • Housing and accommodation decisions

Medium Risk Scenarios

  • Marketing personalization based on demographics
  • Content recommendation algorithms
  • Educational assessment tools
  • Customer service prioritization

Low Risk Scenarios

  • Anonymous content filtering
  • Technical performance optimization
  • Non-human-facing automation
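
How these tiers are encoded is implementation-specific; the sketch below is purely illustrative, with hypothetical category names and a hypothetical classify_risk helper rather than this policy's actual configuration.

    # Illustrative tier mapping; the categories and lookup helper are
    # hypothetical, not this policy's real configuration.

    RISK_TIERS = {
        "high": {
            "employment_screening", "credit_scoring",
            "healthcare_recommendation", "criminal_justice_assessment",
            "housing_decision",
        },
        "medium": {
            "demographic_marketing", "content_recommendation",
            "educational_assessment", "service_prioritization",
        },
        "low": {
            "anonymous_filtering", "performance_optimization",
            "non_human_facing_automation",
        },
    }

    def classify_risk(category: str) -> str:
        """Return the tier for a scenario category, defaulting to high."""
        for tier, categories in RISK_TIERS.items():
            if category in categories:
                return tier
        return "high"  # fail closed: unknown scenarios get strictest review

    print(classify_risk("credit_scoring"))          # high
    print(classify_risk("content_recommendation"))  # medium

Defaulting unrecognized categories to high risk is a conservative "fail closed" choice; an actual deployment might instead route such cases to human review.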