What is AI Bias?
AI bias is systematic, unjustified favoritism toward or discrimination against specific groups in an AI model's outputs. It typically stems from imbalances in the training data, labeling errors, or design assumptions made by the system's creators.
Types of bias
Data bias — the training set does not represent all groups equally.
Algorithmic bias — the model architecture or objective amplifies existing inequalities.
Deployment bias — the system is used in contexts it wasn't designed for.
Confirmation bias — the model reinforces a user's existing beliefs.
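A simple first check for data bias is to measure how each group is represented in the training set. The sketch below (with a hypothetical `group` field and toy records, not taken from any real dataset) computes each group's share of the data:

```python
from collections import Counter

def group_representation(samples, group_key):
    """Return each group's share of the training set."""
    counts = Counter(s[group_key] for s in samples)
    total = sum(counts.values())
    return {g: n / total for g, n in counts.items()}

# Hypothetical training records: group A outnumbers group B 3:1
data = [
    {"group": "A"}, {"group": "A"}, {"group": "A"},
    {"group": "B"},
]
shares = group_representation(data, "group")
print(shares)  # {'A': 0.75, 'B': 0.25}
```

A strongly skewed distribution like this does not prove the model will be biased, but it flags where performance should be evaluated per group rather than only in aggregate.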
Regulatory requirements
The EU AI Act requires bias assessment and mitigation for high-risk systems (Art. 10, data and data governance). Providers must document the training set composition, the fairness metrics used, bias testing procedures, and corrective mechanisms.
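One commonly documented fairness metric is the disparate impact ratio: the lowest group selection rate divided by the highest. The sketch below is a minimal illustration with made-up predictions (1 = positive outcome); the 0.8 threshold follows the "four-fifths rule" used in US employment practice, and is an example convention, not a requirement of the AI Act itself:

```python
def selection_rate(preds):
    """Fraction of positive decisions in a list of 0/1 predictions."""
    return sum(preds) / len(preds)

def disparate_impact(preds_by_group):
    """Min group selection rate divided by max group selection rate."""
    rates = {g: selection_rate(p) for g, p in preds_by_group.items()}
    return min(rates.values()) / max(rates.values())

# Hypothetical model decisions per group
preds = {"A": [1, 1, 1, 0], "B": [1, 0, 0, 0]}
ratio = disparate_impact(preds)
print(round(ratio, 3))  # 0.333 — below the 0.8 four-fifths threshold
```

Logging this ratio per protected attribute, alongside the raw per-group rates, is one concrete way to satisfy the "fairness metrics" item in the documentation list above.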