Enterprise & Governance

Explainable AI (XAI)

Techniques enabling understanding of why an AI model made a given decision — critical for trust, auditing, and AI Act compliance.

What is Explainable AI?

Explainable AI (XAI) is a collection of techniques for understanding and explaining why an AI model made a specific decision. Unlike a "black box," XAI provides insight into the model's reasoning process.

Why does the AI Act require explainability?

The AI Act (Art. 13) mandates transparency for high-risk systems — users must be able to understand how the system reached its decision. This applies to high-risk uses such as credit scoring, recruitment, and medical diagnostics. A lack of explainability therefore means regulatory non-compliance.

XAI techniques

SHAP — quantifies each feature's contribution to an individual prediction, based on Shapley values from game theory.
LIME — approximates the model locally with a simpler, interpretable model.
Attention maps — visualize what the model "looked at" in the input data.
Chain of Thought — explicit step-by-step reasoning for generative models.
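The core idea behind attribution techniques like SHAP and LIME can be sketched with a simple occlusion approach: replace one feature at a time with a neutral baseline value and measure how the prediction shifts. The toy credit-scoring model, feature names, and baseline values below are hypothetical illustrations, not output from the SHAP or LIME libraries.

```python
# Occlusion-style feature attribution: a minimal sketch of the idea behind
# SHAP/LIME, not an implementation of either library.

def credit_score(features):
    # Toy linear "model": a weighted sum of applicant features (hypothetical weights).
    weights = {"income": 0.5, "debt": -0.8, "years_employed": 0.3}
    return sum(weights[name] * value for name, value in features.items())

def attribute(model, instance, baseline):
    """For each feature, measure how the prediction changes when that
    feature is replaced by its baseline ("occluded") value."""
    full = model(instance)
    contributions = {}
    for name in instance:
        occluded = dict(instance, **{name: baseline[name]})
        contributions[name] = full - model(occluded)
    return contributions

applicant = {"income": 60, "debt": 40, "years_employed": 5}
baseline = {"income": 0, "debt": 0, "years_employed": 0}
print(attribute(credit_score, applicant, baseline))
# → {'income': 30.0, 'debt': -32.0, 'years_employed': 1.5}
```

Here the explanation reads directly: debt pushed the score down by 32 points while income pushed it up by 30 — exactly the kind of per-decision insight Art. 13 transparency calls for. Real SHAP additionally averages over feature orderings so contributions sum exactly to the prediction minus the baseline.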