Main takeaways
AI learns from human data, so it inherits human biases
Historical bias bakes past discrimination into models
If a group is missing from training data, the model fails for them
Seemingly neutral proxies (ZIP code, test scores) can encode inequality
The impossibility theorem: when groups have different base rates, no classifier can satisfy all common fairness criteria (calibration, equal true positive rates, equal false positive rates) at once
Choosing a fairness definition is a values question, not a technical one
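The impossibility point above can be made concrete with a small sketch based on Chouldechova's identity, which ties false positive rate to base rate, positive predictive value (PPV), and true positive rate (TPR): FPR = p/(1-p) · (1-PPV)/PPV · TPR. The group names and numbers below are hypothetical, chosen only to show that equal PPV and equal TPR force unequal FPR when base rates differ.

```python
def fpr(base_rate, ppv, tpr):
    """False positive rate implied by the identity
    FPR = p/(1-p) * (1-PPV)/PPV * TPR."""
    return base_rate / (1 - base_rate) * (1 - ppv) / ppv * tpr

# Two hypothetical groups, identical classifier performance
# (same PPV, same TPR) but different base rates of the outcome.
fpr_a = fpr(base_rate=0.3, ppv=0.8, tpr=0.7)  # 0.075
fpr_b = fpr(base_rate=0.5, ppv=0.8, tpr=0.7)  # 0.175

# Equal calibration + equal TPR, yet the higher-base-rate group
# is flagged falsely more than twice as often.
print(round(fpr_a, 3), round(fpr_b, 3))
```

The arithmetic makes the takeaway tangible: the mismatch is not a modeling bug but a consequence of the definitions, so equalizing one criterion must unbalance another.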