6.31 Classifier performance & fairness (2)

  • Algorithmic fairness can be assessed with respect to an input characteristic \(C\) (e.g., race, sex)
  • False positive parity…
    • …with respect to characteristic \(C\) is satisfied if the false positive rate for inputs with \(C = 0\) (e.g., black) is the same as the false positive rate for inputs with \(C = 1\) (e.g., white)
    • ProPublica found that the false positive rate for African-American defendants (i.e., the percentage of innocent African-American defendants classified as likely to re-offend) was higher than for white defendants (NO false positive parity)
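The false positive parity check above can be sketched in a few lines of Python. The data below is made-up toy data, not the actual COMPAS/ProPublica figures; it only illustrates how the per-group rates are compared:

```python
def false_positive_rate(y_true, y_pred):
    """FPR = FP / (FP + TN): fraction of actual negatives predicted positive."""
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    return fp / (fp + tn)

# Toy data (illustrative only): outcome 1 = re-offended,
# prediction 1 = classified likely to re-offend.
# The two lists per group partition the inputs by characteristic C.
y_true_c0 = [0, 0, 0, 0, 1]
y_pred_c0 = [1, 1, 0, 0, 1]   # 2 of 4 innocents flagged

y_true_c1 = [0, 0, 0, 0, 1]
y_pred_c1 = [1, 0, 0, 0, 1]   # 1 of 4 innocents flagged

fpr_c0 = false_positive_rate(y_true_c0, y_pred_c0)  # 0.5
fpr_c1 = false_positive_rate(y_true_c1, y_pred_c1)  # 0.25
# Unequal rates -> false positive parity is violated for this classifier.
```

With real data the comparison is the same: compute the FPR separately for \(C = 0\) and \(C = 1\) and check whether the two values (approximately) agree.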
  • Calibration…
    • …with respect to characteristic \(C\) is satisfied if an individual who was labeled “positive” has the same probability of actually being positive, regardless of the value of \(C\)
    • …and if an individual who was labeled “negative” has the same probability of actually being negative regardless of the value of \(C\)
    • The makers of COMPAS claim that it satisfies calibration!
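Checking calibration in the sense defined above amounts to comparing, per group, the probability that a "positive"-labeled individual is actually positive (and likewise for "negative"). The following is a minimal sketch with made-up toy data (not COMPAS data):

```python
def predictive_value(y_true, y_pred, label):
    """Among individuals the classifier labeled `label`, the fraction whose
    actual outcome really is `label` (PPV when label=1, NPV when label=0)."""
    labeled = [(t, p) for t, p in zip(y_true, y_pred) if p == label]
    return sum(1 for t, _ in labeled if t == label) / len(labeled)

# Toy data (illustrative only): outcome 1 = re-offended,
# prediction 1 = classified likely to re-offend; groups are C=0 and C=1.
y_true_c0 = [1, 1, 0, 0, 0, 1]
y_pred_c0 = [1, 1, 1, 0, 0, 0]

y_true_c1 = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1]
y_pred_c1 = [1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0]

ppv_c0 = predictive_value(y_true_c0, y_pred_c0, 1)  # 2/3
ppv_c1 = predictive_value(y_true_c1, y_pred_c1, 1)  # 2/3
npv_c0 = predictive_value(y_true_c0, y_pred_c0, 0)  # 2/3
npv_c1 = predictive_value(y_true_c1, y_pred_c1, 0)  # 2/3
# Equal PPV and equal NPV across groups -> calibration (as defined here) holds.
```

Note that calibration and false positive parity are different checks on the same confusion-matrix counts, which is why a classifier can satisfy one while violating the other.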