
Fairness and bias in Machine Learning

Thierry Silbermann, Tech Lead Data Science at Nubank

QCon 2019

A quick review of tools to detect biases in machine learning models

• thierry.silbermann@nubank.com.br

Data collection

• Today’s applications collect and mine vast quantities of personal information.

• The collection and use of such data raise two important challenges.

• First, massive data collection is perceived by many as a major threat to traditional notions of individual privacy.

• Second, the use of personal data for algorithmic decision-making can have unintended and harmful consequences, such as unfair or discriminatory treatment of users.


• Fairness is an increasingly important concern as machine learning models are used to support decision-making in high-stakes applications such as:

• Mortgage lending

• Hiring

• Prison sentencing

• (Approve customers, increase credit line)

Fairness

Definitions of fairness

http://fairware.cs.umass.edu/papers/Verma.pdf

• It is impossible to satisfy all definitions of fairness at the same time [Kleinberg et al., 2017].

• Although fairness research is a very active field, clarity on which bias metrics and bias mitigation strategies are best is yet to be achieved [Friedler et al., 2018]

• In addition to the multitude of fairness definitions, different bias-handling algorithms address different parts of the model life-cycle, and understanding how, when and why to use each research contribution is challenging even for experts in algorithmic fairness.

Tutorial: 21 fairness definitions and their politics: https://www.youtube.com/watch?v=jIXIuYdnyyk


Example: Prison sentencing

Tutorial: 21 fairness definitions and their politics: https://www.youtube.com/watch?v=jIXIuYdnyyk

                     Label low-risk    Label high-risk
Did not recidivate   True Negative     False Positive
Recidivated          False Negative    True Positive

Decision maker: Of those I’ve labeled high-risk, how many will recidivate? (positive predictive value)

Defendant: What’s the probability I’ll be incorrectly classified as high-risk? (false positive rate)

Society [think hiring rather than criminal justice]: Is the selected set demographically balanced? (demographic parity)

From the four cells of the confusion matrix, 18 derived scores/metrics can be computed: https://en.wikipedia.org/wiki/Confusion_matrix
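The decision maker’s and defendant’s questions from the prison-sentencing example map onto two of these metrics (PPV and FPR), and with different base rates they generally cannot both be equalized across groups, which is the heart of the impossibility result cited above [Kleinberg et al., 2017]. A minimal Python sketch with entirely hypothetical counts:

    def group_rates(tn, fp, fn, tp):
        """Return (PPV, FPR) for one group's confusion-matrix counts."""
        ppv = tp / (tp + fp)  # decision maker: of those labeled high-risk, how many recidivated?
        fpr = fp / (fp + tn)  # defendant: chance of being labeled high-risk despite not recidivating
        return ppv, fpr

    # Hypothetical counts for two groups with different base rates (~33% vs ~50%)
    ppv_a, fpr_a = group_rates(tn=800, fp=200, fn=100, tp=400)
    ppv_b, fpr_b = group_rates(tn=450, fp=300, fn=150, tp=600)

    print(f"PPV: A={ppv_a:.2f}  B={ppv_b:.2f}")  # equal predictive values (0.67 vs 0.67) ...
    print(f"FPR: A={fpr_a:.2f}  B={fpr_b:.2f}")  # ... yet unequal false positive rates (0.20 vs 0.40)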

Terminology

• Favorable label: a label whose value corresponds to an outcome that provides an advantage to the recipient.

• receiving a loan, being hired for a job, and not being arrested

• Protected attribute: an attribute that partitions a population into groups which should have parity in terms of benefit received.

• race, gender, religion

• Protected attributes are not universal, but are application specific

• Privileged value of a protected attribute: a value denoting a group that has historically been at a systematic advantage.

• Group fairness: the goal of groups defined by protected attributes receiving similar treatments or outcomes

• Individual fairness: the goal of similar individuals receiving similar treatments or outcomes

• Bias: systematic error.

• In the context of fairness, we are concerned with unwanted bias that places privileged groups at a systematic advantage and unprivileged groups at a systematic disadvantage.

• Fairness metric: a quantification of unwanted bias in training data or models.

• Bias mitigation algorithm: a procedure for reducing unwanted bias in training data or models.

But wait!

• I’m not using any feature that is discriminatory for my application!

• I’ve never used gender or even race!

• Even if gender and race are never used as features, correlated features such as ZIP code or neighborhood can act as proxies for them, as the demographic dot map of the Chicago area makes visually clear.

https://demographics.virginia.edu/DotMap/index.html (Chicago area, IL, USA)

Fairness metric

• Confusion matrix

• TP, FP, TN, FN, TPR, FPR, TNR, FNR

• Prevalence, accuracy, PPV, FDR, FOR, NPV

• LR+, LR-, DOR, F1

• Difference of Means

• Disparate Impact

• Statistical Parity

• Odds ratios

• Consistency

• Generalized Entropy Index

Statistical parity difference

• Group fairness == statistical parity difference == equal acceptance rate == benchmarking

• A classifier satisfies this definition if subjects in both protected and unprotected groups have equal probability of being assigned to the positive predicted class.

• For example, this would imply equal probability for male and female applicants to have a good predicted credit score (see the sketch below):

• P(d = 1 | G = male) = P(d = 1 | G = female)

• The main idea behind this definition is that applicants should have an equivalent opportunity to obtain a good credit score, regardless of their gender.
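A minimal Python sketch of that check on synthetic data; the acceptance probabilities below are invented purely to inject a gap between the groups:

    import numpy as np

    rng = np.random.default_rng(0)
    gender = rng.choice(["male", "female"], size=10_000)
    # Invented acceptance probabilities, deliberately biased by gender:
    y_pred = rng.binomial(1, np.where(gender == "male", 0.45, 0.35))

    p_male = y_pred[gender == "male"].mean()      # P(d = 1 | G = male)
    p_female = y_pred[gender == "female"].mean()  # P(d = 1 | G = female)

    spd = p_female - p_male  # statistical parity difference; 0 means parity
    print(f"P(d=1|male)={p_male:.3f}  P(d=1|female)={p_female:.3f}  SPD={spd:.3f}")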

Disparate impact

Predicted condition    X=0    X=1
FALSE                   A      B
TRUE                    C      D

The 80% test was originally framed by a panel of 32 professionals assembled by the State of California Fair Employment Practice Commission (FEPC) in 1971.

The 80% rule can then be quantified as:

DI = (C / (A + C)) / (D / (B + D)) ≥ 0.8

i.e. the selection rate of group X=0 must be at least 80% of the selection rate of group X=1 (a numeric sketch follows below).
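A minimal Python sketch of the 80% test, using hypothetical values for the four cell counts A, B, C and D from the table above:

    # Hypothetical cell counts: A, B = predicted FALSE; C, D = predicted TRUE
    A, B = 700, 500  # group X=0, group X=1
    C, D = 300, 500  # group X=0, group X=1

    rate_x0 = C / (A + C)  # selection rate for group X=0
    rate_x1 = D / (B + D)  # selection rate for group X=1

    di = rate_x0 / rate_x1
    print(f"disparate impact = {di:.2f} -> {'passes' if di >= 0.8 else 'fails'} the 80% test")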

Aequitas approach

https://dsapp.uchicago.edu/projects/aequitas/

How about some solutions?


Disparate impact remover

Prejudice remover regulariser

Additive counterfactually fair estimator

Optimised Preprocessing

Equalised Odds Post-processing

Relabelling

Reweighing

Reject Option Classification

Calibrated Equalised Odds Post-processing

Learning Fair representation

Adversarial Debiasing

Meta-Algorithm for Fair Classification

Tools

• There are three main paths to the goal of making fair predictions:

• fair pre-processing,

• fair in-processing, and

• fair post-processing

How about fixing predictions?

AIF360, https://arxiv.org/abs/1810.01943
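AIF360 is a Python toolkit, and a typical first step is to load a dataset and measure bias before any mitigation. Below is a minimal sketch, assuming the aif360 package is installed and the raw Adult Census files have been downloaded where its loader expects them; using 'sex' (privileged value 1 after the default encoding) as the protected attribute is an illustrative choice, not something prescribed by the talk.

    from aif360.datasets import AdultDataset
    from aif360.metrics import BinaryLabelDatasetMetric

    dataset = AdultDataset()            # Adult Census Income
    privileged_groups = [{'sex': 1}]    # encoded privileged group
    unprivileged_groups = [{'sex': 0}]  # encoded unprivileged group

    metric = BinaryLabelDatasetMetric(dataset,
                                      unprivileged_groups=unprivileged_groups,
                                      privileged_groups=privileged_groups)
    print("Statistical parity difference:", metric.statistical_parity_difference())
    print("Disparate impact:             ", metric.disparate_impact())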

Pre-Processing

• Reweighing (Kamiran & Calders, 2012) generates weights for the training examples in each (group, label) combination differently to ensure fairness before classification (see the sketch after this list).

• Optimized preprocessing (Calmon et al., 2017) learns a probabilistic transformation that edits the features and labels in the data with group fairness, individual distortion, and data fidelity constraints and objectives.

• Learning fair representations (Zemel et al., 2013) finds a latent representation that encodes the data well but obfuscates information about protected attributes.

• Disparate impact remover (Feldman et al., 2015) edits feature values to increase group fairness while preserving rank-ordering within groups.
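Continuing the AIF360 sketch above, a pre-processing pass with Reweighing could look like the following; `dataset` and the group definitions are reused from the earlier sketch, and the API is assumed from the aif360 package.

    from aif360.algorithms.preprocessing import Reweighing

    rw = Reweighing(unprivileged_groups=unprivileged_groups,
                    privileged_groups=privileged_groups)
    dataset_transf = rw.fit_transform(dataset)

    # Reweighing leaves features and labels untouched and only adjusts instance
    # weights, so downstream classifiers must support sample weights to benefit.
    print(dataset_transf.instance_weights[:10])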

In-Processing

• Adversarial debiasing (Zhang et al., 2018) learns a classifier to maximize prediction accuracy and simultaneously reduce an adversary’s ability to determine the protected attribute from the predictions. This approach leads to a fair classifier, as the predictions cannot carry any group discrimination information that the adversary can exploit.

• Prejudice remover (Kamishima et al., 2012) adds a discrimination-aware regularization term to the learning objective (see the sketch after this list).
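As a sketch of the in-processing path, AIF360 exposes the prejudice remover behind a fit/predict interface; the train/test split and the regularization strength eta below are arbitrary illustrative choices, and `dataset` comes from the earlier sketch.

    from aif360.algorithms.inprocessing import PrejudiceRemover

    train, test = dataset.split([0.7], shuffle=True)  # 70/30 split of the AIF360 dataset

    model = PrejudiceRemover(eta=25.0, sensitive_attr='sex')
    model.fit(train)                 # trains with the discrimination-aware regularizer
    test_pred = model.predict(test)  # dataset copy carrying predicted labels and scores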

Post-Processing

• Equalized odds post-processing (Hardt et al., 2016) solves a linear program to find probabilities with which to change output labels to optimize equalized odds.

• Calibrated equalized odds post-processing (Pleiss et al., 2017) optimizes over calibrated classifier score outputs to find probabilities with which to change output labels with an equalized odds objective.

• Reject option classification (Kamiran et al., 2012) gives favorable outcomes to unprivileged groups and unfavorable outcomes to privileged groups in a confidence band around the decision boundary with the highest uncertainty.
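A post-processing sketch with AIF360's RejectOptionClassification, reusing `test` and `test_pred` from the in-processing sketch; in a real pipeline the thresholds would be fit on a separate validation split rather than on test data.

    from aif360.algorithms.postprocessing import RejectOptionClassification

    roc = RejectOptionClassification(unprivileged_groups=unprivileged_groups,
                                     privileged_groups=privileged_groups,
                                     metric_name="Statistical parity difference")
    roc.fit(test, test_pred)                 # fit on a validation split in practice
    test_pred_fair = roc.predict(test_pred)  # labels flipped inside the uncertainty band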

Experiments

Datasets: Adult Census Income, German Credit, COMPAS

Metrics: Disparate impact, Statistical parity difference, Average odds difference, Equal opportunity difference

Classifiers: Logistic Regression (LR), Random Forest Classifier (RF), Neural Network (NN)

Pre-processing algorithms: Reweighing (Kamiran & Calders, 2012), Optimized pre-processing (Calmon et al., 2017), Learning fair representations (Zemel et al., 2013), Disparate impact remover (Feldman et al., 2015)

In-processing algorithms: Adversarial debiasing (Zhang et al., 2018), Prejudice remover (Kamishima et al., 2012)

Post-processing algorithms: Equalized odds post-processing (Hardt et al., 2016), Calibrated eq. odds post-processing (Pleiss et al., 2017), Reject option classification (Kamiran et al., 2012)

AIF360, https://arxiv.org/abs/1810.01943
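The four metrics in this grid can all be read off AIF360's ClassificationMetric, given a ground-truth dataset and a predictions dataset; the sketch below reuses names from the earlier sketches, but any such pair works the same way.

    from aif360.metrics import ClassificationMetric

    cm = ClassificationMetric(test, test_pred_fair,
                              unprivileged_groups=unprivileged_groups,
                              privileged_groups=privileged_groups)
    print("Disparate impact:             ", cm.disparate_impact())
    print("Statistical parity difference:", cm.statistical_parity_difference())
    print("Average odds difference:      ", cm.average_odds_difference())
    print("Equal opportunity difference: ", cm.equal_opportunity_difference())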

Results: Statistical Parity Difference (SPD)

SPD fair value is 0 (AIF360, https://arxiv.org/abs/1810.01943)

Results: Disparate Impact (DI)

DI fair value is 1 (AIF360, https://arxiv.org/abs/1810.01943)

Results

Adult Census dataset, protected attribute: race (AIF360, https://arxiv.org/abs/1810.01943)

Thank you

References

• Conferences

• ACM Conference on Fairness, Accountability, and Transparency (ACM FAT*) https://fatconference.org/

• IJCAI 2017 Workshop on Explainable Artificial Intelligence (XAI) http://home.earthlink.net/~dwaha/research/meetings/ijcai17-xai/

• Interpretable ML Symposium - NIPS 2017 http://interpretable.ml/

• Books

• https://fairmlbook.org/

• Course materials

• Berkeley CS 294: Fairness in machine learning

• Cornell INFO 4270: Ethics and policy in data science

• Princeton COS 597E: Fairness in machine learning

• Papers

• Fairness Definitions Explained: http://fairware.cs.umass.edu/papers/Verma.pdf

• AIF360: An Extensible Toolkit for Detecting, Understanding, and Mitigating Unwanted Algorithmic Bias https://arxiv.org/pdf/1810.01943.pdf

• Aequitas: A Bias and Fairness Audit Toolkit: https://arxiv.org/pdf/1811.05577.pdf

• FairTest: Discovering Unwarranted Associations in Data-Driven Applications: https://arxiv.org/pdf/1510.02377.pdf

• Videos

• Tutorial: 21 fairness definitions and their politics https://www.youtube.com/watch?v=jIXIuYdnyyk

• AI Fairness 360 Tutorial at ACM FAT* 2019 https://www.youtube.com/watch?v=XCFDckvyC0M
