
Machine Learning for Financial Data
March 2021
ETHICAL & PRIVACY CONSIDERATIONS

Contents
◦ Algorithmic Fairness
◦ Source of Bias
◦ Aequitas Discrimination & Bias Audit Toolkit
◦ Bias Mitigation
◦ Ethical Machine Learning
◦ Deon Data Science Ethics Checklist

Algorithmic Fairness

Ideally, we would use statistics that cleanly separate categories, but overlapping categories are the norm
Credit Score: higher scores represent higher likelihood of payback https://research.google.com/bigpicture/attacking-discrimination-in-ml/
◦ A single statistic can stand in for many different variables, boiling them down to one number
◦ In the case of a credit score, which is computed from a number of factors, including income, promptness in paying debts, etc., the number might correctly represent the likelihood that a person will pay off a loan, or default
◦ Or it might not
◦ The relationship is usually fuzzy – it is rare to find a statistic that correlates perfectly with real-world outcomes
An Ideal World
The Reality

People whose credit scores are below the cut-off threshold are denied the loan; people above it are granted the loan
Threshold Classifier

All paying applicants are granted the loan, but the number of defaulters getting the loan also increases
Maximized Customer Satisfaction

All defaulters are denied the loan, but a large number of paying applicants are also wrongly denied the loan
Minimized Default Risks

Optimal profit is attained using a threshold credit score of 54
Profit could be a driver of the threshold

Maximum correctness @ threshold 50
Maximum profit @ threshold 54

The statistic behind a score may be distributed differently across different groups
▪ The issue of how the correct decision is defined, and which factors it is sensitive to, becomes particularly thorny when a statistic like a credit score ends up distributed differently between two groups
▪ Imagine we have two groups of people: blue and orange
▪ We are interested in making small loans, subject to the following rules (a small profit-calculation sketch follows the list)
◦ A successful loan makes $300
◦ An unsuccessful loan costs $700
◦ Everyone has a credit score between 0 and 100
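To make the trade-off concrete, here is a minimal sketch (not from the slides: the synthetic scores and repayment behaviour are invented purely for illustration) that sweeps the loan threshold under the $300 / $700 rule above and reports both the most profitable and the most accurate cut-off. Because a default costs more than a repayment earns, the profit-maximizing threshold comes out higher than the accuracy-maximizing one, the same qualitative pattern as the maximum-correctness (50) versus maximum-profit (54) comparison on the earlier slide.

```python
import numpy as np

# Stated rules: a repaid loan earns $300, a defaulted loan costs $700,
# and every applicant has a credit score between 0 and 100.
GAIN_REPAID, LOSS_DEFAULT = 300, -700

def profit(scores, repaid, threshold):
    """Total profit if every applicant with score >= threshold gets a loan."""
    granted = scores >= threshold
    return GAIN_REPAID * np.sum(granted & repaid) + LOSS_DEFAULT * np.sum(granted & ~repaid)

def accuracy(scores, repaid, threshold):
    """Fraction of correct decisions: grant to payers, deny to defaulters."""
    granted = scores >= threshold
    return np.mean(granted == repaid)

# Hypothetical applicant pool: higher scores make repayment more likely.
rng = np.random.default_rng(0)
scores = rng.uniform(0, 100, size=5000)
repaid = rng.uniform(0, 100, size=5000) < scores   # boolean: True = pays back

thresholds = np.arange(0, 101)
profits = [profit(scores, repaid, t) for t in thresholds]
accuracies = [accuracy(scores, repaid, t) for t in thresholds]

print("most profitable threshold:", thresholds[int(np.argmax(profits))])
print("most accurate threshold:  ", thresholds[int(np.argmax(accuracies))])
```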

The two distributions are slightly different, even though blue and orange people are equally likely to pay off a loan


To maximize profit, the two groups have different thresholds, meaning they are held to different standards


Same threshold but orange has fewer loans overall. Among paying applicants, orange is also at a disadvantage.


Same proportion of loans given to each group but among paying applicants, blue is at a disadvantage


Same proportion of loans to paying applicants for each group, similar profit & grants as demographic parity


Group Unaware
▪ Fairness through unawareness: the group attribute is not used in the classification
▪ Holds all groups to one and the same standard
▪ Ignores real differences between groups, e.g., women generally pay less for life insurance than men since they tend to live longer
▪ Because the score distributions differ, the orange group gets fewer loans if the most profitable group-unaware threshold is used

Demographic Parity
▪ Also known as statistical parity or group fairness
▪ The same fraction of each group receives the intervention, i.e., the positive rate is the same for each group
▪ The bank uses different loan thresholds that yield the same fraction of loans to each group
▪ Similar individuals (having similar attribute values) in different groups may still be treated differently

Equal Opportunity
▪ The same chance for the positive cases in each group: the True Positive Rate (TPR) is identical between groups
▪ For people who can pay back a loan, the same fraction in each group should actually be granted a loan

(A small sketch contrasting these three threshold policies follows.)
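The following sketch contrasts the three policies operationally. Everything in it is hypothetical: the group names, score distributions, target rates, and calibration assumption are invented for illustration; a real system would estimate thresholds from validation data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two groups with different score distributions but identical score-to-repayment behaviour.
groups = {
    "blue":   {"scores": np.clip(rng.normal(55, 15, 4000), 0, 100)},
    "orange": {"scores": np.clip(rng.normal(45, 15, 4000), 0, 100)},
}
for g in groups.values():
    g["repaid"] = rng.uniform(0, 100, g["scores"].size) < g["scores"]

# 1. Group unaware: one common threshold for everybody.
common_threshold = 50.0

# 2. Demographic parity: per-group thresholds giving each group the same
#    acceptance (positive) rate, here 40%.
target_rate = 0.40
dp_thresholds = {name: np.quantile(g["scores"], 1 - target_rate)
                 for name, g in groups.items()}

# 3. Equal opportunity: per-group thresholds giving each group the same
#    true positive rate, i.e. the same chance of a loan for those who would repay.
target_tpr = 0.70
eo_thresholds = {name: np.quantile(g["scores"][g["repaid"]], 1 - target_tpr)
                 for name, g in groups.items()}

for name, g in groups.items():
    unaware_rate = np.mean(g["scores"] >= common_threshold)
    granted_eo = g["scores"] >= eo_thresholds[name]
    tpr = np.mean(granted_eo[g["repaid"]])
    print(f"{name:7s} group-unaware acceptance={unaware_rate:.2f}  "
          f"DP threshold={dp_thresholds[name]:.1f}  "
          f"EO threshold={eo_thresholds[name]:.1f} (TPR={tpr:.2f})")
```

Under the group-unaware threshold the orange group, whose scores sit lower, is accepted less often even though both groups repay at the same rate for a given score; the other two policies move the thresholds apart to equalize the chosen statistic.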

Why does fairness matter?
Regardless of one’s definition of fairness, everyone wants to be treated fairly
Ensuring fairness is a moral and ethical imperative

What is fairness anyway?
There are 20+ definitions of fairness. Some of the definitions are contradictory. The way fairness is defined impacts bias.

Data + Math ≠ Objectivity

Given essentially any scoring system, it is possible to efficiently find thresholds that meet any of the fairness criteria described earlier
In other words, even if you don't have control over the underlying scoring system (a common case), it is still possible to attack the issue of discrimination

Source of Bias

Most bias comes from the data used in classification. In the example below, the protected attribute (Gender) can induce direct bias, while the non-protected attributes (Spending Power, Number of Accidents, ZIP Code) can induce indirect bias; Premium is the prediction target.

Gender | Spending Power | Number of Accidents | ZIP Code | Premium
M      | 8              | 5                   | 06230    | ?
F      | 8              | 3                   | 92100    | ?
F      | 10             | 0                   | 75300    | ?
F      | 8              | 0                   | 13500    | ?
M      | 12             | 2                   | 45000    | ?
[Figure] Sources of bias: model bias, representation bias, sample bias, label bias.


Bias can be induced from sample representation
[Figure] Oversampling adds copies of the minority class to the original dataset; undersampling keeps only a sample of the majority class.
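For concreteness, a minimal sketch of naive random oversampling and undersampling with pandas (the column names and toy label counts are invented; libraries such as imbalanced-learn offer more robust implementations):

```python
import pandas as pd

# Imbalanced toy dataset: 8 majority-class rows (label 0), 2 minority-class rows (label 1).
df = pd.DataFrame({
    "feature": range(10),
    "label":   [0, 0, 0, 0, 0, 0, 0, 0, 1, 1],
})
majority = df[df["label"] == 0]
minority = df[df["label"] == 1]

# Oversampling: add copies of the minority class until the classes are balanced.
oversampled = pd.concat(
    [majority, minority.sample(len(majority), replace=True, random_state=0)]
)

# Undersampling: keep only a sample of the majority class.
undersampled = pd.concat(
    [majority.sample(len(minority), random_state=0), minority]
)

print(oversampled["label"].value_counts().to_dict())   # both classes now have 8 rows
print(undersampled["label"].value_counts().to_dict())  # both classes now have 2 rows
```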

Bias can be induced from sample collection, e.g., using biased sources



[Figure] Female Doctor




Fairness Terminology
Protected Attributes
An attribute that partitions a population into groups whose outcomes should have parity (e.g. race, gender, age, and religion).
Group Fairness
Groups defined by protected attributes receiving similar treatments or outcomes.
Fairness Metric
A measure of unwanted bias in training data or models.
Privileged Protected Attributes
A protected attribute value indicating a group that has historically been at a systematic advantage.
Individual Fairness
Similar individuals receiving similar treatments or outcomes.
Favorable Label
A label whose value corresponds to an outcome that provides an advantage to the recipient.

Removing the protected attributes may not be sufficient due to the problem of proxies
▪ A common concern with AI models is that they may create proxies for protected attributes: the complexity of the model can lead to group membership being used to make decisions in a way that cannot easily be found and improved
▪ If the attributes used in the model have a strong relationship with the protected attributes, spurious correlation or poor model building could lead to a proxy problem
▪ Measuring within-class disparities (differences in treatment that occur only for some members of a class) is much harder

Median household income could be a proxy for race
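One quick way to look for such proxies is to test how well the candidate feature alone recovers the protected attribute. The sketch below is a minimal illustration on synthetic data (the group labels, income distributions, and threshold rule are all invented); a high recovery rate suggests the feature could serve as a proxy even after the protected attribute itself is dropped.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
n = 5000

# Hypothetical population in which median household income differs by group.
race = rng.choice(["group_a", "group_b"], size=n)
income = np.where(race == "group_a",
                  rng.normal(75_000, 15_000, n),
                  rng.normal(55_000, 15_000, n))
df = pd.DataFrame({"race": race, "median_income": income})

# How well does a single income threshold recover group membership?
threshold = df["median_income"].median()
predicted_a = df["median_income"] >= threshold
agreement = np.mean(predicted_a == (df["race"] == "group_a"))
print(f"group membership recovered from income alone: {agreement:.0%}")
```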

Most of the research on the topic of bias and fairness in AI is about making sure that your system does not have a disproportionate effect on some group of users relative to other groups.
The primary focus of AI ethics is on distribution checks and similar analytics.

Aequitas Discrimination & Bias Audit Toolkit

Interest in algorithmic fairness and bias has been growing recently
▪ Machine Learning based predictive tools are being increasingly used in problems that can have a drastic impact on people’s lives
▪ e.g., criminal justice, education, public health, workforce development, and social services
▪ Recent work has raised concerns on the risk of unintended bias in these models, affecting individuals from certain groups unfairly

While a lot of bias metrics and fairness definitions have been proposed, there is no consensus on which definitions and metrics should be used in practice to evaluate and audit these systems

Aequitas audits the predictions of ML-based risk assessment tools to understand different types of biases
▪ The Aequitas toolkit is a flexible bias-audit utility for algorithmic decision-making models, accessible via Python API, command line interface (CLI), and through a web application
▪ Aequitas is used to evaluate model performance across several bias and fairness metrics, and to utilize the most relevant metrics in model selection
▪ Aequitas will help
• Understand where biases exist in the model(s)
• Compare the level of bias between groups in the samples (bias disparity)
• Visualize absolute bias metrics and their related disparities for rapid comprehension and decision-making

Aequitas audits the evidence of disparate representation and disparate errors
▪ Aequitas can audit risk assessment systems for two types of biases
▪ Disparate representation: biased actions / interventions that are not allocated in a way that is representative of the population
▪ Disparate errors: biased outcomes through actions or interventions that are the result of the system being wrong about certain groups of people
▪ To assess these biases, the following data are needed
• Data about the overall population considered for interventions along with the protected attributes that are to be audited (e.g., race, gender, age, income)
• The set of individuals in the target population that the risk assessment system recommended / selected for intervention or action
• Unseen data, not the training dataset
• To audit for biases due to disparate errors of the system, actual outcomes for the individuals who were selected and not selected are also required

Different bias and fairness criteria need to be used for different types of interventions
▪ Equal Parity: also known as Demographic Parity or Statistical Parity. Each group is represented equally among the selected set.
▪ Proportional Parity: also known as Impact Parity or minimizing Disparate Impact. Each group is represented in proportion to its representation in the overall population.
▪ False Positive Parity: each group has an equal False Positive Rate. Desirable when interventions are punitive.
▪ False Negative Parity: each group has an equal False Negative Rate. Desirable when interventions are assistive / preventative.

The fairness tree describes how to interpret and choose among the metrics:
▪ Do you want to be fair based on disparate representation OR based on disparate errors of your system?
  • Representation: do you need to select an equal number of people from each group, or proportional to their % in the overall population?
    ◦ Equal numbers → EQUAL SELECTION PARITY
    ◦ Proportional → DEMOGRAPHIC PARITY
  • Errors: do you trust the labels?
    ◦ No → COUNTERFACTUAL FAIRNESS
    ◦ Yes → are your interventions punitive or assistive?
      · Punitive (could hurt individuals): among which group are you most concerned with ensuring predictive equity?
        - Everyone, without regard for actual outcome → FP/GS PARITY
        - People for whom intervention is taken → FDR PARITY
        - People for whom intervention is not warranted → FPR PARITY
      · Assistive (will help individuals): can you intervene with most people with need or only a small fraction?
        - Only a small fraction → RECALL PARITY
        - Most people: among which group are you most concerned with ensuring predictive equity?
          - Everyone, without regard for actual need → FN/GS PARITY
          - People not receiving assistance → FOR PARITY
          - People with actual need → FNR PARITY

Preliminary Concepts
▪ Score, S ∈ [0, 1]: a real-valued score assigned to each entity by the predictor.
▪ Decision, Ŷ ∈ {0, 1}: a binary-valued prediction assigned to an entity.
▪ True Outcome, Y ∈ {0, 1}: a binary-valued label (ground truth) of an entity.
▪ Attribute, A = {a_1, a_2, ..., a_n}: a multi-valued attribute with multiple possible values, e.g., gender = {female, male, other}.
▪ Group, g(a_i): a group formed by all entities having the same attribute value, e.g., race = Asian.
▪ Reference Group, g(a_r): a reference group formed by all entities having the reference attribute value, e.g., gender = male.
▪ Labelled Positive, LP_g: number of entities within g(a_i) with a positive label, i.e., Y = 1.
▪ Labelled Negative, LN_g: number of entities within g(a_i) with a negative label, i.e., Y = 0.
▪ Predicted Positive, PP_g: number of entities within g(a_i) with a positive prediction, i.e., Ŷ = 1.
▪ Total Predicted Positive, K = Σ_{a_i ∈ A} PP_{g(a_i)}: total number of entities with a positive prediction across all groups g(a_i) formed by all possible attribute values of A.
▪ Predicted Negative, PN_g: number of entities within g(a_i) with a negative prediction, i.e., Ŷ = 0.
▪ False Positive, FP_g: number of entities within g(a_i) with a false positive prediction, i.e., Ŷ = 1 ∧ Y = 0.
▪ False Negative, FN_g: number of entities within g(a_i) with a false negative prediction, i.e., Ŷ = 0 ∧ Y = 1.
▪ True Positive, TP_g: number of entities within g(a_i) with a true positive prediction, i.e., Ŷ = 1 ∧ Y = 1.
▪ True Negative, TN_g: number of entities within g(a_i) with a true negative prediction, i.e., Ŷ = 0 ∧ Y = 0.

Basic Metrics
▪ Prevalence (Prev): Prev_g = LP_g / |g| = P(Y = 1 | A = a_i). Fraction of entities within g(a_i) with a positive label. "Given your race, what is your chance of being denied bail?"
▪ Predicted Prevalence (PPrev): PPrev_g = PP_g / |g| = P(Ŷ = 1 | A = a_i). Fraction of entities within g(a_i) with a positive prediction. "Given your race, what is your predicted chance of being denied bail?"
▪ Predicted Positive Rate (PPR): PPR_g = PP_g / K = P(A = a_i | Ŷ = 1). Ratio of the number of entities within g(a_i) with a positive prediction to the Total Predicted Positive over all groups g(a_i) formed by all possible attribute values of A. "Given the predicted denials of bail over all races, what is the chance of your race being denied bail?"
▪ Recall / True Positive Rate (TPR): TPR_g = TP_g / LP_g = P(Ŷ = 1 | Y = 1, A = a_i). Fraction of entities within g(a_i) with a positive label that also have a positive prediction. "Among people with need, what is your chance of receiving assistance given your gender?"

Basic Metrics (continued)
▪ False Negative Rate (FNR): FNR_g = FN_g / LP_g = P(Ŷ = 0 | Y = 1, A = a_i). Fraction of entities within g(a_i) with a positive label but a negative prediction. "Among people with need, what is your chance of not receiving any assistance given your gender?"
▪ False Positive Rate (FPR): FPR_g = FP_g / LN_g = P(Ŷ = 1 | Y = 0, A = a_i). Fraction of entities within g(a_i) with a negative label but a positive prediction. "Among people who should be granted bail, what is your chance of being denied bail given your race?"
▪ False Discovery Rate (FDR): FDR_g = FP_g / PP_g = P(Y = 0 | Ŷ = 1, A = a_i). Fraction of entities within g(a_i) with a positive prediction that are false. "Among people being denied bail, what is your chance of being innocent given your race?"
▪ False Omission Rate (FOR): FOR_g = FN_g / PN_g = P(Y = 1 | Ŷ = 0, A = a_i). Fraction of entities within g(a_i) with a negative prediction that are false. "Among people who do not receive assistance, what is your chance of requiring assistance given your gender?"

Basic Metrics (continued)
▪ False Positive over Group Size (FP/GS): FP/GS_g = FP_g / |g| = P(Ŷ = 1, Y = 0 | A = a_i). Ratio of the number of entities within g(a_i) with a wrong positive prediction to the number of entities within g(a_i). "What is your chance of being wrongly denied bail given your race?"
▪ False Negative over Group Size (FN/GS): FN/GS_g = FN_g / |g| = P(Ŷ = 0, Y = 1 | A = a_i). Ratio of the number of entities within g(a_i) with a wrong negative prediction to the number of entities within g(a_i). "What is your chance of being wrongly left out of assistance given your race?"
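The per-group counts and rates above are straightforward to compute directly. The sketch below is an illustrative implementation (not Aequitas' own code); it assumes a dataframe with a binary prediction column and a binary ground-truth column, mirroring the score / label_value layout used in the COMPAS example later.

```python
import pandas as pd

def group_metrics(df, attr, pred_col="score", label_col="label_value"):
    """Per-group confusion-matrix counts and the rates defined above (1 = positive)."""
    rows = []
    for value, g in df.groupby(attr):
        y, yhat = g[label_col], g[pred_col]
        tp = int(((yhat == 1) & (y == 1)).sum())
        fp = int(((yhat == 1) & (y == 0)).sum())
        fn = int(((yhat == 0) & (y == 1)).sum())
        tn = int(((yhat == 0) & (y == 0)).sum())
        n, lp, ln, pp, pn = len(g), tp + fn, fp + tn, tp + fp, fn + tn
        rows.append({
            attr: value,
            "Prev": lp / n,                                   # P(Y=1 | A=a)
            "PPrev": pp / n,                                  # P(Y_hat=1 | A=a)
            "TPR": tp / lp if lp else float("nan"),
            "FNR": fn / lp if lp else float("nan"),
            "FPR": fp / ln if ln else float("nan"),
            "FDR": fp / pp if pp else float("nan"),
            "FOR": fn / pn if pn else float("nan"),
            "FP/GS": fp / n,
            "FN/GS": fn / n,
        })
    return pd.DataFrame(rows)

# Tiny made-up example: two groups, binary decision, binary outcome.
toy = pd.DataFrame({
    "race":        ["a", "a", "a", "a", "b", "b", "b", "b"],
    "score":       [1, 1, 0, 0, 1, 0, 0, 0],
    "label_value": [1, 0, 0, 1, 1, 1, 0, 0],
})
print(group_metrics(toy, "race"))
```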

Unfair Disparities in COMPAS

COMPAS was reported to have unfair disparities and Northpointe pushed back: who is right and who is wrong?
▪ In 2016, ProPublica reported on racial inequality in COMPAS, a risk assessment tool
▪ The algorithm was shown to lead to unfair disparities in False Negative Rates and False Positive Rates
▪ In the case of recidivism, it was shown that black defendants faced disproportionately high risk scores, while white defendants received disproportionately low risk scores
▪ Northpointe, the company responsible for the algorithm, responded by arguing they calibrated the algorithm to be fair in terms of False Discovery Rate, also known as calibration
▪ The Bias Report provides metrics on each type of disparity, which add clarity to the bias auditing process

The COMPAS Dataset

COMPAS Recidivism Risk Assessment Dataset
score is the binary assessment made by the predictive model; 1 denotes an individual selected for the intervention
label_value is the binary-valued ground truth; 1 denotes a positive actual outcome (here, recidivism)
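With the data in this layout, a programmatic audit might look like the sketch below. It follows the documented usage of the Aequitas Python API (the Group, Bias and Fairness classes), but the file path and the reference groups are placeholders, and the exact signatures should be checked against the installed version.

```python
import pandas as pd
from aequitas.group import Group
from aequitas.bias import Bias
from aequitas.fairness import Fairness

# One row per individual: binary 'score' (model decision), binary 'label_value'
# (ground truth), plus the protected attribute columns to audit.
df = pd.read_csv("compas_for_aequitas.csv")   # placeholder path

# 1. Group-level crosstabs: counts and base metrics (FPR, FNR, FDR, ...) per group.
group = Group()
crosstab, _ = group.get_crosstabs(df)

# 2. Disparities: each group's metric divided by the reference group's metric.
bias = Bias()
disparities = bias.get_disparity_predefined_groups(
    crosstab,
    original_df=df,
    ref_groups_dict={"race": "Caucasian", "sex": "Male", "age_cat": "25 - 45"},  # illustrative
)

# 3. Fairness determinations: flag metrics whose disparity falls outside
#    the configured fair range (80%-125% by default).
fairness = Fairness()
fairness_df = fairness.get_group_value_fairness(disparities)
print(fairness_df.head())
```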

The Audit Process

http://aequitas.dssg.io/upload.html


Fair Parity: threshold ≤ disparity ≤ 1 / threshold (e.g., 0.8 ≤ disparity ≤ 1.25)
Fair: within 80~125% of the reference group metric value
Unfair: outside the 80~125% range of the reference group metric value
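In code, the parity test on a single disparity value is just a range check. A minimal sketch mirroring the 80~125% rule above (the helper function is ours, not part of Aequitas):

```python
def is_fair(disparity, tau=0.8):
    """True if the group-to-reference disparity lies within [tau, 1/tau] (80%-125% for tau=0.8)."""
    return tau <= disparity <= 1 / tau

print(is_fair(0.61))   # False: e.g. the FDR disparity of 0.61 shown later fails the test
print(is_fair(1.10))   # True: within 80~125% of the reference group
```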

The Bias Report


All groups in all attributes show disparity outside the 80~125% range, hence failing Equal Parity

All groups in all attributes show disparity outside the 80~125% range, hence failing Proportional Parity




Metric values for each group are provided

Only a few bias metric values fall within the 80~125% range
FDR Disparity (Asian) = FDR_Asian / FDR_Caucasian = 0.25 / 0.41 = 0.61

References

▪ "Big Data and Social Science – Data Science Methods and Tools for Research and Practice", 2nd edition, Ian Foster, Rayid Ghani, Ron S. Jarmin, Frauke Kreuter and Julia Lane, Chapman and Hall/CRC, November 2020 (https://textbook.coleridgeinitiative.org/)
▪ Aequitas project website (http://www.datasciencepublicpolicy.org/projects/aequitas/)
▪ Aequitas GitHub page (https://dssg.github.io/aequitas/30_seconds_aequitas.html)
▪ "Dealing with Bias and Fairness in Data Science Systems", Pedro Saleiro et al., 2020 (https://www.youtube.com/watch?v=N67pE1AF5cM)
▪ Tutorial: "Fairness in Decision-Making with AI: a Practical Guide & Hands-On Tutorial using Aequitas", YouTube, October 2019 (https://www.youtube.com/watch?v=yOR71zBm3Uc)
▪ "Chapter 11 Bias and Fairness" in "Big Data and Social Science" (https://textbook.coleridgeinitiative.org/)
▪ "Aequitas: a Bias and Fairness Audit Toolkit", Pedro Saleiro et al., 2019 (https://arxiv.org/pdf/1811.05577.pdf)

Bias Mitigation

What is modifiable determines what mitigation algorithms can be used
Pre-Processing Algorithm
Bias mitigation algorithms applied to the training data
In-Processing Algorithm
Bias mitigation algorithms applied to a model during its training
Post-Processing Algorithm
Bias mitigation algorithms applied to the predicted labels

Bias mitigation can be applied at different phases of the machine learning pipeline
Pre-processing Algorithms (mitigate bias in the training data)
▪ Reweighing: modifies the weights of different training examples
▪ Disparate Impact Remover: edits feature values to improve group fairness
▪ Optimized Preprocessing: modifies training data features & labels
▪ Learning Fair Representation: learns fair representations by obfuscating information about protected attributes

In-processing Algorithms (mitigate bias in classifiers)
▪ Adversarial Debiasing: uses adversarial techniques to maximise accuracy & reduce evidence of protected attributes in predictions
▪ Prejudice Remover: adds a discrimination-aware regularization term to the learning objective
▪ Meta Fair Classifier: takes the fairness metric as part of the input & returns a classifier optimized for the metric

Post-processing Algorithms (mitigate bias in predictions)
▪ Reject-Option Classification: changes predictions from a classifier to make them fairer
▪ Calibrated Equalized Odds: optimizes over calibrated classifier score outputs that lead to fair output labels
▪ Equalized Odds: modifies the predicted label using an optimization scheme to make predictions fairer

(A small sketch of the reweighing idea follows.)
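As a concrete example from the pre-processing family, here is a minimal sketch of the reweighing idea (after Kamiran & Calders): each (group, label) combination is weighted so that, after weighting, group membership and outcome look statistically independent. The column names and toy data are invented; toolkits such as AIF360 provide production implementations.

```python
import pandas as pd

def reweighing_weights(df, group_col="gender", label_col="hired"):
    """Weight each row by P(A=a) * P(Y=y) / P(A=a, Y=y)."""
    n = len(df)
    weights = {}
    for a, by_group in df.groupby(group_col):
        for y, by_group_and_label in by_group.groupby(label_col):
            expected = (len(by_group) / n) * (df[label_col] == y).mean()
            observed = len(by_group_and_label) / n
            weights[(a, y)] = expected / observed
    return df.apply(lambda row: weights[(row[group_col], row[label_col])], axis=1)

# Hypothetical hiring data: the favourable outcome (hired = 1) is rarer for F.
toy = pd.DataFrame({
    "gender": ["M"] * 6 + ["F"] * 4,
    "hired":  [1, 1, 1, 1, 0, 0, 1, 0, 0, 0],
})
toy["weight"] = reweighing_weights(toy)
print(toy.groupby(["gender", "hired"])["weight"].first())
# Under-represented favourable outcomes (F, hired=1) get weights > 1 (here 2.0);
# over-represented ones (M, hired=1) get weights < 1 (here 0.75).
```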

Ethical Machine Learning

A fully autonomous car is transporting a human being (A) to its desired destination. Suddenly, in a twist of fate, some living being (B) appears on the road. The AI (i.e., the computer) that controls the vehicle (i.e., the machine) must come to a decision within a fraction of a second: take evasive action or continue straight ahead. If it does try to dodge B, the vehicle skids and hits a tree, A dies, and B survives. If not, A survives, but B dies. For simplification purposes, we shall assume that collateral damage is negligible or identical in both cases.

Deon
Data Science Ethics Checklist

▪ To help data scientists put data ethics into practice
▪ To evaluate considerations related to advanced analytics and machine learning applications from data collection through deployment
▪ To ensure that risks inherent to AI-empowered technology do not escalate into threats to an organization’s constituents, reputation, or society more broadly
▪ To provide concrete, actionable reminders to the developers that have influence over how data science gets done
▪ A lightweight, open-source command line tool that facilitates integration into ML workflows

Ethical considerations arise at different phases of the data science pipeline, from data collection through deployment
▪ Data Collection: Informed Consent, Collection Bias, Limiting PII Exposure, Downstream Bias Mitigation
▪ Data Storage: Data Security, Right to be Forgotten, Data Retention Plan
▪ Analysis: Missing Perspective, Dataset Bias, Honest Representation, Privacy in Analysis, Auditability
▪ Modeling: Proxy Discrimination, Fairness Across Groups, Metric Selection, Explainability, Communicating Bias
▪ Deployment: Redress, Roll Back, Concept Drift, Unintended Use

Data Collection Checklist

A. Data Collection
A.1
INFORMED CONSENT
If there are human subjects, have they given informed consent, where subjects affirmatively opt-in and have a clear understanding of the data uses to which they consent?
A.2
COLLECTION BIAS
Have we considered sources of bias that could be introduced during data collection and survey design and taken steps to mitigate those?
A.3
LIMIT PII EXPOSURE
Have we considered ways to minimize exposure of personally identifiable information (PII) for example through anonymization or not collecting information that isn’t relevant for analysis?
A.4
DOWNSTREAM BIAS MITIGATION
Have we considered ways to enable testing downstream results for biased outcomes (e.g., collecting data on protected group status like race or gender)?

Facebook uses phone numbers provided for two-factor authentication to target users with ads

Yes Facebook is using your 2FA phone number to target you with ads


▪ FB confirmed it in fact used phone numbers that users had provided it for security purposes to also target them with ads
▪ Specifically, a phone number handed over for two-factor authentication (2FA)
▪ 2FA is a security technique that adds a second layer of authentication to help keep accounts secure
A.1 Informed Consent

Low smartphone penetration areas contribute less to big data and consequently become digitally invisible
https://hbr.org/2013/04/the-hidden-biases-in-big-data
▪ Data fundamentalism is the notion that correlation always indicates causation and that massive data sets & predictive analytics always reflect objective truth
▪ Datasets are not objective, they are creations of human design
▪ We give numbers their voice, draw inferences from them, and define their
meaning through our interpretation
▪ Hidden biases in both the collection and analysis stages present considerable risks
▪ Biases are as important to the big-data equation as the numbers themselves
A.2 Collection Bias

Low smartphone penetration areas contribute less to big data and consequently become digitally invisible
https://hbr.org/2013/04/the-hidden-biases-in-big-data
▪ The greatest number of tweets (20 million) was generated from Manhattan around the strike of Hurricane Sandy, creating the illusion that Manhattan was the hub of the disaster
▪ Very few messages originated from more severely affected locations, such as Breezy Point, Coney Island, and Rockaway
▪ As extended power blackouts drained batteries and limited cellular access, even fewer tweets came from the worst-hit areas
▪ In fact, there was much more going on outside the privileged, urban experience of Sandy that Twitter data failed to convey
