Standards of Fairness for Disparate Impact Assessment of Big Data Algorithms
This paper describes and assesses several competing statistical standards of fairness for groups and individuals, including the mathematical conflict between predictive parity and equal error rates that requires organizations to choose which measure to satisfy. The choice between statistical concepts of group fairness and individual fairness recalls the dispute between those who think anti-discrimination laws aim at group-disadvantaging practices and those who think they target the arbitrary misclassification of individuals. Analysts who embrace statistical measures of group fairness, such as statistical parity and equal group error rates, aim to reduce the subordination of disadvantaged groups; data scientists who favor measures of individual fairness aim to avoid the arbitrary misclassification of individuals. Group fairness calls for analytics to pursue statistical parity or equal group error rates for protected groups, while individual fairness says analytics should aim only at accurate predictions. The goal of individual fairness is satisfied by equal accuracy in classification, while the goal of group fairness allows some sacrifice of accuracy to protect vulnerable groups.

To bring this normative dimension into sharper focus, the paper explores the extent to which the choice between the statistical concepts of individual and group fairness reflects a fundamental difference in attitude toward the principle that people are entitled to reap the rewards of their own talents and skills. The idea that similar people ought to be treated similarly, and its statistical counterpart in equal predictive accuracy, gain strength from the normative principle that rewards ought to be distributed according to talents and skills. The paper addresses this normative dimension by contrasting Robert Nozick's and John Rawls's approaches to rewarding talent, and argues for carving out an exception to the principle of basing rewards on merit in order to allow the use of group fairness measures. It also explores the extent to which relevant current Supreme Court decisions would permit designing or modifying algorithms to move toward statistical parity or equalized group error rates.
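The "mathematical conflict between predictive parity and equal error rates" mentioned in the abstract can be made concrete with a standard identity from the algorithmic-fairness literature. The sketch below is an illustrative gloss, not material drawn from the paper itself; the symbols (base rate p, PPV, FPR, FNR) are introduced here only for that purpose.

```latex
% Illustrative sketch (not from the paper): the standard identity linking a
% binary classifier's error rates, its positive predictive value (PPV), and
% the group base rate p = P(Y = 1).
%
%   FPR = false positive rate,  FNR = false negative rate.
%
% Starting from PPV = p(1-FNR) / [ p(1-FNR) + (1-p)FPR ] and solving for FPR:
\[
  \mathrm{FPR}
    \;=\;
  \frac{p}{1-p}
  \cdot
  \frac{1-\mathrm{PPV}}{\mathrm{PPV}}
  \cdot
  \bigl(1-\mathrm{FNR}\bigr).
\]
% If two groups have different base rates (p_A \neq p_B), a classifier with
% equal PPV (predictive parity) and equal FNR in both groups must have unequal
% FPRs; predictive parity and equal group error rates cannot all be satisfied
% at once, which is the forced choice the abstract refers to.
```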
Year of publication: 2018
Authors: MacCarthy, Mark
Publisher: [2018]: [S.l.] : SSRN
Availability: freely available
Extent: 1 online resource (79 p)
Type of publication: Book / Working Paper
Language: English
Notes: According to information from SSRN, the original version of the document was created on April 2, 2018
Other identifiers: 10.2139/ssrn.3154788 [DOI]
Source: ECONIS - Online Catalogue of the ZBW
Persistent link: https://www.econbiz.de/10012922887
Similar items by person
- Privacy as a Parameter of Competition in Merger Reviews / MacCarthy, Mark, (2020)
- MacCarthy, Mark, (2017)
- In Defense of Big Data Analytics / MacCarthy, Mark, (2018)