Bias does not equal bias: A socio-technical typology of bias in data-based algorithmic systems
This paper introduces a socio-technical typology of bias in data-driven machine learning and artificial intelligence systems. The typology is linked to the conceptualisations of legal anti-discrimination regulations, so that the concept of structural inequality, and therefore of undesirable bias, is defined accordingly. By analysing the controversial Austrian "AMS algorithm" as a case study, as well as examples in the contexts of face detection, risk assessment and health care management, this paper defines the following three types of bias: firstly, purely technical bias as a systematic deviation of the datafied version of a phenomenon from reality; secondly, socio-technical bias as a systematic deviation due to structural inequalities, which must be strictly distinguished from, thirdly, societal bias, which correctly depicts the structural inequalities that prevail in society. This paper argues that a clear distinction must be made between different concepts of bias in such systems in order to analytically assess these systems and, subsequently, inform political action.
Year of publication: 2021
Authors: Lopez, Paola
Published in: Internet Policy Review. Berlin: Alexander von Humboldt Institute for Internet and Society, ISSN 2197-6775. Vol. 10 (2021), no. 4, pp. 1-29
Publisher: Berlin: Alexander von Humboldt Institute for Internet and Society
Subject: Artificial intelligence | Machine learning | Bias
Similar items by subject
- Das, Sanjiv R., (2023)
- Kelley, Stephanie, (2022)
- Machine learning in credit risk : measuring the dilemma between prediction and supervisory cost
  Alonso, Andrés, (2020)