The project investigates whether, and if so to what extent, the established philosophical discourse on discrimination is applicable to new forms of disadvantage caused by artificial intelligence methods.
Some problems that arise in the context of AI-assisted decision-making, e.g., through the use of biased training data, can be captured well by established concepts of discrimination. In addition, however, there are more complicated phenomena that give rise to novel forms of disadvantage. For example, in “redundant encoding,” specially protected characteristics, such as ethnicity, correlate so closely with supposedly unproblematic data, such as zip code, that the use of the supposedly unproblematic data results in protected groups being disadvantaged. Furthermore, the use of AI may lead to the systematic disadvantage of “random groups,” such as people with certain shopping behaviors that correlate with non-payment. Here, it needs to be clarified to what extent the conventional understanding of discrimination can be detached from classical characteristics such as skin color, gender, or age.
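To make the “redundant encoding” problem concrete, the following minimal sketch (in Python, with entirely synthetic data; the group labels, area shares, and decision rule are illustrative assumptions, not taken from the project) shows how a decision rule that never references a protected attribute can still disadvantage a protected group when a proxy variable such as zip code correlates strongly with group membership.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Synthetic population (all numbers are illustrative assumptions):
# membership in a protected group correlates strongly with residential
# area, e.g., as a result of segregation.
group = rng.integers(0, 2, size=n)           # protected attribute: 0 or 1
p_area_b = np.where(group == 1, 0.85, 0.15)  # P(lives in area "B" | group)
area = np.where(rng.random(n) < p_area_b, "B", "A")

# A decision rule that never looks at the protected attribute:
# deny all applicants from area "B" (say, due to observed default rates).
denied = area == "B"

# Although "group" is not an input to the rule, the denial rates differ
# sharply between the groups, because the area acts as a proxy.
for g in (0, 1):
    print(f"group {g}: denial rate {denied[group == g].mean():.1%}")
```

The point of the sketch is that simply removing the protected attribute from the data does not remove the disadvantage: any feature that is statistically redundant with it can reproduce the same pattern.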
For all of the phenomena considered, the project will investigate how established notions of discrimination need to be adapted in order to adequately capture AI-based disadvantage. The project is located in the use case “Law” of the Manchot research group “Decision Making Using Artificial Intelligence Methods”.
Contact
Prof. Dr. Frank Dietrich
Philosophy
Prof. Dr. Frank Dietrich has held the Chair of Practical Philosophy at HHU-Düsseldorf since 2012. His research and teaching focus on political philosophy, the philosophy of law, and ethics.
In the context of DIID, he is concerned with the legitimacy of online participation procedures from the perspective of democratic theory, as well as with the protection of privacy.