Wednesday, March 16, 2022
From data to automatic systems. Study of the links between training data and predictive gender-bias in automatic speech recognition systems
Abstract:
 
Machine learning systems contribute to the reproduction of social inequalities, both through the data they use and through a lack of critical approaches, feeding a discourse on the "biases of artificial intelligence". This thesis aims to contribute to the collective reflection on the biases of automatic systems by investigating the existence of gender bias in automatic speech recognition (ASR) systems.
Thinking critically about the impact of such systems requires taking into account both the notion of bias (tied to the system's architecture and its data) and that of discrimination, which is defined at the level of each country's legislation. A system is considered discriminatory when it treats people differently on the basis of criteria defined as breaking the social contract. In France, sex and gender identity are among the 23 criteria protected by law.
Building on theoretical considerations of the notion of bias, in particular predictive (or performance) bias and selection bias, we propose a set of experiments to understand the links between selection bias in the training data and the predictive bias of the system. We base our work on the study of an HMM-DNN system trained on a French media corpus and an end-to-end system trained on audiobooks in English. We observe that a significant gender selection bias in the training data contributes only partially to the predictive bias of the ASR system, but that the latter nevertheless emerges when the speech data contain varied utterance situations and speaker roles. This work has also led us to question the representation of women in speech data and, more generally, to rethink the links between theoretical conceptions of gender and ASR systems.
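Predictive (or performance) bias in ASR is commonly quantified as a gap in word error rate (WER) between speaker groups. As a minimal illustration (not code from the thesis), the Python sketch below computes corpus-level WER per gender group from hypothetical utterance records; the record schema ("gender", "reference", "hypothesis") and the toy data are assumptions made for the example.

```python
# A minimal sketch (not from the thesis): measuring predictive
# (performance) bias as the gap in word error rate (WER) between
# gender groups. The utterance schema is hypothetical.

from collections import defaultdict

def word_edit_distance(ref, hyp):
    """Levenshtein distance over word tokens (substitutions,
    insertions, and deletions each cost 1)."""
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[len(ref)][len(hyp)]

def wer_by_gender(utterances):
    """Corpus-level WER per gender group: total word errors divided
    by total reference words within each group."""
    errors = defaultdict(int)
    words = defaultdict(int)
    for u in utterances:
        ref = u["reference"].split()
        hyp = u["hypothesis"].split()
        errors[u["gender"]] += word_edit_distance(ref, hyp)
        words[u["gender"]] += len(ref)
    return {g: errors[g] / words[g] for g in errors}

# Toy data: the gap between the two groups' WER is one simple
# indicator of predictive bias.
sample = [
    {"gender": "F", "reference": "bonjour à toutes", "hypothesis": "bonjour à toute"},
    {"gender": "M", "reference": "bonjour à tous", "hypothesis": "bonjour à tous"},
]
print(wer_by_gender(sample))  # {'F': 0.333..., 'M': 0.0}
```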
 