External author (not affiliated with inLab FIB): Sara García Arias, criminologist and specialist in gender and equality
Artificial Intelligence (AI) is increasingly present in decision-making across many aspects of our lives, from purchase recommendation algorithms to candidate screening in job interviews. Given the weight these systems carry in our society, we must ask whether, when these algorithms are built, not only ethical criteria are followed but also, more specifically, whether a gender approach is applied.
AI and biases: gender inequalities
Gender differences and sexist stereotypes pervade society, and AI systems inevitably feed on the resulting cognitive biases, rooted in prejudice, producing sexist automatic translations as well as sexist image search results (Otterbacher, Bates & Clough, 2017). These, however, are among the most obvious cases of biased datasets, and fixes are already being applied: Google Translate, for example, now renders an English word that carries no grammatical gender in both its feminine and masculine forms (Kuczmarski, 2018).
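To make the translation example concrete, here is a minimal sketch of a bias probe in Python. Everything in it is hypothetical: the mock outputs are fabricated stand-ins for what a biased system might return, not real Google Translate results, and the gender heuristic is deliberately crude.

```python
# Illustrative sketch of a translation-bias probe. All data here is
# fabricated; a real study would query an actual translation system.
# The technique: translate gender-neutral English occupation sentences
# into a gendered target language and count how often each occupation
# is rendered with masculine vs. feminine grammatical gender.

from collections import Counter

# Hypothetical Spanish outputs a biased system might return for the
# gender-neutral English input "The <occupation> is here."
mock_translations = {
    "nurse": ["La enfermera está aquí"] * 9 + ["El enfermero está aquí"],
    "engineer": ["El ingeniero está aquí"] * 9 + ["La ingeniera está aquí"],
}

def gender_of(sentence: str) -> str:
    """Crude heuristic: infer grammatical gender from the Spanish article."""
    return "feminine" if sentence.startswith("La ") else "masculine"

for occupation, outputs in mock_translations.items():
    counts = Counter(gender_of(s) for s in outputs)
    print(f"{occupation}: {dict(counts)}")

# A heavily skewed distribution (e.g. 'engineer' almost always masculine)
# signals a stereotype absorbed from the training corpus; Google's fix was
# to surface both the feminine and the masculine rendering instead of one.
```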
What would happen if we had a dataset that was representative of reality, free of explicit sexism and of cognitive biases, and still did not apply a gender approach? We would very likely find ourselves in a situation like Amazon's: the company was forced to cancel a project in which a recruitment AI penalized CVs that contained the word "woman". The algorithm had been trained on a dataset in which most candidates were male, so it learned to prioritize men over women.
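As an illustration of the mechanism behind the Amazon case, here is a minimal synthetic sketch using scikit-learn. The six CVs and their hiring labels are fabricated, and this is in no way Amazon's actual system; the point is only that a text classifier trained on historically imbalanced decisions attaches a penalty to the word itself.

```python
# Synthetic sketch: train a text classifier on historical hiring
# decisions where most hired candidates were men, then inspect what
# the model learned about the token "women". Data is fabricated.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

cvs = [
    "software engineer, men's chess club captain",    # hired
    "backend developer, men's rowing team",           # hired
    "machine learning engineer",                      # hired
    "data scientist, women's coding society chair",   # rejected
    "software engineer, women's chess club captain",  # rejected
    "frontend developer",                             # rejected
]
hired = [1, 1, 1, 0, 0, 0]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(cvs)
model = LogisticRegression().fit(X, hired)

# The coefficient for "women" comes out negative: the model has encoded
# the historical imbalance as a penalty on the word, not on skill.
for token in ("women", "men", "engineer"):
    idx = vectorizer.vocabulary_[token]
    print(f"weight({token!r}) = {model.coef_[0][idx]:+.2f}")
```

Note that simply deleting the explicit word would not fix the problem: correlated proxy terms in the text would absorb the same signal, which is why a gender approach has to be applied to the whole pipeline, not bolted on at the end.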
Three topics deserve closer attention:

- What is the gender approach?
- The problems of AI without a gender focus
- The integration of the gender perspective in the STEM world

Each of these is a research area in its own right, and we invite the STEM world (Science, Technology, Engineering and Mathematics) to engage, without fear, in applying the gender approach and in pursuing equality in general.
We know this change will not happen overnight, but it is our duty to build technology that is effective and fair for everyone. Becoming aware allows us to be part of the solution and to stop perpetuating discriminatory patterns that shape our present reality but not the world we want to build.
BIBLIOGRAPHY / REFERENCES
Institut Català de la Dona (2020). Dossier Dones i Treball.
Kuczmarski, J. (2018). Reducing gender bias in Google Translate. https://www.blog.google/products/translate/reducing-gender-bias-google-translate/
Otterbacher, J., Bates, J. & Clough, P. (2017). Competent Men and Warm Women: Gender Stereotypes and Backlash in Image Search Results. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, 6-11 May 2017, Denver, CO. Association for Computing Machinery, pp. 6620-6631. http://eprints.whiterose.ac.uk/111419/7/Exploring_bias_FINAL_6_toshare.pdf
Zhou, P., Shi, W., Zhao, J., Huang, K., Chen, M., Cotterell, R. & Chang, K. (2019). Examining Gender Bias in Languages with Grammatical Gender. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 5276-5284, Hong Kong, China. Association for Computational Linguistics. https://arxiv.org/pdf/1909.02224.pdf