The importance of creating an AI under the prism of equality

Authors: 

Authors external to inLab FIB: Sara García Arias, criminologist and specialist in gender and equality

Artificial Intelligence (AI) is increasingly present in decision-making in many aspects of our lives: from purchase recommendation algorithms to the selection of candidates in job interviews. Given the importance of AI systems in our society, it is necessary to ask whether, when these algorithms are created, not only ethical parameters are followed but also, more specifically, whether a gender approach is applied.
 

AI and biases: gender inequalities

Gender differences and sexist stereotypes pervade society, a fact that invariably leads AI to feed on cognitive biases based on prejudice and to produce sexist automatic translations as well as sexist image search engines. However, these are examples of the most obviously biased data sets, and solutions have already been applied: Google Translate, for example, now renders a word that is gender-neutral in English in both its feminine and masculine forms.

What would happen if we found a data set representative of reality, without explicit sexism or cognitive biases, and did not apply a gender approach? We would surely find ourselves facing a situation similar to Amazon’s, which was forced to cancel a project in which an AI recruitment system discriminated against CVs that included the word "woman" in their text. The algorithm was fed a data set in which the majority of candidates were male; it therefore learned to prioritize men over women.
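The Amazon case can be sketched in miniature. The numbers, variable names and scoring rule below are entirely hypothetical, invented for illustration (they are not Amazon's data or model): a naive scorer that simply mirrors historical hiring rates ends up penalizing CVs that mention the word "woman", because past hires were mostly men.

```python
# Illustrative sketch with synthetic numbers (not Amazon's actual data or
# model): a scorer that mirrors historical hiring rates penalizes CVs
# containing the word "woman", simply because past hires were mostly men.

# Hypothetical history: (cv_mentions_woman, was_hired) pairs
history = [(False, True)] * 70 + [(False, False)] * 20 + \
          [(True, True)] * 2 + [(True, False)] * 8

def hire_rate(records, mentions_woman):
    """Historical hiring rate for CVs with/without the word 'woman'."""
    outcomes = [hired for flag, hired in records if flag == mentions_woman]
    return sum(outcomes) / len(outcomes)

# A model reproducing these rates gives very different scores to each group
print(f"score without 'woman': {hire_rate(history, False):.2f}")  # 0.78
print(f"score with 'woman':    {hire_rate(history, True):.2f}")   # 0.20
```

Nothing in the rule mentions gender explicitly; the discrimination comes entirely from the composition of the historical data.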

That is why the gender perspective is essential from the very first moment of an algorithm’s creation, even when the data seem unbiased.
 

What is the gender approach?

When we speak of a gender perspective, we refer to the analysis and identification of social and cultural constructions between women and men that rest on a background of inequality and discrimination against women. The aim of applying the gender approach is not only to be aware of these differences but also to create and implement solutions that prevent them from being perpetuated.
 
One of the concepts that explains these inequalities is differential gender socialization: women and men learn and reproduce the roles and stereotypes of each gender. As a result, time and space are distributed differently depending on whether you are socialized as a woman or as a man.
 
This implies, for example, that women deal more with the private sphere –care and family– while men develop more in the public sphere. According to the Institut Català de la Dona (2020), women request leave to take care of their daughters, sons and/or relatives in 91.1% of cases, while men request it in 9.8% of cases.
 

The problems of AI without gender focus

How does this affect the configuration of an AI? This gender differentiation causes, for example, a significant percentage of women to hold part-time jobs (Institut Català de la Dona, 2020). If we trained an AI to look for the optimal candidate to fill a full-time job, then, with the data set it is trained on, the AI would end up ruling out women. Even with representative data, if the data have not been treated with a gender perspective, the AI will make discriminatory decisions.
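A minimal sketch of this proxy effect, under wholly invented numbers: the data set below contains no "gender" column at all, yet a naive rule built on a feature statistically correlated with gender (a past part-time contract, more common among women because of unequal care work) still excludes women at a much higher rate.

```python
# Hypothetical, synthetic pool of candidates: (gender, had_part_time_contract).
# The gender column exists only so we can measure the outcome; the filter
# below never looks at it.
candidates = [("F", True)] * 30 + [("F", False)] * 20 + \
             [("M", True)] * 8 + [("M", False)] * 42

# Naive rule for a full-time vacancy: discard anyone with a part-time history
selected = [(g, pt) for g, pt in candidates if not pt]

def selection_rate(pool, kept, gender):
    """Fraction of candidates of a given gender that survive the filter."""
    total = sum(1 for g, _ in pool if g == gender)
    return sum(1 for g, _ in kept if g == gender) / total

print(f"women selected: {selection_rate(candidates, selected, 'F'):.0%}")  # 40%
print(f"men selected:   {selection_rate(candidates, selected, 'M'):.0%}")  # 84%
```

Removing the gender column from the data is therefore not enough; the gender perspective has to be applied to the features themselves.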
 

Integration of the gender perspective in the STEM world

In order to avoid perpetuating these gender differences, all these variables must be taken into account with a gender approach from the very first moment of the creation of an AI algorithm. How can this be done?
 
First of all, in the short term, create multidisciplinary teams that bring engineering and the social sciences together, merging and complementing knowledge so that possible inequalities in the data set are identified and ethical, egalitarian algorithms can be created. Second, create non-discriminatory algorithms and audit and monitor them so that any bias can be detected. Finally, advocate for transparency and the disclosure of algorithms so that society can understand and question them.
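The auditing step can be made concrete. One common heuristic (not the only one, and the selection rates below are hypothetical outputs of a model under audit) is the "four-fifths rule" used in US employment-discrimination practice: flag the model if any group's selection rate falls below 80% of the best-treated group's rate.

```python
# Minimal audit sketch using the "four-fifths rule" heuristic: a group whose
# selection rate is under 80% of the highest group's rate is flagged.

def disparate_impact(rates):
    """Ratio of each group's selection rate to the best-treated group's rate."""
    best = max(rates.values())
    return {group: rate / best for group, rate in rates.items()}

# Hypothetical selection rates produced by a model under audit
rates = {"women": 0.35, "men": 0.60}
for group, ratio in disparate_impact(rates).items():
    status = "FLAG" if ratio < 0.8 else "ok"
    print(f"{group}: impact ratio {ratio:.2f} -> {status}")
```

Running such a check continuously, rather than once at deployment, is what turns auditing into the monitoring the text calls for.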
 
In the long term, on the one hand, insist on mainstreaming training and awareness of the gender perspective in engineering, so that data analysis and treatment are aimed at avoiding inequalities; on the other hand, broaden and diversify the world of engineering, specifically promoting women’s access to science.

Each of these topics is a research area in itself, and we invite the STEM world (Science, Technology, Engineering and Mathematics) to embrace, without fear, the application of the gender approach and equality in general.

We know that this change is not going to happen overnight, but it is our duty to create effective and fair technology for all people and the world. Becoming aware will allow us to be part of the solution and stop perpetuating discriminatory patterns that shape our reality but not the world that we want to build.


BIBLIOGRAPHY / REFERENCES 

Institut Català de la Dona (2020). Dossier Dones i Treball.

http://dones.gencat.cat/web/.content/03_ambits/Observatori/03_dossiers_estadistics/2020-Dossier-Dones-i-Treball.pdf

Kuczmarski, J. (2018). Reducing gender bias in Google Translate

https://www.blog.google/products/translate/reducing-gender-bias-google-translate/

Otterbacher, J., Bates, J. & Clough, P. (2017). Competent Men and Warm Women: Gender Stereotypes and Backlash in Image Search Results. In: Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, 6-11 May 2017, Denver, CO. Association for Computing Machinery, pp. 6620-6631. http://eprints.whiterose.ac.uk/111419/7/Exploring_bias_FINAL_6_toshare.pdf

Zhou, P., Shi, W., Zhao, J., Huang, K., Chen, M., Cotterell, R. & Chang, K. (2019). Examining gender bias in languages with grammatical gender. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5276–5284, Hong Kong, China. Association for Computational Linguistics. https://arxiv.org/pdf/1909.02224.pdf
