Addressing the gender bias in artificial intelligence and automation

Posted: April 11, 2020 at 7:07 pm


Twenty-five years after the adoption of the Beijing Declaration and Platform for Action, significant gender bias in existing social norms remains. For example, as recently as February 2020, the Indian Supreme Court had to remind the Indian government that its arguments for denying women command positions in the Army were based on stereotypes. And gender bias is not merely a male problem: a recent UNDP report entitled Tackling Social Norms found that about 90% of people (both men and women) hold some bias against women.

Gender bias and various forms of discrimination against women and girls pervade all spheres of life. Women's equal access to science and information technology is no exception. While the challenges posed by the digital divide and the under-representation of women in STEM (science, technology, engineering and mathematics) persist, artificial intelligence (AI) and automation are posing new challenges to achieving substantive gender equality in the era of the Fourth Industrial Revolution.

If AI and automation are not developed and applied in a gender-responsive way, they are likely to reproduce and reinforce existing gender stereotypes and discriminatory social norms. In fact, this may already be happening (un)consciously. Let us consider a few examples:

Despite the potential for such gender bias, the growing crop of AI standards does not adequately integrate a gender perspective. For example, the Montreal Declaration for the Responsible Development of Artificial Intelligence makes no explicit reference to integrating a gender perspective, while the AI4People's Ethical Framework for a Good AI Society mentions diversity/gender only once. Both the OECD Council Recommendation on AI and the G20 AI Principles stress the importance of AI contributing to reducing gender inequality, but provide no details on how this could be achieved.

The Responsible Machine Learning Principles do embrace bias evaluation as one of the principles. This siloed approach to gender is also adopted by companies like Google and Microsoft, whose AI Principles underscore the need to avoid creating or reinforcing unfair bias and to treat all people fairly, respectively. AI and automation companies should instead adopt a gender-responsive approach across all of their principles to overcome inherent gender bias. Google should, for example, embed a gender perspective in assessing which new technologies are socially beneficial or how AI systems are built and tested for safety.
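To make the idea of bias evaluation concrete, the sketch below shows one simple check a review team might run: comparing a model's selection rates for women and men and flagging large gaps. It is an illustrative Python example with invented candidate records and a commonly cited (but contested) 0.8 rule of thumb; it is not the method prescribed by the Responsible Machine Learning Principles or by any company mentioned above.

```python
# A minimal sketch of a bias evaluation step (illustrative only).
# The records and the 0.8 threshold below are hypothetical, not taken
# from any real recruitment system or from the principles cited above.
from collections import defaultdict

# Each record: (sex, model_decision), where decision 1 = shortlisted.
decisions = [
    ("female", 1), ("female", 0), ("female", 0), ("female", 1),
    ("male", 1), ("male", 1), ("male", 0), ("male", 1),
]

# Selection rate per group: share of candidates the model shortlists.
totals, selected = defaultdict(int), defaultdict(int)
for sex, decision in decisions:
    totals[sex] += 1
    selected[sex] += decision

rates = {sex: selected[sex] / totals[sex] for sex in totals}
print("Selection rates:", rates)

# Disparate-impact ratio: selection rate of the disadvantaged group
# divided by that of the advantaged group. A common (but contested)
# rule of thumb flags ratios below 0.8 for review.
ratio = min(rates.values()) / max(rates.values())
print(f"Disparate-impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Flag for review: outcomes differ substantially by sex.")
```

A fuller evaluation would of course look beyond selection rates, for example at error rates and at how the underlying data were collected.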

What should be done to address the gender bias in AI and automation? The gender framework for the UN Guiding Principles on Business and Human Rights could provide practical guidance to states, companies and other actors. The framework involves a three-step cycle: gender-responsive assessment, gender-transformative measures and gender-transformative remedies. The assessment should be able to respond to differentiated, intersectional, and disproportionate adverse impacts on women's human rights. The consequent measures and remedies should be transformative in that they should be capable of bringing change to patriarchal norms, unequal power relations, and gender stereotyping.

States, companies and other actors can take several concrete steps. First, women should be active participants, rather than mere passive beneficiaries, in creating AI and automation. Women and their experiences should be adequately integrated in all steps related to the design, development and application of AI and automation. In addition to proactively hiring more women at all levels, AI and automation companies should engage gender experts and women's organisations from the outset in conducting human rights due diligence.

Second, the data that informs algorithms, AI and automation should be sex-disaggregated; otherwise, women's experiences will not inform these technological tools, which in turn may continue to internalise existing gender biases against women. Moreover, even data related to women should be screened for inherent gender bias.
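As a minimal illustration of why disaggregation matters, the following Python sketch (with invented labels and predictions) shows how an aggregate accuracy figure can look acceptable while most of a model's errors fall on women; only the sex-disaggregated view exposes the gap.

```python
# Illustrative sketch of why sex-disaggregated evaluation matters.
# The labels and predictions are invented for demonstration only.
records = [
    # (sex, true_label, model_prediction)
    ("female", 1, 0), ("female", 1, 0), ("female", 0, 0), ("female", 1, 1),
    ("male", 1, 1), ("male", 1, 1), ("male", 0, 0), ("male", 0, 0),
]

def accuracy(rows):
    # Share of records where the model's prediction matches the true label.
    return sum(1 for _, label, prediction in rows if label == prediction) / len(rows)

# The aggregate figure looks acceptable...
print("Overall accuracy:", accuracy(records))

# ...but disaggregating by sex reveals that the errors fall mostly on women.
for group in ("female", "male"):
    group_rows = [r for r in records if r[0] == group]
    print(f"{group} accuracy:", accuracy(group_rows))
```

The same disaggregated check can be repeated for any metric a company already tracks, and for the intersectional groups discussed later in this piece.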

Third, states, companies and universities should plan for and invest in building women's capacity to make a smooth transition to AI and automation. This will require vocational and technical training both in education and in the workplace.

Fourth, AI and automation should be designed to overcome gender discrimination and patriarchal social norms. In other words, these technologies should be employed to address challenges faced by women such as unpaid care work, the gender pay gap, cyberbullying, gender-based violence and sexual harassment, trafficking, breaches of sexual and reproductive rights, and under-representation in leadership positions. Similarly, the power of AI and automation should be employed to enhance women's access to finance, higher education and flexible work opportunities.

Fifth, special steps should be taken to make women aware of their human rights and of the impact of AI and automation on those rights. Similar measures are needed to ensure that remedial mechanisms, both judicial and non-judicial, are responsive to gender bias, discrimination, patriarchal power structures, and asymmetries of information and resources.

Sixth, states and companies should keep in mind the intersectional dimensions of gender discrimination; otherwise their responses, despite good intentions, will fall short of using AI and automation to accomplish gender equality. Low-income women, single mothers, women of colour, migrant women, women with disabilities, and non-heterosexual women may all be affected differently by AI and automation and will have differentiated needs and expectations.

Finally, all standards related to AI and automation should integrate a gender perspective in a holistic manner, rather than treating gender as merely a bias issue to be managed.

Technologies are rarely gender neutral in practice. If AI and automation continue to ignore women's experiences or to leave women behind, everyone will be worse off.

This piece is part of a blog series focusing on the gender dimensions of business and human rights. The blog series is in partnership with the Business & Human Rights Resource Centre, the Danish Institute for Human Rights and OpenGlobalRights. The views expressed in the series are those of the authors. For more on the latest news and resources on gender, business and human rights, visit this portal.

