Artificial intelligence and gender equality: how can we make AI a force for inclusion, instead of division?
Although it may not seem so, artificial intelligence and gender equality are closely intertwined. In an interview for this year’s International Women’s Day looking at technology and innovation for gender equality, Nele Roekens, Legal Officer at Unia (Interfederal Centre for Equal Opportunities and Opposition to Racism) and Co-Chair of ENNHRI’s Working Group on Artificial Intelligence, reflects on this complex topic.
What is the link between artificial intelligence (AI) and gender equality?
The progress of AI relies on using massive amounts of data to train algorithms. But what if the data used reflects existing inequalities, such as gender inequality, rooted in society? Due to previous discrimination against women, historical databases can lack sufficiently gender-balanced data. When such databases are then used to train algorithms, this leads to equally biased decisions.
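To make the mechanism concrete, here is a minimal illustrative sketch, not a real hiring system: a naive model that simply learns approval frequencies from past decisions will reproduce whatever bias those decisions contain. All data and group labels below are invented for illustration.

```python
from collections import defaultdict

# Hypothetical historical records: (group, hired) pairs that encode past
# discrimination -- men were hired far more often than equally qualified women.
history = (
    [("m", True)] * 80 + [("m", False)] * 20
    + [("f", True)] * 30 + [("f", False)] * 70
)

def train(records):
    """Learn the historical hiring rate for each group."""
    counts = defaultdict(lambda: [0, 0])  # group -> [hired, total]
    for group, hired in records:
        counts[group][0] += int(hired)
        counts[group][1] += 1
    return {g: hired / total for g, (hired, total) in counts.items()}

def predict(model, group, threshold=0.5):
    """Recommend a candidate if their group's historical rate clears the bar."""
    return model[group] >= threshold

model = train(history)
# The model recommends men and rejects women with identical qualifications,
# purely because the training data reflects earlier discrimination.
print(predict(model, "m"))  # True
print(predict(model, "f"))  # False
```

Real systems are far more complex, but the failure mode is the same: the model optimises for fidelity to historical decisions, so historical bias becomes a learned rule.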
Such use of historical databases is currently extensive. With AI permeating almost all parts of our lives, including ones as fundamental as policing, migration management, and access to the labour market and housing, its impact is immense. Existing inequalities are being perpetuated and institutionalised.
While women are affected, people at the intersection of historically disadvantaged groups experience even more pronounced discrimination, for example women of colour, older women, and women with disabilities. This discrimination operates on both an individual and a collective level.
How can legal frameworks ensure that artificial intelligence helps, rather than hinders, gender equality?
Existing legislative frameworks are insufficient to meet specific AI-related challenges. This is widely recognised and legislators – including the European Commission and Council of Europe – are at an advanced stage of negotiating new legal frameworks.
To address gender equality properly, we need to use an intersectional lens. An algorithm’s output, and the resulting decision, will often not be based on a single characteristic such as gender or ethnic origin, but on a combination of several, and it will often be impossible to single out which characteristic mattered most in the decision-making process. If we do not recognise this when drafting frameworks, protection will be incomplete and, even worse, existing inequalities may be exacerbated.
Cases like the Amazon hiring tool and French Parcoursup system show how AI used to promote inclusion of vulnerable groups can still lead to discrimination. Almost ten years after the first of these scandals, we still need:
- Accuracy standards for AI that set maximum “error rates” (for example, how often a facial recognition algorithm incorrectly identifies a person) for the decision-making systems influencing our lives;
- Prohibitions on AI systems demonstrated to be harmful;
- Supervisory authorities with a strong mandate, resources and expertise, capable of monitoring and, if necessary, banning AI systems that pose an unacceptable risk to human rights, democracy and the rule of law.
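An accuracy standard of the kind described above could be audited mechanically. The sketch below is a hypothetical illustration, with invented numbers and group names: it computes a facial recognition system's misidentification rate per demographic group and flags any group whose rate exceeds an assumed regulatory ceiling.

```python
MAX_ERROR_RATE = 0.01  # hypothetical legal ceiling: at most 1% misidentifications

# Per-group outcomes: True means the system identified the person correctly.
# All figures are invented for illustration.
results = {
    "group_a": [True] * 995 + [False] * 5,   # 0.5% error
    "group_b": [True] * 960 + [False] * 40,  # 4.0% error
}

def error_rate(outcomes):
    """Fraction of identifications that were wrong."""
    return outcomes.count(False) / len(outcomes)

audit = {group: error_rate(outcomes) for group, outcomes in results.items()}
violations = [g for g, rate in audit.items() if rate > MAX_ERROR_RATE]

# A single aggregate rate would mask the disparity: the system fails
# group_b eight times more often, which a per-group standard catches.
print(violations)  # ['group_b']
```

Measuring error rates per group, rather than in aggregate, is what connects such a standard to the intersectional concerns raised earlier: an overall average can look acceptable while one group bears most of the errors.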
While some people worry that regulation and enforcement will hamper innovation, I disagree. Did making seat belts mandatory make cars worse? No – they became safer. Only by taking steps like the above can we make AI a driver of inclusion, not division, and provide effective redress for those harmed by AI.
What role do National Human Rights Institutions (NHRIs) have in ensuring AI is a positive force for gender equality?
NHRIs are crucial in ensuring a human rights-based approach to the use and development of AI. Given their role, expertise and mandate, cooperation with NHRIs is critical for guaranteeing strong, independent oversight of AI systems and their adherence to relevant standards. In turn, NHRIs can advise on the development of these standards.
As a bridge between civil society and governments, NHRIs are also uniquely positioned to encourage the public consultation we need to ensure that the huge impact of mass digital technologies is comprehensively monitored, debated, and addressed. How accurate do we want the AI systems that shape our lives to be? Are there some that we do not want to use at all?
Ultimately, we as NHRIs need the appropriate mandate and financial and technical means to conduct the vital work I mention above. Only then can we effectively protect and promote human rights for all in the digital age.