NHRI Academy 2022:
artificial intelligence and human rights

20-24 June, Tirana, Albania

About the event

The 9th edition of the NHRI Academy focused on one of the most important emerging human rights topics: artificial intelligence (AI). NHRI representatives from across Europe gathered to look at how to go about addressing challenges linked to AI within their mandates.

Over the course of the week, they learned about how they can:

  • assess the impact of AI on human rights;
  • use the NHRI mandate to tackle human rights issues related to AI;
  • take advantage of opportunities brought by digital technology and AI for more effective promotion and protection of human rights.

Alongside AI-related topics, training was also held on the independence of NHRIs, including mainstreaming the Paris Principles and prioritising gender in the work of NHRIs.

The NHRI Academy is an annual flagship event organised by ENNHRI and the OSCE Office for Democratic Institutions and Human Rights (ODIHR). It offers continuous professional development for NHRI staff and supports networking between NHRIs in the OSCE region, and between NHRIs, ODIHR, ENNHRI and other stakeholders and organisations. Read about previous NHRI Academies.

Trainers

Francesca Fanucci

Senior Legal Advisor, European Center for Not-For-Profit Law

Line Gamrath Rasmussen

Special Adviser on Good e-Governance, International Capacity-Building & Partnerships, The Danish Institute for Human Rights

Deniz Wagner

Adviser, Office of the OSCE Representative on Freedom of the Media

Julia Haas

Project Officer, Office of the OSCE Representative on Freedom of the Media

Evguenia Klementieva

Special Advisor, tech and human rights, The Danish Institute for Human Rights

Gabriel Almeida

Human Rights Officer, ENNHRI

Oleksandra Zub

Consultant, ENNHRI

Session summaries and materials

In this session, Gabriel Almeida from ENNHRI presented the UN Paris Principles and explained how they can serve as a guide for NHRIs’ work in the field of artificial intelligence (AI). Participants were introduced to the General Observations of GANHRI’s Sub-Committee on Accreditation and explored together their added value and concrete application in this field. As a result, participants learned to identify the international standards that underpin NHRIs’ work and make them unique actors in the field of AI and human rights.

Download the presentation. 

In this session, Oleksandra Zub from ENNHRI looked at how NHRIs can go about mainstreaming gender in their work. She opened by looking at four key terms: gender, gender equality, gender mainstreaming and gender specialisation. She then set out why gender mainstreaming is important for NHRIs, both in terms of their internal and external work, while linking this to AI.

Download the presentation on this topic.

In this introductory session, participants looked at facts and myths linked to artificial intelligence (AI) and considered how to solve the challenges identified.

Francesca Fanucci opened by clarifying that, although there is not a universally agreed definition, when AI is discussed it usually refers to automated decision-making or decision-supporting systems that operate via algorithms.

She continued by saying that currently AI is a way, via machine learning, to identify trends and patterns in datasets to achieve certain objectives and make decisions more efficiently and effectively. Participants learnt about forms of learning, for instance supervised, unsupervised and reinforcement learning – including via “deep learning” neural networks.
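
To make the forms of learning more concrete, here is a minimal supervised-learning sketch in Python, with invented data and scikit-learn as one common toolkit (the scenario is illustrative and not from the session materials):

```python
# Minimal sketch of supervised learning: the model is trained on
# labelled examples and then predicts labels for unseen cases.
from sklearn.tree import DecisionTreeClassifier

# Invented training data: each row is [age, income]; the labels record
# whether a past application was approved (1) or rejected (0).
X_train = [[25, 20000], [40, 55000], [35, 48000], [22, 18000]]
y_train = [0, 1, 1, 0]

model = DecisionTreeClassifier()
model.fit(X_train, y_train)  # "supervised": learns from the labels

# The trained model generalises the learned pattern to new inputs.
print(model.predict([[30, 50000]]))  # e.g. [1]
```

Unsupervised learning, by contrast, would receive only the inputs and look for structure (such as clusters) without any labels, while reinforcement learning would learn from rewards received through trial and error.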

At the same time, Francesca commented that AI cannot yet mimic certain types of complex intelligent human behaviour, such as interpersonal (“empathetic”) intelligence and speculative (“existential”) intelligence.

She added that some AI systems are designed only to perform tasks for specific purposes (“narrow AI”), while others are developed with a broad range of possible uses and purposes, both intended and unintended (“general AI”). Eventually, such systems may become capable of learning and applying their intelligence to solve any problem.

Looking at the question of whether non-biased AI can be developed, she remarked that AI is bound to have data bias, design bias, or both. Data bias arises from historical datasets, as times and contexts change; design bias arises when the coded algorithm reflects developer bias (conscious or unconscious). This prompted a further key question: should we use “synthetic datasets” (i.e., data created artificially to meet certain objectives) to remove all bias?
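
A small sketch can make data bias concrete: a model trained on historically skewed records reproduces that skew, even for candidates who are otherwise identical (the hiring scenario and data below are invented for illustration):

```python
# Illustrative sketch of data bias: a model trained on skewed history
# reproduces the skew. The scenario and data are invented.
from sklearn.linear_model import LogisticRegression

# Each row is [years_of_experience, group], where "group" stands for a
# protected attribute; past decisions favoured group 1.
X = [[5, 0], [6, 0], [4, 0], [5, 1], [6, 1], [7, 1]]
y = [0, 0, 0, 1, 1, 1]  # hired? Reflects historical preference, not merit.

model = LogisticRegression().fit(X, y)

# Two equally experienced candidates get different predictions, purely
# because the training data encoded past discrimination.
print(model.predict([[6, 0], [6, 1]]))  # typically [0 1]
```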

Francesca closed by considering if AI systems can (and should) always be explainable. She explored this while referring to the SyRI judgement in the Netherlands.

Download the presentation from this session.

During this session with Francesca Fanucci, participants examined the following fundamental question: do we need new rights for new technologies? As part of this, they considered whether existing human rights frameworks are sufficient to approach and address AI-related challenges and, if not, how to adapt them.

Following discussions, participants broadly agreed that there is no need to create new rights, although the creation of an international regulatory framework could bring consistency. At the same time, new definitions are needed within some existing rights to tackle the challenges that AI-related technologies bring in terms of their impact on human rights. Using the right to data protection as an example, a right not explicitly included in the original international norms protecting the right to privacy, participants explored the possibility of “extrapolating” rights (rearticulating/redefining/deriving them) from existing ones as a potential solution.

Participants agreed that NHRIs should be involved in the adoption of new AI legislation to ensure that human rights are taken into account and fully respected. Alongside this, NHRIs discussed seeking authoritative interpretation from judges to promote progressive case law/jurisprudence on technologies and human rights. They also highlighted the challenges this may pose, such as lengthy proceedings.

Download the presentation from this session.

Participants learned about recent international and regional initiatives to set standards and/or regulate AI, including from UNESCO, the Council of Europe (CoE), the OECD, and the EU. Many of the examples presented were soft-law instruments.

Considering the UNESCO Recommendation on the Ethics of AI, Francesca commented that it gives a framework of values and actions to guide states in their AI-related actions, and it helps tailor the interpretation of a legally binding human rights framework. But ethics, she said, cannot replace such a binding framework.

Looking at the Council of Europe, she introduced several Committee of Ministers Recommendations linked to AI and its human rights implications, while mentioning the Council of Europe’s legally binding modernised data protection framework (“Convention 108+”). While not on AI per se, it addresses the automatic processing of individuals’ personal data.

She then focused on the Commissioner for Human Rights’ Recommendation on human rights and AI. Among its 10 steps, it calls for member states to develop a procedure for public authorities to conduct human rights impact assessments on AI systems. Crucially, this should include meaningful external review – NHRIs are mentioned as possible partners.

Francesca also introduced the aspects of the EU’s General Data Protection Regulation linked to AI-related rights, particularly Article 22 and the right not to be subject to automated decisions. Examining whether the GDPR is enough to protect fundamental rights in the context of AI, a number of possible gaps were identified.

The session concluded by looking at the relationship between EU and national AI strategies, with Francesca pointing out that NHRIs and civil society are not involved in the drafting of national legislation in this area.

Download the presentation. Find the full list of relevant standards in the “additional resources” section further down the page.

In this session, Francesca Fanucci gave an overview of ongoing regulatory initiatives at regional level. She opened with the Council of Europe’s ideas for possible binding and non-binding regulatory frameworks. These would apply to the development, design, and application of AI systems by both public and private actors: a risk classification methodology would emphasise human rights, democracy, and the rule of law.

Moving on to the original proposal for an EU AI Act, she set out its risk-based approach to assessment and its four prohibited types of AI, and argued that its compliance obligations focus on AI systems’ providers. She also raised the Act’s limitations: it lacks a complaint mechanism; foresees no obligation to carry out fundamental rights impact assessments (focusing instead on “conformity assessments” based on existing EU legislation); and includes neither individual nor collective rights of redress and remedy. In addition, existing large-scale EU IT systems are exempt from the Act, and those listed are mostly linked to EU justice, home affairs, and security.

Eric Töpfer from the German Institute for Human Rights, the German NHRI, then discussed his institution’s work to raise awareness of digital rights. He opened with the recent German and EU context concerning AI’s use in asylum and migration, for instance biometrics, and looked at data exchange practices. In response to related challenges, the German NHRI has presented an annual report chapter on refugees’ digital rights to the German parliament and organised a multi-stakeholder conference on the digital rights of migrants.

  • Download Francesca Fanucci’s presentation from this session.
  • Download Eric Töpfer’s presentation from this session.  

In this session, Francesca Fanucci introduced two approaches to conducting a human rights impact assessment (HRIA): a risk-based and a rights-based approach. A risk-based approach first identifies a situation that could pose a potential threat – not necessarily a human rights-related one – and then assesses its scope and scale. In contrast, a rights-based approach first identifies how a set of rights might be impacted, regardless of the level of risk, and then assesses that risk. The EU AI Act uses a risk-based approach, while the GDPR uses a rights-based one. She then described how the proposed Council of Europe Human Rights, Democracy and Rule of Law assessment model could represent a hybrid approach: it would identify relevant risks for these areas and the likelihood and severity of their effects, involve a governance assessment, and examine mitigation and evaluation.
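
The difference between the two orderings can be sketched schematically (the rights list, risk scale and threshold below are invented for illustration and are not taken from the AI Act, the GDPR or the Council of Europe model):

```python
# Schematic contrast of the two assessment orderings described above.
RIGHTS = ["privacy", "non-discrimination", "freedom of expression"]
HIGH_RISK_THRESHOLD = 3  # hypothetical risk scale from 1 to 4

def risk_based_assessment(system):
    """Start from the risk level; rights scrutiny follows only for
    systems classified above the threshold."""
    if system["risk_level"] < HIGH_RISK_THRESHOLD:
        return "no further rights assessment required"
    return {right: "assess impact" for right in RIGHTS}

def rights_based_assessment(system):
    """Start from the rights: every potentially affected right is
    examined, and likelihood and severity are assessed per right."""
    return {right: "assess likelihood and severity" for right in RIGHTS}

system = {"name": "profiling tool", "risk_level": 2}
print(risk_based_assessment(system))    # may stop early for low-risk systems
print(rights_based_assessment(system))  # always examines each right
```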

NHRIs then presented their own work on or linked to HRIA pilot models. Christiaan Duijst from the Netherlands Institute for Human Rights presented two tools for conducting an HRIA in the public sector, including the Fundamental Rights and AI Framework: its four-step process includes considering safeguards and bias. It has been piloted with positive results in the City of Rotterdam. He also gave insight into a Dutch Court of Audit investigation into algorithms used by central government.

Line Gamrath Rasmussen closed by presenting materials developed by the Danish Institute for Human Rights. These included the Digital Rights Check questionnaire, which allows organisations to examine whether their digital solutions have negative human rights impacts, and their guidance on human rights impact assessment of digital activities.

  • Download the initial presentation introducing this topic from Francesca Fanucci. 
  • Download the presentation from Christiaan Duijst.
  • Download the presentation from Line Gamrath Rasmussen.

Machine-learning technologies, a form of artificial intelligence, are becoming the main tools for shaping and arbitrating information online, using automated procedures.

Deniz Wagner and Julia Haas illustrated how these could endanger human rights, especially freedom of expression and media freedom. This is in particular due to a lack of uniform definitions of key concepts such as ‘hate speech’ or ‘terrorism’ and the inability of AI to fully take into account crucial contextual differences.

They also addressed the potential biases in AI design and how these could lead to unwanted consequences, especially when AI, having been trained in a certain cultural setting, is being used in societies with different cultural communication rules.

In terms of media pluralism and diversity, Deniz Wagner and Julia Haas pointed out that platforms have power to promote or hinder the public’s right to access pluralistic and diverse information, but their business model does not encourage exposing users to all potentially available content. Platforms often use AI to increase the time users spend on the platform and expose them to content that tends to correspond with, or strengthen, their existing interests, creating an information asymmetry.
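
How interest-based ranking narrows what users see can be illustrated with a minimal sketch (the scoring scheme and data are invented; real platform ranking systems are far more complex):

```python
# Minimal sketch of interest-based ranking: content matching a user's
# recorded interests scores higher, so unfamiliar topics sink in the feed.
user_interests = {"politics": 0.9, "sports": 0.1}

posts = [
    {"title": "Election analysis", "topic": "politics"},
    {"title": "Cup final recap", "topic": "sports"},
    {"title": "New science results", "topic": "science"},
]

def engagement_score(post):
    # Topics the user has never engaged with get a low default score,
    # so they are rarely surfaced: the information asymmetry grows.
    return user_interests.get(post["topic"], 0.05)

feed = sorted(posts, key=engagement_score, reverse=True)
print([p["title"] for p in feed])
# ['Election analysis', 'Cup final recap', 'New science results']
```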

Finally, Deniz and Julia highlighted the increased exploitation of behavioural patterns interfering with people’s freedom to form an opinion.

NHRIs also did group work on different scenarios – based on true stories – in which they might use their mandates to address AI-based violations of freedom of expression. Their suggestions included advocating with relevant authorities on regulatory gaps; flagging harmful content; taking on relevant cases; and partnering with media regulators, NGOs and human rights defenders for more impact.

Download the presentation given on this topic. This was delivered by Julia Haas and Deniz Wagner from the Office of the OSCE Representative on Freedom of the Media. They drew on a recent policy manual on artificial intelligence and the freedom of expression.

This documentary takes viewers on a journey to see how much of the personal data we leave online can be used to recreate our lives. The Office of the OSCE Representative on Freedom of the Media has been involved in its development. See the trailer and read more here.

Protecting and promoting human rights are the main functions of NHRIs. This makes them key institutions for ensuring that a human rights approach is applied in all AI-related work.

Protection may be understood as addressing and seeking to prevent actual human rights violations. Many issues depend on interpretation and on the margin of acceptance adopted by different governments and institutions. Often the challenges lie in unclear standards, differences in interpretation, and the replacement of human rights scrutiny with due diligence and compliance processes.

Promotion, on the other hand, is aimed at creating a society where human rights are more broadly understood and respected. To reach this objective, NHRIs should look into the functions and responsibilities they have under international and national commitments.

A crucial aspect to remember is that AI is designed by people to work with people. Thus, the advising and legal assistance functions that NHRIs have under the UN Paris Principles fall under both promotion and protection. This also requires that NHRIs be fully aware of how AI functions and of the human rights challenges raised by its development, and have good awareness of the development of legislation and policies at local and global levels.

NHRIs need to look in depth at the ways technologies impact human rights in areas such as surveillance and social control (face recognition and data collection); digital exclusion; datafication; and profiling through algorithms. The main tasks of NHRIs include engaging with regional and international organisations, and advising on the development of guidelines, principles and related policies and legislation through working groups, networks, and other official procedures.

Download the presentations used for these sessions. They were delivered by Line Gamrath Rasmussen and Evguenia Klementieva from the Danish Institute for Human Rights.

Christiaan Duijst from the Netherlands Institute for Human Rights also gave an overview of his NHRI’s work on digitalisation and human rights.

When NHRIs exercise their monitoring and reporting functions, it is important that they look at external and internal aspects of AI. External aspects relate to how the model is perceived by the users and how the outcome can be interpreted, while internal aspects require understanding of how different types of AI models work.  

During this session on ‘human rights and rule of law in the use of profiling models by the authorities’, the Danish Institute for Human Rights (DIHR) focused on regression and classification models of supervised learning and their impact on profiling.  

Participants learned more about the differences between these two models and how they are used. The DIHR explained that:  

  • The regression model determines the level of a dependent variable (a continuous value). It is used, for example, to determine the relationship between age, type of illness and life expectancy.
  • The classification model determines whether objects possess a dependent variable (a discrete category). This second model is used, for example, to determine whether a citizen is eligible for welfare payments, insurance compensation, and other things. A minimal code sketch contrasting the two models follows below.
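
The following sketch, with invented toy data and scikit-learn as one common toolkit, is only meant to make the contrast concrete; it is not taken from the DIHR materials:

```python
# Illustrative contrast between the two supervised-learning model types.
from sklearn.linear_model import LinearRegression, LogisticRegression

# Each row is [age, has_illness]; the data is invented for this sketch.
X = [[50, 0], [45, 1], [70, 1], [60, 0]]

# Regression: predict the LEVEL of a dependent variable (a number),
# e.g. expected lifetime in years.
y_years = [82, 77, 68, 79]
reg = LinearRegression().fit(X, y_years)
print(reg.predict([[55, 1]]))  # a continuous estimate, e.g. ~74

# Classification: predict WHETHER an object falls into a category,
# e.g. eligible (1) or not eligible (0) for a welfare payment.
y_eligible = [0, 1, 1, 0]
clf = LogisticRegression().fit(X, y_eligible)
print(clf.predict([[55, 1]]))  # a discrete label, e.g. [1]
```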

It is thus important for NHRIs to look into the model to be able to identify biases and report them. At the national level, clearly defined procedures for reporting and for disseminating information to the public are crucial.

Participants identified the following challenges:

  • A lack of technical knowledge and understanding of AI processes (technological aspects, where to look to find a violation, etc.);
  • A lack of provisions in legislation to give NHRIs all the tools they need to address/work on the topic; and
  • A low level of knowledge and understanding of which rights can be infringed by AI.

People do not always realise what rights are affected by AI, what violations they should report, and how to do so.

The examples from the Netherlands and Portugal presented during this session demonstrated that a growing number of complaints on AI-related human rights infringements are linked to the digitalisation of public services.

In Portugal, the lack of digital literacy is aggravated by a lack of access to computers and infrastructure. There are additional challenges related to mistakes that algorithms make due to poor input data and misunderstanding of the goals of the work conducted by AI.

In the Netherlands, the situation is similar. The extremely high number of people (over 10,000) whose rights were violated when algorithms were used in the areas of tax payments and childcare benefits demonstrates the importance of NHRI engagement and monitoring.

The groups identified the following solutions to these challenges:

  • Ensuring more internal education for NHRIs on the topic; 
  • Cooperating with educational establishments to get more information, as well as to develop and deliver capacity-building and awareness-raising events and campaigns to educate the general public;
  • Doing further research into the topic, in cooperation with other NHRIs and with the Council of Europe and other international organisations; 
  • Conducting needs assessments and looking at various models of AI to identify key issues to address/human rights violations. The research may include human rights impact assessments of AI. 
  • Creating complaints templates reflecting potential human rights violations. Some aspects that NHRIs may focus on are the right of individuals to access their data, the ability to withdraw consent, and the possibility to have data corrected, deleted, and transferred.

Additional resources
