
Alessandro Mantelero is an Associate Professor of Private Law and Law & Technology at the Polytechnic University of Turin. In 2022, he was awarded the Jean Monnet Chair in Mediterranean Digital Societies and Law by the European Commission. An expert in law and technology, he led the working group of the ‘DPD en xarxa’ community that developed the FRIA (Fundamental Rights Impact Assessment) model for artificial intelligence (AI). In this interview, he explains how this methodology allows us to assess AI’s impact on fundamental rights and why its importance is growing in Europe. Mantelero highlights the need for clear regulation and the application of the model in real-world cases to ensure technologies are not only safe, but also fair.
What is the relevance of the FRIA model?
Fundamental rights are protected at both the European and national levels. This means that if they are not respected, there is legal liability for any harm or damage that may occur. The FRIA provides a unified methodology for assessing all of these situations.
AI service providers are required to assess their tools to verify their safety and the impact they may have on fundamental rights. We are more used to doing the first part of this process—checking for safety—because we already have the technical tools and well-defined standards for that. However, the part related to fundamental rights is very new, and we lack experience in it. That’s why developing a methodology like the FRIA model is so important.
Who is the model aimed at?
The fundamental rights impact assessment is aimed at those who deploy artificial intelligence. On one hand, there is the company that develops and distributes the AI solution, such as Microsoft; on the other, the organization that applies it in a specific domain, such as the Government of Catalonia. It is the organization applying the AI that must go through this assessment, so it is the one that needs the tool.
What was the main challenge in designing and developing this model?
The biggest challenge was making it simple. Most models of this kind tend to be too complex, with too many variables and too many questions, and they do not fit well with the structure of a fundamental rights impact assessment. Another challenge was creating a model that follows a risk logic while also incorporating a legal perspective.
The development of the FRIA model focused on specific use cases. What were the main takeaways?
Only by implementing a model can you truly understand how it works. It’s easy to develop models that work in theory but are never actually used. These case studies help validate the FRIA model. In the cases studied—education, social benefits, hiring processes, and biomedical research—misusing artificial intelligence can have serious consequences. The idea was to verify that the FRIA model worked well in these real-world scenarios, and we were successful.
Do you think the Catalan FRIA model could be adopted by other countries?
This model can be applied in multiple countries. In fact, the Croatian data protection authority has already adopted it. Currently, other organizations and authorities are also interested in applying it in their areas of work. It’s a universal model that can be adopted by any organization.
Besides the FRIA assessment, in certain cases a data protection impact assessment is also required. How might having to carry out both assessments affect things?
These two assessments should be connected. The FRIA model has a very similar structure to the data protection and privacy impact assessment, so integrating the two models is straightforward.
The report mentions the need to involve expert professionals in the assessment process. What professional profiles should be involved in the FRIA?
We need experts who are well-versed in this field. Right now, there are very few. The most suitable profile is that of the Data Protection Officer (DPO). This role already exists in many organizations and companies and is familiar with conducting data protection impact assessments. In the working group that developed the model, DPOs were involved, and we found that with the right training, they can take on this role without any issues.
One of the key elements of the model is the impact matrix on fundamental rights. How was this matrix designed, and why was a four-level risk scale chosen?
Risk can be assessed using either symmetric or asymmetric scales. In our case, we use a symmetric scale because it distributes the risk levels more evenly. We chose an even number of levels, four, to prevent everyone from placing the risk in the middle: with three or five levels, there is a tendency to treat the middle level as a catch-all category. The four levels correspond to low, moderate, high, and very high risk, expressed in descriptive rather than mathematical terms, as is typical in the field of fundamental rights.
The document mentions that the FRIA should have a circular approach. What does that mean?
It’s important for the model to be circular because this methodology deals with a contextual field that requires ongoing re-evaluation. It assesses a current risk, in a specific context, affecting specific groups of people. All these variables can change, just as the technology itself can change. The model must account for the fact that parameters are not static.
What aspects of the FRIA model could be improved in the future?
Experience shows that all models evolve, become more refined, and improve as they are applied. The FRIA is a solid and well-developed model, and the use cases confirm that. We are aware that we’ve applied it to specific cases, so using it on a larger scale will allow us to adjust the questionnaire and automate certain parts. That said, the final result will always depend on the skill and judgment of the expert applying the model.
Among the use cases analyzed—education, human resources, health, and social welfare—which do you think best demonstrates the benefits of a FRIA assessment?
Each one helps us understand different aspects of the methodology. I’d highlight the CaixaBank case, because the results were quite positive. In the health sector, using AI to detect cancer is also particularly interesting, as it's applied both within and outside the EU and allows us to study specific risks in each context.
Some organizations might see FRIA as a bureaucratic burden. What arguments would you use to convince them of its value?
There are two arguments. First, when developing technology, it’s better for everyone if it respects fundamental rights. A technology that violates human rights is not a good technology. The second is a bit more direct: if the technology fails to respect fundamental rights, the company’s reputation can suffer, and it may face penalties.
What role do you think data protection authorities will play in relation to FRIA in the future?
The Catalan Data Protection Authority (APDCAT), like other data protection authorities, has the competence to enforce the Artificial Intelligence Regulation (AIR) and is responsible for ensuring the regulation is correctly applied with respect to fundamental rights. The FRIA model should help support compliance with this regulation.
Do you think the European Union’s Artificial Intelligence Regulation (AIR) provides sufficient guidance for assessing AI’s impact on fundamental rights, or are there still legal gaps?
The AIR was designed without giving much weight to fundamental rights. Its focus was more on product safety from a market perspective. We need clear guidelines that explain how to properly assess risks to fundamental rights. In fact, the EU’s first draft addressed fundamental rights only very briefly and generically. I worked with a colleague and the team of rapporteur Benifei to introduce an article with a much broader section on fundamental rights impact. After political negotiations, the text was somewhat watered down and eventually included in Article 27 of the Regulation, which concerns the fundamental rights impact assessment for high-risk AI systems.
How will the FRIA model evolve in the context of rapid advancements in AI like those we are seeing today?
The advantage of this model is that, since it focuses on fundamental rights, it is not as vulnerable to changes in technical developments. A similar thing happened with data protection impact assessment models. Even though they were developed years ago, they’re still relevant because they adapt well to any context. That’s why it’s important to develop a model that remains neutral with respect to specific technological evolutions.
At a time when artificial intelligence is becoming part of everyday life for individuals and institutions, ensuring its ethical use and respect for fundamental rights is more important than ever.