
The director of the Catalan Data Protection Authority, Meritxell Borràs, introduced the Catalan FRIA model for assessing the impact on fundamental rights in AI applications during the 2nd Meeting of the Working Groups of the Ibero-American Data Protection Network.
This model is a pioneering initiative in Europe: it provides real use cases in which impact assessments were carried out to identify and mitigate risks in projects involving AI that may pose a high risk to fundamental rights. These assessments, known as FRIAs (Fundamental Rights Impact Assessments), are mandatory under the new Artificial Intelligence Regulation (RIA). The Catalan FRIA model therefore serves as a reference point to guide AI system developers and promoters in designing ethical, rights-respecting AI applications.
During her speech, Borràs thanked Alessandro Mantelero for his contribution. Mantelero is an expert with the European Data Protection Board, Professor of Private Law, and holder of the Jean Monnet Chair in Mediterranean Digital Societies and Law at the Polytechnic University of Turin. He led the 'DPD en xarxa' working group, which developed the Catalan FRIA model with the participation of data protection officers (DPOs) from public and private entities in Catalonia.
This group has been working since 2024 on developing a methodology for assessing the impact on fundamental rights in AI systems, testing it in five real use cases to identify risks and establish mitigation measures. The document provides a practical response to the obligation established in the RIA, which does not specify how such an assessment should be conducted.
Borràs encouraged attendees to submit new use cases for analysis by the 'DPD en xarxa' working group, emphasizing that the Catalan FRIA model remains active and will continue publishing results as new AI projects are studied.
'DPD en xarxa' is a learning and collaboration community for data protection officers in Catalonia, launched by the APDCAT more than a year ago to promote knowledge exchange, training, and interaction among professionals ensuring compliance with data protection regulations in organizations.
As part of the 2nd Meeting of the Working Groups of the Ibero-American Data Protection Network, held in Brazil, the Catalan Data Protection Authority (APDCAT) presented the Catalan model for assessing the impact on fundamental rights in the use of artificial intelligence (AI), developed within the 'DPD en xarxa' community.