
To mark International Data Protection Day, the Catalan Data Protection Authority (APDCAT) has presented to the Parliament of Catalonia a model, pioneering in Europe, for developing artificial intelligence (AI) solutions that respect fundamental rights. It is the first methodology in Europe for fundamental rights impact assessment in the field of artificial intelligence (FRIA) applied to specific cases.
At an event chaired by the President of the Parliament of Catalonia, Josep Rull i Andreu, and the Director of the APDCAT, Meritxell Borràs i Solé, the Authority presented this model for developing trustworthy, human-centered AI solutions, designed to give concrete form to the new obligations established by the Artificial Intelligence Regulation (RIA). Specifically, the RIA requires an FRIA to be carried out when the AI used in a project may pose a high risk to people. The purpose of the FRIA is to detect and mitigate these risks, in order to prevent possible bias, discrimination and other harms. To date, the competent authorities at European level have issued no guidelines in this area.
This pioneering model was developed within the working group of 'DPD en xarxa', the network of data protection officers of Catalonia, led by Alessandro Mantelero, tenured professor of Civil Law at the Polytechnic University of Turin. It is based on the interaction of the data protection officers (DPOs) of the public and private entities that make up the network, who analysed AI solutions drawn from projects of various entities.
The deputy first secretary of the Bureau of the Parliament, Glòria Freixa i Vilardell, and the director of the APDCAT, Meritxell Borràs i Solé, opened the event. In her speech, Borràs argued that AI must be well designed and well used in order to avoid risks. The director also thanked the DPOs who participated in developing this pioneering methodology, led by Professor Alessandro Mantelero and coordinated by the DPO and head of strategic projects at the APDCAT, Joana Marí.
Borràs recalled that data protection authorities play a very important role in supervising artificial intelligence systems, a role the RIA has reinforced. In this context, the director asked the Parliament and Government of Catalonia to define a Catalan legal framework that makes competitiveness compatible with the defence of everyone's rights. "The model we present today will contribute to Catalonia being a pioneer in people-centered research," concluded Borràs.
Professor Alessandro Mantelero then presented the new methodology and the first cases to which it has been applied: four real cases in which the impact on fundamental rights of the use of AI systems was assessed. The cases come from areas where AI solutions are increasingly used and have the greatest impact on people: education (assessment of learning outcomes and prediction of school dropout), personnel management (decision-support systems in human resource management), access to healthcare (cancer treatment based on medical images) and social welfare services (a voice assistant for the elderly). As new real cases are analysed within the working group, further results will be published through 'DPD en xarxa'.
The methodology comprises three phases, taking into account the context and the types of people exposed to risk, the potential harm to fundamental rights, and the necessary prevention and mitigation measures. The first phase is planning, scoping and risk identification: it includes describing the AI system and its context of use, covering both intrinsic risks (related to the system itself) and extrinsic risks (related to the interaction between the system and the environment in which it is deployed). The second phase is risk analysis, which must go beyond a general identification of potential areas of impact and estimate the level of impact on each right or freedom. The third phase is risk mitigation and management: only by defining the level of risk before and after the adoption of prevention and mitigation measures is it possible to demonstrate that the risk has been addressed specifically and effectively.
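The before/after comparison in the third phase can be illustrated with a minimal sketch. The scales, right names and the likelihood-times-severity index below are illustrative assumptions for the sake of the example, not part of the APDCAT/FRIA model itself:

```python
from dataclasses import dataclass

# Assumed three-level scale; the actual FRIA model may grade risk differently.
LEVELS = {"low": 1, "medium": 2, "high": 3}

@dataclass
class Risk:
    right: str        # fundamental right or freedom potentially affected
    likelihood: str   # "low" | "medium" | "high"
    severity: str     # "low" | "medium" | "high"

    def index(self) -> int:
        # Simple likelihood x severity risk index (an assumption for illustration)
        return LEVELS[self.likelihood] * LEVELS[self.severity]

def assess(before, after):
    """Pair each risk's index before and after mitigation measures (phase 3)."""
    return {b.right: (b.index(), a.index()) for b, a in zip(before, after)}

# Phases 1-2: risks identified and analysed per right (hypothetical values)
before = [Risk("non-discrimination", "high", "high"),
          Risk("privacy", "medium", "high")]
# Phase 3: the same risks re-assessed after prevention/mitigation measures
after = [Risk("non-discrimination", "low", "medium"),
         Risk("privacy", "low", "medium")]

print(assess(before, after))
# prints {'non-discrimination': (9, 2), 'privacy': (6, 2)}
```

Recording both indices per right is what lets an organisation show the risk was addressed specifically, rather than merely asserting that mitigation took place.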
The aim of the work is to contribute to the international debate on fundamental rights impact assessment models, as it provides evidence on crucial issues such as the relevant variables to take into account; the methodology for assessing risk and constructing risk indices; the role that standard questionnaires can play, and their limitations; and the role of experts in this type of assessment. The model can thus serve as a reference for organisations in Europe and in non-EU countries that want to adopt a fundamental rights approach to AI but lack a proven reference model, offering concrete cases against which to compare their experiences.
New roles and case studies
The presentation took place within the framework of the conference 'Artificial Intelligence and Fundamental Rights: A Look Beyond Privacy', held this morning and attended by numerous experts in the field.
In this context, the head of the Legal Advisory Service of the APDCAT, Xavier Urios Aparisi, spoke about the new roles and obligations that the RIA establishes to minimise the impacts of AI use.
The use cases analysed with the methodology were then presented in a panel moderated by the DPO and head of strategic projects of the APDCAT, Joana Marí Cardona. Alongside Alessandro Mantelero, the participants were Cristina Guzmán Álvarez, DPO of the Universitat Politècnica de Catalunya - BarcelonaTech (UPC); Esther Garcia Encinas, head of the Privacy Office of CaixaBank; Ruben Ortiz Uroz, DPO of the University of Barcelona; and M. Ascensión Moro Cordero, head of the Open Government Department and coordinator of the Presidency of the Sant Feliu de Llobregat City Council.