The Catalan Data Protection Authority (APDCAT) and the Brazilian National Data Protection Authority (ANPD) have signed a memorandum of understanding to collaborate in the design, implementation, and promotion of tools that ensure the development of artificial intelligence systems (AIS) that respect fundamental rights.
At the 47th Global Privacy Assembly, held in Seoul, the director of APDCAT, Meritxell Borràs i Solé, and the president of ANPD, Waldemar Gonçalves, signed a set of commitments to share strategic information and promote joint projects on artificial intelligence (AI), and specifically to create common spaces for the exchange of experiences in order to improve the protection of fundamental rights in AI.
“The main point of the agreement is collaboration for the implementation of the Catalan model of identification and mitigation of risks to fundamental rights in the development of AI systems as a reference in Brazil. The methodology is pioneering in Europe and was developed by APDCAT,” said Waldemar Gonçalves, referring to FRIA, the acronym for Fundamental Rights Impact Assessment.
For her part, the director of APDCAT recalled that as of August 2026, fundamental rights impact assessments will be mandatory for high-risk AI projects in Europe. “The alliance with ANPD will therefore help promote the methodology in Brazil, sharing new use cases and results,” she explained.
The Catalan FRIA model in Brazil
Specifically, the two parties will collaborate on the implementation of the Catalan FRIA model for trustworthy AI as a reference framework in their respective fields of action. This pioneering methodology in Europe was driven by APDCAT within the framework of the ‘DPD in Network’ community, in collaboration with professor and expert Alessandro Mantelero. It guides AIS developers in identifying potential risks to fundamental rights and in mitigating them, and it has already been applied to concrete cases that serve as examples. The newly signed agreement will help promote the Catalan methodology among organizations, companies, and entities in Brazil, and enable the parties to share new use cases in which it may be applied, along with the results obtained.
Since its official presentation at the Parliament of Catalonia on January 28, APDCAT has exported the Catalan FRIA model to make it a national and international benchmark for guiding AI system developers so that their products and services respect fundamental rights and are trustworthy. In this regard, the Authority has presented and promoted it throughout the year at conferences, networks, and forums worldwide, such as the Ibero-American Data Protection Network and the Spring Conference, and in countries including Italy, Brazil, Colombia, Georgia, and Costa Rica. As a result of this work, in June APDCAT and the Basque Data Protection Authority signed an agreement to promote the Catalan model among entities in the Basque Country, and the Croatian Data Protection Authority has translated it into Croatian and recommends it within its scope of action.
Testing environments for trustworthy AI
Furthermore, the agreement provides for technical support and the exchange of knowledge and experiences regarding the deployment of regulatory sandboxes in AI. These are controlled testing environments where competent authorities and AIS providers work together to define good practices in the use of AI before it reaches the market, with the goal of ensuring that AI projects are safe, trustworthy, and respectful of fundamental rights. Sandboxes make it possible to work on specific cases in a supervised manner, for a defined period, and under specific conditions. These concrete projects should serve as a guide and reference for developing AI projects in line with current regulations, particularly for small and medium-sized enterprises, entrepreneurs, and startups.
In Europe, the Artificial Intelligence Regulation (AI Act) establishes the obligation for competent authorities in all member states to provide regulatory sandboxes in AI at national, regional, and local levels starting in August 2026.
The goal is to improve legal certainty, support the exchange of good practices, foster innovation and competitiveness, facilitate the development of an AI ecosystem, contribute to regulatory learning based on verified data, and make it easier and faster for AI systems to access the market, particularly for SMEs and startups.
In this context, last June ANPD launched a call for participation in a pilot regulatory sandbox on Artificial Intelligence and Data Protection in Brazil, with the aim of experimenting with innovative techniques, technologies, or business models.
Joint programs and research
Likewise, the memorandum provides for the development of education, training, and awareness programs on personal data protection, as well as the promotion of joint studies and research, particularly regarding artificial intelligence and privacy. Finally, the parties will exchange information on best practices in privacy policies and personal data protection, with the aim of strengthening the defense of citizens’ fundamental rights.