Specifically, the two parties will collaborate on implementing the Catalan FRIA model for trustworthy AI as a reference framework in their respective fields of action. This pioneering methodology in Europe was driven by APDCAT within the framework of the ‘DPD in network’ community, in collaboration with professor and expert Alessandro Mantelero. It guides AI system (AIS) developers in identifying potential risks to fundamental rights and in mitigating them, and it has been applied to concrete cases that serve as examples. The newly signed agreement will help promote the Catalan methodology among organizations, companies, and entities in Brazil, and will enable the parties to share new use cases in which it may be applied, along with the results obtained.
Since its official presentation at the Parliament of Catalonia on January 28, APDCAT has promoted the Catalan FRIA model as a national and international benchmark for guiding AI system developers so that their products and services respect fundamental rights and are trustworthy. To that end, the Authority has presented it throughout the year at conferences, networks, and forums worldwide, such as the Ibero-American Data Protection Network and the Spring Conference, and in countries including Italy, Brazil, Colombia, Georgia, and Costa Rica. As a result of this work, in June APDCAT and the Basque Data Protection Authority signed an agreement to promote the Catalan model among entities in the Basque Country, and the Croatian Data Protection Authority has translated the model into Croatian and recommends it within its scope of action.
Furthermore, the agreement will provide technical support and an exchange of knowledge and experiences regarding the deployment of regulatory sandboxes in AI. These are controlled testing environments where competent authorities and AIS providers work together to define good practices in the use of AI before it reaches the market, so that AI projects are safe, trustworthy, and respectful of fundamental rights. Sandboxes thus make it possible to work on specific cases in a supervised manner, for a defined period, and under specific conditions. The resulting projects should serve as a guide and reference for developing AI in line with current regulations, particularly for small and medium-sized enterprises, entrepreneurs, and startups.
In Europe, the Artificial Intelligence Regulation (AI Act) obliges member states to ensure that their competent authorities provide at least one AI regulatory sandbox at national level by August 2026; sandboxes may also be established at regional or local level.
The goal is to improve legal certainty, support the exchange of good practices, foster innovation and competitiveness, facilitate the development of an AI ecosystem, contribute to regulatory learning based on verified data, and make it easier and faster for AI systems to access the market, particularly for SMEs and startups.
In this context, last June ANPD launched a call for participation in a pilot regulatory sandbox on Artificial Intelligence and Data Protection in Brazil, with the aim of experimenting with innovative techniques, technologies, or business models.
Likewise, the memorandum provides for the development of education, training, and awareness programs on personal data protection, as well as the promotion of joint studies and research, particularly regarding artificial intelligence and privacy. Finally, the parties will exchange information on best practices in privacy policies and personal data protection, with the aim of strengthening the defense of citizens’ fundamental rights.
Last update: 10.02.2026