Fundamental rights are protected at both the European and national levels, which means that failing to respect them gives rise to legal liability for any resulting harm or damage. The FRIA (Fundamental Rights Impact Assessment) provides a unified methodology for assessing all of these situations.
AI service providers are required to assess their tools to verify both their safety and the impact they may have on fundamental rights. We are more accustomed to the first part of this process, checking for safety, because we already have the technical tools and well-defined standards for it. The part concerning fundamental rights, however, is very new, and we lack experience with it. That is why developing a methodology like the FRIA model is so important.