What happened
The EU introduced the AI Act, which classifies AI systems used for creditworthiness assessment as high-risk and mandates transparency, explainability, and human oversight. Financial institutions must therefore maintain inventories of their AI systems and implement risk management policies. Regulators are also considering extending Digital Operational Resilience Act (DORA) provisions to AI models, to mitigate concentration risk from growing reliance on foreign tech firms for AI solutions in credit scoring and fraud detection. The Act is being phased in, with penalties for non-compliance.
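The inventory obligation above can be pictured as a structured record per AI system. The sketch below is purely illustrative: the field names, the risk-tier taxonomy, and the `AISystemRecord` class are assumptions for demonstration, not the Act's actual required schema.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical risk tiers loosely mirroring the Act's categories;
# the exact taxonomy and required fields are assumptions, not legal text.
RISK_TIERS = {"minimal", "limited", "high", "prohibited"}

@dataclass
class AISystemRecord:
    """One entry in an institution's AI system inventory (illustrative)."""
    name: str
    purpose: str            # e.g. "creditworthiness assessment"
    risk_tier: str          # one of RISK_TIERS
    vendor: str             # third-party provider, if any
    human_oversight: bool   # is a human-in-the-loop control in place?
    last_review: date

    def __post_init__(self):
        if self.risk_tier not in RISK_TIERS:
            raise ValueError(f"unknown risk tier: {self.risk_tier}")

    def needs_enhanced_controls(self) -> bool:
        # High-risk systems are the ones that trigger transparency,
        # explainability, and oversight obligations.
        return self.risk_tier == "high"

# Example: a credit-scoring model sourced from an external vendor.
scoring = AISystemRecord(
    name="retail-credit-score-v3",
    purpose="creditworthiness assessment",
    risk_tier="high",
    vendor="ExampleVendor Ltd",
    human_oversight=True,
    last_review=date(2024, 6, 1),
)
print(scoring.needs_enhanced_controls())  # → True
```

Keeping records like this queryable lets a compliance team answer regulator questions (which high-risk systems lack human oversight? which depend on a single foreign vendor?) directly from the inventory.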
Why it matters
Classifying creditworthiness AI as high-risk imposes a significant operational constraint on financial institutions and increases the workload of compliance and risk management teams. The mandates for transparency, explainability, and human oversight, together with the requirements for AI system inventories and risk management policies, expose a visibility gap at institutions that have not yet catalogued the models they run. Due diligence requirements also rise for procurement teams and platform operators sourcing third-party AI solutions, widening exposure to operational and data privacy vulnerabilities and to accountability gaps in AI oversight.