European banks face increasing systemic risks as they grow more reliant on foreign tech firms for AI solutions, and the Netherlands' financial regulator is urging prompt action to mitigate those risks. As banks integrate AI into credit scoring, fraud detection, and other services, they become more exposed to the operational and data-privacy vulnerabilities of third-party providers. Regulators are concerned about governance, compliance, and a potential accountability gap in AI oversight.

The EU's AI Act addresses these concerns by creating a legal framework intended to promote safe and trustworthy AI. It classifies AI systems used for creditworthiness assessment as high-risk, mandating transparency, explainability, and human oversight. Financial institutions must maintain inventories of their AI systems and implement risk management policies to ensure compliance.
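To make the inventory obligation concrete, a minimal sketch of what one inventory record and a basic compliance check might look like is shown below. All names, fields, and checks here are illustrative assumptions, not requirements taken from the AI Act's text; a real inventory would follow each institution's compliance framework.

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum


class RiskTier(Enum):
    """Risk tiers loosely following the AI Act's classification."""
    MINIMAL = "minimal"
    LIMITED = "limited"
    HIGH = "high"          # e.g. creditworthiness assessment
    PROHIBITED = "prohibited"


@dataclass
class AISystemRecord:
    """One entry in a bank's AI system inventory (illustrative fields)."""
    name: str
    vendor: str                      # third-party provider, if any
    use_case: str                    # e.g. "credit scoring"
    risk_tier: RiskTier
    human_oversight: bool            # is a human reviewer in the loop?
    explainability_doc: str | None   # link to model documentation
    last_review: date


def compliance_gaps(record: AISystemRecord) -> list[str]:
    """Flag obvious gaps for high-risk systems (simplified checks)."""
    gaps = []
    if record.risk_tier is RiskTier.HIGH:
        if not record.human_oversight:
            gaps.append("high-risk system lacks human oversight")
        if record.explainability_doc is None:
            gaps.append("high-risk system lacks explainability documentation")
    return gaps


if __name__ == "__main__":
    scoring_model = AISystemRecord(
        name="retail-credit-scoring-v3",   # hypothetical system
        vendor="ExampleCloudAI",           # hypothetical foreign tech provider
        use_case="creditworthiness assessment",
        risk_tier=RiskTier.HIGH,
        human_oversight=True,
        explainability_doc=None,
        last_review=date(2024, 11, 1),
    )
    for gap in compliance_gaps(scoring_model):
        print("GAP:", gap)
```

Running the sketch flags the missing explainability documentation, which is the kind of gap the Act's transparency and oversight mandates are meant to surface before a high-risk system goes into production.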
The European Central Bank (ECB) is monitoring AI adoption, focusing on governance and risk management. The European Banking Authority (EBA) has mapped the AI Act against existing rules for banks and payment firms, finding no significant contradictions but emphasising the need for effective integration. The European Parliament has also adopted a resolution on AI in financial services, highlighting opportunities and risks, including data bias and over-reliance on a few providers.
To mitigate concentration risks, regulators are considering extending provisions of the Digital Operational Resilience Act (DORA) to AI models. The AI Act itself follows a phased implementation timeline, with penalties for non-compliance.
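One way to make "concentration risk" measurable is to compute how dependent an institution's AI estate is on a handful of vendors, for instance with a Herfindahl-Hirschman-style index over the inventory. The sketch below is a plain illustration of that idea; the vendor names, data, and warning threshold are assumptions, not figures drawn from DORA or the AI Act.

```python
from collections import Counter


def vendor_concentration(vendors: list[str]) -> float:
    """Herfindahl-Hirschman index over vendor shares (0..1; 1 = single vendor)."""
    counts = Counter(vendors)
    total = sum(counts.values())
    return sum((n / total) ** 2 for n in counts.values())


if __name__ == "__main__":
    # Vendor behind each AI system in the inventory (hypothetical data).
    inventory_vendors = ["ExampleCloudAI", "ExampleCloudAI",
                         "EUModelCo", "ExampleCloudAI"]
    hhi = vendor_concentration(inventory_vendors)
    print(f"vendor HHI: {hhi:.2f}")
    if hhi > 0.5:  # illustrative threshold, not a regulatory figure
        print("warning: AI estate concentrated in few providers")
```

With three of four systems supplied by one provider, the index comes out at roughly 0.63, the sort of reading that would prompt the diversification scrutiny regulators have in mind.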