Financial services firms are under more regulatory scrutiny for their AI use than almost any other sector. MaRisk, DORA, and GDPR each carry direct implications for how models are deployed, monitored, and governed. The EU AI Act adds a fourth layer for high-risk use cases.
Most firms have compliance programmes that address IT risk and data protection in general. Very few have translated those programmes into controls that are specifically designed for AI. That gap is where regulators are increasingly focused — and where the examination questions are getting harder.
This article walks through what each framework expects, how the requirements overlap, and what a compliance-grade AI governance infrastructure looks like in practice.
The regulatory landscape
MaRisk (Minimum Requirements for Risk Management, Germany)
The German Federal Financial Supervisory Authority (BaFin) updated MaRisk in 2023 to address algorithmic decision-making and model risk. The key obligations relevant to AI governance are:
AT 7.2 (Technical-organisational resources): Institutions must have adequate controls for IT systems, including processes for approving new systems and monitoring their behaviour. AI models used in business processes fall within this scope. The regulator expects documented approval workflows before production deployment and ongoing performance monitoring.
AT 8.2 (Changes in business activities): New AI-based capabilities — including AI agents that take actions on behalf of the firm — qualify as changes in business activities that require risk assessment, documentation, and management sign-off.
AT 4.3.5 (Models): This module is explicit about model validation requirements. Institutions must validate models before use, document the validation process, and re-validate when material changes occur. For AI, this means the validation obligation covers foundation models used through APIs, not just internally developed models — a position BaFin has confirmed in supervisory guidance.
Outsourcing (AT 9): When AI services are delivered by cloud providers or API vendors, MaRisk outsourcing rules apply. Institutions must assess the materiality of the outsourcing arrangement, maintain exit strategies, and ensure they can audit their service providers. Using OpenAI, Anthropic, or similar API services for business-critical AI workflows triggers these obligations.
DORA (Digital Operational Resilience Act)
DORA, which became applicable in January 2025, is primarily a resilience and ICT risk framework — but its scope encompasses AI systems used in critical business functions.
ICT risk management (Articles 5-16): Institutions must maintain an ICT risk management framework that covers AI systems used in critical or important functions. This includes identifying and documenting these systems, assessing their risk profile, and implementing controls proportionate to that risk.
Third-party ICT provider risk (Articles 28-44): When AI is delivered through third-party providers — cloud-based model APIs, AI-enabled SaaS — DORA requires contractual provisions including audit rights, data location requirements, and exit planning. For AI specifically, this means institutions need to assess whether their model providers can satisfy DORA’s audit and inspection requirements, and ensure contracts include the required clauses.
Incident management and reporting (Articles 17-23): AI-related incidents — a model behaving unexpectedly, a data exposure through an AI prompt, an AI agent taking an unintended action — fall within DORA’s incident classification and reporting requirements. Institutions must have processes to detect, classify, and (where material) report these incidents.
Resilience testing (Articles 24-27): Critical functions supported by AI systems must be included in resilience testing. For AI, this means testing failure modes, fallback procedures, and the robustness of AI-dependent workflows.
GDPR
GDPR’s implications for AI governance are often underestimated. The key obligations are:
Article 5 (Data minimisation): AI prompts that include personal data must satisfy the data minimisation principle. Sending a full customer record to an AI model when only name and email are needed for the task is a compliance failure.
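To make the principle concrete, here is a minimal sketch in Python of field-level minimisation before prompting. The task names and the field whitelist are hypothetical illustrations, not a standard:

```python
# Field-level data minimisation before prompting; the task names and
# whitelist below are illustrative, not a prescribed scheme.

ALLOWED_FIELDS = {
    "draft_welcome_email": {"name", "email"},
    "summarise_complaint": {"complaint_text"},
}

def minimise(record: dict, task: str) -> dict:
    """Return only the fields the task is approved to use."""
    allowed = ALLOWED_FIELDS.get(task)
    if allowed is None:
        raise ValueError(f"No minimisation profile for task: {task}")
    return {k: v for k, v in record.items() if k in allowed}

customer = {
    "name": "Jane Doe",
    "email": "jane@example.com",
    "iban": "DE89370400440532013000",   # never needed for an email draft
    "credit_score": 712,
}

prompt_data = minimise(customer, "draft_welcome_email")
# {'name': 'Jane Doe', 'email': 'jane@example.com'}
```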
Article 25 (Data protection by design): Systems that use AI to process personal data must be designed with data protection in mind from the outset. Retrofitting GDPR controls onto AI workflows deployed without data protection review is a common audit finding.
Article 28 (Processor agreements): AI model providers that process personal data on behalf of your firm are data processors. GDPR-compliant DPAs must be in place. Consumer AI services typically do not offer DPAs that satisfy Article 28 — this is the primary reason uncontrolled use of consumer AI tools creates GDPR exposure.
Article 22 (Automated decision-making): Decisions with legal or similarly significant effects that are made solely by automated means require either explicit consent, contractual necessity, or legal authorisation. AI systems involved in credit decisions, insurance pricing, or employment screening require specific legal review.
Data transfers (Chapter V): Sending personal data to AI model providers whose infrastructure is outside the EEA requires either an adequacy decision covering the destination country, Standard Contractual Clauses, or another transfer mechanism. Many AI API providers process data in the United States — transfer compliance is a live issue for every firm using these services.
EU AI Act
The EU AI Act classifies AI systems by risk level and applies requirements accordingly. For financial services firms, the most relevant provisions are:
High-risk AI systems (Annex III): AI used in credit scoring for natural persons, life and health insurance risk assessment and pricing, or employment decisions is classified as high-risk. These systems require a conformity assessment before deployment, CE marking, and registration in the EU database for high-risk AI systems. Firms using AI in these functions must assess whether they are deployers of high-risk AI systems under the Act.
General-purpose AI model obligations (Articles 51-56): Firms that fine-tune or otherwise substantially modify foundation models may take on provider-side obligations that go beyond those of pure deployers. This is relevant for firms building proprietary AI capabilities on top of API-accessible models.
Transparency obligations (Article 50): AI systems that interact with humans, such as chatbots, must disclose that the user is interacting with an AI system, and AI-generated content must be identifiable as such. This applies to customer-facing AI in financial services.
The compliance gap most firms have
Despite this extensive regulatory framework, most financial services firms have the same gap: they know what the rules say, but they lack the infrastructure to demonstrate compliance.
Specifically, firms typically cannot:
- Enumerate every AI model being used across the organisation. Business units, technical teams, and individual employees use AI tools outside central IT oversight. A firm that cannot inventory its AI use cannot manage it.
- Show an audit trail of AI interactions. Regulators conducting examinations increasingly ask for logs of AI usage — what models were used, with what data, for what decisions. Firms without this logging have nothing to show.
- Demonstrate that sensitive data did not leave controlled environments. For GDPR and MaRisk purposes, firms need to be able to show that personal data, client data, and regulated information were not sent to external AI services without appropriate controls and agreements.
- Prove that AI decisions were reviewed appropriately. For high-risk use cases, regulators expect evidence that AI outputs were subject to human review before consequential decisions were taken.
What compliance-grade AI governance infrastructure looks like
Meeting these requirements demands a technical layer, not just policy documentation.
A control plane for AI interactions. Every AI call — whether from an employee using a chatbot interface or an automated agent workflow — should pass through a system that can inspect, log, and apply policy to that interaction. This is the foundation of audit-ready governance.
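A minimal sketch of the pattern in Python. The policy check, logger, and provider call below are placeholders for illustration, not any specific product's API:

```python
import json
import time
import uuid

APPROVED_MODELS = {"gpt-4o", "claude-sonnet"}  # hypothetical approved list

def check_policy(user: str, model: str, prompt: str) -> None:
    """Placeholder policy gate: block models not on the approved list."""
    if model not in APPROVED_MODELS:
        raise PermissionError(f"Model {model!r} is not approved for {user}")

def audit_log(entry: dict) -> None:
    """Append-only audit record; a real system writes to tamper-evident storage."""
    print(json.dumps(entry))

def call_model(model: str, prompt: str) -> str:
    """Stand-in for the actual provider API call."""
    return "<model response>"

def governed_call(user: str, business_function: str, model: str, prompt: str) -> str:
    """Every AI call passes through policy evaluation and logging."""
    request_id = str(uuid.uuid4())
    check_policy(user, model, prompt)      # inspect and apply policy first
    response = call_model(model, prompt)
    audit_log({                            # log metadata, not raw content
        "request_id": request_id,
        "timestamp": time.time(),
        "user": user,
        "business_function": business_function,
        "model": model,
        "prompt_chars": len(prompt),
        "response_chars": len(response),
    })
    return response
```

The point is architectural: if every call site goes through a wrapper like this, the audit trail and policy enforcement come with adoption rather than being reconstructed after the fact.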
Data classification at the prompt layer. Before data reaches an external model, it should be classified and filtered against your data protection policy. Personal data that cannot be processed under an adequate legal basis should be blocked or pseudonymised before transmission.
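A minimal sketch of the filtering step, using two illustrative regex detectors. Production systems would use trained PII recognisers rather than a handful of regexes; the pattern names and placeholder format are assumptions for the example:

```python
import re

# Illustrative detectors only; real deployments use proper PII recognition.
PATTERNS = {
    "IBAN":  re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def pseudonymise(prompt: str) -> tuple[str, dict]:
    """Replace detected identifiers with placeholder tokens. The mapping
    stays inside the controlled environment so values can be re-inserted
    into the model's response without ever leaving it."""
    mapping: dict[str, str] = {}
    for label, pattern in PATTERNS.items():
        for i, match in enumerate(dict.fromkeys(pattern.findall(prompt))):
            token = f"<{label}_{i}>"
            mapping[token] = match
            prompt = prompt.replace(match, token)
    return prompt, mapping

clean, mapping = pseudonymise(
    "Refund to DE89370400440532013000 and notify jane@example.com"
)
# clean == "Refund to <IBAN_0> and notify <EMAIL_0>"
```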
Third-party vendor assessment and contract management. Each AI model provider in use should be assessed against your DORA third-party risk framework and have appropriate DPAs in place. This requires knowing who those providers are — which requires the inventory capability above.
Model performance monitoring. For models used in consequential decisions, you need ongoing monitoring for drift, unexpected outputs, and performance degradation. This is not optional under MaRisk’s model risk requirements.
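One widely used check for score distribution drift is the Population Stability Index. A minimal sketch follows, using the conventional rule-of-thumb threshold (values above 0.25 are usually read as a major shift; your validation policy sets the actual trigger):

```python
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index between reference and live score samples."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    e_pct = np.clip(e_pct, 1e-6, None)   # avoid division by and log of zero
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
baseline = rng.normal(0.60, 0.10, 5_000)   # validation-time score distribution
live = rng.normal(0.55, 0.12, 5_000)       # this week's production scores

value = psi(baseline, live)
if value > 0.25:   # common rule of thumb for a major population shift
    print(f"PSI {value:.3f}: investigate and consider re-validation")
```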
Incident detection and classification. AI-related incidents — prompt injection attacks, model-generated misinformation used in a business process, data exposure through an AI tool — need to be detectable and classifiable within your existing incident management framework.
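A sketch of what that mapping can look like in code. The event types and severity assignments below are illustrative; actual materiality thresholds for DORA reporting come from the regulatory technical standards and your incident policy:

```python
from dataclasses import dataclass
from enum import Enum

class Severity(Enum):
    INFO = 1
    INCIDENT = 2
    MAJOR = 3   # candidate for regulatory reporting under DORA

# Hypothetical mapping of AI-specific events into the firm's scheme.
AI_EVENT_SEVERITY = {
    "prompt_injection_detected": Severity.INCIDENT,
    "pii_sent_to_external_model": Severity.MAJOR,
    "agent_unauthorised_action": Severity.MAJOR,
    "model_output_anomaly": Severity.INFO,
}

@dataclass
class AIIncident:
    event_type: str
    affected_function: str
    severity: Severity

def classify(event_type: str, affected_function: str) -> AIIncident:
    """Unknown AI event types default to INCIDENT rather than INFO."""
    severity = AI_EVENT_SEVERITY.get(event_type, Severity.INCIDENT)
    return AIIncident(event_type, affected_function, severity)
```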
Documented human review for high-risk use cases. For decisions that require human oversight under Article 22 of GDPR or the EU AI Act’s high-risk provisions, the review must be documented. Not a general policy statement that review happens — a record of the specific review for the specific decision.
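In data terms, "a record of the specific review" means one row per reviewed decision. A minimal sketch, with field names that are assumptions for illustration:

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass(frozen=True)
class ReviewRecord:
    """One immutable row per reviewed AI-assisted decision."""
    decision_id: str        # links to the underlying business decision
    model_output_ref: str   # pointer to the logged AI output that was reviewed
    reviewer: str
    outcome: str            # "accepted" | "overridden" | "escalated"
    rationale: str          # required when the reviewer deviates from the model
    reviewed_at: str        # UTC timestamp, ISO 8601

record = ReviewRecord(
    decision_id="credit-2025-00417",
    model_output_ref="audit-log/request-7a1e",
    reviewer="j.schmidt",
    outcome="overridden",
    rationale="Score ignored a recent income change documented in the file",
    reviewed_at=datetime.now(timezone.utc).isoformat(),
)
print(json.dumps(asdict(record)))
```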
A practical compliance checklist
For financial services firms assessing their current position:
- Inventory of all AI tools and models in use, including tools deployed by individual business units
- Classification of each use case by risk tier (high-risk EU AI Act, MaRisk model risk scope, GDPR Article 22 scope)
- DPAs in place with all external AI model providers processing personal data
- DORA-compliant contracts with material AI third-party providers (audit rights, exit provisions, data location)
- Technical logging of AI interactions covering what model, what data category, what business function (see the record sketch after this list)
- Data filtering controls to prevent uncontrolled transmission of personal or confidential data to external models
- Model validation records for models used in risk-relevant business processes
- Incident classification criteria that include AI-related events
- Human review documentation for high-risk automated decisions
- Review date set for EU AI Act compliance (GPAI provisions apply from August 2025; high-risk system obligations from August 2026)
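As a reference point for the logging item above, here is a minimal sketch of the fields an examination-ready interaction record might carry. The schema is illustrative; none of the frameworks prescribe specific field names:

```python
# Hypothetical JSON record for one AI interaction.
ai_interaction_record = {
    "request_id": "7a1e4c2d",
    "timestamp": "2025-03-14T10:22:05Z",
    "actor": "j.schmidt",                           # employee or service identity
    "business_function": "credit_underwriting",
    "model": "gpt-4o",
    "provider": "OpenAI",
    "data_categories": ["personal", "financial"],   # from prompt-layer classification
    "policy_result": "allowed_with_pseudonymisation",
    "human_review_required": True,
    "linked_decision_id": "credit-2025-00417",
}
```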
The path forward
The regulatory direction of travel is clear. MaRisk supervision will intensify on model risk. DORA examinations will start asking about AI in ICT risk programmes. The EU AI Act's high-risk obligations will kick in for financial services use cases. Firms that have built governance infrastructure now will be in a fundamentally different position from those still relying on policies alone.
The firms that handle this best treat AI governance not as a compliance burden but as an operational capability. The control plane that satisfies your auditor is the same one that gives your security team visibility, your operations leaders confidence, and your employees a safe path to using AI productively.
Qadar is built for regulated industries — book a call to see how we address MaRisk, DORA, and GDPR requirements in practice.