How will the AI Act alter the landscape for fintechs? Key requirements and penalties
On 2 August 2025, further elements of the AI Act came into force:
- rules for general-purpose AI models (GPAIs).
- an EU and national oversight system.
- a conformity assessment infrastructure (notified bodies) and a penalty regime (with the exception of Article 101, which governs fines for GPAI providers).

Prohibitions on practices deemed unacceptable have been in force since 2 February 2025. However, the bulk of the obligations for “high-risk” systems will apply from 2 August 2026, and those under Article 6(1) (high-risk AI embedded in products covered by EU harmonisation legislation) from 2 August 2027.
Fintechs make extensive use of AI in risk assessment, customer onboarding (KYC), fraud detection and customer service. The AI Act classifies AI systems by risk level. Crucially for the financial sector, AI used for creditworthiness assessment or credit scoring of natural persons, as well as many biometric applications, qualifies as “high risk”, triggering the most extensive set of requirements (documentation, registration, conformity assessment, human oversight, etc.).
AI Act: key compliance obligations for fintechs
- Risk management & FRIA: a documented process for identifying and mitigating risks; in certain cases, an obligation to assess the impact on fundamental rights (FRIA) and an obligation to inform individuals that a decision concerning them is supported by a high-risk system.
- Data quality & governance: training data must be adequate, representative and free from bias that leads to discrimination; model update and validation processes are required throughout the lifecycle (a minimal bias-check sketch follows this list).
- Technical documentation & logging: full documentation of the purpose, architecture, metrics and tests, plus recording of risk-relevant events (e.g. credit decisions, AML alerts) to enable auditing and incident investigation (see the logging sketch after this list).
- Transparency towards the user: clear messages when interacting with AI (e.g. a chatbot), labelling of generated content (e.g. deepfakes).
- Human oversight: design and procedures must allow for effective human intervention, including appeal paths from algorithmic decisions.
- Resilience, accuracy, cybersecurity: regular model validation, quality monitoring in production, protection of models/data against manipulation and leakage.
- Conformity assessment, registration, CE marking: before a high-risk system is placed on the market, it must undergo conformity assessment and be registered in the EU database, followed by a declaration of conformity and CE marking.
- User (deployer) obligations: use in accordance with the instructions, monitoring, incident reporting, log retention and assignment of competent human oversight within the financial institution.
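To make the data-governance point concrete, here is a minimal sketch of a pre-deployment bias check. It assumes a hypothetical training dataset of (protected group, approval label) pairs; the four-fifths threshold is a common fairness heuristic, not a figure taken from the AI Act itself.

```python
from collections import defaultdict

# Hypothetical training records: (protected_group, approved_label)
records = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 1), ("group_b", 0), ("group_b", 0), ("group_b", 0),
]

def disparate_impact(records):
    """Return the approval rate per group and the ratio of the lowest
    to the highest rate (the disparate impact ratio)."""
    totals, approvals = defaultdict(int), defaultdict(int)
    for group, label in records:
        totals[group] += 1
        approvals[group] += label
    rates = {g: approvals[g] / totals[g] for g in totals}
    ratio = min(rates.values()) / max(rates.values())
    return rates, ratio

rates, ratio = disparate_impact(records)
print(rates)        # {'group_a': 0.75, 'group_b': 0.25}
if ratio < 0.8:     # four-fifths rule, a common heuristic
    print(f"Potential bias: disparate impact ratio {ratio:.2f} < 0.80")
```

A check like this belongs in the validation pipeline both before go-live and on every model refresh, so drift in the training data is caught as part of the documented lifecycle process.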
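And here is a minimal sketch of risk-relevant event logging for a credit decision, assuming a hypothetical scoring service; the field names, model identifier and hashing choice are illustrative, not prescribed by the Act.

```python
import hashlib
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("ai_audit")

def log_credit_decision(applicant_id, features, score, decision,
                        model_version, reviewer=None):
    """Write a structured, append-only audit record for one decision.
    Inputs are hashed so the event trail can be retained without
    storing raw personal data inside the log itself."""
    record = {
        "event": "credit_decision",
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "applicant_id": applicant_id,
        "input_hash": hashlib.sha256(
            json.dumps(features, sort_keys=True).encode()
        ).hexdigest(),
        "model_version": model_version,  # ties the event to one model build
        "score": score,
        "decision": decision,
        "human_reviewer": reviewer,      # filled when a human confirms/overrides
    }
    audit_log.info(json.dumps(record))
    return record

log_credit_decision("app-0042", {"income": 52000, "tenure_months": 18},
                    score=0.62, decision="refer_to_human",
                    model_version="scoring-v3.1.4")
```

Keeping the model version and an input hash in every record is what later lets an auditor or incident investigator reconstruct which model build produced which decision.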

What are the penalties?
- prohibited practices: up to EUR 35 million or 7% of total worldwide annual turnover, whichever is higher.
- breaches of obligations relating to, among other things, high-risk systems and GPAI models: up to EUR 15 million or 3% of turnover, whichever is higher.
- supplying false or incomplete information: up to EUR 7.5 million or 1% of turnover, whichever is higher (for SMEs, the lower of the two amounts applies).
Practical steps for fintech institutions
- AI inventory & risk classification: list all systems (front-, mid- and back-office), identify those that are potentially “high-risk” (scoring, biometrics, certain AML use cases), and in case of uncertainty, err on the side of caution and assign the higher class (see the inventory sketch after this list).
- AI Governance Programme: appoint an owner (AI Compliance Officer) and an AI committee; include the AI Act in the risk matrix alongside GDPR, DORA, MiFID II/PSD2.
- Model lifecycle policies and procedures: data governance (sources, documentation, refresh), validation/bias testing before and after implementation, production monitoring, incident procedures, and a kill switch.
- Transparency and UX: label chatbots/AI content; provide a human fallback for financially significant decisions.
- Human oversight: formalise checklists for analysts/risk units empowered to suspend or override AI decisions; implement training (a deployment-wrapper sketch with an override path and kill switch follows this list).
- Vendor due diligence: update contracts with suppliers (AI Act obligations, audit rights, support with controls); require technical documentation and CE marking for high-risk solutions, plus compliance information, especially for GPAI.
- Registration and conformity assessment: for high-risk solutions, plan conformity assessment and registration in the EU database before go-live; assess whether your own modifications make you a “provider” within the meaning of the AI Act.
- Follow guidance: monitor AI Office/EC guidance (especially on GPAI and models with “systemic risk”) and CEN/CENELEC standards; the EC is publishing further practical documents.
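As a starting point for the inventory step above, here is a minimal sketch of an AI system register, assuming hypothetical system names and a deliberately simplified keyword mapping; a real register would track the actual Annex III wording, and the helper defaults to the higher class when classification is uncertain, as recommended above.

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    MINIMAL = "minimal"
    LIMITED = "limited"       # transparency duties (e.g. chatbots)
    HIGH = "high"             # Annex III use cases (e.g. credit scoring)
    PROHIBITED = "prohibited"

# Illustrative keywords only; a real mapping would follow Annex III.
HIGH_RISK_USE_CASES = {"credit scoring", "remote biometric identification"}

def classify(use_case: str, uncertain: bool = False) -> RiskTier:
    """Assign a risk tier; when classification is unclear, default upward."""
    if use_case in HIGH_RISK_USE_CASES:
        return RiskTier.HIGH
    if uncertain:
        return RiskTier.HIGH  # err on the side of caution, as above
    return RiskTier.LIMITED

@dataclass
class AISystem:
    name: str
    use_case: str
    vendor: str
    tier: RiskTier

inventory = [
    AISystem("ScoreEngine", "credit scoring", "in-house",
             classify("credit scoring")),
    AISystem("HelpBot", "customer chat", "VendorX",
             classify("customer chat", uncertain=True)),
]
for system in inventory:
    print(f"{system.name}: {system.tier.value}")
```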
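For the oversight and kill-switch items, here is a minimal sketch of a deployment wrapper; the confidence threshold, the `model.predict` interface and the stub model are all hypothetical, standing in for whatever your stack actually uses.

```python
from dataclasses import dataclass

KILL_SWITCH_ON = False       # flipped by ops to halt automated decisions
CONFIDENCE_THRESHOLD = 0.75  # outside this band, route to a human analyst

@dataclass
class Decision:
    outcome: str             # "approved", "rejected", "pending_review"
    automated: bool

def decide(model, application) -> Decision:
    """Wrap model inference with a kill switch and a human-review path."""
    if KILL_SWITCH_ON:
        # Incident mode: no automated decisions leave the system.
        return Decision("pending_review", automated=False)
    score = model.predict(application)  # hypothetical interface
    if score >= CONFIDENCE_THRESHOLD:
        return Decision("approved", automated=True)
    if score <= 1 - CONFIDENCE_THRESHOLD:
        return Decision("rejected", automated=True)
    # Ambiguous scores go to an analyst who can override the model.
    return Decision("pending_review", automated=False)

class StubModel:
    def predict(self, application) -> float:
        return 0.5           # stand-in score for demonstration

print(decide(StubModel(), {"income": 52000}))
```

Keeping the override path in the decision flow itself, rather than as an after-the-fact process, is what makes the human intervention required by the Act demonstrably effective.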
Protect your fintech from risk and gain a competitive edge
The AI Act does not hinder innovation but rewards those who implement solutions that comply with the principles of “trustworthy AI.” Companies that prepare their scoring systems, biometrics, and chatbots for regulatory requirements today will pass audits faster, increase their credibility in B2B/B2G tenders, and minimise the risk of penalties.
Don’t get left behind! Our team will conduct a comprehensive AI Act Ready audit: from inventory and risk mapping, through gap analysis and vendor clauses, to a conformity assessment plan and training. Contact us if you are developing scoring, e-onboarding with biometrics, AML/fraud AI tools, or GPAI-based chatbots.