What the EU and Colorado AI Acts Mean for Financial Services

Efi Pylarinou
4 min read · May 30, 2024

The European Union’s Artificial Intelligence Act is set to become the world’s most comprehensive AI regulation. It is expected to enter into force this year, or next year at the latest. As a risk-based framework, it will have significant implications for how financial services firms develop and deploy AI systems.

Colorado is the first US state to pass an AI Act, signed on May 17, 2024, and taking effect on February 1, 2026.

Risk Categories & Prohibited Practices — Creditworthiness

The EU AI Act categorizes AI systems into four risk levels: unacceptable risk (prohibited), high-risk (heavily regulated), limited risk (subject to transparency obligations), and minimal risk (permitted).
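
To make the four tiers concrete, here is a minimal sketch in Python (emphatically not legal advice) mapping illustrative financial-services use cases to tiers. The example use cases and their tier assignments are my own assumptions drawn from the discussion in this article, not an official taxonomy.

```python
# Illustrative only: a simplified mapping of example use cases to the
# EU AI Act's four risk tiers. Real classification requires legal analysis.
RISK_TIER_EXAMPLES = {
    "unacceptable": ["social scoring by public authorities",
                     "exploiting vulnerabilities to distort behavior"],
    "high": ["creditworthiness assessment", "credit scoring"],
    "limited": ["customer-facing AI chatbot"],
    "minimal": ["spam filtering"],
}

def risk_tier(use_case: str) -> str:
    """Return the illustrative tier for a named use case."""
    for tier, examples in RISK_TIER_EXAMPLES.items():
        if use_case in examples:
            return tier
    return "unclassified"  # anything unlisted needs case-by-case review

print(risk_tier("credit scoring"))  # -> "high"
```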

It applies to all providers, deployers, importers, and distributors of AI systems in the EU market! All financial services providers, Fintechs, and financial services tech vendors, regardless of where they are headquartered, will be subject to the requirements and restrictions of the EU AI Act.

As data centers and cloud services increasingly `bake` AI into their hardware and software, I foresee that nobody will be excluded from this Act.

The Colorado AI Act focuses primarily on high-risk AI systems that make consequential decisions. It applies to developers and deployers of AI systems doing business in Colorado. It doesn’t explicitly prohibit specific AI practices; instead, it imposes obligations on developers and deployers to mitigate risks of algorithmic discrimination, maintain risk management policies, conduct impact assessments, provide consumer disclosures and rights, and report incidents to the Attorney General, who has sole enforcement authority.

For financial services, the most relevant prohibited practices of the EU AI Act are AI systems that exploit vulnerabilities to distort behavior and cause significant harm, as well as AI-based social scoring by public authorities.

AI systems used for evaluating creditworthiness are classified as high-risk, except when used solely for detecting fraud.

As credit scoring is classified as high-risk, all credit scoring models must comply with the Act’s extensive requirements for high-risk systems.

Deployers of AI credit scoring systems will have to conduct regular impact assessments, maintain risk management policies, and provide consumers with meaningful information about how the AI system contributes to credit decisions.
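
What could “meaningful information” look like in practice? Below is a minimal sketch that reports the factors pulling a score down in a simple linear model. The feature names, weights, and applicant values are illustrative assumptions on my part, not any real scoring model.

```python
# Illustrative weights for a toy linear credit-scoring model.
WEIGHTS = {"payment_history": 0.40, "utilization": -0.30,
           "account_age_years": 0.15, "recent_inquiries": -0.15}

def top_reasons(applicant: dict, n: int = 2) -> list:
    """Return the n features contributing most to lowering the score."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    return sorted(contributions, key=contributions.get)[:n]

applicant = {"payment_history": 0.6, "utilization": 0.9,
             "account_age_years": 2.0, "recent_inquiries": 3.0}
print("Main factors reducing your score:", top_reasons(applicant))
# -> ['recent_inquiries', 'utilization']
```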

To mitigate algorithmic bias, developers will need to ensure training data is sufficiently representative and free of errors. They must also achieve appropriate levels of accuracy and robustness across different demographic groups.
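
One concrete way to check robustness across groups is to compare accuracy per demographic segment and flag material gaps for review. The sketch below does this with toy data; the group labels, sample values, and what counts as a “material” gap are illustrative assumptions, not thresholds set by either Act.

```python
from collections import defaultdict

def accuracy_by_group(y_true, y_pred, groups):
    """Return {group: accuracy} for parallel lists of labels, predictions, groups."""
    hits, totals = defaultdict(int), defaultdict(int)
    for t, p, g in zip(y_true, y_pred, groups):
        totals[g] += 1
        hits[g] += int(t == p)
    return {g: hits[g] / totals[g] for g in totals}

# Toy example: two demographic groups, A and B.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 1]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

per_group = accuracy_by_group(y_true, y_pred, groups)
gap = max(per_group.values()) - min(per_group.values())
print(per_group, f"gap={gap:.2f}")  # {'A': 0.75, 'B': 0.5} gap=0.25 -> review
```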

Obligations for High-Risk AI Developers

Providers and developers of high-risk AI systems, which include those used for creditworthiness assessments, will face extensive requirements. They will have to:

- Establish risk management systems and data governance

- Register stand-alone high-risk systems in an EU database

- Prepare technical documentation for conformity assessments

- Enable human oversight and achieve appropriate levels of accuracy, robustness, and security
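
As one way to operationalize the documentation duties above, a provider might keep a structured record per high-risk system. The field names below are my own illustrative assumptions about what such a record could capture, not the Act’s official schema or registration format.

```python
from dataclasses import dataclass, field

@dataclass
class HighRiskSystemRecord:
    """Illustrative internal record supporting technical documentation."""
    system_name: str
    intended_purpose: str
    training_data_summary: str              # provenance, representativeness notes
    accuracy_metrics: dict                  # e.g. per-group accuracy results
    human_oversight_measures: list = field(default_factory=list)
    eu_database_registration_id: str = ""   # filled in once registered

record = HighRiskSystemRecord(
    system_name="retail-credit-scoring-v3",
    intended_purpose="Assess consumer creditworthiness for loan applications",
    training_data_summary="2019-2023 loan book; reviewed for representativeness",
    accuracy_metrics={"overall": 0.91, "max_group_gap": 0.02},
    human_oversight_measures=["analyst review of all declines"],
)
```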

Transparency in Conversational Banking — AI Chatbots

While everything related to credit will be heavily impacted because those systems are classified as high-risk, conversational banking is less affected. AI chatbots fall under the limited risk category! Therefore, deployers of AI chatbots are obliged to make sure their users are aware they are interacting with an AI system.
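
In practice, this transparency obligation can be as simple as disclosing the AI’s nature at the start of a conversation. A minimal sketch, where the disclosure wording and the `generate_reply` helper are illustrative assumptions:

```python
AI_DISCLOSURE = "You are chatting with an AI assistant, not a human agent."

def generate_reply(user_message: str) -> str:
    # Placeholder for the bank's actual chatbot backend.
    return f"Here is what I found about: {user_message}"

def chatbot_turn(user_message: str, first_turn: bool) -> str:
    reply = generate_reply(user_message)
    # Disclose the AI nature of the system at the start of the conversation.
    return f"{AI_DISCLOSURE}\n{reply}" if first_turn else reply

print(chatbot_turn("my card was declined", first_turn=True))
```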

Biometric Identification in Payments

The AI Act prohibits the use of ‘real-time’ remote biometric identification (RBI) systems in publicly accessible spaces, with narrow exceptions for law enforcement purposes.

However, biometric verification systems that confirm a person’s claimed identity are not considered high-risk.

Most biometric authentication methods used by banks and e-commerce providers with embedded payment capabilities should be fine under the AI Act. They do, however, need to be careful about real-time facial recognition in retail environments and, more importantly, must avoid inferring sensitive attributes (race, religion, etc.) from biometrics.
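
The key technical distinction is between 1:1 verification against a claimed identity and 1:N identification across a database. A minimal sketch of the two operations; the float-list templates, toy similarity function, and threshold are illustrative assumptions, not how production biometric systems work:

```python
def similarity(a: list, b: list) -> float:
    """Toy similarity: 1 minus mean absolute difference of template values."""
    return 1 - sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def verify(probe, enrolled_template, threshold=0.9) -> bool:
    """1:1 check against the *claimed* identity -- generally not high-risk."""
    return similarity(probe, enrolled_template) >= threshold

def identify(probe, database, threshold=0.9):
    """1:N search across many people -- the heavily restricted use case."""
    matches = {pid: similarity(probe, t) for pid, t in database.items()}
    best = max(matches, key=matches.get)
    return best if matches[best] >= threshold else None
```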

Governance and Enforcement

Enforcement of the EU AI Act will be handled by national supervisory authorities in each EU member state, with the European Artificial Intelligence Board providing guidance. Non-compliance with the prohibited-practices rules risks fines of up to €35 million or 7% of global annual turnover, whichever is higher.
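
As a worked example of how that ceiling is computed (the higher of the fixed cap and the turnover percentage), with an illustrative turnover figure:

```python
def max_fine(turnover_eur: float, cap_eur: float = 35_000_000,
             pct: float = 0.07) -> float:
    """Return the maximum fine: the greater of the cap and pct * turnover."""
    return max(cap_eur, pct * turnover_eur)

# A firm with EUR 2 billion in global annual turnover:
print(f"EUR {max_fine(2_000_000_000):,.0f}")  # EUR 140,000,000 (7% > cap)
```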

The enforcement of the Colorado AI Act will be handled exclusively by the Attorney General. Violations are treated as unfair trade practices under the Colorado Consumer Protection Act, carrying civil penalties of up to $20,000 per violation.

Preparation Steps

With the AI Act expected to take effect in 2024–2025, financial institutions should start preparing by:

- Auditing their AI systems and assessing which ones are likely to be classified as high-risk

- Reviewing development processes and governance frameworks for alignment with the AI Act

By proactively managing AI risks and embracing transparency, financial services firms can build trust with customers and regulators in our AI-powered future.

The EU has created a six-page high-level summary of the AI Act: https://buff.ly/3vodRZF

The Future of Privacy Forum (FPF) has released a Two-Page Fact Sheet summarizing the key definitions, consumer rights, and business obligations of the Colorado AI Act.

📌 Subscribe to my YouTube Channel with my insights and industry leader interviews. New video every Wednesday: https://www.youtube.com/EfiPylarinou

📌 Twitter: https://twitter.com/efipm

📌 Apple Podcast 👉 https://buff.ly/3P1Cp1Z

📌 Spotify Podcast 👉 https://buff.ly/3xPyWaV

📌 Linkedin: https://www.linkedin.com/in/efipylarinou/

📌 TikTok: https://www.tiktok.com/@efiglobal
