
Five ways to tackle AI risks in US finance by 2025

The finance sector continues to lead the way in AI adoption, driven by decades of machine learning use and growing interest in generative AI. With a massive $35 billion invested in AI in 2023 alone, the industry is gearing up for even greater innovation, with banking leading the charge.

Aleksander Tsuiman
Head of Product Legal & Privacy
December 18, 2024
Fraud
Finserv

Introduction

The rise of generative AI has reignited public interest in artificial intelligence, positioning the finance sector as a leader in AI adoption across industries. This is largely due to the widespread use of traditional AI, like machine learning (ML), which has been a staple in the sector since the late 2000s. Although generative AI is less prevalent today, it is expected to gain traction in the industry in the years ahead. In 2023, the financial services sector invested around 35 billion USD in AI, with banking leading the way, contributing approximately 21 billion USD.

Artificial Intelligence (AI) is revolutionizing financial services, bringing advancements in areas like fraud detection, customer support, predictive analytics, and risk management. It has become a crucial tool across multiple business divisions within financial institutions, each utilizing it in unique ways. In 2023, the operations segment saw the highest adoption of AI, followed closely by risk and compliance. 

Regulatory concerns

However, the rapid adoption of AI brings complex regulatory challenges. Navigating the evolving AI regulatory landscape is critical for financial services providers to maintain a competitive edge while ensuring compliance and ethical standards.

With the incoming Trump administration planning to rescind President Biden’s 2023 AI Executive Order and reduce regulatory barriers, the regulatory landscape is set to become less predictable. While these proposals have sparked debate among industry experts over their practicality, they signal a broader trend of prioritizing innovation over heavy regulation, one that is likely to influence the financial services sector.

This guide outlines five steps financial institutions can take to effectively address the regulatory risks of AI while maximizing its potential:

Step 1: Stay ahead of evolving AI regulations

AI regulation in the US is fragmented, with state-level initiatives in Colorado, Utah, and California setting the pace. Colorado and Utah have enacted cross-industry laws addressing AI risks, drawing inspiration from the EU’s framework. California’s recent AI legislation emphasizes transparency in AI systems, particularly for large-scale models and synthetic content.

As mentioned earlier, leadership changes have shifted regulatory priorities at the federal level. President Biden’s October 2023 Executive Order on AI emphasized safe, secure, and ethical AI development. However, the Trump administration’s plans to rescind this order signal a pivot toward less regulatory oversight. This evolving landscape underscores the importance of monitoring both state and federal developments to anticipate future requirements.

Financial services companies should also pay close attention to guidelines and statements issued by their respective regulators, which reflect both the acceptable use of technology and the risks regulators consider most important to address. For example, FinCEN has proposed a rule to strengthen and modernize financial institutions' anti-money laundering and countering the financing of terrorism (AML/CFT) programs pursuant to the Anti-Money Laundering Act of 2020 (AML Act). The proposed rule specifically recognizes machine learning and AI as innovative approaches financial service providers can adopt to comply more effectively with regulatory requirements.
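As an illustrative sketch of the kind of innovative approach the proposed rule contemplates, the snippet below uses an unsupervised anomaly detector to surface unusual transactions for analyst review. The features, values, and contamination rate are hypothetical; a production AML program would layer rules, models, and human investigation.

```python
# Illustrative sketch of ML-assisted AML monitoring: an unsupervised
# model flags unusual transactions for analyst review. Features and
# thresholds are hypothetical, not a regulatory standard.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Columns: amount (USD), transactions in past 24h, hour of day
normal = np.column_stack([rng.lognormal(4, 1, 500),
                          rng.poisson(2, 500),
                          rng.integers(8, 20, 500)])
suspicious = np.array([[9_500, 14, 3], [9_800, 11, 2]])  # structuring-like pattern
transactions = np.vstack([normal, suspicious])

model = IsolationForest(contamination=0.01, random_state=0).fit(transactions)
flags = model.predict(transactions)  # -1 = anomalous, 1 = normal
print(f"Flagged for review: {np.where(flags == -1)[0]}")
```

The point is not the specific model but the workflow: flagged items feed a human review queue, with decisions documented for examiners.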

Actionable strategy:

Financial services firms should develop adaptive strategies to address the regulatory patchwork. Proactively aligning AI strategies with emerging guidelines, both at the state and federal levels, ensures compliance, operational resilience, and readiness for future changes.

Step 2: Address bias and discrimination risks

AI systems in financial services can inadvertently perpetuate bias, especially in critical areas like lending, credit scoring, and hiring. Federal authorities, including the Federal Trade Commission (FTC), the Department of Justice (DOJ), and the Consumer Financial Protection Bureau (CFPB), have signaled their intent to enforce anti-discrimination laws rigorously. The 2023 “Joint Statement on Enforcement Efforts Against Discrimination and Bias in Automated Systems” reinforces the urgency of addressing this issue.

Actionable strategy:

Conduct regular audits of AI models to identify and mitigate biases. Implement diverse datasets and ensure fair outcomes in decisions such as loan approvals.

By proactively addressing bias, financial firms can build trust while avoiding legal repercussions.
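As a minimal sketch of what a first-pass bias audit might look like, the snippet below compares approval rates across a protected attribute and flags groups whose disparate impact ratio falls below the commonly cited four-fifths threshold. The column names, data, and threshold are illustrative assumptions, not a regulator-prescribed methodology.

```python
# Minimal bias-audit sketch: checks loan-approval outcomes for disparate
# impact across a protected attribute. Column names ("group", "approved")
# and the 0.8 threshold are illustrative assumptions.
import pandas as pd

def disparate_impact_report(df: pd.DataFrame,
                            group_col: str = "group",
                            outcome_col: str = "approved",
                            threshold: float = 0.8) -> pd.DataFrame:
    """Compare each group's approval rate to the most-favored group's."""
    rates = df.groupby(group_col)[outcome_col].mean()
    ratios = rates / rates.max()  # 1.0 for the most-favored group
    return pd.DataFrame({
        "approval_rate": rates,
        "impact_ratio": ratios,
        "flagged": ratios < threshold,  # below the four-fifths rule
    })

# Example with synthetic decision data
decisions = pd.DataFrame({
    "group": ["A"] * 100 + ["B"] * 100,
    "approved": [1] * 70 + [0] * 30 + [1] * 50 + [0] * 50,
})
print(disparate_impact_report(decisions))
```

A ratio check like this is only a screen; production audits typically add statistical significance testing and scrutiny of proxy features.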

Step 3: Strengthen data privacy and security

AI's reliance on data makes privacy and security top concerns. Financial institutions must align with existing data protection laws, such as the Gramm-Leach-Bliley Act (GLBA), and prepare for future state-specific regulations. The US today operates under a patchwork of state privacy laws, which must be read in conjunction with any exemptions they provide for financial services companies. Financial services companies should therefore determine whether the activities they pursue with the help of AI also subject them to specific privacy obligations, and if so, to what extent.

Actionable strategy:

Financial firms should establish robust data governance frameworks, ensuring that customer data used in AI systems is protected and compliant with all relevant laws.
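One building block of such a framework is keeping direct identifiers out of AI pipelines in raw form. The sketch below, using hypothetical field names, pseudonymizes identifiers with a keyed hash so records remain linkable for analytics without exposing the underlying values.

```python
# Pseudonymization sketch: replace direct identifiers with a keyed hash
# before records enter an AI pipeline. Field names are hypothetical; the
# secret key should live in a secrets manager, not in source code.
import hashlib
import hmac

SECRET_KEY = b"load-from-secrets-manager"  # placeholder, not a real key

def pseudonymize(value: str) -> str:
    """Deterministic keyed hash: same input -> same token, so records
    stay joinable across datasets without revealing the raw value."""
    return hmac.new(SECRET_KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()

def scrub_record(record: dict, pii_fields: tuple = ("ssn", "email", "phone")) -> dict:
    """Return a copy of the record with PII fields pseudonymized."""
    return {
        k: pseudonymize(v) if k in pii_fields and isinstance(v, str) else v
        for k, v in record.items()
    }

customer = {"ssn": "123-45-6789", "email": "jane@example.com", "credit_score": 712}
print(scrub_record(customer))
```

Deterministic hashing preserves joinability where analytics require it; where linkage is not needed, outright redaction is the safer choice.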

Step 4: Enhance operational transparency

Regulators and customers increasingly demand transparency in how AI systems operate. For financial institutions, explainable AI (XAI) practices are essential, particularly for high-impact decisions like loan approvals and fraud detection.

Actionable strategy:

Develop systems that allow regulators and customers to understand how AI decisions are made. For example, use visual dashboards to explain AI-driven outcomes in simple terms.
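For models with linear structure, a per-decision explanation can be as simple as surfacing each feature's signed contribution to the score. The sketch below assumes a scikit-learn logistic regression over hypothetical loan features; more complex models typically call for dedicated tooling such as SHAP or LIME.

```python
# Explainability sketch for a linear credit model: report each feature's
# signed contribution (coefficient * standardized value) to one decision.
# Feature names and data are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

features = ["income", "debt_ratio", "credit_history_years"]
X = np.array([[55_000, 0.42, 6], [82_000, 0.18, 12], [31_000, 0.61, 2],
              [95_000, 0.25, 15], [47_000, 0.50, 4], [68_000, 0.30, 9]])
y = np.array([0, 1, 0, 1, 0, 1])  # 1 = approved in this toy dataset

scaler = StandardScaler().fit(X)
model = LogisticRegression().fit(scaler.transform(X), y)

def explain(applicant: np.ndarray) -> None:
    """Print each feature's contribution to this applicant's log-odds."""
    z = scaler.transform(applicant.reshape(1, -1))[0]
    contributions = model.coef_[0] * z
    for name, c in sorted(zip(features, contributions), key=lambda t: -abs(t[1])):
        print(f"{name:>22}: {c:+.3f}")
    print(f"{'baseline (intercept)':>22}: {model.intercept_[0]:+.3f}")

explain(np.array([60_000, 0.35, 7]))
```

Contributions like these can feed the visual dashboards described above, or the adverse-action notices lenders already provide.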

Operational transparency not only helps compliance but also fosters customer confidence in AI systems.

Step 5: Balance innovation with regulation

The financial services sector must balance the need for innovation with the responsibility to comply with regulations. Overregulation risks stifling AI’s potential, while under-regulation could lead to ethical lapses and erode consumer trust.

Actionable strategy:

Engage in policy dialogues to advocate for balanced AI governance. Collaborate with regulators and industry bodies to ensure practical, clear, and consistent policies.

Firms that prioritize ethical AI practices and advocate for pragmatic regulations will be better positioned to lead in a competitive landscape.

Webull: Strengthening security and trust through AI-powered Identity Verification

Webull, a leading online trading platform, is at the forefront of leveraging cutting-edge technology to ensure secure and seamless user experiences. Partnering with Veriff, Webull implemented advanced identity verification solutions to enhance fraud detection and streamline its onboarding process. This collaboration not only fortified Webull’s compliance with regulatory standards but also demonstrated a commitment to operational transparency and user trust. By integrating Veriff’s AI-powered systems, Webull effectively addressed critical challenges in fraud prevention while maintaining an innovative edge in the highly competitive financial services industry.

“Providing our users with a safe and secure platform has always been a top priority at Webull, and Veriff has helped us to do so. Compared to previous partners, Veriff has been able to support us in identifying fraudulent activity accurately and effectively – even as platform user numbers climbed.”

Brendan Fuller, Chief Risk Officer, Webull

Conclusion: Building a resilient future

AI holds immense potential to revolutionize financial services, offering innovations that enhance efficiency, security, and customer experience. However, these advancements come with intricate regulatory challenges. Addressing key issues such as bias, data privacy, and transparency while remaining adaptable to shifting legislative priorities is crucial.

With the Trump administration poised to roll back certain federal AI regulations, financial institutions must navigate a landscape where state-level policies dominate and federal oversight becomes less centralized. This deregulated approach may lower compliance costs, but also creates uncertainty, underscoring the need for financial firms to monitor state-driven initiatives and engage in policy dialogues to influence practical, balanced AI governance.

Financial institutions that balance innovation and compliance will be better positioned to thrive amid these complexities. Firms can build trust and resilience by embedding ethical AI practices, fostering transparency, and staying ahead of regulatory changes. Those who proactively adapt to both state and federal developments can lead the way in creating a robust, trustworthy financial ecosystem in an increasingly AI-driven world.