Introduction

One day soon, you may be able to ask Siri to pay your cell phone bill from funds in your checking account. Or ask Alexa to recommend investments tailored to your risk profile. Or tell Gemini to manage your investment portfolio so you can travel in retirement. Such capabilities will be possible thanks to generative artificial intelligence, or Generative AI. Generative AI agents are computer systems that can interpret and execute requests like these without additional human interaction, and they have been described as “the next frontier of Generative AI” (Yee et al. 2024). They have the potential to change the way individuals and firms interact with their banks and other financial services providers, opening the door to efficiency and economic growth, but also posing new risks to consumers, investors, and the safety and soundness of the financial system.

Generative AI agents may assist retail consumers with wealth management, such as working through personal assistants like Siri and Alexa to recommend financial products or serving as robo-advisors tailored to investors’ needs. They may also help nonfinancial firms through off-the-shelf treasury management solutions that balance financial returns with liquidity needs, and help financial institutions with risk management and regulatory compliance, such as with automated fraud detection, customer identity verification, and risk assessments through highly customized software (Zheng et al. 2019; Polak et al. 2020; OECD 2021).

Simultaneously, Generative AI agents threaten to destabilize the financial system, sending it swinging from crisis to crisis. Malicious actors can use them to defraud consumers, execute cyberattacks against financial institutions, and engage in market manipulation (Fang et al. 2024b; Mizuta 2020; Hsu 2024). Financial institutions’ own Generative AI tools can hallucinate (that is, produce false or misleading outputs), resulting in harms to their customers, the institutions themselves, and the financial markets in which they operate (CFTC 2024). And when individuals and real-economy firms rely on a small number of Generative AI providers for financial decisions, their agents can engage in herding behavior that results in bank runs or flash crashes (Gensler and Bailey 2020).

Generative AI, in its current form, is not “good” or “bad” in and of itself. The large language models that power AI agents are simply computer systems capable of generating new content, such as images, text, audio, or video, from a simple prompt. These systems are best considered a form of applied statistics: they capture patterns in the data on which they have been trained and create outputs that resemble the training data but are unique variations. Unlike in other industries, where Generative AI is novel, this software is just the latest in a long line of algorithms and machine learning techniques that have been used in financial markets for decades. It can serve as an input into human decision-making, as a copilot that makes decisions in coordination with humans, or as an agent that makes decisions on behalf of humans (US Senate Committee on Homeland Security and Governmental Affairs 2024; Hsu 2024).
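
To make the “applied statistics” framing concrete, consider a toy generative model. The sketch below is a simple Markov-chain text generator, not a large language model, and its training text and function names are invented for illustration; but the principle is the same one the paragraph above describes: capture patterns in training data, then sample new output that resembles, without copying, that data.

```python
import random
from collections import defaultdict

# Toy illustration of generative modeling as applied statistics: learn which
# word tends to follow which, then sample new sequences that resemble the
# training text. (A real large language model is vastly larger and predicts
# tokens with a neural network, but the statistical principle is the same.)

def train(corpus: str) -> dict:
    """Count word-to-next-word transitions observed in the training text."""
    transitions = defaultdict(list)
    words = corpus.split()
    for current, nxt in zip(words, words[1:]):
        transitions[current].append(nxt)
    return transitions

def generate(transitions: dict, start: str, length: int = 10) -> str:
    """Sample a new word sequence by following learned transitions at random."""
    word, output = start, [start]
    for _ in range(length):
        followers = transitions.get(word)
        if not followers:
            break
        word = random.choice(followers)
        output.append(word)
    return " ".join(output)

# Hypothetical training text about a financial assistant.
corpus = (
    "the agent pays the bill the agent checks the balance "
    "the agent reports the balance to the customer"
)
model = train(corpus)
print(generate(model, "the"))  # e.g., "the balance the agent pays the bill ..."
```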

The concern in this brief, then, is not simply the use of algorithms in finance, but a world in which AI agents are widely available to individuals and small businesses as well as to the largest financial firms; in which malicious actors can easily use Generative AI to scam financial institutions and their customers; and in which financial institutions use Generative AI, rather than human employees, to interact with their customers.

In particular, this brief is concerned with the harms that may result when individuals’ and small businesses’ AI agents interact with large financial institutions and scammers, especially if those customers rely on developers’ assurances that AI agents will act in the customers’ interests.

Some United States financial regulators are already working to address the harms Generative AI poses. The Consumer Financial Protection Bureau (CFPB) has explained that federal law does “not permit creditors to use complex algorithms when doing so means they cannot provide the specific and accurate reasons for adverse actions” and has penalized financial institutions for relying on faulty automated compliance systems (CFPB 2022a; CFPB 2022b). The federal banking agencies have created offices to study financial innovation and AI (Phillips and Conner 2024). The Securities and Exchange Commission (SEC) has proposed a regulation addressing brokers’ uses of Generative AI and has begun examining investment advisers’ uses of Generative AI for offering financial advice (US Securities and Exchange Commission 2023).

Nevertheless, more needs to be done, especially when it comes to the use of AI agents in the financial system. This brief highlights the expected rise of AI agents and the risks that their use by financial institutions, real-economy firms, and individuals poses to all aspects of the financial system and to the families and businesses who rely on it. It concludes with recommendations to Congress and regulators.

Financial System Risks from AI Agents

1. Herding: When multiple AI agents use similar algorithms and training data, they may react to market conditions in nearly identical ways, a behavior known as herding. Algorithmic biases may lead agents to favor some financial products over others without economic justification, and rapid, synchronized movements by a large number of customers or market participants can lead to bank runs and flash crashes, as the sketch below illustrates.
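
The following minimal simulation is illustrative only, not a market model; the shock size, sell thresholds, and agent counts are hypothetical. It shows the core mechanism: agents sharing a single decision rule all trip on the same signal at once, while agents with independently calibrated rules respond gradually.

```python
import random

# Hypothetical sketch of herding: agents sharing one sell rule react to a
# common price shock in lockstep; agents with varied rules respond piecemeal.

def fraction_selling(thresholds: list[float], shock: float = -0.03) -> float:
    """Fraction of agents whose sell rule is triggered by the same price shock."""
    return sum(shock <= t for t in thresholds) / len(thresholds)

n = 100
identical = [-0.02] * n                                      # same model, same trigger
diverse = [random.uniform(-0.10, -0.02) for _ in range(n)]   # independently set triggers

print(f"identical agents selling: {fraction_selling(identical):.0%}")  # 100%
print(f"diverse agents selling:   {fraction_selling(diverse):.0%}")    # roughly 12%
```

When every agent sells at once, the resulting one-sided order flow is the mechanism behind flash crashes; when sell decisions are spread across thresholds, the same shock is absorbed gradually.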

2. Systemic risk: Reliance on a small number of providers of AI agents introduces a single point of failure. A technical failure or security breach at a single service provider could affect the AI agents of a large segment of the population. Depending on the nature of the failure or breach, it could cause not just herding, but uneconomical herding that creates cascading effects throughout the financial system. The sketch below illustrates the underlying concentration problem.
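
As a rough, hypothetical illustration of concentration risk, this sketch compares a market in which one provider serves most agents against a fragmented market; the market shares and provider counts are invented for illustration.

```python
import random

# Hypothetical sketch of single-point-of-failure risk: the share of agents
# knocked offline by one provider outage depends on market concentration.

def outage_impact(provider_of_agent: list[int], failed_provider: int) -> float:
    """Share of agents disabled when one provider fails."""
    affected = sum(p == failed_provider for p in provider_of_agent)
    return affected / len(provider_of_agent)

n = 10_000
concentrated = random.choices([0, 1, 2], weights=[90, 5, 5], k=n)  # one dominant provider
fragmented = random.choices(range(10), k=n)                        # ten equal providers

print(f"concentrated market: {outage_impact(concentrated, 0):.0%} offline")  # ~90%
print(f"fragmented market:   {outage_impact(fragmented, 0):.0%} offline")    # ~10%
```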

3. Reduced competition: The provision of AI agents may be an oligopolistic market, if not a natural monopoly. Thus, all of the well-documented negative consequences of reduced competition, including higher prices, greater inequality, and lower rates of innovation, may apply to AI agents (Steinbaum and Stucke 2018). Without competitive pressure, service providers may lack the incentive to improve AI capabilities, develop new features, or allow for customization. A lack of alternatives could also allow providers to charge premium prices for their services, driving up the cost of what may come to be considered a necessity of everyday life. And if AI agents are not working appropriately, customers may have no alternatives to which they can switch.

4. Fiduciary conflicts: The phrase “AI agent” implies a legal relationship whereby agents are deemed fiduciaries of, and must act for the benefit of, their principals. Yet it is not guaranteed that AI agents will be designed to act in the interest of, and only in the interest of, their licensees. For example, in interactions between a licensee (such as an individual customer or a financial firm) and an AI service provider, agents may be designed to favor the provider. Similarly, in interactions between two licensees, agents may be designed to favor one party over the other; and even unintentionally, an AI agent may struggle to truly act in the best interest of its client when both parties are using the same agent.

Suggested Citation

Phillips, Todd. 2024. “The Risks of Generative AI Agents to Financial Services.” Roosevelt Institute, September 26, 2024.
