The accelerating pace of artificial intelligence development presents a significant challenge for financial industry compliance, with executives warning that regulators are struggling to keep up. This sentiment was a recurring theme at the Financial Industry Regulatory Authority’s (FINRA) annual conference, where discussions highlighted the potential for AI, particularly generative AI tools, to clash with existing securities regulations.
Dan Gallagher, Chief Legal, Compliance, and Corporate Affairs Officer at Robinhood Markets and a member of FINRA’s Board of Governors, voiced these concerns during a panel discussion. He pointed out that while AI is rapidly integrating into customer interactions and investment decision-making processes, the regulatory landscape has yet to establish clear guidelines. This disconnect, Gallagher warned, could lead to unintended violations of securities laws, including Regulation Best Interest and Regulation S-P, which govern how firms must act in their customers’ best interests and protect their non-public personal information, respectively.
"The demand is going to force regulators to come around as opposed to getting ahead of it," Gallagher stated, encapsulating the prevailing anxiety within the industry. He elaborated on the complex situation where firms are developing AI-powered tools to assist clients with investment decisions, but the current rules, when interpreted strictly, present a degree of incongruity. "I’m not saying FINRA says no or the SEC says no, but if you read the rules, it’s mildly incongruous," he noted. The urgency stems from the desire to prevent American investors from relying on third-party, potentially less regulated, sources for investment advice when using brokerage applications.
The conference, held in Washington, D.C., was heavily influenced by the pervasive presence of AI, with sessions dedicated to both its transformative potential and the compliance hurdles it creates. Many financial firms are already deploying AI assistants for their advisors, and the integration of advanced AI models into wealth management platforms is becoming increasingly common. For instance, Anthropic announced earlier this year an expansion of its "plug-ins" for Claude, its generative AI model, specifically targeting wealth managers, investment bankers, equity research analysts, and private equity firms. These tools can analyze market data, generate reports, and even assist in client communication.
However, the core of Gallagher’s concern lies in the evolving expectations of consumers. Customers are increasingly looking to AI not just for information, but for direct guidance and even automated execution of investment decisions. The question then arises: is it more prudent for firms to develop and deploy such AI-driven capabilities internally, within a controlled environment, rather than letting clients seek advice from external, less secure, or less regulated sources?
"Why have them go do it with a third party when you can build it internally in a walled garden that’s more protected, where there’s better data, and quite frankly, where it’s not scraping Reddit for what it’s going to recommend to you? It’s actually using your own data," Gallagher argued. He contrasted this ideal scenario with the current reality, stating, "But right now, we sit here, and Claude can do it and I can’t, on its face." This highlights a regulatory gap where external AI tools might be capable of providing investment advice that is currently difficult or impossible for regulated firms to offer through their own AI systems without clear guidance.
Navigating the Regulatory Labyrinth: FINRA’s Perspective
Nathaniel Stankard, Executive Vice President and Chief of Staff at FINRA, acknowledged the "transition" phase the industry and regulators are in, particularly in the wake of the widespread public introduction of models like ChatGPT. He emphasized FINRA’s focus on identifying where regulatory intervention is truly necessary versus where existing rules can adequately address AI-related compliance.
"From a regulatory standpoint, you want to say, ‘All right, let’s not stymie innovation. Let’s take our time, let’s learn what the users are,’" Stankard explained. However, he also stressed the growing need for proactive engagement. "And now we’re hitting a level of maturation and expansion where I think that, from a regulatory perspective, we want to understand where it’s actually productive to engage where we need to, whether it’s to protect investors or protect funds." This indicates a shift from a purely observational stance to a more active role in shaping the regulatory environment around AI.
The challenge for FINRA, and indeed for all financial regulators, lies in balancing the promotion of technological advancement with the imperative of investor protection. The sheer speed at which AI capabilities are evolving makes it difficult for rule-making bodies to establish frameworks that are both effective and future-proof. Historically, regulatory bodies have taken years to develop and implement new rules, a timeline that is increasingly out of sync with the rapid iterations of AI technology.

The Plight of Smaller Firms in the AI Era
The complexities of AI compliance are not uniformly distributed across the industry. Wendy Lanton, Chief Operations and Compliance Officer at Herold & Lantern Investments and a small-firm representative on FINRA’s Board of Governors, highlighted the particular difficulties faced by smaller firms. The technological infrastructure and expertise required to implement and oversee AI solutions can be a significant barrier.
"I find that there might be many solutions out there, and as a small firm, you can’t build it yourself. You need a vendor," Lanton stated. This reliance on third-party vendors introduces a new layer of compliance challenges. Firms must vet vendors rigorously, ensure their AI solutions align with regulatory requirements, and manage multiple vendor relationships. "And so you say, ‘okay, well this vendor has this, and this vendor has that.’ Now, I’ve got 10 vendors: A, I can’t afford it, and B, I can’t manage the relationships," she added, illustrating the operational burden.
For smaller firms, the cost of acquiring, integrating, and maintaining sophisticated AI compliance tools can be prohibitive. This could create a competitive disadvantage, where larger institutions with greater resources are better positioned to leverage AI for both operational efficiency and enhanced client services, while also managing the associated compliance risks. This disparity could exacerbate existing market inequalities.
Frontier AI and the Specter of Exploitation
The discussion also delved into the implications of more advanced AI systems, often referred to as "frontier models," which possess capabilities that raise significant security and ethical concerns. Jeffrey Tricoli, Chief Information Security Officer and Managing Director at Charles Schwab, described the "core mission" of these cutting-edge models as the ability to "find an exploit" in existing systems and "put that exploit to use."
This inherent capability poses a substantial risk, particularly as these powerful AI agents become more accessible. Tricoli stressed the critical need for "proper guardrails" to prevent these advanced AI systems from being misused, whether by malicious actors seeking to exploit vulnerabilities or by legitimate firms whose AI tools might inadvertently cause harm. "What you don’t want to happen is you put something in place, it gives you an outcome you’re looking for, but then it goes beyond, and then potentially, it starts, with time, to degrade or give you results that aren’t favorable," he cautioned.
Anthropic’s decision to release its agentic AI system, Claude Mythos, to only a select few organizations, citing its power and its capacity to identify defects in computer systems, underscores these concerns. Similarly, OpenAI’s planned release of an agentic system with comparable capabilities highlights the growing trend toward AI that can operate autonomously and interact with complex systems.
Data Triage: The Foundation of AI Security and Compliance
In light of these risks, Tricoli emphasized the paramount importance of "data triage" as the foundation of firms’ AI safeguards. Understanding and managing the data that AI systems access and process is crucial for both security and compliance. "If firms don’t know and understand what kind of data is exposed, it can’t be protected," he warned.
Firms must be able to ascertain, "at a moment’s notice," the location and nature of specific data throughout their technological environment. Without this granular visibility, they are ill-equipped to implement appropriate security measures or to demonstrate compliance with data privacy regulations. Achieving it requires robust data governance frameworks and sophisticated data management tools, which are often complex and costly to implement, particularly for smaller entities.
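To make the idea of data triage concrete, the sketch below shows a minimal, hypothetical inventory scan: it walks a directory tree and records which files contain patterns resembling sensitive data, so a firm can answer "what kind of data is exposed, and where?" The patterns and file types here are illustrative assumptions only; production data-classification tools use far more robust detection (validation checksums, contextual analysis, ML-based classifiers) and cover databases and cloud stores, not just text files.

```python
import re
from pathlib import Path

# Hypothetical detection patterns for illustration only.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def triage(root: str) -> dict:
    """Scan text files under `root`, recording where each data type appears."""
    findings = {label: [] for label in PATTERNS}
    for path in Path(root).rglob("*.txt"):
        text = path.read_text(errors="ignore")
        for label, pattern in PATTERNS.items():
            if pattern.search(text):
                findings[label].append(str(path))
    return findings
```

The output maps each data category to the files that contain it, which is the granular, location-level visibility the panelists described as the prerequisite for protecting data at all.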
The rapid evolution of AI technology, from generative models assisting with customer queries to frontier models capable of system exploitation, presents a multifaceted challenge for the financial industry. The consensus among industry leaders at the FINRA conference is that a proactive and collaborative approach between industry participants and regulatory bodies is essential to navigate this rapidly changing landscape, ensuring that innovation can proceed without compromising investor trust and market integrity. The coming months and years will likely see a significant regulatory response as authorities grapple with the profound implications of artificial intelligence on financial services.
