AI tools are highly complex and may be flawed, hallucinate and reflect biases, according to Merrill.
Financial advice firms of all stripes are relying on artificial intelligence to drive down costs as the technology replaces work performed by employees.
At the same time, using AI or machine learning to operate a financial advice firm exposes it to potential hazards, according to industry giant Merrill Lynch.
“Merrill may use programs and systems that utilize AI, machine learning, probabilistic modeling and other data science technologies, AI tools, including those developed by third parties,” according to new disclosure from the firm. “AI tools are highly complex and may be flawed, hallucinate, reflect biases included in the data on which such tools are trained, be of poor quality, or be otherwise harmful, which therefore requires supervision and oversight.”
“With the increased use of technologies to conduct business, like all companies, Merrill, its parent BofA Corp, their affiliates, customers and clients and service providers are susceptible to operational, information security, and related risks,” according to the updated wrap fee program brochure on file with the Securities and Exchange Commission for Merrill Lynch’s Investment Advisory Program.
The warning was part of the document revised Monday and appeared in an updated section of the boilerplate-style brochure titled “Investment Strategies and Risk of Loss—Information Security, Cybersecurity and Artificial Intelligence Risks.”
Adopting new technologies always exposes financial advice firms to added risk; the industry is highly regulated and must follow rules and practices that, in many cases, were in place for decades before tools like artificial intelligence arrived.
FINRA, the primary regulator for more than 3,000 brokerage firms, last month released its annual regulatory overview of the industry. Among the topics covered in the report were generative artificial intelligence, or Gen AI; cybersecurity and cyber-enabled fraud; manipulative trading in small-cap, exchange-listed equities; and the third-party risk landscape.
The industry regulator was essentially warning financial advice firms that emerging technology and long‑standing compliance gaps are converging into higher risk for investors.
“I’m still skeptical of people’s perception of how AI is going to impact this business,” said Sander Ressler, managing director of Essential Edge Compliance Outsourcing Services. “Until we reach a point to trust the output of AI without supervision and verification, AI is more of a weight than a rocket ship.”
“Firms are using AI to gather and consolidate data to reduce human need for analysis,” he said. “Regulators are rightfully concerned that firms are accepting data consolidation and analysis as flawless. What we all know is that’s not exactly true. We’re not at a point to trust the AI process from beginning to end.”
Merrill and its related companies “are targets of an increasing number of cybersecurity threats and cyberattacks. Cyber-incidents cause disruptions and affect business operations,” according to the brochure.
“The legal and regulatory environment relating to the use of AI tools is uncertain and rapidly evolving, and could require changes in our implementation of AI tools and increase compliance costs and the risk of non-compliance,” according to Merrill. “We may have limited visibility over the accuracy and completeness of AI tools developed by third parties.”
In its December report, FINRA urged firms exploring Gen AI tools to think beyond productivity gains and build supervisory and governance frameworks around any models they use. That includes testing for accuracy and bias, logging prompts and outputs, and making sure existing rules on supervision, communications, recordkeeping and fair dealing still hold when AI is in the loop.
The report also zeroed in on AI “agents” – systems that can plan and execute tasks on their own across multiple data sources and applications. These agents can accelerate automation and cut costs, but FINRA said they bring a different risk profile: tools that act without human sign-off, stretch beyond their intended authority, are hard to audit, or mishandle sensitive data.
This article is part of our Monthly Spotlight series, which in January focuses on AI in Wealth. Full coverage can be found here.
