From apps that sort your spending into neat categories, to platforms that automatically build and rebalance an investment portfolio, artificial intelligence is becoming embedded in our financial lives.
Banks now deploy chatbots to answer questions in seconds, while digital investment services promise to match you to a portfolio aligned with your goals and appetite for risk, often with minimal human involvement.
As these tools move from novelty to norm, the question is not whether AI has a role in finance, but how much trust we should place in it.
The UK’s financial regulator is clear that technology brings benefits, but also limits. An FCA spokesperson says: “Artificial Intelligence (AI) can help people make more informed financial decisions. But for personalised advice and protections if something goes wrong, people should use a financial adviser authorised by the FCA.”
Here is where AI-led money management works well, and where it may fall short.
In investing, “AI” rarely means a robot independently picking shares. More often, it refers to rules-based systems that follow a structured, pre-determined process.
Stefano Giudici, B2C lead product manager at Moneyfarm, says the firm blends technology with human oversight. “We combine quantitative techniques and qualitative judgement to build diversified multi-asset portfolios, with asset allocation at the core of the process,” he explains.
Automation is typically used to match clients to portfolios through risk questionnaires, process payments and rebalance investments so they remain aligned with long-term targets.
“So, while ‘AI’ can mean many things in the industry, the key point is that we use technology to make the investment process systematic, scalable, and consistent, while maintaining governance and oversight,” Giudici adds.
Automation tends to perform best where discipline and consistency matter most.
“Algorithm-led investing is at its best where consistency, discipline and scale matter,” says Giudici. “By design, a systematic approach keeps attention on the main drivers of long-term outcomes, especially diversification and asset allocation, which form the backbone of risk management and long-term return potential.”
It can also help counter emotional decision-making. Investors are often tempted to sell during downturns or chase recent winners. A rules-based system sticks to the agreed strategy.
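To make the idea concrete, here is a minimal sketch of the kind of threshold rebalancing rule such systems might apply. The function name, the 5% drift band and the two-asset portfolio are illustrative assumptions, not any firm's actual method.

```python
# Illustrative sketch only: a simplified threshold rebalancer of the kind
# a rules-based platform might run. The 5% drift band is an assumption.

def rebalance(holdings, targets, drift_band=0.05):
    """Return buy/sell amounts (in currency) that restore target weights
    if any asset has drifted outside the band; otherwise trade nothing."""
    total = sum(holdings.values())
    weights = {asset: value / total for asset, value in holdings.items()}
    drifted = any(abs(weights[a] - targets[a]) > drift_band for a in targets)
    if not drifted:
        return {a: 0.0 for a in targets}  # within tolerance: no trades
    # Sell overweight assets and buy underweight ones back to target.
    return {a: targets[a] * total - holdings[a] for a in targets}

portfolio = {"equities": 7_000, "bonds": 3_000}   # equities now 70%
target = {"equities": 0.60, "bonds": 0.40}        # agreed 60/40 strategy
trades = rebalance(portfolio, target)
# The rule sells equities and buys bonds to restore 60/40,
# regardless of how the investor feels about recent performance.
```

The point of the example is the discipline: the rule trades only when the portfolio drifts past a pre-agreed tolerance, not in response to market mood.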
For simpler planning needs, AI can also be useful.
David Macdonald of Path Financial says: “AI can be useful for simple financial planning, helping illustrate options like ISAs, pensions, and basic savings strategies. It can also assist with calculations or modelling potential outcomes. Accuracy depends on good input data – poor information can lead to misleading results.
“Think of AI as a helpful guide through the basics, not a decision-maker.”
The FCA advises consumers to treat AI tools as a starting point for research rather than a final answer.
Outputs from general-purpose AI tools do not amount to regulated financial advice from an authorised firm, meaning that if something goes wrong, consumers may not be able to access protections such as the Financial Ombudsman Service or the Financial Services Compensation Scheme. Users should also be alert to AI "hallucinations" (confident but incorrect answers), outdated information and hype.
Automation has limits, particularly when financial decisions intersect with complex life events. “AI and automation can be powerful tools in investing, but they have clear limits, especially in a domain where suitability, goals and risk tolerance matter as much as optimisation,” says Giudici.
Macdonald argues that human advisers remain essential in more complicated cases. “Human advisers remain important for complex situations: specialist trusts, multi-generational planning, sensitive family circumstances, life events like death, divorce or disability, and decisions involving ethical or personal values,” he says.
“AI cannot replicate the judgment or tact a human adviser provides. Some financial decisions need a human touch – nuance and empathy matter.”
There is also an important distinction between guidance and advice. Regulated advice carries stricter standards and consumer protections. Before signing up to a product or service, the FCA advises consumers to check whether a firm is authorised using its Firm Checker tool. Firms using AI to deliver services must still meet existing requirements, including the Consumer Duty.
As with any tool, AI outputs are only as good as the data and assumptions behind them.
“AI relies on historical and publicly available data, which may not reflect unique circumstances. Outputs are indicative and can change with new information. Over-reliance without review could be risky, especially in more complex or sensitive scenarios,” Macdonald says.
“Over-reliance without expert review can turn a helpful tool into a risky shortcut.”
Automation may improve discipline, but it does not remove investment risk, and no algorithm can guarantee returns.
The FCA says it wants to enable safe and responsible AI adoption, taking an outcomes-based approach rather than introducing entirely new rules. A forward-looking review launched in January and led by Sheldon Mills is examining how AI could reshape retail financial services by 2030 and beyond.
Using AI to budget, compare products or build a diversified long-term portfolio can be cost-effective and convenient, but trust should not mean blind faith.
And for personalised recommendations – and the reassurance of regulatory protection if something goes wrong – a qualified, FCA-authorised human adviser still has an important role to play.
When investing, your capital is at risk and you may get back less than invested. Past performance doesn’t guarantee future results.