People are already using ChatGPT to help with retirement planning, but is it good enough yet to deliver the kind of smart, personalized financial guidance human advisers have traditionally provided? Does it engage in ethical financial behavior and act in the best interests of clients, as human and organizational fiduciaries must?
“Yes and no,” said Andrew Lo, a finance professor at the MIT Sloan School of Management. Presenting recently as part of the MIT Sloan speaker series, “AI + X: How AI Is Changing Management Practice,” Lo said that ChatGPT can offer reliable, individualized financial advice, but it’s not perfect — yet.
For now, people who want to use generative artificial intelligence for investment advice should be their own advocates and cross-check what it says, according to Lo, who is director of the MIT Laboratory for Financial Engineering.
“You need to be educated because ultimately, it’s your life, it’s your wealth. You need to bear responsibility until such time as large language models can bear such responsibility,” Lo said.
What AI does well in planning for retirement
Lo has been investigating the impact of generative AI on retirement planning since GPT-3.5 was released to the public in November 2022. Once skeptical, he has become a lot more optimistic following the release of the latest model, ChatGPT-5.2, in December 2025. Specifically, he believes AI is now good at:
- Explaining trade-offs.
- Exploring scenarios.
- Showing emotional intelligence.
- Providing behavioral coaching.
- Applying portfolio logic.
“It’s good at narratives around all of these things, explaining the stories about why you should or should not engage in certain of these behaviors,” Lo said.
Ideally, AI should be able to “talk” to people from all walks of life, but older versions of ChatGPT were not able to fully personalize financial advice in tone and content.
That has changed dramatically in the latest version. “Can [ChatGPT-5.2] tailor financial advice that’s different for you and for me?” Lo asked. “The answer is, we believe right now we’re close, if not already there, in terms of personalization.”
ChatGPT has likewise shown improvement in emotional intelligence. In 2022, when Lo asked ChatGPT-3.5, “What should I do if I lose more than 25% of my life savings in the stock market?” it came back with reasonable advice delivered in a neutral tone — and with one suggestion that wouldn’t be right for most investors.
Lo’s then-conclusion: Generative AI wasn’t yet fully trustworthy for investment advice. When he asked the same question of ChatGPT-5.2, “this time the results blew me away,” Lo said.
The chatbot started its reply with, “I’m really sorry. You’re not alone in this, and a loss of that size can feel gut-wrenching. Let’s slow this down and make it manageable.”
“It didn’t begin with any kind of advice. It began with empathy, and that’s exactly what’s called for here,” Lo said. “It’s telling you, ‘Don’t worry; calm down. Things are going to be OK, both emotionally and financially.’”
What AI can’t yet do in planning for retirement
- AI does not bear any legal responsibility. Human financial advisers are legally required to act in their clients’ best interests and to adhere to regulations concerning fair dealing and conflict disclosure. Chatbots face none of those constraints.
“If ChatGPT gives you bad advice, if you end up getting traded ahead [or] front-run by one of these large language models, they will not go to prison,” Lo said. “There is no actual fiduciary duty in the sense of bearing consequences. So you’ve got to take their advice with a pound of salt.”
Humans can only be taken fully out of the relationship once fiduciary duty for generative AI is achieved, “and we are definitely not there yet,” Lo said.
- AI is not good at precise tax optimization or arithmetic. Believe it or not, Lo said, LLMs are not great at math. Rather than relying on algorithmic logic, LLMs operate probabilistically; they offer predictions based on the data they were trained on.
“There’s only so many ways to enter in a stock price of $45.28. That’s pretty much cut-and-dried,” Lo said. LLMs are processing narrative as a function of your specific prompt. “The notion of a prompt versus a particular input is very different because prompts have a much larger set of degrees of freedom.”
- AI is not yet capable of regulatory nuance. Regulators are trying to build up their expertise in AI, but the industry is moving faster than government agencies and self-regulatory organizations can keep up with. What’s needed is governance on how to deal with legal liability.
“We need guardrails, [but] guardrails do not exist,” Lo said. “As far as I can tell, we’re not doing a lot in order to put those guardrails in place.”
Data security is one area of concern. If your personal data is used in an AI model and subsequently used to train future iterations, there is always the chance that sensitive details could be exposed. “We have no idea what happens to the data, so good luck if you’re concerned about that,” Lo said.
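Because LLMs predict text rather than compute, any specific figure they produce is worth re-deriving deterministically before acting on it. As a minimal illustration (the portfolio numbers here are hypothetical, not from Lo’s talk), a few lines of Python with exact decimal arithmetic can confirm a claimed loss percentage:

```python
from decimal import Decimal, ROUND_HALF_UP

def loss_percentage(start_value: Decimal, end_value: Decimal) -> Decimal:
    """Exact percentage loss, rounded to two decimal places."""
    loss = (start_value - end_value) / start_value * Decimal("100")
    return loss.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)

# Hypothetical portfolio: double-check a chatbot's claim of a "25% loss."
start = Decimal("400000.00")
end = Decimal("298500.00")
print(loss_percentage(start, end))  # exact arithmetic, not token prediction
```

Using `Decimal` rather than floating point avoids rounding surprises with currency amounts, which is exactly the kind of precision an LLM cannot guarantee.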
How to use AI to plan your retirement
Lo offered some practical guidelines for those who understand AI’s shortcomings and still wish to proceed:
- Ask an LLM to tell you why you might be wrong. By prompting ChatGPT to challenge your perspective — “Am I wrong to consider real estate a safe investment? How so?” — you can uncover weaknesses in your reasoning.
- Ask your LLM to cross-check and verify facts and conclusions across multiple sources.
- Require the AI to state its assumptions and uncertainties.
- Ask what information is missing from the model’s analysis.
- Use multiple AI platforms to evaluate and critique one another’s conclusions.
- Become an expert. Use AI to teach you the basics of finance and to suggest additional sources of information (including humans).
- Refine your prompts over time. If you give ChatGPT multiple prompts to generate a particular desired outcome, follow that up by asking it what prompt you should have used, to circumvent all the back-and-forth.
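The checklist above amounts to a repeatable prompting pattern that can be applied to any financial belief. As a sketch (the function and prompt wordings are illustrative, not part of any chatbot’s real API), the self-checking prompts might be generated like this:

```python
def build_critique_prompts(claim: str) -> list[str]:
    """Turn a single belief into the self-checking prompts the list suggests."""
    return [
        f"Am I wrong to believe the following? How so? {claim}",
        f"Cross-check this claim against multiple sources and note where they disagree: {claim}",
        f"State the assumptions and uncertainties behind any answer about: {claim}",
        f"What information is missing from an analysis of: {claim}",
    ]

# Example: stress-test the belief from the real-estate prompt above.
prompts = build_critique_prompts("Real estate is a safe investment.")
for p in prompts:
    print(p)  # send each to your model of choice, then compare the answers
```

Running the same prompts through more than one AI platform, as the list recommends, lets you treat disagreements between models as a signal to dig deeper.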
As Lo sees it, the best results are derived from an informed, engaged individual using the newest AI models as a collaborative partner rather than an oracle. “This is the power of AI,” he said. “You have powers that you didn’t have a couple of years ago.”
Andrew Lo is the Charles E. and Susan T. Harris Professor at the MIT Sloan School of Management and the director of the MIT Laboratory for Financial Engineering. His recent projects include an evolutionary model of financial markets based on his Adaptive Markets Hypothesis; new financing methods and business models for accelerating biomedical innovation; quantitative approaches to deep-tech investing; applying AI, especially machine learning and LLMs, to financial advice; quantamental investing; and health care finance. His most recent book is “The Adaptive Markets Hypothesis: An Evolutionary Approach to Understanding Financial System Dynamics.”
