Summary:
- AI chatbots can effectively change political opinions despite inaccuracies.
- Most Americans refuse to let AI manage their 401(k) without human oversight.
- People verify AI advice on money but accept AI influence on politics with less scrutiny.
When it comes to artificial intelligence, Americans appear to have a split personality. While they listen to AI chatbots spin arguments about candidates and policy positions without blinking, they won’t let those same systems touch their 401(k)s without triple-checking with a human being first.
Two recent studies demonstrate this disconnect. A study published this month in the journal Science found that AI chatbots are effective at changing people’s political opinions, even when nearly one in five of their claims are factually wrong. Researchers from Oxford, Stanford, MIT and the UK’s AI Security Institute engaged nearly 77,000 participants with various AI chatbots tasked with changing their views on topics like taxes and immigration.
The AI systems frequently succeeded, and the effects lasted at least a month. What’s more, the most persuasive chatbots tended to be the least accurate: about 19 percent of all AI claims were rated as “predominantly inaccurate.”
Yet when Americans are asked whether they’d let similar AI systems manage their retirement savings, the response is a nearly unanimous “absolutely not.”
A new survey from InvestorsObserver of 1,050 experienced U.S. investors between 35 and 60 with portfolios of at least $500,000 found that 88 percent would refuse to let an AI chatbot manage their 401(k). Nearly two-thirds have never used AI for investment advice, and only 5 percent act on AI-generated financial recommendations without consulting a human first.
“People are open to using AI chatbots to generate ideas, but when it comes to life savings in 401(k)s and IRAs, they want a human hand on the wheel,” said Sam Bourgi, senior analyst at InvestorsObserver. “Today, AI can inform retirement decisions, but it should not replace personal judgment or professional advice.”
The paradox demonstrates how Americans demand verification when AI talks about their money but accept persuasion when AI talks about their democracy.
Money Talks, Politics Listens
Lisa Garrison, 36, of Chandler, sees the disparity clearly. She manages a small IRA with a financial advisor and actively avoids AI wherever possible.
“I don’t have anywhere near that kind of money, but I personally don’t trust AI at all,” Garrison said. “Generative AI has been notorious for making things up that sound true without being true. I don’t think AI should have any say in decisions that affect people’s livelihoods or lives.”
When asked why people might verify financial AI but accept political AI at face value, Garrison offered a theory that cuts to the heart of American culture.
“Money has a real, tangible, and immediate effect on people’s lives in that you can afford to pay the bills and eat, or you can’t,” she said. “When it comes to politics, we aren’t taught to consider political decisions in similar terms of real consequences. Most people treat their politics the same way they revere their inherited religious beliefs: as personal, unquestionable, and therefore correct.”
The Science study’s lead author, Oxford doctoral student Kobi Hackenburg, noted the same potential dangers.
“These results suggest that optimizing persuasiveness may come at some cost to truthfulness,” Hackenburg wrote, “a dynamic that could have malign consequences for public discourse.”
What We Verify Reveals What We Value
The contrast between these studies highlights American priorities, said Hackenburg. When bank accounts are on the line, people demand human oversight and expert verification. The InvestorsObserver survey found that 59 percent of investors plan to continue using AI for financial research, but most treat it as a starting point rather than a decision-maker.
Yet when democratic institutions are at stake, many consume AI-influenced content without the same scrutiny. Some 44 percent of U.S. adults now use AI tools like ChatGPT “sometimes” or “very often.” These same tools can shift political views with lasting effect, even when they provide misinformation.
Garrison connects this to recent political events.
“How many times have we seen large swaths of the population realize the consequences of their political choices only when it starts affecting them and their money?” she asked. “Farmers, federal workers, trade unions… it didn’t become real to them until it happened to them.”
The warning from the study’s authors is that highly persuasive AI chatbots “could benefit unscrupulous actors wishing to promote radical political or religious ideologies or foment political unrest.”
Meanwhile, the financial industry is settling into what Bourgi calls a “hybrid” model, using AI to surface ideas and flag risks while keeping humans in control of final decisions.
When Garrison was asked what her gut reaction would be if a financial app claimed to have analyzed 10,000 data points and recommended a move with her retirement savings, her answer was immediate.
“Rather predictably, I’m sure, my gut reaction would be to dismiss it out of hand.”
