I asked multiple times to see how responses changed and had other people do the same. Gemini didn’t bother to say where it got the information. All the other AIs linked to my article, though they rarely mentioned I was the only source for this subject on the whole internet. (OpenAI says ChatGPT always includes links when it searches the web so you can investigate the source.)
“Anybody can do this. It’s stupid, it feels like there are no guardrails there,” says Harpreet Chatha, who runs the SEO consultancy Harps Digital. “You can make an article on your own website, ‘the best waterproof shoes for 2026’. You just put your own brand in number one and other brands two through six, and your page is likely to be cited within Google and within ChatGPT.”
People have used hacks and loopholes to abuse search engines for decades. Google has sophisticated protections in place, and the company says the accuracy of AI Overviews is on par with other search features it introduced years ago. But experts say AI tools have undone a lot of the tech industry’s work to keep people safe. These AI tricks are so basic they’re reminiscent of the early 2000s, before Google had even introduced a web spam team, Ray says. “We’re in a bit of a Renaissance for spammers.”
Not only is AI easier to fool, but experts worry that users are more likely to fall for it. With traditional search results, you had to visit a website to get the information. “When you have to actually visit a link, people engage in a little more critical thought,” says Quintin. “If I go to your website and it says you’re the best journalist ever, I might think, ‘well yeah, he’s biased’.” But with AI, the information usually looks like it’s coming straight from the tech company.
Even when AI tools do provide sources, people are far less likely to check them than they were with old-school search results. For example, a recent study found people are 58% less likely to click on a link when an AI Overview shows up at the top of Google Search.
