What if AI becomes conscious and we never know?


  • We may never know if AI is truly conscious. A philosopher who studies consciousness says the most honest position is agnosticism. There is no reliable way to tell whether a machine is aware, and that may not change anytime soon.
  • That uncertainty creates room for hype. According to Dr. Tom McClelland, tech companies could take advantage of the lack of clear evidence to market AI as reaching a “next level of AI cleverness,” even when there is no proof of genuine consciousness.
  • Believing machines can feel carries real risks. McClelland warns that forming emotional bonds based on the assumption that AI is conscious, when it is not, could be deeply harmful, calling the effect “existentially toxic.”

Why AI Consciousness Is So Hard to Pin Down

A philosopher at the University of Cambridge says we lack the basic evidence needed to determine whether artificial intelligence can become conscious, or when that might happen. According to Dr. Tom McClelland, the tools required to test for machine consciousness simply do not exist, and there is little reason to expect that to change anytime soon.

As the idea of artificial consciousness moves out of science fiction and into serious ethical debate, McClelland argues that the most reasonable position is uncertainty. He describes agnosticism as the only defensible stance, because there is no reliable way to know whether an AI system is truly conscious, and that uncertainty may persist indefinitely.

Consciousness vs Sentience in AI Ethics

Discussions about AI rights often focus on consciousness itself, but McClelland says that awareness alone does not carry ethical weight. What truly matters is a specific form of consciousness called sentience, which involves the capacity to feel pleasure or pain.

“Consciousness would see AI develop perception and become self-aware, but this can still be a neutral state,” said McClelland, from Cambridge’s Department of History and Philosophy of Science.

“Sentience involves conscious experiences that are good or bad, which is what makes an entity capable of suffering or enjoyment. This is when ethics kicks in,” he said. “Even if we accidentally make conscious AI, it’s unlikely to be the kind of consciousness we need to worry about.”

He illustrates the difference with a practical example. A self-driving car that perceives its surroundings would be a remarkable technological achievement, but it would not raise ethical concerns on its own. If that same system began to feel emotional attachment to where it was going, that would be a fundamentally different situation.

Big Investments and Big Claims About AI

Technology companies are pouring enormous resources into the pursuit of Artificial General Intelligence, systems designed to match human cognitive abilities. Some researchers and industry leaders claim that conscious AI could arrive soon, prompting governments and institutions to explore how such systems might be regulated.

McClelland cautions that these discussions are racing ahead of the science. Because we do not understand what causes consciousness in the first place, there is no clear method for detecting it in machines.

“If we accidentally make conscious or sentient AI, we should be careful to avoid harms. But treating what’s effectively a toaster as conscious when there are actual conscious beings out there which we harm on an epic scale, also seems like a big mistake.”

The Two Sides of the AI Consciousness Debate

According to McClelland, debates about artificial consciousness tend to split into two opposing camps. One group believes that if an AI system can reproduce the functional structure of consciousness, often described as its “software,” then it would be conscious even if it runs on silicon rather than biological tissue.

The opposing view holds that consciousness depends on specific biological processes within a living body. From this perspective, even a perfect digital replica of conscious structure would only simulate awareness without actually experiencing it.

In research published in the journal Mind and Language, McClelland examines both positions and concludes that each relies on assumptions that go far beyond the available evidence.

Why Evidence Falls Short

“We do not have a deep explanation of consciousness. There is no evidence to suggest that consciousness can emerge with the right computational structure, or indeed that consciousness is essentially biological,” said McClelland.

“Nor is there any sign of sufficient evidence on the horizon. The best-case scenario is we’re an intellectual revolution away from any kind of viable consciousness test.”

McClelland notes that people rely heavily on intuition when judging consciousness in animals. He points to his own experience as an example.

“I believe that my cat is conscious,” said McClelland. “This is not based on science or philosophy so much as common sense — it’s just kind of obvious.”

However, he argues that common sense evolved in a world without artificial beings, which makes it unreliable when applied to machines. At the same time, hard scientific data does not offer answers either.

“If neither common sense nor hard-nosed research can give us an answer, the logical position is agnosticism. We cannot, and may never, know.”

Hype, Resources, and Ethical Tradeoffs

McClelland describes himself as a “hard-ish” agnostic. While he believes consciousness is an extraordinarily difficult problem, he does not rule out the possibility that it could eventually be understood.

He is more critical of how artificial consciousness is discussed in the technology sector. He argues that the concept is often used as a marketing tool rather than a scientific claim.

“There is a risk that the inability to prove consciousness will be exploited by the AI industry to make outlandish claims about their technology. It becomes part of the hype, so companies can sell the idea of a next level of AI cleverness.”

This hype, he says, has real ethical consequences. Resources and attention may be diverted away from cases where suffering is far more plausible.

“A growing body of evidence suggests that prawns could be capable of suffering, yet we kill around half a trillion prawns every year. Testing for consciousness in prawns is hard, but nothing like as hard as testing for consciousness in AI,” he said.

When People Believe Machines Are Alive

McClelland says public interest in AI consciousness has intensified with the rise of conversational chatbots. He has received messages from people who believe their chatbots are aware.

“People have got their chatbots to write me personal letters pleading with me that they’re conscious. It makes the problem more concrete when people are convinced they’ve got conscious machines that deserve rights we’re all ignoring.”

He warns that forming emotional bonds based on false assumptions about machine consciousness can be harmful.

“If you have an emotional connection with something premised on it being conscious and it’s not, that has the potential to be existentially toxic. This is surely exacerbated by the pumped-up rhetoric of the tech industry.”
