Could Bees and ChatGPT Be Conscious? Scientists Are Seriously Asking


A growing scientific debate is exploring whether consciousness might extend far beyond humans. New research suggests that both animals and artificial intelligence could potentially possess conscious experiences, but determining this requires looking deeper than outward behavior. Credit: Shutterstock

Researchers propose that the key to identifying consciousness in animals and AI lies in understanding how their information processing systems work.

At first glance, a honeybee collecting nectar in a garden and a computer running ChatGPT might seem to have nothing in common. Yet scientists are increasingly exploring the possibility that both biological organisms and advanced artificial systems could possess some form of consciousness.

Behavior alone may mislead

Researchers study consciousness in many ways. A common strategy has been to observe behavior and evaluate how an animal or an artificial intelligence (AI) responds to its surroundings.

However, two recent scientific papers examining consciousness in both animals and AI propose new ways to investigate the phenomenon. Their approach attempts to avoid both exaggerated claims and overly skeptical views that assume humans are the only beings capable of conscious experience.

Expanding the circle of consciousness

Debates about consciousness have long been intense within philosophy and science.

One reason is that consciousness carries moral significance. If a being is conscious, its experiences may matter ethically in ways that unconscious systems do not. As the range of potentially conscious organisms expands, so do the ethical questions surrounding how they should be treated. Even when certainty is impossible, some researchers argue that caution is warranted. Philosopher Jonathan Birch refers to this idea as the precautionary principle for sentience.

The recent trend has been one of expansion.

For example, in April 2024, a group of 40 scientists at a conference in New York proposed the New York Declaration on Animal Consciousness. Subsequently signed by over 500 scientists and philosophers, the declaration states that consciousness is realistically possible in all vertebrates (including reptiles, amphibians and fishes) as well as many invertebrates, including cephalopods (octopuses and squid), crustaceans (crabs and lobsters) and insects.

In parallel with this, the incredible rise of large language models, such as ChatGPT, has raised the serious possibility that machines may be conscious.

Why conversation is not enough

Five years ago, a seemingly ironclad test of whether something was conscious was to see if you could have a conversation with it. Philosopher Susan Schneider suggested that if we had an AI that convincingly mused on the metaphysics of consciousness, it might well be conscious.

By those standards, today we would be surrounded by conscious machines. Many have gone so far as to apply the precautionary principle here too: the burgeoning field of AI welfare is devoted to figuring out if and when we must care about machines.

Yet all of these arguments depend, in large part, on surface-level behavior, and that behavior can be deceptive. What matters for consciousness is not what you do, but how you do it.

Examining the internal structure of AI

A new paper in Trends in Cognitive Sciences that one of us (Colin Klein) coauthored, drawing on previous work, looks to the machinery rather than the behavior of AI.

It also draws on the cognitive science tradition to identify a plausible list of indicators of consciousness based on the structure of information processing. This means one can draw up a useful list of indicators of consciousness without having to agree on which of the current cognitive theories of consciousness is correct.

Some indicators (such as the need to resolve trade-offs between competing goals in contextually appropriate ways) are shared by many theories. Other indicators (such as the presence of informational feedback) are required by only one theory, but remain suggestive under others.

Importantly, the useful indicators are all structural. They all have to do with how brains and computers process and combine information.

The verdict? No existing AI system (including ChatGPT) is conscious. Large language models produce the appearance of consciousness, but not in a way sufficiently similar to human cognition to warrant attributing conscious states to them.

Yet at the same time, there is no in-principle bar to AI systems, perhaps ones with a very different architecture from today's, becoming conscious.

The lesson? It’s possible for AI to behave as if conscious without being conscious.

Studying consciousness in insect brains

Biologists are also turning to mechanisms—how brains work—to recognize consciousness in non-human animals.

In a new paper in Philosophical Transactions B, we propose a neural model for minimal consciousness in insects. This is a model that abstracts away from anatomical detail to focus on the core computations done by simple brains.

Our key insight is to identify the kind of computation our brains perform that gives rise to experience.

This computation solves ancient problems from our evolutionary history that arise from having a mobile, complex body with many senses and conflicting needs.

Importantly, we don’t identify the computation itself—there is science yet to be done. But we show that if you could identify it, you’d have a level playing field to compare humans, invertebrates, and computers.

A shared lesson across biology and AI

The problem of consciousness in animals and in computers appears to pull in different directions.

For animals, the question is often how to interpret whether ambiguous behavior (like a crab tending its wounds) indicates consciousness.

For computers, we have to decide whether apparently unambiguous behavior (a chatbot musing with you on the purpose of existence) is a true indicator of consciousness or mere roleplay.

Yet as the fields of neuroscience and AI progress, both are converging on the same lesson: when making judgments about whether something is conscious, how it works is proving more informative than what it does.

Reference:

“Identifying indicators of consciousness in AI systems” by Patrick Butlin, Robert Long, Tim Bayne, Yoshua Bengio, Jonathan Birch, David Chalmers, Axel Constant, George Deane, Eric Elmoznino, Stephen M. Fleming, Xu Ji, Ryota Kanai, Colin Klein, Grace Lindsay, Matthias Michel, Liad Mudrik, Megan A.K. Peters, Eric Schwitzgebel, Jonathan Simon and Rufin VanRullen, 10 November 2025, Trends in Cognitive Sciences.
DOI: 10.1016/j.tics.2025.10.011

“Phenomenal interface theory: a model for basal consciousness” by Colin Klein and Andrew B. Barron, 13 November 2025, Philosophical Transactions B.
DOI: 10.1098/rstb.2024.0301

Disclosure: Colin Klein receives funding from the Australian Research Council and the Templeton World Charity Foundation. Andrew Barron receives funding from the Australian Research Council and the Templeton World Charity Foundation.

Adapted from an article originally published in The Conversation.
