Friday, March 13

Assuming AI Will Become Conscious Is Dangerous, Scientists Say. It’s Also Dead Wrong.



Here’s what you’ll learn when you read this story:

  • Artificial intelligence systems, such as Large Language Models, are not conscious, and do not have the basis to ever be, some experts say.
  • That’s because they are just advanced computing machines. They don’t actually feel anything the way humans and other living beings do.
  • Assuming AI such as chatbots are conscious is dangerous, because it puts people in a position to be psychologically vulnerable to false or inaccurate information.

Pour out your troubles to a chatbot and it may tell you, “I understand what you’re going through.”

But of course it doesn’t, scientists say. These artificial intelligence systems—some experts prefer the term Large Language Models (LLMs)—are nothing more than advanced computing engines good at imitating humans. So good that it is easy to think they’re intelligent, and maybe even conscious.

Some experts argue that they are neither.

Clear, uncontroversial definitions of these terms would be useful for such arguments. Unfortunately, none exist.

Informally, Anil Seth, PhD, professor of neuroscience at the UK’s University of Sussex Center for Consciousness Science, defines consciousness as anything that is part of an experience: colors, tastes, emotions, thoughts.

“You consciously think something, feel something, do something,” he says.

American philosopher Thomas Nagel wrote that an organism has conscious mental states if and only if there is something that it is like to be that organism. As Seth elaborates, “There is something that it is like to be me.” But, he asks, is there something that it is like to be a fish? How about a language model?

Andrzej Porębski, PhD, of the Faculty of Law and Administration at Jagiellonian University in Poland, sees consciousness as related to a concept of oneself: having thoughts about something, feeling oneself in space, feeling separate from that space.

Intelligence, on the other hand, is about doing. Solving a crossword puzzle, navigating a tricky family situation, walking to the shop; in general, the ability to achieve complex goals by flexible means, Seth says. To get things done.

“Consciousness and intelligence are related in humans, of course,” he says. “When we have conversations, or think, we are conscious of it. But just because they go together in us doesn’t mean they always do.”


To further complicate things, the term artificial intelligence is less than precise.

The original meaning of the term, says Porębski, is a field of knowledge seeking to develop methods that allow computers to intelligently solve computationally difficult problems. Over time, the term began to be used as a shortcut for systems or tools based on AI techniques. But a computer system isn’t intelligent in the same sense that human beings are, he argues.

Generative chatbots, for example, are trained to generate answer patterns. They’re great at conversation but sometimes make absurd mistakes on simple tasks and often get calculations wrong. These LLMs are created to make statements with probable wording, he points out, not to make true statements. In other words, statements that ring true but may not be.

“They sort of absorb everything. They know everything but don’t know anything the way humans do,” Seth says. “Which means we need to be very careful about how we use them.”

While AI could be considered intelligent, in the sense that it does things, these experts say it is not conscious.

“Many researchers, including me, believe that consciousness requires a biological component that AI systems don’t have,” says Porębski. These programs perform the tasks for which they are designed; it is humans who create false associations between what they generate and consciousness. Even a very complex computer program is still just a computer program.

“Our intuitions can be very misleading in this area,” Seth says. “When we assume that because something is intelligent it has to be conscious, we are seeing things through a human lens. We tend to over-attribute consciousness to things that seem to behave in ways we perceive as like us.”

Some people find the idea of AI consciousness plausible, he thinks, because they’ve taken the metaphor of the brain as a computer literally.

“It’s kind of natural to think that, if the brain is a computer that just happens to be made of meat, then everything that a brain can do should be doable by other things that can do computation. And silicon is very good at computation,” Seth says. “But the more you look at brains, the more you realize how different they are from computers.” And, he adds, the less adequate it becomes to describe what they do as mere algorithms.

The prevailing view in the tech sector, he adds, is that LLMs currently are not conscious in the way we experience the world, and probably not in any way. These tools may operate in ways that resemble consciousness but are not in any way equal to human consciousness.

To many people, the idea of AI consciousness is frightening.

“Ironically, in my opinion, these fears are justified, but misdirected,” Porębski says. “They should be directed not at the technology itself, but at the companies and people who create it and put business interests above ethics or human welfare.” The industry has created technologies that are socially uncontrollable and over which there is no actual democratic oversight, he adds.

Seth notes that a lot of people already perceive AI as being conscious. “When [a chatbot] says they know what you’re going through, people believe that. That is very dangerous. If you think you are interacting with something conscious you behave differently than if it is a spreadsheet. We become psychologically vulnerable.”

He worries that people who accept the illusion of machines being conscious could trust them more, and be more open to persuasion.

Porębski agrees. “A program to which users attribute human characteristics can manipulate them much more easily. Perceiving products as conscious beings is risky at so many levels and has no benefits.”

Joachim Keppler, PhD, a theoretical physicist, is director of Germany’s DIWISS Research Institute, which investigates the scientific foundations of a conclusive theory of consciousness. He also finds it worrying that current arguments about the possibility of AI consciousness seem to turn on pure speculation.

Ultimately, he says, the question can only be answered with rigorous science. That seems like a sound argument.


Austin-based science writer and author Melissa Gaskill focuses on ocean issues, endangered wildlife, the environment, and space. She has written for dozens of publications including Mental Floss, Scuba Diving Magazine, Men’s Journal, Alert Diver, Stardate, and Scientific American. 


