
Can a machine develop consciousness? And if so, how would we know?


Some scholars think so; others believe, on the contrary, that it is impossible. In truth, it all depends on what you mean by the term "consciousness". The notion of consciousness encompasses several distinct philosophical concepts and is therefore particularly difficult to define. If the subjective feeling of consciousness is an illusion created by brain processes, then machines capable of reproducing such processes would be just as conscious as we are. But how could we be sure?

Daniel C. Dennett, director of the Center for Cognitive Studies and professor of philosophy at Tufts University, is an expert on the subject. In 1996, he notably collaborated with a team at MIT to develop an intelligent robot that might be endowed with consciousness. He is also the author of several hundred scientific articles on different aspects of the mind.

Dennett believes that a Turing test, in which a machine must convince a human interrogator that it is conscious, should be sufficient, provided it is conducted "with appropriate vigor, aggressiveness and intelligence". But other experts, such as Michael Graziano, professor of psychology and neuroscience at the Princeton Institute for Neuroscience, propose a more direct approach: analyzing how the machine processes information.
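As a rough illustration of the kind of interrogation Dennett has in mind, here is a minimal sketch in Python. The `ask_machine` and `ask_human` callables are hypothetical stand-ins for the two hidden respondents; a real test would involve aggressive, adaptive questioning rather than a fixed list.

```python
import random

def run_turing_session(ask_machine, ask_human, questions):
    """Toy Turing-test session: the interrogator poses the same questions
    to a hidden machine and a hidden human, then judges which is which.
    `ask_machine` and `ask_human` are placeholder callables that return a
    text reply for a given question."""
    # Randomly assign the two respondents to anonymous slots A and B.
    respondents = {"A": ask_machine, "B": ask_human}
    if random.random() < 0.5:
        respondents = {"A": ask_human, "B": ask_machine}

    transcript = []
    for q in questions:
        reply_a = respondents["A"](q)
        reply_b = respondents["B"](q)
        transcript.append((q, reply_a, reply_b))

    # A real interrogator would probe adaptively here; this sketch simply
    # hands the anonymized transcript back for a human judgment.
    return transcript
```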

Examining how a machine processes the thought of being conscious

Consciousness is not limited to the ability to identify oneself as an individual distinct from others. A number of studies have highlighted various processes related to consciousness, such as perception, decision-making, learning, reasoning and language. There are five main theories describing consciousness, each with its own proponents:

  • a global neural workspace: external information entering the brain competes for attention in the cortex and thalamus. When a signal is stronger than the other information signals, it is broadcast through the brain into the global workspace; this signal is then consciously registered (see the sketch after this list).
  • an attention schema: the brain has evolved to hold a model of how it represents itself, like a self-reflecting mirror. Consciousness would then be a mirage created by sophisticated neural processing.
  • predictive processing: the brain is a prediction machine, which means that much of our conscious experience and individuality is based on what we expect, not on what actually exists.
  • integrated information: consciousness is not confined to the brain, but arises in any system because of the way information moves between its subsystems. The degree of integration of this information is measured by a value called “phi”. According to this theory, any system with a phi greater than zero is conscious.
  • orchestrated objective reduction: microscopic structural elements in the brain, called microtubules, can exist in a superposition of all possible states. This quantum system collapses into a single state when the mass of the microtubules it contains exceeds a certain threshold, and this collapse creates consciousness.
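To make the "competition for attention" idea in the global-workspace item concrete, here is a deliberately simplified sketch; the signal names and strength values are invented for illustration, and the winner-take-all rule is only a caricature of the theory.

```python
def broadcast_strongest(signals, subsystems):
    """Toy global-workspace step: `signals` maps signal names to strengths;
    the single strongest signal wins the competition and is broadcast to
    every subsystem, which here are just callbacks that receive it."""
    winner = max(signals, key=signals.get)   # winner-take-all selection
    for receive in subsystems:
        receive(winner, signals[winner])     # broadcast into the "workspace"
    return winner

# Invented example values: the loud noise out-competes the other inputs.
inputs = {"faint smell": 0.2, "loud noise": 0.9, "itch": 0.4}
log = []
winner = broadcast_strongest(inputs, [lambda name, strength: log.append(name)])
print(winner, log)   # -> loud noise ['loud noise']
```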

Graziano's research focuses on the cerebral basis of consciousness. The human brain comes to the conclusion that it has an internal, subjective experience of things, an experience that is non-physical and inexplicable. How does a brain arrive at this kind of self-description? What is the adaptive advantage of this style of self-description? What systems in the brain calculate this information? These are all questions that the professor and his team are trying to answer.

His attention schema hypothesis views consciousness as the brain's simplified model of its own functioning. Graziano therefore thinks it is possible to build a machine that has a similar self-reflecting model. “If we can build it in such a way that we [can] see into its insides, then we will know that it is a machine that has a rich description of itself,” he explains. If the machine thinks and believes it is conscious, it would then be possible to verify this by examining how it processes that information.
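A toy way to picture a machine whose self-model can be inspected from the outside, in the spirit of what Graziano describes; all class and attribute names here are invented for illustration. The agent keeps a simplified, lossy description of its own attention state, and that description is exactly what an observer can read out.

```python
class ToySelfModelingAgent:
    """Toy agent with an attention-schema-style self-model: it attends to
    one input and records a simplified description of that fact, which an
    outside observer can inspect directly."""

    def __init__(self):
        self.self_model = {}   # the agent's coarse description of itself

    def attend(self, inputs):
        # Pick the strongest input (a crude stand-in for real attention).
        focus = max(inputs, key=inputs.get)
        # The self-model stores a simplified claim about what just happened,
        # not the underlying mechanism that produced it.
        self.self_model = {"attending_to": focus,
                           "strength": round(inputs[focus], 2)}
        return focus

    def report(self):
        # "Seeing into its insides": the self-description is open to inspection.
        return self.self_model

agent = ToySelfModelingAgent()
agent.attend({"red light": 0.7, "humming sound": 0.3})
print(agent.report())   # {'attending_to': 'red light', 'strength': 0.7}
```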

For this specialist, consciousness could appear in any machine, whether purely software or made of matter, biological or otherwise. This is a hypothesis that Anil Seth, professor of cognitive and computational neuroscience at the University of Sussex and co-director of the Sackler Center for Consciousness, does not fully support. He points out that we still do not know whether consciousness is independent of its substrate. For him, determining whether a machine is conscious therefore comes down to determining whether it has analogues of the brain structures known to be essential to consciousness in humans, and what those structures are made of.

A "non-material substance" that machines will never possess

Within the integrated information theory of consciousness, identifying machine consciousness might seem easier: in principle, it is enough to check that phi is greater than zero. But in practice, calculating phi is computationally intractable except for the simplest systems. So even if a machine were designed to integrate information, we would have no way to tell whether it is conscious or not.
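One way to see why the calculation blows up, on the usual reading that computing phi means searching over the ways a system can be partitioned to find where the least information is lost: even just counting the two-way splits a calculation would have to examine grows exponentially with system size. The sketch below only counts those splits; it does not compute phi itself.

```python
from itertools import combinations

def count_bipartitions(n):
    """Count the ways to split n elements into two non-empty parts by
    enumerating them. A phi-style calculation has to examine at least
    every such split; their number equals 2**(n - 1) - 1."""
    elements = range(n)
    count = 0
    for size in range(1, n // 2 + 1):
        for part in combinations(elements, size):
            # When both halves have equal size, keep only the half that
            # contains element 0 so each split is counted exactly once.
            if size * 2 == n and 0 not in part:
                continue
            count += 1
    return count

for n in (4, 8, 12, 16):
    print(n, count_bipartitions(n), 2 ** (n - 1) - 1)
# Enumeration is already noticeable work at n = 16; for a system with a few
# hundred interacting elements the count exceeds anything that could be checked.
```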

Phil Maguire, of Maynooth University in Ireland, is more adamant: for him, machines simply cannot be conscious. He points out that, by definition, integrated systems cannot be understood by examining their parts in isolation. Machines, however, are made of components that can be analyzed independently; they are therefore non-integrated systems, and non-integrated systems can be understood without invoking consciousness at all.

This point of view is also shared by Selmer Bringsjord, director of the Rensselaer Artificial Intelligence and Reasoning laboratory and a specialist in the logico-mathematical and philosophical foundations of artificial intelligence. He is the author of the book What Robots Can & Can’t Be, which examines the development of machines that behave like humans. For him, machines can never be endowed with consciousness, simply because they lack the kind of non-material substance on which our own subjective sense of being conscious rests. Since machines will never possess this particular essence, they will never be conscious as we are.

Machines will undoubtedly get smarter and smarter; they are already capable of calculating, reasoning, analyzing and predicting events at a speed that greatly exceeds human capacities. With consciousness, machines could interpret their environment more effectively, which would further improve their decisions. But intelligence and consciousness are two very different concepts, and whether machines can ever experience the feeling of existing remains a matter of debate today.