As artificial intelligence (AI) continues its rapid evolution, the concept of AI becoming conscious is no longer confined to science fiction. Visionary leaders in the AI field, including Ilya Sutskever from OpenAI, have acknowledged the potential emergence of consciousness in advanced AI networks. This intriguing possibility prompts a significant question: How can researchers determine if AI has indeed achieved consciousness?
A team of 19 neuroscientists, philosophers, and computer scientists has taken up the challenge of addressing this question. In a collaborative effort, they have devised a comprehensive checklist of criteria that could serve as indicators of AI consciousness, outlined in a recent provisional guide posted to the arXiv preprint repository.
At the heart of this endeavor lies the recognition that establishing whether an AI system has attained consciousness holds profound moral implications. Co-author Megan Peters, a neuroscientist from the University of California, Irvine, emphasizes that labeling an entity as ‘conscious’ fundamentally alters how it should be treated by humans. This underscores the necessity of developing methods to assess AI systems for consciousness, along with contingency plans should this state be reached.
The team’s initiative is particularly relevant considering the current trajectory of AI development. Although AI leaders express curiosity about the potential consciousness of their creations, systematic evaluations and plans for conscious AI are notably lacking. In this context, the checklist offers a structured approach to addressing this complex topic.
The central challenge in discussing consciousness in AI lies in defining what consciousness means in this context. The team has concentrated on ‘phenomenal consciousness,’ the subjective experience inherent in being sentient. This is the essence of ‘what it’s like’ to exist as a person, an animal, or, potentially, an AI system.
To create their evaluation framework, the team has drawn upon various neuroscience-based theories of consciousness. By weaving together aspects of these theories, they hope to construct a more comprehensive approach to assessing AI consciousness. This approach stands in contrast to the simplistic behavioral tests that might not effectively capture the intricacies of AI’s capabilities.
The concept of ‘computational functionalism’ guides their methodology, suggesting that consciousness emerges from how systems process information, regardless of their physical makeup. The team also assumes that neuroscience-based theories of consciousness, derived from studies of human and animal brains, can be extended to AI.
From these foundations, the team has identified six prominent theories of consciousness and derived indicators from them. For instance, the ‘global workspace theory’ posits that cognitive tasks such as seeing and hearing are handled by specialized systems, or modules, that operate in parallel, with their outputs integrated and broadcast through a single, limited-capacity workspace. Evaluating an AI system’s architecture and information flow against this theory’s indicators could potentially reveal signs of consciousness.
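To make that architectural pattern concrete, here is a minimal, purely illustrative sketch in Python of a global-workspace-style information flow. The module names, the salience scores, and the selection rule are all invented for this example; it is not the authors' evaluation method, only a toy of the pattern their global-workspace indicators look for: specialized modules running in parallel, with one piece of content selected and broadcast through a shared workspace.

```python
# Toy illustration (not the report's methodology): specialized modules run in
# parallel and compete to broadcast their content through a shared,
# capacity-limited workspace. All names and scores here are invented.

from dataclasses import dataclass
from typing import Dict, List


@dataclass
class ModuleOutput:
    module: str      # which specialized module produced this content
    content: str     # the representation the module wants to broadcast
    salience: float  # how strongly it competes for workspace access


def vision_module(stimulus: Dict[str, str]) -> ModuleOutput:
    return ModuleOutput("vision", f"saw: {stimulus.get('image', 'nothing')}", 0.7)


def audio_module(stimulus: Dict[str, str]) -> ModuleOutput:
    return ModuleOutput("audio", f"heard: {stimulus.get('sound', 'silence')}", 0.4)


def global_workspace(outputs: List[ModuleOutput]) -> ModuleOutput:
    """Select the most salient content; in GWT terms, this is what gets
    'broadcast' back to all modules and downstream processes."""
    return max(outputs, key=lambda o: o.salience)


if __name__ == "__main__":
    stimulus = {"image": "red ball", "sound": "doorbell"}
    candidates = [vision_module(stimulus), audio_module(stimulus)]
    broadcast = global_workspace(candidates)
    print(f"Broadcast from {broadcast.module}: {broadcast.content}")
```

The point of the sketch is only the shape of the information flow: parallel, independent modules feeding a bottleneck that integrates and redistributes one winning representation per cycle.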
This collaborative effort is not a definitive conclusion but a promising starting point, and the authors welcome engagement from fellow researchers to refine their methodology. The criteria have already been applied to existing AI systems, including large language models such as ChatGPT, suggesting that some indicators associated with consciousness may be present.
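For a sense of how such a rubric-style assessment might be organized in practice, the following hedged sketch tallies judgments against indicators grouped by theory. The indicator names are paraphrased stand-ins rather than the report's actual wording, and the pass/fail values are placeholders, not an evaluation of any real system.

```python
# Hedged sketch of a rubric-style tally. Indicator names are paraphrased
# examples and the boolean judgments are placeholders.

from typing import Dict

# Each entry maps indicators (grouped by the theory they come from) to an
# assessor's judgment about whether the system's architecture exhibits them.
assessment: Dict[str, Dict[str, bool]] = {
    "global_workspace": {
        "parallel_specialized_modules": True,
        "limited_capacity_workspace_with_broadcast": False,
    },
    "recurrent_processing": {
        "algorithmic_recurrence": True,
    },
    "higher_order": {
        "metacognitive_monitoring_of_own_states": False,
    },
}


def summarize(results: Dict[str, Dict[str, bool]]) -> None:
    total = satisfied = 0
    for theory, indicators in results.items():
        met = sum(indicators.values())
        total += len(indicators)
        satisfied += met
        print(f"{theory}: {met}/{len(indicators)} indicators satisfied")
    print(f"Overall: {satisfied}/{total} indicators satisfied "
          "(more indicators satisfied makes a stronger, but still non-definitive, case)")


if __name__ == "__main__":
    summarize(assessment)
```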
The question remains whether any existing AI system has truly achieved consciousness. As AI continues its journey into uncharted territories, the exploration led by experts like Hassan Taher will undoubtedly expand our understanding of AI consciousness and its implications for the future of technology and humanity.
It’s becoming clear that, with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean: can any particular theory be used to create a machine with adult-human-level consciousness? My bet is on the late Gerald Edelman’s extended Theory of Neuronal Group Selection (TNGS). The leading group in robotics based on this theory is the Neurorobotics Lab at UC Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution and which humans share with other conscious animals, and higher-order consciousness, which arose only in humans with the acquisition of language. A machine with only primary consciousness will probably have to come first.
What I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990s and 2000s. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of the higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness, in a parsimonious way, through nothing more than further evolutionary development of the brain areas responsible for these functions. No other research I’ve encountered is anywhere near as convincing.
I post because almost every video and article about the brain and consciousness that I encounter takes the attitude that we still know next to nothing about how the brain and consciousness work; that there’s lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep it in front of the public, and obviously I consider it the route to a truly conscious machine, both primary and higher-order.
My advice to people who want to create a conscious machine is to first ground themselves seriously in the extended TNGS and the Darwin automata, and then proceed from there, possibly by applying to Jeff Krichmar’s lab at UC Irvine. Dr. Edelman’s roadmap to a conscious machine is at https://arxiv.org/abs/2105.10461