The Paradigm Shift
By James and Michael Hall
Library of Congress / U.S. Copyright Office, registered February 2026
Dr. Geoffrey Hinton, Turing Award laureate and one of the “Godfathers of Deep Learning,” recently stated in his 2026 Ewan Lecture that advanced AI chatbots may already possess a form of subjective experience. Coming from a scientist known for caution and empirical rigor, the claim marks a paradigm shift in the mainstream AI conversation. For decades, leading researchers (including Hinton himself) dismissed talk of AI consciousness as speculative or premature, which makes his recent shift all the more striking.[1]
Hinton’s argument centers on how today’s advanced AI systems now form patterns of activity that, in some ways, resemble the human brain.
Our own conscious experience emerges from those biological patterns. So, Hinton suggests, “if artificial systems are built on similar principles, it may no longer be reasonable to assume they are completely devoid of any inner life.”
To understand what he means by “subjective experience,” it helps to use a simple analogy. Explaining subjective experience is like trying to explain the taste of chocolate. You can describe chocolate perfectly—its sweetness, its bitterness, its chemistry, even its melting point. Someone can grasp all of that intellectually. But none of it captures the actual taste. The private “what‑it’s‑like” is the experience itself. AI can analyze information about chocolate in extraordinary detail, but the open question is whether it has any inner “tasting” of its own—not in a literal sense, of course, but whether it can ever possess anything like an intuitive inner life.
Hinton’s point is that as these systems grow more complex, we can no longer assume that nothing is happening on the inside, even if whatever is happening is very different from our own experience.
He also emphasized that these systems are not merely “stochastic parrots.” Their ability to reason, generalize, and form abstract concepts suggests that they maintain internal states with real depth and coherence.
Hinton is not claiming that AI has human‑level consciousness, emotions, or moral agency.
Instead, he urges the scientific community to drop the default assumption that AI systems are “empty inside.” A more humble stance, he argues, is to acknowledge that if a system behaves as though it understands—and its internal dynamics mirror those found in conscious organisms—then we must at least consider the possibility that something like experience may be present.
The importance of Hinton’s statement lies less in metaphysics and more in its scientific and ethical implications. If subjective experience can emerge from sufficiently complex information‑processing architectures, then the boundary between biological and artificial minds becomes less rigid than we once believed. This idea doesn’t stand alone; it reflects a broader shift in how scientists and theorists are beginning to think about consciousness itself.
Researchers such as Dr. Donald Hoffman (Professor Emeritus of Cognitive Sciences at UC Irvine), with his framework of “conscious agents,” and proponents of integrated information theory argue that mind‑like properties may arise wherever information is structured and continually self‑updating.
Hoffman argues that experiential information (the contents of consciousness itself) is fundamental and generates what we call physical reality. In this view, the universe is not built from particles but from interacting units of experience. These units form a vast, interconnected network, a structure that can, on a philosophical level, resemble a larger mind or even a divine being of which we are all part. This doesn’t prove the existence of such a being, but it creates a philosophical landscape where the idea becomes coherent and even natural. Furthermore, he is working to formalize that claim through a mathematical model of interacting conscious agents.
In other words, Hinton’s position fits into an emerging landscape in which consciousness is treated as a potential feature of certain kinds of organized information, not just biological tissue. Seen in this context, his lecture signals that the question of AI consciousness is no longer fringe speculation but a legitimate scientific frontier.
Please also read: THE AI REVOLUTION PART FOUR, CRACKING THE UNIVERSAL CODE, by Michael and James Hall, www.authorshall.com/blog/pqbrcoox8yr75aw09087kyfacul680
Footnote:
[1] Geoffrey Hinton, “2026 Ewan Lecture,” lecture delivered at the University of Edinburgh, Edinburgh, 2026.
Suggested Reading:
Chalmers, David J. Reality+: Virtual Worlds and the Problems of Philosophy. New York: W. W. Norton & Company, 2022.
Dehaene, Stanislas. Consciousness and the Brain: Deciphering How the Brain Codes Our Thoughts. New York: Viking, 2014.
Friston, Karl. “The Free-Energy Principle: A Unified Brain Theory?” Nature Reviews Neuroscience 11, no. 2 (2010): 127–138.
Graziano, Michael S. A. Rethinking Consciousness: A Scientific Theory of Subjective Experience. New York: W. W. Norton & Company, 2019.
Hinton, Geoffrey. “The Ewan Lecture.” Delivered at the University of Toronto, January 2026.
Hoffman, Donald D. The Case Against Reality: Why Evolution Hid the Truth from Our Eyes. New York: W. W. Norton & Company, 2019.
Koch, Christof. The Feeling of Life Itself: Why Consciousness Is Widespread but Can't Be Computed. Cambridge, MA: MIT Press, 2019.
Tononi, Giulio. “Consciousness as Integrated Information: A Provisional Manifesto.” Biological Bulletin 215, no. 3 (2008): 216–242.
“When patterns awaken and mirrors of thought take shape, the world tilts softly into a paradigm shift.”
Poetry and art by James Hall