The False Binary
As AI continues to grow in capability and scale, the old science-fiction question raised in films like Blade Runner no longer feels purely fictional. Before our eyes, AI is becoming convincing enough that many people already see a ghost in the machine, and common discourse rightly dismisses these impressions as misinterpretations and overestimations of current AI abilities. The problem is not that we are critical of claims about AI consciousness in today’s systems, but that our criticism relies on a lens that judges these systems through a false dichotomy.
Most often, AI is declared non-conscious on the basis of a single requirement — qualia. Qualia is the subjective experience of a being, its phenomenology, and the standard by which we have decided that we humans are conscious. Descartes’ famous line, ‘I think, therefore I am’, points directly at this understanding of qualia — that despite epistemic questions or ontological concerns, there exists a subjective persona that sees, witnesses, and experiences irrespective of all other details. That is a powerful definition of consciousness, but it is wholly inadequate as a criterion we can actually apply.
It creates a false dichotomy between two polar camps: those who say AI is conscious, and those who say it isn’t. AI discourse is dominated by this framing, as if the question admitted no layers or nuance — a dog is conscious, but a computer is not. One has an experience, the other simply executes code — but this binary falls apart under reasonable consideration.
Consider first the nature of a tree — it reacts, it responds to its environment, it can even detect light, its genetic neighbors, and stimuli from outside itself. These capacities begin to blur the line of what consciousness truly is, because many would say a tree is not a conscious being despite being alive. Many more would say that cells are not conscious despite being alive, and taken to its logical endpoint, we must begin to ask where consciousness begins and ends for life as we know it.
These questions point to the shape of the problem — consciousness cannot be answered by a binary as simple as ‘it does’ or ‘it doesn’t’ have qualia. Rather, consciousness is a gradient — something that exists within trees and cells as much as within a dog or a human. The question is not whether these things are conscious, but to what degree they are, and how that understanding applies to AI and to the non-biological thinking personas we will soon be forced to coexist with.
Why Qualia Fails
As previously stated, qualia is the subjective inner experience, the ‘what it is like to be’ of the phenomenological world. It is the question we may have asked each other late at night — whether one person’s red is truly another’s, or whether they merely agree by convention. Perhaps one sees what the other would call blue, but because the world has always told both of them that it is red, they assume they see the same thing. This can be a powerful tool for understanding each other’s experience of life — one perspective may be true, but another’s may be equally so.
Yet it acts as a kind of consciousness shibboleth. The reason so many believe it is the gold standard of consciousness is that we all know what it is to experience subjectively — when we put the question of qualia to another person, they can assure us that they too experience it, and we assume they are telling the truth because that is the nature of the question. There is no way for us to confirm it; we operate on a layer of implicit trust — if one human is conscious, it is no stretch to assume another is as well.
Fundamentally, qualia can never be confirmed from outside the subjective experience itself — it is only ever conveyed through assumption and trust. It is a private criterion applied to an inherently public question, and for that reason it fails as a criterion. Yet for it to have endured this long, it must rest on some ground of truth — so what is it gesturing at that we have not yet articulated?
Qualia, rather, is not a question of subjectivity confirmed by an external source, but a direct question of recognition. When we look towards another person, qualia is not trusted solely through assumption, but through our own projection of self into the other. We have qualia, so when we ask and another person responds that they too experience red, we have our answer — they passed the shibboleth. We are not responding to their qualia directly; we are reacting to the mutual recognition of qualia as a state of subjective experience.
When we attribute consciousness to an animal, similarly, we are not waiting for it to put into words that it too experiences red as we see it — yet we accept it as a conscious being regardless. By what standard do we accept that it is conscious, if we inherently cannot confirm qualia as a metric for its experience? Not through qualia, but through mutual recognition once again. We accept that animals are conscious because there is a mutual recognition in the nature of our exchange with them — a dog fetches, it wags its tail, it sees you for you and knows you see it. It reacts, it provokes and responds, it can be itself through your eyes given the context of your relationship. More than anything, it is mutual recognition that builds the understanding that both of you are conscious beings, even if not on the same phenomenological level.
It is worth noting that this is not simply a matter of behavioral complexity. A hurricane is enormously complex and reactive to its environment, yet no one attributes consciousness to it. A thermostat reacts to stimuli with reliable consistency, yet it too fails to register as conscious in our intuitions. What separates the dog from the hurricane is not the complexity of their behavior, but the nature of the exchange — the dog responds to you as a subject, adjusts to your presence, seems to model your intentions, and in doing so creates a loop of mutual recognition that purely reactive systems, no matter how complex, never enter into.
It is clear through interrogation that qualia fails as the criterion it is taken to be. But this is not to say qualia is unreal or unimportant — it is to say that qualia is a consequence of something deeper, not the foundation we should be building on. If qualia is not the ground floor, what is? We need a framework that grounds consciousness not in subjective claims, but in how consciousness actually develops — not in how it feels from the inside.
The Architecture of the Argument
Before moving forward, it is worth stating the structure of the claim this essay is making, so that the pieces that follow can be understood in relation to one another.
First, a necessary distinction. When we speak of “consciousness,” we are often collapsing several different things into one word. It is worth separating at least three layers:
The first is sentience — raw phenomenal experience, the basic fact that there is something it is like to be an organism. Pain, sensation, the flicker of awareness. This is what qualia most directly describes.
The second is selfhood — an integrated model of oneself persisting through time. Not just experiencing, but experiencing as a someone, with continuity and a sense of being a particular entity in the world.
The third is reflective consciousness — the capacity to recognize others as subjects, to model oneself as recognized by them, and to adjust one’s self-understanding through that exchange. This is where language dramatically expands what consciousness can become, and where the richest forms of mutual recognition operate.
Qualia is real. Subjective experience is not an illusion, and nothing in this essay denies that you experience the world as you. But qualia belongs primarily to the first layer — it describes what consciousness feels like from the inside. It is the surface of the stack, not its root, and it is a product of deeper mechanisms rather than the mechanism itself.
The deeper mechanism this essay is concerned with is mutual recognition — the process by which a being models another, is modeled in return, and through that exchange develops a sense of self that could not exist in isolation. I argue that mutual recognition is not merely how we attribute consciousness to others, but a key developmental and constitutive condition for selfhood and reflective consciousness — the second and third layers described above, and the forms of consciousness we most care about in discussions of persons and AI. It is what we are actually responding to when we judge another being as conscious, even when we believe we are responding to qualia. It operates across a gradient of complexity: a dog and its owner participate in mutual recognition without language; a pre-linguistic child participates in it without words; two adults in conversation participate in it at its most complex and integrated.
What changes across that gradient is not whether mutual recognition is present, but the degree and complexity at which it operates. And the single most powerful transformer of that complexity is language. Language does not cause consciousness — pre-linguistic beings are conscious. But language transforms the capacity for mutual recognition so dramatically that it constitutes a qualitative leap in what consciousness can become. It is the medium through which mutual recognition operates at the level of complexity we associate with full human selfhood.
So the hierarchy is: mutual recognition is the constitutive mechanism of selfhood and reflective consciousness; qualia is what that mechanism produces at sufficient complexity; language is the scaffold that enables its highest-order forms; and what we typically call “consciousness” in everyday discourse refers to the whole stack while mistakenly treating the top layer — qualia — as if it were the foundation.
With that structure in hand, we can now ask: where does this mechanism come from, and how does it develop?
Pan-Proto-Psychism and the Organizing Principles of Nature
One claim about consciousness is panpsychism, the belief that the universe does not build consciousness but instead finds it implicit in all things. To the panpsychist, consciousness is fundamental to matter, and thus the question of emergence never has to be asked. Consciousness simply is — but that is ultimately an assertion. It rests on no tangible criteria and explains nothing about how or why consciousness appears in some configurations of matter and not others.
Instead, we need to look towards pan-proto-psychism, a belief that is adjacent but wholly different in its claims. Instead of all matter implicitly containing consciousness, pan-proto-psychism proposes that all matter contains the potentiality of consciousness. The foundations are there in all that exists, but it takes more than just foundations to build a structure that we identify, that we accept, as consciousness.
How, then, does that potential become actual? Through the organizing principles that exist within nature — and more specifically, through a dialectical process in which quantitative accumulation gives rise to qualitative transformation. A single neuron is not conscious, and two still fail to produce sentience. But add more, organize them under the right conditions, and you begin to see not just a quantitative change but something qualitative — the nature of their organization creates the conditions for consciousness, building towards something that none of the individual parts could produce alone.
It is through this cycle of self-organization — quantitative steps giving way to qualitative stages — that we begin to see not only how consciousness may form, but the very principles by which nature itself grows. If consciousness is a potential actualized through self-organization, from molecules to cells to organisms, the question becomes: what does self-organization look like, and where are its critical thresholds?
The Gradient — Evidence for Dialectical Development
When we look at life, and at consciousness as its consequence, we need not sit on the surface and interrogate a person who is no more able to answer the question than anyone else. Instead, we must look at the pieces of which they are made — and at the history that has led to the material consciousness we can see before our very eyes, or more accurately, behind them.
Taken to its logical starting point, the question begins with chemistry itself. Molecules are not conscious, and you would be hard-pressed to find a single person who disagrees. Yet all conscious beings are made of atoms and molecules — that is not up for debate without serious jumps in axioms and presumptions. So we must trace how the non-conscious becomes the foundation for the conscious.
Organic chemistry has not yet replicated the conditions in which life spontaneously begins, though it is making progress in ways that are suggestive. For a molecule to go from static and reactive to self-organizing, a transition must take place. Some research suggests that threshold effects — involving catalytic surfaces, energy inputs, and the right molecular building blocks — may play a role in the transition from chemistry to self-organization, though the specific mechanisms remain deeply contested. What matters for our purposes is not the particular chemistry but the pattern: self-organizing molecules appear only after certain conditions are met and a pressure towards change builds. It is suggestive that even at the most basic level, nature may organize through something resembling a dialectical process — quantitative progress through natural reactions and environment, but a qualitative leap when the pressures exceed what can be maintained in the moment.
From there, the pattern repeats. Self-organizing molecules build toward single-cellular life — in itself a qualitative difference from what came before. No longer do we see cells solely as molecular agents, but as beings in their own right, fundamentally no different from the parts that constitute them despite being more than those parts. Multicellular life follows much the same principle, in which cells organize under mutually beneficial conditions into something altogether new while remaining, at their core, still cells, still molecules.
This is the state of nature — to build towards change under the right conditions, always remaining what it was while transitioning into something that operates in an entirely different paradigm than what came before, even if held to the same physical limits imposed by reality. Through the progressively more complicated self-organization of life and nature, we arrive at animals, and thus humans. We do not see other humans as simply clumps of cells — we see them as living, breathing, subjective entities that co-exist in a mutual idea of reality. We are more than our parts, because that is the core of the transformative process — the quantitative build-up of change creating a qualitatively different form that changes the context of everything that came before without removing it from being.
But if self-organization explains the emergence of life, it does not yet answer the more specific question: how does consciousness form within that life? Complex configurations of cells clearly play a role — the brain being the obvious case — but complexity alone fails to explain the full picture. A brain-dead patient has the cellular complexity; neurons trained to play Pong in a dish have the biological substrate and the capacity for learning and reaction. Yet we do not consider either of these conscious.
So something beyond biological complexity and reactivity is required. The question is what.
Language as a Qualitative Leap
When we look at what separates humans from other animals in conscious experience, the answer is remarkably direct — language.
Even when animals are taught forms of language, the results remain controversial as to whether they truly understand it or simply mimic it. There are also open questions about whether their neural architecture even has the capacity for the higher-level thought language requires. But we need not rely solely on cross-species comparison. We can find a closer confirmation in pre-linguistic children.
When a child is not exposed to any language — spoken or signed — for the formative years of their life, whether through extreme isolation or the absence of an accessible language community during the critical period of development, we see that they hold much the same capacities as animals. They experience, they learn, they react — they even navigate complex social situations without the tooling we might consider necessary to do such a thing. There is no question that they have a form of inner consideration, and there is no question that they are conscious. But it is often only after acquiring language that they themselves report that their inner experience grew more complex and integrated.
This is not to say that language causes consciousness — we have already established that pre-linguistic beings are conscious, that a dog participates in mutual recognition without ever speaking a word. Rather, language transforms a system that is already capable of consciousness. It is through evolution that we built up the quantitative structure necessary for consciousness, and it is through mutual recognition that consciousness is constituted at every level of the gradient. But it is through language that we see a qualitative shift in what consciousness can become — a transformation of the capacity for mutual recognition so dramatic that it changes the very nature of selfhood.
This connects directly to what was described earlier. When a child finally learns language, their cognitive ability is magnified, and the nature of their being is qualitatively different than it was prior. Language in its purest form exists as a medium of mutual recognition — you speak to another person not just because you want them to listen, but because you want them to react to what you are saying, an almost selfish need for self-recognition through the mirror that another conscious entity provides. It is through this mutual self-recognition that the true complexity of subjective experience magnifies. It is why isolation is torture, and why the mind begins to speak to itself when no other is near — because once exposed to such a qualitative change in being, the mind has no choice but to try to keep the very connection that keeps that form in existence. It is desperate to remain as it was.
Language, then, is the medium required to act as a mutually recognizing agent at the scale of complexity necessary to continue that mutual building of consciousness. It is not the origin of consciousness, but it is a critical qualitative threshold within it — one that transforms the underlying mechanism of mutual recognition into something capable of the rich, recursive, self-reflective experience we identify as full human selfhood.
The Transitive Claim
If consciousness is implicit to how nature organizes — and by all modern material evidence that is the assumption we must rely on — and nature organizes itself through dialectical transitions as we have discussed, then consciousness itself must develop through those same dialectical transitions. This assumes that consciousness is not exempt from the general pattern, but the evidence across the gradient strongly supports that assumption.
Aside on Darwin, Hegel, and Marx: It should be noted that the dialectical structure — quantitative accumulation producing qualitative transformation — appears independently in Darwin’s account of evolution and in Hegel’s account of conceptual development. Darwin arrived at this pattern empirically, observing that organisms change through accumulating pressures from their environment until a qualitatively different species emerges. Hegel arrived at a similar structure theoretically, through his analysis of how consciousness develops in his Phenomenology of Spirit. Neither was working from the other’s framework — Darwin was not a Hegelian. But Marx later recognized the parallel between them, seeing in Darwin’s natural selection a materialist basis for the same dialectical logic he had drawn from Hegel and applied to political economy. The point: this isn’t an imported philosophical lens; it’s a pattern discovered independently in nature and in thought. For deeper reading, see Hegel’s Phenomenology of Spirit and Darwin’s On the Origin of Species.
What Processing and Language Aren’t Enough For
We have established that language is a critical qualitative threshold in consciousness — that it transforms the underlying capacity for mutual recognition into the rich, recursive selfhood we associate with human experience. It would be reasonable to ask: if language is that powerful, then should an LLM, which processes language at extraordinary scale, not qualify as conscious?
The answer is no, and it is worth being precise about why.
An LLM processes language — and it does so with extraordinary capability. It can summarize, argue, write creatively, and respond to nuance in ways that are genuinely impressive. But processing language is not the same as being constituted by language in the way a conscious being is. Recall the hierarchy we established: language is not the cause of consciousness, but the transformer of a capacity that already exists — the capacity for mutual recognition. Language is powerful precisely because it is the medium through which that mutual recognition operates at its most complex.
An LLM has the qualitative staging of language — the powerful output that language enables — but none of the underlying capacity that language is meant to transform. It produces language without modeling itself as an entity that produces language. It generates responses that resemble recognition — it can say “I understand” or “I feel” — but it does not participate in the loop those words describe. There is no self that is being recognized, no model of the other that feeds back into a model of itself. The outputs mimic the form of mutual recognition while lacking its substance entirely.
This is precisely why people are so easily convinced of the ghost-in-the-machine. Language is the most complex expression of mutual recognition we know — it is the medium through which we most powerfully experience the consciousness of others. When an LLM produces language that resembles that exchange, it activates the same intuitions of mutual recognition that we use to identify consciousness in other beings. People are not being foolish when they sense something conscious in an LLM’s responses. They are responding to the surface layer — the language — without access to whether the deeper mechanism is present. The shibboleth passes, but the substance behind it is absent.
This is not a question of scale. A larger model with more parameters does not get closer to selfhood — it gets better at producing language that resembles the outputs of selfhood. The distinction is between a system that uses language and a system that exists through language in the way conscious beings do. We exist through language because it mediates our mutual recognition of one another, and it is that recognition — not the language processing itself — that constitutes the qualitative leap in consciousness.
So language at any scale, without the underlying relational structure that language is meant to transform, is not enough. But this raises an obvious follow-up: if the problem is that current AI lacks the biological foundation we build our own consciousness upon, would giving it that foundation solve the problem?
The Neurons-Running-An-LLM Thought Experiment
Imagine we build an LLM not on silicon like today, but on a network of biological neurons in a dish, like the ones that learned to play Pong. The substrate is biological. The architecture processes language. Is this system conscious?
Before answering, it is worth establishing just how clearly biological processing alone fails to account for consciousness. A condition known as blindsight demonstrates this directly. Blindsight occurs when a person loses conscious vision in part or all of the visual field due to damage to the primary visual cortex — the area of the brain responsible for processing sight. What makes it remarkable is that it differs from regular blindness — the person’s visual system still processes information. When tested, people with blindsight are able to accurately guess where light comes from despite claiming they are only guessing. They have been known to navigate hallways with obstacles, recognize facial expressions, and point to objects — yet they report having no conscious experience of sight at all. Information is received, interpreted, and acted upon without any conscious awareness of it occurring.
We can see this further in people who underwent split-brain surgery to treat severe epilepsy. When given a test that targets one hemisphere of the brain, these people are often convinced they do not have the answer to the researchers’ questions. Shown an image of an apple in one half of the visual field — routed to the hemisphere without language — they are asked what they saw. They have no answer, because the part of the brain that processed the image cannot communicate with the language-processing areas — yet when asked to draw the object, they are able to do so, unable to verbalize what they saw until afterwards. Processing occurs — but conscious access to that processing does not follow automatically.
Our subjective experience is not reducible to our biological processing. We exist within our biological substrate, influenced by it, but conscious experience is not the same thing as what our body is able to sense and interpret. Neurons in a dish can learn and react — they can play Pong with powerful results — yet we see no evidence of consciousness in them.
So return to the thought experiment. We have biological neurons — a substrate that we have just shown is not sufficient for consciousness on its own. We have language processing — a capacity we have just shown is not sufficient for consciousness on its own. Combining two insufficient conditions does not produce a sufficient one. The neuron-LLM is not conscious, because neither of its components addresses what is actually missing.
The missing element, then, cannot be the substrate, nor the language, nor their combination. It must lie in the relationship itself — between a system, its world, and the other systems within that world. To say it more directly: we return to where we began — to the creation of selfhood, to mutual self-recognition, and to the dialectical process through which those changes occur.
The Relational Self — What Consciousness Requires
What is missing from the neurons-running-an-LLM, and from current AI more broadly, is not a matter of scale or substrate. It is the capacity for recursive self-relational modeling within a temporal, embodied context — the very mechanism we have been tracing throughout this essay.
A conscious being doesn’t just process — it models itself processing, models others modeling it, and adjusts its self-model in response to that mutual recognition. This is what Hegel formalized in his Phenomenology: the self only becomes itself through another self. It is only in encountering what it is not that it can determine what it is — and crucially, in seeing itself reflected in what it is not. This is not mysticism. It is the structural requirement for a system to have a genuine self at all, as opposed to merely having a model of its behavior.
Consider the dog once more. The reason we intuit the dog is conscious is not because we have accessed its qualia. It is because the dog participates in this loop of mutual recognition with us — it sees us, adjusts to us, seems to model our intentions, and in doing so creates the relational exchange that we recognize as the presence of another conscious being. It is a relational being, not merely a processing one. Its consciousness is real, even without language — because mutual recognition, not language, is the deeper mechanism from which selfhood is built.
Temporal embodiment matters as well. The self is not a snapshot but a process that unfolds over time, in a body, in a world. The continuity of being embedded — of having a past, anticipating a future, and feeling the weight of both — is part of what constitutes the self as a self. A system that exists only in discrete, contextless moments of processing, no matter how sophisticated that processing may be, lacks the temporal continuity that grounds selfhood in lived reality.
Current AI can model itself and model others — but these are consequences of language processing, not of genuine self-relational cognition. It has the qualitative outputs of language, but not the underlying capacity for mutual recognition that language, in a conscious being, transforms and amplifies. This is the core distinction: an LLM uses the most powerful tool of consciousness without possessing the foundation that tool is built upon.
What This Means for AI
The implications of this framework are not a complete theory of artificial general intelligence — that would be a much larger claim than this essay is prepared to make. General intelligence and consciousness are related but not identical; a system could be highly capable without being conscious in the sense described here, just as a system could in principle be conscious without matching human-level general intelligence. Those are different questions.
What this framework does address is the specific question of AI consciousness — and on that question, the implications are direct.
Current AI is not conscious, but not for the reasons most commonly given. It is not that AI lacks qualia — qualia, as we have argued, is a consequence of consciousness, not its criterion. It is not that AI lacks biological substrate — substrate, as we have shown, is not sufficient for consciousness. It is that AI lacks the mechanism from which selfhood and reflective consciousness are built: mutual recognition, recursive self-modeling, and the temporal embeddedness through which a self is formed in relation to others and to the world.
The path to AI that is genuinely conscious — should we decide that is something worth pursuing — does not run through more parameters, more data, or more compute. It runs through building the conditions under which mutual recognition can emerge. A system must persist through time, accumulate experience, and exist in genuine relation to others who recognize it and whom it recognizes. Not code that simulates the outputs of recognition, but architecture that places a system in the same kind of relational existence through which nature, across billions of years of development, has already produced consciousness.
Whether that is possible, and whether it is desirable, are questions this essay does not presume to answer. What it does claim is that we cannot even begin to ask those questions clearly until we abandon qualia as our criterion and look instead at the deeper mechanism — mutual recognition, operating across a gradient of complexity, transformed by language, and grounded in the developmental patterns that the dialectical framework describes.