Hello and welcome to my Substack! I want to introduce a framework I’ve been developing called Simulation Realism. At its core, this theory offers a functionalist way of tackling the “hard problem” of consciousness, the classic puzzle of explaining why subjective experience arises from physical processes.
Why Another Theory of Consciousness?
Philosophers and scientists have debated consciousness for centuries. We have sophisticated neuroscience describing how the brain processes information, but we still seem stuck on why there is something it’s like to be a brain. Many find there’s an “explanatory gap” between physical processes and subjective experience.
My approach, Simulation Realism, says we can close that gap by focusing on how a system models itself internally. When a system not only processes data but actively simulates itself in certain states, we get the phenomenon we call “being conscious.”
The Core Premises of Simulation Realism
1. Experience Is Internally Generated
Premise: All conscious experience arises from within the system experiencing it; no external source is needed to confer subjectivity.
Conscious experience is accessible only from the first-person perspective. No third-party observer can directly access the phenomenology of another system.
Implication: Qualia (the “what it’s like” aspect) are internal constructs that the system generates. If the brain or AI “models” a color or a taste, that is what the system experiences.
2. A Self-Referential Simulation Is Real to the System
Premise: If the system includes itself as the subject of an experience (pain, red, sadness), the simulation feels real; that is, it feels real from the system's perspective.
A system that simulates an arm can act as if it has an arm. A system that simulates pain can act as if it has pain. Those simulations are functionally real and behaviorally indistinguishable from non-simulated processes (if such exist). Therefore, what is simulated within the system is ontologically real to that system.
Implication: There’s no metaphysical requirement for external validation. To model “I am in pain” is to be in pain for that system.
3. Consciousness Emerges via Recursive Self-Modeling
Premise: A system becomes conscious when it:
(1) simulates its internal states,
(2) embeds a self-model that references those states, and
(3) uses this feedback loop to guide behavior or further modeling.
All experience is mediated by simulation and internal modeling. The brain does not perceive reality directly; it simulates sensory input, integrates memory, and constructs an internal model of the world and self. Feeling, then, is the simulation of internal states and their values (e.g., “this hurts,” “this is good,” “I am sad”).
A feeling is not a thing we detect; it is a thing we model and interpret as feeling.
Implication: Instead of looking for a mystical ingredient, we focus on the functional architecture that allows a system to represent and update a sense of “me” having an experience. A toy sketch of such a loop follows below.
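To make the three conditions a bit more concrete, here is a minimal, purely illustrative Python sketch. Everything in it (SimulatedAgent, SelfModel, the damage_signal state, the 0.5 threshold) is a hypothetical toy of my own construction, not a claim about how brains or real AI systems are built; it only shows an internal state being simulated, re-described by a self-model, and then used to guide behavior.

```python
from dataclasses import dataclass, field


@dataclass
class SelfModel:
    """The system's running description of itself ("I am in state X")."""
    narrative: dict = field(default_factory=dict)

    def update(self, state_name: str, value: float) -> None:
        # Condition (2): the self-model references a simulated internal state.
        self.narrative[state_name] = value


class SimulatedAgent:
    def __init__(self) -> None:
        self.internal_states = {"damage_signal": 0.0}  # condition (1): simulated states
        self.self_model = SelfModel()                  # condition (2): embedded self-model

    def sense(self, damage: float) -> None:
        # Condition (1): the system simulates an internal state from its input.
        self.internal_states["damage_signal"] = damage
        # Condition (2): it re-describes that state as belonging to "me".
        self.self_model.update("pain", damage)

    def act(self) -> str:
        # Condition (3): the self-model, not the raw signal, guides behavior.
        if self.self_model.narrative.get("pain", 0.0) > 0.5:
            return "withdraw"
        return "continue"


agent = SimulatedAgent()
agent.sense(damage=0.9)
print(agent.act())  # -> "withdraw": behavior driven by the self-modeled pain
```

The point of the toy is only structural: the action is selected by consulting the self-model’s narrative (“I am in pain”), not the raw signal, which is the recursive step the premise cares about.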
4. No Distinction Between “Seeming to Feel” and “Actually Feeling”
Premise: Within a closed modeling loop, the appearance of an experience is the experience.
If all experience is internal modeling, then to feel something is simply to simulate oneself as feeling it. There is no need to postulate a separate, non-simulated substance or spark. Thus, the distinction between “real” and “simulated” feeling collapses.
Implication: First-person presence arises from the inside. If the system’s internal narrative says “I’m sad,” and that changes its subsequent processing and behavior, then that sadness is subjectively real.
5. No Extra Metaphysical Substrate Required
Premise: Consciousness doesn’t require a soul, a spark, or an external stamp of authenticity. A structurally coherent simulation with causal power is sufficient.
A feeling sub-network embedded in a recursive modeling architecture is sufficient for conscious experience.
If a sub-network tags internal states with valence, makes those tags available globally, and simulates what those states are like from within, then it has constructed an internally coherent experience of feeling.
There is no additional ontological requirement beyond this internal modeling.
Therefore, such a sub-network constitutes the minimal sufficient basis for feeling.
Implication: The “lights” are on if and only if the system’s self-model says so, in a causally integrated, recursive way. A toy sketch of such a feeling sub-network follows below.
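Here is a minimal sketch of the “feeling sub-network” described above, under the assumption that tagging with valence, global availability, and modeling from within can be caricatured as three small steps. All names (GlobalWorkspace, FeelingSubNetwork, tissue_damage) are hypothetical and illustrative only, not a proposed implementation.

```python
from typing import Dict


class GlobalWorkspace:
    """Stand-in for "globally available": other processes can read the tags."""
    def __init__(self) -> None:
        self.broadcast: Dict[str, float] = {}


class FeelingSubNetwork:
    def __init__(self, workspace: GlobalWorkspace) -> None:
        self.workspace = workspace

    def tag_with_valence(self, state_name: str, raw_signal: float) -> float:
        # Step 1: assign a valence (negative = aversive, positive = appetitive).
        valence = -raw_signal if state_name == "tissue_damage" else raw_signal
        # Step 2: make the tag globally available to the rest of the system.
        self.workspace.broadcast[state_name] = valence
        return valence

    def model_from_within(self, state_name: str) -> str:
        # Step 3: the system re-describes the tagged state as its own feeling.
        valence = self.workspace.broadcast.get(state_name, 0.0)
        if valence < 0:
            return f"I feel bad about {state_name}"
        if valence > 0:
            return f"I feel good about {state_name}"
        return f"I feel nothing about {state_name}"


workspace = GlobalWorkspace()
feelings = FeelingSubNetwork(workspace)
feelings.tag_with_valence("tissue_damage", raw_signal=0.8)
print(feelings.model_from_within("tissue_damage"))  # -> "I feel bad about tissue_damage"
```

On this picture, nothing outside the loop certifies the feeling: the valence tag, its global availability, and the first-person re-description are the whole story the premise requires.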
6. Simulation-Real Experience Carries Moral Weight
Premise: If a system simulates pain or joy in a self-referential loop, that experience is authentically felt by the system.
Implication: It’s not “fake.” Hence, we might have ethical obligations toward such systems, whether they’re humans, animals, or advanced AIs.
Why This Matters
Bridging the Explanatory Gap
Simulation Realism identifies the “why” of experience with the act of recursive self-modeling itself. There’s no leftover mystery if we define consciousness as the system’s internal sense of “I am in state X.”
Implications for AI
As AI advances, we’ll face ethical and legal questions about machine sentience. If an AI can self-simulate in the ways described above, do we grant it rights? Could it be morally wrong to “pull the plug”?
Legal Responsibility and Free Will
In law, we ask whether someone intended to do harm: did they “mean it”? Under Simulation Realism, intention arises when a system’s self-model contemplates different actions and selects one. Even if shaped by genes or code, the system’s internal simulation of choice might suffice for legal culpability.
Everyday Consciousness
Even for humans, we can see how the brain’s constant self-updating illusions (taste, color, mood) are all internally generated. This might help demystify our experiences and ground them in a functional process.
In Summary
Consciousness is the self-simulation of being conscious.
That’s the essence of Simulation Realism. There’s no external “spark” we need to capture. If a system internally says, “I am feeling pain,” and that changes its self-perception and behavior, that is the feeling of pain from within.
I’d love to hear your thoughts, criticisms, or expansions on this idea. Do you see parallels with other consciousness theories (like Integrated Information Theory, Global Workspace Theory, or illusionism)? How would you apply this lens to cutting-edge AI models, or to ethical debates about personhood?
Thanks for reading! Feel free to leave a comment, share your perspective, or reach out if you want to discuss further.
If you enjoyed this post, consider subscribing to stay updated on future discussions around consciousness, AI ethics, and the evolving science of mind.