Part 1: Specifying the Area of Interest
Vectors Are All You Need
Earlier in my life, I wrote a chapter called ‘chaos’ in a book I was writing. It was essentially a long-winded deconstruction of light and matter into component data that interact with each other, so that light + a particular protein/molecule + interpretation from a brain = the experiential feeling of, for example, seeing a color. I stretched that idea past its breaking point when, as an illustrative tool, I went on to describe a ‘being’ that can ‘see’ all light without any interpretive structures; I imagined it would be like a chaotic static without form. I describe that background because, in the following chapter, I brought forward a nascent version of the Kantian theory I’m presenting now: we need to bias our interpretation of data to make heuristic sense of what’s going on around us and produce optimal responses. From there, I diverge from the Kantian line by proposing the evolutionary model of transcendental idealism and a far more nuanced tool for understanding reality.
I want to build a system with universal composability from what is true at the biochemical level, to the realm of mental interiority, and further to the realm of macroscopic behavioral analysis, while remaining flexible enough to be integrated into practical applications. As I’ve stated before, inside this framework perception, interpretation, understanding, and action are one manifold; they are all the same thing. This normalization greatly simplifies the mathematics of state transitions: we no longer need to factor in different subroutines of the observer, and we can ignore the machinery of the brain entirely, because a thing is what it does. You can map phenomenal states as they pass from one form to another and call that change the product of a ‘transcendental function.’ The molecules in a rod cell interacting with light are as transcendental as the ‘image’ cast from the eyes into the brain, and as the action that a particular constellation of light causes the observer to take.
Let me declare my priors before continuing; we will return to each of them in more precise form later.
- For the purposes of this framework, every observable change in matter can be modeled as a vector.
- These vectors are meaningless to an organism until an evolutionary process encodes significance onto the observer.
- I will call the pre-interpreted side of this picture noumenal reality vectors: the raw distinctions available prior to the organism’s full phenomenal organization of them.
- These are not “seen” directly by consciousness; they are the bits of raw, unfiltered reality that first enter the system through physical interaction.
- Reality arrives as a continuous stream, but for modeling purposes we can sample that stream into discrete state-snapshots. At any given snapshot, a local noumenal state contains the vector-representable material changes relevant at that moment and position.
- An observer’s qualia—phenomenal reality—can likewise be modeled as a state that updates over time. That state is the representation of feelings, visuals, memories, action-tendencies, and every other cognitive process available to the observer at that moment.
- Evolution encodes a seed for the organism’s Transcendental Embedding. This is the inherited template of reality the organism can, in principle, be privy to: evolution guarantees an expected version of reality for the lineage, and that inherited seed structures how noumena are turned into phenomena, and then recursively into further phenomena, at each moment of qualia.
- There exists an ambient latent space—conceptually open-ended, and idealized here as infinite-dimensional—into which experiencable phenomena can be projected.
- A transformation function maps noumenal inputs together with the current phenomenal state into new phenomenal representations.
- This inherited seed represents the organism’s interpretive lens: the lineage-fixed framework through which experience must first pass.
- This genetic seed is, in itself, unchanging and stateless; later development, memory, and history determine how that inherited template is realized in the individual.
- The Transcendental Embedding transforms noumenal vectors into phenomenal vectors through two complementary processes:
- Projection: mapping raw inputs into meaningful coordinates within the organism’s latent space.
- Transformation: combining the current phenomenal state with new inputs under the inherited rule-set.
- These phenomenal vectors are representations of reality as it appears to the organism: this is the framework in which perception, interpretation, and action are treated as one continuous process.
- Paired with this inherited seed is a transition rule that maps one phenomenal state to the next. In the idealized version of the framework, that rule is treated as fixed with respect to the seed itself, while the organism’s actual state supplies the changing input.
- Taken together, the inherited seed and the transition rule generate phenomenal vectors. Those phenomenal vectors are reality as it appears to the organism.
- Noumenal reality vectors are invisible to consciousness; they are only encountered through the chain of physical interactions that first register them.
- Everything available to conscious experience is already on the phenomenal side of the transformation.
- In principle, one can trace the transformation of a noumenal vector into a phenomenal vector through the biochemical and computational chain that produces the experience.
- The recursive mapping from one phenomenal state to the next is, for the organism, its lived reality.
- In the idealized version of this framework, I assume a deterministic update rule: given the organism’s inherited seed and the full current phenomenal state, the next phenomenal state is fixed.
WARNING: I’m using a simpler, incomplete, and somewhat contradictory version of the term “Transcendental Embedding” so that it is easier to grok now; it will be explained more fully later.
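To make these priors concrete before the story that follows, here is a minimal Python sketch of the moving parts they name: a noumenal input, a fixed inherited seed, projection, transformation, and a deterministic update. Every name, size, and matrix in it is an illustrative stand-in of my own, not part of the framework:

```python
import numpy as np

rng = np.random.default_rng(0)
NOUMENAL_DIM, PHENOMENAL_DIM = 12, 6          # arbitrary toy sizes

# The inherited "seed" is stateless and fixed: here, two frozen matrices.
V = rng.standard_normal((PHENOMENAL_DIM, NOUMENAL_DIM))        # projection into the organism's coordinates
W = rng.standard_normal((PHENOMENAL_DIM, 2 * PHENOMENAL_DIM))  # transformation of (state, input) -> next state

def transition(phenomenal_state, noumenal_state):
    """Deterministic rule: same seed + same inputs -> same next phenomenal state."""
    projected = V @ noumenal_state                        # projection
    combined = np.concatenate([phenomenal_state, projected])
    return np.tanh(W @ combined)                          # transformation

state = np.zeros(PHENOMENAL_DIM)                          # the organism's phenomenal state
for _ in range(3):
    noumenal_sample = rng.standard_normal(NOUMENAL_DIM)   # one sampled noumenal snapshot
    state = transition(state, noumenal_sample)            # lived reality = this recursion
print(state.round(3))
```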
From your eyes to inside your mind, you are currently running extremely complex systems to organize these letters into discernible symbols and then translate that ordering of symbols into information you can grok. But let’s pretend, for a moment, that you were much dumber than you are now—so dumb, in fact, that you are not even conscious. Things simply happen to you. You barely have a sense of time. Your vision is more like a flash of symbols to which you can only have strong affective reactions. Here is a story of your life:
You’re walking along and suddenly, out of nowhere:
A0 [0.54, -0.13, 0.75, 0.42, -0.26, 0.87]
Of course, in a moment of panic, you feel:
B [0.32, 0.69, -0.15, 0.78, 0.25, -0.44]
And instinctually you do:
C [0.61, -0.33, 0.48, 0.91, -0.18, 0.36]
Whew, thank God that’s over; now you want to do:
D [0.27, 0.72, -0.09, 0.65, 0.41, -0.53] with the:
A1 [0.54, -0.13, 0.75, 0.42, -0.26, 0.10]
...Kinda gross, but whatever.
Notice the change confined to the last coordinate between the first and last object-vector, 0.87 → 0.10: most of the structure remains intact while one aspect of the represented object has changed. In this toy example, the system’s bias structure turns
A0 [0.54, -0.13, 0.75, 0.42, -0.26, 0.87]
into
A1 [0.54, -0.13, 0.75, 0.42, -0.26, 0.10],
and everything in between is the vector representation of the internal phenomenal activity required to bring about that change. On this view, the feeling and the instinctive action can be written in the same general format.
For readability, I decomposed the previous series into separate time-steps. That is not how the process actually unfolds. In reality, these states overlap and bleed into one another. The point of the decomposition is only to show how one structured representation can recruit another by shared positions. If A0 activates the system and B appears immediately afterward, you should imagine the coordinates of A0 and B occupying the same larger state-space, with some regions active and others blank. In that more realistic presentation, the same story looks like this. Let S(n) denote the state at discrete modeling step n:
S1[...0.54,-0.13,0.75,0.42,-0.26,0.87,0,0,0,0,0,0,0,0,0,0,0,0,0 ... 0]
A0 alone
S2[...{0.54,-0.13,0.75,0.42,-0.26,0.87},
{0.32,0.69,-0.15,0.78,0.25,-0.44},
0,0,0,0,0,... 0]
A0 + B
S3[...{0.54,-0.13,0.75,0.42,-0.26,0.87},
{0.32,0.69,-0.15,0.78,0.25,-0.44},
{0.61,-0.33,0.48,0.91,-0.18,0.36},
... 0]
A0 + B + C
S4[...{0.54,-0.13,0.75,0.42,-0.26,0.10},
0,0,0,0,0,
0,0,0,0,0,
... 0]
A1
S5[...{0.54,-0.13,0.75,0.42,-0.26,0.10},
{0.27,0.72,-0.09,0.65,0.41,-0.53},
0,0,0,0,0,
... 0]
A1 + D
I added the {} only to make the story more legible and to separate features of the represented state; they are not part of the formal system itself. Notice also that the B and D representations share positions. That suggests a region of the state-space associated with an affective or motivational response to the coordinates occupied by A.
I am using a more tangible action-based example here, but the same logic would apply to something as simple as light hitting a photoreceptor and the observer mapping color and position into a phenomenal representation. Depending on the application, you can average the state over time, or produce a higher-order vector representation of the whole sequence. That may be lossy, but that is acceptable: evolution does not need a perfect copy of reality. It needs an approximation of reality good enough to bring about adaptive state-transitions.
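If it helps to see the bookkeeping, here is a minimal Python sketch of the same toy episode laid into one shared state-space. The region boundaries, the 18-dimensional size, and the closing time-average are illustrative choices of mine, not commitments of the framework:

```python
import numpy as np

# Illustrative regions of one shared 18-dimensional state-space:
# coordinates 0-5 hold the object (A), 6-11 the affective response (B/D),
# and 12-17 the action (C). The region sizes are arbitrary for this sketch.
STATE_DIM = 18
OBJ, AFFECT, ACT = slice(0, 6), slice(6, 12), slice(12, 18)

A0 = np.array([0.54, -0.13, 0.75, 0.42, -0.26, 0.87])
B  = np.array([0.32,  0.69, -0.15, 0.78, 0.25, -0.44])
C  = np.array([0.61, -0.33, 0.48, 0.91, -0.18, 0.36])
A1 = np.array([0.54, -0.13, 0.75, 0.42, -0.26, 0.10])
D  = np.array([0.27,  0.72, -0.09, 0.65, 0.41, -0.53])

def snapshot(obj=None, affect=None, act=None):
    """Build one state S(n); unoccupied regions stay at zero."""
    s = np.zeros(STATE_DIM)
    if obj is not None:    s[OBJ] = obj
    if affect is not None: s[AFFECT] = affect
    if act is not None:    s[ACT] = act
    return s

S = [
    snapshot(obj=A0),                      # S1: A0 alone
    snapshot(obj=A0, affect=B),            # S2: A0 + B
    snapshot(obj=A0, affect=B, act=C),     # S3: A0 + B + C
    snapshot(obj=A1),                      # S4: A1
    snapshot(obj=A1, affect=D),            # S5: A1 + D
]

# B and D occupy the same region: the affect/motivation slot for whatever
# object currently occupies the OBJ region.
mean_state = np.mean(S, axis=0)            # one (lossy) summary of the whole episode
print(mean_state.round(3))
```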
There are many ways to describe the world. People often say that functions describe the world, and that is true as far as it goes. But most approximations—including functions in isolation—describe only the world of appearances available to the model. The advantage of the vector-space picture is that it lets us describe many different levels of organization inside one composable framework. Neural networks are still useful here, but they are downstream processors of structured representations; they are not, by themselves, a theory of how those representations are made available to the organism.
So, in this section, I assume an open-ended nonlinear mapping rule capable of weighting and summing phenomenal vectors as they co-occur. That rule takes the current phenomenal state, maps it through the organism’s inherited interpretive structure, and yields a new phenomenal state. Later, when we get to applications, the practical problem will be path-isolation and signal-processing: given the noise of a large state-space, which contributing factors produce the strongest signal for the transition we care about?
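As a rough preview of that path-isolation problem, here is a hedged sketch: a frozen toy nonlinear update stands in for the inherited mapping rule, and a crude finite-difference sensitivity ranks which coordinates of the current state carry the strongest signal for one chosen transition. The rule, the dimensions, and the sensitivity measure are all stand-ins I picked for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
DIM = 18                                    # same toy state-space size as above

# A frozen, seed-like nonlinear update rule: state -> next state.
W = rng.standard_normal((DIM, DIM)) / np.sqrt(DIM)
def update(state):
    return np.tanh(W @ state)

state = rng.standard_normal(DIM)

# Path isolation, crudely: perturb one coordinate at a time and measure how
# strongly the transition we care about (here, coordinate 5) responds.
TARGET, EPS = 5, 1e-4
baseline = update(state)[TARGET]
sensitivity = np.zeros(DIM)
for i in range(DIM):
    bumped = state.copy()
    bumped[i] += EPS
    sensitivity[i] = abs(update(bumped)[TARGET] - baseline) / EPS

strongest = np.argsort(sensitivity)[::-1][:3]
print("coordinates with the strongest signal for the target transition:", strongest)
```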
The broader claim is not that the organism first builds a neural network and only then acquires a world. It is the reverse. The organism inherits a structured way of carving reality into usable distinctions, and the processor it builds later operates within that inherited space. Evolution gives the observer a repertoire of vector-like distinctions—time, space, color, objectness, shape, bodily boundary, hierarchy, proportion, attention, lower-order affect, higher-order affect, symbolic meaning, interiority, and so on. Action is a byproduct of the continuous processing of these distinctions through the organism’s interpretive structure.
It is important to understand the primitives before the abstractions. We want a representation of phenomenal life that is mathematically tractable without pretending that every observer receives the same world in the same way. The same broad stream of reality may confront multiple observers, but different inherited structures and different realized histories will transform that stream into different phenomenal outcomes.
If you want an intuitive picture, think of the inherited structure as a massive coordinate-ready template and of lived history as the process that determines how that template is realized in the individual. In principle, that structure constrains how you can react to the world; in practice, any model we build will only ever approximate that structure. Evolution is the encoding protocol, compression is part of the mechanism, and the point of the framework is to describe how reality is described for an organism—not to confuse the description with the thing itself.
Why use an embedding system rather than just talk about neural networks? Because I am not trying only to describe a processor. I am trying to describe the representational conditions that make processing possible in the first place. In that sense, what this section does is combine the space/time side of experience with the rest of phenomenal life into one common representational framework, rather than treating them as separate faculties that must later be stitched back together.
Note also that we are never really processing one isolated datum at a time. The phenomenal stream is continuous, even when the model samples it discretely. The inherited structure is relatively stable; your experiences, memories, and moods are not modifications to the seed itself so much as modifications to the stream it is processing and to the realized organization built on top of that seed. That is why this device can represent what Kant’s table of categories cannot: not just a generic human mind, but, in principle, the different ways minds can be structured across organisms and across individuals.
So ends Kant from the Evolutionary Perspective and Vectors Are All You Need.
Next, we get into the real meat: what an application of this theory looks like, and how to do these calculations.
The Nature of Phenomenal Reality: What are we trying to measure?
Most of the time, when mathematicians invoke the ‘infinite,’ it is to simplify a problem and to say something concrete about finite things. We are continuing that tradition: by emphasizing the infinite ways in which reality can be understood, we can isolate a finite representation relevant to us. To simplify further, if you haven’t gathered this already, we are working with a deterministic model rather than a probabilistic one. We assume that if perfect knowledge of noumenal and phenomenal reality could be encoded into the state, and if we knew the nonlinear mapping function with precision, then we could obtain the next state. Philosophically, probability is just a way of accounting for a lack of understanding about the world, and no event is truly random; for the next series of equations, then, we also assume a deterministic world governed by finite cause and effect.
In the previous section, we went over the concept of Transcendental Embeddings and assumed the finite vector space the organism plays in. In ‘God’s Infinite Vector Space’ there are infinite dimensions of information: infinite ways to represent the underlying vectors of information that make the disordered world into an apparent unity for the observer. Fundamentally, this is what quantum physics attempts with Hilbert spaces: the state of a particle is described with as many dimensions of information as needed to specify what the particle is doing. In the same way, we describe the state of the observer with as many dimensions of information as needed to specify what the next state of observation will be.
So what we are trying to find first is the universal function that maps the current state onto the next state for the organism. Keep in mind we are trying to approximate the whole evolutionary history of the organism as that history encoded a representation of reality onto the organism’s genes. So how do we reduce the complexity to something tangible? Let’s start by recalling that every organism’s representation of reality—its distinct “slice” of the infinite dimensional chaos—emerges from the genetic blueprint that encodes both the capacity to perceive and the biases that drive interpretation. This genetic blueprint constrains the otherwise unbounded dimensionality of possible experiences to something finite. The guiding question, then, is how to operationalize this genetic constraint in a mathematical sense so that we can start constructing (or at least approximating) that “universal function” which transitions an organism from one phenomenal state to the next.
From the infinity of potential experiences, genes carve out a stable but extraordinarily high-dimensional subspace: the Transcendental Embedding. Within this subspace, an organism’s capacities—whether they be sensory (e.g., the ability to sense temperature, pressure, color, etc.) or conceptual (e.g., forming an internal sense of power, confidence, or fear)—are written in the coordinates of a Hilbert space. Each gene (or set of genes) provides a vectorial “template” for how certain categories of data (like “red light,” “predatory motion,” “threat posture,” “affection cues,” etc.) get processed and transformed.
The trick is that no organism has conscious access to these gene-level templates; instead, their outputs simply appear as “normal” to the experiencing subject. That is, if your lineage evolved to treat the color red as an immediate signifier of threat, that intense emotional surge you feel upon seeing red is not optional but a genetically inherited, encoded response. The genetically molded embedding architecture—the shape of your finite subspace—guarantees that specific transformations from input to output (or from “noumenal vectors” to “phenomenal vectors”) will feel self-evident or even inescapable.
Also, keep in mind that we do not want a statistical framework; the ideal framework here is, in essence, a deterministic model in which we are attempting to do calculus rather than statistics. To start with, probability would add unnecessary complexity, and so we have to ask: what if reality weren’t probabilistic?
The Evolutionary Mechanism for Encoding Transcendental Embeddings
It is useful to imagine humans as experiencing a small slice of all possible ways to experience reality. Within this small slice there is enormous variation; however, we are still subject to an evolutionary inheritance that provides us with a predictable range of ways to perceive and structure reality. You could imagine the way humans experience reality as a series of dimensions. To keep it simple, we could have dimensions for the range of colors we see, as well as dimensions for how we position those colors in our mental interior when a photon hits our eyeball. We could build on this by adding other dimensions and associating groups of pixels. Then we could create structures from those groupings and assign dimensions that associate meaning (or potential for meaning) with those superstructures. Finally, you could imagine dimensions that represent placeholders for the superstructures humans expect in their lives. An example would be something like a mother figure, or what a place to sit looks like, or what violence is, or even something as complex as a god object. These are complex phenomena that depend on humans holding a particular psychic position in reality; the god object, for example, could be computed by finding and averaging the vectors of communal bonding, fatherhood, war, purity, spite, revenge, love, sacrifice, death, externalized meaning, care, fear, language, outsiders, and so on. The addition of all these vectors could make a person strongly or weakly predisposed towards believing in god, and coupled with a supporting or dismissive society you get a range of possible outcomes for how a human internalizes and practices the god object.
All of the sub-vectors of the god object are their own series of complex dimensions as well. The war object might require communal bonds, hunger, fear, negative ethnocentrism, concepts of ownership and land, hierarchy, and so on. Let’s get even simpler: the hunger object could be composed of the vectors of glucose saturation, fullness, thirst, fat concentration, presence of ghrelin, and other biochemical factors. The sleight of hand I performed was relating biochemical signals to things that exist purely as concepts in the mind of the subject, like god and war. Ah, but that is the point! To the mental interior, there is no difference between the experience of feeling hungry (even though it has simplistic origins) and feeling like your group is at war; they both offer the same kind of qualia to the experiencer. Humans (to varying individual degrees of intensity and presence) have a space in their minds for both concepts.
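A minimal sketch of this “compound object” idea, with made-up sub-vectors and made-up weights; only the operation of averaging (or weighted summing) sub-vectors into a higher-order object is the point:

```python
import numpy as np

rng = np.random.default_rng(2)
DIM = 8                               # arbitrary dimensionality for the sketch

# Made-up sub-vectors; in the framework each would itself be a compound of
# simpler vectors, all the way down to biochemical signals.
sub = {name: rng.standard_normal(DIM) for name in
       ["glucose_saturation", "fullness", "thirst", "fat_concentration", "ghrelin"]}

# The hunger object as a plain average of its biochemical sub-vectors.
hunger = np.mean(list(sub.values()), axis=0)

# A higher-order object built the same way from conceptual sub-vectors,
# with per-vector weights standing in for individual predisposition.
concepts = {name: rng.standard_normal(DIM) for name in
            ["communal_bonding", "fatherhood", "war", "sacrifice", "externalized_meaning"]}
weights = {"communal_bonding": 1.2, "fatherhood": 0.8, "war": 0.5,
           "sacrifice": 1.0, "externalized_meaning": 1.5}
god_object = sum(weights[k] * v for k, v in concepts.items()) / sum(weights.values())

# To the mental interior, both land in the same kind of representation.
print(hunger.shape, god_object.shape)
```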
All these complex traits being the result of a series of simpler objects makes their examination and deracination possible, and makes them completely computable. But how did humans become so complex? Why is the range of our experiences so large relative to other creatures? How did we obtain this Transcendental Embedding? To be overly simplistic, we can trace our evolutionary lineage backward and watch the addition and pruning of dimensions over time. This process also gives us insight into which organisms we overlap with in experience; however, it is crucial to note that convergent evolution of mental interiors is entirely possible. I’m sure corvids and humans use very different neural hardware to arrive at the same implicit understanding of the physics of buoyancy and water, and thus we can ‘share’ some dimensions of how we experience reality, just as most complex organisms share the reality dimensions of 3D space and time.
To be very clear: some number N of seemingly arbitrary vectors creates all of the expected reality you have. To have any level of precision, it is helpful to imagine this vector array in the hundreds of billions per human, and maybe only a couple of hundred for something like an earthworm.
But let’s start at the beginning and answer the question: how did every organism gain the dimensions by which it understands reality? We begin with the most austere state imaginable: a membrane-bound molecule that can do only one thing—detect whether its ionic balance crosses a critical threshold that will rupture the lipid wall. That single binary distinction creates the first coordinate of an experiential space: intact → viable, ruptured → death. Every replication cycle that preserves this discrimination reinforces its fitness value, so the “membrane-rupture axis” becomes genetically fixed.
Random copying errors then throw up tiny tweaks: a peptide that bends when it binds a proton, a chromophore that flips shape when hit by a photon, an ion channel that opens more readily in warmer fluid. Each tweak is, in effect, a proposal for a new axis, a new vector of feeling reality. Most axes cost energy without adding reproductive payoff, so they fade. Occasionally, one lets the organism swim toward glucose or duck away from acid, and descendants bearing that vector out-compete their siblings. The vector is retained, and the organism’s Transcendental Embedding expands from 1D to 2D, 3D, and so on.
With every retained axis, earlier ones are not discarded but re-encoded into higher-order compound dimensions. Once a light-sensitive molecule exists, downstream mutations wire two such molecules together, letting the organism register differences in intensity rather than mere presence. That difference becomes a new vector—contrast. A later duplication introduces a second pigment shifted in wavelength; the comparator circuit now yields a chromatic axis. What began as a single light/no-light bit has unfolded into a color cube where the vectors of understanding reality become synergistically linked.
This simple logical mechanism is highly scalable. Chemical gradients become taste maps; pressure sensors become a multi-point body schema; socially relevant signals (gaze direction, group ranking) become vectors in an emerging social manifold. At each step, selection keeps only those dimensions whose ‘predictive power’ outweighs their metabolic drag. Complexity is the byproduct of past environments and developments that rewarded sharper distinctions of the noumena and phenomena available to the organism.
By the time we reach hominids, the embedding has accrued countless axes. Some are exteroceptive (hue, pitch, depth), others interoceptive (blood CO₂, hunger peptides), and others still are purely synthetic: patterns that exist only as concepts, such as tool, ally, lover, or taboo. (These are synthetic a priori structures; for Kant such categories could not have been built up out of experience, but for us this is simply the nature of reality.) The aggregate is a pseudo-species-level template: the expected set of distinctions any typical human can, in principle, entertain. However, the resolution of our examination should be the individual organism. An individual inherits their template, their seed Transcendental Embedding, fully formed at birth. If an illness or pathology in the individual eliminates the red-cone pigment, the axis for “long-wave chromatic contrast” is still present in the template; the ‘expectation’ of the organism is that the information is present, but the data stream along it is fixed at zero. Reality shrinks by one shade, even though its axis persists in the mathematically expected blueprint of this individual. Our concept of self-preservation appears complex; however, it is a product of our ancestors’ efforts to keep their lipid membranes from rupturing. Inside these vectors, of course, we have weights of importance dictated by distance from the origin; self-preservation, for example, would sit relatively far away.
To be brief:
- A mutation is proposed that creates a new distinction in the fabric of reality.
- That mutation constitutes a new vector of qualia for the organism.
- Based on reproductive success, the mutation is passed down.
- Retention of the mutation embeds that vector in all offspring of the organism until it is pruned.
- The addition of new vectors creates richer composite coordinates, and thus more differentiated versions of reality.
Formalization
(Bold lowercase symbols denote column vectors, bold uppercase symbols denote matrices, and calligraphic symbols denote spaces or distributions.)
Before we continue: the purpose of the philosophical argument above was to simplify the essential elements to the point where we can inject them into standard, proven mathematical frameworks and systems. From here on, if the previous axioms have been accepted, I build on them in a series of sequences layered on top of one another and attempt to formalize a proof. I have to reiterate: I am not inventing new math.
Lastly, this work has been an absolute tour-de-force of effort; as a consequence, we should assume there may be logical gaps or errors in my formalization. Email me if you see anything wrong.
1. Universal arena
Let \(\mathcal N\) denote the noumenal arena: the ambient space of possible distinctions from which an organism’s reality is carved.
We need a container large enough to hold every possible distinction reality could make for all organisms and people, including the distinctions no organism has ever perceived. Take \(\mathcal N\) to be a real separable Hilbert space: an infinite coordinate system where each axis represents a genuinely independent distinction that can phenomenally be made. For concreteness, think of \(\ell^2\), the space of infinite sequences of real numbers whose squares sum to something finite, which gives us the tidiness we need without sacrificing the infinite space we require. Let \((\mathbf e_k)_{k=1}^{\infty}\) be an orthonormal basis for \(\mathcal N\). A full noumenal microstate at time \(t\) is then
\[
\mathbf n_t \;=\; \sum_{k=1}^{\infty} n_{t,k}\,\mathbf e_k \;\in\; \mathcal N .
\]
This ambient state is not what the organism experiences. It is the total field of possible distinctions relevant to that moment. Experience begins only after a lineage-specific projection has been applied.
If one wants the intuitive reading: \(\mathbf n_t\) is the full noumenal state, while \(\mathcal N\) is the space in which such states live.
2. Species-level Transcendental Embedding
Evolution does not give a lineage access to all of \(\mathcal N\). It preserves a finite set of distinctions that proved useful for survival and reproduction. These distinctions span the lineage’s accessible subspace.
Define the species-level transcendental embedding by
\[
\mathcal M^{\mathrm{spec}} \;=\; \operatorname{span}\{\mathbf v_1,\ldots,\mathbf v_d\} \;\subset\; \mathcal N .
\]
Here \(\mathbf v_1,\ldots,\mathbf v_d\) are the lineage-selected axes of distinction. For simplicity, assume they are orthonormal:
\[
\langle \mathbf v_i, \mathbf v_j \rangle \;=\; \delta_{ij}, \qquad i,j = 1,\ldots,d .
\]
This lets us represent the species template in two equivalent ways.
First, as a projection operator onto the accessible subspace:
\[
P^{\mathrm{spec}}\mathbf x \;=\; \sum_{i=1}^{d} \langle \mathbf v_i, \mathbf x \rangle\, \mathbf v_i , \qquad \mathbf x \in \mathcal N .
\]
Applied to the noumenal state \(\mathbf n_t\), this yields the part of the world the lineage can in principle access:
\[
P^{\mathrm{spec}}\mathbf n_t \;=\; \sum_{i=1}^{d} \langle \mathbf v_i, \mathbf n_t \rangle\, \mathbf v_i .
\]
Second, as a coordinate encoder that expresses the same accessible slice in \(d\) coordinates:
\[
z_{t,i} \;=\; \langle \mathbf v_i, \mathbf n_t \rangle , \qquad i = 1,\ldots,d .
\]
In matrix form,
\[
\mathbf z_t \;=\; \mathbf V^{\top}\mathbf n_t , \qquad \mathbf V \;=\; \bigl[\,\mathbf v_1 \;\; \cdots \;\; \mathbf v_d\,\bigr].
\]
The vector \(\mathbf z_t\) is the lineage-accessible coordinate description of the noumenal state at time \(t\). It is not yet the full phenomenal state. It is the species-level input from which phenomenal processing will later be built.
This preserves your earlier intuition:
- \(\mathcal N\) is the total noumenal arena.
- \(\mathcal M^{\mathrm{spec}}\) is the finite slice evolution preserved for the lineage.
- \(\mathbf z_t\) is the organism-usable coordinate form of that slice at a moment in time.
If one wants the “pick a column” picture, each \(\mathbf v_i\) can be taken as one basis direction \(\mathbf e_{k_i}\). If one wants composite detectors, each \(\mathbf v_i\) can be a weighted combination of basis directions. The second form is more general and is the better default.
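For readers who want to see the algebra run, here is a small numerical sketch of this section under the stated assumptions, with a finite ambient dimension standing in for \(\mathcal N\) and orthonormal axes obtained by a QR factorization; all sizes are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(3)
AMBIENT, D = 50, 4        # finite stand-in for the noumenal arena, and d lineage axes

# Lineage-selected axes v_1..v_d, made orthonormal via QR.
V = np.linalg.qr(rng.standard_normal((AMBIENT, D)))[0]   # columns are v_i

n_t = rng.standard_normal(AMBIENT)      # noumenal microstate at time t

z_t = V.T @ n_t                         # coordinate encoder: z_t = V^T n_t
m_t = V @ z_t                           # accessible projection: P n_t = V V^T n_t

# Sanity checks: projecting twice changes nothing, and anything orthogonal
# to the lineage's axes is invisible in the coordinates z_t.
assert np.allclose(V @ (V.T @ m_t), m_t)
residual = n_t - m_t                    # the part of the noumenal state the lineage cannot access
assert np.allclose(V.T @ residual, 0, atol=1e-10)
print(z_t)
```

Nothing here depends on the particular numbers; the whole species-level step is just the two operations \(\mathbf V^{\top}\mathbf n_t\) for coordinates and \(\mathbf V\mathbf V^{\top}\mathbf n_t\) for the accessible projection.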
3. Axis-creation rule (mutation + selection)
Now let evolutionary time be indexed by \(\tau\), and let \(\mathcal M_{\tau}^{\mathrm{spec}}\) denote the current species-level template at that stage.
A mutation proposes a candidate distinction \(\Delta\mathbf v \in \mathcal N\).
To determine whether it adds a genuinely new axis, first remove the part already captured by the existing template:
\[
\Delta\mathbf v_{\perp} \;=\; \Delta\mathbf v \;-\; P^{\mathrm{spec}}_{\tau}\,\Delta\mathbf v ,
\]
where \(P^{\mathrm{spec}}_{\tau}\) is the projection onto \(\mathcal M_{\tau}^{\mathrm{spec}}\).
If \(\Delta \mathbf v_{\perp}=0\), then the mutation adds no new distinction: the information it would supply is already representable in the current template. If \(\Delta \mathbf v_{\perp}\neq 0\), normalize it:
\[
\mathbf v_{\mathrm{new}} \;=\; \frac{\Delta\mathbf v_{\perp}}{\lVert \Delta\mathbf v_{\perp} \rVert}.
\]
This normalized vector is the genuinely new candidate axis.
Let \(\mathcal E_{\tau}\) denote the distribution of environments encountered by the lineage at evolutionary stage \(\tau\). Let \(F(e, \mathcal M)\) denote expected reproductive value in environment \(e\) for organisms whose accessible subspace is \(\mathcal M\), and let \(C(\mathbf v_{\mathrm{new}})\) denote the cost of maintaining the new axis: metabolic cost, developmental cost, wiring cost, false-positive cost, and related burdens.
Define the net fitness contribution of the candidate axis by
\[
\Delta F(\mathbf v_{\mathrm{new}}) \;=\; \mathbb E_{e \sim \mathcal E_{\tau}}\!\Bigl[ F\bigl(e,\; \mathcal M_{\tau}^{\mathrm{spec}} \oplus \operatorname{span}\{\mathbf v_{\mathrm{new}}\}\bigr) \;-\; F\bigl(e,\; \mathcal M_{\tau}^{\mathrm{spec}}\bigr) \Bigr] \;-\; C(\mathbf v_{\mathrm{new}}).
\]
The retention rule is then
\[
\mathcal M_{\tau+1}^{\mathrm{spec}} \;=\;
\begin{cases}
\mathcal M_{\tau}^{\mathrm{spec}} \oplus \operatorname{span}\{\mathbf v_{\mathrm{new}}\} & \text{if } \Delta F(\mathbf v_{\mathrm{new}}) > 0,\\[4pt]
\mathcal M_{\tau}^{\mathrm{spec}} & \text{otherwise.}
\end{cases}
\]
So the story is:
- mutation proposes a candidate distinction;
- subtract what the lineage already captures;
- evaluate expected reproductive gain minus cost;
- retain the axis only if the net contribution is positive.
Iterating this rule across evolutionary time yields the species-level accessible template: repeat the proposal-and-retention step as many times as the lineage’s history demands, and you arrive at the species-template embedding for humans.
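Here is a hedged sketch of that iteration. The fitness-gain and cost functions below are placeholders (a toy random gain and a flat cost), since the real quantities depend on the lineage’s environment distribution; only the orthogonalize-then-test loop is meant to track the rule above:

```python
import numpy as np

rng = np.random.default_rng(4)
AMBIENT = 50

def fitness_gain(template, candidate_axis):
    # Placeholder for E_e[F(e, M + span{v_new}) - F(e, M)]: here just a toy random draw.
    return rng.normal(loc=0.0, scale=1.0)

def cost(candidate_axis):
    # Placeholder for metabolic / developmental / wiring / false-positive cost.
    return 0.3

template = np.zeros((AMBIENT, 0))            # start with no axes: M_0 is empty

for tau in range(200):                       # evolutionary stages
    delta_v = rng.standard_normal(AMBIENT)   # mutation proposes a candidate distinction
    # Remove the part already captured by the current template.
    delta_v_perp = delta_v - template @ (template.T @ delta_v)
    norm = np.linalg.norm(delta_v_perp)
    if norm < 1e-12:
        continue                             # nothing genuinely new
    v_new = delta_v_perp / norm
    # Retain only if the net fitness contribution is positive.
    if fitness_gain(template, v_new) - cost(v_new) > 0:
        template = np.column_stack([template, v_new])

print("retained axes:", template.shape[1])
```

Swapping the placeholder gain and cost for anything environment-dependent leaves the loop itself unchanged; that is the sense in which the rule is meant to be scalable.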
Species template vs. individual embedding
This is a lineage-level template. It says what kinds of distinctions a member of the lineage can in principle represent.
It is not yet the full individual transcendental embedding. We’ll get to that in the next section.