1. Hierarchy: Independent levels of resolution.
This allows for zooming in and out. Different levels of focus may reveal different patterns. A non-linear relationship can sometimes be approximated as linear on a small scale. This is one of the main principles underlying the general theory of relativity. Manifolds appear Euclidean locally but may have a different global topology. Generally speaking, it is more convenient to work with lines than curves.
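A small numerical sketch of the "zooming in" idea: the function and window below are arbitrary choices of mine, but the pattern is general — the deviation of a curve from its tangent line shrinks rapidly as the window narrows.

```python
import math

def linear_error(f, x0, h):
    """Max deviation of f from its tangent line on a window of half-width h."""
    slope = (f(x0 + 1e-6) - f(x0 - 1e-6)) / 2e-6  # numerical derivative at x0
    return max(
        abs(f(x0 + t) - (f(x0) + slope * t))      # curve minus line
        for t in (-h, -h / 2, h / 2, h)
    )

# Zooming in: the smaller the window, the better the line fits the curve.
for h in (1.0, 0.1, 0.01):
    print(h, linear_error(math.sin, 1.0, h))
```

The error falls off roughly with the square of the window size, which is why working with lines instead of curves is so often good enough locally.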
Systems theory and mathematics are in some sense the collection of rules that work across scale. Sets can be thought to transform into other sets. Such sets can contain elements or even other sets. The idea of containment is central to such theories and is so fundamental in our minds that it is often overlooked.
At times it may be best to ignore scale and isolate observations. This gives a clear view of the local data and shines light onto certain asymmetries or irregularities. Such information can be extremely useful when estimating the effects of aggregation.
2. Exclusivity: Parts are mutually exclusive when dividing wholes.
If the default state of a system is modeled with a monism, then identity is the only quality of interest. It is a whole that is not broken down into parts. Reduction cannot go on forever, so at some point an agent must stop at a whole. However, reduction is necessary for consciousness to exist at all. Apparent boundaries of existence define our reality. Without parts, there is no alternative state or position within that existence. It is all just one thing.
Duality breaks the whole into two parts. Those parts can be further broken down into sub-dualities. If the parts of a whole are not mutually exclusive, then the distinction generates no information. The switch must be either on or off, but such certainty is not always practical. Distinguishing between two parts with confidence can be challenging when the signal is distorted by noise. At the human level, there is a significant amount of such noise due to the complexity/size ratio.
Binary classification trees are popular because each two-way decision carries a low error rate when typing. Hence the use of transistors in computing and my use of binaries in personality theory. Selecting between two alternatives is more accurate than selecting between twenty. This does not mean a binary encoding is optimal, though: a shallower, wider hierarchy can be quicker to navigate.
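The trade-off can be sketched numerically. The per-choice accuracies below are hypothetical numbers of my own, standing in for the claim that picking one of two is easier than picking one of sixteen in a single step:

```python
def depth_needed(n_items, branching):
    """Number of b-way choices needed to single out one of n items."""
    d, reach = 0, 1
    while reach < n_items:
        reach *= branching
        d += 1
    return d

def path_accuracy(n_items, branching, per_choice_acc):
    """Chance of reaching the right leaf when each choice is made
    correctly and independently with probability per_choice_acc."""
    return per_choice_acc ** depth_needed(n_items, branching)

# Hypothetical accuracies: picking 1-of-2 is easier than 1-of-16 in one go.
print(path_accuracy(16, 2, 0.95))   # four binary choices
print(path_accuracy(16, 16, 0.70))  # one sixteen-way choice
```

With these particular numbers the four accurate binary choices beat the single error-prone sixteen-way choice; nudge the per-choice accuracies the other way and the wider, shallower hierarchy wins instead.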
3. Symmetry: Parts of the same whole and resolution have equal impact.
Impact is sometimes hard to judge, especially across contexts. Setting a boundary is often hard due to our inability to accurately predict the consequences of non-linear interactions. Often, a measure is distributed according to a power law in some uniform space. Dividing down the middle in such a situation would lead to an asymmetric sampling of attention/time/energy in the space relative to the impacts of each side.
For instance, if you want to predict the future state of a market, then spending an equal amount of time researching every company in that market would be inefficient. Not every company has the same impact. A better strategy is to focus most of your time researching large cap companies and a relatively small amount of time researching mid/small cap companies (perhaps in accordance with a power law).
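A toy version of that strategy, with made-up market caps following a rough power law:

```python
# Hypothetical market caps, roughly power-law distributed (largest first).
caps = [1000, 500, 250, 125, 60, 30, 15, 8, 4, 2]
total_cap = sum(caps)
hours = 100

# Uniform split vs. splitting research time in proportion to impact (cap).
uniform = [hours / len(caps)] * len(caps)
weighted = [hours * c / total_cap for c in caps]

top2_share = (caps[0] + caps[1]) / total_cap
print(f"top 2 firms hold {top2_share:.0%} of the market")
print(f"uniform gives them {sum(uniform[:2]):.0f}h, weighted {sum(weighted[:2]):.0f}h")
```

The uniform split gives the two dominant firms only a fifth of the research time despite their holding roughly three quarters of the impact; the weighted split matches attention to impact.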
Overall, this is the most challenging criterion to meet, yet it is vital to the development of elegant models.
Monday, September 9, 2019
Thursday, September 5, 2019
Cognitive Functions
S: Sensory
N: Intuition
F: Feeling
T: Thinking
i: introversion
e: extroversion
Such functions can be combined into 16 types, each with a four-letter code. If interested, search online for the cognitive function stacks to see which orderings are associated with each type. Everyone uses all eight functions, but each individual tends to specialize in one perception axis and one judging axis (which tend to be a bit asymmetrical). Personality theory is based on the principle of exclusivity, meaning that there is always a trade-off associated with wiring the brain in a certain way. I will probably go into this in more detail later.
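A minimal sketch of why four binary distinctions yield exactly 16 codes. The J/P letter follows the standard four-letter convention rather than anything listed above:

```python
from itertools import product

# Four binary distinctions -> 2**4 = 16 four-letter type codes.
types = ["".join(t) for t in product("EI", "SN", "TF", "JP")]
print(len(types))  # 16
print(types[:4])   # ['ESTJ', 'ESTP', 'ESFJ', 'ESFP']
```

This is the exclusivity principle in combinatorial form: each letter position is a mutually exclusive two-way split, so the whole space multiplies out cleanly.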
Knowledge functions are ways of knowing. Particular instances in memory can serve to validate a hypothesis on the metaphysical plane. A universal truth is an instinctual leap to a conclusion based on prior action on the physical plane.
Action can be done on either the physical or metaphysical plane. The metaphysical plane references the physical plane with metaphor. Any symbolic or abstract form is rooted in metaphor.
Interfacing can be done with either agents or objects. Agents are assumed to be capable of making selections. Objects can be referenced from conditions generated by agents.
Deliberation functions are methods of deliberation. Selections are unconditional during agent interfacing (though may be influenced by conditioning). By unconditional, I mean that there is no apparent mechanism behind it (this does not mean the mechanism is non-existent). Generated conditions eventually get selected (or not) by agents while engaged in object interfacing.
This is just my particular take on the model. If you have an alternative suggestion that still retains symmetry, then feel free to leave a comment explaining what and why. Ultimately, the accuracy of such models is dependent on how the user trains them with data. Be wary of over/under fitting, and keep in mind that models are not the real thing.
Agency
Agency is the quality of making decisions. Such a quality assumes free will, or an ability to choose between alternate future realities. Determinism is hard to argue against, but that does not stop human subjects from ignoring it. Humans have an inherent desire to feel as if they are in control, even if that control is choosing between a sandwich and a pizza for lunch. On some occasions, the number of alternatives is so overwhelming that the subject freezes into submission. In such cases, the agent may beg nearby acquaintances to choose for them, lest they suffer a panic attack.
Every agent has a type and a replicator. Roughly speaking, types are stochastic and replicators are deterministic. Types are linked to psychology and linguistics. Replicators are linked to evolution and epistemology. Types engage in particular information processes. Replicators counteract dissipation and generate observations to be statistically validated. Keep in mind that I am attempting to create a far-reaching frame that goes well beyond the typical scope of what a 'type' or a 'replicator' is. Human types and genetic replicators are only small, albeit important, subsets. For instance, a replicator could also be an experiment where several human scientists attempt to replicate the results by performing a specific measurement using a common method.
Types are located in a territory. The closest equivalent here is ecology or systems theory. Incomplete information and limited attention are the primary qualities of this branch. Each type builds a model of its environment that represents the known, but such an expansive external reality will always be partially unknown in the eyes of the beholder.
Types have a complex. Complexes are subject to the considerations of theoretical computer science (not so theoretical nowadays). Complexity, information, communication, computation, algorithms, data structures, and so forth are relevant here. The basic elements are codes and data streams. As data is collected, it is encoded into memory and later decoded to be sent out as a signal to other nodes in the network. In an imperfect world, signal interference is a battle that must be fought with redundancy.
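One classic form of that redundancy (a textbook illustration, not anything specific to this frame) is the repetition code: repeat each bit and let a majority vote undo isolated interference.

```python
def encode(bits, n=3):
    """Repetition code: redundancy fights interference by repeating each bit."""
    return [b for b in bits for _ in range(n)]

def decode(received, n=3):
    """Majority vote over each group of n repeats recovers the original bit."""
    return [int(sum(received[i:i + n]) > n // 2)
            for i in range(0, len(received), n)]

msg = [1, 0, 1, 1]
sent = encode(msg)          # each bit sent three times
sent[1] ^= 1                # interference flips one repeat
sent[4] ^= 1                # ...and another, in a different group
print(decode(sent) == msg)  # prints True: the vote recovers the message
```

The cost is a threefold increase in traffic, which is the general shape of the bargain: redundancy buys reliability with bandwidth.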
Replicators have an economy due to their scarcity. Supply and demand curves can be sketched with the aid of various counting methods. Typically, such analysis would require sampling and estimation. Agents can exchange replicators in a market according to their preferences. In the aggregate, symmetries and asymmetries are the name of the game.
Replicators are associated with a mechanism or deterministic process. This is the domain of physics, where any apparent randomness is considered a failure. Agents agree on a unit of measure, then several measurements can be performed. After a little cross-validation and model fitting, certain invariants may start to become apparent. Some quantities (like the total energy of a closed system) do not change over time, though systems can only be approximated as closed at either very large or very small scales.
Overall, this is just a meta-frame that pegs traditional schools of thought onto a tidy hierarchy. In no sense is it a 'theory of everything' or a 'unification'. Rather, it is more like a toy that should be thrown against the wall, kicked out the window, and run over by a semi-truck, perhaps causing the driver of the semi-truck to stop and give the kid a better toy out of guilt. Internalizing a meta-frame such as this can help keep the mind organized while everything else falls apart.
Subjective Objects
One paradox arises when some subjects consider some other subjects as objects. This is commonplace in our society, where humans are frequently reduced to low-dimensional representations for the purpose of employment. This is a necessary condition for any organization to grow and survive the competition. Corporatization minimizes the human component of the equation in favor of a more powerful and expansive order. In such cases, another paradox can come into play when humans treat the emergent order as a subject. In the opposing direction, individual cells in the human body are sometimes considered independently of the host organism (the human).
In light of such paradoxes, I find it best to equate the set of all subjects to the set of all individual humans. This rule is not always easy to follow given that groups often appear to make decisions collectively. Nevertheless, this rule helps keep insanity at bay. Decisions can only be made by individuals, and groups of individuals can only agree and/or act on decisions constructed and/or discovered by individuals (the construction versus discovery distinction is a topic for another time).
Attention is a limited resource for all subjects. Personality theory attempts to categorize different types of attention. Information theory can provide a mathematical frame for the communication of information generated with each type. Such formalization is not absolutely necessary for understanding, though links of one kind or another are required to anchor down abstractions. Otherwise, they tend to float away like dissipating clouds.
Patterns are non-linear in the sense that no universal instructions exist telling subjects how to read them. This does not stop subjects from attempting to serialize such patterns into comprehensive stories or narratives, just as I am doing right now. This is partially why I like to provide diagrams that serve as a crude non-linear metaphor in addition to several supporting narrations. Another idea to keep in mind is that patterns are one derivative removed from whatever reality is, so patterns are dimensionless ratios of how one quantity changes with respect to another quantity of the same dimension. Not sure if all that makes sense, but it does seem to fit with modern psychology.
This post may be updated as time unfolds.