Continuing along the syllabus, there’s more to chew on with David Chapman's A bridge to meta-rationality vs. civilizational collapse. Chapman starts by discussing Robert Kegan’s model of adult cognitive, affective, and social development and points to his own summary of it, which is important for understanding both his post and where I’m going with this one.
Summarizing the summary (but definitely recommending you read the actual summary), Kegan organizes “adult development” into five stages, each distinguished by competency in operating within a particular mode of relationship to meaning. What makes higher stages superior (in the literal sense of the word, not the moral one) is that individuals at a higher stage are able to also operate within the lower stages, but not vice versa—someone at stage 4 is capable of operating in stage 3’s mode, while someone at stage 3 cannot operate in stage 4’s.
Chapman’s summary skips over stages 1 and 2 since they’re considered to only apply to young children. The latter stages are labeled: stage 3 the communal mode (“the ethics of empathy”), stage 4 the systematic mode (“the ethics of systems”), and stage 5 the fluid mode.
Chapman explains that stage 3 corresponds both to adolescence and to an earlier, immature stage of civilization, and stage 4 corresponds more with modern civilization and its more structured society. However, stage 4 maps less cleanly onto a person’s age, since many adults stop at stage 3.
Stage 5 is aspirational both for civilization and for most individuals. The best I can describe it is that it represents a point where rationalism breaks and flows backwards into empiricism (but in a good way). Borrowing more heavily from Chapman:
But at some point you realize that all principles are somewhat arbitrary or relative. There is no ultimately true principle on which a correct system can be built. It’s not just that we don’t yet know what the absolute truth is; it is that there cannot be one. All systems come to seem inherently empty.
This uncomfortable midpoint of the stage 4 to 5 transition is sometimes called “stage 4.5.” Here it’s common to commit to explicit nihilism. Understanding that there is no ultimate meaning, one comes to the wrong conclusion that there are no meanings at all. It’s common to declare that you are “beyond good and evil,” to adopt ethical nihilism. That’s also possible at stage 2, where it can be sociopathic, and leads to blatantly unethical actions. At stage 4.5, one retains the empathy of communalism and the respectfulness of systematicity, so doing harm on the basis of this theoretical nihilism is rare.
Eventually, one notices that meanings continue to operate quite well despite their lack of ultimate foundations. Systems re-emerge as transparent forms. You no longer see by means of systems, but can see through systems as contingent constructions that most people mis-take as solid. Stage 3 sees systems as unfair but unavoidable external impositions; stage 4 sees them as rational necessities justified by ultimate principles. Stage 5 recognizes that they are both nebulous (intangible, interpenetrating, transient, amorphous, and ambiguous) and patterned (reliable, distinct, enduring, clear, and definite). Nebulosity and pattern are inherent in all systems, and are therefore inseparable. This becomes risible.
There’s a reason this features early in a “postrationalist syllabus”. To quote my own take on “postrationalism”:
the revolution in Bayesian reasoning inspired by the sequences influenced some worldviews that, while modestly faithful to the core inspirations, were not strictly good fits for the structures it set up to influence the real world. I love all the ways the sequences encourage you to think about the world, but I (and others) am less convinced of the obviousness of the specific solutions Yudkowsky’s writings imply and the directions Less Wrong-style rationalism undertook thereafter.
Old school rationalism has failed us by causing “scientific management principles”. Modern, Less Wrong-style rationalism, well, maybe isn’t completely failing us yet per se (though it definitely has in some major ways), but I’m far from the first person to observe that it’s had a solid 15+ years to do its thing without arriving at a workable grand paradigm. Effective Altruism has had some major stumbles, tends to be shamelessly hyperutilitarian, and, uh, is maybe a tad bit cult-y.
The exciting things to come out of modern rationalism seem to rely less heavily on stiff cause-area-based activism, very much in the fashion of “eventually, one notices that meanings continue to operate quite well despite their lack of ultimate foundations”. Being disappointed with post-sequences rationality is, perhaps, a necessity for growing further.1
While they don’t quite map one to one in a clean way, Kegan’s stages of adult development are evocative to me of another multi-tiered system of thought around information exchange: simulacra levels. In fact, these ideas struck me as so similar and seem so easily related that I had to do some quick Google-ing to make sure simulacra theory wasn’t something else David Chapman broached elsewhere that I hadn’t read yet (best I can tell, the modern idea originates with Benjamin Hoffman, and the concept itself comes originally from the 1981 book Simulacra and Simulation by Jean Baudrillard).
Zvi Mowshowitz has such a great summary of simulacra levels that it’s hard to not just quote in full, so I’ll try to grab just some bits that are helpful for where I’m going with this (some emphasis added by me in the last section):
A key source of misunderstanding and conflict is failure to distinguish between combinations of the following four cases.
Sometimes people model and describe the physical world, seeking to convey true information because it is true.
Other times people are trying to get you to believe what they want you to believe so you will do or say what they want.
Other times people say things mostly as slogans or symbols to tell you what tribe or faction they belong to, or what type of person they are.
Then there are times when talk seems to have gone strangely meta or off the rails entirely. The symbolic representations are mostly of the associations and vibes of other symbols. The whole thing seems more like a stream of words, associations and vibes. It sounds like GPT-4.
One can refer to these as the simulacra levels as a useful fake framework for understanding this. When looking at talk, one can ask what level or levels a statement or discussion is on, and which ones people care about in context. One can also ask about the level a person, group or civilization most cares about. That is also how they default to understanding new talk.
If I was going to write the symbolic description of Simulacra levels in my own words, I would say this:
Level 1: A symbol corresponds to the key elements of underlying physical reality.
Level 2: A symbol pretends to correspond to underlying physical reality, but instead distorts key elements.
Level 3: A symbol pretends to be a distorted version of underlying physical reality (that is in turn pretending to be the underlying physical reality), but instead only corresponds as necessary to maintain the plausibility claim that this is the case.
Level 4: A symbol no longer pretends to be a version of anything other than other symbols. It has no relationship to the underlying physical reality.
Or more compactly:
Level 1: Symbols describe reality.
Level 2: Symbols pretend to describe reality.
Level 3: Symbols pretend to pretend to describe reality.
Level 4: Symbols need not pretend to describe reality.
More abstractly:
Level 1: Truth. Attempt to accurately share and describe physical reality.
Level 2: Manipulation of Perception. Lies. Attempt to shape perception of reality, so that others will act on that perception, without regard to whether it is true.
Level 3: Association. Attempting to change perception of one’s social position and alliances, rather than expecting anyone to act upon beliefs about physical reality. Requires maintaining some plausibility of the underlying physical claim.
Level 4: Manipulation and Intuition. Occasionally a strategic attempt to manipulate Level 3 dynamics. More centrally and commonly, a combination of intuitive attempts to manipulate associational dynamics and vibes, and adaptation executions that have abandoned any logic and all links to the underlying physical reality.
Or alternatively: Level 4: What GPT-4 would say.
The lion definition asks what each level means by ‘There is a lion across the river.’
Level 1: There’s a lion across the river.
Level 2: I don’t want to go (or have other people go) across the river.
Level 3: I’m with the popular kids who are too cool to go across the river.
Level 4: A firm stance against trans-river expansionism focus-grouped well with undecided voters in my constituency.
I’m going to add a little bit of my own interpretation, which is what I’ll be going by moving forward from here, and maybe clarifying a little bit at the risk of not being strictly accurate to the source material:
Level 1: Factual conveyance of information. Raw information exchange cleaving as close to reality as possible.
Level 2: Conveyance of information that foregoes some level of factual accuracy to drive a particular set of behaviors. Saying whatever you need to in order to get someone else to do the thing you want.
Level 3: Conveyance of information in service of social signaling. Establishing or maintaining coalitions around encouraging a particular set of behaviors. Think a single-issue political organization or a fan forum dedicated to a particular TV show.
Level 4: Conveyance of information in service of social strategy. Establishing or maintaining meta-structures around protecting one’s favored coalition. Think higher-level political parties or devotion to a broader fandom culture.
Note, naturally, that for level 1, conveyance of information is the end, whereas for every other level it is a means to an end, and, as Zvi notes, levels 3 and 4 are fully detached from sources of raw information. Hoffman uses a poignant example (see below) of medical degrees as level 3 communication: not only can you not get doctor’s privileges by directly demonstrating healing ability, the way you do get them—having a medical degree—is merely an information proxy for having supposedly been taught healing ability.
Perhaps Baudrillard’s example makes this even more clear:
Such would be the successive phases of the image (as we pass from levels 1 to 4):
It is the reflection of a profound reality.
It masks and denatures a profound reality.
It masks the absence of a profound reality.
It has no relation to any reality whatsoever: It is its own pure simulacrum.
Zvi notices some of the danger here:
The prioritization of various simulacra levels becomes a habit. If you are used to interpreting “There’s a lion across the river” almost entirely as “I’m with the popular kids who are too cool to go across the river,” because that’s what it almost always means in your village, it may be very difficult for someone to say “No, really, I’m not associating with the cool kids right now. There’s literally an actual lion across the actual river and if you cross the river you will die.”
There is no good way to sacrifice the cool points in order to communicate the presence of a lion. Even if it works at first, soon there will be a tendency for the new wording to become the canonical form of “I’m with the popular kids who are too cool to go across the river.”
If everyone’s instinct is to interpret “There’s a lion across the river” as both “There is an actual lion across the actual river” and also “I’m with the kids who are too cool to cross the river” then there is a chance.
There is still a barrier. Whoever wants to share knowledge of the lion will become less cool by doing so. Ideally, for high enough stakes, this stops being a problem in multiple ways. If lives are at stake, especially one’s own or one’s loved ones, being cool looks less important than avoiding the lion. Ideally, being the person who saved us from the lion is also considered kind of cool, allowing one to both starve lions and look cool. That only works if everyone realizes the lion was there. But the payoff could be very large. So there’s a chance.
Whereas, if things are too forsaken, one loses the ability to communicate about the lion at all. There is no combination of sounds one can make that makes people think there is an actual lion across an actual river that will actually eat them if they cross the river.
I’m not trying to be subtle here. You can guess where this is going.
(It goes on into a discussion of the communication around COVID-19.)
Similar to how the stages of adult development seem to map onto the advancement of civilization over time, Hoffman (via Elizabeth on Less Wrong) seems to imply that information exchange may have followed a similar early-civilization development arc.
My friend Ben Hoffman talks about simulacra a lot, with this rough definition:
1. First, words were used to maintain shared accounting. We described reality intersubjectively in order to build shared maps, the better to navigate our environment. I say that the food source is over there, so that our band can move towards or away from it when situationally appropriate, or so people can make other inferences based on this knowledge.
2. The breakdown of naive intersubjectivity – people start taking the shared map as an object to be manipulated, rather than part of their own subjectivity. For instance, I might say there’s a lion over somewhere where I know there’s food, in order to hoard access to that resource for idiosyncratic advantage. Thus, the map drifts from reality, and we start dissociating from the maps we make.
3. When maps drift far enough from reality, in some cases people aren’t even parsing it as though it had a literal specific objective meaning that grounds out in some verifiable external test outside of social reality. Instead, the map becomes a sort of command language for coordinating actions and feelings. “There’s food over there” is perhaps construed as a bid to move in that direction, and evaluated as though it were that call to action. Any argument for or against the implied call to action is conflated with an argument for or against the proposition literally asserted. This is how arguments become soldiers. Any attempt to simply investigate the literal truth of the proposition is considered at best naive and at worst politically irresponsible.
But since this usage is parasitic on the old map structure that was meant to describe something outside the system of describers, language is still structured in terms of reification and objectivity, so it substantively resembles something with descriptive power, or “aboutness.” For instance, while you cannot acquire a physician’s privileges and social role simply by providing clear evidence of your ability to heal others, those privileges are still justified in terms of pseudo-consequentialist arguments about expertise in healing.
4. Finally, the pseudostructure itself becomes perceptible as an object that can be manipulated, the pseudocorrespondence breaks down, and all assertions are nothing but moves in an ever-shifting game where you’re trying to think a bit ahead of the others (for positional advantage), but not too far ahead.
Again, note I’m saying the civilizational arc here is only sort of implied—certainly it’s possible that all or most of the simulacra levels could have come about very shortly after any development of language or any simple form of communication, but effectively lying in service to an agenda requires an understanding of communication and socialization.
Alright so I began writing this with grand ambitions to tie these two big models together (thus the title) but having written this all out I’m not sure the connection’s as clear as I felt it was when I first read Chapman’s summary. I think maybe there was somewhere I was thinking that one learns to use higher simulacra levels as one ages but I’m not so sure that’s the case either—I realized the other day that my cat is capable of operating on simulacra level 2.
I was just about to suggest that varying philosophies’ preferences for deploying particular simulacra levels could serve as a throughline connecting the stages of development to stages of society, but this doesn’t quite cleave with where I was going either. Chapman:
Political leftism tends to monism, and rightism to dualism. The communal mode tends to mistake the logic of stage 4 for rightish ideologies, particularly capitalism. However, stage 4 is not inherently rightist or anti-leftist. Marxism is a systematic, technical, rational critique of capitalism—and therefore a stage 4 ideology. (Notwithstanding that campus communists rarely understand Marxism’s details, and often misuse it as a simple stage 3 rejection of systematicity.) John Rawls’ Theory of Justice is an elegant stage 4 systematic justification for leftism. Conversely, stage 3 rightism is common; that is the appeal of simplistic calls to “protect our traditional communities.”
I was going to point to critical theory’s preference for simulacra level 4. I think perhaps you could do the legwork to tease out critical theory as a separate level of development from Marxism, but I sort of don’t have the time and energy to do that in a thorough academic sense here.
So once again, unfinished thoughts deposited here, something to revisit again later maybe.
This ends up being a poor-fitting aside that was originally part of the main text here, but I think we can also point to:
Eventually, one notices that meanings continue to operate quite well despite their lack of ultimate foundations. Systems re-emerge as transparent forms. You no longer see by means of systems, but can see through systems as contingent constructions that most people mis-take as solid.
—to suggest this is the source (or a good example) of late-stage rationalism’s forgiveness of religious tradition (et al. re: woo shit, I guess). Whereas in the late 2000s and early 2010s edgy atheism was all the rage, there’s a lot more recognition nowadays that religion seems to be lindy even in a secularist sense.