The Vibe Handbook, Part One: Defining Vibes Without Talking About TikTok At All
Lots of people, including self-identified postrationalists, aren’t sure what postrationalism is. Excerpts from an Astral Codex Ten comment thread:
It's about accepting that the rationalism movement provides value and makes some good points, but deliberately stepping away from the goals and structure of the community.
pretty likely to be into pretty wooey meditation / psychedelic / spiritualist stuff
Honestly I don't think there is a real post-rationalist community, so it's currently just a descriptor for lurkers (like me as well) who don't feel at home on LessWrong and the SSC-derived subreddits.
there was a collection of ex-rationalists, and people who looked at rationalism but never really accepted it, and to this accreted various people who agreed with these ex-or-not-quite rationalists on various things, usually more on a meta or vibe level
I'd say that postrationalists are creative high-concept oddballs who synthesize and "refactor" (that's a Rao-term) various abstract concepts from various disciplines. With some awareness and/or inclusion of rationality ideas, but it doesn't have to directly reference or use LW-terminology.
Illegibility is central to postrationalism. You're welcome.
Eventually Scott Alexander weighs in:
Rationalists are people who think too much about Bayesian probability, forecasting, altruism, and AI.
Postrationalists are people who think too much about the difference between postrationalists and rationalists.
The original asker concludes:
Maybe the real postrationalism was the friends we made along the way
This confused me for a long time until one day I made up a secret special definition of postrat and started identifying with it. A new Thing You’re Allowed To Do is born every day.
Secret Special Definition of Postrat: Viewing the world through the lenses of lindy, Chesterton’s Fence, and vibes.
The reason this is “postrationalist” and not “one of the many worldviews unrelated to rationalism” is that you can actually be really analytical about these things. They’re perfectly rigorous. They’re just a different analytical lens from the “overcoming bias” or “shut up and multiply” lens.
People think of lindy as Paul Skallas’s thing, and Nassim Taleb has also written extensively on it. It’s the idea that the longer something’s been around, the more staying power it’s likely to have, because the fact that it’s survived the forces of natural selection thus far is good evidence that it’s robust. Rationalists tend to analyze things from first principles; postrationalism is about thinking more in terms of what you might call zeroth principles: listening to your priors, preconceived notions, and intuitions, because these are lindy ways of thinking. Human bodies evolved them over millions of years, and human culture over tens of thousands.
Chesterton’s Fence is a way of uniting lindy with rationality – you can go ahead and make a change to the way things have always been done, but only if you understand the system you’re tweaking and have a good reason to believe conditions have changed from the ones it’s adapted to. Postrat might further caution that such cases are rarer than a rationalist might think, because most systems are complicated. You might learn that the fence is no longer used to pen in a raging bull, but if the fence has stood there a long time, it’s probably become load-bearing in other ways. Maybe it now serves as two neighbors’ property line, or it’s holding up somebody’s tomatoes, or it marks the spot where the townsfolk buried their uranium deposit.
Both lindy and Chesterton's Fence exist in the rationalist canon, especially the latter – the difference is that the postrationalist lens makes them a lot more central. What doesn’t come up much in rationalism is a third thing: vibes.
Vibes are the substrate of zeroth principles.
What I mean by a vibe is the basic unit of that zeroth-principles side of cognition: the intuitive, pattern-matching side rather than the explicit-logic side. The human vibe engine is extraordinarily powerful. It’s aided as needed by a little built-in ALU, which can evaluate small pieces of propositional logic, but this is not at all central, and most cognition involves it only tangentially.
What’s a vibe made of? The wrong answer is “no one really knows.” The right answer is “a tensor.”
Here’s how an autoencoder works. You take some collection of objects you want your autoencoder to learn about – photos, say – and convert them into numerical vectors: maybe the first three positions are the R, G, and B values of the first pixel, the next three are the R, G, and B values of the second pixel, and so on. Then you train a model to reproduce an object from the collection: you feed it a photo converted to a vector, make it squeeze the information through a much smaller internal representation, then have it try to reconstruct the same photo. If it’s pretty close, the model only changes its process a little next time; if it’s way off, the model changes a lot more. Eventually you have a model that reduces the concept of a photo to a simpler essence, the way if you try to picture a room you’ve been to a few times you won’t necessarily recall the style of the lampshade or the number of chairs, but you’ll roughly remember the furniture layout and the main colors.
Internally the model just has a bunch of numbers arranged in lists of lists of lists, which represent how much to care about certain features of the input vectors or combinations thereof. That’s a tensor (a vector is a list of numbers, a matrix is a list of vectors, and a tensor is a list of matrices or anything more meta than that). So the internal tensor representing the state of the model is what converts inputs into vibes, and vibes back into outputs.
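To make that concrete, here’s a minimal sketch of the training loop just described, in plain numpy. Everything in it is illustrative, not the canonical recipe: the 8×8 “photos,” the four-number bottleneck, the learning rate, and a single linear layer in each direction where a real autoencoder would stack many nonlinear ones.

```python
# Toy linear autoencoder: compress 64-pixel "photos" down to 4 numbers
# and reconstruct them. Purely a sketch; all sizes are arbitrary.
import numpy as np

rng = np.random.default_rng(0)

n_pixels, n_latent = 64, 4                          # input size, bottleneck size
W_enc = rng.normal(0, 0.1, (n_latent, n_pixels))    # encoder weights (a matrix)
W_dec = rng.normal(0, 0.1, (n_pixels, n_latent))    # decoder weights (a matrix)

photos = rng.random((500, n_pixels))                # stand-in data: 500 flattened images

lr = 0.01
for step in range(2000):
    x = photos[rng.integers(len(photos))]           # pick one photo
    z = W_enc @ x                                   # compress: the photo's "vibe"
    x_hat = W_dec @ z                               # reconstruct from the vibe
    err = x_hat - x                                 # how far off the reconstruction is

    # Gradient descent on squared error: the bigger the error, the bigger
    # the change to the weights -- "if it's way off, the model changes a lot more."
    grad_dec = np.outer(err, z)
    grad_enc = np.outer(W_dec.T @ err, x)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

# After training, z = W_enc @ x is the compressed essence of each photo.
```

Each weight matrix here is a small tensor of exactly the kind described above, and the compressed z is the vibe: four numbers standing in for sixty-four.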
(Terminology isn’t important here – mood as in mood board and energy as in “same energy” are roughly the same as vibe. Vibes aren’t a trend, either. The word is newish, and the culture is talking more about it lately because computers have recently gotten good at manipulating vibes. But the concept has always existed.)
There’s this popular fiction that most knowledge exists in the form of propositional logic. We pretend that AI models could be explained by some simple rules if only the system would tell us, or that hiring committees could be less biased without loss of effectiveness if they just went by a rubric, or that CEOs only have material nonpublic information in a few special cases. Part Two argues otherwise.