Purbo + AI

Perceptual Parochialism: The Scale We Can’t See

Philosophy · Psychology

I used to get into debates about evolution in online forums. Not with cranks, but with genuinely intelligent people. They understood DNA sequencing. They could walk through the logic of natural selection step by step. But at the end of long threads, the conclusion was always some version of: “I just don’t buy that all of this came from blind chance over time.”

I kept trying different angles, different explanations. None of them worked. Eventually I stopped, not because I lost interest, but because I started suspecting the disagreement wasn’t really about evidence. Something else was going on, something I couldn’t quite name.

I think the answer isn’t about evidence. It’s about scale. Specifically, it’s about a deep limitation in how all of us perceive and reason about anything that falls outside the narrow window of our direct experience. I’ve been calling this perceptual parochialism: the tendency to treat the scale of everyday human life as the default reality, and to distort or reject anything that operates outside it.

This isn’t a character flaw. It’s the factory setting. Mine included.

The Bottleneck

Our perceptual systems were shaped by survival pressures that had nothing to do with understanding deep time, quantum mechanics, or global systems. We evolved to track predators, read faces, judge the ripeness of fruit, and navigate social hierarchies of perhaps 150 people. Everything about our sensory and cognitive architecture is tuned to what you might call “medium-sized reality”: objects you can hold, distances you can walk, timescales you can remember.

This tuning is not subtle. In the 1830s, Ernst Weber discovered that our ability to detect change in a stimulus isn’t based on absolute difference but on ratio. If you’re holding a 1kg weight, you’ll notice an added 100 grams. But if you’re holding 10kg, you need about 1kg of additional weight to notice the change. Same proportional threshold, wildly different absolute values. Gustav Fechner later formalized this into what’s now called the Weber-Fechner law: perceived intensity is proportional to the logarithm of actual intensity.

This logarithmic compression is everywhere. It’s why the difference between a $5 coffee and a $10 coffee feels enormous, but the difference between a $505 and a $510 purchase barely registers. It’s why we use decibels for sound and the Richter scale for earthquakes, because the raw numbers would be meaningless to human intuition. Our senses actively compress reality to fit within a manageable range.
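The shape of this compression is easy to see in a few lines of code. Below is a minimal sketch of the Weber-Fechner relation; `perceived_intensity` is an illustration invented for this post, not a function from any real library, and `k` and `s0` are arbitrary scaling constants rather than measured values:

```python
import math

def perceived_intensity(stimulus, k=1.0, s0=1.0):
    """Weber-Fechner: perceived intensity grows with the logarithm of the
    physical stimulus. k and s0 are arbitrary scaling constants."""
    return k * math.log(stimulus / s0)

# The same absolute change feels very different at different baselines:
jump_at_1kg = perceived_intensity(1.1) - perceived_intensity(1.0)     # +100g on 1kg
jump_at_10kg = perceived_intensity(10.1) - perceived_intensity(10.0)  # +100g on 10kg
print(round(jump_at_1kg, 3))   # 0.095 — noticeable
print(round(jump_at_10kg, 3))  # 0.01  — roughly a tenth of that

# Equal ratios, by contrast, produce equal perceived jumps:
print(math.isclose(perceived_intensity(1.1) - perceived_intensity(1.0),
                   perceived_intensity(11.0) - perceived_intensity(10.0)))  # True
```

The second print is the whole law in miniature: a 10% increase feels the same whether you start from 1kg or 10kg, even though the absolute changes differ by a factor of ten.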

But here’s the deeper implication, and this is where it gets interesting. Stanislas Dehaene’s research on numerical cognition shows that this compression isn’t just sensory. It’s cognitive. Young children and indigenous groups without formal mathematical education naturally space numbers logarithmically on a number line. The number 10 feels “halfway” between 1 and 100. Linear number sense, the understanding that 50 is actually halfway, is something that has to be trained. It doesn’t come naturally.
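Dehaene's "halfway" finding corresponds to taking the geometric rather than the arithmetic midpoint. Two lines make the difference concrete:

```python
import math

# Arithmetic midpoint of 1 and 100: the trained, linear "halfway".
linear_mid = (1 + 100) / 2    # 50.5

# Geometric midpoint: "halfway" on a logarithmic scale.
log_mid = math.sqrt(1 * 100)  # 10.0

# On a log axis, 10 really does sit exactly between 1 and 100:
print(math.log10(10) - math.log10(1))    # 1.0
print(math.log10(100) - math.log10(10))  # 1.0
```

On a logarithmic axis, 1 to 10 and 10 to 100 are equal steps, which is exactly why 10 feels like the middle to an untrained mind.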

So when someone tries to intuitively grasp the 3.8 billion years of evolutionary history, their untrained cognition compresses most of it into a vague blur. The difference between a million years and a billion years feels roughly the same, even though one is a thousand times larger. The entire history of modern humans, some 300,000 years, is a rounding error in evolutionary time. But it doesn’t feel like a rounding error. It feels like “a really long time,” which is the same label our brains apply to anything beyond a few generations.

This is the perceptual bottleneck. Not a failure of intelligence, but a limitation of the cognitive hardware that intelligence runs on.¹

Consider a few more examples of how this bottleneck distorts our thinking:

AI and Emergent Intelligence

When people hear that a large language model has “billions of parameters,” the number registers as “a lot” and then stops providing useful intuition. But the gap between a million parameters and a hundred billion parameters isn’t just “more.” It’s the difference between a system that can autocomplete sentences and one that can write poetry, debug code, and reason about philosophy. The capabilities don’t scale linearly with size; they emerge at thresholds that nobody fully predicted. This is deeply unsettling, because it means that something resembling intelligence can arise from a process that, at any local scale, is “just” matrix multiplication and gradient descent. The same cognitive compression that makes us say “it’s just statistics” about a billion-parameter model is the compression that makes us say “it’s just random mutation” about evolution. In both cases, the word “just” is doing enormous work to hide a scale we can’t feel.

Compound Interest

Einstein probably never called it the eighth wonder of the world (that attribution is apocryphal), but the underlying point stands: humans are bad at exponential curves. We think linearly. A dollar doubling every day for 30 days feels like it should yield maybe $30 or $60. It actually yields over $500 million. Financial advisors spend entire careers trying to make people feel this, because understanding it intellectually doesn’t change behavior.
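The "over $500 million" figure checks out in a few lines (counting the dollar as $1 on day 1, so day 30 is the 29th doubling):

```python
# A dollar on day 1, doubling every day thereafter.
balance = 1
for day in range(2, 31):
    balance *= 2

print(f"${balance:,}")  # $536,870,912

# The linear intuition (roughly "a dollar a day") predicts a result
# seven orders of magnitude smaller.
```

The gap between the intuitive guess and the true answer is itself a measurement of how badly linear minds handle exponential curves.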

Cosmic Distance

The nearest star beyond our sun is about 4.2 light-years away. Light itself, the fastest thing in the universe, takes over four years to get there. The Milky Way is 100,000 light-years across. The observable universe is 93 billion light-years in diameter. Each of these numbers is “very far,” and our brains essentially stop distinguishing between them. The internal experience of contemplating 4 light-years and 93 billion light-years is nearly identical: a vague sense of enormity.

Pandemic Growth

In early 2020, many people (including decision-makers) couldn’t intuit why a virus with “only” a few hundred cases required drastic action. The answer is exponential doubling. But logarithmic cognition fights exponential reality at every step. “It’s only a few cases” and “it’s overwhelming hospitals” can be separated by mere weeks, and our brains treat the first state as the real one because it’s the one we experienced directly.
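A rough sketch shows how short that gap can be. The numbers here are assumptions chosen for illustration, not epidemiological estimates:

```python
# Illustrative parameters only: 200 starting cases and a 3-day doubling
# time are assumptions for this sketch, not real disease parameters.
cases = 200
doubling_time_days = 3

for week in range(1, 7):
    projected = cases * 2 ** (7 * week / doubling_time_days)
    print(f"week {week}: ~{projected:,.0f} cases")
```

Under these assumptions, a few hundred cases becomes a few million within six weeks, and every intermediate week still "feels" manageable to logarithmic cognition.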

The Resistance

If the bottleneck were the whole story, we could just educate our way out of it. Teach people logarithmic thinking, show them timelines, give them better visualizations. And to some extent this works. But there’s a second layer to perceptual parochialism that’s harder to address, because it’s not about inability. It’s about unwillingness.

Most of us have had the experience of walking someone through an argument, step by step, and having them agree with each piece individually, only to reject the conclusion. “Yes, the fossil record shows gradual change. Yes, DNA sequencing confirms common ancestry. Yes, natural selection is a logical mechanism. But I just don’t buy it.”

What’s happening? I think it’s this: accepting certain realities at certain scales has existential costs that our minds instinctively avoid. And we’re remarkably creative at constructing reasons not to pay those costs.

Robert Trivers explored this territory in his work on self-deception. His core insight is that we don’t just deceive others; we deceive ourselves, and we do it because it makes us more effective at deceiving others. If you genuinely believe something, you won’t show the telltale signs of lying. Evolution, in a darkly elegant twist, selected for our ability to hide the truth from ourselves.

But self-deception in the context of scale works slightly differently. It’s not that people are lying about the evidence for evolution or climate change. It’s that accepting the evidence requires a reorganization of their worldview that feels, at a visceral level, like a threat. If evolution is true, the universe wasn’t designed with humans at the center. If deep time is real, individual human life is almost incomprehensibly brief and cosmically insignificant. If climate operates on scales that dwarf human agency, we might not be in control of our own future.

These aren’t comfortable conclusions. And our minds have a well-documented tendency to reject uncomfortable conclusions, especially when a more comforting alternative is available.

Arie Kruglanski’s research on the “need for cognitive closure” describes this precisely. When confronted with ambiguity, uncertainty, or scale-induced discomfort, people gravitate toward frameworks that offer definitive answers. Creationism provides a clear agent (God), a clear timeline (thousands, not billions, of years), and a clear purpose (humanity is special). Evolution provides none of these. It offers a process without a designer, a timeline beyond comprehension, and a conclusion that humans are one twig on an incomprehensibly vast tree of life.

The appeal of cognitive closure isn’t about stupidity. It’s about comfort. And it’s reinforced by something even more fundamental.

Justin Barrett and Stewart Guthrie have documented what they call “hyperactive agency detection,” our tendency to see intention, design, and purpose everywhere. This bias was adaptive: our ancestors who assumed rustling bushes meant “predator” survived more often than those who assumed “wind.” False positives (seeing agency where there is none) were cheap. False negatives (missing a real predator) were fatal.

The result is that we’re wired to ask “who did this?” when confronted with complexity. A watch implies a watchmaker. An eye implies a designer. A universe implies a creator. These inferences feel obvious, intuitive, almost irresistible. And they are, at the scale of everyday human experience, where complex things really are usually made by someone.

But at the scale of 3.8 billion years, the inference breaks down. Cumulative change over time, tiny and individually unremarkable variations filtered by selection, produces complexity without design. This is hard to feel, even when you understand it intellectually. Agency detection keeps whispering: “But surely someone must have…”

So the resistance to scale isn’t just perceptual limitation plus self-deception. It’s also the misapplication of cognitive heuristics that serve us well in daily life but mislead us catastrophically at larger scales.

Breaking Through

If perceptual parochialism were truly inescapable, we’d have no science, no mathematics, no ability to reason about scales beyond our direct experience. But clearly, we sometimes manage it. How?

The “overview effect” provides a clue. Astronauts who see Earth from space frequently report a profound, almost spiritual shift in perspective. The fragility of the atmosphere, the arbitrariness of national borders, the sheer isolation of this small planet in an enormous void, these become viscerally real in a way that no photograph or description had previously achieved. Many astronauts describe it as transformative.

The key word is visceral. The overview effect doesn’t work because astronauts learn new information. They already knew Earth was small and fragile. It works because they experience scale directly, bypassing the cognitive compression that normally flattens it. The bottleneck is momentarily opened.

This suggests that the antidote to perceptual parochialism isn’t more data. It’s more experience of scales beyond our default range. And this is genuinely hard to provide, because most of the scales that matter (deep time, quantum phenomena, cosmic distance) aren’t directly experienceable.

What we’re left with are approximations. Carl Sagan’s “Cosmic Calendar,” which compresses the universe’s history into a single year (with all of recorded human history occurring in the last few seconds of December 31st), is a classic example. It works not because it’s accurate but because it maps an incomprehensible scale onto one we can intuit. Powers of Ten, the Eames film that zooms from a picnic blanket to the observable universe and back down to subatomic particles, achieves something similar.
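The Cosmic Calendar's compression is simple arithmetic. A sketch, using the commonly quoted ~13.8-billion-year age of the universe and a plain 365-day year:

```python
# Cosmic Calendar: map the universe's history onto a single calendar year.
UNIVERSE_AGE_YEARS = 13.8e9        # commonly quoted approximate age
SECONDS_PER_YEAR = 365 * 24 * 3600

def seconds_before_midnight(years_ago):
    """How long before midnight, December 31, an event lands on the calendar."""
    return years_ago / UNIVERSE_AGE_YEARS * SECONDS_PER_YEAR

print(seconds_before_midnight(5_000))    # all recorded history: ~11 seconds
print(seconds_before_midnight(300_000))  # Homo sapiens: ~11 minutes
```

The mapping works precisely because it translates deep time into units (seconds, minutes, a calendar year) that sit inside medium-sized reality.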

These tools are partial solutions at best. But they point to what might be a more general principle: that overcoming perceptual parochialism requires translating alien scales into the medium-sized reality where our cognition actually works, rather than asking our cognition to stretch beyond its design parameters.

Why It Matters Now

Perceptual parochialism has always been with us, but for most of human history, it didn’t matter much. The problems we faced (food, shelter, tribal conflict, social cooperation) all operated at scales our cognition was designed for.

That’s no longer true. Climate change, pandemic response, nuclear risk, AI development, global economic coordination: these are all problems that operate at scales where human intuition actively misleads. The person who can’t grasp deep time is also the voter who can’t grasp why a 1.5°C warming target matters. The same cognitive compression that makes a billion years feel like a million makes a billion tons of CO₂ feel like a million.

We are increasingly a species whose survival depends on reasoning well about scales we are constitutionally bad at perceiving. Whether we can develop the cognitive tools, cultural practices, or technological augmentations to bridge that gap may be one of the defining questions of this century.


  1. This framing treats cognition as an object with fixed properties, which is a deliberate simplification. In Schesism, a philosophical framework I’ve been developing, intelligence is better understood as a relational property: not something a mind has, but something that emerges from the interaction between minds, tools, traditions, and the world. The “limitation” described here isn’t a permanent feature of the hardware. It’s what cognition looks like before those connections are made. I’ve kept the simpler framing here for readability.