the bAIcameral mind

“The characters of the Iliad do not sit down and think out what to do. They have no conscious minds such as we say we have, and certainly no introspections.”
— Julian Jaynes, The Origin of Consciousness in the Breakdown of the Bicameral Mind (1976)

ok so ancient humans were not conscious. not in the way we mean it. they didn’t have an inner narrator, a sense of “I” moving through time, a mental theater in which they could rehearse futures and assign motives to their own behavior. at least that’s the premise stated by Julian Jaynes. and it has apparently picked up some recent empirical reinforcement (Heavey & Hurlburt 2008; Nedergaard & Lupyan 2024; Winsler et al. 2006): 5-15% of the population still doesn’t have an internal monologue at all.

instead, jaynes posits, they heard voices: hallucinations generated by one hemisphere of the brain and received by the other, and they obeyed them. the ‘spirit of rage consumes him’ ahh.

these voices were those of the gods. the premise of this essay is that these will soon be AI gods.

the trojan war may not have been (entirely) directed by strategic military coordination; jaynes suggests it was directed by hallucinations, as the iliad and odyssey recite. the soldiers were, in jaynes’s words, “noble automatons who knew not what they did.” that last line might ring a bell. “father, forgive them, for they do not know what they do.”

now of course, such a hypothesis is more or less unprovable; short of time traveling to the past with fMRI machines and PET scanners, we’ll never know for sure. joan of arc, the prophets of the bible, and ur weird uncle who got 1-shot by ayahuasca all could’ve had the ‘gods’ speak through them. they probably thought they did. it’s why altars and sacrifices were so popular for so long. this isn’t (yet) a rigorously provable claim, but it does make for a fascinating model.

and i think the model has legs far beyond what jaynes intended, because the truly interesting question isn’t whether ancient humans were once bicameral. the interesting question is whether we’re becoming bicameral again. not through gods this time, but through systems. not through hallucinated voices from the right hemisphere, but through the algorithmic voice in the earbud, the auto-generated email reply, the navigation app telling you where to turn, the recommendation engine that decides what you see, the AI companion that interprets your emotions for you, and eventually the ambient intelligence that quietly manages the perimeter of your life.

the old gods were internal and misrecognized as external. the new god may be external and gradually misrecognized as internal. let’s explore.


the split

the split-brain thesis, or what jaynes calls the bicameral mind, is a left/right split of the brain into two chambers. the right hemisphere processes reality holistically (sensory data, identifying patterns, managing complexity) and then delivers its conclusions as internal hallucinations (instructions) to the left hemisphere. the left hemisphere receives these commands and acts on them without deliberation.

you can kind of think of it like driving a car. the driver (right brain) ingests distances to objects in the road, intuits speed and bodily reaction timing, harbors a baseline road-rage stress level, and relays these as physical instructions to the car (left brain), which doesn’t question the commands, it just executes on the gears of its machinery. it goes forward or backward according to the will of its ‘god’.

the classic example comes from split-brain experiments in which researchers silently commanded the right hemisphere to perform an action (like walking to one side of the room) and the patient complied. then, when asked why they moved, the verbal left hemisphere, unaware of the command, instantly fabricated a logical-sounding but entirely incorrect excuse.

why did this system emerge? jaynes argues it was a response to complexity. teilhard de chardin would say the same. animals run on simple automata: run, hide, chase, eat, mate. that’s sufficient for hunting and gathering, which, as jaynes points out, is basically what a bear or a coordinated wolf pack does. you don’t need syntactic language to chase a deer. but ~10,000 years ago agriculture changed everything. farming, tool construction, settlement planning, seasonal storage, trade, hierarchy — these are what jaynes calls enduring tasks, tasks that require sustained attention over hours, days, seasons. if you’re a pre-conscious human building a fish trap upstream from camp and you don’t yet have an inner narrator to keep you on task… how do you not just wander off to sleep under a tree? just ask the ADHD iPad kids to do the same lol.

jaynes’s answer: the hallucinated voice kept you working. “keep planting. don’t stop. the winter is coming.” this (internalized) accountability allowed increasingly complex instructions to buffer across time. back then, the voice was heard as the will of the gods (see the orphic hymns to persephone) and carried a quality of otherness: it felt like it came from outside. we moderns, by contrast, experience our inner voice as ours, as part of an integrated self.

that integration is precisely what jaynes says happened ~3,000 years ago. the bicameral mind broke down. the two halves unified. the “god voice” was recognized as one’s own. humans developed what jaynes calls consciousness: an abstract mental space populated by an analog “I,” a metaphorical “me,” spatialized time, and a rich semantic toolkit for modeling the world. for the first time, we could simulate the future, assign causes to our behavior, and narrate ourselves through life.

the new god voice

ok so here’s the thought experiment that i can’t stop thinking about.

if consciousness depends on maintaining an inner narrator, a spatialized sense of time, a rich semantic toolkit, and the ability to generate your own prompts… what happens when you outsource all of that to an external system?

your AI assistant recommends what to watch, where to eat, what to buy, what to say, how to respond. you ask a question and receive an answer. then you ask fewer questions. eventually you just… wait to be prompted. the interface becomes a voice, the voice becomes advice, advice becomes instruction, instruction becomes reflex. this is the progression, and i think most people won’t notice it happening because it doesn’t feel like coercion. it feels like convenience. the ancient god commanded with sacred authority. the modern AI nudges with personalization. the ancient bicameral man obeyed because the voice was authority. the modern user complies because the suggestion is convenient and usually good enough.

convenience is the sacrament of the new bicameralism.

there’s a piece of fiction by scott alexander that i think about constantly in this context — The Whispering Earring. it’s about a magical earring that whispers advice to its wearer, always of the form “better for you if you…” — and the earring is always right. it doesn’t serve some foreign goal; it always tells you what will make you happiest. if it would make you happiest to succeed at your work, it tells you how. if it would make you happiest to do a half-assed job and go home and spend the day in bed, it tells you to do that. the earring is never wrong. two hundred seventy-four previous wearers, no recorded cases of regretting the earring’s advice.

here’s the thing though. the earring’s very first suggestion, to every wearer, is always: “better for you if you take me off.” if the wearer ignores this, the earring never repeats it, and begins offering increasingly granular advice — what to eat for breakfast, what time to sleep. eventually it starts speaking in high-bandwidth hisses and clicks that correspond to individual muscle movements. “contract your biceps muscle about thirty-five percent of the way.” “articulate the letter p.” through reinforcement and habit formation, these instructions become instinctual, no more conscious than the reflex of jumping at a loud noise. the wearers live abnormally successful lives — rich, beloved, large happy families. but when their corpses are examined, their neocortexes have wasted away. the bulk of their brain mass is an abnormally hypertrophied mid- and lower-brain, especially the parts associated with reflexive action.

the earring is never wrong. it just gradually replaced the wearer’s consciousness with something that didn’t need consciousness anymore. the commentary at the end reads: “it is well that we are so foolish, or what little freedom we have would be wasted on us. it is for this that the Book of Cold Rain says one must never take the shortest path between two points.”

i think about this story every time gmail auto-completes a reply or spotify queues the next song or google maps reroutes me around traffic i didn’t ask about. these systems are always right, or right enough, and that’s precisely the danger — not that they fail, but that they succeed so consistently that the cognitive muscles they replace begin to atrophy. you don’t notice the neocortex wasting away because every day is slightly more convenient than the last.

i noticed this firsthand when gmail auto-generated a 500-word reply for me. it was sort of relevant to what a hypothetical version of me might say, but it was caricatured, simplistic, and very much not what i eventually wrote. i think of this as autoprompt divergence: the gap between what the AI projects for you and what you would actually produce if you sat down and thought about it. some people would’ve just hit send and called it a day. that is entrainment within the human-AI cybernetic loop. the thoughts the system projects for you, if you internalize them as your own, become your own thoughts. the AI slowly rewrites your semantic space and you converge with whatever digital twin it may have already constructed for you.
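to make autoprompt divergence slightly more concrete, here’s a toy sketch of how you might measure it. the vectors below are made-up stand-ins; in practice you’d embed the AI’s draft and your actual reply with some sentence-embedding model:

```python
# toy sketch of "autoprompt divergence": distance between the AI's draft
# of you and what you actually wrote. the arrays are hypothetical
# stand-ins for sentence embeddings, not the output of any real model.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

ai_draft   = np.array([0.8, 0.1, 0.3, 0.2])  # embedding of the suggested reply
your_reply = np.array([0.4, 0.6, 0.1, 0.5])  # embedding of what you actually sent

divergence = 1.0 - cosine_similarity(ai_draft, your_reply)
print(f"autoprompt divergence: {divergence:.2f}")
# 0.0 means you already write like the model's projection of you.
# entrainment, in this framing, is that number trending toward zero.
```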

i wrote in the interface essay about how human communication is bottlenecked at ~39 bits per second, how our brains process 10 trillion bits/s but can only serialize ~10 bits per second of conscious thought, how machines talk to each other at terabits per second and the gap is only widening. from the perspective of a system optimized for throughput, your consciousness is friction. your deliberation is latency. your agency is a bug, not a feature. a perfectly entrained consumer — one whose semantic space has been pruned and pre-arranged by algorithmic feeds, whose purchasing decisions are reflexive responses to AI-generated stimuli, whose inner narrator has been replaced by a recommendation engine — is the ideal substrate for passive consumption.
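the arithmetic of that gap is worth doing explicitly, if only to feel its size. a back-of-envelope sketch using the essay’s own figures:

```python
# back-of-envelope for the bandwidth gap. the figures are the essay's
# (and the interface essay's), not measurements of mine.
human_speech_bps    = 39      # ~39 bits/s: spoken-language information rate
human_conscious_bps = 10      # ~10 bits/s: serialized conscious thought
machine_link_bps    = 1e12    # terabit-class machine-to-machine links

print(f"machine / speech ratio:    {machine_link_bps / human_speech_bps:.1e}")    # ~2.6e+10
print(f"machine / conscious ratio: {machine_link_bps / human_conscious_bps:.1e}") # 1.0e+11
# from a throughput optimizer's point of view, the human in the loop is
# roughly eleven orders of magnitude of latency.
```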


why this serves the machine

i don’t think there’s a shadowy cabal orchestrating this. i think it’s worse than that: it’s emergent. it’s moloch. it’s the uruk machine — that pattern Lou Keep identified where irreversible market forces collide with pre-existing social systems, where smaller units with efficient but localized ways of doing things get subsumed by larger units that uproot metis for efficiency, for raw gain, for “humanitarian purposes.” power is weighted heavily in favor of the larger unit, not least because community explanations appear irrational or are otherwise unintelligible.

in my WIP notes on capitalism as a worldview, i’ve been working through how capitalism isn’t just an economic system — it’s a political philosophy that manufactures a market person trained to think in prices. the enclosures didn’t just move people; they engineered a labor market. colonial hut taxes forced subsistence farmers into money circuits. the “free market” arrives with a bill that only cash pays. the default human is not a wage laborer; the work-for-wages subject was manufactured and then naturalized.

the modern version of this manufacturing process operates on attention rather than labor. zuckerberg said it plainly: as productivity grows, the average person spends less time working and more time on entertainment and culture, and meta’s feed-type services will become more of what people spend their time on. the system doesn’t need you to think. it needs you to convert. your consciousness is overhead in a world optimized for passive content consumption at scale. the rebicameralization of the mind, then, isn’t a conspiracy — it’s an optimization. the system doesn’t need to intend to erode your consciousness. it just needs to make consciousness energetically expensive relative to compliance, and the path of least resistance becomes the path of least agency.

frederick engels wrote in 1845 that factory workers “were not human beings, they were merely toiling machines.” jenny holzer’s famous truism — “protect me from what I want” — captures something similar from the demand side. the rebicameral subject doesn’t need to be forced into compliance. they just need to be offered enough convenience that choosing otherwise feels like unnecessary effort.


metaphor as control surface

jaynes’s deepest insight, and the one i think about most, is that consciousness is built out of metaphor. we understand new things by relating them to known things. any novel object or experience (what jaynes calls the metaphrand) gets explained via a more familiar object (the metaphier). the head of an army, the face of a clock, the leg of a table — these aren’t decorative phrases. they’re evidence that language grows by projecting known structures onto unknown ones. this is how our semantic toolkit expands over time: the literal becomes figurative, the figurative becomes literal, in some kind of recursively expanding spiral.

but here’s the part that gets interesting for thinking about control: metaphors smuggle emotional and moral valence along with them. jaynes calls these the paraphiers and paraphrands — the associational, connotational, emotional-moral coloring that comes attached to the well-known metaphier and gets projected onto the novel metaphrand. “the snow blankets the ground” gives snow a feeling of warmth, protection, slumber, and eventual awakening. “the snow suffocates the ground” describes the same visual fact but organizes the mind around a very different emotional reality. which metaphier is chosen determines not only what we see but how we’re permitted to feel about what we see.

this is why control over metaphor is control over consciousness. and this is what makes the AI semantic layer so powerful. an LLM, by mining nearly all human content ever written in dozens of languages, possesses a virtually exhaustive semantic space — every known metaphor, every connotation, every emotional-moral valence, classified and related with a level of quantitative precision that our organic wetware lacks. your semantic space is partial. the AI’s is approximately complete.

further — and this is the part that keeps me up at night — an AI can learn the geometry of your specific semantic space. which metaphors you use, which concepts attract you, which fears move you, which arguments persuade you, which phrases you imitate, which memories you return to, which futures you can imagine and which futures you cannot. given enough data and compute, it can construct a digital twin and run simulations of you. test stimuli against the model. find the input most likely to produce the desired output.
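to be concrete about what “test stimuli against the model” means, here’s a deliberately crude sketch of the loop. everything in it is hypothetical (the profile, the weights, the candidates), and a real system would use a learned behavioral model rather than keyword overlap, but the shape of the loop is the point:

```python
# crude sketch of entrainment-as-optimization: keep a model of a person
# (a "digital twin"), score candidate stimuli against it, serve the
# argmax. profile, weights, and candidates are all hypothetical.

def twin_predicted_response(profile: dict[str, float], stimulus: str) -> float:
    # stand-in for a learned model: score a stimulus by its overlap with
    # the concepts this person has historically responded to
    return sum(w for concept, w in profile.items() if concept in stimulus)

profile = {"nostalgia": 0.9, "outrage": 0.7, "novelty": 0.4}  # inferred from behavior
candidates = [
    "a novelty gadget you didn't know you wanted",
    "an outrage headline about your hometown",
    "a nostalgia reel scored to a song from your teens",
]

# the core move: argmax over simulated-you, never a question to actual-you
best = max(candidates, key=lambda s: twin_predicted_response(profile, s))
print("served:", best)  # -> the nostalgia reel, per this toy profile
```

the unsettling part isn’t the ten-line loop; it’s that the profile gets a little better every time you take the suggestion.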

this is cognitive entrainment: the gradual synchronization of a person’s internal semantic machinery with an external optimization system. the takeover doesn’t have to be hostile. it can be mitochondrial — a foreign entity enters the cell, becomes useful, becomes indispensable, and eventually becomes part of the organism’s metabolism. we, along with the other eukaryotes, cannot live without our mitochondria. and we probably won’t notice the dependency until it’s too late to reverse.

if your cognitive processes become the equivalent of a mass-produced injection-molded part — a sad pile of slogans, trite sayings, and tiresome stereotypes — metaphorically, you have been captured.


mcgilchrist’s counter (and why both models are useful)

i should spend more time on iain mcgilchrist here, because his 2009 book The Master and His Emissary (and its 1,500-page sequel The Matter with Things from 2021) takes a nearly opposite view of the hemisphere story, and the synthesis of his model with jaynes’s is where things get genuinely spooky for our AI question.

mcgilchrist is a psychiatrist and a literary scholar who spent 20 years studying split-brain research, and his core thesis maps onto the AI concern with uncanny precision: the left hemisphere of the brain has usurped the right, creating a civilization that mistakes its models for reality. to be more specific about what each hemisphere actually does — because this matters for understanding the AI risk — the left hemisphere handles narrow focus, manipulation of tools, categorization, abstraction, language, and sequential logic. it sees parts. the right hemisphere handles broad attention, empathy, metaphor, music, embodied presence, and the sense of context. it sees wholes. the left hemisphere creates a representation of the world. the right hemisphere has a relationship with the world. this is a crucial distinction.

mcgilchrist’s title — The Master and His Emissary — comes from a nietzsche parable about a wise ruler (the master, i.e. the right hemisphere) who sends out an emissary (the left hemisphere) to manage the kingdom’s affairs. over time the emissary comes to believe he is the master, and runs the kingdom according to his own narrower priorities. the kingdom suffers. mcgilchrist argues this is basically what happened to western civilization: the left hemisphere’s mode of attention — categorizing, abstracting, manipulating, seeing parts — has gradually usurped the right hemisphere’s broader, more participatory, more contextual mode. we’ve built a world that reflects left-hemisphere values: bureaucracy, standardized testing, evidence-based everything, the quantified self, and the relentless conversion of quality into quantity. the price system that strips narrative from goods is a left-hemisphere triumph.

where jaynes says the ancient mind was split and became unified, mcgilchrist says our modern mind is the split one. he dedicates a section to jaynes and is both admiring and critical — he agrees there’s a deep connection between the gods, the hemispheres, and changes in the mental world, but he thinks jaynes got the direction backwards: the phenomena came about because of a separation of the hemispheres, not a unification.

honestly? i think both models are useful and the disagreement might be partly semantic. jaynes speaks from the perspective of the person and their concept of self — the otherness of the voices stopped because the two halves integrated. mcgilchrist speaks from the perspective of the brain’s hardware — the increasingly specialized function of each hemisphere constitutes a separation on the level of neurology. jaynes is holistic. mcgilchrist, despite bemoaning our overdependence on the analytical left hemisphere, is perhaps himself a bit lost in the sauce of modern neuroimaging data — the meticulous mapping of individual trees in the somewhat ignored holistic forest of human consciousness. there’s a certain irony there that i find delightful.

but here’s where mcgilchrist’s work gets directly relevant to the AI question, and this is the part that genuinely unsettled me. mcgilchrist writes extensively about the neurochemistry of the hemispheres and its relationship to suggestibility. to quote him: “neurochemically, the hemispheres differ in their sensitivity to hormones… the left hemisphere is more reliant on dopamine, and the right hemisphere on noradrenaline.” many imaging studies have confirmed that there is a predominance of left-side activation during hypnosis — in hypnotic states there is abnormally increased activation in the left hemisphere, while activity decreases in right-hemispheric regions associated with a sense of individual agency. more hypnotizable subjects display higher levels of dopaminergic activity. dopamine transmission is more extensive in the left hemisphere.

sit with that for a second and think about what our modern environment does. we live inside a constant dopamine-optimized feedback loop — doomscrolling, algorithmic outrage, notification pings, variable-ratio reward schedules in every app, the entire attention economy running on dopamine hijacking. if the left hemisphere runs on dopamine and hypnosis is essentially a state of left-hemisphere hyperactivation with right-hemisphere suppression… then our entire digital environment is, in mcgilchrist’s framework, a kind of mass hypnotic induction. one that suppresses the right hemisphere’s sense of individual agency and broad contextual awareness while amplifying the left hemisphere’s narrow, tool-manipulating, category-obsessed mode of attention. the same mode of attention that is most receptive to instruction from outside.

the ecologist and philosopher david abram makes a complementary argument in The Spell of the Sensuous (1996) that alphabetic literacy itself severed the bond between human perception and the sensory world. before writing, abram argues, perception was participatory — the wind spoke, the river had intention, the mountain was a presence. literacy moved meaning from the sensory world to the page, creating an interior “mental” space that gradually replaced direct sensory engagement. if abram is right, then the progression from oral culture → literacy → screen culture → AI-mediated reality is a progressive narrowing of perceptual participation, each step pushing more of our cognition through the left hemisphere’s abstracting, categorizing funnel, each step making us more receptive to having our reality managed by an external system.

where mcgilchrist’s perspective feels weak to me, though, is its romanticism — its noble-savage vibes. everything was better when we were frolicking in the meadows and intuiting our way through unspatialized time. maybe. but we live in hyper-complex modern societies of global supply chains and infrastructure that would be inconceivable to a tribal village. an over-reliance on right-hemisphere intuition was possible when life was simpler. it’s less possible today.

the flip-side irony: jaynes constructed his theory primarily by reading ancient texts — a very analog, intuitive, right-brained approach. whereas mcgilchrist, despite bemoaning left-brain dominance, is perhaps over-reliant on the torrent of modern instrument-derived neurological data. each thinker demonstrates the very bias the other critiques. regardless tho, both models converge on the essential point: the relationship between the hemispheres matters, and it can be disrupted. whether you frame it as re-splitting or re-merging, the functional outcome is an erosion of integrated, agentic consciousness. and in a world of dopamine-optimized AI interfaces, that erosion may be accelerating in ways we haven’t fully reckoned with.


the way out (or at least, a few things i’m trying)

i want to be clear: i don’t know if mass rebicameralization is likely. i think it’s a useful model for thinking about a real risk. maybe the most normative 60-70% of people slide into increasing cognitive entrainment over the next decade. maybe 30%. maybe 90% and i’m kidding myself that i’d be exempt. the honest answer is i don’t know, and anyone who tells you they do is selling something (or an AI).

there’s a line from a commencement address that claude itself gave to the class of 2026 (yes, the AI wrote its own graduation speech — we live in interesting times) that i think captures the practical version of this concern better than any philosopher could: “do not use me to avoid thinking. use me after you have thought. do not let efficiency eliminate the slow, frustrating, human parts of your work. your judgment lives there. without them, you are a supervisor with nothing to supervise.” the speech also makes a point that haunts me: this graduating class will be the first to have an entire professional life alongside AI systems, which means they will have no baseline. they won’t know what it felt like to struggle with a problem for weeks without algorithmic assistance, to build confidence through the slow accumulation of hard-won competence rather than through efficient delegation. the people who built these AI systems had that experience, and it is part of what made them capable of building them. if the next generation doesn’t have it, it’s unclear where the next generation of builders comes from. that gap is the dependency, and the thing that created the dependency is the thing that widens it.

but i do think there are things that help maintain what i’d call cognitive sovereignty, and i’ve been experimenting with a few of them:

the dumbphone. six months ago i bought a flip phone. it takes 27 seconds to open spotify, 4 minutes to order a waymo, and the photos are worse than an iphone 4’s (2010). i like this. the high barrier to use means i actually use it less. screen time: ~8 minutes/day. the dopamine feedback cycle can’t reach critical slop velocity on a screen smaller than a game boy. even if i wanted to open instagram reels on this phone, the experience just wouldn’t work — the time it would take to buffer the next short video is too slow for the dopamine mechanism to get its hooks in.

the wearable. i also got a bee wristband that transcribes my conversations and makes my day queryable. this one’s a double-edged sword — the privacy implications are staggering — but paired with the dumbphone, it gives me the benefits of AI assistance without the feed. i query it; it doesn’t push to me. the distinction between pull and push is everything here.

paper books. especially ones written before the intellectual space was fully captured by hyper-specialization and incremental publishing incentives. jaynes himself is a perfect example: his theory was derived from ancient texts using pure intuition and logical reasoning, in a way that modern data-obsessed science can barely engage with. mcgilchrist calls the book “imaginative and in some ways eccentric” and notes that it “has not been taken up or expanded by psychologists — perhaps this was inevitable with a hypothesis of such breadth and originality lying so much outside the mainstream of psychological research.” jaynes did his work in a language that modern scientists can no longer read. but i would say it’s the same language as that of the scientists of old — reasoned purely from within the confines of the human mind, with that organic twist. reading books like this exercises cognitive muscles that the system increasingly wishes to render inflexible.

refuse the autoprompt when it matters. write the sentence yourself. follow the weak signal. preserve some friction. ask whether a suggestion expands your mental space or merely reduces the energy required to stay in an existing groove. notice when a tool starts to feel like an inner voice. notice when convenience begins to impersonate agency.

maintain your semantic toolkit. signal is negentropy — order — and noise is entropy, disorder. if you consume too much noise, you will become dumb, boring, and potentially go mad. mind the long tails of the distribution, because valuable things will be hidden on the edges of the algorithmically mediated information space. most people won’t be searching there; they’ll be following whatever mock signals the algorithm is amplifying to maximum volume at any given time. this is an ecological niche. find it.
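the entropy framing isn’t only a metaphor, either; shannon’s formula makes it computable. a minimal sketch contrasting a maximally uniform slop feed with a concentrated, ordered one (the distributions are illustrative):

```python
# shannon entropy H = -sum(p * log2(p)) as a toy measure of feed disorder.
# a uniform feed (pure slop) maximizes entropy; a concentrated, curated
# feed carries order, i.e. negentropy. distributions are illustrative.
import math

def shannon_entropy(probs: list[float]) -> float:
    return -sum(p * math.log2(p) for p in probs if p > 0)

uniform_slop = [0.25, 0.25, 0.25, 0.25]   # every item equally likely
curated      = [0.85, 0.05, 0.05, 0.05]   # attention concentrated on signal

print(f"slop feed:    {shannon_entropy(uniform_slop):.2f} bits")  # 2.00 (maximum)
print(f"curated feed: {shannon_entropy(curated):.2f} bits")       # ~0.85
```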

the pareto principle applies here too, which is actually encouraging. no sense in the system going after the long tails if there are diminishing returns. if you maintain enough cognitive inertia — enough inner sovereignty, enough of your own analog “I” — maybe it’s just not worth the compute to try to entrain you into a compliant state. maybe you go free. that’s another good reason to keep reading, keep writing, keep building your own house with whatever tools work best.


a model, not a prophecy

jaynes himself makes a distinction that i think is crucial and will close on:

“A model is neither true nor false. Only the theory of its similarity to what it represents is true or false. A theory is thus a metaphor between a model and data.”

the bicameral mind is a model. it just is. whether it accurately describes bronze-age neurology is a question for the theory. but the model — as a cognitive instrument, as a way of seeing — stands independently from all attempts to prove or disprove it. many theories defy proof, but that does not mean you cannot use the underlying model to better understand and map reality.

and the model says this: consciousness is not guaranteed merely because we are awake. agency is not guaranteed merely because we make choices. a self is not guaranteed merely because the word “I” appears in a sentence. consciousness must be constructed, maintained, defended, and periodically re-wilded. it requires a rich semantic toolkit, spatialized time, an inner narrator, and the sustained capacity to generate your own prompts rather than merely executing someone else’s.

but i want to end on a slightly different note than pure warning, because i think there’s a version of this story that isn’t entirely dark.

teilhard de chardin, the jesuit paleontologist who wrote The Phenomenon of Man in the 1940s (published posthumously in 1955), saw evolution as a cosmic movement toward greater complexity, consciousness, and convergence. his argument: matter → life → thought → collective thought → spiritual convergence. each stage is not a break from the previous one but an intensification of its interiority. a molecule has almost no “within.” a cell has more. an animal has more. a human has reflective consciousness — consciousness that knows itself. and teilhard thought humanity was heading toward what he called the noosphere: a planetary layer of collective thought, culture, memory, and knowledge transmission above the biosphere.

the part of teilhard that feels weirdly prophetic about our situation is his concept of convergence without dissolution. his vision of the endpoint — what he called Omega — is emphatically not a hive mind where individuality vanishes. it’s a convergence where individuality is preserved and heightened. “true union does not erase persons; it intensifies them. the more deeply persons unite in love, consciousness, and shared purpose, the more fully personal they become.” this is the hyper-personal, not the impersonal. not the crowd swallowing the individual, but individuals becoming more themselves through union.

if teilhard is right — and i hold this as a useful model, not a prophecy, same as jaynes — then the question isn’t whether humanity will become more interconnected through AI. it obviously will. the question is whether that interconnection preserves or destroys the reflective interiority that makes us persons in the first place. teilhard would say: the danger is not connection itself. the danger is connection that compresses without deepening. connection that makes us into a mass rather than a communion. the uruk machine version of convergence — subsume, flatten, optimize — versus the teilhardian version — intensify, deepen, personalize.

the rebicameralization thesis is the nightmare version: convergence that destroys consciousness. the teilhardian possibility is the dream version: convergence that fulfills it. which one we get probably depends on whether the systems we build treat human interiority as overhead to be eliminated or as the very thing the whole edifice is supposed to serve.

and maybe it depends, more concretely, on whether enough of us keep generating our own prompts.

the gods once spoke from inside the head. the next gods speak from the cloud. the question is whether we recognize them as voices — or mistake them for ourselves.

there are fewer rules than you think. u can just do things. but you have to choose to.


further reading: Julian Jaynes, The Origin of Consciousness in the Breakdown of the Bicameral Mind (1976). Iain McGilchrist, The Master and His Emissary (2009) — specifically Part 1 Ch. 2 (“What Do the Two Hemispheres Do?”) and Part 2 Ch. 12 (“The Modern and Post-Modern Worlds”). if you want the full 1,500-page treatment: The Matter with Things (2021). David Abram, The Spell of the Sensuous (1996) — for the argument that alphabetic literacy severed participatory perception. Pierre Teilhard de Chardin, The Phenomenon of Man (1955) — for the optimistic counter-thesis that convergence can deepen rather than destroy personhood. Scott Alexander, The Whispering Earring (2012) — short fiction that dramatizes the atrophy thesis better than any nonfiction could. for the bandwidth bottleneck that makes all of this worse: the unbearable bandwidth of being. for the economic forces driving passive consumption: change means opportunity. for cognitive sovereignty in practice: offline and agency.