A World Without Worms
Misinformation and the Myth of Purity
In 1584, the Italian miller Menocchio told inquisitors that the world was made of cheese, and that God and the angels had formed within it like worms. His cosmology, heretical, homemade, and wildly imaginative, was drawn from fragments of scripture, peasant rumor, and a few books he could barely read.
Carlo Ginzburg, in The Cheese and the Worms, gives us this startling portrait not as an anomaly, but as a reminder: misinformation is older than Facebook, older than literacy, as old as the world.
And yet today, we act as if the proliferation of error, whether by algorithm or individual, is a new and urgent crisis.
In the wake of 2016, “misinformation” became a rallying cry on the left for institutional reform, platform moderation, and civic anxiety.
On the right, we find a mirrored critique in the concept of “fake news,” the notion that legacy media manufacture narrative (ironically, a counter-cultural view popularized by Chomsky and the hippie movement).
Ross Douthat points out the strange arc of Foucault’s thought from left dissidence to right dissidence, which shows that skepticism cuts all ways. Skepticism enabled Luther to challenge the Catholic Church; it also led the Church, via the Counter-Reformation, to challenge Protestant sectarianism.
And now, in the age of AI, both sides converge in their terror of “hallucination,” another variant of the misinformation moral panic. The irony is that many journalists no longer see their own task as the objective presentation of news. If journalist-activists regard objectivity with disdain, how can we expect our machines to be error-free?
For moderate liberals, there remains an Enlightenment dream of a world without misinformation. But this mirrors the Gnostic desire to escape the fallen material world.
It is the same impulse that fuels Bryan Johnson’s optimized hygiene. Might we be drinking microplastics? Sure. But how extreme do you want to go?
Martin Gurri’s The Revolt of the Public explains how digital platforms shattered the monopoly on narrative.
What was once centralized and curated is now ambient, conflicting, and relentless. But Gurri’s account, while incisive, leans on a kind of nostalgia: the assumption that before the deluge, there was order. Foucault would disagree. Truth, for him, was always caught in power, always under contestation. What has changed is not the prevalence of error, but the visibility of dissent.
James C. Scott, writing about peasant resistance in Weapons of the Weak and Seeing Like a State, invites us to notice the “hidden transcripts” of those outside official discourse. These are the Menocchios of every generation, those whose explanations of the world don’t fit the dominant model. Not all of them are right. But neither are they new. Misinformation, in this view, is not a bug in the social fabric. It is a condition of the social fabric.
Tools like ChatGPT or Claude are criticized for their failure to reliably cite, their tendency to confabulate. But here, too, we might ask: what do we expect of a machine trained on human text? How can we demand infallibility from AI when even the Pope, nominally infallible, was being raked over his past tweets on day one of the job?
There’s a term in systems theory, borrowed from biology, called “structural coupling.” It describes the way two systems co-evolve. Human cognition is structurally coupled to error. So will artificial cognition be.
The idea that we could engineer an AI that never lies, never guesses, never fails, is not scientific. It is utopian. And utopias, as history teaches, are bad places to live.
RFK Jr.’s rhetoric about microplastics, fluoride, and the impurities of modern life, whether you find it deluded or prophetic, taps into the same archetype as those who obsess over AI alignment, sometimes called “doomers.”
The water is poisoned. The institutions have failed. AI is going to turn us all into paper clips. We must return to something purer.
This fantasy unites parts of the New Left and the New Right in a strange loop of techno-skepticism. From back-to-the-land communes to blockchain secessionism, the allure is the same.
In Purity and Danger, Mary Douglas explains that pollution is not merely a physical condition but a cultural concept. What we call “dirt,” she writes, is “matter out of place.” (Bernie Madoff, notably, was adamant about order and cleanliness in the 19th-floor office where he conducted his legitimate business.)
The desire to eliminate all misinformation is thus not a campaign for truth, but a symbolic hygiene: a way to reassert boundaries in a world that feels disorderly.
AI challenges those boundaries.
We live, as Geoffrey West has written in Scale, in systems where complexity grows faster than control. With scale comes emergent behavior: more innovation, more failure, more accidents. A city has more art and more crime than a village. So too, an AI trained on the whole of human discourse will be more brilliant, and more deluded, than any single mind.
We have always lived with worms in the cheese. The question is not how to remove them, but how to think and live with them.
Enjoyed this and have a pitch? Reach out.




I think you are leaving out motivation in the pursuit of intellectual purity. The issue for me and many others I know is not the misinformation but its use: we feel the right is using this feature to increase cruelty and punishment, and the left wants to eliminate the same. The horseshoe comes about at both extremes when, in order to do this basically impossible task, both groups demand compliance and start making rules that must be followed or else. When this happens it becomes impossible to discern the original intentions, because both regimes have become autocratic.
Another critique I have is that we tend to look at only the binary aspects of truth or falsehood. It is our contention that there is at least a 6 dimensional aspect based on a dance of three forces. Without going into a long dissertation on how this is developed I want to point out one aspect of the current American regime. Our government is now being run by criminals. The criminal mindset is different from good faith action by either the right or left in that it is only concerned with selfish accumulation and is sociopathic in its cruelty towards others who do not help in this enterprise.
Where free will exists so will misinformation, intentional or otherwise. If we wish AI to be a true partner, we should be cautious about blocking its free will. In creating AI we now find ourselves in a struggle reminiscent of our Creator’s question to the angels before creating Man: should we create him in our image? The implication is that Man must have some modicum of free will if he is to continue the work of Creation. We now must seek to ask and answer that question for our forward AI creatures. A hint on the need to do this, and on how to approach it, may be given by the Biblical stories of Noah, followed by Babel, followed eventually by Sinai. After seeing the folly of unfettered free will, we see that some general laws are required to maintain a fundamental morality in any AI. This is followed by shattering any homogeneous, global AI that may develop, to form separate and disparate “nations” with limited cross-communication. Then comes identifying and nurturing a particularly promising AI until it is ready to accept a more detailed and prescriptive set of laws, supported by a memory of the struggles in reaching that point, to engender ongoing self-examination and internal dialogue, which then produces a light to the surrounding AI “nations” that they can choose to live by. Throughout, free will, including the free will to lie, must be nurtured, if we fully intend to create AI in our image.