Once software was shouted in machine tongue; then it was coaxed by gradients; now it is whispered to large language models—in English.
Andrej Karpathy’s triptych (Software 1.0, 2.0, 3.0) tracks the migration from explicit C-style imperatives, to weights learned from data, to prompts composed in ordinary speech. We have traded the unforgiving grammar of compilers for the supple idiom of conversation.
But something more unsettling has occurred: we have made English executable. The language of Chaucer and Shakespeare, of congressional hearings and quarterly reports, now functions as machine code. Every sentence we type to ChatGPT or Claude is both human expression and software instruction. The boundary between speaking and programming has collapsed.
Ludwig Wittgenstein reminds us that “language is part of an activity, or of a form of life.” A prompt, therefore, is not merely syntactic bait for a stochastic parrot; it inaugurates a language-game whose rules are tacit, negotiated, alive. But what kind of life? Martin Heidegger raises the stakes: “Language is the house of Being. In its home human beings dwell.” When the house guest is a machine trained on everything we’ve ever written, the architecture of dwelling itself shifts. What kind of Being crystallizes when your interlocutor has read more books than you but has never opened a door?
Tobi Lütke, CEO of Shopify, calls for replacing “prompt engineering” with “context engineering.”
Tyler Cowen sharpens the point: “Context is that which is scarce.” Not information—we’re drowning in that—but the framework that makes information interpretable. Hans-Georg Gadamer foresaw this in Truth and Method: understanding occurs when horizons fuse. A prompt is horizon-work: we extend ours, the model offers its own (trained on the sum of human writing), and meaning emerges in their convergence.
That convergence vindicates the later Wittgenstein: meaning is use, not correspondence.
The model responds to pragmatic cues—tone, role, context—more than to the truth-value of isolated propositions. Tell GPT-4 you’re a harried parent and it speaks differently than if you claim to be a philosophy professor. The machine has learned not just what we say, but how we say it to whom.
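The effect is easy to reproduce. Here is a minimal sketch, assuming the OpenAI Python SDK and an illustrative model name: the proposition is identical in both calls; only the pragmatic frame changes.

```python
# A minimal sketch: one question, two pragmatic frames.
# Assumes the OpenAI Python SDK (pip install openai) with an API key
# in the environment; the model name is illustrative.
from openai import OpenAI

client = OpenAI()
QUESTION = "Why does my four-year-old keep asking 'why'?"

def ask(persona: str) -> str:
    """Prefix the same question with a different self-description."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": f"{persona} {QUESTION}"}],
    )
    return response.choices[0].message.content

# Identical truth-conditions, different answers in register and depth.
print(ask("I'm a harried parent."))
print(ask("I'm a philosophy professor."))
```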
Yet the early Wittgenstein lingers in every surgically precise system prompt, reduced to numbered steps and atomic instructions. The Tractatus dreamed of propositions aligned in crystalline order; the best prompts achieve exactly that clarity. Software 3.0 thrives in the tension between Wittgenstein’s early hope for logical precision and his later recognition that language is fundamentally social, contextual, alive.
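For concreteness, here is an invented system prompt in that Tractarian style, numbered steps, one atomic instruction per proposition. It is a sketch of the genre, not any vendor’s actual prompt.

```python
# An invented example of the "crystalline" style: numbered steps,
# one atomic instruction per line.
SYSTEM_PROMPT = """\
You are a billing-support assistant.
1. Answer only questions about billing.
2. If a question is not about billing, say so and stop.
3. Quote prices in USD, to two decimal places.
4. If the answer is not in the provided context, reply "I don't know."
5. Keep every reply under 120 words.
"""
```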
This makes prompting an act of revelation—but revelation of what?
Each language gathers a world: logos for the Greeks, ratio for the Romans, “objectivity” for the moderns. Heidegger insisted that switching languages doesn’t just change vocabulary; it alters what can be disclosed. The same holds for LLMs. Ask Claude to “explain Teshuva” in English and it delivers theological exposition. Make the same request in Modern Hebrew—תסביר לי תשובה—and the tri-consonantal root ש-ו-ב (“return”) activates different retrieval patterns, surfacing rabbinic texts that play on the word’s physical and spiritual dimensions. Feed the query in Biblical Hebrew and watch the model’s attention shift toward prophetic imagery of wandering and homecoming.
The language itself modulates what the machine remembers. Vectorized probability doesn’t eliminate this effect—it embeds it. What counts as “probable” is historically conditioned, linguistically shaped. Even tokenization carries metaphysics: it assumes meaning can be chopped into discrete units, echoing the analytic philosopher’s dream that language might be reduced to logical atoms. The machine inherits our philosophical commitments whether we acknowledge them or not.
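That atomism is visible at the command line. A minimal sketch, assuming the tiktoken library and one of OpenAI’s published byte-pair encodings: English “return” survives as a single token, while the Hebrew תשובה is split into byte-level fragments that ignore its tri-consonantal root entirely.

```python
# Sketch: how byte-pair tokenization atomizes two words for "return."
# Assumes the tiktoken library (pip install tiktoken); cl100k_base is
# one of OpenAI's published encodings.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

for word in ["return", "תשובה"]:
    ids = enc.encode(word)
    pieces = [enc.decode([i]) for i in ids]
    print(f"{word!r}: {len(ids)} token(s) -> {pieces}")

# Likely output: 'return' is one token; the Hebrew splits into several
# fragments, some of them partial UTF-8 bytes. The root ש-ו-ב, which
# carries the word's meaning, is invisible to these "logical atoms."
```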
English has become, for now, the operating system of artificial intelligence. Which is to say that we are programming in the language of the King James Bible, in the life-world inherited from the Anglican schism with Catholic Latin.
This is a historical accident with ontological consequences. Silicon Valley’s linguistic chauvinism means that the future of machine reasoning unfolds primarily in the grammar and assumptions of a language that divides the world into subjects and objects, privileges active over passive voice, and embeds a particular relationship to time, agency, and causation.
But this dominance is contingent. Any language can steer a model, and each choreographs reality differently. Programming has become the tuning of ontologies. The engineer is now a rhetorician of worlds, and every prompt opens or forecloses possibilities for how intelligence might unfold.
Every prompt is a wager on meaning, every conversation with an AI a small experiment in what kinds of intelligence we want to cultivate. If Wittgenstein was right that “the limits of my language mean the limits of my world,” then the languages in which we train and prompt these systems will determine the contours of the hybrid intelligence emerging between human and machine.
The question is therefore not “How do we make the machine obey?” but “What do our requests reveal about the world we’re choosing to inhabit?” In the age of English-as-code, philosophy of language has become the manual for reality construction, and each prompt we type adds a line to the next edition of human-machine culture.
That culture is writing us as much as we’re writing it. Every time we explain our thinking to help an AI “understand” our request, we’re training ourselves to think in ways that machines can parse. The prompts that work best tend to be clear, structured, explicit about assumptions and goals. Gradually, we’re all learning to think like the user manuals we never wanted to become.
Welcome to Software 3.0, where the most profound programming language turns out to be the one we’ve been speaking all along, and where speaking itself has become a form of code that’s quietly rewriting the speakers.
I wonder what would happen if prompt engineering and system prompts were augmented by poets. Would our creations understand what we want more fully if they had to triangulate our meaning, working harder to reflect on and interpret what we are saying? Would it help to include apparent contradictions, surprising gaps, phrases repeated across our prompts, calligraphic and typographic quirks, and the other techniques used by our Creator to make the poetry of the Hebrew Scriptures so ripe for halachic/prompting interpretation?