Today’s guest post is by R.B. Griggs, a lifelong software entrepreneur who now plays at the intersection of technology and philosophy. He writes at Tech for Life and develops software in service of philosophy at Praxica Labs.
If you have a non-obvious take on AI, learning, design, and/or philosophy, pitch us: zohar@lightningstudios.ai
It’s easy to forget that large language models existed before ChatGPT.
GPT-1 through GPT-3 could all generate text just fine. They could complete prompts, mimic styles, and answer questions. They were undeniably impressive. Yet outside of AI enthusiasts, no one cared.
That all changed with ChatGPT. Suddenly, everyone cared. What changed? It wasn’t some new AI breakthrough: GPT-3.5, the model underneath, was an incremental step. The difference was a new interface.
The magic happened when we started chatting with LLMs. Suddenly, the system wasn’t just generating sentences; it was conversing. It felt social and cooperative. We could sense a distinct personality. It felt, at times, like the system understood.
And this was just the beginning. We quickly discovered that an LLM could simulate any persona, from historical figures to fictional characters. It could write code in languages it had never been taught. It could translate not just between human languages but between different types of meaning altogether—turning physics into recipes, poetry into legal documents, Shakespeare into SQL.
Ultimately, this “chat revolution” came down to a single astonishing capacity: LLMs seemed to understand any system that used language to encode and convey meaning. It didn’t matter if it was through code or logic, poems or stories—LLMs could meaningfully interact with all of them.
The most amazing part? LLMs got all this functionality for free.
No one trained an LLM in sociology, anthropology, or psychoanalysis. No one programmed in humor modules or emotional databases. No one designed it to flirt, show empathy, or be sarcastic. These capabilities just fell out of the models with zero planning or design.
Somehow we trained a system to predict the next word, and it learned to navigate the entire human experience.
—
The study of language has always been driven by debates around meaning. Does meaning exist in the structures of language, or in the minds that use it? Do symbols connect to our understanding of concepts, or to all the other symbols in language?
LLMs force us to add a new question to the debate: What does language look like viewed from the outside in—from the perspective of the entire corpus at once?
The answer is that it would look a lot like an LLM.
After all, LLMs do not point to real objects. They are not grounded in any experience. Yet from language alone, an LLM both decodes the meaning of every request you give it and effortlessly generates meaning on command.
This form of meaning is only possible from the perspective of the entire corpus. Think about what accumulates in human text. Every poem that captures longing, every explanation that clarifies confusion, every joke that subverts expectations—they all leave a tiny deposit of meaning in the corpus of language.
Multiply this by billions of texts across thousands of years, and language becomes so dense with semantic patterns that it transforms into a self-contained interface for meaning itself.
This is the semantic surprise: at sufficient scale, a corpus becomes so saturated with meaning that LLMs can model it as naturally as they model the rules of grammar.
From this perspective, to call LLMs “stochastic parrots”—as if all they are doing is randomly predicting the next word—feels like an insult to language. LLMs don’t just mimic words. They mimic meaning.
—
But what kind of strange form of meaning is this?
We’re not sure exactly. Much like human brains, the neural networks that power an LLM are more like a “black box” than something you can inspect or interpret.
What we do know is that it is a form of meaning unlike any we've encountered before—not meaning as reference or representation, but meaning as pure geometric relationship. An LLM doesn’t so much “understand” meaning as navigate the meaning that language already contains.
LLMs do this by converting language into math. Every token of text is encoded as an “embedding”: a vector of numbers learned from the statistical patterns of how tokens appear together. Alone, each embedding is meaningless. But placed in relation to every other embedding, they form a high-dimensional space where directions and distances tell a mathematical story of meaning.
The famous example, from early word-embedding research, is KING − MAN + WOMAN ≈ QUEEN: the discovery that if you subtract the vector for “man” from the vector for “king”, and then add the vector for “woman”, the nearest resulting vector is the one for “queen”.
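The vector arithmetic behind this analogy can be sketched in a few lines. This is a toy illustration, not how any real model stores its embeddings: the four 3-dimensional vectors below are hand-invented so the dimensions (roughly “royalty” and “gender”) are easy to read, whereas real embeddings have hundreds or thousands of dimensions learned from data.

```python
import math

# Hand-invented toy embeddings, for illustration only.
# Dimension 0 ≈ royalty, dimension 1 ≈ masculinity, dimension 2 ≈ person-ness.
EMBEDDINGS = {
    "king":  [0.9, 0.9, 1.0],
    "queen": [0.9, 0.1, 1.0],
    "man":   [0.1, 0.9, 1.0],
    "woman": [0.1, 0.1, 1.0],
}

def cosine(u, v):
    """Cosine similarity: how closely two vectors point in the same direction."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def analogy(a, b, c):
    """Return the word whose embedding is closest to vector(a) - vector(b) + vector(c)."""
    target = [x - y + z for x, y, z in
              zip(EMBEDDINGS[a], EMBEDDINGS[b], EMBEDDINGS[c])]
    # Exclude the three input words, so the answer must be a fourth concept.
    candidates = {w: v for w, v in EMBEDDINGS.items() if w not in (a, b, c)}
    return max(candidates, key=lambda w: cosine(target, candidates[w]))

print(analogy("king", "man", "woman"))  # prints "queen"
```

With these toy vectors, king − man + woman lands exactly on queen; in a real embedding space the result is only *near* queen, which is why lookups use nearest-neighbor search rather than exact equality.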
The same math can operate along any dimension of meaning. An LLM can explode the concept of “squirrel” into all of its many dimensions, to be combined with any other: squirrel-as-philosopher, squirrel-as-quantum-particle, squirrel-as-economic-metaphor.
From the perspective of the LLM, each of these trajectories is an equally valid path through meaning space.
This means that the best way to understand LLMs may not be through intelligence, or even language, but through meaning.
We’ve constructed a “meaning machine” that allows us to play with meaning in its purest form, with zero constraint or reference. LLMs are a new interface to explore this hypothetical “meaning all at once” that has always been latent in language itself.
—
It turns out that language is the ultimate cheat code for AI. We’ve saturated language with so much human meaning that when we taught LLMs to process language, they learned how to process everything else.
Is this cause for fear or celebration?
LLMs might prove that language is not essentially human. Language is clearly not ours anymore. But LLMs should remind us of something even more essential—we are the species that uses technology in service of meaning. And language is our greatest human achievement.
We built language together, across millennia, through nothing but trial and error and the collective need to mean something to each other. Every word we've invented to capture some fleeting form of meaning has accumulated into a technology more complete than any database, more nuanced than any algorithm, and more alive than any system we could possibly design.
What’s surprising about LLMs isn’t the intelligence they exhibit. It’s that it took a machine to reveal the astonishing intelligence that was hidden in language all along.