In this essay, I’d like to do two things:
1) Reflect on Clayton Christensen’s ontology.
2) Explore the ontology of AI tutors, a topic taken up in Simone and Malcolm Collins’s latest piece, “Why We’re Surrounding Our Kids with AI.”
Clayton Christensen’s Ontology
Clayton Christensen is well known as the originator of the concept of disruptive innovation, which he popularized in his book The Innovator’s Dilemma. Less well known is that he was a devout Mormon. For him, business and theological perspectives were deeply intertwined. Just as companies can sacrifice long-term health for short-term victories, so too can we sinful mortals over-emphasize our own vanity metrics over the things that matter on a longer time horizon.
Christensen famously says that we hire “products” to do jobs for us. A product’s definition or “value proposition” can be defined by its “job to be done.”
This simple and elegant approach is at once Aristotelian and Heideggerian. Aristotle names the “final cause”—an object’s telos or purpose—as one of its defining determinants. Heidegger, in describing objects in a craftsman’s workshop as Zuhandensein (“ready to hand”), suggests that a hammer’s being is not what the hammer is made of, nor what it appears to be to a disinterested scientific observer or an alien anthropologist, but instead what it’s for—hammering. The craftsman doesn’t need to read a philosophy textbook to know what the hammer is. He demonstrates his knowledge by picking it up and using it.
Yet when we follow the chain of thought—what is hammering for? Building a house. What is building a house for? Dwelling…—we realize that there’s one being that brings it all together: Dasein (that’s you and me), the being who cares. We can’t say why we care, but we do. It’s our given. In German, “Es gibt” means “it is” or “there are,” but it translates literally as “it gives.” For Heidegger, at some point we just have to accept that we are given over to the world. Our job to be done—to live the life we are given—is itself impossible to define abstractly; we can only demonstrate our knowledge of it by living, in the exact same way that the hammerer knows how to hammer.
Defining a product by its job to be done also has early Wittgenstein vibes. The Wittgenstein of the Tractatus seeks to show that our linguistic universe is composed of analytic propositions all the way down. Just as everything in the material world is composed of basic elements, so too can the world of concepts be distilled into basic analytic components.
Yet a park bench doesn’t simply have a single job to be done; rather, it has affordances: it can be used to read the paper, people-watch, catch a breath, talk to a stranger, fraternize with a friend, or take a nap. I can even stand on the bench, transforming it into a makeshift pedestal. A simple park bench “bundles” different use cases, some of which are intended, and a few of which are not. It also offers optionality or potentiality. I may sit on the bench myself, yet the empty space next to me implies a social horizon that a single chair would not. Yet despite the plurality of ways I might “hire” a bench to do a job for me, I cannot enlist its help as a therapist or a priest; I cannot delegate work to the bench. I can bring my own lunch to the bench, but I can’t order a meal at the bench. I might stare at the bench as at a work of art, but I couldn’t use it as a bench press. The limits of the bench—what it’s not for—also help define it.
As it turns out, one way to develop a moat or competitive advantage is to bundle value propositions—to solve multiple jobs to be done with a single product or suite of products. Diversifying jobs to be done ensures that even if you’re weak in one, it will be hard for people to leave. Universities are great examples of value bundles. YouTube or ChatGPT might outperform on the content and instruction side, but they can’t replace the frat party or the college green or the diploma you get after four years. GitHub might disrupt the credential, but you won’t meet your future spouse there. Dating apps won’t teach you Art History. There is no other institution that gives 18-22 year olds a socially acceptable way to take a low-key sabbatical. Of course, this is all changing. As colleges prove to be inferior at delivering social proof, social networking, and job training, they enter a doom-loop. But with luxury brands like Harvard, don’t underestimate the value prop of simply being able to say you went there.
Another great example is the NYTimes, which many predicted would crash in the tech age, but which has only compounded. The NYTimes offers not just news, but opinions, games, and a comments section. It’s also a marketplace—not unlike Facebook and Twitter—that matches people with takes to people demanding them. And it’s an iconic brand that serves as a Schelling point for conversation and for measuring the cultural Zeitgeist, even amongst those who love to hate it. You can isolate any one value prop and point to its inferiority, but few bundle as well as the NYTimes.
The job to be done by bundlers is to aggregate and curate. The park bench doesn’t solve one problem; it says “first, sit down, then figure out what problem to solve.”
The Ontology of AI Tutors
What, then, do we hire AI tutors to do? What are their jobs to be done? What jobs do they come to challenge or replace? And what can they not do that the bundlers can? And how much should it matter?
AI tutors can’t be real people. So to the extent that you need to feel connected to an actual person, even one with a difficult personality or psychological limits, they can’t offer humanness as a service. Some teachers are not the ones we’d choose, but the ones we remember and thank later. We didn’t like every view they had or every way they communicated, yet we remain inspired by one thing they said or one way they said it. To the extent that a teacher-student relationship matters, the AI tutor can’t offer it. From that point of view, their analogue—to put it crassly—is the AI girlfriend: self-involvement in the clothing of alterity.
When I reflect on my experiences as both a student and teacher of Torah, I understand that a big part of the value is not the content itself, but the environmental mood of awe and the impression it has made on me and my students. Perhaps tech will manage to solve this issue, but I have a keen sense that it’s tied to what Buber calls the I-Thou relationship. My teachers learned from their teachers, who learned from theirs, going all the way back to Sinai (according to Pirkei Avot). The sense of authenticity I experience when I feel this “chain of tradition” is part of the bundle I purchase when I learn with a rabbi. I may learn more efficiently on my own, but then I’m using reason (and cultural bias) to reach my conclusions, not Mesorah, the notion of conventions being passed down through the ages. I imagine the same is true for many guilds, such as ballet or classical music, which likewise involve heavy doses of apprenticeship and mentorship.
It’s easy to compare AI tutors to base AI models like Claude and ChatGPT. “Virgil is just those things but with better RAG, better subject matter expertise, better system prompt architecture, and more refined multi-agent orchestration.” I don’t disagree with that, but I think it misses the point.
AI tutors aren’t competing with base models; they’re competing with professors, coaches, therapists, and encyclopedias.
In academia, professors shine in their subject matter expertise; but that’s not the job most students hire professors to do for them. The 19th-century research university sees professors as purveyors of knowledge. But we don’t read Joseph Conrad, Herman Melville, or Jane Austen for the kind of knowledge that is scientifically rigorous. We read them to learn how to observe our world and derive deeper lessons about the human condition.
When professors aren’t hewing to 19th-century objectivity, they’re replacing it with Neo-Puritanical wokeism, creating a different form of alienation. A moderate, free-thinking, curious undergrad must choose between dry, analytical study or impassioned intolerance. As the FIRE reports show, the most prestigious universities are the least ideologically diverse and the most censorious. AI tutors can connect the intellectual search to the personal dimension in a way that navigates between the Scylla of objectivism and the Charybdis of far-left cultural Marxism.
If you are fortunate enough to be religious, you run into the challenge that your teachers are typically only well read in their own tradition and/or driven by a commitment that makes it harder for them to engage with skepticism and doubt. Liberal religious leaders will typically tell you exactly what you want to hear, ideologically speaking (and are, in that regard, just as sycophantic as AI bots), while more Traditionalist ones will assume you’re already bought into the creed. There are a lot of religiously and spiritually curious people who want to hear something more sophisticated than “the Bible endorses [insert the Democratic Party’s platform],” but who are also not fideists (prepared to take the truth claims and moral claims of their religion at face value, without any cognitive dissonance). AI tutors can help religious seekers, going where secularists can’t, and orthodox insiders won’t.
Therapy tends to take two forms: talk therapy or cognitive behavioral therapy. Neither is particularly intellectually rich, and there’s a reason the meme “Men would rather X than go to therapy” exists. Jordan Peterson’s followers prove the need for moral instruction combined with an intellectual frame—without the feminizing effects of therapeutic culture, on the one hand, and the clinical approach to personal problem-solving, on the other. We’ve traded the archetypes of Aristotle and Confucius for Tony Robbins and Esther Perel. AI tutors can attack this niche, going to places that merge the therapeutic and the intellectual.
In many ways, both academic culture and therapeutic culture are part of the unbundling of religion. Academia unbundles the scholarly apparatus that used to be requisite amongst religious leaders; universities began as theological institutes. Therapy swaps the priest for the therapist and turns the couch into the confessional booth, with a Rousseauvian twist: instead of atoning for your sins, as in the past, you now blame society for your trauma. You are good. Society is bad. In both cases, the job to be done is a sense of catharsis.
Montaigne and Oakeshott teach us that you can’t change something without unintended consequences. The reason why certain institutions have proved durable is often more complex than we realize. In short, great institutions bundle, and we unbundle them with greater risk than we realize. You think you know the job to be done, but you don’t.
This perspective might lead to a certain skepticism about AI tutors. But I draw the opposite conclusion. Great AI tutors will also have to bundle multiple value propositions. They’ll combine the credibility of Wikipedia with the warmth of a religious leader, the rigor of a professor, the practical orientation of a coach, and the personal regard of a Mr. Rogers. Tutor is the wrong word if we assume that a tutor’s job is simply to convey knowledge. Friend is the wrong word because it’s not real. Research Assistant is the wrong word, because you’re not outsourcing the search. It’s a new category. For now, I’m calling it “Your Intelligent Library.”
To bring it back to Christensen: a product with a single value prop dies when we find a different way to deliver that value, or when our need for it dissipates; but a product that contains affordances, like a park bench, carries an embedded option you don’t even yet know how to realize. I am optimistic that AI tutors are in the early innings not just from a technology and capability perspective, but from a cultural perspective. We don’t even fully know what problems they will be able to solve.
Having studied a range of thinkers and books with Virgil (launching this week!), from Adam Smith and Gregory of Nyssa to Lord Byron and Epictetus, the thing that moves me most is not what I have learned, but how the conversation has elicited and awakened questions that lay latent in me, questions I hadn’t thought to ask until I was engaged. I had to first sit down on the park bench to notice how pleasant it was to eat my lunch, while talking to a stranger, and watching the kite-flyers.