Post 9: Saving Torah Lishma
The three inviolable rules for constructing AI-Torah systems
Dicta (my AI lab) had been working on making Jewish texts more accessible for a while when GPT-2 came along, and we recognized both a challenge and an opportunity: could we use our digital library together with an LLM to give reliable, hallucination-free answers to halachic questions? It turned out we could. The truth is, we picked this objective just because it looked like an interesting challenge. We didn’t think hard at the time about whether this was precisely the tool the world needed, but it was probably a necessary first step toward whatever that tool might be, and so a reasonable place to start.
It worked better than expected, but the response told us something. Rabbis wanted sources, not answers; they were annoyed at the tool for presuming to decide, as if this were hasagat gvul. Regular users were doing the opposite, jumping straight to the answer and skipping the study. Two problems, one diagnosis: focus on sources, not conclusions. The rabbis don’t want to be replaced, and others shouldn’t be encouraged to rely lazily on good but imperfect technology.
To put it in terms that should be familiar by now, we wanted to help users develop their halachic intuition rather than replace it. The same distinction is now showing up in the literature on AI more broadly. Recent studies of lawyers and other professionals find that the key variable in effective use of AI is the user’s ability to evaluate the AI’s output and keep it honest. When the user retains control, they benefit both from a better product and from the experience of producing it. When the user can’t evaluate the output and accepts it as is – that is, when they’re delegating to the AI rather than using it as a partner – both the product and the experience suffer.
All this suggests four objectives that any use of AI for Torah should serve, or at least not undermine:
- Develop intuition, rather than replace it or kill it by making it unnecessary.
- Preserve mesorah as something alive, not mechanistic.
- Reward productive effort; take over the drudgery.
- Avoid mistakes; don’t let AI’s confident tone pass for reliability.
These translate into three operational rules. In a perfect world they would become a kind of agreed standard for AI-Torah tools — and beyond, if others find them useful. Dicta is planning a relaunch shortly, and all three are built into the new version.
Rule 1: Every claim must be grounded in a retrievable authoritative source. The user can always check what the tool is claiming against the source and decide whether the source supports the claim. This is what prevents hallucination-based mistakes.
Rule 2: Present the full range of opinion within the user’s chosen library. The user picks the library – for example, Chabad, Sephardi, Ashkenazi, some particular period/region, psak style, etc. – and just about every relevant source in it is retrieved. This limits hidden curation by the AI. The user stays within their chosen mesorah but sees the full range of opinion within it.
Rule 3: The default is not to give a conclusive answer. Conclusions are presented only on request, and only after the landscape has been laid out. The user who needs the psak can ask for it and get it. But by default, what the user sees is the terrain, not a definitive answer. This discourages jumping to the conclusion, and it leaves the work of adapting the sources to a particular situation, with all its human subtleties, to a human, whether that’s the educated user or an actual posek. The machine can describe what each position implies in general; it can’t necessarily read the room.
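The three rules can be read as constraints on the shape of a response: no claim without a retrievable source, retrieval over the whole chosen library rather than a curated slice, and a conclusion field that stays empty unless explicitly requested. Here is a minimal sketch of that shape in Python. Everything in it is hypothetical – the names, the toy keyword retrieval, and the one-line "conclusion" all stand in for whatever a real system like Dicta's actually does:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Source:
    ref: str    # a citation the user can open and check (Rule 1)
    text: str

@dataclass
class Claim:
    statement: str
    source: Source          # no claim exists without its source

@dataclass
class Response:
    claims: list                      # the full range of retrieved opinion (Rule 2)
    conclusion: Optional[str] = None  # absent by default (Rule 3)

def retrieve(question: str, library: list) -> list:
    """Toy keyword match standing in for real retrieval:
    return every source in the chosen library that matches."""
    return [s for s in library if question.lower() in s.text.lower()]

def answer(question: str, library: list, want_conclusion: bool = False) -> Response:
    sources = retrieve(question, library)
    claims = [Claim(statement=s.text, source=s) for s in sources]
    resp = Response(claims=claims)
    if want_conclusion:
        # Only on explicit request, and only after the landscape is laid out.
        resp.conclusion = f"{len(claims)} positions found; weigh the sources."
    return resp
```

The point of the structure is that the default output is the terrain (a list of sourced claims), and a conclusion is something the user has to opt into.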
In short, this approach preserves the idea of Torah lishmah – Torah study as something more than a mechanism for answering practical questions.
Torah lishmah is worth protecting for its own sake, but it also happens to be a solution to two serious problems that come along with AI. The first is cognitive offloading: reasoning skills atrophy from disuse, the way navigation skills atrophied once Waze did the navigating for us. The second is a kind of purposelessness: human skills that used to confer meaning lose their standing once machines do them better.
Torah lishmah pushes back against both. Reasoning done for its own sake isn’t offloadable — its value is in the doing, not the output. And a purpose that doesn’t depend on a skill’s external usefulness can’t be eroded by machines doing the skill better. What the tradition has been doing all along, celebrating study for its own sake, happens to be what this era particularly needs.
Of course, articulating the rules is one thing; building a tool that follows them and is actually useful is another. The next chapter is a demo.


Two friends writing about the same challenge this week:
https://compoundalex.substack.com/p/running-out-of-interesting-problems?utm_source=share&utm_medium=android&r=7xc7s