February 2026
Why we don't use LangChain
Every few weeks, someone asks why we don't use LangChain. It's a fair question: LangChain is the most popular framework in the LLM space, and most AI consultancies default to it. Here's why we don't.
Abstractions have costs
LangChain wraps API calls in layers of abstraction. When things work, those abstractions save time. When things break — and in production AI systems, things break regularly — those same abstractions make debugging significantly harder. You're not troubleshooting your code; you're troubleshooting the framework's interpretation of your code.
When we build directly on Claude's API, every request and response is visible. We know exactly what we're sending, what we're getting back, and where things go wrong. That transparency matters enormously in production.
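As a minimal sketch of what "every request is visible" means in practice, the example below assembles the exact headers and JSON body for a direct call to Anthropic's Messages HTTP endpoint. The helper name `build_request` and the placeholder key are illustrative, not real library code; the endpoint, header names, and payload shape follow Anthropic's public HTTP API.

```python
import json

API_URL = "https://api.anthropic.com/v1/messages"

def build_request(prompt, model="claude-sonnet-4-5", max_tokens=1024):
    """Return the exact headers and JSON body that will go over the wire."""
    headers = {
        "x-api-key": "YOUR_API_KEY",        # placeholder, not a real key
        "anthropic-version": "2023-06-01",
        "content-type": "application/json",
    }
    body = {
        "model": model,
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    }
    return headers, body

# Because the request is assembled in plain sight, debugging starts with
# printing the payload, not with stepping through framework internals.
headers, body = build_request("Summarize this ticket: ...")
print(json.dumps(body, indent=2))
```

When something goes wrong in production, the first diagnostic step is to log this payload and the raw response next to it; there is no intermediate layer whose behavior also has to be reverse-engineered.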
The right level of abstraction
We do build our own abstractions — lightweight wrappers specific to each client's use case. The difference is that our abstractions are shaped by the problem, not by a framework's opinion about what all problems look like.
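To make "shaped by the problem" concrete, here is a hypothetical sketch of such a wrapper for one narrow task (support-ticket triage). `TriageClient` and all its names are invented for illustration; the transport is injected as a plain callable, so every payload in and response out stays visible, and the wrapper is testable with a stub instead of a live API.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class TriageClient:
    """A wrapper shaped by one task, not by a general-purpose framework."""
    send: Callable[[dict], dict]  # injected transport: payload in, response out
    model: str = "claude-sonnet-4-5"  # placeholder model name

    def classify(self, ticket_text: str) -> str:
        payload = {
            "model": self.model,
            "max_tokens": 16,
            "messages": [{
                "role": "user",
                "content": ("Classify this support ticket as 'bug', 'billing', "
                            f"or 'other'. Reply with one word.\n\n{ticket_text}"),
            }],
        }
        response = self.send(payload)  # the full exchange is visible right here
        return response["content"][0]["text"].strip().lower()

# With the transport injected, a stub stands in for the real API:
stub = lambda payload: {"content": [{"text": "billing"}]}
client = TriageClient(send=stub)
print(client.classify("I was charged twice this month."))  # prints billing
```

The whole abstraction fits on one screen, does exactly one job, and hides nothing about the wire format, which is the trade we prefer over a framework's generic chain objects.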
This isn't a principled stance against frameworks in general. It's a practical observation: for the kind of production AI systems we build, the cost of a generic framework consistently exceeds the benefit.