Why Human Laziness is AI's Greatest Asset
- Engineer Bryan Cantrill argues LLMs lack the 'virtue of laziness' required for efficient, crisp abstractions.
- Unchecked AI output threatens to create bloated, unoptimized systems by valuing volume over quality.
- Human constraints—like finite time—remain essential for building maintainable and elegant technical architectures.
The convenience of modern AI tools often blinds us to the subtle costs of automation. When we task an AI with generating code or complex documentation, the machine delivers a result instantly, seemingly untethered by the friction of effort. However, systems engineer Bryan Cantrill suggests that this lack of friction is precisely what makes AI output dangerous for long-term project viability. Without the natural human inclination toward laziness—the desire to save time by building efficient, reusable structures—AI systems tend to produce bloated, unoptimized outputs that accumulate in our codebase like digital landfill.
In computer science, we often talk about the power of an abstraction—a way to hide complex details behind a simple interface so we do not have to worry about the underlying mess. Humans naturally seek these abstractions because, frankly, we do not want to do the same task twice. We are inherently lazy; we want to write a piece of code once and reuse it a dozen times, or build a system so elegant that it requires minimal maintenance. Because LLMs have no concept of their own time, they lack this motivation to optimize.
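To make that 'lazy' instinct concrete, here is a minimal sketch in Python (the function name `parse_or_default` and the JSON scenario are invented for illustration): rather than rewriting the same parse-with-fallback logic at every call site, a lazy engineer hides it once behind a simple interface and reuses it.

```python
import json

# A 'lazy' abstraction: write the messy error handling once,
# expose a simple interface, and never think about it again.
def parse_or_default(text, default):
    """Parse a JSON string, returning `default` on any failure."""
    try:
        return json.loads(text)
    except (json.JSONDecodeError, TypeError):
        return default

# Reused a dozen times instead of duplicated a dozen times:
config = parse_or_default('{"retries": 3}', {})
broken = parse_or_default("not json", {})
print(config)  # {'retries': 3}
print(broken)  # {}
```

The point is not the helper itself but the motivation behind it: the author wrote it to avoid doing the same work twice, which is exactly the pressure an LLM never feels.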
When you prompt an AI to solve a problem, it does not care if the resulting code is five hundred lines long or five. It simply generates the most probable token sequence. Without a human curator forcing that output into a crisp, maintainable, and 'lazy' abstraction, we end up with a layer cake of garbage. This is not just a philosophical grievance; it is a practical warning about the debt we are accruing. If we prioritize speed over the discipline of design, we are building systems that become increasingly difficult to debug and improve.
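The five-hundred-lines-versus-five contrast can be sketched with a small, hypothetical example (both functions are invented for illustration, not drawn from any real AI output): the first is written in the sprawling style an uncurated generator can produce, the second is the crisp equivalent a human curator would insist on.

```python
# A hypothetical, bloated style an uncurated generator might emit:
def sum_even_squares_verbose(numbers):
    result = 0
    for i in range(len(numbers)):
        value = numbers[i]
        if value % 2 == 0:
            square = value * value
            result = result + square
    return result

# The 'lazy', curated equivalent, doing the same work in one line:
def sum_even_squares(numbers):
    return sum(n * n for n in numbers if n % 2 == 0)

print(sum_even_squares_verbose([1, 2, 3, 4]))  # 20
print(sum_even_squares([1, 2, 3, 4]))          # 20
```

Both versions are correct, which is precisely the trap: nothing forces the verbose one out of the codebase unless a human cares enough to demand the smaller, clearer form.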
For university students engaging with these tools, the lesson is clear: the AI should be a partner in your workflow, not the architect. Your education is about learning how to identify which parts of a problem require a brute-force approach and which require an elegant, refined solution. If you abdicate the responsibility of structural design to an LLM, you lose the opportunity to practice the very synthesis that makes a great engineer.
Ultimately, this is a call for intentionality. We must embrace our own human limitations—our finite attention spans, our exhaustion, and our desire to cut corners—as design constraints that actually improve the quality of our work. By remaining the 'lazy' gatekeeper of our own outputs, we ensure that the systems we build remain human-scale, understandable, and enduring. Do not let the machine dump its infinite, unthinking output into the foundation of your future work.