The stack beneath your principles

There’s a class of phrase that shows up constantly in engineering culture. Things like “design for failure,” “keep it simple,” “measure twice, cut once.” We call them principles, rules of thumb, best practices, mantras. We tattoo them on wikis and slip them into onboarding docs and repeat them in code review comments.

What we don’t usually do is ask where they come from, or why they work.

That question turns out to be more interesting than it looks, and thinking it through carefully gives you something useful: a cleaner framework for developing new principles when the existing ones don’t apply.

The answer involves three distinct concepts - mental models, axioms, and maxims - that most people use interchangeably but that actually sit in a specific relationship to each other. Get that relationship right, and the whole thing becomes a tool rather than just vocabulary.


The Three Concepts

Mental models are descriptive frameworks for how something works. They’re not rules; they’re lenses. “Compound interest” is a mental model - not a rule for what to do, but a way of understanding how small things accumulate exponentially over time. “Cascading failures” is a mental model for how distributed systems break. “Technical debt” is a mental model for the relationship between speed now and cost later.
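The compound interest lens can be made concrete with a few lines of arithmetic. This is a hedged sketch; the 1%-per-week rate is an illustrative number, not a claim from the text:

```python
# Compound growth: a small recurring effect accumulates exponentially.
# The rate and horizon below are illustrative, not prescriptive.

def compound(principal: float, rate: float, periods: int) -> float:
    """Value after `periods` steps of growth at `rate` per step."""
    return principal * (1 + rate) ** periods

# A 1% effect applied weekly for a year multiplies by roughly 1.68,
# not 1.52 -- the gap between linear and compound intuition.
yearly = compound(1.0, 0.01, 52)
```

The same arithmetic models accumulating cost as readily as accumulating gain, which is why the model underlies both "compound interest" and "technical debt" intuitions.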

Mental models explain mechanics. They answer “how does this actually work?”

Axioms are foundational truths extracted from those models. Once you deeply understand a mental model, certain things become unavoidably true within it - truths that you accept as starting points for reasoning rather than conclusions you’re still trying to prove. “Everything fails eventually” is an axiom that flows from understanding how distributed systems behave at scale. “Deferred problems accumulate cost over time” is an axiom from the technical debt model.

Axioms are descriptive, not prescriptive. They state what is, not what you should do.

Maxims are the action layer. They’re rules of conduct derived from accepting certain axioms as true. “Design for failure” is a maxim that makes sense because you’ve accepted that everything fails eventually. “Fix it now or pay more later” makes sense because you’ve accepted that deferred problems compound. “Always have a rollback plan” follows from knowing that deployments can go wrong and that the cost of being unable to revert is high.
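The "always have a rollback plan" maxim has a recognizable shape in code. A minimal sketch, where `deploy`, `health_check`, and `rollback` are hypothetical stand-ins rather than any real deployment API:

```python
# Sketch of "design for failure": the revert path exists before the
# change ships. All three callables are hypothetical placeholders.

def release(deploy, health_check, rollback) -> bool:
    """Apply a change, verify it, and revert if verification fails."""
    snapshot = deploy()          # capture whatever reverting requires
    if not health_check():
        rollback(snapshot)       # the maxim in action
        return False
    return True
```

The point is structural: the maxim is only actionable because the axiom ("deployments can go wrong") was accepted up front, which is why `rollback` is a required argument rather than an afterthought.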

Maxims are prescriptive. They tell you what to do.


The Stack

The relationship between them moves in one direction:

Mental model → Axiom → Maxim

The mental model gives you the explanatory framework - the mechanics of how something actually behaves. The axiom is the fundamental truth you extract from that understanding. The maxim is the actionable principle you derive from accepting that truth.

This hierarchy matters because it explains something that engineers often notice but can’t quite articulate: some maxims feel wise and some feel arbitrary, even when both claim equal authority. The difference is usually that the wise ones have a full stack underneath them, while the arbitrary ones are floating free.

“Never deploy on a Friday” might be a reasonable maxim - or it might be cargo cult. It depends on whether it’s connected to an axiom (something like “systems problems compound over weekends when you have reduced staffing”) that flows from a mental model (how incident response works under reduced availability). If the connection is there, the maxim is wise. If it’s just repeated because someone once had a bad Friday, it’s superstition wearing the clothes of principle.


Why This Matters for Systems Work

A lot of engineering decision-making involves situations where the existing maxims don’t quite fit. You’re in a context that’s novel enough, or constrained enough, that the rules of thumb don’t clearly apply. At that point, you have two options: pick the closest-sounding rule and hope, or reason from the underlying model.

The second option requires having the model.

Consider a team debating whether to add redundancy to a critical service. “Design for failure” says yes, obviously. But the real question is how much redundancy, at what cost, and at what complexity tradeoff. The maxim can’t tell you that. The mental model - understanding how your actual failure modes behave, what the blast radius of each looks like, how your team responds to incidents - lets you reason toward an answer that’s calibrated to your specific situation rather than borrowed from a generic principle.
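Model-level reasoning about redundancy can even be made quantitative. Under the simplifying (and frequently false) assumption that replicas fail independently with probability p, availability with n replicas is 1 − pⁿ, which makes the diminishing returns explicit:

```python
# Availability under independent failures -- a textbook simplification,
# not a claim about any particular system.

def availability(p_fail: float, replicas: int) -> float:
    """Probability at least one replica is up, assuming independent failures."""
    return 1 - p_fail ** replicas

# With each replica down 1% of the time:
#   1 replica  -> 0.99
#   2 replicas -> 0.9999
#   3 replicas -> 0.999999
# Each extra replica adds two nines, but the marginal gain shrinks while
# cost and operational complexity grow. Correlated failures (shared
# network, bad deploys) violate the independence assumption entirely,
# which is exactly the kind of thing only the mental model can tell you.
```

The maxim says "add redundancy"; the model tells you where the curve flattens and which assumptions your real system breaks.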

Or consider the question of when to pay down technical debt. “Fix it now” and “ship it” are both legitimate maxims in different contexts. Choosing between them requires understanding the actual mechanics of how debt accumulates in your codebase and your team, which then lets you make axiom-level statements about your specific situation (“in this service, with this team size, debt in the authentication layer has disproportionate downstream cost”) that inform a local maxim that actually fits.

The mental models are what make you adaptable. The axioms are what give your reasoning integrity. The maxims are what make you fast.


Building Your Own

Most practitioners collect maxims over time - rules that have worked, principles they’ve picked up from experience or reading. That’s useful. But the richer practice is to run them backward occasionally.

When you have a maxim that seems to hold up: what axiom is it based on? What mental model produces that axiom? If you can complete the stack, you understand why the rule works, which tells you a lot about when it applies and when it doesn’t.

When you encounter a situation the existing maxims don’t cover: what mental model best describes the dynamics at play? What truths does that model surface that you’d accept as foundational? What behavior follows from accepting those truths?

That’s how you derive new maxims that are connected to understanding rather than just convention.

The goal isn’t to have more principles. It’s to have principles that are actually rooted in how things work - so that when the situation is new, you’re reasoning rather than guessing.

Understanding → truth → action. In that order.