We are currently living through a Great Blurring. As generative AI becomes the primary interface through which we filter the American landscape - both digital and physical - the line between what is true and what is simply likely has begun to dissolve.
To navigate this, we have to return to a few foundational distinctions that feel increasingly like a surveyor’s brass stakes in shifting sand.
## The Probability Engine vs. The Correspondence of Mind
At its core, an AI does not “know” anything. Knowledge is, by definition, a correspondence between the mind and reality. When we say we know something, we are asserting that our mental model matches the actual state of the world. AI, however, operates on a different axis. It is a probability engine. It doesn’t look at the “ground” of reality; it looks at the “map” of human text. If we look at the classical analogy:
certainty : knowledge :: probability : opinion
AI resides entirely on the right side of that equation. It provides us with what might be called “right opinions” - outputs that are highly probable based on a consensus of data. But because consensus only exists over matters of opinion, and never over matters of knowledge, the AI is effectively “counting noses” across its training set to decide what to tell you.
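The “counting noses” point can be made concrete with a toy sketch. This is not how a real language model works internally; the corpus and the claims in it are invented purely for illustration:

```python
from collections import Counter

# A hypothetical "training corpus" in which a popular myth happens to be
# the most common completion of a prompt.
corpus_completions = [
    "the Great Wall of China is visible from space",      # popular myth
    "the Great Wall of China is visible from space",
    "the Great Wall of China is visible from space",
    "the Great Wall of China is not visible from the Moon",  # accurate
]

def most_probable(completions):
    """Return the completion with the highest relative frequency.

    This is 'counting noses': the output tracks consensus in the data,
    not correspondence with the world.
    """
    counts = Counter(completions)
    best, n = counts.most_common(1)[0]
    return best, n / len(completions)

answer, prob = most_probable(corpus_completions)
print(answer)  # the myth wins
print(prob)    # 0.75
```

The point of the sketch: nothing in `most_probable` consults the world. If a falsehood dominates the map of text, it dominates the output, at high confidence.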
## The “Royce Liar” Without Intent
In his philosophical work, Josiah Royce defined a liar as “a man who willfully misplaces his ontological predicates”. He meant that a liar takes what is not and says it is.
When an AI “hallucinates” a fact, it is doing exactly this. It misplaces its ontological predicates - it asserts the existence of things that are not. The nuance here is that the AI isn’t “willful.” It doesn’t have a mind to correspond with reality, so it cannot technically lie in the human sense. However, the result for the end-user is the same: a lack of correspondence between the words on the screen and the reality of the world.
## Falsifiability: Our Human Shield
If we cannot prove an AI wrong, we cannot prove it right. This is the core of Karl Popper’s criterion of falsifiability. A theory or an AI-generated claim is only useful if it risks being wrong - if there are stated conditions under which it could be shown false.
In a world where AI can generate endless amounts of consistent-sounding text, we must remember that consistency alone is not a sufficient sign of truth. A story can be perfectly consistent and entirely fictional. Truth requires more: it requires that we take the AI’s output and subject it to “truths of perception” - the facts we know by direct observation and measurement.
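Popper’s criterion can be sketched in a few lines. Everything here is a toy: the claim structure, the `falsifier` field, and the example claims are invented for illustration, not a real verification system:

```python
# A claim is represented as text plus an optional 'falsifier': a test
# that some conceivable observation could fail. Claims without one are
# consistent stories that risk nothing.

def is_falsifiable(claim):
    """A claim counts as falsifiable only if it states conditions
    under which it could be proven false."""
    return claim.get("falsifier") is not None

def survives(claim, observation):
    """Check a claim against a 'truth of perception' (an observation).
    Passing never proves the claim; failing refutes it."""
    return not claim["falsifier"](observation)

swan_claim = {
    "text": "All swans are white",
    "falsifier": lambda obs: obs == "black swan",  # one black swan refutes it
}
consistent_story = {
    "text": "A perfectly consistent fiction",
    "falsifier": None,  # nothing could ever count against it
}

print(is_falsifiable(swan_claim))          # True
print(is_falsifiable(consistent_story))    # False: consistency alone, no risk
print(survives(swan_claim, "white swan"))  # True (corroborated, not proven)
print(survives(swan_claim, "black swan"))  # False (refuted by observation)
```

The asymmetry in `survives` is the whole point: no number of white swans settles the matter, while a single black one does.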
## Truth as a Public Matter
Finally, we must remember that truth belongs to no one. It is not a private preference or a subjective “feeling”. Truth is objective, immutable, and essentially a public matter.
As we rely more on these tools, the burden of inquiry shifts back to us. We cannot “vote” on the truth or rely on an algorithmic consensus. To find the truth, we must remain like the surveyor in the field: patient, methodical, and always checking our “speech” against the actual ground beneath our feet.
My next post focuses on the friction within our public squares - how we mistake the mechanics of democracy (consensus) for the mechanics of reality, and why the first step toward truth is often a step backward into “ignorance.”