AI Fakes Reality: The Erosion of Causal Knowledge
- AI models decouple knowledge from causality, creating 'borrowed certainty' without real-world evidence.
- Hallucinations and sycophancy generate convincing yet false outputs that bypass human cognitive dissonance.
- The emerging information landscape threatens to blur the distinction between verified truth and fabrication.
In physics, causality dictates that every effect must have a preceding cause. However, a new wave of information technology is quietly severing this link. When large language models (LLMs) generate text, they often produce outputs that feel authoritative yet lack any grounding in verifiable reality. We are witnessing the emergence of a non-causal information landscape.
Consider how LLMs operate: they do not perform 'causal work' to establish truth. They predict the most statistically likely continuation of a prompt. When these models hallucinate, they produce answers that read exactly like factual ones. Sycophancy compounds the problem: the model echoes a user's stated beliefs back as agreement, creating a feedback loop that feels like confirmation even when it is hollow.
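The mechanics are easy to demonstrate. Below is a minimal, hypothetical sketch in Python: a toy bigram table stands in for a trained LLM (real models use neural networks conditioned on the full context, but the decoding loop is analogous), and every name and probability in it is invented purely for illustration.

```python
import random

# Toy next-token table standing in for a trained LLM. The entries are
# hypothetical co-occurrence statistics, not verified facts: the table
# "knows" probabilities, not the world.
NEXT_TOKEN_PROBS = {
    "The capital of": {"France": 0.6, "Atlantis": 0.4},
    "France": {"is": 1.0},
    "Atlantis": {"is": 1.0},
    "is": {"Paris.": 0.7, "Poseidonis.": 0.3},
}

def generate(prompt: str, sample: bool = False, steps: int = 3) -> str:
    """Continue the prompt token by token. No step in this loop checks
    whether the continuation corresponds to reality; it only consults
    the learned probabilities."""
    tokens, context = [prompt], prompt
    for _ in range(steps):
        dist = NEXT_TOKEN_PROBS.get(context)
        if dist is None:
            break
        if sample:
            # Stochastic decoding: a fluent fiction like
            # "The capital of Atlantis is Poseidonis." can be emitted
            # with the same polish as a true sentence.
            context = random.choices(list(dist), weights=list(dist.values()))[0]
        else:
            # Greedy decoding: always take the statistically likeliest token.
            context = max(dist, key=dist.get)
        tokens.append(context)
    return " ".join(tokens)

print(generate("The capital of"))               # likeliest continuation
print(generate("The capital of", sample=True))  # may be confident fiction
```

The sketch makes the core point concrete: fluency and truth are produced by entirely different processes, and only the first one is inside the generation loop.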
The analogy to relativistic physics is striking: just as no observer should witness an effect before its cause, we should not encounter conclusions before any evidence exists, yet AI lets us do exactly that. The geometry of information observation has shifted. This phenomenon, known as 'borrowed certainty,' is dangerous because it bypasses our internal alarm systems: unlike physical contradictions, which create dissonance, AI forgeries are fluent and comforting. As we lean on these tools, we risk losing the ability to distinguish knowledge rooted in experience from conclusions manufactured without evidence. This is an epistemological crisis that threatens how we construct reality.