Augmented Reality adds layers. It tags, labels, and points.
Diminished Reality subtracts.
In the research literature, it’s described as a family of techniques that conceal, eliminate, or “see through” real objects in a user’s view. The wording matters. Your view is being edited. Someone decided what you don’t need to see.
See: https://link.springer.com/article/10.1186/s41074-017-0028-1
What “remove” means in practice
Most work in this field collapses into three distinct verbs:
- Conceal: Reduce prominence. Quiet the thing down. The object is still there, but it stops pulling your eyes.
- Eliminate: Cut an object out and fill the gap with a plausible background. This is technically hard; the system must guess what would’ve been behind it.
- See-through: Create the experience of looking past an obstacle. This is reconstruction, often a camera view from somewhere else.
These verbs don’t carry the same weight. Concealing a billboard is a convenience. Eliminating a person is a decision.
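The difference between the first two verbs is easy to make concrete. A minimal NumPy sketch, assuming a per-pixel mask of the object: conceal dims and desaturates, eliminate swaps in a guessed background. (Real systems synthesize that guess with learned inpainting; a stored background plate stands in for it here.)

```python
import numpy as np

def conceal(frame, mask, strength=0.7):
    """Reduce a masked region's prominence: pull it toward flat grey.
    frame: float array (H, W, 3) in [0, 1]; mask: bool array (H, W)."""
    grey = frame.mean(axis=2, keepdims=True)           # desaturate
    quiet = strength * grey + (1 - strength) * frame   # flatten contrast
    return np.where(mask[..., None], quiet, frame)

def eliminate(frame, mask, background):
    """Cut the masked object out and paste in a guessed background.
    The guess -- here a stored background plate -- does the real work."""
    return np.where(mask[..., None], background, frame)
```

Note what `conceal` preserves: the object is still present in the output, just quieter. `eliminate` destroys it entirely; the viewer has no way to recover what was there.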
The Subtraction Map
From the operating theatre to the morning commute, subtraction serves a single master: Attention.
- Surgery: Reducing occlusion so the working area stays legible. Researchers have looked at ways of visualizing registration uncertainty (showing where an overlay stops being trustworthy) so the surgeon can feel the boundary between what’s sensed and what’s inferred.
See: https://pmc.ncbi.nlm.nih.gov/articles/PMC12239872/
- Audio: Noise cancellation and voice isolation are already familiar forms of subtraction. We call it silence.
- The Sky: Subtracting light pollution or glare is a choice: strip away the orange haze so Orion can land on the retina. Form is prioritized over the raw, muddy reality of atmosphere and streetlight.
- Driving: A windscreen that suppresses attention-bait. The hooks of a busy high street quiet down. The road stays readable.
The Car Park
Imagine scanning rows of cars. You’re holding a memory of colour and shape against the physical grid in front of you. Then the system dims every car that isn’t yours. Each takes a step back into the grey.
Your car “stands up” because everything else retreated. This clears the room so you can decide faster. But a habit forms: after a few times, you stop scanning. You expect the edit. The mediation becomes part of how you see.
The Physicality of the Guess
In high-stakes environments, the “edit” is often a matter of hardware relocation rather than magic. Military systems like IVAS (Integrated Visual Augmentation System) are often described as providing “through-wall” vision. In practice, soldiers inside an armoured vehicle see “outside” it by using vehicle cameras mapped to their headsets. Instead of looking at a wall, they see the feed.
It’s relocation.
See: https://www.army.mil/article/268702/army_accepts_prototypes_of_the_most_advanced_version_of_ivas
The deeper issue is that the system is always dealing with uncertainty, but the interface struggles to show it. A highlight is a claim of certainty. Salience has weight. Your brain treats it as signal, even when the system is only trying to whisper.
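One way an interface could stop over-claiming is to bind a highlight’s salience to the system’s confidence. A toy sketch; the confidence score, floor, and ceiling are illustrative assumptions, not drawn from any cited system:

```python
def overlay_alpha(confidence, floor=0.15, ceiling=0.85):
    """Map model confidence in [0, 1] to highlight opacity.
    Low-confidence overlays stay faint -- a whisper, not a claim.
    Even full certainty is capped below 1.0 so the overlay never
    completely replaces the raw view underneath it."""
    c = max(0.0, min(1.0, confidence))          # clamp defensively
    return floor + (ceiling - floor) * c
```

The ceiling is the interesting design choice: it encodes, in one number, the admission that the system is never entitled to total salience.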
Cognitive Security: The Need for Friction
DARPA’s Intrinsic Cognitive Security program suggests that this closeness to perception is a vulnerability that can be exploited or simply over-trusted.
See: https://www.darpa.mil/research/programs/intrinsic-cognitive-security
The proposed solution involves “Cognitive Forcing Functions” or “friction.” In high-stakes moments, the interface should slow the hand, forcing one deliberate act before the system’s suggestion becomes reality.
See: https://dl.acm.org/doi/10.1145/3449287
See: https://www.darpa.mil/research/programs/friction-for-accountability-in-conversational-transactions
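A forcing function of this kind can be sketched in a few lines: the system proposes an edit, but above a stakes threshold it refuses to commit without a separate deliberate act. Everything here (the threshold, the confirm callback) is a hypothetical shape, not an API from the programs cited above:

```python
def apply_edit(edit, stakes, confirm, threshold=0.5):
    """Gate an edit behind deliberate confirmation when stakes are high.
    `edit` performs the change and returns its result; `confirm` is a
    callable standing in for the deliberate act (a held button, a spoken
    phrase) and returns True only if the user completed it."""
    if stakes >= threshold and not confirm():
        return None   # friction: the suggestion does not become reality
    return edit()
```

The point is where the default sits: low-stakes edits flow through, but a high-stakes edit that is merely *not objected to* still does not happen.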
The Continuum: Proportion and Collapse
Diminished Reality is a tool for proportion.
Used well, it lowers Activation Load by removing signals that don’t help you act. It raises Interpretive Capacity by making the working surface legible. When it’s Tempered, the system shows its seams. You still feel the boundary between what’s sensed, what’s inferred, and what’s missing. The car park edit is reversible; you can still see the dimmed cars if you choose to look.
Used badly, it creates a Brittle Calm. The view looks Composed because the edit hid the complexity. You feel clear, but you are merely interpreting a filter. If the system misjudged what to remove, you won’t notice until the consequence arrives. This is Dynamic action tipping into Volatility: you’re moving fast, but your comprehension has hollowed out. A map that makes you stop checking the terrain is a trap.
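The reversibility that separates the two cases can be made literal in software: keep the edit as a view over the raw scene, never a mutation of it. A minimal sketch (the class and its names are illustrative):

```python
class EditedView:
    """Hold the raw scene; apply edits as views, never as mutations.
    The raw view stays one call away, so the edit remains reversible
    and the seams stay visible to anyone who asks."""

    def __init__(self, raw):
        self.raw = raw       # untouched source of truth
        self._edit = None    # current edit function, if any

    def apply(self, edit_fn):
        """Install an edit: a function from raw frame to filtered frame."""
        self._edit = edit_fn

    def view(self, filtered=True):
        """Return the filtered view by default, the raw view on request."""
        if filtered and self._edit is not None:
            return self._edit(self.raw)
        return self.raw
```

A system that instead overwrites `raw` in place has made the Brittle Calm architectural: there is no longer anything to look at behind the filter.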
The Right to a Raw View
The stakes sit in the moment you stop noticing.
The windscreen that suppresses billboards eventually just feels like “better driving.” The audio filter feels like “better hearing.” The edit becomes the baseline. You stop holding two versions (the raw and the filtered) and simply inhabit the filtered one. It feels like seeing.
You’re standing in the car park. Every car but yours has been dimmed. You walk straight to it, efficient and fast. On the way, you pass someone searching the “old way.” Scanning. Checking. Holding memory against the view. They look slow.
But they are the only ones actually seeing the car park. You didn’t think about what you didn’t see. You just got there faster.
