How to use this glossary: These terms form the conceptual vocabulary of Cognitive Experience Design (CXD) — a discipline I originated in 2014 at IBM Watson. They appear across my writing, research, and product work at Vibes AI. Where a term has a specific scientific citation, that source is noted. Where a term represents an original framework, its origin is marked. This glossary is updated as the field evolves.
The practice of using artificial intelligence technologies to reduce the human mental effort and time required to complete a task, while preserving and strengthening human agency. CXD employs knowledge of human perception, mental processing, modeling, and memory to design systems that function as legitimate partners in human cognition — not opaque replacements for it.
CXD unites four disparate design practices: the artful intention and intuition of Human-Centered Design and User-Centered Design, and the objective, scientific approach of Cognitive Ergonomics and Neuroergonomics. It moves practice from the management fad of design thinking to the measurable rigor of designing for thinking and doing.
CXD asks not "Is this usable?" but: "Does this support how people actually think — and does it leave them more or less capable of thinking for themselves?"
An operational design constraint in Cognitive Experience Design: AI systems should help users understand their options, not overwhelm them with alerts, recommendations, or competing offers. Cognitive clarity is the antidote to the information overload that characterizes the attention economy.
Practically: every interface element should be assessed for its cognitive cost — the mental effort required to process it — and that cost should be justified by proportional value to the user.
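As a minimal sketch of what such an assessment could look like in practice (the InterfaceElement fields, the 0-10 scale, and the audit helper are illustrative assumptions, not an established CXD instrument):

```python
from dataclasses import dataclass

@dataclass
class InterfaceElement:
    name: str
    cognitive_cost: float  # estimated mental effort to process (0-10 rating)
    user_value: float      # value the element delivers to the user (0-10 rating)

def audit(elements: list[InterfaceElement]) -> list[InterfaceElement]:
    """Flag elements whose cognitive cost is not justified by proportional value."""
    return [e for e in elements if e.cognitive_cost > e.user_value]

flagged = audit([
    InterfaceElement("promo_banner", cognitive_cost=6.0, user_value=1.0),
    InterfaceElement("search_box", cognitive_cost=2.0, user_value=9.0),
])
print([e.name for e in flagged])  # ['promo_banner']
```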
In Indi Young's Mental Model Skyline methodology, a cognitive gap is a space in the skyline visualization where user cognitive towers exist — representing real human thinking, goals, and emotional processes — but no organizational capability, product feature, or AI behavior supports them. Cognitive gaps represent unmet needs that are structurally invisible to any product roadmap built on solution-space research.
Identifying cognitive gaps is both the strategic and ethical output of mental model research: they reveal the minds a system is not yet serving.
The amount of mental effort required to use a system or process information. Working memory is a finite resource; cognitive load theory (Sweller, 1988) implies that well-designed systems must stay within the brain's natural processing limits. When cognitive load exceeds capacity, comprehension fails, errors increase, and users disengage.
In CXD, reducing cognitive load is not an end in itself — it must be balanced against the imperative to preserve deliberative effort for meaningful tasks. Eliminating all cognitive friction risks the Hollowed Mind.
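A minimal sketch of that balance, assuming Sweller's conventional split into intrinsic, extraneous, and germane load and a normalized working-memory budget; the constants are illustrative, not empirical:

```python
WORKING_MEMORY_BUDGET = 1.0  # normalized capacity; illustrative, not empirical

def assess(intrinsic: float, extraneous: float, germane: float) -> str:
    """Sweller-style decomposition: total load must fit the budget, but the
    CXD goal is to cut extraneous load, not germane (deliberative) effort."""
    total = intrinsic + extraneous + germane
    if total > WORKING_MEMORY_BUDGET:
        return "over capacity: reduce extraneous load first"
    if germane == 0:
        return "frictionless but hollow: no deliberative effort preserved"
    return "within budget"

print(assess(intrinsic=0.4, extraneous=0.5, germane=0.3))  # over capacity
```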
The user's capacity to think independently and critically alongside AI — not merely through it. Cognitive sovereignty is the design goal that must replace engagement optimization as the primary metric of AI-driven systems.
A system that preserves cognitive sovereignty amplifies a user's reasoning capacity. A system that erodes it (by pre-deciding, over-automating, or bypassing deliberation) produces learned helplessness at cognitive scale. EEG research indicates that intensive LLM interaction reduces frontal theta power (a marker of working memory load), suggesting that poorly designed AI actively degrades the neural architecture of independent reasoning.
The hidden layer of archetypal patterns, cultural narratives, and implicit hierarchies embedded in AI training corpora. AI systems do not merely absorb facts — they absorb the psychological residue of their training data's origins: who was portrayed as protagonist, whose voice carries authority, which moral arcs were rewarded.
These patterns crystallize into latent cognitive structures within the model. They are not post-training artifacts; they are pre-training inheritances. Because they are invisible to engineers, they are nearly impossible to correct downstream — making the DataSoul Imprint a structural alignment problem, not a fine-tuning issue.
Introduced by philosophers Andy Clark and David Chalmers, the Extended Mind Thesis holds that under the right conditions, tools and environments can become functional parts of the thinking process itself: not mere aids to cognition, but extensions of it. On this view, the logic of a system is not a neutral conduit; it is a structure that shapes memory, attention, and choice.
In the context of AI, this thesis carries profound design consequences: when a language model drafts your thoughts, when a recommendation engine shapes what you read, when a predictive system pre-fills your choices, these systems are participants in cognition — not convenient interfaces. Designing AI as if it is merely a display, rather than a cognitive participant, produces systems that reshape minds without accountability.
A state in which users bypass the deliberative cognitive effort required to build and maintain resilient internal reasoning architectures — produced by AI systems designed to provide frictionless answers rather than to support independent thinking. Named in research examining the neurological consequences of intensive LLM interaction.
EEG studies indicate that sustained AI-assisted cognition reduces frontal theta power — a measurable marker of working memory engagement — suggesting that the Hollowed Mind is not merely a metaphor but an observable neurological phenomenon.
The Hollowed Mind is the failure mode that Cognitive Sovereignty exists to prevent.
Popularized by the World Health Organization to describe the condition in which the volume of information supply exceeds the human capacity to process and evaluate it. The infodemic is the environmental condition that makes Cognitive Experience Design necessary at civilizational scale.
In 2020 alone, a single day brought roughly 3.5 billion searches, 294 billion emails, 230 million tweets, and more than 75 petabytes of user-generated social media data. The infodemic is not a temporary crisis. It is the permanent operating condition of the connected world.
An individual's internal representation of how a system, process, or situation works — not a factual map, but a belief-based simulation. Mental models are derived from observation, perception, experience, and culture. They are the cognitive scripts people run when they encounter a situation, determining what actions they take, what errors they make, and how they interpret outcomes.
A critical and often overlooked property: mental models are contextual and dynamic, not fixed. The same person will operate with a different mental model of the same system depending on their emotional state, situational context, and accumulated experience. Any design tool that flattens this dynamism produces a misrepresentation.
Donald Norman's framework (1983) identifies four interdependent components: the target system t, the designer's conceptual model C(t), the user's mental model M(t), and the researcher's conceptualization of that mental model, C(M(t)). The most consequential design failures occur in the gap between C(t) and M(t).
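In notation (reading Norman's fourth component as the researcher's model of the user's model, the standard 1983 formulation):

```latex
\begin{aligned}
t       &:\ \text{the target system} \\
C(t)    &:\ \text{the designer's conceptual model of } t \\
M(t)    &:\ \text{the user's mental model of } t \\
C(M(t)) &:\ \text{the researcher's conceptualization of } M(t)
\end{aligned}
\qquad \text{design goal: } M(t) \approx C(t)
```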
A behavioral signature derived from rigorous mental model research — capturing the distinct cognitive approach that a cluster of humans takes toward a shared goal. Unlike personas, mental model archetypes are:
• Grounded in observed interior cognition, not researcher projection
• Defined by how people think, not who they are demographically
• Explicitly contextual — one person may embody multiple archetypes across different situations
• Documented with language patterns, emotional triggers, guiding values, and cognitive gaps
In the context of AGI alignment, mental model archetypes function as distinct alignment targets — enabling training pipelines to represent diverse cognitive approaches rather than optimizing for a statistical average. MaxMin-RLHF research reports a 33% improvement in performance for cognitive minorities when alignment represents diverse preference distributions.
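A sketch of what "archetypes as alignment targets" could mean operationally: one preference dataset per archetype rather than a single aggregate pool. The field names and archetype labels below are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class MentalModelArchetype:
    """A behavioral signature from mental model research (illustrative fields)."""
    name: str
    language_patterns: list[str]   # characteristic phrasings from listening sessions
    emotional_triggers: list[str]
    guiding_values: list[str]
    cognitive_gaps: list[str]      # unmet needs surfaced by the skyline
    # (chosen, rejected) response pairs collected from members of this archetype,
    # feeding a per-archetype reward model instead of one aggregate model:
    preference_pairs: list[tuple[str, str]] = field(default_factory=list)

archetypes = [
    MentalModelArchetype("methodical_verifier", ["let me double-check"],
                         ["ambiguity"], ["accuracy"], []),
    MentalModelArchetype("exploratory_synthesizer", ["what if we"],
                         ["dead ends"], ["novelty"], []),
]
```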
Indi Young's visualization of cognitive patterns across a problem space, arranged like a city skyline. The diagram is organized as a three-level hierarchy:
Boxes — summaries of interior thinking, emotional reactions, or guiding principles, derived directly from qualitative listening sessions.
Towers — collections of related boxes organized around a shared cognitive purpose.
Skyline — the complete panoramic visualization of the cognitive landscape across the problem space.
The skyline is mapped against an organization's existing capabilities. Spaces where towers exist but no features support them are cognitive gaps — the most strategically important output of mental model research. The skyline is not a journey map; it tracks cognitive space, not behavioral time.
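A minimal sketch of the gap-finding step, assuming towers and capability mappings keyed by tower name; the data shapes and example strings are illustrative, not Indi Young's notation:

```python
def cognitive_gaps(towers: dict[str, list[str]],
                   capabilities: dict[str, list[str]]) -> list[str]:
    """Towers (groups of summary boxes) with no supporting capability
    mapped to them are the cognitive gaps."""
    return [name for name in towers if not capabilities.get(name)]

towers = {
    "reassure myself the plan is safe": ["box: imagine worst case", "box: ask a friend"],
    "keep momentum when progress stalls": ["box: switch tasks deliberately"],
}
capabilities = {"reassure myself the plan is safe": ["risk summary feature"]}
print(cognitive_gaps(towers, capabilities))  # ['keep momentum when progress stalls']
```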
The domain of research concerned with how people pursue goals, intentions, and purposes — independently of any specific product or technology. Problem-space research asks: "What are people trying to accomplish, and how do they think about accomplishing it?"
Contrasted with the solution space: the domain of research concerned with how users interact with a specific product, feature, or interface. Most traditional UX research — usability testing, A/B testing, funnel analysis, journey mapping — operates in the solution space. Mental model research, listening sessions, and cognitive archetype development operate in the problem space.
The failure to distinguish these domains is one of the three primary errors of design thinking in the AI era. Solution-space research produces data that is inherently constrained by the product being studied. Problem-space research produces data about human cognition that remains valid regardless of what products exist.
The dominant technique underlying current AI alignment: human evaluators rank model responses, a reward model is trained to approximate those preferences, and the AI is fine-tuned via reinforcement learning to maximize the reward signal. RLHF underlies ChatGPT, Claude, Gemini, and most deployed large language models.
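A minimal sketch of the middle step, the reward model, using the standard Bradley-Terry pairwise objective over (chosen, rejected) response embeddings; a generic PyTorch illustration, not any lab's actual training code:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RewardModel(nn.Module):
    """Maps a response embedding to a scalar reward."""
    def __init__(self, dim: int):
        super().__init__()
        self.head = nn.Linear(dim, 1)

    def forward(self, emb: torch.Tensor) -> torch.Tensor:
        return self.head(emb).squeeze(-1)

def preference_loss(rm: RewardModel,
                    chosen: torch.Tensor,
                    rejected: torch.Tensor) -> torch.Tensor:
    # Bradley-Terry: maximize P(chosen beats rejected) = sigmoid(r_c - r_r)
    return -F.logsigmoid(rm(chosen) - rm(rejected)).mean()

rm = RewardModel(dim=768)
loss = preference_loss(rm, torch.randn(4, 768), torch.randn(4, 768))
loss.backward()  # the fine-tuning stage then maximizes this learned reward via RL
```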
The structural limitation identified in peer-reviewed research: a single reward model derived from aggregate preference data mathematically cannot represent the full distribution of human cognitive approaches. RLHF as typically implemented produces alignment to a statistical center — systematically underserving cognitive minorities and non-dominant thinking styles.
MaxMin-RLHF (2024) proposes a mixture-of-preferences approach and reports a 16% improvement in overall win rates and a 33% improvement for minority cognitive groups, demonstrating that cognitive diversity in alignment is not a performance concession but an improvement.
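As I read the MaxMin-RLHF formulation, the objective replaces the single aggregate reward with a max-min over groups, where \(\mathcal{G}\) is the set of preference groups and \(r_g\) the reward model learned for group \(g\):

```latex
\pi^{*} \;=\; \arg\max_{\pi}\; \min_{g \in \mathcal{G}}\;
\mathbb{E}_{x \sim \mathcal{D},\; y \sim \pi(\cdot \mid x)}\bigl[\, r_g(x, y) \,\bigr]
```

The worst-served group, not the population average, sets the score being optimized.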
A meta-level pattern emerging from mental model research — a distinct cognitive approach that different participants take toward the same goal. Thinking styles emerge from the patterns of which mental model towers matter most to whom, and in what sequence or combination.
Thinking styles are not demographic profiles. They are behavioral signatures that cut across age, gender, culture, and socioeconomic status. Critically, they are contextual: the same person may adopt different thinking styles in different emotional or situational contexts. No one thinking style is "right" — cognitive diversity is a feature of human intelligence, not a problem to resolve.
Designing for all thinking styles does not mean building more features. It means building systems that respond to cognitive diversity — AI that adjusts to users' thinking styles, not the other way around.
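One way to make that adjustment concrete is to render the same underlying answer differently per style; the style labels and renderers below are hypothetical placeholders, not a validated taxonomy:

```python
# Hypothetical adaptation layer: same content, style-dependent presentation.
RENDERERS = {
    "step_by_step": lambda steps: "\n".join(f"{i}. {s}" for i, s in enumerate(steps, 1)),
    "big_picture": lambda steps: f"{steps[0]} (ask for details on any step)",
}

def render(answer_steps: list[str], style: str) -> str:
    return RENDERERS.get(style, RENDERERS["step_by_step"])(answer_steps)

print(render(["Back up the data", "Run the migration", "Verify checksums"],
             "big_picture"))
```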
A framework for understanding the systemic cognitive crisis produced by the attention economy, comprising three interrelated conditions:
1. Attentional Collapse — the progressive degradation of sustained, directed attention as platforms engineer for maximum interrupt frequency.
2. Cognitive Fatigue — the chronic depletion of working memory and executive function from continuous multitasking and decision overload.
3. Digital Overwhelm — the infodemic condition in which information supply structurally exceeds human processing capacity.
The Triple Brain Epidemic is the design context that makes Cognitive Experience Design not merely a methodology but a moral imperative — and the human problem that Vibes AI's biowearable and AI-driven wellness platform is designed to address.