Methodology

SMASHIN SCOPE Encoding

We developed a systematic framework for creating memorable mental images. SMASHIN SCOPE is an acronym encoding 12 memorability factors:

SMASHIN SCOPE Encoding Factors

Letter | Factor     | Description
-------+------------+-------------------------------
S      | Substitute | Replace abstract with concrete
M      | Movement   | Add animation and action
A      | Absurd     | Make impossible or exaggerated
S      | Sensory    | Engage all 5 senses
H      | Humor      | Include funny elements
I      | Interact   | User participates in scene
N      | Numbers    | Encode quantities with shapes
S      | Symbols    | Use visual puns
C      | Color      | Add vivid, unusual colors
O      | Oversize   | Dramatic scale changes
P      | Position   | Precise spatial placement
E      | Emotion    | Evoke strong feelings
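
To make the factor count concrete, here is a minimal Python sketch (the list and function names are illustrative, not part of the described system) that records which factors a memory image uses and computes the normalized coverage used later as smashin(m) in the confidence score.

    # Canonical factor list; a memory image is tagged with the factors it uses.
    SMASHIN_SCOPE = [
        "substitute", "movement", "absurd", "sensory", "humor", "interact",
        "numbers", "symbols", "color", "oversize", "position", "emotion",
    ]

    def smashin_coverage(factors_used: set[str]) -> float:
        """Fraction of the 12 factors present in a memory image (0 to 1)."""
        return len(set(factors_used) & set(SMASHIN_SCOPE)) / len(SMASHIN_SCOPE)

    # An image that substitutes a concrete object, oversizes it, and evokes emotion:
    print(smashin_coverage({"substitute", "oversize", "emotion"}))  # 0.25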

Multi-Channel Redundancy

Each memory is encoded through multiple channels, providing resilience to partial information loss:

Concept: 2PC
    ├── Visual: Stone statues
    ├── Sensory: Cold granite
    ├── Emotional: Frozen forever
    ├── Contrast: Saga divorce
    └── Scale: 47 couples
            │
            ▼
        [Recall]
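
As a rough illustration of the redundancy idea, the sketch below (identifiers are hypothetical) stores one concept under several channels and recalls it from whatever subset survives partial loss.

    # One concept, five redundant encoding channels.
    memory_2pc = {
        "concept": "2PC",
        "channels": {
            "visual": "stone statues",
            "sensory": "cold granite",
            "emotional": "frozen forever",
            "contrast": "Saga divorce",
            "scale": "47 couples",
        },
    }

    def recall(memory: dict, surviving: set[str]) -> dict:
        """Return whichever channels survived partial information loss."""
        return {k: v for k, v in memory["channels"].items() if k in surviving}

    # Even with only two of the five channels intact, the concept is still anchored:
    print(recall(memory_2pc, {"visual", "scale"}))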

Hierarchical Index Design

We structure memories in a three-level hierarchy to minimize retrieval context:

Level 0 (Root): Domain mapping (~400 chars)
    keyword → domain → anchor

Level 1 (Domain): Location pointers (~300 chars each)
    anchor → file:line → verify_token

Level 2 (Memory): Full SMASHIN SCOPE image (~500 chars)

Total navigational overhead is about 2.5 KB, versus 46.5 KB for a flat structure (a 94.6% reduction).
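
A minimal sketch of the lookup path, assuming simple in-memory dictionaries for each level (keys, file paths, and the example memory are invented for illustration): a retrieval touches only the root entry, one domain pointer, and the final memory, rather than the whole store.

    level0 = {"2pc": ("distributed-systems", "stone-wedding")}          # root: keyword -> domain, anchor
    level1 = {"stone-wedding": ("memories/dist.md:112", "47 couples")}  # domain: anchor -> file:line, verify token
    level2 = {"memories/dist.md:112": "Full SMASHIN SCOPE image text"}  # memory: full image

    def retrieve(keyword: str) -> tuple[str, str]:
        """Walk root -> domain -> memory; roughly 400 + 300 + 500 chars enter context."""
        domain, anchor = level0[keyword]
        location, verify_token = level1[anchor]
        return level2[location], verify_token

    print(retrieve("2pc"))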

Verification Token System

To prevent LLM hallucination, each memory includes a unique verification token—a phrase that:

  1. Only exists in the actual stored memory
  2. Appears unrelated to the concept (hard to guess)
  3. Must be present in any valid response

Example Verification Tokens

Concept            | Verify Token        | Rationale
-------------------+---------------------+----------------------------
CAP Theorem        | two heads breathe   | Specific to dragon metaphor
Two-Phase Commit   | 47 couples          | Absurd scale
Write-Behind Cache | 50-foot grandmother | Emotional anchor
Consistent Hashing | gnomes on clock     | Unique visual
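
A minimal sketch of the check itself, using the example tokens above (the dictionary and function are ours, not the system's API): a retrieval is accepted only if the response contains the memory's token verbatim.

    VERIFY_TOKENS = {
        "CAP Theorem": "two heads breathe",
        "Two-Phase Commit": "47 couples",
        "Write-Behind Cache": "50-foot grandmother",
        "Consistent Hashing": "gnomes on clock",
    }

    def verified(concept: str, response: str) -> bool:
        """True only if the stored verification token appears in the response."""
        return VERIFY_TOKENS[concept].lower() in response.lower()

    print(verified("CAP Theorem", "The dragon's two heads breathe in alternation"))   # True
    print(verified("CAP Theorem", "Consistency, availability, partition tolerance"))  # False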

Retrieval Confidence Scoring

Each retrieved memory receives a confidence score based on multiple signals:

\[\text{score}(m, q) = \alpha \cdot \text{sim}(m, q) + \beta \cdot \text{verify}(m) + \gamma \cdot \text{smashin}(m)\]

where:

  • \(\text{sim}(m, q)\) is the semantic similarity between memory \(m\) and query \(q\)
  • \(\text{verify}(m)\) is 1 if verification token is present, 0 otherwise
  • \(\text{smashin}(m)\) is the normalized SMASHIN SCOPE factor count (0-1)

The weights \(\alpha=0.5\), \(\beta=0.3\), \(\gamma=0.2\) are tuned on a held-out validation set.
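
The score translates directly into code. In this sketch the similarity value is assumed to be precomputed by whatever semantic-similarity measure the system uses; only the weighted combination is shown.

    ALPHA, BETA, GAMMA = 0.5, 0.3, 0.2  # weights tuned on the held-out validation set

    def confidence(sim: float, token_present: bool, smashin_fraction: float) -> float:
        """score(m, q) = alpha*sim(m, q) + beta*verify(m) + gamma*smashin(m)."""
        return ALPHA * sim + BETA * float(token_present) + GAMMA * smashin_fraction

    # Strong similarity, matching verification token, 8 of 12 SMASHIN SCOPE factors:
    print(confidence(sim=0.82, token_present=True, smashin_fraction=8 / 12))  # ~0.843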

Red Queen Protocol

“It takes all the running you can do, to keep in the same place.” — The Red Queen, Through the Looking-Glass [1]

Named after Lewis Carroll’s famous quote, the Red Queen Protocol represents the insight that constant adversarial testing is required just to maintain knowledge quality—without it, memories decay and hallucinations creep in.

Two-Phase Architecture:

  1. Pre-Learning Phase: Before deployment, run a configurable number of adversarial rounds (0-5) to proactively identify and strengthen weak memories
  2. Runtime Phase: Four specialized agents continuously challenge memories during operation

Agent     | Model | Role
----------+-------+------------------------------------------------------------
Examiner  | Haiku | Generate challenging retrieval queries targeting weak spots
Learner   | Haiku | Attempt retrieval using only index anchors (blind recall)
Evaluator | Haiku | Score retrieval accuracy, identify gaps and misconceptions
Evolver   | Opus  | Re-encode weak memories with stronger SMASHIN SCOPE images
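
A minimal sketch of one runtime round, assuming the four agents expose challenge, recall, score, and re-encode operations (the method names are placeholders, not a real API):

    def red_queen_round(memories, examiner, learner, evaluator, evolver):
        """One adversarial pass over the memory store."""
        for memory in memories:
            query = examiner.challenge(memory)               # probe known weak spots
            answer = learner.recall(query, index_only=True)  # blind recall via index anchors
            result = evaluator.score(memory, answer)         # accuracy, gaps, misconceptions
            if not result.passed:
                evolver.reencode(memory)                     # stronger SMASHIN SCOPE image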

Pre-Learning Mechanism:

During pre-learning, memories are tested against a harder threshold (a base probability of 0.5, versus 0.7 for normal retrieval). Weak memories that fail are immediately boosted by the Evolver agent before deployment, reducing downstream retrieval failures.
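
A minimal sketch of the pre-learning gate follows. "Base probability" is read here as the simulated recall success rate used during testing, which is an interpretation of the sentence above rather than a documented mechanism; the lower 0.5 setting surfaces more failures than the 0.7 used at runtime, so weak memories are re-encoded before deployment.

    import random

    PRELEARN_BASE_P = 0.5   # harder setting used before deployment
    RUNTIME_BASE_P = 0.7    # setting used for normal retrieval

    def prelearn(memories, evolver, rounds: int = 3) -> None:
        """Run the configured number of adversarial rounds (0-5) before deployment."""
        for _ in range(rounds):
            for memory in memories:
                recalled = random.random() < PRELEARN_BASE_P  # stand-in for a real blind-recall test
                if not recalled:
                    evolver.reencode(memory)                  # boost the weak memory immediately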

The protocol ensures memories remain robust and verification tokens effective throughout the system lifecycle.

[1] Carroll, L. 1871. Through the Looking-Glass, and What Alice Found There. Macmillan.