# System Architecture

## Overview

The Memory Palace system consists of five interconnected components:

1. **Storage Schema**: the JSON format in which individual memories persist
2. **Index Structure**: the hierarchical index that locates memories with minimal context
3. **Retrieval Protocol**: the 2-hop navigation and verification procedure
4. **Red Queen Protocol**: adversarial pre-learning that strengthens weak memories
5. **Trade-off Profiles**: presets that balance speed, accuracy, and corpus size
## Storage Schema

Memories are stored in JSON format with the following schema:
```json
{
  "id": "string - unique identifier",
  "subject": "string - topic name",
  "image": "string - SMASHIN SCOPE encoded image (300-500 chars)",
  "content": "string - factual information",
  "anchor": "string - memorable keyword",
  "verify_token": "string - anti-hallucination phrase",
  "created": "date - creation timestamp",
  "confidence": "float - retrieval confidence score (0-1)",
  "smashin_score": "int - encoding quality (0-12 factors)",
  "last_retrieved": "date - last successful retrieval",
  "retrieval_count": "int - total successful retrievals",
  "linked_to": "array - related memory IDs"
}
```
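To make the schema concrete, here is a sketch of a single record as a Python dict. All field values are invented for illustration; a real `image` would run 300-500 characters.

```python
# A hypothetical memory record conforming to the schema above.
# Every value here is illustrative, not taken from a real palace.
memory_record = {
    "id": "mem-0042",
    "subject": "TCP three-way handshake",
    "image": "A giant SYN gong struck three times ...",  # 300-500 chars in practice
    "content": "TCP connections open with SYN, SYN-ACK, ACK.",
    "anchor": "gong",
    "verify_token": "three-strike gong",
    "created": "2024-01-15",
    "confidence": 0.85,
    "smashin_score": 9,
    "last_retrieved": "2024-02-01",
    "retrieval_count": 12,
    "linked_to": ["mem-0007", "mem-0019"],
}

# Minimal structural check: every schema key must be present.
REQUIRED_KEYS = {
    "id", "subject", "image", "content", "anchor", "verify_token",
    "created", "confidence", "smashin_score", "last_retrieved",
    "retrieval_count", "linked_to",
}
assert REQUIRED_KEYS <= memory_record.keys()
```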
## Index Structure

The hierarchical index minimizes context while maximizing retrieval precision: a compact root index maps keywords to domains, and each domain index maps queries to exact memory locations and verify tokens, as sketched below.
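A minimal sketch of the two index levels, assuming plain dict-backed lookups. The class and field names (`RootIndex`, `DomainIndex`, `Location`) are hypothetical, chosen to match the calls made by the retrieval code in the next section.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Location:
    file: str   # path to the file holding the memory
    line: int   # line offset of the record within that file

@dataclass
class DomainIndex:
    # Maps a query keyword to where the memory lives and to its verify token.
    locations: dict = field(default_factory=dict)      # str -> Location
    verify_tokens: dict = field(default_factory=dict)  # str -> str

    def get_location(self, query: str) -> Location:
        return self.locations[query]

    def get_verify_token(self, query: str) -> str:
        return self.verify_tokens[query]

@dataclass
class RootIndex:
    # Maps keywords to domain names, e.g. "handshake" -> "networking".
    domains: dict = field(default_factory=dict)

    def match_keyword(self, query: str) -> Optional[str]:
        # First exact keyword hit wins; semantic search is the fallback.
        for keyword, domain in self.domains.items():
            if keyword in query.lower():
                return domain
        return None
```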
## Retrieval Protocol
The retrieval process follows a 2-hop navigation protocol (root → domain → memory):
```python
def retrieve_memory(query: str) -> dict:
    """
    Hierarchical retrieval with verification support.
    Returns the memory plus its verify token for the
    downstream hallucination check in generate_response.
    """
    # Hop 1: root index lookup (keyword match, semantic fallback)
    domain = root_index.match_keyword(query)
    if not domain:
        domain = semantic_search(query, root_index.domains)

    # Hop 2: domain index lookup
    domain_index = load_index(f"index/{domain}.md")
    location = domain_index.get_location(query)
    verify_token = domain_index.get_verify_token(query)

    # Load the actual memory from its location
    memory = read_memory(location.file, location.line)

    return {
        "memory": memory,
        "verify_token": verify_token,
        "hops": 2,
        "context_size": len(str(memory)),
    }
```
Once retrieved, the memory is handed to the generator, which enforces the verify-token check:

```python
def generate_response(query: str, memory: dict) -> str:
    """
    Generate a response with a hallucination check.
    """
    response = llm.generate(
        prompt=f"Answer based on this memory: {memory['image']}\n\nQuery: {query}"
    )

    # Verification: the verify token must surface in the response;
    # otherwise the LLM likely answered from priors, not the memory.
    if memory["verify_token"] not in response:
        raise HallucinationError(
            f"Response lacks verify token '{memory['verify_token']}'. "
            "LLM may have hallucinated."
        )
    return response
```
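To see the verify-token rule in isolation, here is a self-contained sketch with a stub in place of the LLM call. `HallucinationError`, `check_response`, and both sample responses are invented for this example.

```python
class HallucinationError(Exception):
    pass

def check_response(response: str, verify_token: str) -> str:
    # Same rule as generate_response: the token must appear verbatim.
    if verify_token not in response:
        raise HallucinationError(
            f"Response lacks verify token '{verify_token}'. "
            "LLM may have hallucinated."
        )
    return response

# A response grounded in the memory surfaces the token and passes.
check_response("The gong is struck three times: SYN, SYN-ACK, ACK "
               "(three-strike gong).", "three-strike gong")

# An answer from the model's priors omits the token and is rejected.
try:
    check_response("TCP opens with a handshake.", "three-strike gong")
except HallucinationError as err:
    print(err)
```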
## Red Queen Protocol

The Red Queen Protocol provides adversarial pre-learning to strengthen memories before deployment. Named after the Red Queen’s race in Through the Looking-Glass (“It takes all the running you can do to keep in the same place”), this protocol continuously tests and strengthens weak memories.
```python
import random
from typing import List

def red_queen_prelearn(memories: List[Memory], rounds: int = 3) -> List[Memory]:
    """
    Adversarial pre-learning: test and boost weak memories.

    Args:
        memories: List of memories to strengthen
        rounds: Number of adversarial testing rounds

    Returns:
        Strengthened memories with boosted SMASHIN scores
    """
    for _ in range(rounds):
        for i, memory in enumerate(memories):
            # Simulated adversarial recall test: higher SMASHIN scores
            # make successful recall more likely (0.50 up to 0.86).
            recall_prob = 0.5 + (memory.smashin_score * 0.03)
            recalled = random.random() < recall_prob

            if not recalled:
                # Boost the weak memory with a stronger encoding and
                # write it back so the strengthened version is kept.
                memory = strengthen_encoding(memory)
                memory.smashin_score = min(12, memory.smashin_score + 1)
                memories[i] = memory
    return memories
```

The protocol runs a configurable number of rounds before learning begins, identifying and strengthening weak memories proactively rather than reactively during retrieval failures.
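A self-contained sketch of invoking the protocol, using the `red_queen_prelearn` defined above. The minimal `Memory` dataclass and the no-op `strengthen_encoding` are hypothetical stand-ins for the system's real types.

```python
import random
from dataclasses import dataclass

@dataclass
class Memory:
    subject: str
    smashin_score: int  # encoding quality, 0-12

def strengthen_encoding(memory: Memory) -> Memory:
    # Stand-in: a real implementation would re-encode the image
    # with additional SMASHIN SCOPE factors.
    return memory

random.seed(42)  # deterministic demo
weak = [Memory("TCP handshake", 4), Memory("DNS resolution", 6)]
strengthened = red_queen_prelearn(weak, rounds=5)
for m in strengthened:
    print(m.subject, m.smashin_score)
```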
## Trade-off Profiles

The system supports multiple retrieval profiles, each optimized for a different goal:
| Profile | Speed | Accuracy | Corpus Size (memories) | Image Size | Red Queen Rounds | Use Case |
|---|---|---|---|---|---|---|
| Interview | <1s | 70% | 200 | Minimal | 0 | Rapid-fire Q&A |
| Study | 10-30s | 95% | 50 | Full | 5 | Deep learning |
| Reference | 2-5s | 80% | 500 | Medium | 3 | Quick lookup |
| Teaching | 30s+ | 98% | 30 | Full+ | 5 | Explaining |
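One plausible way to represent these profiles in code is a small frozen dataclass mirroring the table. The field names and the latency bounds (which approximate the table's ranges, e.g. 60s for "30s+") are assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class RetrievalProfile:
    name: str
    max_latency_s: float    # upper bound on retrieval time, seconds
    target_accuracy: float  # expected recall accuracy
    max_corpus: int         # memories the profile scales to
    image_size: str         # "minimal" | "medium" | "full" | "full+"
    red_queen_rounds: int   # adversarial pre-learning rounds

PROFILES = {
    "interview": RetrievalProfile("interview", 1.0, 0.70, 200, "minimal", 0),
    "study":     RetrievalProfile("study",    30.0, 0.95,  50, "full",    5),
    "reference": RetrievalProfile("reference", 5.0, 0.80, 500, "medium",  3),
    "teaching":  RetrievalProfile("teaching", 60.0, 0.98,  30, "full+",   5),
}
```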

