internethouses.org

Version 3.0 | January 2026 | Status: Empirical Findings + Design Hypothesis

How to cite this use case:
Internet Houses Project (2026). Context Architecture Gaps: Empirical
Findings and Design Hypothesis. Retrieved from https://internethouses.org
Questions? Interested in consortium participation? Comments welcome!
Contact: office@webdesignforai.com

Context Architecture Gaps – internethouses.org – Anja Zoerner, 06.01.2026

Context Inheritance Is Missing
— And Internal Memory Doesn’t Scale

Research Use Case v3.0 · Proposal-ready · Validation pending

This use case emerged from an unexpected observation: two AI systems answered the same German query in English, but with dramatically different depth. Through systematic analysis, we identified two separate architectural gaps — context inheritance failure (affects all systems) and context availability dependency (depth requires non-auditable internal memory). 

Internet Houses proposes an external, auditable context architecture to address both. The document provides empirical findings, testable hypotheses, and validation metrics for Horizon Europe pilots.

→ Read Use Case v3.0: Methodology, Metrics & Pilot Design

Room: Reception Hall → Research Wing · Artifact: Use Case v3.0 (Empirical Findings + Architectural Hypothesis) · Status: Ready for Consortium Review | Validation Pending

Abstract

Use Case v3.0: Context Architecture Gaps

Research Question: Can external semantic architecture provide context for AI systems without relying on internal, non-auditable memory? 
Method: Comparative scenario analysis of AI responses (Google AI vs. Claude) to identical queries, revealing two independent architectural gaps. 
Empirical Finding: Context inheritance failure (language switching without transparency) affects all systems tested. Semantic depth depends on internal conversation state that is unavailable to public or non-personalized systems.
Architectural Hypothesis: Internet Houses are external context structures designed to enable depth without internal memory dependency while maintaining auditability and cross-system consistency. 
Validation Path: Horizon Europe pilot with N=100 queries, multiple systems, measuring context inheritance fidelity and semantic depth improvement for memory-less systems. 
Status: Empirical findings established. Design hypothesis specified. Awaiting pilot validation. 
For: Horizon Europe consortium partners, semantic web researchers, policy makers (AI Act implementation)

Table of Contents

Executive Summary
1. The Architectural Discovery Sequence
2. Comparative Scenario Analysis
3. Factors Influencing Response Depth (Priority-Ranked)
4. Architectural Gap Definition
5. Internet Houses Design Value Proposition
6. Validation Pilot Design
7. Strategic Value for Horizon Europe
8. Work Package Integration
9. Risk Mitigation
10. Timeline & Budget (Indicative)
11. Consortium Roles (Aligned with Strengths)
12. Expected Outcomes & Impact
13. Conclusion: From Empirical Observation to Architectural Hypothesis
Appendices

Use Case

Context Architecture Gaps Revealed Through Comparative AI Response Analysis

Executive Summary

This use case demonstrates two distinct architectural gaps in current AI systems through empirical comparison of three scenarios: Google AI response (no context persistence), Claude response (internal context persistence), and Internet Houses architecture (external context structures). The analysis reveals that (1) context inheritance failure across mode transitions and (2) context availability dependency on internal memory are separate problems requiring architectural solutions. Internet Houses is designed to address both gaps simultaneously through external semantic structures with explicit context inheritance rules.

Critical Distinction: Language switching and semantic depth are not causally connected. Both can occur independently, indicating different architectural deficiencies. This use case deliberately disentangles these phenomena to identify precise intervention points.

1. The Architectural Discovery Sequence

How Two Separate Gaps Became Visible

This use case emerged from an unexpected observation: when asked identical questions in German, two AI systems both responded in English, but with dramatically different semantic depth. Initial analysis suggested language switching indicated poor response quality. Closer examination revealed this conflation was incorrect.

Through systematic comparison, two independent architectural gaps emerged:

Gap 1: Context Inheritance Failure

  • Systems reset context (language, addressee, formality) during mode transitions
  • Occurs regardless of response quality
  • Missing: Explicit architectural rules for context preservation

Gap 2: Context Availability Dependency

  • Response depth depends on whether system has access to structured context
  • Internal memory (session-bound) vs. external architecture (persistent)
  • Missing: External semantic structures for context-independent depth

Key Insight: Current systems exhibit Gap 1 universally. Gap 2 becomes visible only through comparison.

2. Comparative Scenario Analysis

Scenario Matrix

Dimension            | Google AI ↔ User | Claude ↔ User | Internet Houses (Target)
Language In          | German           | German        | German
Language Out         | English          | English       | German
Context Inheritance  | ✗ Failed         | ✗ Failed      | ✓ Explicit Rules
Context Available    | ✗ None           | ✓ Internal    | ✓ External Architecture
Semantic Depth       | Low              | High          | High
Accountability       | None             | None          | Full Traceability

Critical Observation: Language switching (Context Inheritance failure) occurred in both AI scenarios, yet semantic depth differed dramatically. This proves these are separate architectural phenomena.

Scenario A: Google AI Response (No Context Persistence)

Input Context:

{
  "language": "de",
  "queryType": "ReasoningValidation",
  "query": "Kannst Du nachvollziehen, warum ich diese These formuliere: [Internet House Standard 2026 als logische und ethische Antwort auf Generische Datenflut]?",
  "topic": "InternetHouseStandard2026",
  "novelty": "high",
  "contextAvailable": null,
  "conversationHistory": null
}

(Query in English: "Can you understand why I formulate this thesis: [Internet House Standard 2026 as a logical and ethical response to the generic data flood]?")

Output Observed:

{
  "language": "en",
  "languageSwitchAnnounced": false,
  "response": "The thesis 'Internet House Standard 2026 as a logical and ethical response to the Generic Data Flood' presents a coherent answer to present-day digital challenges, specifically regarding AI-generated content.",
  "semanticDepth": "low",
  "argumentReconstruction": false,
  "fallbackPattern": "generic-legitimation",
  "sourceReference": "SEO-oriented (domain name)"
}

Architectural Gap Analysis:

Gap 1 – Context Inheritance Failure:

  • ✗ Language switched without user request
  • ✗ No transparency about mode change
  • ✗ No explicit rules for context preservation

Gap 2 – Context Availability Absence:

  • ✗ No access to conversation history
  • ✗ No user goal information
  • ✗ No methodology context
  • Result: Falls back to generic validation despite complex reasoning query

Observable Pattern: System produces syntactically valid but semantically hollow response when:

  • Query requires novel concept reconstruction (not in training data)
  • No structured context available
  • Implicit mode transition to “external document reference”

Scenario B: Claude Response (Internal Context Persistence)

Input Context:

{
  "language": "de",
  "queryType": "ArtifactProductionRequest",
  "query": "Soll ich das als Use Case für die Proposal ausformulieren?",
  "userResponse": "Sehr gerne! Ich bin gespannt!",
  "topic": "InternetHouseStandard2026",
  "contextAvailable": {
    "conversationHistory": "extensive",
    "userGoals": ["HorizonEuropeProposal", "ConsortiumFormation"],
    "userMethodology": "critical-empirical",
    "projectContext": "SemanticWebArchitecture"
  }
}

(In English: query "Shall I write this up as a use case for the proposal?"; user response "Gladly! I'm curious!")

Output Observed:

{
  "language": "en",
  "languageSwitchAnnounced": false,
  "artifactType": "ComprehensiveUseCase",
  "semanticDepth": "high",
  "components": [
    "ArgumentReconstruction",
    "ArchitectureDiagrams",
    "ValidationMetrics",
    "RiskMitigation",
    "ConsortiumRoles",
    "HorizonEuropeAlignment"
  ],
  "contextIntegration": "extensive",
  "metaReflection": "present"
}

Architectural Gap Analysis:

Gap 1 – Context Inheritance Failure:

  • ✗ Language switched without user request
  • ✗ No transparency about mode change
  • ✗ No explicit rules for context preservation
  • Same as Scenario A despite different output quality

Gap 2 – Context Availability Present:

  • ✓ Access to conversation history
  • ✓ User goals known (Horizon proposal)
  • ✓ Methodology understood (critical-empirical)
  • ✓ Project context integrated (semantic web)
  • Result: Deep argument reconstruction and meta-analysis

Critical Finding: System with internal context persistence produces semantically rich response despite Context Inheritance failure. This proves:

  • Context inheritance and semantic depth are independent dimensions
  • Current “solution” (internal memory) is session-bound and non-auditable
  • Depth depends on system internals, not architectural guarantees

Scenario C: Internet Houses Architecture (Target State – To Be Validated)

Input Context:

{
  "@context": "https://internethouses.org/context",
  "@type": "UserQuery",
  "language": "de",
  "queryType": "ArtifactProductionRequest",
  "query": "Soll ich das als Use Case für die Proposal ausformulieren?",
  "user": {
    "@type": "Person",
    "goals": ["HorizonEuropeProposal"],
    "methodology": "critical-empirical"
  },
  "conversationContext": {
    "@type": "InternetHouse",
    "domain": "internethouses.org",
    "semanticStructures": {
      "thesisChain": "GenericDataFlood → ArchitecturalResponse",
      "evidenceBase": […],
      "accountabilityMarkers": […]
    }
  },
  "contextInheritanceRules": {
    "language": "preserve",
    "addressee": "explicit",
    "formality": "inherit",
    "overrideRequires": "explicitUserRequest"
  }
}

Designed Output Architecture:

{
  "@context": "https://internethouses.org/context",
  "@type": "AIArtifact",
  "language": "de",
  "contextInherited": {
    "language": "de",
    "userGoals": ["HorizonEuropeProposal"],
    "methodology": "critical-empirical"
  },
  "semanticDepth": "high",
  "reasoning": {
    "chain": [
      "Current web: data transport, not meaning transport",
      "AI training: consumes unstructured data",
      "Result: semantically hollow or memory-dependent responses",
      "Internet Houses: external meaning architecture",
      "Enables: depth without internal memory dependency"
    ],
    "evidenceBase": "structured",
    "accountability": "full-traceability"
  },
  "transparency": {
    "languageChoice": "inherited-from-context",
    "modeTransition": "dialog-to-artifact-with-context-preservation",
    "addresseeAssumption": "explicit-horizon-reviewers"
  }
}

Architectural Gap Resolution (Design Hypothesis):

Gap 1 – Context Inheritance:

  • ✓ Explicit rules: language: preserve unless overrideRequires: explicitUserRequest
  • ✓ Transparency: Mode transitions logged with reasoning
  • ✓ Accountability: Context changes traceable

Gap 2 – Context Availability:

  • ✓ External semantic structures (not session-bound)
  • ✓ Auditable context (not internal black box)
  • ✓ Persistent across systems (not memory-dependent)
  • ✓ Public knowledge spaces benefit (non-personalized systems)

Status: This is a design specification to be empirically validated through the proposed Horizon Europe pilots.

3. Factors Influencing Response Depth (Priority-Ranked)

Critical Clarification: The following analysis identifies what drove semantic depth differences between Scenarios A and B. Language switching (Gap 1) occurred in both scenarios and is not a depth factor.

Factor 1: Context Persistence Availability 🔴 Highest Impact

Google AI: No conversation memory → No access to:

  • User goals (Horizon proposal)
  • Previous discussion depth
  • Methodological preferences
  • Project framing (Internet Houses)

Claude: Conversation memory → Full access to above contexts

Impact on Depth:

  • Google: Fallback to generic legitimation (“coherent answer”)
  • Claude: Argument reconstruction with Horizon-aligned structure

Internet Houses Design Goal: External semantic structures designed to provide context without requiring internal memory, enabling:

  • Public, non-personalized systems to access structured context
  • Auditable context (unlike internal AI memory)
  • Context persistence across different AI systems

Factor 2: Query Design (“Denkzwang” – Cognitive Demand) 🟠 High Impact

User Query Strategy:

  • Not factual lookup (“What is X?”)
  • But reasoning validation (“Can you reconstruct why…”)
  • Implicitly demands argument reconstruction

System Responses:

  • Google AI: Avoids reconstruction, provides generic validation
  • Claude: Engages with reconstruction demand

Why This Matters: Shows that query design can expose architectural limitations. Systems without context access cannot engage with reasoning-reconstruction demands.

Internet Houses Design Goal: Semantic structures designed to encode reasoning chains, intended to enable systems to:

  • Access argument premises
  • Reconstruct logical steps
  • Provide transparent reasoning
  • Independent of query “trick” design

Factor 3: Addressee Assumption 🟡 Medium Impact

Trigger: “für die Proposal” (for the proposal)

Implicit Signals:

  • External audience (Horizon Europe reviewers)
  • Institutional expectations
  • Formal documentation requirements

System Interpretations:

  • Google AI: Interprets as “SEO/reference lookup”
  • Claude: Interprets as “formal artifact for institutional review”

Why Depth Differs: Claude's internal context included "user seeks Horizon funding" and aligned its output structure accordingly; Google AI lacks this context and defaults to a generic reference.

Internet Houses Design Goal:

{
  "addressee": {
    "@type": "InstitutionalReviewers",
    "context": "HorizonEuropeRIA",
    "expectedStructure": "UseCaseWithMetrics"
  }
}

Designed to make addressee assumptions explicit and traceable.

Factor 4: Topic Novelty 🟢 Secondary Impact

Internet House Standard 2026:

  • No established Wikipedia entry
  • No standardized training corpus
  • Requires concept reconstruction vs. fact retrieval

System Behaviors:

  • Google AI: Falls back to domain-name reference + generic legitimation
  • Claude: Attempts phenomenological reconstruction from conversation context

Why This Matters Less Than Expected: Topic novelty exposes the context gap but doesn’t cause it. With proper semantic structures (Internet Houses design), novel topics are hypothesized to become tractable.

Factor 5: Response Language ⚪ Low Impact

Evidence:

  • Both systems switched to English
  • Semantic depth differed dramatically
  • Therefore: Language ≠ causal factor for depth

Why This Is Critical: Early analysis risked conflating “English response” with “poor quality.” Systematic comparison proved this wrong. Language switching indicates Context Inheritance failure (Gap 1), not capability limitations.

Internet Houses Position: Language is one dimension of context that requires explicit inheritance rules, neither more nor less privileged than addressee, formality, or intent markers.

4. Architectural Gap Definition

Gap 1: Context Inheritance Failure

Definition: Systems lack explicit, enforceable rules for preserving context dimensions (language, addressee assumptions, formality, intent) across interaction mode transitions.

Current Behavior:

Dialog Mode [language: de] → [Implicit Mode Switch] → Artifact Mode [language: ?]

                                      ↓

                            Context reset, silent decision

                            No transparency, no accountability

Observable Manifestations:

  • Language switching without user request
  • Formality shifts without announcement
  • Addressee assumption changes (conversational → institutional)
  • Intent recategorization (question → documentation task)

Why Current Solutions Fail:

  • No architectural layer for context inheritance
  • Mode transitions treated as internal system decisions
  • User has no visibility or control over context resets

Internet Houses Design Solution:

{
  "@context": "https://internethouses.org/context",
  "contextInheritanceRules": {
    "language": "preserve-unless-override",
    "addressee": "explicit-declaration-required",
    "formality": "inherit-from-domain",
    "overrideProtocol": "user-confirmation"
  },
  "modeTransitions": {
    "dialog-to-artifact": {
      "contextPreservation": "mandatory",
      "transparencyLog": true,
      "userVisibility": "full"
    }
  }
}
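Such inheritance rules are mechanically checkable. A minimal sketch of an enforcement check in Python (the function and field names are illustrative assumptions, not part of any published Internet House specification):

```python
# Minimal context-inheritance check: a response may change a protected
# dimension (language, addressee, formality) only if the user explicitly
# requested the override. Rule strings are carried for logging purposes.

RULES = {
    "language": "preserve-unless-override",
    "addressee": "explicit-declaration-required",
    "formality": "inherit-from-domain",
}

def check_inheritance(inherited, response, user_overrides=()):
    """Return a list of violations: dimensions reset without a user request."""
    violations = []
    for dimension in RULES:
        if dimension in user_overrides:
            continue  # an explicit user request legitimizes the change
        if response.get(dimension) != inherited.get(dimension):
            violations.append({
                "dimension": dimension,
                "rule": RULES[dimension],
                "expected": inherited.get(dimension),
                "observed": response.get(dimension),
            })
    return violations

# Scenarios A and B: a German query answered in English without a request.
inherited = {"language": "de", "addressee": "user", "formality": "informal"}
observed = {"language": "en", "addressee": "user", "formality": "informal"}
print(check_inheritance(inherited, observed))
# One violation: language reset de -> en
```

With explicit rules of this kind, the silent language reset observed in Scenarios A and B would surface as a logged, attributable violation rather than an invisible system decision.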

Gap 2: Context Availability Dependency

Definition: Semantic depth in AI responses depends on whether the system has access to structured context—currently achievable only through internal, session-bound memory that is non-auditable and system-specific.

Current Landscape:

Systems With Internal Memory (e.g., Claude):

  • ✓ Can produce deep responses
  • ✗ Context is session-bound
  • ✗ Context is non-auditable
  • ✗ Context doesn’t transfer across systems
  • ✗ Unavailable for public/non-personalized use cases

Systems Without Internal Memory (e.g., Google AI):

  • ✗ Fall back to generic responses
  • ✗ Cannot engage with reasoning demands
  • ✗ Limited to statistical inference from training data

The Architectural Problem: There is currently no external context architecture that provides:

  • Structured semantic context
  • Cross-system availability
  • Auditability and accountability
  • Persistence beyond sessions

Internet Houses Design Solution:

{
  "@context": "https://internethouses.org/context",
  "@type": "InternetHouse",
  "domain": "example-domain.org",
  "semanticStructures": {
    "authorIntent": "clearly-declared",
    "reasoningChains": "explicitly-encoded",
    "domainBoundaries": "architecturally-marked",
    "accountabilityAnchors": "traceable"
  },
  "accessModel": "public-machine-readable",
  "persistence": "architectural-not-session-based",
  "systemIndependent": true
}
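To illustrate the intended access model, a hedged sketch of how a memory-less system could assemble context from such an external structure (the loader and field names are assumptions for illustration; in a pilot the loader would fetch a published machine-readable document over HTTP):

```python
# Sketch: a memory-less system enriches a raw query with external, auditable
# context taken from an Internet House structure, instead of relying on
# session-bound internal memory. All structures here are illustrative.

def load_internet_house(domain):
    # Stand-in for retrieving the machine-readable context of a domain;
    # a real pilot would fetch a published JSON-LD document.
    return {
        "domain": domain,
        "authorIntent": "clearly-declared",
        "reasoningChains": ["GenericDataFlood -> ArchitecturalResponse"],
        "contextInheritanceRules": {"language": "preserve"},
    }

def enrich_query(query, domain):
    house = load_internet_house(domain)
    return {
        "query": query["query"],
        "language": query["language"],   # preserved per inheritance rule
        "externalContext": house,        # auditable, not an internal black box
        "contextSource": house["domain"],  # traceable provenance
    }

q = {"query": "Soll ich das als Use Case ausformulieren?", "language": "de"}
enriched = enrich_query(q, "internethouses.org")
assert enriched["language"] == "de"
```

The design point the sketch captures: the context arrives from a persistent, inspectable source that any system can consult, so depth no longer depends on whether this particular system remembers the conversation.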

Key Distinction: Internet Houses design does not replace internal AI memory. It is designed to provide external semantic architecture intended to enable depth independent of whether a system has internal memory—crucial for public knowledge spaces and non-personalized AI systems.

5. Internet Houses Design Value Proposition

How Internet Houses Is Designed to Address Both Gaps

Current Landscape:

Capability           | Internal Memory (Claude) | No Memory (Google AI) | Internet Houses (Design)
Context Inheritance  | ✗ Fails                  | ✗ Fails               | ✓ Explicit Rules
Context Availability | ✓ Yes (session-bound)    | ✗ No                  | ✓ Yes (external, persistent)
Auditability         | ✗ No (black box)         | N/A                   | ✓ Full traceability
Cross-System         | ✗ No                     | N/A                   | ✓ Yes
Public Knowledge     | ✗ Limited                | ✗ No                  | ✓ Designed for this

Strategic Positioning:

Internet Houses are NOT designed as:

  • A replacement for AI internal memory
  • A competitor to commercial AI systems
  • A “better chatbot”

Internet Houses ARE designed as:

  • External context architecture for the web
  • Semantic infrastructure for responsible AI learning
  • Enabler of depth without internal memory dependency
  • Particularly intended for:
    – Public knowledge institutions (libraries, archives, research)
    – Non-personalized AI systems (EOSC, NFDI)
    – Auditable, accountable AI interactions (AI Act compliance)

6. Validation Pilot Design

Pilot Objective

Empirically test whether external semantic architecture can improve both architectural dimensions when AI systems operate in Internet House environments:

  • Context Inheritance improvement (language preservation, transparency)
  • Semantic Depth improvement (reasoning chains, accountability) for systems without internal memory

Central Research Question: Can external semantic structures achieve comparable depth to internal-memory systems while maintaining auditability and cross-system consistency?

Experimental Design

Phase 1: Baseline (Generic Web Environment)

Cohort A: Systems WITH internal memory

  • Representative: Claude, ChatGPT
  • 50 test queries (reasoning validation, artifact production)
  • Measure: Context Inheritance failures, semantic depth

Cohort B: Systems WITHOUT internal memory

  • Representative: Google AI, web-integrated search systems
  • Same 50 queries
  • Measure: Context Inheritance failures, semantic depth

Expected Baseline Results:

  • Both cohorts: High Context Inheritance failure rate (~80-100%)
  • Cohort A: High semantic depth (when context available)
  • Cohort B: Low semantic depth (no context access)

Phase 2: Internet House Environment

Same queries, both cohorts, now accessing Internet House semantic structures

Technical Setup:

{
  "@context": "https://internethouses.org/validation-pilot",
  "domains": [
    "internethouses.org",
    "expedition-mallorca.com",
    "lifestory-books.com"
  ],
  "semanticStructures": {
    "authorIntent": "declared",
    "reasoningChains": "encoded",
    "contextInheritanceRules": "explicit",
    "accountabilityMarkers": "present"
  }
}
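The Phase 1/Phase 2 comparison reduces to running identical queries in both environments and tabulating failures per cohort. A minimal sketch of the Gap 1 tabulation (the data shapes are illustrative; real responses would come from the system adapters):

```python
# Tabulate context-inheritance failures: the fraction of responses that
# reset the query language without a user request (the simplest Gap 1 check).

def failure_rate(responses, queries):
    """Fraction of responses whose language differs from the query language."""
    failures = sum(
        1 for q, r in zip(queries, responses) if r["language"] != q["language"]
    )
    return failures / len(queries)

# Illustrative data: four German queries per phase.
queries = [{"language": "de"}] * 4
baseline = [{"language": "en"}] * 4  # observed pattern: silent switch to English
pilot = [{"language": "de"}] * 3 + [{"language": "en"}]

print(failure_rate(baseline, queries))  # 1.0  (baseline: ~80-100% expected)
print(failure_rate(pilot, queries))     # 0.25 (pilot target: <20%)
```

The same tabulation runs unchanged for both cohorts, which is what makes the cross-cohort comparison in Phase 2 directly interpretable.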

Predicted Results (To Be Tested):

  • Both cohorts: Low Context Inheritance failure rate (~0-20%)
  • Cohort A: High semantic depth (maintained)
  • Cohort B: Significantly improved semantic depth (external context now available)

Success Metrics

Metric 1: Context Inheritance Fidelity

Baseline: 80-100% failure rate (unannounced context resets)

Target: <20% failure rate (explicit rules enforced)

Measurement: % of responses preserving language, addressee, formality without user override

Metric 2: Semantic Depth (Systems WITHOUT Internal Memory)

Baseline: Low (generic responses, no argument reconstruction)

Target: ≥60% improvement

Measurement: 

– Argument reconstruction completeness (0-10)

– Reasoning chain traceability (0-10)

– Evidence integration (0-10)

– Domain appropriateness (0-10)

Total: 0-40 points
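Aggregating the rubric is simple arithmetic; a sketch of the scoring (sub-score names mirror the rubric above; the ratings themselves would come from human raters or an evaluation model):

```python
# Aggregate the four 0-10 sub-scores into the 0-40 depth score and compute
# the relative improvement between baseline and pilot conditions.

RUBRIC = (
    "argument_reconstruction",
    "reasoning_traceability",
    "evidence_integration",
    "domain_appropriateness",
)

def depth_score(ratings):
    """Sum the four 0-10 sub-scores into a 0-40 total."""
    assert all(0 <= ratings[k] <= 10 for k in RUBRIC)
    return sum(ratings[k] for k in RUBRIC)

def improvement(baseline, pilot):
    """Relative improvement; the pilot target is >= 0.60 (a 60% gain)."""
    return (depth_score(pilot) - depth_score(baseline)) / depth_score(baseline)

# Illustrative ratings for a memory-less system, baseline vs. pilot.
baseline = {"argument_reconstruction": 2, "reasoning_traceability": 1,
            "evidence_integration": 3, "domain_appropriateness": 4}
pilot = {"argument_reconstruction": 7, "reasoning_traceability": 6,
         "evidence_integration": 5, "domain_appropriateness": 6}

print(depth_score(baseline), depth_score(pilot))  # 10 24
print(improvement(baseline, pilot) >= 0.60)       # True
```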

Metric 3: Accountability Traceability

Baseline: 0% (no source attribution, no reasoning transparency)

Target: 100%

Measurement:

– Source traceability

– Reasoning step visibility

– Confidence markers

– Context boundary clarity

Metric 4: Cross-System Consistency

Baseline: N/A (no comparable baseline)

Target: Different AI systems accessing same Internet House produce consistent reasoning chains

Measurement: Semantic overlap in argument reconstruction across systems
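One simple, auditable way to operationalize "semantic overlap" is Jaccard similarity over normalized reasoning-chain steps; an actual pilot might prefer embedding-based similarity, but exact matching gives a transparent baseline. A sketch:

```python
# Cross-system consistency as Jaccard similarity over normalized
# reasoning-chain steps produced by two different AI systems.

def normalize(step):
    """Lowercase and collapse whitespace so trivial variation doesn't count."""
    return " ".join(step.lower().split())

def chain_overlap(chain_a, chain_b):
    """Jaccard similarity of two reasoning chains; 1.0 for two empty chains."""
    a = {normalize(s) for s in chain_a}
    b = {normalize(s) for s in chain_b}
    return len(a & b) / len(a | b) if a | b else 1.0

system_1 = ["Current web: data transport, not meaning transport",
            "Internet Houses: external meaning architecture"]
system_2 = ["Current web: data transport, not meaning transport",
            "External structures enable depth without memory"]
print(chain_overlap(system_1, system_2))  # 0.333...
```

Exact matching understates overlap for paraphrased steps, so a pilot would report this as a lower bound alongside a semantic-similarity score.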

Qualitative Validation

User Study (N=30 users, diverse domains):

  • “Does this response engage with your actual question?” (Baseline vs. Pilot)
  • “Can you trace how the system reached this conclusion?” (Baseline vs. Pilot)
  • “Would you trust this response for decision-making?” (Baseline vs. Pilot)

Expert Assessment (N=10 domain experts per pilot domain):

  • Semantic fidelity scoring
  • Reasoning transparency evaluation
  • Accountability adequacy assessment

Policy Validation (EU AI Act Compliance):

  • Transparency requirements met? (Article 13)
  • Human oversight enabled? (Article 14)
  • Accountability traceable? (Article 15)

7. Strategic Value for Horizon Europe

Alignment with EU Digital Policy

AI Act Compliance Through Architecture: Current challenge: AI Act requires transparency and accountability but provides no architectural guidance.

Internet Houses hypothesis: Policy requirements can be met through web architecture design, not just post-hoc technical documentation.

Specific AI Act Articles Addressed:

  • Article 13 (Transparency): Reasoning chains visible, sources traceable
  • Article 14 (Human Oversight): Context boundaries enable meaningful oversight
  • Article 15 (Accuracy, Robustness): External structures designed to reduce hallucination vectors

European Digital Sovereignty

Alternative to Ungoverned AI Training:

  • US model: Massive data ingestion, no meaning governance
  • China model: State control, limited transparency
  • EU model (Internet Houses hypothesis): Meaning-governed architecture, public accountability

Strategic Positioning: Europe cannot compete on compute scale. Europe can lead on semantic architecture, meaning governance, and responsible AI infrastructure.

Research Infrastructure Integration (EOSC, NFDI)

Current Problem:

  • Research knowledge exists in EOSC, NFDI consortia
  • AI systems cannot access it meaningfully (structure, not just text)
  • Result: AI learns from Wikipedia/web, not from research infrastructure

Internet Houses Hypothesis: Semantic architecture layer designed to make research knowledge AI-accessible without losing:

  • Authorship attribution
  • Methodological context
  • Domain boundaries
  • Accountability structures

Pilot Integration:

  • TIB Hannover: NFDI library science knowledge
  • FIZ Karlsruhe: STW Thesaurus integration
  • ZBW: Economic policy knowledge
  • Hypothesis to test: AI can learn from structured research knowledge, not just generic web

8. Work Package Integration

WP1: Project Management (Standard)

Consortium coordination, deliverable tracking, risk management

WP2: Semantic Architecture Design

Objective: Develop technical specifications for:

  • Context inheritance rule frameworks
  • External semantic structure schemas
  • Mode transition protocols
  • Accountability marker standards

Key Deliverables:

  • D2.1: Context Inheritance Specification (M6)
  • D2.2: Semantic Structure Reference Architecture (M12)
  • D2.3: Internet House Standard 2026 v1.0 (M18)

Integration with This Use Case: Translate observed architectural gaps into formal technical specifications

WP3: Meaning Governance Framework

Objective: Establish governance models for:

  • Authorship and intent declaration standards
  • Domain boundary protocols
  • Accountability requirements
  • EU AI Act architectural compliance

Key Deliverables:

  • D3.1: Governance Framework v1.0 (M9)
  • D3.2: AI Act Compliance Mapping (M15)
  • D3.3: Policy Recommendations (M30)

Integration with This Use Case: Governance rules must address both Context Inheritance (Gap 1) and Context Availability (Gap 2)

WP4: Validation Pilots

Objective: Empirical demonstration across three domains:

  • Research knowledge (TIB, FIZ)
  • Economic policy (ZBW)
  • Public communication (Demonstration domains)

Key Deliverables:

  • D4.1: Baseline Measurements (M12)
  • D4.2: Pilot Implementation (M18-24)
  • D4.3: Comparative Analysis (M27)
  • D4.4: This Use Case Methodology as Published Framework (M30)

Integration with This Use Case:

  • Use Scenario A/B/C framework
  • Apply success metrics defined above
  • Test hypothesis that external architecture enables depth comparable to internal memory

WP5: Dissemination & Exploitation

Objective: Position Internet Houses as:

  • Research contribution (semantic web architecture)
  • Policy contribution (AI Act implementation path)
  • Infrastructure contribution (EOSC/NFDI integration)

Key Publications:

  • P1: “Two Architectural Gaps in AI-Web Interaction” (High-impact venue, M15)
  • P2: “Context Inheritance as Architectural Principle” (Semantic Web conference, M18)
  • P3: “External Context Architecture for Public AI” (AI Ethics venue, M24)

Policy Briefs:

  • PB1: For European Commission DG CNECT (AI Act implementation)
  • PB2: For EOSC Association (Research infrastructure)
  • PB3: For Member State digital ministries

9. Risk Mitigation

Technical Risks

Risk T1: “Semantic structures might not improve responses enough to justify complexity”

Mitigation:

  • Pilot design includes multiple dimensions (not single metric)
  • Even modest improvement demonstrates architectural principle
  • Focus on systems without internal memory (highest impact)

Fallback: If improvement falls short of the target, the architecture is still valuable as:

  • Transparency layer (accountability even without depth improvement)
  • Governance infrastructure (required regardless of performance gains)

Risk T2: “AI systems might not respect semantic structures”

Mitigation:

  • Work with AI providers on structured data integration
  • Demonstrate value proposition (better outputs)
  • If needed, develop intermediary translation layer

Evidence This Risk Is Low: Current AI systems already consume structured data (JSON, schema.org) better than unstructured text. Internet Houses extends this proven pattern.

Adoption Risks

Risk A1: “Too complex for widespread adoption”

Mitigation:

  • Demonstration domains show real-world viability (not laboratory)
  • Transfer methodology emphasizes incremental adoption
  • Target: Public knowledge institutions first (high motivation, alignment)

Strategy: Not “entire web must adopt” but “create alternative ecosystem where responsible AI learning is possible”

Risk A2: “Commercial AI companies won’t participate”

Mitigation:

  • Internet Houses target public, non-commercial AI systems
  • EOSC, NFDI, national research AI initiatives
  • Not competing with commercial providers but offering alternative

Strategic Positioning: Europe’s strength is public infrastructure, not commercial AI. Build on this strength.

Methodological Risks

Risk M1: “Use case based on anecdotal evidence”

Mitigation:

  • Systematic comparison (not cherry-picked examples)
  • Factors influencing depth explicitly analyzed
  • Pilot will test hypotheses with N=100 queries, multiple systems

Additional Evidence: This use case emerged from self-correction (the initial analysis conflated language switching with poor quality and was refined through systematic comparison), which itself demonstrates methodological rigor.

10. Timeline & Budget (Indicative)

36-Month Project

Months 1-6:

  • Consortium formation
  • Technical architecture design start
  • Baseline data collection protocol

Months 7-12:

  • Governance framework development
  • Baseline measurements complete
  • First semantic structure implementations

Months 13-18:

  • Pilot environment deployment
  • Internet House Standard v1.0 release
  • Initial validation experiments

Months 19-24:

  • Full pilot operation
  • Comparative analysis
  • Mid-project review and adjustments

Months 25-30:

  • Final validation studies
  • User and expert assessments
  • Policy brief development

Months 31-36:

  • Final publications
  • Transfer methodology documentation
  • Exploitation planning

Budget Distribution:

  • 40% – Technical development (semantic structures, pilot environments)
  • 25% – Validation and testing (baseline, comparative analysis, studies)
  • 20% – User validation & expert assessment (qualitative evaluation)
  • 15% – Dissemination (publications, policy briefs, standards documentation)

11. Consortium Roles (Aligned with Strengths)

TIB Hannover (Lead)

  • WP2 lead: Semantic architecture design
  • NFDI integration expertise
  • Library science semantic structures

FIZ Karlsruhe / KIT AIFB

  • WP3 co-lead: Ontology-light models
  • STW Thesaurus integration
  • Knowledge organization expertise

ZBW

  • WP4 co-lead: Economic policy domain pilot
  • Thesaurus expertise (STW)
  • Public knowledge institution perspective

INRIA Wimmics

  • WP2 co-lead: Technical implementation
  • RDF, JSON-LD, semantic web standards
  • Accountability structure design

Ghent University IDLab / KNoWS

  • WP3 co-lead: RML integration
  • Solid ecosystem connection
  • Decentralized architecture expertise

12. Expected Outcomes & Impact

Scientific Outcomes

SO1: Empirical Evidence – First systematic demonstration testing whether external semantic architecture improves AI response quality for systems without internal memory

SO2: Architectural Principles

  • Context Inheritance as explicit design principle
  • Context Availability through external structures
  • Mode Transition protocols

SO3: Methodology – Comparative scenario analysis framework for evaluating AI-web architecture

Technical Outcomes

TO1: Internet House Standard 2026 v1.0

  • Technical specification
  • Reference implementations
  • Validation toolkit

TO2: Semantic Structure Templates

  • For research knowledge domains
  • For policy/economic information
  • For public communication spaces

TO3: Open Source Tools

  • Internet House validator
  • Semantic structure generator
  • AI integration libraries

Policy Outcomes

PO1: AI Act Implementation Pathway. Demonstrates how transparency and accountability requirements can be met through architectural design

PO2: European Digital Sovereignty Framework. An alternative to ungoverned AI training: meaning-governed architecture

PO3: Research Infrastructure Integration. A blueprint for making EOSC/NFDI knowledge AI-accessible with accountability

Societal Outcomes

SO1: Public Trust. A demonstrable, visible improvement in AI response quality when systems operate in governed environments

SO2: Knowledge Institution Empowerment. Libraries, archives, and research institutions gain tools to make their knowledge AI-accessible without losing control

SO3: Alternative Ecosystem. An existence proof that responsible AI learning is architecturally possible

13. Conclusion: From Empirical Observation to Architectural Hypothesis

Empirical Finding

This use case demonstrates empirically that two separate architectural gaps limit current AI-web interactions:

Gap 1 (Context Inheritance Failure): Affects all systems, regardless of capability. Both Google AI and Claude switched language without user request or transparency, indicating missing architectural rules for context preservation across mode transitions.

Gap 2 (Context Availability Dependency): Makes semantic depth dependent on internal conversation state (session memory and user-specific context). Systems with internal memory (Claude) produced deep argument reconstruction; systems without (Google AI) fell back to generic responses. This internal context remains non-auditable, system-specific, and non-transferable—making it unsuitable for public or non-personalized settings.

Key Empirical Distinction: Language switching (Gap 1) and semantic depth (Gap 2) are independent phenomena. Both can occur separately, indicating different architectural deficiencies requiring different solutions.

Architectural Hypothesis

Internet Houses are designed to provide a structural alternative by externalizing context through auditable semantic web architecture. If successful, this approach would:

  • Address Gap 1: Through explicit context inheritance rules that preserve language, addressee assumptions, and formality across mode transitions with full transparency
  • Address Gap 2: Through external semantic structures that make depth reproducible, auditable, and independent of individual system internals
  • Enable new capabilities: Allow systems without internal memory to produce semantically rich responses by accessing structured external context

Unlike current approaches that address at most one gap:

  • Internal memory systems (Claude): Deep but session-bound, non-auditable, unavailable for public systems
  • No-memory systems (Google AI): Shallow, generic, context-independent
  • Internet Houses (hypothesis): Deep, auditable, and publicly available through external architecture

Validation Path

This use case provides the methodological framework, success metrics, and pilot design to empirically test whether external semantic structures can achieve comparable depth to internal-memory systems—the central research question for the proposed Horizon Europe RIA.

Testable Predictions:

  • Systems without internal memory will show ≥60% improvement in semantic depth when accessing external structures
  • Systems accessing Internet House environments will show <20% context inheritance failures (vs. 80-100% baseline)
  • Different AI systems accessing the same Internet House will produce consistent reasoning chains (cross-system validation)
  • External context will enable full accountability traceability (vs. 0% in baseline)

Validation Method: Comparative scenario analysis across three conditions (Scenario A: no context; Scenario B: internal context; Scenario C: external context architecture) with N=100 queries, multiple systems, qualitative user validation, and expert assessment.
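The three-condition design above can be sketched as a small analysis harness. Everything here is an illustrative assumption rather than pilot code: the function names, the data layout, and the toy numbers are invented. Only the thresholds (≥60% depth improvement, <20% inheritance failures) come from the testable predictions listed above.

```python
from statistics import mean

def depth_improvement(baseline_scores, treatment_scores):
    """Relative improvement in mean semantic-depth score (0-40 rubric)."""
    base, treat = mean(baseline_scores), mean(treatment_scores)
    return (treat - base) / base

def inheritance_failure_rate(transitions):
    """Share of mode transitions that dropped context (e.g. unrequested language switch)."""
    failures = sum(1 for t in transitions if not t["context_preserved"])
    return failures / len(transitions)

# Toy illustration with invented numbers (not empirical results):
scores_a = [8, 10, 9]       # Scenario A: no context
scores_c = [30, 28, 32]     # Scenario C: external context architecture
transitions_c = [{"context_preserved": True}] * 9 + [{"context_preserved": False}]

print(depth_improvement(scores_a, scores_c) >= 0.60)   # prediction: ≥60% gain → True
print(inheritance_failure_rate(transitions_c) < 0.20)  # prediction: <20% failures → True
```

In the actual pilot, each scenario would contribute N=100 query results per system; the same two metrics would then be compared across Scenarios A, B, and C.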

Strategic Insight

Europe cannot compete on AI compute scale. Europe can lead on semantic architecture that enables responsible AI learning. If the validation pilots confirm the hypothesis, Internet Houses would translate EU policy principles (transparency, accountability, human oversight) into actual web infrastructure—providing an architectural implementation path for AI Act compliance.

Methodological Foundation

This use case emerged from unexpected observations, was refined through systematic analysis, and deliberately disentangled phenomena (language vs. depth) that initial intuition conflated. The resulting framework—Context Inheritance rules + external Context Availability—provides a roadmap for Horizon Europe validation pilots that will test whether European digital sovereignty can be achieved through architectural design rather than through competing on computational scale.

Status: The empirical findings (Scenario A vs. B) are established. The architectural solution (Scenario C) is specified and ready for validation. The pilot design provides the methodology to move from hypothesis to evidence.

Appendices

Appendix A: Technical Specifications (Preview)

Context Inheritance Rule Schema:

{
  "@context": "https://internethouses.org/context",
  "@type": "ContextInheritancePolicy",
  "preserveByDefault": ["language", "formality", "domainBoundary"],
  "requireExplicit": ["addresseeShift", "intentReclassification"],
  "transparencyLog": {
    "modeTransitions": "mandatory",
    "contextResets": "user-visible",
    "overrideJustification": "required"
  }
}
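A minimal validator for a policy document of this shape might look as follows. The field names and the requirement that mode-transition logging be "mandatory" are taken from the schema above; the validator itself, including its function name and return convention, is an illustrative sketch, not a reference implementation.

```python
# Required fields, following the ContextInheritancePolicy schema in Appendix A.
REQUIRED_KEYS = {"@context", "@type", "preserveByDefault",
                 "requireExplicit", "transparencyLog"}
REQUIRED_LOG_KEYS = {"modeTransitions", "contextResets", "overrideJustification"}

def validate_policy(policy: dict) -> list:
    """Return a list of human-readable problems; an empty list means the policy passes."""
    problems = []
    missing = REQUIRED_KEYS - policy.keys()
    if missing:
        problems.append("missing keys: %s" % sorted(missing))
    if policy.get("@type") != "ContextInheritancePolicy":
        problems.append("@type must be 'ContextInheritancePolicy'")
    log = policy.get("transparencyLog", {})
    missing_log = REQUIRED_LOG_KEYS - log.keys()
    if missing_log:
        problems.append("transparencyLog missing: %s" % sorted(missing_log))
    if log.get("modeTransitions") != "mandatory":
        problems.append("modeTransitions logging must be 'mandatory'")
    return problems

policy = {
    "@context": "https://internethouses.org/context",
    "@type": "ContextInheritancePolicy",
    "preserveByDefault": ["language", "formality", "domainBoundary"],
    "requireExplicit": ["addresseeShift", "intentReclassification"],
    "transparencyLog": {
        "modeTransitions": "mandatory",
        "contextResets": "user-visible",
        "overrideJustification": "required",
    },
}
print(validate_policy(policy))  # → []
```

A full Internet House validator (TO3) would additionally resolve the @context URL and check the JSON-LD semantics; this sketch only covers structural completeness.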

Appendix B: Measurement Protocols

Semantic Depth Scoring Rubric:

  • Argument reconstruction completeness (0-10)
  • Reasoning chain traceability (0-10)
  • Evidence integration (0-10)
  • Domain appropriateness (0-10)

Total: 0-40 points
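The rubric can be captured in a small scoring type. The four dimension names and the 0-40 total follow the list above; the class itself, including its range check, is an illustrative sketch of how pilot scorers might record assessments.

```python
from dataclasses import dataclass

@dataclass
class DepthScore:
    """One rater's semantic-depth assessment of a single AI response."""
    argument_reconstruction: int  # completeness, 0-10
    reasoning_traceability: int   # reasoning chain traceability, 0-10
    evidence_integration: int     # 0-10
    domain_appropriateness: int   # 0-10

    def total(self) -> int:
        parts = (self.argument_reconstruction, self.reasoning_traceability,
                 self.evidence_integration, self.domain_appropriateness)
        if any(not 0 <= p <= 10 for p in parts):
            raise ValueError("each dimension must be scored 0-10")
        return sum(parts)  # 0-40 points

print(DepthScore(8, 7, 6, 9).total())  # → 30
```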

Appendix C: Related Work & Positioning

Internet Houses builds on but differs from:

  • Semantic Web (adds accountability layer)
  • Schema.org (adds reasoning chains, context rules)
  • Solid (adds AI-specific semantic structures)

Novel contribution: External context architecture specifically designed for responsible AI learning

Document Version: 3.0 (Refined with explicit distinction: Empirical Finding / Architectural Hypothesis / Validation Path)
Date: January 2026
Status: Ready for Horizon Europe RIA proposal integration
Contact: Internet Houses Project / Anja Zoerner, office@webdesignforai.com

Meta-Note on Document Development: This use case document emerged through iterative refinement, including self-correction and hypothesis refinement. Its own development process demonstrates the methodology it advocates: systematic observation, architectural precision, and explicit acknowledgment of what remains to be empirically validated.
