Interlink Bridge Institute
Open Architectural Standards · Structural Governance Research · Global Protocol
Non-Profit Research · Open Standard · CC BY 4.0

Structural Sovereignty:
Architecture Precedes Policy

A constraint-first framework for governable AI systems. Defining governability as a structural property of state-space reachability — not as a post-deployment policy layer.

25+ DOI Records · 7 Core Frameworks · Established 2026
Central Thesis
"Governance must be encoded as a structural property of system dynamics, not applied as a post-deployment policy layer."
— Structural Sovereignty · Master Habilitation · Senke · 2026

Open Architectural Standards

Both manuscripts are available for reading on this page and for download. Published under CC BY 4.0. Implementation specifications remain proprietary.

IB·2026·01
Master Habilitation · Open Standard

Structural Sovereignty: Architecture Precedes Policy — A Constraint-First Theory of Governable AI Systems

Alexanja Senke · Interlink Bridge · Germany · 2026

Introduces Structural Sovereignty as a constraint-first architectural doctrine grounded in dynamical systems theory. Defines a new class of systems — Structurally Governable Dynamical Systems (SGDS) — and proposes constraint-first architecture as a foundational condition for governable AI. Integrates control theory, systems theory, institutional enforcement logic, and AI governance into a unified framework.

SGDS · New System Class · Load–Drift Modeling · Halt Conditions · Dynamical Systems Theory · EU AI Act Compatible · Habilitation-Equivalent
Download via Zenodo · NDA Disclosure available

Abstract

Artificial intelligence systems scale in capability at a rate exceeding the structural mechanisms designed to govern them. Contemporary approaches to AI governance focus on model alignment, risk classification, compliance documentation, and post-hoc auditing. These approaches remain downstream interventions and do not structurally determine whether a system is capable of halting, deferring, or transferring authority when instability emerges.

This work introduces Structural Sovereignty as a constraint-first architectural doctrine grounded in dynamical systems theory. It defines governability as a property of bounded state trajectories, executable halt conditions, responsibility-anchored admissibility domains, and deterministic runtime gating.

A new class of systems — Structurally Governable Dynamical Systems (SGDS) — is formally defined. These systems encode governance within state dynamics rather than layering policy externally.
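The SGDS properties named above — bounded trajectories, an executable halt condition, a responsibility-anchored admissibility domain, and deterministic runtime gating — can be illustrated with a minimal sketch. This is not the formalism of the manuscript; all names, states, and data structures here are hypothetical, chosen only to show governance encoded inside the transition function rather than layered outside it.

```python
from dataclasses import dataclass
from typing import Callable, FrozenSet, Tuple


@dataclass
class GatedSystem:
    """Toy SGDS-style wrapper: every transition passes a deterministic gate."""
    state: str
    admissible: FrozenSet[Tuple[str, str]]  # responsibility-anchored admissibility domain
    halt_condition: Callable[[str], bool]   # executable halt predicate
    halted: bool = False

    def step(self, proposed: str) -> str:
        if self.halted:
            raise RuntimeError("system is halted; authority must be transferred")
        if self.halt_condition(self.state):
            self.halted = True              # halting is structural, not advisory
            return self.state
        if (self.state, proposed) not in self.admissible:
            # non-admissible transition: rejected before execution, state unchanged
            return self.state
        self.state = proposed
        return self.state


sys_ = GatedSystem(
    state="idle",
    admissible=frozenset({("idle", "draft"), ("draft", "review")}),
    halt_condition=lambda s: s == "unstable",
)
assert sys_.step("draft") == "draft"    # admissible transition proceeds
assert sys_.step("deploy") == "draft"   # inadmissible transition: no state change
```

The point of the sketch is that the gate is the only path to a state change: there is no code path by which an inadmissible transition executes and is then evaluated.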

Governance must be encoded as a structural property of system dynamics, not applied as a post-deployment policy layer.

Central Thesis · Structural Sovereignty · 2026

1. Introduction: The Scaling–Governance Asymmetry

Artificial intelligence systems increasingly operate in high-consequence environments including healthcare, finance, defense, governance, and critical infrastructure. These systems scale in parameter count, deployment density, and operational autonomy. Structural constraint mechanisms do not scale proportionally.

This creates a systemic asymmetry:

Capability Growth ≫ Constraint Growth

When capability expands faster than structural boundaries, instability becomes inevitable — even if functional performance metrics remain satisfactory. Governance mechanisms typically evaluate outputs after state transitions have occurred. They do not restrict trajectory formation itself. This produces governance lag.

2. The Structural Gap

Current governance approaches include risk categorization, compliance documentation, alignment tuning, and post-hoc auditing. These operate externally. Structural Sovereignty argues that governability must be internal to the system's state dynamics.

The shift is from regulating outputs to constraining trajectories. This is not a refinement of existing approaches — it is a relocation of the governance enforcement point.

IB·2026·02
Peer-Circulation Manuscript · Open Standard

Structural Sovereignty: A Substrate-Level Framework for Governable AI Systems

Alexanja Senke · Interlink Bridge · Germany · 2026

Proposes a taxonomy distinguishing declaratively governed systems, conditionally enforced systems, and structurally sovereign systems. A system is defined as structurally sovereign when non-admissible state trajectories are architecturally unreachable and authority binding remains invariant under scale, load, and version iteration. Draws from systems theory, cybernetics, control theory, and infrastructure design.

Substrate-Level Framework · Three-Class Taxonomy · State Reachability · Authority Invariance · Infrastructure Analogy · Peer Circulation

Introduction

Advanced AI systems increasingly operate within critical, distributed, and long-horizon environments. As deployment contexts expand, governance challenges shift from isolated behavioral alignment to structural durability under scale.

Most governance models operate above execution. Policies prescribe acceptable behavior. Monitoring systems detect deviation. Oversight bodies interpret and intervene. Yet in such configurations, unsafe trajectories may remain structurally reachable even if normatively disallowed.

This manuscript proposes a structural reframing: governability must be encoded at the substrate level of state transition rather than layered as post hoc evaluation.

Governance–Execution Separation

In many AI architectures, governance and execution occupy distinct layers. Execution generates system trajectories. Governance evaluates them. This separation introduces latency, interpretive dependence, and authority fragmentation under load.

As systems scale, distributed components may preserve behavioral alignment while losing structural boundary coherence. Structural Sovereignty addresses this separation by relocating constraint to the same level as execution logic.

Governance becomes a property of reachable state topology rather than of behavioral interpretation.

§2 · Governance–Execution Separation
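The claim that governance is a property of reachable state topology can be made concrete with a small reachability check: if a state does not appear in the transition relation, it is structurally unreachable rather than merely disallowed. The graph and state names below are hypothetical illustrations, not part of the published framework.

```python
from collections import deque


def reachable(transitions: dict, start: str) -> set:
    """Breadth-first search over an explicit transition relation:
    returns the set of states reachable from `start`."""
    seen, frontier = {start}, deque([start])
    while frontier:
        state = frontier.popleft()
        for nxt in transitions.get(state, ()):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return seen


# Hypothetical state graph: "escalate" is simply absent from the relation,
# so it is architecturally unreachable, not normatively prohibited.
graph = {"idle": ["draft"], "draft": ["review", "idle"], "review": ["idle"]}
assert "escalate" not in reachable(graph, "idle")
```

On this view, verifying governability is a graph question about the system's transition relation, not an interpretive question about its outputs.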
Institute Information
Status: Independent architectural research institute
License: CC BY 4.0 · Implementation specs proprietary
Records: 25+ DOI-secured publications on Zenodo
Contact: AlexanjaGT5S@proton.me · NDA disclosure available

Deployed Public System
Kyber for LibreOffice v2.2: Local AI assistant and document cleaner with Human Commit Boundary enforced
Stack: Mistral AI (Paris) · CroissantLLM (Paris) · Ollama · LibreOffice · Linux
Status: Public release · trilingual DE / FR / EN · live community testing
Record: doi:10.5281/zenodo.19650652 · LibreOffice Extensions listing live

Related Standards
IBOGS-1.0: Interlink Bridge Open Governance Standard · doi:10.5281/zenodo.19070178
GCI-01: Governed Cognitive Interface · doi:10.5281/zenodo.19138098
UGA-01: Unified Governance Architecture · Master Reference
ETL-01: Executive Translation Layer · doi:10.5281/zenodo.19242685
Core Bibliography
Wiener, N. (1948). Cybernetics. MIT Press.
Ashby, W. R. (1956). An Introduction to Cybernetics. Chapman & Hall.
Perrow, C. (1984). Normal Accidents. Basic Books.
Russell, S. (2019). Human Compatible. Viking.
Amodei, D., et al. (2016). Concrete Problems in AI Safety. arXiv preprint.
Open for Collaboration

Governments, regulatory bodies, standards institutes, and research institutions are invited to engage with this framework under NDA or open collaboration terms.

Contact Institute →

The Standard in Six Propositions

These propositions constitute the doctrinal foundation of the Structural Sovereignty standard as published by the Interlink Bridge Institute.

Proposition I · Architecture precedes policy.
Governance that exists only as documentation cannot enforce itself when systems operate at scale. Structural constraint must be embedded at the execution substrate.

Proposition II · Non-admissible transitions must be unreachable by design.
The difference between refusing a transition and making it unreachable is the difference between advisory governance and structural governance.

Proposition III · Authority must remain external to the AI.
Execution authority is always human-anchored. A system that can self-authorize consequence-bearing actions is not structurally governed.

Proposition IV · Constraint must be load-invariant.
Governance that degrades under operational scale is conditional, not structural. Load invariance — ∂C/∂L = 0 — is a classification criterion for SGDS.

Proposition V · Governable AI must be structurally stoppable.
A system that cannot halt, defer, or transfer authority when instability emerges is not governed, regardless of its alignment properties.

Proposition VI · The model is not the system.
The governance layer is the system. Models are interchangeable execution backends within an admissible operational envelope. Origin is irrelevant. Structure is not.

All six propositions published under CC BY 4.0.
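Proposition IV's load-invariance criterion (∂C/∂L = 0) can be illustrated with a sketch in which the admissibility decision is a pure function of the transition alone, so operational load cannot enter the decision path. The gate, states, and load figures below are hypothetical, chosen only to show what "load does not appear in the constraint" means in code.

```python
def gate(transition: tuple, load: int) -> bool:
    """Deterministic admissibility gate. The verdict depends only on the
    transition; `load` never enters the decision, so enforcement strength
    cannot degrade with operational scale (∂C/∂L = 0 by construction)."""
    ADMISSIBLE = frozenset({("idle", "draft"), ("draft", "review")})
    return transition in ADMISSIBLE


# Same verdict at any load level: the constraint is load-invariant.
assert gate(("idle", "draft"), load=1) == gate(("idle", "draft"), load=10**6)
assert gate(("draft", "deploy"), load=1) == gate(("draft", "deploy"), load=10**6)
```

A governance layer that, by contrast, sampled or rate-limited its checks under load would be conditional in exactly the sense the proposition rules out.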

Kyber for LibreOffice

Structural Sovereignty is not presented only as theory. Kyber for LibreOffice is a deployed public system that applies the same governance logic in day-to-day document workflows.

The system runs fully local, requires no cloud infrastructure, and enforces Human Commit Boundary at execution level: AI proposes, the human decides what becomes real. No text is applied automatically.
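The Human Commit Boundary described above can be sketched as a gate in which model output only ever becomes a pending proposal, and the sole path to the document is an explicit human commit. This is an illustrative sketch, not Kyber's actual implementation; the class and method names are hypothetical.

```python
class HumanCommitBoundary:
    """Toy HCB gate: AI output is stored as a proposal and never applied
    automatically. There is no code path from propose() to the document."""

    def __init__(self) -> None:
        self.document = ""
        self._pending = None

    def propose(self, ai_output: str) -> str:
        self._pending = ai_output        # held for review, not applied
        return self._pending

    def commit(self, approved: bool) -> str:
        if approved and self._pending is not None:
            self.document = self._pending
        self._pending = None             # proposal is consumed either way
        return self.document


hcb = HumanCommitBoundary()
hcb.propose("Cleaned paragraph text.")
assert hcb.document == ""                # nothing applied without a commit
hcb.commit(approved=True)
assert hcb.document == "Cleaned paragraph text."
```

Automatic application is excluded structurally: the only write to `document` sits behind the human decision, matching the "AI proposes, the human decides" rule.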

Kyber v2.2 operates in German, French, and English and supports Mistral, Mistral Nemo, and CroissantLLM via Ollama. This makes it directly relevant for sovereignty-oriented public sector deployments across Europe, including French administrative environments.

Local · No cloud dependency
No external API keys, no remote inference path, no data leaving the machine.

HCB · Execution remains human-anchored
AI output is transformed into explicit, reviewable actions. Automatic application is structurally excluded.

FR · French public sector alignment
CroissantLLM support and trilingual prompts make Kyber directly usable in French-language administrative contexts.

EU AI Act Compatibility

The Structural Sovereignty framework operates pre-compliance. It defines the architectural conditions that make compliance structurally enforceable — rather than interpretively monitored.

Where the EU AI Act defines risk categories and oversight obligations, Structural Sovereignty defines the substrate conditions under which those obligations become structurally binding rather than procedurally aspirational.

The framework is compatible with Articles 9, 12, 14, 15, and 22 of the EU AI Act. Implementation specifications are available under NDA for regulatory bodies.

Art. 9 · Risk Management Systems
SGDS encode risk boundaries structurally. Non-admissible transitions are absent before runtime — not detected after.

Art. 12 · Record-Keeping & Logging
DAP delegation chains and append-only audit records provide complete traceable evidence of all admissibility decisions.

Art. 14 · Human Oversight
HCB — Human Commit Boundary — enforces structural presence of human authority before consequence-bearing execution.

Art. 22 · Fundamental Rights Impact
LIAN topology layer excludes entire classes of inadmissible states before any model or agent is invoked.
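The append-only audit record cited under Art. 12 can be sketched as a hash chain: each entry embeds the hash of its predecessor, so any retroactive edit breaks verification. This is a generic illustration of the technique, not the framework's specified record format; class and field names are hypothetical.

```python
import hashlib
import json


class AuditChain:
    """Append-only log: each record stores the hash of its predecessor,
    so tampering with any earlier record is detectable on verification."""

    def __init__(self) -> None:
        self.records = []

    def append(self, decision: dict) -> str:
        prev = self.records[-1]["hash"] if self.records else "0" * 64
        payload = json.dumps(decision, sort_keys=True)
        digest = hashlib.sha256((prev + payload).encode()).hexdigest()
        self.records.append({"prev": prev, "decision": decision, "hash": digest})
        return digest

    def verify(self) -> bool:
        prev = "0" * 64
        for rec in self.records:
            payload = json.dumps(rec["decision"], sort_keys=True)
            expected = hashlib.sha256((prev + payload).encode()).hexdigest()
            if rec["prev"] != prev or rec["hash"] != expected:
                return False
            prev = rec["hash"]
        return True


log = AuditChain()
log.append({"transition": "draft->review", "admissible": True})
log.append({"transition": "review->publish", "admissible": False})
assert log.verify()
log.records[0]["decision"]["admissible"] = False   # retroactive tamper
assert not log.verify()
```

A chain of this shape gives each admissibility decision a position that cannot be silently rewritten, which is the property record-keeping obligations depend on.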