Systems that understand their own state before they change.
A research initiative exploring state-aware learning and adaptive system dynamics.
Research Overview
We study how machine learning systems can observe their own internal state before adapting — to reduce destructive updates and enable stable learning dynamics.
Current machine learning systems optimize toward external loss signals, but lack an explicit model of their own internal structural state. This leads to instability, destructive updates, and catastrophic forgetting.
Self-Alignment Learning (SAL) introduces a state-aware feedback layer between observation and parameter update. Instead of treating all updates as equally safe, SAL reads internal signals to distinguish stable structure from regions still available for adaptation.
The Problem
Models optimize external loss without awareness of internal structural state — leading to blind overwriting of consolidated knowledge.
The Approach
Read internal signals (carry, transfer, entropy) before updating. Distinguish load-bearing structure from available movement.
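As a minimal sketch of this idea: the three signals could be reduced to per-region scalars and checked before any parameter update is allowed. The threshold values and the exact definitions below are illustrative assumptions, not the project's operational rule.

```python
import math

def shannon_entropy(probs):
    """Shannon entropy (in nats) of a discrete distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def region_open_for_update(carry, transfer, entropy, entropy_floor=0.5):
    """Gate an update on internal state, read *before* acting.

    A region is treated as consolidated (load-bearing) when carry is
    high and transfer is near zero; elevated entropy marks it as still
    available for movement. All thresholds here are placeholders.
    """
    consolidated = carry > 0.9 and abs(transfer) < 0.05
    return (not consolidated) or entropy > entropy_floor
```

In use, such a gate would sit between the gradient computation and the optimizer step, skipping or damping updates to regions flagged as load-bearing.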
What Is Not Claimed Yet
We do not yet claim to solve alignment or to eliminate forgetting. The current stage is structured observation and reproducible state detection.
Direction
From reproducible state observation toward minimal conditional intervention — protecting what should not change.
Current Work
A working transformer-side proof-of-concept (GPT-2 small, 1000-step runs).
Focus: reproducible internal state observation — not yet intervention.
Reproducible Internal State Classes
Confirmed · Seeds 41–43
Across seeded runs we observe a recurring structural state: carry stable, transfer at zero, entropy elevated across all layers. We call this the quiet-open window. It reproduces independently of initial weights.
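The quiet-open window, as described, can be checked mechanically over a run trace. The sketch below assumes aligned per-step series for the three signals; the tolerance and floor values are illustrative placeholders, not the detection thresholds used in the actual runs.

```python
def quiet_open_window(carry, transfer, entropy,
                      carry_tol=0.05, entropy_floor=1.0, zero_eps=1e-6):
    """Detect the quiet-open window over aligned per-step series:
    carry stable, transfer at zero, entropy elevated throughout.
    Threshold values are illustrative assumptions."""
    carry_stable = max(carry) - min(carry) <= carry_tol
    transfer_zero = all(abs(t) < zero_eps for t in transfer)
    entropy_high = min(entropy) >= entropy_floor
    return carry_stable and transfer_zero and entropy_high
```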
Transition Taxonomy
Explicit Rule · Type A / B
Strong transition events are classified under a strict operational rule. Type A (open transition): entropy elevated at spike. Type B (blind spike): entropy suppressed at spike. 37–42 events per 1000 steps, consistent across seeds.
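The stated rule is simple enough to express directly. The sketch below is a hypothetical rendering of it; the spike threshold and entropy floor are our assumptions, not the operational values used to produce the 37–42 event counts.

```python
def classify_transition(spike, entropy_at_spike,
                        spike_threshold=1.0, entropy_floor=1.0):
    """Classify a transition event under the Type A / B rule:
    Type A (open transition): entropy elevated at the spike.
    Type B (blind spike): entropy suppressed at the spike.
    Returns None for sub-threshold (non-strong) events.
    Threshold values are illustrative assumptions."""
    if spike < spike_threshold:
        return None
    return "A" if entropy_at_spike >= entropy_floor else "B"
```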
Layer Depth Gradient
Observed
Transfer activity concentrates in deeper layers (8–10), with early layers remaining comparatively stable. This structural gradient emerges during training without explicit constraint.
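One way to summarize such a gradient from logged per-layer transfer values is sketched below (indexing and the deep/early split are our illustrative choices, assuming a 12-layer GPT-2-small-style stack):

```python
def depth_profile(transfer_by_layer):
    """Summarize where transfer activity concentrates across depth.

    transfer_by_layer: per-layer mean transfer, index 0 = earliest layer.
    Returns the peak-activity layer and the deep/early activity ratio;
    a ratio well above 1 indicates concentration in deeper layers.
    """
    n = len(transfer_by_layer)
    early = sum(transfer_by_layer[: n // 2])
    deep = sum(transfer_by_layer[n // 2:])
    peak = max(range(n), key=lambda i: transfer_by_layer[i])
    return {"peak_layer": peak, "deep_to_early": deep / max(early, 1e-9)}
```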
Orchestration & Memory Layer
Active · Ongoing
A runtime architecture under development that treats memory, routing, and feedback not as side effects but as primary design concerns. Core components: append-only event memory with differentiation logic, claim resolution, anti-amplification constraints, role elasticity, and observable state transitions.
Operates outside the model weights — between inference and application logic.
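To make the append-only-memory-with-differentiation idea concrete, here is a minimal sketch: content that has already been recorded is marked as an echo rather than new evidence, so repetition cannot masquerade as confirmation. The class name, field names, and hashing rule are our assumptions, not the actual design.

```python
import hashlib
from dataclasses import dataclass, field

@dataclass
class EventMemory:
    """Append-only event memory with a minimal differentiation rule:
    content already seen is recorded as an 'echo', not as 'evidence'.
    Illustrative sketch of the anti-amplification constraint."""
    events: list = field(default_factory=list)
    _seen: set = field(default_factory=set)

    def append(self, content: str, source: str) -> dict:
        digest = hashlib.sha256(content.encode()).hexdigest()
        kind = "echo" if digest in self._seen else "evidence"
        self._seen.add(digest)
        event = {"content": content, "source": source, "kind": kind}
        self.events.append(event)  # append-only: entries are never rewritten
        return event
```

Even in this toy form, the log preserves the full history (both entries are kept) while the differentiation label prevents the echo from being counted as independent support.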
Initial evidence of structured, state-dependent dynamics during transformer training. Full intervention validation is the planned next phase.
Early Paper
Self-Alignment Learning — Initial Exploration (2025)
Early Theory
This paper represents the initial conceptual framing of Self-Alignment Learning. Since publication, the concept has evolved significantly through experimental work and transformer-side observations. The core intuition holds; the empirical grounding is now substantially deeper.
Read paper →
Core Idea
Three concepts that form the foundation of the approach.
State
Every system has an internal structural state. Reading it before acting is the prerequisite for coherent adaptation.
Relation
What matters is not only the object but its position within the system. Relation determines effect more than the object alone.
Stability vs Adaptation
Not everything should change at once. Distinguishing consolidated structure from available movement is the core challenge.
Endogenous Signal
The signal for when and how to adapt should emerge from within the system — not only from external loss.
Where We Work
AI systems rest on a layered foundation built over decades. We do not replace that stack — we build responsibly on top of it, targeting the layer where stability, memory, and feedback need explicit design.
Each layer in this stack was built by serious people over decades. Our work does not attempt to replace or circumvent it. We use PyTorch, standard transformer architectures, and established inference runtimes as the foundation — and ask what still needs to be built above them: observable state, structured memory, and feedback that does not silently drift.
Feedback is unavoidable
Any system carrying context over time has feedback loops. The question is whether they are observable and constrained — or silent and accumulating.
Repetition is not truth
Without explicit differentiation, a system can amplify its own assumptions. We build architecture that distinguishes echo from new evidence.
Observability first
A system whose internal states are not readable cannot be responsibly improved. We treat observability as a prerequisite, not a feature.
No overclaim
We are not solving alignment. We are building the structural layer that makes responsible alignment work possible — state-aware, honest, measurable.
About
Aaron Liam Lee
Independent AI Researcher · Emergenzwerke® · Germany
Working on endogenous learning dynamics and state-aware training methods. The question is not only how models learn — but whether they can understand what to preserve while learning.
Impressum — Legal Notice
Required legal disclosure under German law (§ 5 TMG). Emergenzwerke is a registered German sole proprietorship (Einzelunternehmen).
Information pursuant to § 5 TMG
Aaron Liam Lee
Sole proprietorship (Einzelunternehmen), trading as: Emergenzwerke
Bottroper Straße 136
45964 Gladbeck
Germany
Contact
Email: aaronliamlee@emergenzwerke.de
Trademark
The trademark Emergenzwerke® is registered with the German Patent and Trade Mark Office (DPMA).
Filed: 08.09.2025 · Registered: 04.02.2026
Responsible for content pursuant to § 18 (2) MStV
Aaron Liam Lee, Bottroper Straße 136, 45964 Gladbeck
This website contains no tracking cookies and collects no personal data.
Datenschutz — Privacy Policy
Required privacy disclosure under German law (DSGVO / GDPR). This site does not use tracking, cookies, or third-party analytics.
This website uses no tracking technologies, no cookies, and no third-party analytics services. No personal data is stored or shared for marketing purposes.
When this website is accessed, the hosting provider processes technically necessary connection data (e.g. IP address, timestamp, page requested, browser type) in server log files. This data is stored only briefly to ensure system security and smooth operation (legal basis: Art. 6 (1) lit. f GDPR). It is not evaluated for any other purpose.
Controller within the meaning of the GDPR:
Aaron Liam Lee · Bottroper Straße 136 · 45964 Gladbeck
aaronliamlee@emergenzwerke.de