December 2025 Foundation of SAL

Self-Alignment Learning (SAL)

Training as Structured Communication

Aaron Liam Lee · Emergenzwerke

Abstract

Many current training and fine-tuning procedures optimize neural networks primarily through external loss signals, with no explicit model of the system's internal structural state. Updates operate on aggregated error signals that compress rich internal representations into scalar objectives, without distinguishing stable structure from transient variation.

This can lead to destructive updates, instability, and forms of catastrophic forgetting in certain settings, particularly when small but structurally relevant differences are not explicitly preserved during optimization.

We introduce Self-Alignment Learning (SAL), a training paradigm that reframes optimization as a structured communication process between external objectives and the model's internal organization.

Rather than overwriting learned representations, SAL aims to detect and preserve coherent structures while enabling continued adaptation. This approach explores a path toward reducing destructive updates and improving stability, while maintaining flexibility for learning.

Key Concepts

Communication Layer

Mediates between loss functions and optimizer through parameter stability analysis.

Stability Detection

s(p) = 1/(1 + Δw × g_norm) identifies consolidated parameters.

Adaptive Threshold

τ = τ₀ + α × (σ/μ) responds to training dynamics.

Soft Protection

Graduated gradient scaling preserves plasticity.

The Communication Layer above corresponds to the Signal Activation Layer at the micro-level of the SAL architecture.
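As an illustrative sketch of how the three formulas above compose (hypothetical helper names, not the published SAL implementation):

```python
from statistics import mean, stdev

def stability_score(delta_w: float, grad_norm: float) -> float:
    # s(p) = 1 / (1 + Δw × g_norm): high for parameters that change little
    # and receive small gradients, i.e. consolidated structure.
    return 1.0 / (1.0 + delta_w * grad_norm)

def adaptive_threshold(scores, tau0: float = 0.5, alpha: float = 0.1) -> float:
    # τ = τ₀ + α × (σ/μ): the threshold rises when scores are widely dispersed.
    mu, sigma = mean(scores), stdev(scores)
    return tau0 + alpha * (sigma / mu)

def protection_factor(score: float, tau: float) -> float:
    # Soft protection: gradients of parameters above the threshold are scaled
    # down proportionally instead of being frozen outright.
    return tau / score if score > tau else 1.0

# Toy per-parameter statistics: (weight change Δw, gradient norm)
stats = [(0.001, 0.01), (0.5, 2.0), (0.1, 0.3)]
scores = [stability_score(dw, g) for dw, g in stats]
tau = adaptive_threshold(scores)
factors = [protection_factor(s, tau) for s in scores]
```

In this toy run the most stable parameter receives the strongest gradient suppression while the volatile one trains at full strength, matching the graduated gradient scaling described above.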

Integration

# Minimal integration: 2 lines added to standard training loop

output = model(input)
loss = criterion(output, target)
loss.backward()

comm_layer.analyze(model)   # read weight-change and gradient statistics per parameter
comm_layer.protect(model)   # softly scale gradients on consolidated parameters

optimizer.step()
optimizer.zero_grad()

Results

3.6× improvement in minimum accuracy (MNIST continual learning)
~10% computational overhead (compatible with standard optimizers)

Terminology Note

SAL is a context-invariant concept; the same acronym intentionally operates across two distinct levels of the architecture.

Micro-Level · The Mechanism

Signal Activation Layer

The concrete, PyTorch-native communication layer between observation and parameter update. Reads carry, transfer, and entropy signals from the model's internal state.

Macro-Level · The Paradigm

Self-Alignment Learning

The emergent, state-aware training behavior enabled by the underlying Signal Activation Layer. A new learning paradigm where the model reads its own state before it changes.

The layer enables the learning. The acronym remains invariant across both levels.

Research Plots

Visual highlights from the SAL experiments.

Gradient preservation plot

Gradient Preservation

SAL suppresses gradients on consolidated parameters.

Stability spectrum plot

Stability Spectrum

Protected / neutral / volatile parameter distribution.

Drift reduction plot

Drift Reduction

Semantic drift reduction across continual learning runs.

Emergence map plot

Emergence Map

Coherence × novelty landscape; emergent zones highlighted.

Pulse-Split-Cascade flow plot

PSC Flow

Pulse-Split-Cascade as semantic Game of Life for idea evolution.

Early Research Note

January 2026 SAL Extended Research

Emergent Structure Without Smoothing

The original 64D PoC code is deprecated; focus shifted entirely to current Phase A transformer validation. The theoretical problem described here remains the mathematical foundation of the SAL architecture.

The Core Problem: Smoothing Loss

When a system must expose a lower-dimensional view of a richer internal state, fixed projections can erase unexpected structure. If those erased components correspond to emergent features, downstream observers misinterpret the system as smoother than it is.
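A toy example of this failure mode (illustrative only, not SAL code): a fixed projection that drops one component maps two internally different states to the same observation.

```python
def fixed_projection(state):
    # Fixed low-dimensional view: only the first two components survive.
    return state[:2]

state_a = (0.8, 0.1, 0.0)  # no structure in the discarded component
state_b = (0.8, 0.1, 0.9)  # strong emergent structure in the discarded component

# Downstream observers see identical projections and conclude the system is
# "smooth", even though the states differ exactly where it matters.
print(fixed_projection(state_a) == fixed_projection(state_b))  # True
```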

The p^L Scaling Argument

Why is 98–99% per-token accuracy often not enough for long-horizon coherence? If per-token correctness is p and a sequence requires L correct tokens, the probability that the entire sequence is correct is p^L.

0.99^1000 ≈ 4.3 × 10⁻⁵

This argument motivates an explicit residual channel: the remaining mismatch must be carried rather than smoothed away, or the system drifts toward catastrophic forgetting.
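The decay can be checked numerically (a trivial sketch; `sequence_success` is an illustrative name, and per-token errors are assumed independent):

```python
def sequence_success(p: float, length: int) -> float:
    # Probability that all `length` tokens are correct, assuming independence.
    return p ** length

print(f"{sequence_success(0.99, 1000):.2e}")   # ≈ 4.32e-05
print(f"{sequence_success(0.999, 1000):.3f}")  # ≈ 0.368: even 99.9% decays fast
```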

The Proposed Principle

An adaptive residual path that carries additional information when needed: a stable prediction path plus an innovation side-channel, following the classical "prediction + residual" pattern.
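A minimal sketch of that pattern (illustrative names; an always-on residual rather than the adaptive variant proposed above):

```python
def encode(x: float, predict):
    prediction = predict(x)
    residual = x - prediction   # the innovation: carried explicitly, not smoothed away
    return prediction, residual

def decode(prediction: float, residual: float) -> float:
    return prediction + residual  # reconstruction is lossless by construction

coarse_model = lambda x: round(x)  # a deliberately crude stable prediction path
pred, res = encode(3.7, coarse_model)
print(decode(pred, res))  # 3.7
```

The residual here is the per-step "innovation cost" discussed below: the extra information the system pays to avoid discarding what the stable path cannot yet explain.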

Honest trade-off — Innovation Cost: The residual channel is not free. Carrying the remaining mismatch explicitly requires extra bits per step — a real compute and memory overhead we call innovation cost. This cost is the price of structural honesty: the system does not compress away what it cannot yet explain. SAL Phase B will quantify this overhead against the stability gains achieved.

Other Research

Cellular Memory Systems

Cellular automata-inspired memory architecture where semantic units persist across lifecycle transitions through pattern pooling rather than parameter freezing.

Status: Experimental · Not yet published

Stability Metrics for Neural Networks

Methods to identify consolidated parameters through weight-gradient analysis with adaptive thresholds responding to training dynamics.

Status: Ongoing