Architecting the Physical Substrate of Intelligence

We co-design algorithms, device physics, and materials from first principles using the Free Energy Principle (FEP) as our unifying objective. Our goal is a free-energy-minimizing substrate for adaptive intelligence—from molecular optimization to perceptual inference.

The Paradigm Shift

The walls facing AI are symptoms of a deeper computational inefficiency: using deterministic, high-precision Boolean logic to simulate stochastic, energy-constrained, physical processes. The most critical discovery problems in biology, chemistry, and medicine are not data puzzles; they are complex physical processes.

Three walls:

  • Thermodynamic: Landauer's principle sets a lower bound on the energy per bit operation, and conventional AI treats noise as a nuisance rather than a resource.
  • Formulation bottleneck (quantum): mapping real-world optimization onto quantum hardware suffers from the "penalty paradox": poor constraint formulation yields invalid solutions or excessive qubit overhead.
  • Architectural mismatch (neuromorphic): spiking networks promise brain-like efficiency but are trained via backpropagation through time (BPTT) on GPUs, which is biologically implausible and energy-inefficient.

We believe these are not independent problems. They stem from a lack of a unifying physical objective. We look to the only proof of general intelligence: the biological brain. It is not a digital computer, but a self-organizing system whose core directive is to minimize free energy.

Our hypothesis: The Free Energy Principle (FEP) provides this objective: minimize variational free energy (F = Prediction Error + λ × Energy Cost). Our mission is to architect the physical matter that embodies it. This is not incremental improvement. It is a new substrate for discovery.
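The objective stated above can be made concrete with a minimal sketch. The function name, the squared-error form of the prediction-error term, and the example values are illustrative assumptions, not our production objective:

```python
import numpy as np

def variational_free_energy(prediction, observation, energy_cost, lam=0.1):
    """Illustrative FEP objective: F = prediction error + lambda * energy cost.
    The squared-error form is an assumption for this sketch."""
    prediction_error = np.sum((observation - prediction) ** 2)
    return prediction_error + lam * energy_cost

# Gradient descent on F drives both better prediction and lower dissipation.
obs = np.array([1.0, 0.5, -0.2])   # sensory data
pred = np.array([0.8, 0.6, 0.0])   # generative-model prediction
F = variational_free_energy(pred, obs, energy_cost=2.0, lam=0.1)
```

The single scalar F couples accuracy and energy: lowering either term lowers F, and λ sets the exchange rate between them.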

Our programs are instantiations of a single free-energy-minimizing (FEM) architecture—a heterogeneous system where a neuromorphic core (Logos) implements stochastic inference, a thermodynamic sampler (Hades) enforces an energy prior and provides native noise for sampling, quantum processes (MMP) tune parameters and satisfy constraints, and application layers (OctaViT, MMP-as-action) serve as high-precision I/O and validation. Thermodynamic and quantum principles drive efficient, adaptive computation across scales—from the molecular to the perceptual.

01 / BUILD

Program Stack

High-precision sensory and interaction layers that interface with the world.

MMP is a hybrid quantum–classical protocol for protein–ligand docking that reformulates constraints via penalty calibration to run efficiently on NISQ hardware. Within the architecture, MMP tunes parameters of the generative model (e.g., synaptic weights in Logos, the energy landscape in Hades) by solving high-dimensional, constrained optimization problems. It addresses the "blind docking" problem by letting models explore the full protein surface, outperforming prior state-of-the-art deep learning on benchmarked target clusters.

  • Benchmark: 25.5% improvement in binding-affinity predictions vs classical baselines.
  • Patent-pending penalty calibration resolves QAOA noise sensitivity.
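The "penalty paradox" can be illustrated on a toy QUBO: a penalty weight that is too small lets the sampler return constraint-violating states, while one that is too large swamps the objective. The sketch below folds a one-hot constraint into a QUBO and picks the smallest penalty whose brute-force ground state is valid. It is a generic stand-in for the idea, not the patent-pending calibration protocol itself:

```python
import itertools
import numpy as np

def qubo_energy(Q, x):
    """Energy of binary vector x under QUBO matrix Q."""
    return x @ Q @ x

def add_one_hot_penalty(Q, penalty):
    """Fold the constraint sum(x) == 1 into the QUBO as
    penalty * (sum(x) - 1)^2. Expanding (and dropping the constant)
    adds -penalty on the diagonal and +penalty off-diagonal."""
    n = Q.shape[0]
    return Q.astype(float) + penalty * (np.ones((n, n)) - 2 * np.eye(n))

def smallest_valid_penalty(Q, candidates):
    """Illustrative calibration: the smallest penalty whose (brute-force)
    ground state satisfies the one-hot constraint."""
    n = Q.shape[0]
    for p in candidates:
        Qp = add_one_hot_penalty(Q, p)
        best = min(itertools.product([0, 1], repeat=n),
                   key=lambda x: qubo_energy(Qp, np.array(x)))
        if sum(best) == 1:
            return p, best
    return None, None

# Two candidate poses, both attractive: unconstrained optimum picks both.
Q = np.array([[-2.0, 0.0], [0.0, -3.0]])
p, best = smallest_valid_penalty(Q, [1.0, 2.5, 4.0])
```

At penalty 1.0 the ground state is the invalid (1, 1); at 2.5 the valid one-hot state wins. Real calibration must do this without exhaustive enumeration, which is exactly where it gets hard.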

A pseudo-3D Vision Transformer for volumetric OCT image classification. OctaViT provides the "accuracy" term of the FEP by minimizing sensory prediction error. Unlike traditional 2D models, it classifies full volumes rather than isolated slices, mirroring how clinicians read scans, and serves as the architecture's high-bandwidth perceptual front end, supplying stabilized sensory data to the core inference engine.

  • Benchmark: 0.96 AUC on public datasets for disease classification.
  • Benchmark: <100ms inference on standard medical GPU substrates.
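The input stage of a volumetric ViT can be sketched as non-overlapping 3D patch tokenization. The patch size and layout here are assumptions for illustration, not OctaViT's actual configuration:

```python
import numpy as np

def volume_to_tokens(volume, patch=(4, 16, 16)):
    """Split a (D, H, W) OCT volume into non-overlapping 3D patches and
    flatten each into a token: the input stage of a volumetric
    ("pseudo-3D") ViT. Patch size is an illustrative assumption."""
    D, H, W = volume.shape
    pd, ph, pw = patch
    assert D % pd == 0 and H % ph == 0 and W % pw == 0
    return (volume
            .reshape(D // pd, pd, H // ph, ph, W // pw, pw)
            .transpose(0, 2, 4, 1, 3, 5)       # group patch axes together
            .reshape(-1, pd * ph * pw))        # (num_patches, patch_voxels)

vol = np.random.rand(8, 64, 64)   # small synthetic volume
toks = volume_to_tokens(vol)      # 2 * 4 * 4 = 32 tokens of 1024 voxels
```

Each token is then linearly projected and fed to a standard transformer encoder, so the model attends across the full volume instead of one slice at a time.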

02 / ENGINE

Core Architecture

The central inference unit and physics-aligned prior that unify the stack.

Logos Core is the Active Inference Engine of our architecture—a hybrid neuromorphic framework that implements the FEP's perception-action loop via Spiking Neural Networks. Its internal model is updated to minimize prediction error; spiking dynamics are optimized by quantum processes (MMP) and regulated by thermodynamic efficiency (Hades). It is the physical instantiation of active inference.

  • Benchmark: Brain-scale efficiency in silicon (1000x over standard GPUs).
  • Benchmark: Reactivity paired with event-based cameras (~2ms reaction time).

Hades' Funnel is the Thermodynamic Prior in our architecture—it provides the "complexity" term in the FEP and ensures the generative model favors explanations requiring minimal dissipative work. It enforces an energy prior and provides native noise for sampling. Treating state transitions as energy-minimizing paths, Hades reduces training energy through physical dissipative dynamics and turns physical noise into a computational resource.

  • Benchmark: 30–40% reduction in training energy requirements.
  • Physical noise is treated as a computational resource, not an error source.
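"Noise as a resource" has a standard mathematical expression: overdamped Langevin dynamics, where injected fluctuations are the sampling mechanism itself. The sketch below is that generic idea applied to a quadratic energy, not Hades' device physics:

```python
import numpy as np

def langevin_sample(grad_energy, x0, steps=1000, step=0.01, temp=1.0, seed=0):
    """Overdamped Langevin dynamics: gradient descent on the energy plus
    thermal noise. Draws approximate samples from p(x) ~ exp(-E(x)/temp),
    so the noise is doing the computation, not corrupting it."""
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        x += -step * grad_energy(x) + np.sqrt(2 * step * temp) * rng.normal(size=x.shape)
    return x

# Quadratic energy E(x) = x^2 / 2: samples concentrate in a unit Gaussian
# around the minimum, even when initialized far away.
samples = np.array([langevin_sample(lambda x: x, [5.0], seed=s)[0]
                    for s in range(200)])
```

A digital simulator must generate this noise with pseudo-random arithmetic; a thermodynamic substrate gets the same stochastic term from physical dissipation for free.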

Coupling mechanisms (the architecture's core innovation)

  • Energy-Guided Plasticity: The effective "temperature" or dissipation rate from Hades modulates the learning rate and noise injection in Logos Core's STDP rule.
  • Confidence-Gated Attention: OctaViT's prediction confidence scores gate signal routing in Logos Core's spiking network, stabilizing perception.
  • Quantum-Calibrated Constraints: MMP's penalty calibration optimizes the energy landscape that Hades' Funnel samples from, aligning stochastic search with problem structure.
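The first coupling above can be sketched in a few lines. The function name, the linear temperature scaling, and the noise model are assumptions made for illustration; the actual modulation rule is not specified here:

```python
import numpy as np

def energy_guided_stdp(dw_stdp, temperature, base_lr=1e-3,
                       noise_scale=0.1, rng=None):
    """Energy-Guided Plasticity, sketched: the effective temperature
    reported by the thermodynamic sampler scales both the STDP learning
    rate and the injected exploration noise. Linear scaling is an
    illustrative assumption."""
    rng = rng or np.random.default_rng(0)
    lr = base_lr * temperature   # hotter substrate -> faster, noisier learning
    noise = noise_scale * temperature * rng.normal(size=np.shape(dw_stdp))
    return lr * (np.asarray(dw_stdp) + noise)

dw = energy_guided_stdp(np.ones(3), temperature=1.0)
```

At zero temperature the update vanishes entirely, so plasticity is gated by how much dissipative budget the thermodynamic layer reports as available.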

Ecosystem Partners
QUANTUM CONTINUUM
NVIDIA H100
UCSF HEALTH
AWS FRONTIER
BAIR LAB

"Intelligence is a physical process that minimizes free energy. We are building the matter that does this."

The architecture is a heterogeneous system instantiating active inference: Logos (inference), Hades (prior), MMP (constraints), OctaViT (perception).

FOLLOW MAHAMAIA