🧬 Darwin Family: Zero Gradient Steps, GPQA Diamond 88.89%
How far can we push LLM reasoning *without* training?
Our team at VIDRAFT submitted this paper to Daily Papers yesterday, and it's currently #3. Huge thanks to everyone who upvoted — sharing the core ideas below.
Darwin Family is a training-free evolutionary merging framework. By recombining the weight spaces of existing LLM checkpoints — with zero gradient-based training — it reaches frontier-level reasoning.
- 🏆 Darwin-28B-Opus: GPQA Diamond 88.89%
- 💸 Zero gradient steps — not a single B200 or H200 hour needed
- 🧬 Consistent gains across the 4B → 35B scale range
- 🔀 Cross-architecture breeding between Transformer and Mamba families
- 🔁 Stable recursive multi-generation evolution
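To make "evolutionary merging with zero gradient steps" concrete, here is a minimal sketch of the kind of loop involved: a population of per-layer mixing genomes is mutated, scored, and selected, with no gradient ever touching the model weights. Everything in it is illustrative; the toy checkpoints, the dummy fitness function, and the simple elitist selection stand in for the paper's benchmark-driven search.

```python
import numpy as np

rng = np.random.default_rng(0)

def merge(parent_a, parent_b, genome):
    """Layer-wise interpolation: genome[i] in [0, 1] mixes layer i
    (0 keeps parent_a, 1 keeps parent_b)."""
    return [(1 - g) * wa + g * wb
            for g, wa, wb in zip(genome, parent_a, parent_b)]

def fitness(weights):
    # Placeholder objective. In practice this would be the merged
    # model's score on a held-out reasoning benchmark.
    return -sum(float(np.abs(w).mean()) for w in weights)

# Toy "checkpoints": two stacks of per-layer weight matrices.
n_layers = 4
parent_a = [rng.normal(size=(8, 8)) for _ in range(n_layers)]
parent_b = [rng.normal(size=(8, 8)) for _ in range(n_layers)]

# Elitist evolution over merge genomes: score, select, mutate.
pop = [rng.uniform(0, 1, size=n_layers) for _ in range(8)]
for _ in range(20):
    pop.sort(key=lambda g: fitness(merge(parent_a, parent_b, g)),
             reverse=True)
    elite = pop[:4]
    # Mutate elites with Gaussian noise, clipped back to [0, 1].
    pop = elite + [np.clip(g + rng.normal(0, 0.1, size=n_layers), 0, 1)
                   for g in elite]

best = max(pop, key=lambda g: fitness(merge(parent_a, parent_b, g)))
print("best per-layer mix:", np.round(best, 2))
```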
# Three Core Mechanisms
① 14-dim Adaptive Merge Genome — fine-grained recombination at both the component level (Attention / FFN / MLP / LayerNorm / Embedding) and the block level, expanding the search space of prior evolutionary-merge methods.
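The paper's exact 14 genome dimensions aren't reproduced in this post, but the two-granularity idea can be sketched as below. The multiplicative combination of block-level and component-level coefficients is my assumption, not the paper's rule.

```python
from dataclasses import dataclass, field

# Component types named in the post; the combination rule below is
# a hypothetical choice for illustration.
COMPONENTS = ("attention", "ffn", "mlp", "layernorm", "embedding")

@dataclass
class MergeGenome:
    block_coef: float = 0.5  # coarse, block-level mixing weight
    component_coefs: dict[str, float] = field(
        default_factory=lambda: {c: 0.5 for c in COMPONENTS})

    def coef_for(self, component: str) -> float:
        # Effective mixing weight for one tensor: the block-level gene
        # scaled by the matching component-level gene, so the search
        # can act at both granularities independently.
        return self.block_coef * self.component_coefs[component]

genome = MergeGenome(block_coef=0.8)
print(genome.coef_for("attention"))  # 0.8 * 0.5 = 0.4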
② MRI-Trust Fusion — we diagnose each layer's reasoning contribution via an **MRI (Model Reasoning Importance)** signal and fuse it with evolutionary search through a **learnable trust parameter**. Trust the diagnostic too much and search collapses; ignore it and search becomes inefficient — Darwin learns the balance from data.
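One plausible reading of that fusion, as a sketch: a scalar trust parameter interpolates between the normalized MRI diagnostic and the coefficients proposed by the search. The function and variable names below are assumptions, and the trust value is fixed here even though the paper learns it from data.

```python
import numpy as np

def fuse_coefficients(mri_scores, evo_genome, trust):
    """Blend a per-layer diagnostic prior with evolutionary proposals.

    mri_scores: raw per-layer reasoning-importance diagnostics.
    evo_genome: per-layer mixing coefficients proposed by the search.
    trust: scalar in [0, 1]; learned from data in the paper, fixed here.
           trust -> 1 follows the diagnostic, trust -> 0 follows search.
    """
    mri = np.asarray(mri_scores, dtype=float)
    mri = (mri - mri.min()) / (mri.max() - mri.min() + 1e-8)  # to [0, 1]
    return trust * mri + (1.0 - trust) * np.asarray(evo_genome, dtype=float)

# Example: a diagnostic that ranks layer 2 as most reasoning-critical.
print(fuse_coefficients([0.1, 0.4, 0.9, 0.3], [0.7, 0.2, 0.5, 0.6], trust=0.6))
```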
③ Architecture Mapper — weight-space breeding across heterogeneous families. Attention × SSM crossover actually works.
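The post doesn't detail the Mapper's mechanics, so the sketch below shows one common heuristic that could serve as a baseline: align layers by normalized depth, blend shape-compatible tensors (embeddings, norms, MLPs), and leave mismatched sequence mixers (attention vs. SSM) untouched. The depth-mapping rule and naming scheme are assumptions.

```python
import numpy as np

def map_layers(n_src: int, n_dst: int) -> dict[int, int]:
    """Align two stacks of different depth by normalized depth: a
    simple heuristic stand-in for the paper's Architecture Mapper."""
    return {j: round(j * (n_src - 1) / max(n_dst - 1, 1))
            for j in range(n_dst)}

def cross_breed(dst, src, n_dst_layers, n_src_layers, coef=0.5):
    """Merge shape-compatible tensors across heterogeneous families.

    dst/src map 'layer{i}.{component}' -> array (a hypothetical naming
    scheme). Tensors with no shape-compatible counterpart in the source
    (e.g. attention vs. SSM mixers) keep the target model's weights.
    """
    layer_map = map_layers(n_src_layers, n_dst_layers)
    merged = {}
    for name, w in dst.items():
        layer, comp = name.split(".", 1)
        j = int(layer.removeprefix("layer"))
        s = src.get(f"layer{layer_map[j]}.{comp}")
        if s is not None and s.shape == w.shape:
            merged[name] = (1 - coef) * w + coef * s  # compatible: blend
        else:
            merged[name] = w                          # incompatible: keep
    return merged

# Toy example: 4-layer attention-based target, 6-layer SSM-based source.
rng = np.random.default_rng(1)
dst = {f"layer{i}.mlp": rng.normal(size=(4, 4)) for i in range(4)}
dst |= {f"layer{i}.attn": rng.normal(size=(4, 12)) for i in range(4)}
src = {f"layer{i}.mlp": rng.normal(size=(4, 4)) for i in range(6)}
src |= {f"layer{i}.ssm": rng.normal(size=(4, 7)) for i in range(6)}

merged = cross_breed(dst, src, n_dst_layers=4, n_src_layers=6)
print(np.allclose(merged["layer0.attn"], dst["layer0.attn"]))  # True: kept
```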
# Why It Matters

> Diagnose latent capabilities already encoded in open checkpoints,
> and recombine them — no gradients required.