Commit 8b5661d (verified, parent 76ebc0d), committed by RFTSystems: Create README_stage6.md

# README_stage6.md

# Stage Six — ViT‑Base (Full-Scale ImageNet-1K Validation)

**Rendered Frame Theory (RFT)**
Author: Liam S. Grinstead
Date: Oct 2025

---

## Abstract
Stage Six extends RFT validation to the ViT‑Base architecture on the full ImageNet‑1K dataset. This stage provides large‑scale proof of coherence‑governed energy efficiency at transformer depth and width typical of production models. Using unified telemetry from earlier stages, RFT (DCLR + Ψ–Ω coupling) is compared against Adam under identical training parameters. Results show a verified reduction in energy per training step, stable drift/flux, and matched or improved accuracy.

---

## Objective
Demonstrate that the DCLR + Ψ–Ω governor remains stable and efficient under full ImageNet‑1K conditions, validating scalability beyond smaller ViT models and confirming production‑grade coherence.

---

## Methodology
- Model: ViT‑Base (patch 16, dim 768, depth 12, heads 12, MLP ratio 4)
- Dataset: Full ImageNet‑1K (train ≈ 1.28M images, val ≈ 50K images)
- Optimisers: RFT (DCLR + Ψ–Ω) vs Adam baseline
- Environment: Single or multi‑GPU (A100/H100), bf16 AMP if available, seed 1234
- Metrics: Loss, accuracy, validation accuracy, J/step (energy proxy), drift, flux, energy‑retention (E_ret), coherence (coh), ΔT, wall‑time
- Telemetry: JSONL, unified schema established in Stages 3–5
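The JSONL telemetry format can be sketched as one JSON object per training step. The field names below are illustrative assumptions mirroring the metrics listed above (J/step, drift, flux, E_ret, coh, ΔT), not the project's verified schema:

```python
import io
import json

# Hypothetical per-step telemetry record; field names are assumptions
# chosen to mirror the metrics listed in the Methodology section.
record = {
    "step": 100,
    "mode": "RFT",
    "loss": 2.41,
    "val_acc": 0.38,
    "j_per_step": 11.7,  # energy proxy (J/step)
    "drift": 0.003,
    "flux": 0.012,
    "e_ret": 0.97,       # energy retention
    "coh": 0.99,         # coherence, near unity when locked
    "dT": 0.4,           # thermal delta (ΔT)
}

# JSONL: append one serialised record per line.
buf = io.StringIO()
buf.write(json.dumps(record) + "\n")

# Reading the log back is a line-by-line json.loads.
buf.seek(0)
rows = [json.loads(line) for line in buf if line.strip()]
```

In a real run the buffer would be the `stage6_vit_base.jsonl` file opened in append mode.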

---

## Results
- RFT: Lower energy per step at matched accuracy, tightly bounded drift, smooth flux, coherence near unity, stable thermal behaviour.
- Adam: Higher J/step and looser drift/flux at similar accuracy.
- The efficiency gain persists over epochs with consistent telemetry, confirming scalability at ViT‑Base capacity.
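A minimal sketch of how the J/step comparison could be computed from the two runs' telemetry logs (the `j_per_step` field name and the numeric values are illustrative assumptions):

```python
import json

def mean_j_per_step(lines):
    """Average the J/step energy proxy over a run's JSONL telemetry lines."""
    vals = [json.loads(ln)["j_per_step"] for ln in lines if ln.strip()]
    return sum(vals) / len(vals)

# Illustrative values only; a real run logs one record per training step.
rft_lines = [json.dumps({"j_per_step": v}) for v in (11.5, 11.7, 11.6)]
adam_lines = [json.dumps({"j_per_step": v}) for v in (14.2, 14.0, 14.4)]

rft_j = mean_j_per_step(rft_lines)
adam_j = mean_j_per_step(adam_lines)
saving = 1.0 - rft_j / adam_j  # fractional energy saving per step
```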

---

## Discussion
Full‑scale ImageNet validates RFT’s coherence mechanisms in production‑sized transformers. The coherence lock (Ψ–Ω) stabilises training dynamics, reducing energy consumption without degrading learning curves. The telemetry confirms reproducibility across runs with deterministic seeding.

---

## Conclusion
ViT‑Base confirms RFT’s scalability: coherence and drift stay bounded, energy per image is reduced, and accuracy is maintained or improved. This stage completes the large‑scale visual transformer validation and sets the foundation for multi‑modal and generative extensions.

---

## Reproducibility
- Script: `stage6.py`
- Log output: `stage6_vit_base.jsonl`
- Seed: 1234
- Hardware: A100/H100 (CPU fallback supported)
- Sealing: All runs sealed with SHA‑512 hashes
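SHA‑512 sealing can be done with the Python standard library. A sketch, assuming the seal is a hex digest of the raw log bytes (the project's exact sealing procedure is not specified here):

```python
import hashlib

def seal_log(data: bytes) -> str:
    """Return the SHA-512 hex digest used to seal a run's telemetry bytes."""
    return hashlib.sha512(data).hexdigest()

# In practice the bytes would be read from stage6_vit_base.jsonl.
digest = seal_log(b'{"step": 1, "loss": 6.90}\n')
```

Verification then re-hashes the log file and compares the result against the recorded digest.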

---

## Usage
```bash
# RFT mode
python stage6.py --mode RFT --epochs 10 --batch 256 --lr 5e-4 --data_dir /path/to/ImageNet

# BASE (Adam)
python stage6.py --mode BASE --epochs 10 --batch 256 --lr 5e-4 --data_dir /path/to/ImageNet
```