bobchenyx committed
Commit 41dd42d · verified · 1 Parent(s): db13cb7

Update README.md

Files changed (1)
  1. README.md +4 -169
README.md CHANGED
@@ -2,8 +2,10 @@
  license: apache-2.0
  ---

- ### https://github.com/Bobchenyx/MoBE/tree/Qwen3
+ ### Bobchenyx/MoBE/tree/Qwen3

+ For more usage instructions and details, please check my GitHub fork.
+ https://github.com/Bobchenyx/MoBE/tree/Qwen3


  <h1 align="center">
@@ -16,6 +18,7 @@ license: apache-2.0
  </div>


+
  ## 📘 Introduction

  **MoBE (Mixture-of-Basis-Experts)** is a novel model compression technique designed for MoE LLMs developed by the **AGI Center, Ant Group Research**. It achieves efficient parameter reduction by factorizing each expert's weight matrix as:
@@ -35,174 +38,6 @@ MoBE significantly outperforms prior compression methods with minimal accuracy d
  - Incurs only **1%–2% absolute accuracy drop** (≈2% relative)
  - Demonstrated on **Qwen3-235B**, **DeepSeek-V3 (671B)**, and **Kimi-K2-Instruct (1T)**

-
- ## 📊 Evaluation Results
-
- ![results](results.jpg)
- ---
-
- ## 🚀 Quickstart
-
- ### 🔧 Installation
- ```
- pip install -r requirements.txt
- ```
-
- ---
-
- ### 🛠️ Step-by-Step Instructions
- Converting an MoE model to MoBE involves two stages:
- 1. **Train** the MoBE decomposition.
- 2. **Generate** either a native MoBE model or reconstruct a standard MoE for compatibility.
- ---
- #### 1. Train MoBE Matrices
- ```
- python train.py --index_path /root/DeepSeek-V3-0324/model.safetensors.index.json \
- --base_dir /root/DeepSeek-V3-0324 \
- --save_path /root/MoBE/DeepSeek-V3-0324 \
- --num_hidden_layers 61 \
- --num_matrices 256 \
- --rows_per_matrix 2048 \
- --cols 7168 \
- --num_epochs 10000 \
- --batch_size 32 \
- --num_batches 8 \
- --learning_rate 0.07 \
- --num_B 64 \
- --truncation 2048 \
- --start_layer 3 \
- --end_layer 61 \
- --matrix_type "gate_proj" \
- --activation 'tanh'
- ```
- | Argument | Description |
- |--------|-------------|
- | `index_path` | Path to `.safetensors.index.json` mapping tensor names to shards |
- | `base_dir` | Root directory containing model shards |
- | `save_path` | Output directory for trained MoBE matrices |
- | `num_hidden_layers` | Total number of transformer layers |
- | `num_matrices` | Number of experts in the original MoE model |
- | `rows_per_matrix` | Row dimension of the weight matrices (e.g., `up_proj`, `gate_proj`) |
- | `cols` | Column dimension of the weight matrices |
- | `num_epochs` | Number of optimization steps for reconstruction |
- | `batch_size` | Batch size (number of experts sampled per step) |
- | `num_batches` | Number of batches processed per epoch. Total experts in one layer = `batch_size × num_batches` |
- | `learning_rate` | Learning rate for the optimizer (e.g., Adam) |
- | `num_B` | Number of basis matrices used in the MoBE |
- | `truncation` | Maximum number of rows retained in each basis matrix |
- | `start_layer` | First transformer layer (inclusive) to apply MoBE compression |
- | `end_layer` | Last transformer layer (exclusive) to apply compression |
- | `matrix_type` | Type of weight matrix to compress (e.g., `"gate_proj"`, `"up_proj"`) |
- | `activation` | Activation function used in MoBE (e.g., `"silu"`, `"tanh"`) |
-
- > 💡 **Tip**: Run this step separately for each `matrix_type` (e.g., `gate_proj`, `up_proj`) within the same layer range.
-
- For Kimi-K2-Instruct, we recommend dividing the experts within each transformer layer into two groups and applying MoBE compression separately to each group.
- ```
- python train_group.py --index_path /root/Kimi-K2-Instruct/model.safetensors.index.json \
- --base_dir /root/Kimi-K2-Instruct \
- --save_path /root/MoBE/Kimi-K2-Instruct \
- --num_hidden_layers 61 \
- --num_matrices 384 \
- --rows_per_matrix 2048 \
- --cols 7168 \
- --num_epochs 15000 \
- --batch_size 32 \
- --num_batches 12 \
- --learning_rate 0.07 \
- --num_B 128 \
- --truncation 2048 \
- --start_layer 1 \
- --end_layer 61 \
- --matrix_type "gate_proj" \
- --num_groups 2 \
- --activation 'silu'
- ```
- | Argument | Description |
- |--------|-------------|
- | `index_path` | Path to `.safetensors.index.json` mapping tensor names to shards |
- | `base_dir` | Root directory containing model shards |
- | `save_path` | Output directory for trained MoBE matrices |
- | `num_hidden_layers` | Total number of transformer layers |
- | `num_matrices` | Number of experts in the original MoE model |
- | `rows_per_matrix` | Row dimension of the weight matrices (e.g., `up_proj`, `gate_proj`) |
- | `cols` | Column dimension of the weight matrices |
- | `num_epochs` | Number of optimization steps for reconstruction |
- | `batch_size` | Batch size (number of experts sampled per step) |
- | `num_batches` | Number of batches processed per epoch. Total experts in one layer = `batch_size × num_batches` |
- | `learning_rate` | Learning rate for the optimizer (e.g., Adam) |
- | `num_B` | Number of basis matrices used in the MoBE |
- | `truncation` | Maximum number of rows retained in each basis matrix |
- | `start_layer` | First transformer layer (inclusive) to apply MoBE compression |
- | `end_layer` | Last transformer layer (exclusive) to apply compression |
- | `matrix_type` | Type of weight matrix to compress (e.g., `"gate_proj"`, `"up_proj"`) |
- | `activation` | Activation function used in MoBE (e.g., `"silu"`, `"tanh"`) |
- | `num_groups` | Number of expert groups to split the original MoE experts into before applying MoBE compression separately to each group |
-
- ---
-
- #### 2. Generate MoBE or Reconstructed MoE Model
-
- After training, you can:
- - Deploy the **native MoBE model** (high compression)
- - Reconstruct a **standard MoE model** for compatibility with `vLLM` or `SGLang`
-
- ##### 🔹 Option A: Save Native MoBE Model
- ```
- python get_mobe.py --base_model /root/DeepSeek-V3-0324 \
- --mobe_dir /root/MoBE/DeepSeek-V3-0324 \
- --save_dir /root/DeepSeek-V3-0324-MoBE \
- --num_B 64 \
- --num_experts 256 \
- --start_layer 3 \
- --end_layer 61 \
- --dtype bfloat16 \
- --activation 'tanh'
- ```
- ###### Arguments
-
- | Argument | Description |
- |--------|-------------|
- | `base_model` | Path to the original model directory |
- | `mobe_dir` | Directory containing trained MoBE matrices (`A`, `B^i`, `α_i`) |
- | `save_dir` | Where to save the final MoBE model |
- | `num_B` | Number of basis matrices (must match training) |
- | `num_experts` | Number of experts in the original model |
- | `start_layer` | First layer to replace with MoBE (inclusive) |
- | `end_layer` | Last layer to replace (exclusive) |
- | `dtype` | Target data type (`float32`, `bfloat16`, `float16`) |
- | `activation` | Activation function used in MoBE (e.g., `"silu"`, `"tanh"`) |
- | `grouped_experts` | Whether to group experts within the same layer |
-
- ##### 🔹 Option B: Reconstruct Standard MoE Weights
-
- For seamless integration with existing inference engines:
- ```
- python get_hf_model.py --base_model /root/DeepSeek-V3-0324 \
- --mobe_dir /root/MoBE/DeepSeek-V3-0324 \
- --save_dir /root/DeepSeek-V3-0324-MoBE-hf \
- --start_layer 3 \
- --end_layer 61 \
- --num_experts 256 \
- --dtype bfloat16
- ```
- ###### Arguments
-
- | Argument | Description |
- |--------|-------------|
- | `base_model` | Path to the original model directory |
- | `mobe_dir` | Directory containing trained MoBE matrices (`A`, `B^i`, `α_i`) |
- | `save_dir` | Output path for reconstructed MoE model |
- | `start_layer` | First layer to replace with MoBE (inclusive) |
- | `end_layer` | Last layer to replace (exclusive) |
- | `num_experts` | Number of experts in the original model |
- | `dtype` | Target data type (`float32`, `bfloat16`, `float16`) |
- | `grouped_experts` | Whether to group experts within the same layer |
-
- > ✅ The reconstructed model is **fully compatible** with Hugging Face `AutoModelForCausalLM`, `vLLM`, and `SGLang`.
-
- ---
-
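The removed note above states that the checkpoint produced by `get_hf_model.py` remains a standard Hugging Face MoE model. Below is a minimal sketch of what loading it could look like, assuming the `--save_dir` from the Option B command above (`/root/DeepSeek-V3-0324-MoBE-hf`) and the same `bfloat16` dtype used at export time:

```
# Illustrative sketch only: load the reconstructed checkpoint like any other causal LM.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_dir = "/root/DeepSeek-V3-0324-MoBE-hf"  # --save_dir passed to get_hf_model.py

tokenizer = AutoTokenizer.from_pretrained(model_dir, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_dir,
    torch_dtype=torch.bfloat16,  # matches --dtype bfloat16 at export time
    device_map="auto",           # shard across available GPUs
    trust_remote_code=True,      # DeepSeek-V3 checkpoints ship custom modeling code
)

prompt = "Briefly explain what a mixture-of-experts layer does."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Because the reconstructed weights follow the original MoE layout, the same directory can also be served with `vLLM` or `SGLang`, as the compatibility note above points out.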
  ## 💡 MoBE Generate Example

  ```
 