inclusionAI/MoBE
For more usage instructions and details, please check my GitHub fork: https://github.com/Bobchenyx/MoBE/tree/Qwen3
MoBE (Mixture-of-Basis-Experts) is a novel model compression technique for MoE LLMs developed by the AGI Center, Ant Group Research. It achieves efficient parameter reduction by factorizing each expert's weight matrix into an expert-specific factor combined with a set of basis matrices shared across experts, as sketched below.
The factorization is learned by minimizing the reconstruction error between the original and compressed weight matrices.
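A rough sketch of the factorization and its training objective follows; here W_i denotes the weight matrix of the i-th expert, A_i an expert-specific factor, B_j the basis matrices shared within a layer, and alpha_ij the mixture coefficients. These symbols are illustrative assumptions, not notation quoted verbatim from the paper:

% Illustrative sketch of the MoBE factorization (symbols assumed, see lead-in above)
W_i \;\approx\; A_i \sum_{j=1}^{m} \alpha_{ij} B_j ,
\qquad
\min_{\{A_i\},\,\{B_j\},\,\{\alpha_{ij}\}} \; \sum_{i} \Bigl\lVert\, W_i - A_i \sum_{j=1}^{m} \alpha_{ij} B_j \,\Bigr\rVert_F^{2}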
MoBE significantly outperforms prior compression methods while incurring minimal accuracy degradation.
Usage example (loading a MoBE checkpoint and generating text):
from transformers import AutoTokenizer
# MoBE model classes are provided in the models/ directory of the MoBE repository.
from models.modeling_deepseek_v3_mobe import DeepseekV3MoBEForCausalLM
from models.modeling_qwen3_mobe import Qwen3MoBEForCausalLM
from models.modeling_kimi_k2_mobe import KimiK2MoBEForCausalLM
import torch
model_name = "/root/DeepSeek-V3-0324-MoBE"  # path to a local MoBE-compressed checkpoint
offload_folder = "./offload_dir"            # disk location for offloaded weights
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
# Cap per-GPU memory on 8 GPUs and let the remainder spill to CPU RAM.
max_memory = {i: "120GiB" for i in range(8)}
max_memory["cpu"] = "1200GiB"
# Pick the MoBE model class that matches the checkpoint architecture.
if 'Qwen' in model_name:
    model = Qwen3MoBEForCausalLM.from_pretrained(
        model_name,
        device_map="auto",
        offload_folder=offload_folder,
        offload_state_dict=True,
        torch_dtype=torch.bfloat16,
        max_memory=max_memory
    )
elif 'DeepSeek' in model_name:
    model = DeepseekV3MoBEForCausalLM.from_pretrained(
        model_name,
        device_map="auto",
        offload_folder=offload_folder,
        offload_state_dict=True,
        torch_dtype=torch.bfloat16,
        max_memory=max_memory
    )
else:
    model = KimiK2MoBEForCausalLM.from_pretrained(
        model_name,
        device_map="auto",
        offload_folder=offload_folder,
        offload_state_dict=True,
        torch_dtype=torch.bfloat16,
        max_memory=max_memory
    )
input_text = "Artificial intelligence is"
inputs = tokenizer(input_text, return_tensors="pt").to("cuda" if torch.cuda.is_available() else "cpu")
with torch.no_grad():
    outputs = model.generate(
        **inputs,
        max_new_tokens=128,
        do_sample=True,
        temperature=0.7,
        pad_token_id=tokenizer.eos_token_id
    )
generated_text = tokenizer.decode(outputs[0], skip_special_tokens=True)
print("Generated text:")
print(generated_text)
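As a quick sanity check of the compression, you can count the parameters of the loaded MoBE model and compare the total against the original checkpoint. This is a generic PyTorch sketch, not part of the MoBE codebase; the helper name count_params is hypothetical:

def count_params(m):
    # Sum the element counts of all parameter tensors in the model.
    return sum(p.numel() for p in m.parameters())

print(f"MoBE model parameters: {count_params(model) / 1e9:.2f}B")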
If you find MoBE useful in your research or application, please consider citing our work:
@misc{chen2025mobemixtureofbasisexpertscompressingmoebased,
title={MoBE: Mixture-of-Basis-Experts for Compressing MoE-based LLMs},
author={Xiaodong Chen and Mingming Ha and Zhenzhong Lan and Jing Zhang and Jianguo Li},
year={2025},
eprint={2508.05257},
archivePrefix={arXiv},
}