Instructions to use lucylq/qwen3_06B_lora_math with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- PEFT
How to use lucylq/qwen3_06B_lora_math with PEFT:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

base_model = AutoModelForCausalLM.from_pretrained("unsloth/Qwen3-0.6B")
model = PeftModel.from_pretrained(base_model, "lucylq/qwen3_06B_lora_math")
```
- Transformers
How to use lucylq/qwen3_06B_lora_math with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="lucylq/qwen3_06B_lora_math")
```

```python
# Load model directly
from transformers import AutoModel

model = AutoModel.from_pretrained("lucylq/qwen3_06B_lora_math", dtype="auto")
```
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use lucylq/qwen3_06B_lora_math with vLLM:
Install from pip and serve the model:

```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "lucylq/qwen3_06B_lora_math"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "lucylq/qwen3_06B_lora_math",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
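The curl call above can also be issued from Python. A minimal sketch using only the standard library, assuming the vLLM server from the previous step is running on localhost:8000 (the request is only built here, not sent):

```python
import json
import urllib.request

def build_completion_request(model, prompt, max_tokens=512, temperature=0.5,
                             base_url="http://localhost:8000"):
    # Same payload as the curl example, against the OpenAI-compatible API.
    payload = {
        "model": model,
        "prompt": prompt,
        "max_tokens": max_tokens,
        "temperature": temperature,
    }
    req = urllib.request.Request(
        f"{base_url}/v1/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    return req, payload

req, payload = build_completion_request("lucylq/qwen3_06B_lora_math", "Once upon a time,")
# Send with urllib.request.urlopen(req) once the server is up.
print(req.full_url)
```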
- SGLang
How to use lucylq/qwen3_06B_lora_math with SGLang:
Install from pip and serve the model:

```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "lucylq/qwen3_06B_lora_math" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "lucylq/qwen3_06B_lora_math",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

Use Docker images:

```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "lucylq/qwen3_06B_lora_math" \
    --host 0.0.0.0 \
    --port 30000
```

Then call the server with the same curl request as above.
- Unsloth Studio
How to use lucylq/qwen3_06B_lora_math with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
```shell
# Install Unsloth Studio:
curl -fsSL https://unsloth.ai/install.sh | sh

# Run Unsloth Studio:
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# and search for lucylq/qwen3_06B_lora_math to start chatting.
```
Install Unsloth Studio (Windows)
```powershell
# Install Unsloth Studio:
irm https://unsloth.ai/install.ps1 | iex

# Run Unsloth Studio:
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# and search for lucylq/qwen3_06B_lora_math to start chatting.
```
Using HuggingFace Spaces for Unsloth
No setup required: open https://huggingface.co/spaces/unsloth/studio in your browser and search for lucylq/qwen3_06B_lora_math to start chatting.
Load model with FastModel
```shell
pip install unsloth
```

```python
from unsloth import FastModel

model, tokenizer = FastModel.from_pretrained(
    model_name="lucylq/qwen3_06B_lora_math",
    max_seq_length=2048,
)
```
- Docker Model Runner
How to use lucylq/qwen3_06B_lora_math with Docker Model Runner:
```shell
docker model run hf.co/lucylq/qwen3_06B_lora_math
```
Model Card for lucylq/qwen3_06B_lora_math
Qwen3-0.6B fine-tuned with LoRA on the MetaMathQA dataset using Unsloth. Used to test ExecuTorch's LoRA capabilities.
Training Data
Dataset: https://huggingface.co/datasets/meta-math/MetaMathQA
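Each MetaMathQA record is a question/answer pair (the dataset card lists `query` and `response` fields; worth verifying against the current dataset). A minimal sketch of the kind of prompt formatting applied when fine-tuning on it — the exact template here is illustrative, not necessarily the one used for this run:

```python
def format_example(example):
    # Turn one MetaMathQA record into a single training string.
    # The "query"/"response" field names follow the dataset card.
    return (
        "Below is a math problem. Write a step-by-step solution.\n\n"
        f"### Problem:\n{example['query']}\n\n"
        f"### Solution:\n{example['response']}"
    )

sample = {
    "query": "What is 15% of 80?",
    "response": "15% of 80 is 0.15 * 80 = 12. The answer is 12.",
}
print(format_example(sample))
```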
Training Configuration
```python
OUTPUT_DIR = "./outputs"
BATCH_SIZE = 2                   # Smaller batch for longer sequences
GRADIENT_ACCUMULATION_STEPS = 8  # Effective batch = 16
LEARNING_RATE = 2e-4
NUM_EPOCHS = 1                   # MetaMathQA is large; 1 epoch is often enough
WARMUP_RATIO = 0.03
LOGGING_STEPS = 25
SAVE_STEPS = 500
MAX_SAMPLES = 50000              # Limit samples for faster training (set to None for the full dataset)
```
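The comments in the config can be checked with a little arithmetic. Assuming a single device and no dropped last batch, the run works out to roughly 3,125 optimizer steps, with about 93 of them in warmup:

```python
import math

BATCH_SIZE = 2
GRADIENT_ACCUMULATION_STEPS = 8
NUM_EPOCHS = 1
WARMUP_RATIO = 0.03
MAX_SAMPLES = 50_000

# Effective batch size seen by each optimizer step.
effective_batch = BATCH_SIZE * GRADIENT_ACCUMULATION_STEPS

# Optimizer steps per epoch and in total.
steps_per_epoch = math.ceil(MAX_SAMPLES / effective_batch)
total_steps = steps_per_epoch * NUM_EPOCHS

# Linear warmup covers roughly the first WARMUP_RATIO of training.
warmup_steps = int(WARMUP_RATIO * total_steps)

print(effective_batch, total_steps, warmup_steps)
```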
Training Hyperparameters
Trained in bf16, matching the dtype of the original Qwen3-0.6B checkpoint.
Framework versions
- PEFT 0.18.0
ExecuTorch Files
These are Qwen3 0.6B models lowered to XNNPACK and quantized with torchao 8da4w plus embedding quantization, following the export script at: https://github.com/meta-pytorch/executorch-examples/blob/main/program-data-separation/export_lora.sh
See the corresponding README in: https://github.com/meta-pytorch/executorch-examples/tree/main/program-data-separation/cpp/lora_example
- qwen3_06B_q.ptd: foundation weights
- qwen3_06B_q.pte: base model
- qwen3_06B_lora_q.ptd: LoRA weights
- qwen3_06B_lora_q.pte: LoRA model
To run the model, please download the Qwen tokenizer from: https://huggingface.co/Qwen/Qwen-tokenizer/tree/main