---
library_name: peft
license: other
base_model: Qwen/Qwen2.5-Coder-3B-Instruct
tags:
  - base_model:adapter:Qwen/Qwen2.5-Coder-3B-Instruct
  - lora
  - transformers
pipeline_tag: text-generation
model-index:
  - name: SFT-Qwen2.5-Coder-3B
    results: []
---

# SFT-Qwen2.5-Coder-3B

This model is a fine-tuned version of [Qwen/Qwen2.5-Coder-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-3B-Instruct); the training dataset is not recorded in this card. It achieves the following results on the evaluation set (see the usage sketch below):

- Loss: 0.9728
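
Because the metadata lists `library_name: peft` with a `lora` tag, this repository presumably holds a LoRA adapter rather than full model weights. The sketch below shows one way to load and query it; the adapter repo id `j05hr3d/SFT-Qwen2.5-Coder-3B` is an assumption inferred from the card and may differ from the actual path.

```python
# Sketch: load the base model, attach the LoRA adapter, and generate.
# The adapter repo id is an assumption, not confirmed by this card.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "Qwen/Qwen2.5-Coder-3B-Instruct"
adapter_id = "j05hr3d/SFT-Qwen2.5-Coder-3B"  # hypothetical repo id

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(model, adapter_id)

# The text-generation pipeline tag suggests chat-style prompting via the
# tokenizer's chat template.
messages = [{"role": "user", "content": "Write a Python function that reverses a string."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```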

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (see the configuration sketch after this list):

- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: paged_adamw_8bit with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 5
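
For reference, here is a sketch of how these values could be expressed as `transformers` `TrainingArguments`. Only the numeric values come from this card; the output directory and everything around the arguments (dataset, trainer class) are assumptions.

```python
# Sketch: the card's hyperparameters mapped onto TrainingArguments.
# Only the numeric values are taken from the card; output_dir is assumed.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="SFT-Qwen2.5-Coder-3B",   # assumed name
    learning_rate=1e-4,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=4,       # effective train batch size: 2 * 4 = 8
    optim="paged_adamw_8bit",            # paged 8-bit AdamW; betas/epsilon at defaults
    lr_scheduler_type="linear",
    warmup_ratio=0.03,
    num_train_epochs=5,
)
```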

### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.0721        | 0.2985 | 20   | 1.1757          |
| 0.8989        | 0.5970 | 40   | 1.1059          |
| 0.8293        | 0.8955 | 60   | 1.0656          |
| 0.787         | 1.1940 | 80   | 1.0364          |
| 0.7025        | 1.4925 | 100  | 1.0206          |
| 0.7386        | 1.7910 | 120  | 0.9961          |
| 0.7471        | 2.0896 | 140  | 0.9916          |
| 0.624         | 2.3881 | 160  | 0.9843          |
| 0.6839        | 2.6866 | 180  | 0.9728          |
| 0.6561        | 2.9851 | 200  | 0.9737          |
| 0.6027        | 3.2836 | 220  | 0.9785          |
| 0.5221        | 3.5821 | 240  | 0.9843          |
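
Validation loss bottoms out at step 180 (0.9728, the value reported above) and drifts upward afterward, so the later checkpoints overfit slightly. If this run were repeated, best-checkpoint selection could be automated with the flags below; this extends the `TrainingArguments` sketch above and is an assumption, not a setting recorded by the card.

```python
# Sketch: keep the checkpoint with the lowest validation loss, since the
# table shows eval loss rising after step 180. These flags are assumptions
# that extend the TrainingArguments sketch above; the card does not record them.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="SFT-Qwen2.5-Coder-3B",
    eval_strategy="steps",
    eval_steps=20,                    # matches the 20-step cadence in the table
    save_strategy="steps",
    save_steps=20,
    load_best_model_at_end=True,      # restores the best (step-180-like) checkpoint
    metric_for_best_model="eval_loss",
    greater_is_better=False,
)
```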

### Framework versions

- PEFT 0.18.0
- Transformers 4.57.1
- PyTorch 2.8.0+cu126
- Datasets 4.4.1
- Tokenizers 0.22.1