amphora committed
Commit 86e6fc2 · verified · 1 parent: 0e53ffd

End of training

Files changed (1): README.md added (+125 −0)
---
library_name: transformers
license: apache-2.0
base_model: kakaocorp/kanana-1.5-2.1b-instruct-2505
tags:
- axolotl
- generated_from_trainer
datasets:
- train.jsonl
model-index:
- name: fc-reasoning-2.1b
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>

axolotl version: `0.12.2`
```yaml
base_model: kakaocorp/kanana-1.5-2.1b-instruct-2505

load_in_8bit: false
load_in_4bit: false

datasets:
  - path: train.jsonl
    type: chat_template

dataset_prepared_path: preprocess
val_set_size: 0.01
output_dir: ./outputs
dataloader_num_workers: 56

adapter:
lora_model_dir:

sequence_len: 16384
sample_packing: false
eval_sample_packing: false
pad_to_sequence_len: false

plugins:
  - axolotl.integrations.liger.LigerPlugin
liger_rope: true
liger_rms_norm: true
liger_swiglu: true
liger_fused_linear_cross_entropy: true

wandb_project: fastcampus
wandb_entity: guijinson
wandb_watch:
wandb_name: fc-proj2-reasoning-2.1b
wandb_log_model:
hub_model_id: amphora/fc-reasoning-2.1b

gradient_accumulation_steps: 64
micro_batch_size: 2
num_epochs: 3
optimizer: adamw_torch_fused
lr_scheduler: cosine
learning_rate: 2e-5

bf16: auto
tf32: false

gradient_checkpointing:
resume_from_checkpoint:
logging_steps: 1
flash_attention: true

warmup_ratio: 0.05
weight_decay: 0.01
evals_per_epoch: 0
saves_per_epoch: 1

```

</details><br>

# fc-reasoning-2.1b
This model is a fine-tuned version of [kakaocorp/kanana-1.5-2.1b-instruct-2505](https://huggingface.co/kakaocorp/kanana-1.5-2.1b-instruct-2505) on the train.jsonl dataset.
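
The auto-generated card does not include a usage snippet. As a minimal sketch only, the model can presumably be queried through the standard `transformers` chat-template API; the prompt and generation settings below are illustrative and not taken from the card.

```python
# Minimal inference sketch; the model id comes from the card, everything else is illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "amphora/fc-reasoning-2.1b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Explain step by step why 2 + 2 = 4."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Greedy decoding; adjust max_new_tokens / sampling to taste.
output_ids = model.generate(input_ids, max_new_tokens=512, do_sample=False)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```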
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

Training used a local `train.jsonl` file loaded with Axolotl's `chat_template` format (see the config above). The dataset itself is not published with this card, and `val_set_size: 0.01` held out 1% of it for validation. A hypothetical example record is sketched below.
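
Since the card does not document the actual schema, the field names below follow the common Axolotl chat-template convention (a `messages` list of role/content turns) and are an assumption, not a confirmed description of train.jsonl.

```python
# Hypothetical train.jsonl record for a chat_template-style loader.
# The real dataset's fields and contents are not documented in this card.
import json

record = {
    "messages": [
        {"role": "user", "content": "What is 17 * 24?"},
        {"role": "assistant", "content": "17 * 24 = 408."},
    ]
}

# Append one JSON object per line, as expected for a .jsonl file.
with open("train.jsonl", "a", encoding="utf-8") as f:
    f.write(json.dumps(record, ensure_ascii=False) + "\n")
```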
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 64
- total_train_batch_size: 128
- optimizer: adamw_torch_fused with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 53
- training_steps: 1072
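
The derived values above are consistent with the raw config under the assumption that training ran on a single device, which the card does not state. A quick check:

```python
# Sanity check of the reported derived hyperparameters.
# num_devices = 1 is an assumption; the card does not say how many GPUs were used.
micro_batch_size = 2
gradient_accumulation_steps = 64
num_devices = 1
training_steps = 1072
warmup_ratio = 0.05

total_train_batch_size = micro_batch_size * gradient_accumulation_steps * num_devices
print(total_train_batch_size)              # 128, as reported above
print(int(training_steps * warmup_ratio))  # 53 warmup steps, as reported above
```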
### Training results

### Framework versions

- Transformers 4.55.2
- Pytorch 2.6.0+cu126
- Datasets 4.0.0
- Tokenizers 0.21.4