single-finetuned-llama
Model Zoo
Weights are stored in FP16 precision by default; only the TinyLlama models have both FP32 and FP16 variants.
Naming convention: {P}?{F}?{E}?-{base_model}-epoch{1|2|3|5}-{fp16|fp32}?
Legend: P = pretrained; F = finetuned; E = edited
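The naming convention above can be sketched as a small parser. This is a minimal illustration, not an official utility: the regex and the `parse_model_name` helper are my own, derived from the convention and the names listed below.

```python
import re

# Sketch of the naming convention: stage letters, base model,
# epoch count, and an optional precision suffix (FP16 is the default).
NAME_RE = re.compile(
    r"^(?P<stages>P?F?E?)-"           # P: pretrained, F: finetuned, E: edited
    r"(?P<base>.+)-"                  # base model, e.g. llama3.2-8b or tinyllama
    r"epoch(?P<epoch>\d+)"            # number of training epochs
    r"(?:-fp(?P<precision>16|32))?$"  # optional precision suffix
)

def parse_model_name(name: str) -> dict:
    """Split a model-zoo name into its components (hypothetical helper)."""
    m = NAME_RE.match(name)
    if m is None:
        raise ValueError(f"not a valid model name: {name!r}")
    return {
        "stages": list(m.group("stages")),
        "base": m.group("base"),
        "epoch": int(m.group("epoch")),
        # FP16 is the default precision when no suffix is given.
        "precision": f"fp{m.group('precision') or '16'}",
    }
```

For example, `parse_model_name("PFE-tinyllama-epoch3-fp32")` yields the stages `["P", "F", "E"]`, base `tinyllama`, epoch `3`, and precision `fp32`.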
Llama3.2 8B
- P-llama3.2-8b-epoch1-fp16
- PF-llama3.2-8b-epoch2-fp16
  - Base model for finetuning: P-llama3.2-8b-epoch1
- PFE-llama3.2-8b-epoch2-fp16
  - Base model for AlphaEdit: PF-llama3.2-8b-epoch2
Llama3.2 3B
- P-llama3.2-3b-epoch1-fp16
- F-llama3.2-3b-epoch2-fp16
- PF-llama3.2-3b-epoch2-fp16
  - Base model for finetuning: P-llama3.2-3b-epoch1
- PFE-llama3.2-3b-epoch2-fp16
  - Base model for AlphaEdit: PF-llama3.2-3b-epoch2
TinyLlama
- P-tinyllama-epoch1-fp16
- F-tinyllama-epoch3-fp16
- PF-tinyllama-epoch3-fp32
  - Base model for finetuning: P-tinyllama-epoch1
- PFE-tinyllama-epoch3-fp32
  - Base model for AlphaEdit: PF-tinyllama-epoch3
- PFE-tinyllama-epoch3-fp16
  - Same as PFE-tinyllama-epoch3-fp32, but with precision halved from FP32 to FP16
- PFE-tinyllama-epoch5-fp32
  - Base model for AlphaEdit: PF-tinyllama-epoch5 (not ported)
Quantised TinyLlama (.GGUF format)