Update README.md
README.md (changed)
@@ -42,6 +42,15 @@ Mistral-Small-22B-ArliAI-RPMax-v1.1 is a variant based on mistralai/Mistral-Smal
* **Learning Rate**: 0.00001
* **Gradient accumulation**: Very low at 32 for better learning (see the config sketch below).

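As a minimal sketch (not part of the original card), here is how these two values would map onto a Hugging Face `TrainingArguments` object. Only `learning_rate` and `gradient_accumulation_steps` come from the card above; every other argument is an illustrative placeholder, not the actual RPMax training recipe.

```python
# Sketch: mapping the listed hyperparameters onto TrainingArguments.
# Only learning_rate and gradient_accumulation_steps come from the card;
# the remaining values are illustrative assumptions.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="rpmax-finetune",        # placeholder output directory
    learning_rate=1e-5,                 # 0.00001, as listed above
    gradient_accumulation_steps=32,     # as listed above
    per_device_train_batch_size=1,      # assumption, not stated in the card
    num_train_epochs=1,                 # assumption, not stated in the card
    bf16=True,                          # assumption, common for 22B fine-tunes
)
```
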
## Quantization

The model is available in quantized formats (a brief loading sketch follows the list):

* **FP16**: https://huggingface.co/ArliAI/Mistral-Small-22B-ArliAI-RPMax-v1.1
* **GPTQ_Q4**: https://huggingface.co/ArliAI/Mistral-Small-22B-ArliAI-RPMax-v1.1-GPTQ_Q4
* **GPTQ_Q8**: https://huggingface.co/ArliAI/Mistral-Small-22B-ArliAI-RPMax-v1.1-GPTQ_Q8
* **GGUF**: https://huggingface.co/ArliAI/Mistral-Small-22B-ArliAI-RPMax-v1.1-GGUF

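The sketch below shows one way the GPTQ_Q4 checkpoint listed above could be loaded with `transformers`; it is an illustration rather than part of the original card. The GPTQ repos additionally require a GPTQ backend to be installed, and the GGUF files are intended for llama.cpp-style runtimes rather than this code path.

```python
# Sketch: loading one of the quantized checkpoints listed above with transformers.
# The GPTQ repos need a GPTQ backend; device_map="auto" requires accelerate.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "ArliAI/Mistral-Small-22B-ArliAI-RPMax-v1.1-GPTQ_Q4"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    device_map="auto",  # shard the 22B model across available GPUs
)
```
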
## Suggested Prompt Format
Mistral Instruct Format
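
A short example (an illustration, not from the original card) of producing a Mistral-Instruct-style prompt through the tokenizer's chat template instead of hand-writing the `[INST] ... [/INST]` markup; it assumes the repository's tokenizer ships a chat template, as Mistral-family models generally do.

```python
# Sketch: building a Mistral Instruct prompt via the tokenizer's chat template.
# Assumes the repo's tokenizer config includes a chat template.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("ArliAI/Mistral-Small-22B-ArliAI-RPMax-v1.1")

messages = [
    {"role": "user", "content": "Introduce your character."},  # illustrative message
]

prompt = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,  # end the prompt where the model should respond
)
print(prompt)  # roughly: <s>[INST] Introduce your character. [/INST]
```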