This model was converted to GGUF format from meta-llama/Llama-2-7b-chat-hf and quantized to Q4_0 (4-bit) using the llama.cpp library.
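A minimal sketch of loading the resulting GGUF file with the llama-cpp-python bindings; the bindings, the file name, and the sampling parameters below are assumptions for illustration, not part of this card:

```python
# Minimal sketch: run the Q4_0 GGUF via the llama-cpp-python bindings.
# The model file name and parameters are assumptions; adjust to your setup.
from llama_cpp import Llama

llm = Llama(
    model_path="llama-2-7b-chat.Q4_0.gguf",  # assumed file name of this quantized model
    n_ctx=2048,                               # context window size
)

# Llama-2-chat models expect a chat template; create_chat_completion applies it.
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain GGUF in one sentence."}],
    max_tokens=64,
)
print(out["choices"][0]["message"]["content"])
```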