jan-hq/trinity-v1 DPO-trained on Intel/orca_dpo_pairs
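For reference, rows in Intel/orca_dpo_pairs carry `system`, `question`, `chosen`, and `rejected` fields, while TRL's `DPOTrainer` expects a `prompt`/`chosen`/`rejected` schema. A minimal mapping sketch is below; the helper name is hypothetical, and the exact preprocessing used for this model is not documented here:

```python
def to_dpo_format(example: dict) -> dict:
    """Map an Intel/orca_dpo_pairs row to the prompt/chosen/rejected
    schema expected by TRL's DPOTrainer. Hypothetical helper; the exact
    preprocessing used for this model is not documented."""
    system = example.get("system", "").strip()
    question = example["question"]
    # Prepend the system message to the user question when one is present.
    prompt = f"{system}\n{question}" if system else question
    return {
        "prompt": prompt,
        "chosen": example["chosen"],
        "rejected": example["rejected"],
    }

row = {
    "system": "You are a helpful assistant.",
    "question": "What is DPO?",
    "chosen": "Direct Preference Optimization is ...",
    "rejected": "I don't know.",
}
print(to_dpo_format(row)["prompt"])
```

The mapped dataset can then be passed directly to a DPO training loop.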

#1 model of ANY SIZE on the leaderboard (12/16/2023)

12/18 update: Some of the datasets used to create the model I fine-tuned may have been contaminated. I am doing my best to remove the contamination in future models. Thanks for your patience. Contains traces of Cybertron-2:

```
@misc{unacybertron7b,
  title={Cybertron: Uniform Neural Alignment},
  author={Xavier Murias},
  year={2023},
  publisher={HuggingFace},
  journal={HuggingFace repository},
  howpublished={\url{https://huggingface.co/fblgit/una-cybertron-7b-v2-bf16}},
}
```
Downloads last month: 1,034
Model size: 7B params (Safetensors)
Tensor type: BF16
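Since the weights are stored as 7B parameters in BF16 (2 bytes each), a back-of-envelope lower bound on the memory needed just to hold them can be computed as follows (a rough sketch only; real usage adds activations, KV cache, and framework overhead):

```python
def bf16_weight_memory_gib(n_params: int) -> float:
    """Approximate memory (GiB) to hold model weights in bfloat16,
    i.e. 2 bytes per parameter. Weights only; excludes runtime overhead."""
    return n_params * 2 / 1024**3

# 7B parameters in BF16: roughly 13 GiB for the weights alone.
print(f"{bf16_weight_memory_gib(7_000_000_000):.1f} GiB")
```

This is why 7B BF16 checkpoints are typically run on GPUs with at least 16 GB of memory, or quantized for smaller cards.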
Model tree for rwitz2/go-bruins-v2.1.1:
- Adapters: 1 model
- Merges: 1 model
- Quantizations: 7 models

Spaces using rwitz2/go-bruins-v2.1.1: 28