Token Classification
Transformers
Safetensors
qwen2
Generated from Trainer
trl
prm
text-generation-inference
Instructions to use hzy/Qwen2.5-Math-7B-Instruct-PRM-Modified-math_shepherd with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use hzy/Qwen2.5-Math-7B-Instruct-PRM-Modified-math_shepherd with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("token-classification", model="hzy/Qwen2.5-Math-7B-Instruct-PRM-Modified-math_shepherd")

# Load model directly
from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("hzy/Qwen2.5-Math-7B-Instruct-PRM-Modified-math_shepherd")
model = AutoModelForTokenClassification.from_pretrained("hzy/Qwen2.5-Math-7B-Instruct-PRM-Modified-math_shepherd")
```
- Notebooks
- Google Colab
- Kaggle
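Since this is a process reward model (PRM) exposed through the token-classification head, the per-token logits must still be reduced to one score per reasoning step. The sketch below shows one common way to do that, using dummy logits so it runs without downloading the 7B checkpoint; the choice of a two-class head, the "good step" class at index 1, and the use of a step-separator token position are assumptions, not documented properties of this model.

```python
import numpy as np

def step_rewards(logits: np.ndarray, step_token_mask: np.ndarray) -> np.ndarray:
    """Convert per-token 2-class logits into step-level reward scores.

    logits: (seq_len, 2) token-classification logits.
    step_token_mask: (seq_len,) boolean mask marking the separator token
        that ends each reasoning step (an assumption about how the PRM
        tags step boundaries).
    Returns the probability of the assumed "good step" class (index 1)
    at each separator position.
    """
    # Numerically stable softmax over the two classes.
    shifted = logits - logits.max(axis=-1, keepdims=True)
    probs = np.exp(shifted) / np.exp(shifted).sum(axis=-1, keepdims=True)
    return probs[step_token_mask, 1]

# Dummy logits for a 6-token sequence with step separators at positions 2 and 5.
logits = np.zeros((6, 2))
logits[2] = [0.0, 2.0]   # confident "good step"
logits[5] = [2.0, 0.0]   # confident "bad step"
mask = np.zeros(6, dtype=bool)
mask[[2, 5]] = True
print(step_rewards(logits, mask))  # one score per step, in [0, 1]
```

With the real model, `logits` would come from `model(**tokenizer(text, return_tensors="pt")).logits` and the mask from the positions of whatever separator the training data used between steps.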
Training in progress, step 1650
model-00001-of-00003.safetensors
CHANGED
```diff
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:bce26ab585045a5575973b81e81a626afc5d96de84cfb0ebd6a0b219f10dd82c
 size 4877660776
```
model-00002-of-00003.safetensors
CHANGED
```diff
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:65bcff7855f93ae7f41b75c5b002ae1f1fe95d1636a7179df32eb4fb86cf1421
 size 4932751008
```
model-00003-of-00003.safetensors
CHANGED
```diff
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:968d0217d2d6b26fe6d8af93fb2e1fd383244083c1c1c34c341e38402cb5ea48
 size 4330879708
```