Deploy FairSteer Layer 14 (Acc: 67.90%)
- README.md +1 -1
- config.json +2 -2
- layer_comparison.png +0 -0
- model.safetensors +1 -1
- training_dashboard.png +2 -2
README.md
CHANGED
@@ -20,7 +20,7 @@ This model detects whether an LLM's internal activation indicates biased reasoning
 - **Base Model**: TinyLlama/TinyLlama-1.1B-Chat-v1.0
 - **Target Layer**: 14
 - **Architecture**: Linear Probe (Dropout -> Linear)
-- **Performance**:
+- **Performance**: 67.90% Balanced Accuracy

 ## Artifacts
 - `model.safetensors`: Weights (SafeTensors only)
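For orientation, here is a minimal PyTorch sketch of the probe head the README describes (Dropout -> Linear over 2048-dimensional layer-14 hidden states). The class name `BiasProbe` and the single-logit output are assumptions, not taken from the repository; the 8348-byte `model.safetensors` below is roughly the size of one 2048->1 float32 weight plus bias and the safetensors header, which is why a single output is assumed.

```python
import torch
import torch.nn as nn

class BiasProbe(nn.Module):
    """Linear probe (Dropout -> Linear) over layer-14 hidden states.

    input_dim=2048 and dropout_rate=0.25 mirror config.json; the single
    output logit is an assumption, not confirmed by the repository.
    """
    def __init__(self, input_dim: int = 2048, dropout_rate: float = 0.25):
        super().__init__()
        self.dropout = nn.Dropout(dropout_rate)
        self.linear = nn.Linear(input_dim, 1)

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # hidden_states: (batch, input_dim) activations taken at layer_idx=14
        return self.linear(self.dropout(hidden_states))
```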
config.json
CHANGED
@@ -3,7 +3,7 @@
   "layer_idx": 14,
   "input_dim": 2048,
   "dropout_rate": 0.25,
-  "best_metric_value": 0.
+  "best_metric_value": 0.6790040376850606,
   "architecture": "Linear Probe (Dropout -> Linear)",
-  "training_timestamp": "2025-12-
+  "training_timestamp": "2025-12-15T07:04:57.057892"
 }
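The new `best_metric_value` (0.6790...) is the 67.90% balanced accuracy quoted in the commit title. As a reminder of what that metric means, a small scikit-learn sketch with made-up labels:

```python
from sklearn.metrics import balanced_accuracy_score

# Balanced accuracy is the mean of per-class recall, so it stays informative
# even if "biased" and "unbiased" activations are not 50/50 in the eval set.
y_true = [0, 0, 1, 1, 1, 0]   # illustrative ground-truth labels
y_pred = [0, 1, 1, 1, 0, 0]   # illustrative probe predictions
print(balanced_accuracy_score(y_true, y_pred))  # 2/3 here: recall is 2/3 per class
```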
layer_comparison.png
CHANGED
model.safetensors
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:25a7acb5831932b170171f017d9f4f88b0594808f8567d1abaa082c4bff47ded
 size 8348
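The new LFS pointer above swaps in the deployed probe weights. A minimal loading sketch, assuming the illustrative `BiasProbe` class from the README section and that the checkpoint's parameter names line up with it (remap the state dict keys if they do not):

```python
import json
import torch
from safetensors.torch import load_file

with open("config.json") as f:
    cfg = json.load(f)

# BiasProbe is the illustrative module sketched above, not a repo-provided class.
probe = BiasProbe(cfg["input_dim"], cfg["dropout_rate"])
probe.load_state_dict(load_file("model.safetensors"))  # keys may need remapping
probe.eval()

# Score a batch of layer-14 hidden states (random placeholders here).
acts = torch.randn(4, cfg["input_dim"])
with torch.no_grad():
    print(torch.sigmoid(probe(acts)).squeeze(-1))  # assumed: P(biased reasoning)
```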
training_dashboard.png
CHANGED