Instructions for using Crusadersk/gpt2-100m with libraries, inference providers, notebooks, and local apps.
- Libraries
- Transformers
How to use Crusadersk/gpt2-100m with Transformers:
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="Crusadersk/gpt2-100m")

# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Crusadersk/gpt2-100m")
model = AutoModelForCausalLM.from_pretrained("Crusadersk/gpt2-100m")
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use Crusadersk/gpt2-100m with vLLM:
Install from pip and serve model
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "Crusadersk/gpt2-100m"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Crusadersk/gpt2-100m",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
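If you prefer calling the server from Python instead of curl, here is a minimal sketch using the `requests` package. It assumes the vLLM server started with the commands above is reachable locally on port 8000; the prompt and token budget are illustrative.

```python
# Minimal Python client for the OpenAI-compatible /v1/completions endpoint
# exposed by the vLLM server started above (assumed to be on localhost:8000).
import requests

response = requests.post(
    "http://localhost:8000/v1/completions",
    json={
        "model": "Crusadersk/gpt2-100m",
        "prompt": "Once upon a time,",
        "max_tokens": 64,
        "temperature": 0.5,
    },
    timeout=60,
)
print(response.json()["choices"][0]["text"])
```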
- SGLang
How to use Crusadersk/gpt2-100m with SGLang:
Install from pip and serve model
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "Crusadersk/gpt2-100m" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Crusadersk/gpt2-100m",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
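The same endpoint can also be called with the official `openai` Python client (v1 API). This is a sketch assuming the package is installed and the SGLang server above is reachable on port 30000; the api_key value is a placeholder, since the local server does not validate it.

```python
# Call the SGLang server's OpenAI-compatible API with the openai client.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:30000/v1", api_key="EMPTY")
completion = client.completions.create(
    model="Crusadersk/gpt2-100m",
    prompt="Once upon a time,",
    max_tokens=64,
    temperature=0.5,
)
print(completion.choices[0].text)
```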
Use Docker images

docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "Crusadersk/gpt2-100m" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Crusadersk/gpt2-100m",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
- Docker Model Runner
How to use Crusadersk/gpt2-100m with Docker Model Runner:
docker model run hf.co/Crusadersk/gpt2-100m
GPT-2 100M
Custom-trained GPT-2 checkpoint with deliberate depth-width configuration for inference benchmarking research.
Created as part of the Banterhearts research program investigating benchmarking integrity for local LLM inference.
| Property | Value |
|---|---|
| Architecture | GPT2LMHeadModel (MHA) |
| Parameters | 100M |
| Config | n_embd=768, n_head=2, n_layer=8, n_inner=3072 |
| Context length | 1,024 tokens |
| Precision | FP32 |
| Model size | 367 MB |
| Vocab size | 50,257 |
Purpose
Largest MHA model; used in cross-backend and compiler benchmarks.
These checkpoints are not general-purpose language models. They are deliberately sized scaling-study artifacts designed to isolate the effect of model depth vs width on GPU inference latency. The key finding: in the small-model GPU regime, layer depth (not parameter count) dominates latency, producing inversions where a 5M-parameter model can be 3.6x slower than a 25M-parameter model.
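To make the kind of measurement behind these claims concrete, the snippet below is a minimal, hedged sketch of a per-forward-pass GPU latency benchmark for this checkpoint. It is not the Banterhearts harness; the warmup count, iteration count, and prompt are illustrative choices, not the settings used in the technical reports.

```python
# Illustrative latency measurement: warm up, then time repeated prefill
# forward passes of the checkpoint. All counts here are arbitrary examples.
import time

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda" if torch.cuda.is_available() else "cpu"
model = AutoModelForCausalLM.from_pretrained("Crusadersk/gpt2-100m").to(device).eval()
tokenizer = AutoTokenizer.from_pretrained("Crusadersk/gpt2-100m")
inputs = tokenizer("Once upon a time,", return_tensors="pt").to(device)

with torch.no_grad():
    for _ in range(10):                 # warmup iterations
        model(**inputs)
    if device == "cuda":
        torch.cuda.synchronize()

    n_iters = 100
    start = time.perf_counter()
    for _ in range(n_iters):
        model(**inputs)
    if device == "cuda":
        torch.cuda.synchronize()
    elapsed = time.perf_counter() - start

print(f"mean forward latency: {elapsed / n_iters * 1e3:.3f} ms")
```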
Source Technical Reports
Used in: TR117, TR120, TR126, TR147
| TR | Role |
|---|---|
| TR117 | Original cross-backend benchmark matrix (7 backends, 4 model groups) |
| TR126 | Linux/Triton compiler validation with phase-separated measurement |
| TR147 | Second-regime portability validation on RTX 6000 Ada |
Design Rationale
The GPT-2 family (25M, 50M, 100M) uses a 2x3 factorial design:
| Model | n_embd | n_layer | n_inner | Params | Design role |
|---|---|---|---|---|---|
| gpt2-25m | 384 | 3 | 1,536 | 25M | Shallow, narrow |
| gpt2-50m | 512 | 8 | 2,048 | 50M | Deep, medium width |
| gpt2-100m | 768 | 8 | 3,072 | 100M | Deep, wide |
All models use 2 attention heads (MHA, not GQA) to isolate architecture effects from attention-group structure. Dropout is set to 0.0 for deterministic inference measurement.
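As a sketch of how these configurations map onto transformers' GPT2Config, the snippet below rebuilds the three variants from the table and counts their parameters. It reflects the hyperparameters stated on this card, not the actual training code behind the checkpoints.

```python
# Reconstruct the three family configurations (2 heads, dropout 0.0,
# default vocab_size=50257 and n_positions=1024) and count parameters.
from transformers import GPT2Config, GPT2LMHeadModel

family = {
    "gpt2-25m":  dict(n_embd=384, n_layer=3, n_inner=1536),
    "gpt2-50m":  dict(n_embd=512, n_layer=8, n_inner=2048),
    "gpt2-100m": dict(n_embd=768, n_layer=8, n_inner=3072),
}

for name, dims in family.items():
    config = GPT2Config(
        n_head=2,
        resid_pdrop=0.0, embd_pdrop=0.0, attn_pdrop=0.0,
        **dims,
    )
    model = GPT2LMHeadModel(config)
    n_params = sum(p.numel() for p in model.parameters())
    print(f"{name}: {n_params / 1e6:.1f}M parameters")
```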
Usage
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the checkpoint and tokenizer from the Hub
model = AutoModelForCausalLM.from_pretrained("Crusadersk/gpt2-100m")
tokenizer = AutoTokenizer.from_pretrained("Crusadersk/gpt2-100m")

# Greedy generation from a short prompt
inputs = tokenizer("Hello", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
Compatibility
| Framework | Supported |
|---|---|
| Transformers | Yes |
| torch.compile (Inductor) | Yes |
| Ollama | No (not GGUF format) |
| vLLM | Yes |
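Since torch.compile (Inductor) is listed as supported, here is a minimal sketch of compiling the model's forward pass. The Inductor backend mirrors the table above; everything else (prompt, eval-mode forward) is illustrative rather than the configuration used in the technical reports.

```python
# Compile the forward pass with the Inductor backend and run one prefill
# forward; the first call triggers compilation.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("Crusadersk/gpt2-100m").eval()
tokenizer = AutoTokenizer.from_pretrained("Crusadersk/gpt2-100m")

compiled = torch.compile(model, backend="inductor")

inputs = tokenizer("Once upon a time,", return_tensors="pt")
with torch.no_grad():
    logits = compiled(**inputs).logits
print(logits.shape)  # (batch, sequence_length, vocab_size)
```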
Citation
@misc{banterhearts2026gpt2100m,
title = {Custom GPT-2 Scaling Checkpoint (100M) for Inference Benchmarking Research},
author = {Kadadekar, Sahil},
year = {2026},
url = {https://huggingface.co/Crusadersk/gpt2-100m},
note = {Part of the Banterhearts research program. NeurIPS 2026 submission.}
}
Acknowledgments
This work is part of a 40-TR research program on consumer LLM deployment safety, conducted independently as pre-doctoral research. Full program details at github.com/Sahil170595/Banterhearts.