Instructions to use transformersbook/codeparrot-small with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use transformersbook/codeparrot-small with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="transformersbook/codeparrot-small")
```
```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("transformersbook/codeparrot-small")
model = AutoModelForCausalLM.from_pretrained("transformersbook/codeparrot-small")
```
- Notebooks
- Google Colab
- Kaggle
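Whether run in a notebook or locally, the Transformers snippet above can be sanity-checked via the model's tokenizer alone, which is a small download. A minimal sketch (assumes `transformers` is installed and the Hub is reachable; CodeParrot uses a byte-level BPE tokenizer, so decoding the token ids reproduces the source text exactly):

```python
# Quick sanity check of the checkpoint's tokenizer (downloads a few small
# files from the Hugging Face Hub on first use).
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("transformersbook/codeparrot-small")

snippet = "def add(a, b):\n    return a + b"
ids = tokenizer.encode(snippet)
decoded = tokenizer.decode(ids)

print(len(ids))            # number of BPE tokens for the snippet
print(decoded == snippet)  # byte-level BPE round-trips the source text
```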
- Local Apps
- vLLM
How to use transformersbook/codeparrot-small with vLLM:
Install from pip and serve the model
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "transformersbook/codeparrot-small"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "transformersbook/codeparrot-small",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
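The curl call above is a plain OpenAI-compatible completions request, so the same payload can be built in Python. A sketch (the helper name is ours, not part of vLLM; actually sending it requires the server from the step above, e.g. via `requests.post(url, headers=headers, data=data)`):

```python
import json

def build_completion_request(model, prompt, max_tokens=512, temperature=0.5):
    """Build the URL, headers, and JSON body for an OpenAI-compatible
    /v1/completions call, mirroring the curl example above."""
    url = "http://localhost:8000/v1/completions"
    headers = {"Content-Type": "application/json"}
    body = {
        "model": model,
        "prompt": prompt,
        "max_tokens": max_tokens,
        "temperature": temperature,
    }
    return url, headers, json.dumps(body)

url, headers, data = build_completion_request(
    "transformersbook/codeparrot-small", "Once upon a time,"
)
print(url)
```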
- SGLang
How to use transformersbook/codeparrot-small with SGLang:
Install from pip and serve the model
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "transformersbook/codeparrot-small" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "transformersbook/codeparrot-small",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
Use Docker images
```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "transformersbook/codeparrot-small" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "transformersbook/codeparrot-small",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
- Docker Model Runner
How to use transformersbook/codeparrot-small with Docker Model Runner:
```shell
docker model run hf.co/transformersbook/codeparrot-small
```
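The vLLM and SGLang servers above expose the same OpenAI-compatible completions schema (only the port differs), so responses can be handled identically. A sketch with an illustrative response body (the field values are made up; only the shape follows the OpenAI completions format):

```python
import json

# Illustrative response in the OpenAI completions schema; real values
# come from the running server.
raw = json.dumps({
    "id": "cmpl-123",
    "object": "text_completion",
    "model": "transformersbook/codeparrot-small",
    "choices": [
        {"index": 0, "text": " there was a model.", "finish_reason": "length"}
    ],
    "usage": {"prompt_tokens": 5, "completion_tokens": 512, "total_tokens": 517},
})

def extract_completion(response_json):
    """Pull the generated text out of an OpenAI-compatible completions response."""
    data = json.loads(response_json)
    return data["choices"][0]["text"]

print(extract_completion(raw))  # → " there was a model."
```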
step 75000
- .gitattributes +2 -0
- log/debug_0.log +0 -0
- pytorch_model.bin +1 -1
- runs/Aug30_13-13-56_leandro-16x-a100-v3/events.out.tfevents.1630329236.leandro-16x-a100-v3.9739.0 +2 -2
- wandb/run-20210830_131354-2654p8r7/files/output.log +0 -0
- wandb/run-20210830_131354-2654p8r7/files/wandb-summary.json +1 -1
- wandb/run-20210830_131354-2654p8r7/logs/debug-internal.log +2 -2
- wandb/run-20210830_131354-2654p8r7/run-2654p8r7.wandb +2 -2
.gitattributes
CHANGED
```diff
@@ -17,3 +17,5 @@
 *tfevents* filter=lfs diff=lfs merge=lfs -text
 wandb/run-20210830_131354-2654p8r7/logs/debug-internal.log filter=lfs diff=lfs merge=lfs -text
 wandb/run-20210830_131354-2654p8r7/run-2654p8r7.wandb filter=lfs diff=lfs merge=lfs -text
+log/debug_0.log filter=lfs diff=lfs merge=lfs -text
+wandb/run-20210830_131354-2654p8r7/files/output.log filter=lfs diff=lfs merge=lfs -text
```
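The `.gitattributes` entries above route every matching path through Git LFS. Git attribute patterns follow gitignore-style globbing, which for simple patterns like `*tfevents*` behaves much like shell `fnmatch` against the file name. A rough sketch (`fnmatch` is an approximation of, not a substitute for, Git's exact matching rules):

```python
from fnmatch import fnmatch
import posixpath

# Patterns added to .gitattributes in this commit (all with filter=lfs)
patterns = [
    "*tfevents*",
    "log/debug_0.log",
    "wandb/run-20210830_131354-2654p8r7/files/output.log",
]

def tracked_by_lfs(path):
    """Approximate check: does any .gitattributes pattern match this path?
    Bare-name patterns are matched against the basename, path patterns
    against the full path (a simplification of Git's real rules)."""
    name = posixpath.basename(path)
    return any(fnmatch(path if "/" in pat else name, pat) for pat in patterns)

print(tracked_by_lfs("runs/Aug30_13-13-56_leandro-16x-a100-v3/"
                     "events.out.tfevents.1630329236.leandro-16x-a100-v3.9739.0"))  # True
print(tracked_by_lfs("README.md"))  # False
```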
log/debug_0.log
CHANGED
The diff for this file is too large to render. See raw diff.
pytorch_model.bin
CHANGED
```diff
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:4017e4667a54b03416eff4b7352b4877fe494f0d9bc3e8c2a541c712984234db
 size 456677609
```
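Each binary file in this commit is stored as a Git LFS pointer like the one above: a `version` line, a `sha256` oid, and a byte `size`, each as a space-separated key/value pair. A minimal parser sketch for that three-line format:

```python
def parse_lfs_pointer(text):
    """Parse a Git LFS pointer file into a dict of its key/value lines."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:4017e4667a54b03416eff4b7352b4877fe494f0d9bc3e8c2a541c712984234db
size 456677609
"""

info = parse_lfs_pointer(pointer)
print(info["oid"])
print(int(info["size"]) / 1e6)  # checkpoint size in MB, roughly 456.7
```

At 4 bytes per fp32 weight, ~457 MB corresponds to roughly 114M parameters, consistent with a small GPT-2-style model.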
runs/Aug30_13-13-56_leandro-16x-a100-v3/events.out.tfevents.1630329236.leandro-16x-a100-v3.9739.0
CHANGED
```diff
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:5b20f813d39a0b60992eca720bc9bcb766c83d912798c5e7eca737b451ff12c8
+size 13734493
```
wandb/run-20210830_131354-2654p8r7/files/output.log
CHANGED
The diff for this file is too large to render. See raw diff.
wandb/run-20210830_131354-2654p8r7/files/wandb-summary.json
CHANGED
```diff
@@ -1 +1 @@
-{"lr": 0.
+{"lr": 0.0002553116513810337, "samples": 14400000, "steps": 74999, "loss/train": 1.5982519388198853, "_runtime": 48994, "_timestamp": 1630378228, "_step": 75004, "loss/eval": 1.139208436012268, "perplexity": 3.1242942810058594}
```
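The logged perplexity is consistent with the eval loss: for a language model, perplexity is the exponential of the per-token cross-entropy loss. Checking the two values from the summary above:

```python
import math

# Values taken directly from the wandb summary at step 75000
loss_eval = 1.139208436012268
perplexity = 3.1242942810058594

# perplexity = exp(loss); the logged values should agree up to
# float32 rounding in the stored summary
print(math.exp(loss_eval))
print(math.isclose(math.exp(loss_eval), perplexity, rel_tol=1e-5))  # True
```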
wandb/run-20210830_131354-2654p8r7/logs/debug-internal.log
CHANGED
```diff
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:7fc81c913b6890e4d27a435ff559719a1337b7a604b6744abdf8d91e142f184b
+size 35143002
```
wandb/run-20210830_131354-2654p8r7/run-2654p8r7.wandb
CHANGED
```diff
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:d28b3a808afbc202c98c8bf401735112be514464aaea875635bc7e8c23ba108e
+size 28956617
```