Upload 5 files
- README.md +52 -13
- app.py +34 -0
- packages.txt +2 -0
- prepare_assets.py +22 -0
- requirements.txt +15 -0
README.md
CHANGED
@@ -1,14 +1,53 @@

# Deploying GestureLSM to Hugging Face Spaces

This directory contains a minimal scaffold for running the `demo.py` Gradio UI inside a Hugging Face Space. Copy the files into a new Space repository (or push this folder as-is) and provide the model checkpoints via the Hugging Face Hub so the app can download them at startup.

## 1. Create the Space

1. In your Hugging Face account, click **New Space**.
2. Choose a Space name, set **SDK** to **Gradio**, and select **CPU Basic** hardware.
3. Leave the default visibility or mark it **Private** while testing.

## 2. Populate the Space repository

Upload the following from this folder to the Space:

- `app.py` – boots the Gradio interface, downloads weights if available, and ensures output folders exist.
- `requirements.txt` – Python dependencies.
- `packages.txt` – system packages (ffmpeg + openfst).
- `prepare_assets.py` (optional helper described below).
- Any configs, sample audio, and auxiliary data your demo needs (e.g. `configs/`, `demo/examples/`, `mean_std/`).

> **Tip**: keep the repository lightweight. Large checkpoints should live in a separate dataset repo and be fetched at runtime.

## 3. Host the checkpoints

1. Create a private **dataset** repo on Hugging Face (e.g. `username/gesturelsm-assets`).
2. Upload the required files:
   - `ckpt/net_300000_upper.pth`
   - `ckpt/net_300000_lower.pth`
   - `ckpt/net_300000_hands.pth`
   - `ckpt/net_300000_lower_trans.pth`
   - `ckpt/new_540_shortcut.bin`
   - `mean_std/*.npy`
3. In your Space's **Settings → Variables and secrets**, add a variable named `HF_GESTURELSM_WEIGHTS_REPO` with the value of the dataset repo (for example `username/gesturelsm-assets`).

When the Space boots, `app.py` will call `snapshot_download` to pull everything into `ckpt/`, preserving the original directory layout expected by `demo.py`.
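As a startup sanity check, the downloaded layout can be verified with a small helper. This is a sketch only: the `REQUIRED` list and `missing_assets` function are illustrative additions, not part of the repo.

```python
# Sketch: check that the snapshot pulled the files demo.py expects.
# REQUIRED and missing_assets are hypothetical helpers, not part of this repo.
from pathlib import Path
import tempfile

REQUIRED = [
    "ckpt/net_300000_upper.pth",
    "ckpt/net_300000_lower.pth",
    "ckpt/net_300000_hands.pth",
    "ckpt/net_300000_lower_trans.pth",
    "ckpt/new_540_shortcut.bin",
]

def missing_assets(base: Path) -> list[str]:
    """Return the required files that are absent under *base*."""
    return [rel for rel in REQUIRED if not (base / rel).exists()]

# Demo with a temporary directory standing in for the Space working dir.
with tempfile.TemporaryDirectory() as tmp:
    base = Path(tmp)
    for rel in REQUIRED[:2]:  # pretend only the first two files arrived
        path = base / rel
        path.parent.mkdir(parents=True, exist_ok=True)
        path.touch()
    still_missing = missing_assets(base)
```

Calling something like this right after `snapshot_download` turns a cryptic `FileNotFoundError` deep in `demo.py` into an actionable startup message.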
## 4. Optional asset preparation script

If you need to perform additional setup (e.g. copying assets after download), you can push the provided `prepare_assets.py` and call it from `app.py` or `__init__.py` before launching the interface. Modify it to match your workflow.

## 5. Verify locally

Before pushing, test with the same layout on your machine:

```bash
conda activate gesturelsm
pip install -r hf_space/requirements.txt
python hf_space/app.py
```

Ensure the UI launches and generates outputs using locally stored checkpoints.

## 6. Push & run

Commit and push the Space repository. After the build completes, the public URL will auto-refresh and display the Gradio interface. Upload audio, wait for inference to finish, then download the generated video/NPZ results just like the local demo.

---

Add or adjust dependencies as new features require. Heavy rendering tasks can be slow on free CPU hardware; consider upgrading the Space or trimming the pipeline (e.g. precomputing alignments) if latency becomes an issue.
app.py
ADDED
@@ -0,0 +1,34 @@

```python
import os
from pathlib import Path

from huggingface_hub import snapshot_download

# ---------------------------------------------------------------------------
# Optional: pull checkpoints and auxiliary assets at startup. Set the
# HF_GESTURELSM_WEIGHTS_REPO environment variable in the Space settings to the
# dataset or model repo that hosts the pre-trained weights
# (e.g. "username/gesturelsm-assets"). Files will be placed under ckpt/ so the
# existing config paths keep working.
# ---------------------------------------------------------------------------
BASE_DIR = Path(__file__).parent.resolve()
CKPT_DIR = BASE_DIR / "ckpt"
CKPT_DIR.mkdir(parents=True, exist_ok=True)

weights_repo = os.environ.get("HF_GESTURELSM_WEIGHTS_REPO", "").strip()
if weights_repo:
    snapshot_download(
        repo_id=weights_repo,
        repo_type="dataset",
        local_dir=CKPT_DIR,
        local_dir_use_symlinks=False,
        allow_patterns=["*.pth", "*.bin", "*.npz", "*.npy"],
    )

# Ensure expected runtime directories exist so the demo can write outputs.
for relative in ["outputs/audio2pose", "datasets/BEAT_SMPL"]:
    (BASE_DIR / relative).mkdir(parents=True, exist_ok=True)

# Reuse the existing Gradio interface defined in demo.py.
from demo import demo as gesture_demo  # noqa: E402

if __name__ == "__main__":
    # Gradio 4 removed queue(concurrency_count=...); the replacement keyword
    # is default_concurrency_limit.
    gesture_demo.queue(default_concurrency_limit=1).launch(
        server_name="0.0.0.0", share=False
    )
```
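The `allow_patterns` argument above uses shell-style globs to decide which repo files to fetch. A quick way to preview the filter's effect is to emulate it with `fnmatch`; this is a sketch with an illustrative file list, not files read from a real repo.

```python
# Sketch: emulate snapshot_download's allow_patterns filter with fnmatch.
# The repo_files list below is illustrative, not pulled from a real repo.
from fnmatch import fnmatch

ALLOW_PATTERNS = ["*.pth", "*.bin", "*.npz", "*.npy"]
repo_files = [
    "ckpt/net_300000_upper.pth",
    "ckpt/new_540_shortcut.bin",
    "mean_std/beat_mean.npy",
    "README.md",
]

# fnmatch's "*" also matches "/", so "*.pth" matches nested paths like
# "ckpt/net_300000_upper.pth".
downloaded = [
    f for f in repo_files
    if any(fnmatch(f, pattern) for pattern in ALLOW_PATTERNS)
]
```

This is why the dataset repo's own `README.md` (or any stray file that does not match a pattern) never lands in `ckpt/`.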
packages.txt
ADDED
@@ -0,0 +1,2 @@

```text
ffmpeg
openfst-bin
```
prepare_assets.py
ADDED
@@ -0,0 +1,22 @@

```python
"""Utility helpers for the Hugging Face Space deployment.

Call these functions from ``app.py`` if you want extra setup beyond the basic
snapshot download (for example copying files into specific folders). This file
is optional; include or edit it as needed for your workflow.
"""

from __future__ import annotations

import shutil
from pathlib import Path


def copy_if_missing(src: Path, dst: Path) -> None:
    """Copy ``src`` into ``dst`` if the destination path does not yet exist."""
    if dst.exists():
        return
    dst.parent.mkdir(parents=True, exist_ok=True)
    if src.is_dir():
        shutil.copytree(src, dst)
    else:
        shutil.copy2(src, dst)
```
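For example, `copy_if_missing` can stage a downloaded sample file without clobbering anything already present. The sketch below reproduces the helper inline so it is self-contained; the paths are illustrative only.

```python
# Sketch: copy_if_missing in action. The helper is reproduced from
# prepare_assets.py so this example is self-contained; paths are illustrative.
import shutil
import tempfile
from pathlib import Path

def copy_if_missing(src: Path, dst: Path) -> None:
    if dst.exists():
        return
    dst.parent.mkdir(parents=True, exist_ok=True)
    if src.is_dir():
        shutil.copytree(src, dst)
    else:
        shutil.copy2(src, dst)

with tempfile.TemporaryDirectory() as tmp:
    root = Path(tmp)
    src = root / "downloads" / "example.wav"
    src.parent.mkdir(parents=True)
    src.write_bytes(b"RIFF")

    dst = root / "demo" / "examples" / "example.wav"
    copy_if_missing(src, dst)    # first call copies the file
    copied = dst.read_bytes()

    dst.write_bytes(b"EDITED")   # simulate a local modification
    copy_if_missing(src, dst)    # second call is a no-op: dst exists
    preserved = dst.read_bytes()
```

The "if missing" guard makes the helper safe to run on every Space restart: re-running setup never overwrites files a previous boot already put in place.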
requirements.txt
ADDED
@@ -0,0 +1,15 @@

```text
gradio>=4.42.0
huggingface-hub>=0.24.0
torch==2.4.0
torchvision==0.19.0
torchaudio==2.4.0
librosa>=0.10,<0.11
einops
loguru
omegaconf>=2.3
tqdm
pyvirtualdisplay
soundfile
montreal-forced-aligner==2.1.0
numpy==1.26.4
scipy>=1.11
```