π₀ SO-101 Libero Fine-tuned Model

This is a π₀ (pi-zero) model fine-tuned on the Libero dataset for manipulation tasks with the SO-101 follower arm.

Model Details

  • Model Type: π₀ (pi-zero), a vision-language-action model from Physical Intelligence
  • Robot Type: SO-101 Follower Arm
  • Dataset: Libero
  • Task: Manipulation (pick and place, object manipulation)
  • Input: RGB images (224x224) + robot state (32-dim)
  • Output: Robot actions (7-dim); see the inference sketch after this list
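
A quick way to sanity-check these shapes is to load the checkpoint through LeRobot's Python API and run a single dummy forward pass. The snippet below is a minimal sketch, not a verified script: the import path differs between LeRobot versions, and the batch keys ("observation.images.up", "observation.state", "task") are assumptions that must match the features this checkpoint was actually trained with.

import torch
from lerobot.policies.pi0.modeling_pi0 import PI0Policy  # older versions: lerobot.common.policies.pi0.modeling_pi0

device = "cuda" if torch.cuda.is_available() else "cpu"
policy = PI0Policy.from_pretrained("SilverKittyy/pi0_so101_libero_finetune")
policy.to(device)
policy.eval()
policy.reset()

# Dummy batch with the shapes listed above; key names are assumptions.
batch = {
    "observation.images.up": torch.zeros(1, 3, 224, 224, device=device),
    "observation.state": torch.zeros(1, 32, device=device),
    "task": ["pick up the screwdriver"],
}

with torch.no_grad():
    action = policy.select_action(batch)

print(action.shape)  # expected: (1, 7) per this card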

Usage with LeRobot

Recording with Policy

lerobot-record \
  --robot.type=so101_follower \
  --robot.port=/dev/ttyACM0 \
  --robot.cameras='{"up":{"type":"opencv","index_or_path":0,"width":640,"height":480,"fps":30}}' \
  --robot.id=my_awesome_follower_arm \
  --dataset.single_task="pick up the screwdriver" \
  --policy.path=SilverKittyy/pi0_so101_libero_finetune \
  --dataset.repo_id=local_eval \
  --dataset.num_episodes=1

Evaluation

lerobot-eval \
  --policy.path=SilverKittyy/pi0_so101_libero_finetune \
  --dataset.repo_id=your_evaluation_dataset \
  --eval.n_episodes=10

Model Architecture

  • Vision Encoder: Pre-trained encoder kept frozen during fine-tuning
  • State Processing: 32-dimensional state vector
  • Action Space: 7-dimensional (6 joint positions + gripper)
  • Normalization: Mean-std normalization for all inputs and outputs (see the sketch below)
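
Here, mean-std normalization means each input feature is shifted and scaled by statistics computed over the training dataset, x_norm = (x - mean) / std, and the predicted actions are mapped back with the inverse transform before being sent to the robot. A minimal sketch of that convention (not the exact LeRobot implementation):

import torch

def normalize(x: torch.Tensor, mean: torch.Tensor, std: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    # Shift and scale by dataset statistics; eps guards against zero-variance features.
    return (x - mean) / (std + eps)

def unnormalize(x: torch.Tensor, mean: torch.Tensor, std: torch.Tensor) -> torch.Tensor:
    # Inverse transform applied to the policy's predicted actions.
    return x * std + mean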

Training Details

  • Base Model: π₀ pre-trained model
  • Fine-tuning Dataset: Libero manipulation tasks
  • Training Steps: 29,999
  • Learning Rate: 2.5e-05
  • Optimizer: AdamW with cosine learning-rate decay (sketch below)
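
This optimizer configuration can be reproduced with standard PyTorch components. The sketch below mirrors the values listed above (learning rate 2.5e-05, cosine decay over roughly 30,000 steps); the weight decay, batch size, and loss are placeholders, not values from the actual training run.

import torch
from torch.optim.lr_scheduler import CosineAnnealingLR

model = torch.nn.Linear(32, 7)  # stand-in for the π₀ policy being fine-tuned
optimizer = torch.optim.AdamW(model.parameters(), lr=2.5e-5, weight_decay=1e-4)  # weight decay is an assumption
scheduler = CosineAnnealingLR(optimizer, T_max=30_000)  # cosine decay over the ~30k fine-tuning steps

# scheduler.step() is called once per optimizer step; loop truncated for illustration.
for step in range(3):
    loss = model(torch.randn(8, 32)).pow(2).mean()  # placeholder loss
    loss.backward()
    optimizer.step()
    scheduler.step()
    optimizer.zero_grad()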

Hardware Requirements

  • Robot: SO-101 Follower Arm
  • Camera: OpenCV-compatible camera (e.g., /dev/video0)
  • Compute: CUDA-capable GPU recommended (see the check below)
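
Before recording or evaluation, it is worth confirming that the GPU and camera are visible to the software stack. A small sanity check using PyTorch and OpenCV (camera index 0 is an assumption matching the recording command above):

import cv2
import torch

print("CUDA available:", torch.cuda.is_available())

cap = cv2.VideoCapture(0)  # index 0 matches the "up" camera in the recording command; adjust as needed
ok, frame = cap.read()
print("Camera frame captured:", ok, None if frame is None else frame.shape)
cap.release()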

Safety Notes

  • This model was fine-tuned for manipulation tasks and may behave unpredictably on inputs outside its training distribution
  • Always ensure proper safety measures when using with real hardware
  • Test in simulation or with safety constraints before real-world deployment