RLBench-OG Dataset
Overview
RLBench-OG is derived from the RLBench benchmark and is designed to evaluate the robustness of models under occlusion, as well as their generalization capability under various environmental perturbations. It comprises ten tasks selected from the original RLBench task list, covering both simple scenarios and more complex long-horizon tasks. The benchmark consists of two main components: an Occlusion Suite and a Generalization Suite.
Dataset Components
Occlusion Suite
The Occlusion Suite focuses on scenarios where the camera's line of sight to key task-relevant regions is fully or partially blocked, leading to incomplete observations. Occlusions are introduced to the front_camera through two mechanisms:
- Self-occlusion by object pose perturbation: Modifying the position or orientation of task-relevant objects to occlude essential interaction points (e.g., drawer handles, target objects).
- Occlusion by external distractors: Placing task-irrelevant objects (cabinets, TVs, doors, etc.) in front of the workspace to partially block the scene.
Task-specific Occlusion Configurations:
- basketball_in_hoop: Basket and trash can poses are perturbed to occlude the basketball
- block_pyramid: A cabinet is placed in front of the workspace to occlude part of the blocks
- close_drawer: The drawer is rotated such that its geometry occludes the handle
- scoop_with_spatula: A wine bottle is positioned to block the target cube
- solve_puzzle: A storage cabinet is placed to occlude puzzle pieces
- straighten_rope: A desk lamp is placed in front of one end of the rope
- take_plate_off_colored_dish_rack: A box with a laptop blocks visibility of the plate
- take_usb_out_of_computer: A cabinet blocks the USB port area
- toilet_seat_down: A door is placed such that it occludes the toilet seat
- water_plants: A television partially blocks both the watering can and the plant
Generalization Suite
The Generalization Suite evaluates robustness to environment-conditioned variations. Based on the same ten tasks, the suite includes six types of environment variation, each modifying exactly one factor while keeping all others unchanged. Following the pipeline from COLOSSEUM, variation types are specified in YAML configuration files, and data collection procedures in JSON metadata.
Variation Types:
- light_color: RGB values sampled within predefined ranges and applied to directional lights
- table_texture: Textures sampled from a texture dataset and applied to the table
- table_color: RGB values sampled within predefined ranges and applied to the table surface
- background_texture: Background textures randomly sampled and applied
- distractor: Two distractor objects sampled from a 3D asset dataset and spawned within workspace boundaries
- camera_pose: Camera position and orientation offsets sampled and applied to front, left-shoulder, and right-shoulder cameras
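As a rough illustration, a single-factor variation of the kind listed above could be declared in a YAML file along these lines. The field names here are hypothetical; the actual schema follows the COLOSSEUM pipeline and may differ:

```yaml
# Hypothetical sketch of a per-task variation config (field names
# are illustrative; the real schema follows the COLOSSEUM pipeline).
env:
  task_name: basketball_in_hoop
  seed: 42
variations:
  - name: light_color
    color_range:              # RGB sampled uniformly within these bounds
      low: [0.2, 0.2, 0.2]
      high: [1.0, 1.0, 1.0]
```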
Dataset Structure
The dataset is organized by task and variation type. Each configuration includes:
- RGB-D images from multiple camera viewpoints
- Robot state information
- Robot actions
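If the per-task episodes are packed as WebDataset-style `.tar.xz` shards (as file names such as `take_usb_out_of_computer.tar.xz` in this repository suggest), the members can be streamed with Python's standard `tarfile` module without extracting the archive to disk. The sketch below builds a tiny in-memory archive with illustrative member names (not the dataset's actual layout) and iterates over it:

```python
# Sketch only: member names below are hypothetical placeholders,
# not the dataset's actual internal layout.
import io
import tarfile

# Build a toy .tar.xz archive in memory with two illustrative members.
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w:xz") as tar:
    for name, payload in [
        ("episode0/front_rgb_000.png", b"<png bytes>"),
        ("episode0/low_dim_obs.pkl", b"<pickle bytes>"),
    ]:
        info = tarfile.TarInfo(name=name)
        info.size = len(payload)
        tar.addfile(info, io.BytesIO(payload))

# Stream the members back out without extracting to disk.
buf.seek(0)
names = []
with tarfile.open(fileobj=buf, mode="r:xz") as tar:
    for member in tar:
        names.append(member.name)

print(names)
```

The same iteration pattern works on a real shard by passing its path to `tarfile.open(path, mode="r:xz")`.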
Visualizations
For visualizations of different variant settings corresponding to each task, refer to the figures below:
Visualization of different variants for the basketball_in_hoop, block_pyramid, close_drawer, scoop_with_spatula, solve_puzzle tasks.
Visualization of different variants for the straighten_rope, take_plate_off_colored_dish_rack, take_usb_out_of_computer, toilet_seat_down, water_plants tasks.
Citation
If you use this dataset in your research, please cite our paper:
@misc{bai2025learningacttaskawareview,
title={Learning to See and Act: Task-Aware Virtual View Exploration for Robotic Manipulation},
author={Yongjie Bai and Zhouxia Wang and Yang Liu and Kaijun Luo and Yifan Wen and Mingtong Dai and Weixing Chen and Ziliang Chen and Lingbo Liu and Guanbin Li and Liang Lin},
year={2025},
eprint={2508.05186},
archivePrefix={arXiv},
primaryClass={cs.RO},
url={https://arxiv.org/abs/2508.05186},
}
License
This dataset is released under the MIT License.
Contact
For questions about this dataset, please contact: [email protected]