SWE-CARE: A Comprehensiveness-aware Benchmark for Code Review Evaluation
Dataset Description
SWE-CARE (Software Engineering - Comprehensive Analysis and Review Evaluation) is a comprehensiveness-aware benchmark for evaluating Large Language Models (LLMs) on repository-level code review tasks. The dataset features real-world code review scenarios from popular open-source Python and Java repositories, with comprehensive metadata and reference review comments.
Dataset Summary
- Repository: inclusionAI/SWE-CARE
- Paper: CodeFuse-CR-Bench: A Comprehensiveness-aware Benchmark for End-to-End Code Review Evaluation
- Languages: Python
- License: Apache 2.0
- Splits:
  - `test`: 671 instances (primary evaluation set)
  - `dev`: 7,086 instances (development/training set)
Dataset Structure
Data Instances
Each instance in the dataset represents a code review task with the following structure:
```json
{
  "instance_id": "voxel51__fiftyone-2353@02e9ba1",
  "repo": "voxel51/fiftyone",
  "language": "Python",
  "pull_number": 2353,
  "title": "Fix issue with dataset loading",
  "body": "This PR fixes...",
  "created_at": "2023-01-15T10:30:00Z",
  "problem_statement": "Issue #2350: Dataset fails to load...",
  "hints_text": "Comments from the issue discussion...",
  "resolved_issues": [
    {
      "number": 2350,
      "title": "Dataset loading error",
      "body": "When loading datasets..."
    }
  ],
  "base_commit": "abc123...",
  "commit_to_review": {
    "head_commit": "def456...",
    "head_commit_message": "Fix dataset loading logic",
    "patch_to_review": "diff --git a/file.py..."
  },
  "reference_review_comments": [
    {
      "text": "Consider adding error handling here",
      "path": "src/dataset.py",
      "diff_hunk": "@@ -10,5 +10,7 @@...",
      "line": 15,
      "start_line": 14,
      "original_line": 15,
      "original_start_line": 14
    }
  ],
  "merged_commit": "ghi789...",
  "merged_patch": "diff --git a/file.py...",
  "metadata": {
    "problem_domain": "Bug Fixes",
    "difficulty": "medium",
    "estimated_review_effort": 3
  }
}
```
Data Fields
Core Fields
- `instance_id` (string): Unique identifier in format `repo_owner__repo_name-PR_number@commit_sha_short`
- `repo` (string): GitHub repository in format `owner/name`
- `language` (string): Primary programming language (`Python` or `Java`)
- `pull_number` (int): GitHub pull request number
- `title` (string): Pull request title
- `body` (string): Pull request description
- `created_at` (string): ISO 8601 timestamp of PR creation
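Given the documented `instance_id` layout, each identifier can be split back into its components. The helper below is an illustrative sketch (not part of the SWE-CARE tooling) that assumes the `repo_owner__repo_name-PR_number@commit_sha_short` format holds and that the owner contains no double underscore:

```python
import re

# Illustrative parser for the documented instance_id layout:
#   repo_owner__repo_name-PR_number@commit_sha_short
# Assumption: the owner segment contains no "__" of its own.
_ID_PATTERN = re.compile(r"^(?P<owner>.+?)__(?P<name>.+)-(?P<pr>\d+)@(?P<sha>[0-9a-f]+)$")

def parse_instance_id(instance_id: str) -> dict:
    match = _ID_PATTERN.match(instance_id)
    if match is None:
        raise ValueError(f"Unrecognized instance_id: {instance_id!r}")
    return {
        "repo": f"{match.group('owner')}/{match.group('name')}",
        "pull_number": int(match.group("pr")),
        "commit_sha_short": match.group("sha"),
    }

print(parse_instance_id("voxel51__fiftyone-2353@02e9ba1"))
# → {'repo': 'voxel51/fiftyone', 'pull_number': 2353, 'commit_sha_short': '02e9ba1'}
```

This is handy for joining benchmark results back to the source pull requests without re-reading the full instance record.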
Problem Context
- `problem_statement` (string): Combined title and body of resolved issue(s)
- `hints_text` (string): Relevant comments from issues prior to the PR
- `resolved_issues` (list): Array of resolved issues with:
  - `number` (int): Issue number
  - `title` (string): Issue title
  - `body` (string): Issue description
Code Changes
- `base_commit` (string): Base commit SHA before changes
- `commit_to_review` (dict): The commit being reviewed:
  - `head_commit` (string): Commit SHA to review
  - `head_commit_message` (string): Commit message
  - `patch_to_review` (string): Git diff of changes to review
- `merged_commit` (string): Final merged commit SHA
- `merged_patch` (string): Final merged changes (ground truth)
Reference Reviews
- `reference_review_comments` (list): Human code review comments with:
  - `text` (string): Review comment text
  - `path` (string): File path being reviewed
  - `diff_hunk` (string): Relevant code diff context
  - `line` (int): Line number in new version
  - `start_line` (int): Start line for multi-line comments
  - `original_line` (int): Line number in original version
  - `original_start_line` (int): Original start line
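Since several reference comments can target the same file, it is often convenient to group them by `path` before comparing against model output. A minimal sketch, assuming the comments are plain dicts with the fields listed above:

```python
from collections import defaultdict

def group_comments_by_path(comments):
    """Group reference review comments by the file they target."""
    grouped = defaultdict(list)
    for comment in comments:
        grouped[comment["path"]].append(comment)
    return dict(grouped)

# Toy comments mirroring the reference_review_comments schema
comments = [
    {"text": "Consider adding error handling here", "path": "src/dataset.py", "line": 15},
    {"text": "Typo in docstring", "path": "src/dataset.py", "line": 42},
    {"text": "Unused import", "path": "src/utils.py", "line": 3},
]
print({path: len(items) for path, items in group_comments_by_path(comments).items()})
# → {'src/dataset.py': 2, 'src/utils.py': 1}
```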
Metadata
- `metadata` (dict): LLM-classified attributes:
  - `problem_domain` (string): Category like "Bug Fix", "Feature", "Refactoring", etc.
  - `difficulty` (string): "Easy", "Medium", or "Hard"
  - `estimated_review_effort` (int): Scale of 1-5 for review complexity
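The `metadata` block makes it easy to slice the benchmark, e.g. to look only at one difficulty tier or problem domain. A sketch over plain dicts (toy values below; the same idea applies to records loaded via `datasets`):

```python
from collections import Counter

def count_by_domain(instances):
    """Tally instances per LLM-classified problem domain."""
    return Counter(inst["metadata"]["problem_domain"] for inst in instances)

def filter_by_difficulty(instances, difficulty):
    """Keep only instances whose metadata difficulty matches."""
    return [i for i in instances if i["metadata"]["difficulty"] == difficulty]

# Toy instances mirroring the metadata schema above
instances = [
    {"instance_id": "a", "metadata": {"problem_domain": "Bug Fixes", "difficulty": "low"}},
    {"instance_id": "b", "metadata": {"problem_domain": "Bug Fixes", "difficulty": "medium"}},
    {"instance_id": "c", "metadata": {"problem_domain": "New Feature Additions", "difficulty": "medium"}},
]
print(count_by_domain(instances))
print([i["instance_id"] for i in filter_by_difficulty(instances, "medium")])
# → Counter({'Bug Fixes': 2, 'New Feature Additions': 1})
# → ['b', 'c']
```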
Data Splits
| Split | Instances | Description |
|---|---|---|
| test | 671 | Primary evaluation set for benchmarking |
| dev | 7,086 | Development set for training/fine-tuning |
Usage
Loading the Dataset
```python
from datasets import load_dataset

# Load the test split (default for evaluation)
dataset = load_dataset("inclusionAI/SWE-CARE", split="test")

# Load the dev split
dev_dataset = load_dataset("inclusionAI/SWE-CARE", split="dev")

# Load both splits
full_dataset = load_dataset("inclusionAI/SWE-CARE")
```
Using with SWE-CARE Evaluation Framework
```python
from swe_care.utils.load import load_code_review_dataset

# Load from Hugging Face (default)
instances = load_code_review_dataset()

# Access instance data
for instance in instances:
    print(f"Instance: {instance.instance_id}")
    print(f"Repository: {instance.repo}")
    print(f"Problem: {instance.problem_statement}")
    print(f"Patch to review: {instance.commit_to_review.patch_to_review}")
    print(f"Reference comments: {len(instance.reference_review_comments)}")
```
Running Evaluation
See the GitHub repository for detailed documentation and examples.
Evaluation Metrics and Baselines Results
See the paper for comprehensive evaluation metrics and baseline results on various LLMs.
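The official metrics are defined in the paper; as a rough first check, one can compare which files a model commented on against the files the human reviewers touched. The path-level recall below is an illustrative proxy only, not the benchmark's scoring:

```python
def path_recall(predicted_paths, reference_comments):
    """Fraction of reference-commented files the model also flagged.

    Illustrative proxy metric, not the official SWE-CARE scoring.
    """
    reference_paths = {c["path"] for c in reference_comments}
    if not reference_paths:
        return 1.0  # nothing to recall
    hit = reference_paths & set(predicted_paths)
    return len(hit) / len(reference_paths)

# Toy reference comments with the same `path` field as the dataset schema
refs = [{"path": "src/dataset.py"}, {"path": "src/utils.py"}]
print(path_recall(["src/dataset.py", "README.md"], refs))  # → 0.5
```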
Additional Information
Citation
If you use this dataset in your research, please cite:
```bibtex
@misc{guo2025codefusecrbenchcomprehensivenessawarebenchmarkendtoend,
      title={CodeFuse-CR-Bench: A Comprehensiveness-aware Benchmark for End-to-End Code Review Evaluation in Python Projects},
      author={Hanyang Guo and Xunjin Zheng and Zihan Liao and Hang Yu and Peng DI and Ziyin Zhang and Hong-Ning Dai},
      year={2025},
      eprint={2509.14856},
      archivePrefix={arXiv},
      primaryClass={cs.SE},
      url={https://arxiv.org/abs/2509.14856},
}
```
Contributions
We welcome contributions! Please see our GitHub repository for:
- Data collection improvements
- New evaluation metrics
- Baseline model results
- Bug reports and feature requests
License
This dataset is released under the Apache 2.0 License. See LICENSE for details.
Changelog
- v0.2.0 (2025-10): Expanded dataset to 671 test instances
- v0.1.0 (2025-09): Initial release with 601 test instances and 7,086 dev instances