---
license: cc-by-nc-4.0
task_categories:
- image-text-to-text
- image-segmentation
language:
- en
tags:
- medical
- image
- detection
- measurement
- angle
- distance
pretty_name: MedVision
size_categories:
- 10M<n<100M
---
<div align="center">
<img src="fig/medvision-logo.png" alt="MedVision Logo" /><br/>
<div align="center" style="font-weight: bold; font-size: 28px;">
MedVision: Dataset and Benchmark for Quantitative Medical Image Analysis
</div>
| 🌏 [**Project**](https://medvision-vlm.github.io) | πŸ§‘πŸ»β€πŸ’» [**Code**](https://github.com/YongchengYAO/MedVision) | 🩻 [**Dataset**](https://huggingface.co/datasets/YongchengYAO/MedVision) | 🐳 [**Docker**](https://hub.docker.com/r/vincentycyao/medvision/tags) | πŸ€— [**Models**](https://huggingface.co/collections/YongchengYAO/medvision-sft-models) | πŸ“– [**arXiv**](https://arxiv.org/abs/2511.18676) |
πŸ”Ž Benchmarking VLMs for detection, tumor/lesion size estimation, and angle/distance measurement from medical images πŸ“
πŸ’Ώ 30.8M annotated samples | multi-modality | multi-anatomy | 3D/2D medical images πŸ’Ώ
</div>
```
@misc{yao2025medvisiondatasetbenchmarkquantitative,
title={MedVision: Dataset and Benchmark for Quantitative Medical Image Analysis},
author={Yongcheng Yao and Yongshuo Zong and Raman Dutt and Yongxin Yang and Sotirios A Tsaftaris and Timothy Hospedales},
year={2025},
eprint={2511.18676},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2511.18676},
}
```
<br/>
# News
- [Oct 8, 2025] πŸš€ Release **MedVision** dataset v1.0.0
<br/>
# Change Log
For essential updates, check the [change log](https://huggingface.co/datasets/YongchengYAO/MedVision/blob/main/doc/changelog.md).
<br/>
# Datasets
[File structure](https://huggingface.co/datasets/YongchengYAO/MedVision/blob/main/doc/file-structure.md): raw data is automatically downloaded and processed, and our annotations are in each dataset folder
πŸ“ The MedVision dataset consists of public medical images and
quantitative annotations from this study. MRI: Magnetic Resonance Imaging;
CT: Computed Tomography; PET: positron emission tomography; US: Ultrasound;
b-box: bounding box; T/L: tumor/lesion size; A/D: angle/distance; HF:
HuggingFace; GC: Grand-Challenge; * redistributed.
| **Dataset** | **Anatomy** | **Modality** | **Annotation** | **Availability** | **Source** | **# Sample (Train/Test)** | | | **Status** |
| ---------------- | ------------- | ------------ | -------------- | ---------------- | ------------ | ------------------------- | ------------- | -------------- | ---------- |
| | | | | | | **b-box** | **T/L** | **A/D** | |
| AbdomenAtlas | abdomen | CT | b-box | open | HF | 6.8 / 2.9M | 0 | 0 | βœ… |
| AbdomenCT-1K | abdomen | CT | b-box | open | Zenodo | 0.7 / 0.3M | 0 | 0 | βœ… |
| ACDC | heart | MRI | b-box | open | HF*, others | 9.5 / 4.8K | 0 | 0 | βœ… |
| AMOS22 | abdomen | CT, MRI | b-box | open | Zenodo | 0.8 / 0.3M | 0 | 0 | βœ… |
| autoPET-III      | whole body    | CT, PET      | b-box, T/L     | open             | HF*, others  | 22 / 9.7K                 | 0.5 / 0.2K    | 0              | βœ…         |
| BCV15 | abdomen | CT | b-box | open | HF*, Synapse | 71 / 30K | 0 | 0 | βœ… |
| BraTS24 | brain | MRI | b-box, T/L | open | HF*, Synapse | 0.8 / 0.3M | 7.9 / 3.1K | 0 | βœ… |
| CAMUS | heart | US | b-box | open | HF*, others | 0.7 / 0.3M | 0 | 0 | βœ… |
| Ceph-Bio-400 | head and neck | X-ray | b-box, A/D | open | HF*, others | 0 | 0 | 5.3 / 2.3K | βœ… |
| CrossModDA | brain | MRI | b-box | open | HF*, Zenodo | 3.0 / 1.0K | 0 | 0 | βœ… |
| FeTA24 | fetal brain | MRI | b-box, A/D | registration | Synapse | 34 / 15K | 0 | 0.2 / 0.1K | βœ… |
| FLARE22 | abdomen | CT | b-box | open | HF*, others | 72 / 33K | 0 | 0 | βœ… |
| HNTSMRG24 | head and neck | MRI | b-box, T/L | open | Zenodo | 18 / 6.6K | 1.0 / 0.4K | 0 | βœ… |
| ISLES24 | brain | MRI | b-box | open | HF*, GC | 7.3 / 2.5K | 0 | 0 | βœ… |
| KiPA22 | kidney | CT | b-box, T/L | open | HF*, GC | 26 / 11K | 2.1 / 1.0K | 0 | βœ… |
| KiTS23 | kidney | CT | b-box, T/L | open | HF*, GC | 80 / 35K | 5.9 / 2.6K | 0 | βœ… |
| MSD | multiple | CT, MRI | b-box, T/L | open | others | 0.2 / 0.1M | 5.3 / 2.2K | 0 | βœ… |
| OAIZIB-CM | knee | MRI | b-box | open | HF | 0.5 / 0.2M | 0 | 0 | βœ… |
| SKM-TEA | knee | MRI | b-box | registration | others | 0.2 / 0.1M | 0 | 0 | βœ… |
| ToothFairy2 | tooth | CT | b-box | registration | others | 1.0 / 0.4M | 0 | 0 | βœ… |
| TopCoW24 | brain | CT, MRI | b-box | open | HF*, Zenodo | 43 / 20K | 0 | 0 | βœ… |
| TotalSegmentator | multiple | CT, MRI | b-box | open | HF*, Zenodo | 9.6 / 4.0M | 0 | 0 | βœ… |
| **Total** | | | | | | **22 / 9.2M** | **23 / 9.6K** | **5.6 / 2.4K** | |
⚠️ For the following datasets, which do not allow redistribution, you need to apply for access from the data owners, (optionally) upload the data to your private HF dataset repo, and set the corresponding environment variables.
| **Dataset** | **Source** | Host Platform | Env Var |
| ----------- | ------------------------------------------------------- | ------------- | --------------------------- |
| FeTA24 | https://www.synapse.org/Synapse:syn25649159/wiki/610007 | Synapse | SYNAPSE_TOKEN |
| SKM-TEA | https://aimi.stanford.edu/datasets/skm-tea-knee-mri | Huggingface | MedVision_SKMTEA_HF_ID |
| ToothFairy2 | https://ditto.ing.unimore.it/toothfairy2/ | Huggingface | MedVision_ToothFairy2_HF_ID |
πŸ“ For SKM-TEA and ToothFairy2, you need to process the raw data and upload the preprocessed data to your **private** HF dataset repo. To use HF private dataset, you need to set `HF_TOKEN` and login with `hf auth login --token $HF_TOKEN --add-to-git-credential`
- Prepare SKM-TEA data: [tutorial](https://huggingface.co/datasets/YongchengYAO/MedVision/blob/main/doc/dataset_skm-tea.md)
- Prepare ToothFairy2 data: [tutorial](https://huggingface.co/datasets/YongchengYAO/MedVision/blob/main/doc/dataset_toothfairy2.md)
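Putting the table above together, a typical shell setup for the restricted datasets looks like the sketch below. All token and repo-ID values are placeholders you must replace with your own:

```shell
# Placeholder credentials -- substitute your own values.
export HF_TOKEN="hf_xxxxxxxx"                           # Hugging Face access token
export SYNAPSE_TOKEN="your-synapse-token"               # for FeTA24 (Synapse)
export MedVision_SKMTEA_HF_ID="your-username/skm-tea"   # private HF repo with preprocessed SKM-TEA
export MedVision_ToothFairy2_HF_ID="your-username/toothfairy2"  # private HF repo with preprocessed ToothFairy2
# Then log in to Hugging Face:
#   hf auth login --token $HF_TOKEN --add-to-git-credential
```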
<br/>
# New Datasets Guide
To add new datasets, check this [blog](https://huggingface.co/blog/YongchengYAO/medvision-dataset) for an introduction to the MedVision dataset.
# Requirement
πŸ“ Note: `trust_remote_code` is no longer supported in datasets>=4.0.0, install `dataset` with `pip install datasets==3.6.0`
<br/>
# Use
```python
import os
from datasets import load_dataset
# Set data folder
os.environ["MedVision_DATA_DIR"] = "<your/data/folder>"
# Pick a dataset config name and split
config = "<config-name>"  # e.g., "OAIZIB-CM_BoxSize_Task01_Axial_Test"
split_name = "test" # use "test" for testing set config; use "train" for training set config
# Get dataset
ds = load_dataset(
"YongchengYAO/MedVision",
name=config,
trust_remote_code=True,
split=split_name,
)
```
πŸ“ List of config names [here](https://huggingface.co/datasets/YongchengYAO/MedVision/tree/main/info) (`./info`)
<br/>
# Environment Variables
```bash
# Set where data will be saved; the complete dataset requires ~1 TB
export MedVision_DATA_DIR=<your/data/folder>
# Force download and process raw images, default to "False"
export MedVision_FORCE_DOWNLOAD_DATA="False"
# Force install dataset codebase, default to "False"
export MedVision_FORCE_INSTALL_CODE="False"
```
<br/>
# πŸ“– Essential Dataset Concept
We cover some essential concepts that help you use the MedVision dataset with ease.
## Concepts: Dataset & Data Configuration
- `MedVision`: the collection of public imaging data and our annotations
- `dataset`: name of a public dataset, such as `BraTS24`, `MSD`, `OAIZIB-CM`
- `data-config`: name of predefined subsets
- naming convention: `{dataset}_{annotation-type}_{task-ID}_{slice}_{split}`
- `dataset`: [details](https://huggingface.co/datasets/YongchengYAO/MedVision#datasets)
- `annotation-type`:
- `BoxSize`: detection annotations (bounding box)
- `TumorLesionSize`: tumor/lesion size annotations
- `BiometricsFromLandmarks`: angle/distance annotations
- `task-ID`: `Task[xx]` (Note: this is a local ID within the dataset, not a global ID in MedVision.)
- For datasets with multiple image-mask pairs, we defined tasks in `medvision_ds/datasets/*/preprocess_*.py`
- source: [medvision_ds](https://huggingface.co/datasets/YongchengYAO/MedVision/tree/main/src)
- e.g., detection tasks for the `BraTS24` dataset are defined in the `benchmark_plan` in `medvision_ds/datasets/BraTS24/preprocess_detection.py`
- `slice`: [`Sagittal`, `Coronal`, `Axial`]
- `split`: [`Train`, `Test`]
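The naming convention above can be sketched as a simple string template. The helper below is illustrative and not part of `medvision_ds`:

```python
def make_config_name(dataset, annotation_type, task_id, slice_name, split):
    """Compose a data-config name following the convention
    {dataset}_{annotation-type}_{task-ID}_{slice}_{split}."""
    assert annotation_type in {"BoxSize", "TumorLesionSize", "BiometricsFromLandmarks"}
    assert slice_name in {"Sagittal", "Coronal", "Axial"}
    assert split in {"Train", "Test"}
    # task-ID is zero-padded to two digits, e.g. Task01
    return f"{dataset}_{annotation_type}_Task{task_id:02d}_{slice_name}_{split}"

print(make_config_name("OAIZIB-CM", "BoxSize", 1, "Axial", "Test"))
# OAIZIB-CM_BoxSize_Task01_Axial_Test
```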
## What's returned from MedVision Dataset?
We only share the annotations (https://huggingface.co/datasets/YongchengYAO/MedVision/tree/main/Datasets). The data loading script [`MedVision.py`](https://huggingface.co/datasets/YongchengYAO/MedVision/blob/main/MedVision.py) handles raw image downloading and processing. The fields returned in each sample are defined as follows.
⚠️ In `MedVision.py`, the class `MedVision(GeneratorBasedBuilder)` defines the feature dict and the method `_generate_examples()` builds the dataset.
<details>
<summary>Code block in `MedVision(GeneratorBasedBuilder)` (Click to expand)</summary>
```python
"""
MedVision dataset.
NOTE: To update the features returned by the load_dataset() method, the followings should be updated:
- the feature dict in this class
- the dict yielded by the _generate_examples() method
"""
# The feature dict for the task:
# - Mask-Size
features_dict_MaskSize = {
"dataset_name": Value("string"),
"taskID": Value("string"),
"taskType": Value("string"),
"image_file": Value("string"),
"mask_file": Value("string"),
"slice_dim": Value("uint8"),
"slice_idx": Value("uint16"),
"label": Value("uint16"),
"image_size_2d": Sequence(Value("uint16"), length=2),
"pixel_size": Sequence(Value("float16"), length=2),
"image_size_3d": Sequence(Value("uint16"), length=3),
"voxel_size": Sequence(Value("float16"), length=3),
"pixel_count": Value("uint32"),
"ROI_area": Value("float16"),
}
# The feature dict for the task:
# - Box-Size
features_dict_BoxSize = {
"dataset_name": Value("string"),
"taskID": Value("string"),
"taskType": Value("string"),
"image_file": Value("string"),
"mask_file": Value("string"),
"slice_dim": Value("uint8"),
"slice_idx": Value("uint16"),
"label": Value("uint16"),
"image_size_2d": Sequence(Value("uint16"), length=2),
"pixel_size": Sequence(Value("float16"), length=2),
"image_size_3d": Sequence(Value("uint16"), length=3),
"voxel_size": Sequence(Value("float16"), length=3),
"bounding_boxes": Sequence(
{
"min_coords": Sequence(Value("uint16"), length=2),
"max_coords": Sequence(Value("uint16"), length=2),
"center_coords": Sequence(Value("uint16"), length=2),
"dimensions": Sequence(Value("uint16"), length=2),
"sizes": Sequence(Value("float16"), length=2),
},
),
}
features_dict_BiometricsFromLandmarks = {
"dataset_name": Value("string"),
"taskID": Value("string"),
"taskType": Value("string"),
"image_file": Value("string"),
"landmark_file": Value("string"),
"slice_dim": Value("uint8"),
"slice_idx": Value("uint16"),
"image_size_2d": Sequence(Value("uint16"), length=2),
"pixel_size": Sequence(Value("float16"), length=2),
"image_size_3d": Sequence(Value("uint16"), length=3),
"voxel_size": Sequence(Value("float16"), length=3),
"biometric_profile": {
"metric_type": Value("string"),
"metric_map_name": Value("string"),
"metric_key": Value("string"),
"metric_value": Value("float16"),
"metric_unit": Value("string"),
"slice_dim": Value("uint8"),
},
}
features_dict_TumorLesionSize = {
"dataset_name": Value("string"),
"taskID": Value("string"),
"taskType": Value("string"),
"image_file": Value("string"),
"landmark_file": Value("string"),
"mask_file": Value("string"),
"slice_dim": Value("uint8"),
"slice_idx": Value("uint16"),
"label": Value("uint16"),
"image_size_2d": Sequence(Value("uint16"), length=2),
"pixel_size": Sequence(Value("float16"), length=2),
"image_size_3d": Sequence(Value("uint16"), length=3),
"voxel_size": Sequence(Value("float16"), length=3),
"biometric_profile": Sequence(
{
"metric_type": Value("string"),
"metric_map_name": Value("string"),
"metric_key_major_axis": Value("string"),
"metric_value_major_axis": Value("float16"),
"metric_key_minor_axis": Value("string"),
"metric_value_minor_axis": Value("float16"),
"metric_unit": Value("string"),
},
),
}
```
</details>
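To make the `features_dict_BoxSize` layout concrete, the sketch below reads one Box-Size sample represented as a plain dict. The relation between `dimensions`, `pixel_size`, and physical extent shown here is an assumption for illustration, not the exact definition used in `MedVision.py`:

```python
# Hypothetical Box-Size sample shaped like features_dict_BoxSize
sample = {
    "pixel_size": [0.5, 0.5],  # mm per pixel (made-up values)
    "bounding_boxes": [{
        "min_coords": [10, 20],
        "max_coords": [40, 60],
        "center_coords": [25, 40],
        "dimensions": [30, 40],
    }],
}

box = sample["bounding_boxes"][0]
# Assumed relation: physical extent = pixel dimensions * pixel spacing
size_mm = [d * s for d, s in zip(box["dimensions"], sample["pixel_size"])]
print(size_mm)  # [15.0, 20.0]
```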
<details>
<summary>Code block in `_generate_examples` (Click to expand)</summary>
```python
# Task type: Mask-Size
if taskType == "Mask-Size":
flatten_slice_profiles = (
MedVision_BenchmarkPlannerSegmentation.flatten_slice_profiles_2d
)
if imageSliceType.lower() == "sagittal":
slice_dim = 0
elif imageSliceType.lower() == "coronal":
slice_dim = 1
elif imageSliceType.lower() == "axial":
slice_dim = 2
slice_profile_flattened = flatten_slice_profiles(biometricData, slice_dim)
for idx, case in enumerate(slice_profile_flattened):
# Skip cases with a mask size smaller than 200 pixels
if case["pixel_count"] < 200:
continue
else:
yield idx, {
"dataset_name": dataset_name,
"taskID": taskID,
"taskType": taskType,
"image_file": os.path.join(dataset_dir, case["image_file"]),
"mask_file": os.path.join(dataset_dir, case["mask_file"]),
"slice_dim": case["slice_dim"],
"slice_idx": case["slice_idx"],
"label": case["label"],
"image_size_2d": case["image_size_2d"],
"pixel_size": case["pixel_size"],
"image_size_3d": case["image_size_3d"],
"voxel_size": case["voxel_size"],
"pixel_count": case["pixel_count"],
"ROI_area": case["ROI_area"],
}
# Task type: Box-Size
if taskType == "Box-Size":
if imageType.lower() == "2d":
flatten_slice_profiles = (
MedVision_BenchmarkPlannerDetection.flatten_slice_profiles_2d
)
if imageSliceType.lower() == "sagittal":
slice_dim = 0
elif imageSliceType.lower() == "coronal":
slice_dim = 1
elif imageSliceType.lower() == "axial":
slice_dim = 2
slice_profile_flattened = flatten_slice_profiles(
biometricData, slice_dim
)
for idx, case in enumerate(slice_profile_flattened):
# Skip cases with multiple bounding boxes in the same slice
if len(case["bounding_boxes"]) > 1:
continue
# Skip cases with a bounding box size smaller than 10 pixels in any dimension
elif (
case["bounding_boxes"][0]["dimensions"][0] < 10
or case["bounding_boxes"][0]["dimensions"][1] < 10
):
continue
else:
yield idx, {
"dataset_name": dataset_name,
"taskID": taskID,
"taskType": taskType,
"image_file": os.path.join(dataset_dir, case["image_file"]),
"mask_file": os.path.join(dataset_dir, case["mask_file"]),
"slice_dim": case["slice_dim"],
"slice_idx": case["slice_idx"],
"label": case["label"],
"image_size_2d": case["image_size_2d"],
"pixel_size": case["pixel_size"],
"image_size_3d": case["image_size_3d"],
"voxel_size": case["voxel_size"],
"bounding_boxes": case["bounding_boxes"],
}
# Task type: Biometrics-From-Landmarks
if taskType == "Biometrics-From-Landmarks":
if imageType.lower() == "2d":
flatten_slice_profiles = (
MedVision_BenchmarkPlannerBiometry.flatten_slice_profiles_2d
)
if imageSliceType.lower() == "sagittal":
slice_dim = 0
elif imageSliceType.lower() == "coronal":
slice_dim = 1
elif imageSliceType.lower() == "axial":
slice_dim = 2
slice_profile_flattened = flatten_slice_profiles(
biometricData, slice_dim
)
for idx, case in enumerate(slice_profile_flattened):
yield idx, {
"dataset_name": dataset_name,
"taskID": taskID,
"taskType": taskType,
"image_file": os.path.join(dataset_dir, case["image_file"]),
"landmark_file": os.path.join(
dataset_dir, case["landmark_file"]
),
"slice_dim": case["slice_dim"],
"slice_idx": case["slice_idx"],
"image_size_2d": case["image_size_2d"],
"pixel_size": case["pixel_size"],
"image_size_3d": case["image_size_3d"],
"voxel_size": case["voxel_size"],
"biometric_profile": case["biometric_profile"],
}
# Task type: Biometrics-From-Landmarks-Distance
if taskType == "Biometrics-From-Landmarks-Distance":
if imageType.lower() == "2d":
flatten_slice_profiles = (
MedVision_BenchmarkPlannerBiometry.flatten_slice_profiles_2d
)
if imageSliceType.lower() == "sagittal":
slice_dim = 0
elif imageSliceType.lower() == "coronal":
slice_dim = 1
elif imageSliceType.lower() == "axial":
slice_dim = 2
slice_profile_flattened = flatten_slice_profiles(
biometricData, slice_dim
)
for idx, case in enumerate(slice_profile_flattened):
if case["biometric_profile"]["metric_type"] == "distance":
yield idx, {
"dataset_name": dataset_name,
"taskID": taskID,
"taskType": taskType,
"image_file": os.path.join(dataset_dir, case["image_file"]),
"landmark_file": os.path.join(
dataset_dir, case["landmark_file"]
),
"slice_dim": case["slice_dim"],
"slice_idx": case["slice_idx"],
"image_size_2d": case["image_size_2d"],
"pixel_size": case["pixel_size"],
"image_size_3d": case["image_size_3d"],
"voxel_size": case["voxel_size"],
"biometric_profile": case["biometric_profile"],
}
# Task type: Biometrics-From-Landmarks-Angle
if taskType == "Biometrics-From-Landmarks-Angle":
if imageType.lower() == "2d":
flatten_slice_profiles = (
MedVision_BenchmarkPlannerBiometry.flatten_slice_profiles_2d
)
if imageSliceType.lower() == "sagittal":
slice_dim = 0
elif imageSliceType.lower() == "coronal":
slice_dim = 1
elif imageSliceType.lower() == "axial":
slice_dim = 2
slice_profile_flattened = flatten_slice_profiles(
biometricData, slice_dim
)
for idx, case in enumerate(slice_profile_flattened):
if case["biometric_profile"]["metric_type"] == "angle":
yield idx, {
"dataset_name": dataset_name,
"taskID": taskID,
"taskType": taskType,
"image_file": os.path.join(dataset_dir, case["image_file"]),
"landmark_file": os.path.join(
dataset_dir, case["landmark_file"]
),
"slice_dim": case["slice_dim"],
"slice_idx": case["slice_idx"],
"image_size_2d": case["image_size_2d"],
"pixel_size": case["pixel_size"],
"image_size_3d": case["image_size_3d"],
"voxel_size": case["voxel_size"],
"biometric_profile": case["biometric_profile"],
}
# Task type: Tumor-Lesion-Size
if taskType == "Tumor-Lesion-Size":
if imageType.lower() == "2d":
# Get the target label for the task
target_label = benchmark_plan["tasks"][int(taskID) - 1]["target_label"]
flatten_slice_profiles = (
MedVision_BenchmarkPlannerBiometry_fromSeg.flatten_slice_profiles_2d
)
if imageSliceType.lower() == "sagittal":
slice_dim = 0
elif imageSliceType.lower() == "coronal":
slice_dim = 1
elif imageSliceType.lower() == "axial":
slice_dim = 2
slice_profile_flattened = flatten_slice_profiles(
biometricData, slice_dim
)
for idx, case in enumerate(slice_profile_flattened):
# Skip cases with multiple fitted ellipses in the same slice
if len(case["biometric_profile"]) > 1:
continue
else:
yield idx, {
"dataset_name": dataset_name,
"taskID": taskID,
"taskType": taskType,
"image_file": os.path.join(dataset_dir, case["image_file"]),
"mask_file": os.path.join(dataset_dir, case["mask_file"]),
"landmark_file": os.path.join(
dataset_dir, case["landmark_file"]
),
"slice_dim": case["slice_dim"],
"slice_idx": case["slice_idx"],
"label": target_label,
"image_size_2d": case["image_size_2d"],
"pixel_size": case["pixel_size"],
"image_size_3d": case["image_size_3d"],
"voxel_size": case["voxel_size"],
"biometric_profile": case["biometric_profile"],
}
```
</details>
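The skip rules in `_generate_examples()` above can be restated as small predicates, which may be handy if you filter annotation profiles offline. This is a restatement of the logic in the code block, not an official API:

```python
def keep_mask_size_case(case):
    # Mask-Size: skip masks smaller than 200 pixels
    return case["pixel_count"] >= 200

def keep_box_size_case(case):
    # Box-Size: keep only slices with exactly one bounding box,
    # at least 10 pixels in each dimension
    boxes = case["bounding_boxes"]
    if len(boxes) != 1:
        return False
    return all(d >= 10 for d in boxes[0]["dimensions"])

def keep_tumor_lesion_case(case):
    # Tumor-Lesion-Size: skip slices with multiple fitted ellipses
    return len(case["biometric_profile"]) <= 1

print(keep_box_size_case({"bounding_boxes": [{"dimensions": [12, 9]}]}))  # False
```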
## Dataset Building Workflow
### Workflow
<details>
<summary> MedVision Dataset Building Workflow (Black) </summary>
<img src="fig/medvision-dataset-flow-b.png" alt="MedVision Dataset Building Workflow (Black)" /><br>
</details>
<details>
<summary> MedVision Dataset Building Workflow (White) </summary>
<img src="fig/medvision-dataset-flow-w.png" alt="MedVision Dataset Building Workflow (White)" /><br>
</details>
</br>
There are a few ways to control the dataset loading and building behavior:
- **Rebuild Dataset (Arrow files)**: Use the `download_mode` argument in `load_dataset()` ([docs](https://huggingface.co/docs/datasets/v3.6.0/en/package_reference/builder_classes#datasets.DownloadMode)).
- **[1]** Set `download_mode="force_redownload"` to ignore the cached Arrow files and trigger the data loading script `MedVision.py` to rebuild the dataset.
- **Redownload Raw Data**:
- **[2]** `MedVision_FORCE_DOWNLOAD_DATA`: Set this environment variable to `True` to force re-downloading raw images and annotations.
- **[3]** `.downloaded_datasets.json`: this tracker file records which datasets have been downloaded. Removing a dataset's entry here triggers a re-download of the raw data for that dataset.
> [!Note] ⚠️
> **How to properly update/redownload raw data?**
>
> If you need to update raw data (images, masks, landmarks) using [2] or [3], you **MUST ALSO** use [1] (`download_mode="force_redownload"`).
>
> Why? Because if Hugging Face finds a valid cached dataset (Arrow files), it will load it directly and **skip running the script entirely**. Without running the script, the environment variable [2] or tracker file [3] will never be checked.
>
> **Summary:**
> - Update Arrow/Fields only: Use [1].
> - Update Raw Data: Use [1] **AND** ([2] or [3]).
>
> πŸ”₯ We will maintain a [change log](https://huggingface.co/datasets/YongchengYAO/MedVision/blob/main/doc/changelog.md) for essential updates.
### Examples
<details>
<summary> Running this for the first time downloads the raw data and builds the dataset </summary>
```python
import os
from datasets import load_dataset
# Set data folder
wd = os.path.join(os.getcwd(), "Data-testing")
os.makedirs(wd, exist_ok=True)
os.environ["MedVision_DATA_DIR"] = wd
# Pick a dataset config name and split
config = "OAIZIB-CM_BoxSize_Task01_Axial_Test"
split_name = "test" # use "test" for testing set config; use "train" for training set config
# Get dataset
ds = load_dataset(
"YongchengYAO/MedVision",
name=config,
trust_remote_code=True,
split=split_name,
)
```
</details>
<details>
<summary> Running the same script again uses the cached dataset </summary>
```python
import os
from datasets import load_dataset
# Set data folder
wd = os.path.join(os.getcwd(), "Data-testing")
os.makedirs(wd, exist_ok=True)
os.environ["MedVision_DATA_DIR"] = wd
# Pick a dataset config name and split
config = "OAIZIB-CM_BoxSize_Task01_Axial_Test"
split_name = "test" # use "test" for testing set config; use "train" for training set config
# Get dataset
ds = load_dataset(
"YongchengYAO/MedVision",
name=config,
trust_remote_code=True,
split=split_name,
)
```
</details>
<details>
<summary> Adding `download_mode="force_redownload"` rebuilds the dataset without redownloading raw data </summary>
```python
import os
from datasets import load_dataset
# Set data folder
wd = os.path.join(os.getcwd(), "Data-testing")
os.makedirs(wd, exist_ok=True)
os.environ["MedVision_DATA_DIR"] = wd
# Pick a dataset config name and split
config = "OAIZIB-CM_BoxSize_Task01_Axial_Test"
split_name = "test" # use "test" for testing set config; use "train" for training set config
# Get dataset
ds = load_dataset(
"YongchengYAO/MedVision",
name=config,
trust_remote_code=True,
split=split_name,
download_mode="force_redownload",
)
```
</details>
<details>
<summary> Adding `download_mode="force_redownload"` and setting `os.environ["MedVision_FORCE_DOWNLOAD_DATA"] = "True"` redownloads raw data and rebuilds the dataset </summary>
```python
import os
from datasets import load_dataset
# Set data folder
wd = os.path.join(os.getcwd(), "Data-testing")
os.makedirs(wd, exist_ok=True)
os.environ["MedVision_DATA_DIR"] = wd
# Pick a dataset config name and split
config = "OAIZIB-CM_BoxSize_Task01_Axial_Test"
split_name = "test" # use "test" for testing set config; use "train" for training set config
# Force redownload
os.environ["MedVision_FORCE_DOWNLOAD_DATA"] = "True"
# Get dataset
ds = load_dataset(
"YongchengYAO/MedVision",
name=config,
trust_remote_code=True,
split=split_name,
download_mode="force_redownload",
)
```
</details>
### Download Mode in MedVision Dataset
<details>
<summary> (Advanced) Understand how the customized dataset loading script `MedVision.py` changes the behavior of `download_mode` in `load_dataset()` </summary>
- `download_mode` can be one of these: `"reuse_dataset_if_exists"` (default), `"reuse_cache_if_exists"`, `"force_redownload"`
- Default behavior of `download_mode` in `load_dataset()`:
| | Downloads | Dataset |
| :-------------------------------- | :-------- | :------ |
| reuse_dataset_if_exists (default) | Reuse | Reuse |
| reuse_cache_if_exists | Reuse | Fresh |
| force_redownload | Fresh | Fresh |
- `download_mode` in MedVision dataset:
| | Downloads | Dataset |
| :----------------------------------------------------- | :-------- | :------ |
| reuse_dataset_if_exists (default) | Reuse | Reuse |
| reuse_cache_if_exists | Reuse | Fresh |
| force_redownload (MedVision_FORCE_DOWNLOAD_DATA=False) | Reuse | Fresh |
| force_redownload (MedVision_FORCE_DOWNLOAD_DATA=True) | Fresh | Fresh |
</details>
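The MedVision-specific behavior in the second table can be expressed as a small lookup. This helper is illustrative only; the real logic lives in `MedVision.py`:

```python
def medvision_cache_behavior(download_mode, force_download_data=False):
    """Return (raw_downloads, arrow_dataset) as 'reuse' or 'fresh',
    following the MedVision download-mode table."""
    if download_mode == "reuse_dataset_if_exists":
        return ("reuse", "reuse")
    if download_mode == "reuse_cache_if_exists":
        return ("reuse", "fresh")
    if download_mode == "force_redownload":
        # Raw data is refetched only if MedVision_FORCE_DOWNLOAD_DATA is set
        return ("fresh" if force_download_data else "reuse", "fresh")
    raise ValueError(f"unknown download_mode: {download_mode}")

print(medvision_cache_behavior("force_redownload", force_download_data=True))
# ('fresh', 'fresh')
```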
<br/>
# Advanced Usage
The dataset codebase `medvision_ds` can be used to scale the dataset, including adding new annotation types and datasets.
## πŸ› οΈ **Install**
```bash
pip install "git+https://huggingface.co/datasets/YongchengYAO/MedVision.git#subdirectory=src"
pip show medvision_ds
```
or
```bash
# First, install the benchmark codebase: medvision_bm
pip install "git+https://github.com/YongchengYAO/MedVision.git"
# Install the dataset codebase: medvision_ds
python -m medvision_bm.benchmark.install_medvision_ds --data_dir <local-data-folder>
```
## πŸ§‘πŸ»β€πŸ’» Use [utility functions](https://huggingface.co/datasets/YongchengYAO/MedVision/tree/main/src/medvision_ds/utils) for image processing
```python
from medvision_ds.utils.data_conversion import (
convert_nrrd_to_nifti,
convert_mha_to_nifti,
convert_nii_to_niigz,
convert_bmp_to_niigz,
copy_img_header_to_mask,
reorient_niigz_RASplus_batch_inplace,
)
from medvision_ds.utils.preprocess_utils import (
split_4d_nifti,
)
```
## πŸ‘©πŸΌβ€πŸ’»Examples of dataset scaling:
- Setup automatic data processing pipeline
- Download preprocessed data from HF: [medvision_ds/datasets/OAIZIB_CM/download.py](https://huggingface.co/datasets/YongchengYAO/MedVision/blob/main/src/medvision_ds/datasets/OAIZIB_CM/download.py)
- Download and process raw data from the source: [medvision_ds/datasets/BraTS24/download_raw.py](https://huggingface.co/datasets/YongchengYAO/MedVision/blob/main/src/medvision_ds/datasets/BraTS24/download_raw.py)
- Prepare annotations
- Generate b-box annotations from segmentation masks:
- [medvision_ds/datasets/BraTS24/preprocess_detection.py](https://huggingface.co/datasets/YongchengYAO/MedVision/blob/main/src/medvision_ds/datasets/BraTS24/preprocess_detection.py)
- Generate tumor/lesion size (TL) annotations from segmentation masks:
- [medvision_ds/datasets/BraTS24/preprocess_biometry.py](https://huggingface.co/datasets/YongchengYAO/MedVision/blob/main/src/medvision_ds/datasets/BraTS24/preprocess_biometry.py)
- Generate angle/distance (AD) annotations from landmarks:
- [medvision_ds/datasets/Ceph_Biometrics_400/preprocess_biometry.py](https://huggingface.co/datasets/YongchengYAO/MedVision/blob/main/src/medvision_ds/datasets/Ceph_Biometrics_400/preprocess_biometry.py)
- [medvision_ds/datasets/FeTA24/preprocess_biometry.py](https://huggingface.co/datasets/YongchengYAO/MedVision/blob/main/src/medvision_ds/datasets/FeTA24/preprocess_biometry.py)
<br/>
# License: CC-BY-NC-4.0
This repository is released under CC-BY-NC 4.0 (https://creativecommons.org/licenses/by-nc/4.0/).
βœ… **Do**
- Copy and redistribute the material in any medium or format.
- Adapt, remix, transform, and build upon the material.
- Use it privately or in non-commercial educational, research, or personal projects.
🚫 **Do not**
- Use the material for commercial purposes.
πŸ“„ **Requirement**
| **Requirement** | **Description** |
| ---------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **Attribution (BY)** | You must give appropriate credit, provide a link to [this dataset](https://huggingface.co/datasets/YongchengYAO/MedVision), and indicate if changes were made. |
| **NonCommercial (NC)** | You may not use the material for commercial purposes. |
| **Indicate changes** | If you modify the work, you must note that it has been changed. |