Add task categories and link to paper

#5
opened by nielsr (HF Staff)
Files changed (1)
  1. README.md +23 -54
README.md CHANGED
@@ -1,7 +1,10 @@
 ---
-# For reference on dataset card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/datasetcard.md?plain=1
-# Doc / guide: https://huggingface.co/docs/hub/datasets-cards
-{}
+task_categories:
+- image-text-to-text
+tags:
+- medical
+- gui-grounding
+- visual-grounding
 ---
 
 # MedSPOT: A Workflow-Aware Sequential Grounding Benchmark for Clinical GUI
@@ -9,7 +12,6 @@
 <div align="center">
 <figure align="center"> <img src="https://raw.githubusercontent.com/Tajamul21/MedSPOT/main/Images/MedSPOT2.png" width=65%> </figure>
 </div>
-<!-- Provide a quick summary of the dataset. -->
 
 ## Dataset Summary
 MedSPOT is a benchmark for evaluating Multimodal Large Language Models (MLLMs) on GUI grounding tasks in medical imaging software. It evaluates models on their ability to localize and interact with UI elements across 10 medical imaging applications including 3DSlicer, DICOMscope, Weasis, MITK, and others.
@@ -17,7 +19,6 @@ MedSPOT is a benchmark for evaluating Multimodal Large Language Models (MLLMs) o
 ## Dataset Details
 
 ### Dataset Description
-<!-- Provide a longer summary of what this dataset is. -->
 
 **MedSPOT** is a workflow-aware sequential GUI grounding benchmark designed to evaluate Multimodal Large Language Models (MLLMs) on their ability to interact with real-world clinical imaging software. Unlike conventional grounding benchmarks that evaluate isolated predictions, MedSPOT models grounding as a **temporally dependent sequence of spatial decisions** within evolving interface states — reflecting the procedural dependency structure inherent in clinical workflows.
 
@@ -42,18 +43,12 @@ MedSPOT evaluates models across three metrics:
 - **SHR** (Step Hit Rate) — per-step grounding accuracy
 - **S1A** (Step 1 Accuracy) — accuracy on the first step of each task
 
-<!-- - **Curated by:** [More Information Needed]
-- **Funded by [optional]:** [More Information Needed]
-- **Shared by [optional]:** [More Information Needed]
-- **Language(s) (NLP):** [More Information Needed]
-- **License:** [More Information Needed]
--->
 ### Dataset Sources
-<!-- Provide the basic links for the dataset. -->
 
 - **Repository:** [GitHub](https://github.com/Tajamul21/MedSPOT)
 - **Dataset:** [HuggingFace](https://huggingface.co/datasets/Tajamul21/MedSPOT)
-- **Paper:** Coming soon
+- **Paper:** [MedSPOT: A Workflow-Aware Sequential Grounding Benchmark for Clinical GUI](https://huggingface.co/papers/2603.19993)
+- **Project Page:** [https://rozainmalik.github.io/MedSPOT_web/](https://rozainmalik.github.io/MedSPOT_web/)
 
 ## Uses
 
@@ -75,6 +70,7 @@ MedSPOT is a **benchmark dataset** intended strictly for **evaluation** of Multi
 - **Training** — MedSPOT is a test-only benchmark and should not be used as training data
 - **Clinical decision-making** — Not intended for use in real clinical or diagnostic settings
 - **Autonomous clinical agents** — Should not be used to build unsupervised agents operating in real clinical environments.
+
 ## Dataset Structure
 
 The dataset is organized hierarchically by software platform:
@@ -130,10 +126,6 @@ Data was collected by directly recording real GUI interaction workflows on 10 op
 
 No external data sources, web scraping, or automated data collection methods were used. All data was generated directly by the authors through controlled GUI interaction sessions.
 
-<!-- #### Who are the source data producers?
-
-<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
-
 ### Annotations
 
 #### Annotation Process
@@ -156,45 +148,22 @@ In total, the benchmark comprises **216 video tasks** and **597 annotated keyfra
 
 #### Who are the annotators?
 
-The annotations were created manually by the authors of the paper. Annotation was performed using Label Studio (open-source annotation tool), with each step verified for correctness and causal consistency across the interaction workflow.
-
-<!-- #### Personal and Sensitive Information
-
-<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
-
-<!-- ## Bias, Risks, and Limitations
+The annotations were created manually by the authors of the paper: Rozain Shakeel, Abdul Rahman Mohammad Ali, Muneeb Mushtaq, Tausifa Jan Saleem, and Tajamul Ashraf. Annotation was performed using Label Studio (open-source annotation tool), with each step verified for correctness and causal consistency across the interaction workflow.
 
-<!-- This section is meant to convey both technical and sociotechnical limitations. -->
+## Citation
 
-
-<!-- ### Recommendations
-
-<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations.Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. -->
-<!-- ## Citation
-
-<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section.
+If you find MedSPOT useful in your research, please consider citing our paper:
 
 **BibTeX:**
--->
-
-<!-- **APA:**
-
-[More Information Needed]
-
-## Glossary [optional]
-
-If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card.
-
-[More Information Needed]
-
-## More Information [optional]
-
-[More Information Needed]
-
-## Dataset Card Authors [optional]
-
-[More Information Needed]
-
-## Dataset Card Contact
 
-[More Information Needed] -->
+```bibtex
+@misc{medspot,
+title={MedSPOT: A Workflow-Aware Sequential Grounding Benchmark for Clinical GUI},
+author={Rozain Shakeel and Abdul Rahman Mohammad Ali and Muneeb Mushtaq and Tausifa Jan Saleem and Tajamul Ashraf},
+year={2026},
+eprint={2603.19993},
+archivePrefix={arXiv},
+primaryClass={cs.CV},
+url={https://arxiv.org/abs/2603.19993},
+}
+```
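A note for reviewers: the card's step-level metrics (SHR and S1A) reduce to simple ratios over per-step hit records. A minimal sketch of that reduction, assuming a plain mapping from task IDs to per-step hit/miss flags (the task names and data layout here are illustrative, not the dataset's actual schema):

```python
from typing import Dict, List


def shr(tasks: Dict[str, List[bool]]) -> float:
    """Step Hit Rate: fraction of all steps, pooled across tasks,
    where the predicted location landed on the target UI element."""
    hits = sum(sum(steps) for steps in tasks.values())
    total = sum(len(steps) for steps in tasks.values())
    return hits / total


def s1a(tasks: Dict[str, List[bool]]) -> float:
    """Step 1 Accuracy: fraction of tasks whose first step was grounded correctly."""
    return sum(steps[0] for steps in tasks.values()) / len(tasks)


# Toy example: two tasks with per-step hit/miss outcomes (illustrative IDs).
results = {
    "3DSlicer/task_01": [True, False, True],  # 2 of 3 steps hit
    "Weasis/task_07": [False, True],          # first step missed
}
print(shr(results))  # 3 hits over 5 steps -> 0.6
print(s1a(results))  # 1 of 2 first steps hit -> 0.5
```

Pooling steps across tasks (rather than averaging per-task rates) is one plausible reading of "per-step grounding accuracy"; the paper should be consulted for the exact definition.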