Instructions for using MuMing0102/VGPO-RL-7B with libraries, notebooks, and local apps.
- Libraries
- Transformers
How to use MuMing0102/VGPO-RL-7B with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("image-text-to-text", model="MuMing0102/VGPO-RL-7B")
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/p-blog/candy.JPG"},
            {"type": "text", "text": "What animal is on the candy?"}
        ]
    },
]
pipe(text=messages)
```

```python
# Load model directly
from transformers import AutoProcessor, AutoModelForImageTextToText

processor = AutoProcessor.from_pretrained("MuMing0102/VGPO-RL-7B")
model = AutoModelForImageTextToText.from_pretrained("MuMing0102/VGPO-RL-7B")

messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/p-blog/candy.JPG"},
            {"type": "text", "text": "What animal is on the candy?"}
        ]
    },
]

inputs = processor.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
print(processor.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use MuMing0102/VGPO-RL-7B with vLLM:
Install from pip and serve the model:
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "MuMing0102/VGPO-RL-7B"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "MuMing0102/VGPO-RL-7B",
    "messages": [
      {
        "role": "user",
        "content": [
          {"type": "text", "text": "Describe this image in one sentence."},
          {"type": "image_url", "image_url": {"url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"}}
        ]
      }
    ]
  }'
```
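The same server can also be called from Python with the OpenAI client, since vLLM exposes an OpenAI-compatible API (a minimal sketch; assumes `pip install openai` and the server started above on port 8000, which accepts any API key locally):

```python
from openai import OpenAI

# Point the OpenAI client at the local vLLM server (any API key works locally)
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="MuMing0102/VGPO-RL-7B",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this image in one sentence."},
                {"type": "image_url", "image_url": {"url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"}},
            ],
        }
    ],
)
print(response.choices[0].message.content)
```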
Use Docker
```shell
docker model run hf.co/MuMing0102/VGPO-RL-7B
```
- SGLang
How to use MuMing0102/VGPO-RL-7B with SGLang:
Install from pip and serve the model:
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "MuMing0102/VGPO-RL-7B" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "MuMing0102/VGPO-RL-7B",
    "messages": [
      {
        "role": "user",
        "content": [
          {"type": "text", "text": "Describe this image in one sentence."},
          {"type": "image_url", "image_url": {"url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"}}
        ]
      }
    ]
  }'
```
Use Docker images
```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "MuMing0102/VGPO-RL-7B" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "MuMing0102/VGPO-RL-7B",
    "messages": [
      {
        "role": "user",
        "content": [
          {"type": "text", "text": "Describe this image in one sentence."},
          {"type": "image_url", "image_url": {"url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"}}
        ]
      }
    ]
  }'
```
- Docker Model Runner
How to use MuMing0102/VGPO-RL-7B with Docker Model Runner:
```shell
docker model run hf.co/MuMing0102/VGPO-RL-7B
```
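Docker Model Runner also exposes an OpenAI-compatible API. The sketch below assumes host TCP access has been enabled (for example via `docker desktop enable model-runner --tcp 12434` on Docker Desktop); the port, the `/engines/v1` path, and the `hf.co/...` model identifier are assumptions to verify against Docker's documentation for your setup:

```python
from openai import OpenAI

# Point the OpenAI client at Docker Model Runner's assumed local endpoint
client = OpenAI(base_url="http://localhost:12434/engines/v1", api_key="anything")

response = client.chat.completions.create(
    model="hf.co/MuMing0102/VGPO-RL-7B",  # same identifier used in `docker model run`
    messages=[{"role": "user", "content": "Describe this model in one sentence."}],
)
print(response.choices[0].message.content)
```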
Model Card for VGPO-RL-7B
📖 Overview of VGPO
Standard RLVR methods treat every generated token equally, broadcasting a single reward signal indiscriminately. This leads to signal dilution: generic text tokens receive the same reinforcement as critical, visually grounded reasoning steps. Meanwhile, temporal visual forgetting causes attention to visual inputs to decay progressively as reasoning chains extend.
VGPO addresses these issues through three key mechanisms (a toy sketch of the first two follows the list):
- Visual Attention Compensation (VAC): Uses the inherent hidden-state similarity between generated tokens and image tokens as a Visual Focus Score to localize visual activations without external supervision. A progressive incentive schedule counteracts temporal visual forgetting in later reasoning steps.
- Intra-Trajectory Re-weighting: At the token level, dynamically re-weights advantages by visual focus scores to amplify learning from visually grounded tokens.
- Inter-Trajectory Re-weighting: At the trajectory level, prioritizes rollouts with superior visual accumulation, favoring trajectories that sustain consistent visual grounding.
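The following is a minimal sketch of the scoring and token-level re-weighting ideas, not the authors' implementation; the tensor names, shapes, and normalization scheme are illustrative assumptions:

```python
import torch
import torch.nn.functional as F

def visual_focus_scores(gen_hidden: torch.Tensor, img_hidden: torch.Tensor) -> torch.Tensor:
    """Cosine similarity of each generated-token hidden state to the mean
    image-token hidden state. Shapes: gen_hidden (T, d), img_hidden (V, d)."""
    img_summary = img_hidden.mean(dim=0, keepdim=True)           # (1, d)
    return F.cosine_similarity(gen_hidden, img_summary, dim=-1)  # (T,)

def reweighted_advantages(advantage: float, scores: torch.Tensor) -> torch.Tensor:
    """Intra-trajectory re-weighting: scale a trajectory-level advantage per
    token so visually grounded tokens contribute more to the policy update."""
    weights = scores / (scores.mean() + 1e-8)  # normalize weights around 1
    return advantage * weights                 # (T,) token-level advantages

# Toy usage with random hidden states (T=6 generated tokens, V=16 image tokens, d=32)
gen_h, img_h = torch.randn(6, 32), torch.randn(16, 32)
scores = visual_focus_scores(gen_h, img_h)
print(reweighted_advantages(1.0, scores))
```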
🔗 Model Sources
- GitHub Repository: VGPO
- arXiv Paper: 2604.09349
📕 Training Datasets
| Split | Dataset | Link |
|---|---|---|
| Train | ViRL39K | PAPOGalaxy/PAPO_ViRL39K_train |
| Val | MMK12 | PAPOGalaxy/PAPO_MMK12_test |
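To inspect these splits locally, they can be loaded with the 🤗 `datasets` library (a minimal sketch; assumes the Hub IDs above load with their default configurations):

```python
from datasets import load_dataset

# Hub IDs from the table above; default configs are an assumption
train = load_dataset("PAPOGalaxy/PAPO_ViRL39K_train")
val = load_dataset("PAPOGalaxy/PAPO_MMK12_test")

print(train)  # shows the available splits and their features
```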
📊 Evaluation
We follow the evaluation script of Look-Back. All results are reported as average accuracy, with inference temperature set to 0.0 (greedy decoding); a minimal sketch of this setup follows the benchmark table below.
Supported Evaluation Benchmarks
| Benchmark | Focus Domain |
|---|---|
| MathVista | General Mathematical & Geometric Reasoning |
| MathVerse | General Mathematical & Geometric Reasoning |
| WeMath | General Mathematical & Geometric Reasoning |
| MMK12 | General Mathematical & Geometric Reasoning |
| GeoMath | General Mathematical & Geometric Reasoning |
| Geometry3K | General Mathematical & Geometric Reasoning |
| LogicVista | Vision-dependent Multimodal Reasoning |
| SuperClevr Counting | Vision-dependent Multimodal Reasoning |
| MMMU-Pro | Vision-dependent Multimodal Reasoning |
| MathVerse-V | Vision-dependent Multimodal Reasoning |
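Below is a minimal sketch of such a greedy-decoding accuracy loop, not the Look-Back script itself; the benchmark schema and the naive exact-match check are simplified assumptions:

```python
from transformers import AutoProcessor, AutoModelForImageTextToText

processor = AutoProcessor.from_pretrained("MuMing0102/VGPO-RL-7B")
model = AutoModelForImageTextToText.from_pretrained("MuMing0102/VGPO-RL-7B", device_map="auto")

def accuracy(benchmark):
    """benchmark: list of dicts with 'image' (URL), 'question', 'answer' (assumed schema)."""
    correct = 0
    for ex in benchmark:
        messages = [{"role": "user", "content": [
            {"type": "image", "url": ex["image"]},
            {"type": "text", "text": ex["question"]},
        ]}]
        inputs = processor.apply_chat_template(
            messages, add_generation_prompt=True, tokenize=True,
            return_dict=True, return_tensors="pt",
        ).to(model.device)
        # do_sample=False gives greedy decoding, i.e. temperature 0.0
        out = model.generate(**inputs, max_new_tokens=512, do_sample=False)
        pred = processor.decode(out[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True)
        # Naive containment check; real scripts parse the final boxed answer
        correct += ex["answer"].strip() in pred
    return correct / len(benchmark)
```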
✍️ Citation
If you find this codebase useful in your research, please consider giving us a star ⭐ and citing our work 📝:
```bibtex
@article{wang2026vgpo,
  title={Visually-Guided Policy Optimization for Multimodal Reasoning},
  author={Zengbin Wang and Feng Xiong and Liang Lin and Xuecai Hu and Yong Wang and Yanlin Wang and Man Zhang and Xiangxiang Chu},
  journal={arXiv preprint arXiv:2604.09349},
  year={2026}
}
```
❤️ Acknowledgements
Our codebase is built upon EasyR1, VPPO-RL, PAPO, and Look-Back. We thank the authors for their excellent work.