Tags: Any-to-Any · Transformers · Safetensors · qwen2_5_vl · image-text-to-text · custom_code · text-generation-inference
Instructions for using modelscope/Nexus-Gen with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use modelscope/Nexus-Gen with Transformers:
```python
# Load model directly
from transformers import AutoProcessor, AutoModelForImageTextToText

processor = AutoProcessor.from_pretrained("modelscope/Nexus-Gen", trust_remote_code=True)
model = AutoModelForImageTextToText.from_pretrained("modelscope/Nexus-Gen", trust_remote_code=True)
```

- Notebooks
- Google Colab
- Kaggle
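The loading snippet above only instantiates the processor and model. A minimal sketch of one inference round follows, assuming Nexus-Gen accepts the standard Transformers image-text-to-text chat format; the image URL and prompt are placeholders, not taken from the model card:

```python
# Sketch of one inference round, assuming the standard
# image-text-to-text chat message format used by Transformers.
# The image URL and prompt below are placeholders.
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": "https://example.com/photo.jpg"},
            {"type": "text", "text": "Describe this image."},
        ],
    }
]

# With processor/model loaded as shown above, generation would look like:
#   inputs = processor.apply_chat_template(
#       messages, add_generation_prompt=True, tokenize=True,
#       return_dict=True, return_tensors="pt"
#   )
#   output_ids = model.generate(**inputs, max_new_tokens=128)
#   print(processor.batch_decode(output_ids, skip_special_tokens=True))

def text_parts(msgs):
    """Collect the text segments of a chat message list."""
    return [c["text"] for m in msgs for c in m["content"] if c["type"] == "text"]
```

The message structure (a list of role/content dicts, with typed content entries) is what `apply_chat_template` expects for multimodal chat models.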

- Xet hash: f875c875d22e133a705774914164dda30046b1265b1c411dfec9241ef51a80c0
- Size of remote file: 567 kB
- SHA256: f1c25ffc3402814c00e66c4b7d611d47670577082de4e7f89759ecf82e489b24
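The published SHA256 digest can be used to verify a downloaded file. A minimal sketch using Python's standard library (the filename below is a placeholder):

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream a file through SHA-256 in 1 MiB blocks and return the hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk_size), b""):
            h.update(block)
    return h.hexdigest()

# Compare the result against the digest listed above, e.g.:
#   sha256_of("downloaded_file") == "f1c25ffc...e489b24"
```

Streaming in blocks keeps memory use constant even for multi-gigabyte checkpoint files.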
Xet efficiently stores large files inside Git by splitting them into unique chunks, which accelerates uploads and downloads.
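As a toy illustration of chunk-level deduplication (not Xet's actual algorithm, which uses content-defined rather than fixed-size chunk boundaries), chunks can be stored keyed by their hash so that identical chunks are kept only once:

```python
import hashlib

def dedup_store(data: bytes, chunk_size: int = 4) -> dict[str, bytes]:
    """Split data into fixed-size chunks keyed by SHA-256; duplicates collapse."""
    store: dict[str, bytes] = {}
    for i in range(0, len(data), chunk_size):
        chunk = data[i : i + chunk_size]
        store[hashlib.sha256(chunk).hexdigest()] = chunk
    return store

# b"abcdabcd" splits into two identical chunks, stored as one entry;
# b"abcdwxyz" splits into two distinct chunks, stored as two entries.
```

Because unchanged chunks hash to the same key, re-uploading a slightly modified file only transfers the chunks that actually changed.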