Instructions for using Salesforce/blip-image-captioning-base with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use Salesforce/blip-image-captioning-base with Transformers:
```python
# Use a pipeline as a high-level helper
# Warning: Pipeline type "image-to-text" is no longer supported in transformers v5.
# You must load the model directly (see below) or downgrade to v4.x with:
# pip install "transformers<5.0.0"
from transformers import pipeline

pipe = pipeline("image-to-text", model="Salesforce/blip-image-captioning-base")
```

```python
# Load model directly
from transformers import AutoProcessor, AutoModelForImageTextToText

processor = AutoProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
model = AutoModelForImageTextToText.from_pretrained("Salesforce/blip-image-captioning-base")
```

A minimal end-to-end captioning sketch follows the notebook links below.
- Notebooks
- Google Colab
- Kaggle
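For a quick end-to-end check, here is a minimal sketch that captions an image with the directly loaded model. The file name `example.jpg` is a placeholder assumption; substitute any RGB image you have locally.

```python
# Minimal captioning sketch for Salesforce/blip-image-captioning-base.
# "example.jpg" is a placeholder path, not part of the model card.
from PIL import Image
from transformers import AutoProcessor, AutoModelForImageTextToText

processor = AutoProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
model = AutoModelForImageTextToText.from_pretrained("Salesforce/blip-image-captioning-base")

image = Image.open("example.jpg").convert("RGB")  # placeholder image
inputs = processor(images=image, return_tensors="pt")

# Generate a short caption and decode it back to text.
output_ids = model.generate(**inputs, max_new_tokens=30)
caption = processor.decode(output_ids[0], skip_special_tokens=True)
print(caption)
```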
- Xet hash: fd40eaea33f3a6960fda94179358f29541afceb88a7f87b4915693f48c35bfa7
- Size of remote file: 990 MB
- SHA256: d6638651a5526cc2ede56f2b5104d6851b0755816d220e5e046870430180c767
Xet efficiently stores large files inside Git, intelligently splitting files into unique chunks and accelerating uploads and downloads.
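If you download the weights file manually, you can confirm its integrity against the SHA256 listed above. A minimal sketch, assuming the file was saved locally as `pytorch_model.bin` (the name is an assumption; match it to the file you actually downloaded):

```python
# Verify a downloaded file against the SHA256 published on the model page.
# The local file name below is an assumption; adjust it to your download.
import hashlib

EXPECTED = "d6638651a5526cc2ede56f2b5104d6851b0755816d220e5e046870430180c767"

sha256 = hashlib.sha256()
with open("pytorch_model.bin", "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):  # hash in 1 MiB chunks
        sha256.update(chunk)

assert sha256.hexdigest() == EXPECTED, "checksum mismatch: file may be corrupted"
print("checksum OK")
```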