How to use UCSC-VLAA/openvision-vit-so400m-patch14-84 with OpenCLIP:
```python
import open_clip

# Load the model plus its train/eval image transforms, and the matching tokenizer
model, preprocess_train, preprocess_val = open_clip.create_model_and_transforms(
    'hf-hub:UCSC-VLAA/openvision-vit-so400m-patch14-84'
)
tokenizer = open_clip.get_tokenizer('hf-hub:UCSC-VLAA/openvision-vit-so400m-patch14-84')
```
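Once the model is loaded, a typical use is zero-shot classification: encode an image and a set of candidate captions, normalize both embeddings, and take a softmax over their cosine similarities. The sketch below illustrates this; the image path `example.jpg` and the label strings are placeholders, not part of the model card.

```python
import torch
from PIL import Image

model.eval()

# Preprocess one image with the eval transform; placeholder path
image = preprocess_val(Image.open("example.jpg")).unsqueeze(0)
# Tokenize candidate captions; placeholder labels
text = tokenizer(["a diagram", "a dog", "a cat"])

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    # L2-normalize so the dot product is cosine similarity
    image_features /= image_features.norm(dim=-1, keepdim=True)
    text_features /= text_features.norm(dim=-1, keepdim=True)
    # Softmax over scaled similarities gives per-label probabilities
    text_probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)

print("Label probs:", text_probs)
```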