# A Touch, Vision, and Language Dataset for Multimodal Alignment

by <a href="https://max-fu.github.io">Max (Letian) Fu</a>, <a href="https://www.linkedin.com/in/gaurav-datta/">Gaurav Datta*</a>, <a href="https://qingh097.github.io/">Huang Huang*</a>, <a href="https://autolab.berkeley.edu/people">William Chung-Ho Panitch*</a>, <a href="https://www.linkedin.com/in/jaimyn-drake/">Jaimyn Drake*</a>, <a href="https://joeaortiz.github.io/">Joseph Ortiz</a>, <a href="https://www.mustafamukadam.com/">Mustafa Mukadam</a>, <a href="https://scholar.google.com/citations?user=p6DCMrQAAAAJ&hl=en">Mike Lambeta</a>, <a href="https://lasr.org/">Roberto Calandra</a>, <a href="https://goldberg.berkeley.edu">Ken Goldberg</a> at UC Berkeley, Meta AI, TU Dresden, and CeTI (*equal contribution).

[[Paper](https://tactile-vlm.github.io/files/tvl.pdf)] | [[Project Page](https://tactile-vlm.github.io/)] | [[Checkpoints](https://huggingface.co/mlfu7/Touch-Vision-Language-Models)] | [[Dataset](https://huggingface.co/datasets/mlfu7/Touch-Vision-Language-Dataset)] | [[Citation](#citation)]

<p align="center">
    <img src="img/splash_figure_alt.png" width="800">
</p>
For TVL-LLaMA, please request access to the pre-trained LLaMA-2 from this [form](https://llama.meta.com/llama-downloads/). In particular, we use `llama-2-7b` as the base model. The weights here contain the trained [adapter](https://arxiv.org/abs/2309.03905), the tactile encoder, and the vision encoder, for ease of loading.
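
A minimal, unofficial sketch for pulling these checkpoints with the `huggingface_hub` Python client (assuming the package is installed and you have access to the repo; the `local_dir` variable name is just illustrative):

```python
# Sketch: mirror the TVL checkpoint repo from the Hugging Face Hub.
# snapshot_download fetches every file in the repo and returns the
# path to the local copy.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="mlfu7/Touch-Vision-Language-Models")
print(f"Checkpoints available at: {local_dir}")

# The companion dataset repo can be mirrored the same way:
# snapshot_download(repo_id="mlfu7/Touch-Vision-Language-Dataset", repo_type="dataset")
```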
For complete information, please take a look at the [GitHub repo](https://tactile-vlm.github.io/) for instructions on pretraining, fine-tuning, and evaluation with these models.
## Citation
Please give us a star 🌟 on GitHub to support us!

Please cite our work if you find it inspiring or use our code in your work:
```
@article{fu2024tvl,
  title={A Touch, Vision, and Language Dataset for Multimodal Alignment},
  author={Letian Fu and Gaurav Datta and Huang Huang and William Chung-Ho Panitch and Jaimyn Drake and Joseph Ortiz and Mustafa Mukadam and Mike Lambeta and Roberto Calandra and Ken Goldberg},
  journal={arXiv preprint arXiv:2401.14391},
  year={2024}
}
```
|