Clustering of documents from the Big Patent dataset. The test set only includes documents belonging to a single category, with a total of 9 categories.
| Task category | t2c |
|---|---|
| Domains | Legal, Written |
| Reference | https://huggingface.co/datasets/NortheasternUniversity/big_patent |
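MTEB scores clustering tasks by embedding the documents, clustering the embeddings, and comparing the predicted clusters to the gold labels with the V-measure. As a minimal stdlib sketch of that metric (the function names here are illustrative, not mteb's API):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (in nats) of a label assignment."""
    n = len(labels)
    return -sum((c / n) * math.log(c / n) for c in Counter(labels).values())

def conditional_entropy(labels, given):
    """H(labels | given): entropy of labels within each 'given' group, size-weighted."""
    n = len(labels)
    groups = {}
    for lab, g in zip(labels, given):
        groups.setdefault(g, []).append(lab)
    return sum(len(ls) / n * entropy(ls) for ls in groups.values())

def v_measure(gold, pred):
    """Harmonic mean of homogeneity and completeness; invariant to cluster relabeling."""
    h_gold, h_pred = entropy(gold), entropy(pred)
    homogeneity = 1.0 if h_gold == 0 else 1.0 - conditional_entropy(gold, pred) / h_gold
    completeness = 1.0 if h_pred == 0 else 1.0 - conditional_entropy(pred, gold) / h_pred
    if homogeneity + completeness == 0:
        return 0.0
    return 2 * homogeneity * completeness / (homogeneity + completeness)
```

A perfect clustering (up to relabeling) scores 1.0, e.g. `v_measure([0, 0, 1, 1], [1, 1, 0, 0])`, while a clustering uncorrelated with the labels scores 0.0.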
You can evaluate an embedding model on this dataset using the following code:
```python
import mteb

task = mteb.get_tasks(["BigPatentClustering.v2"])
evaluator = mteb.MTEB(task)
model = mteb.get_model(YOUR_MODEL)
evaluator.run(model)
```
To learn more about how to run models on MTEB tasks, check out the GitHub repository.
If you use this dataset, please cite the dataset as well as mteb, as this dataset likely includes additional processing as part of the MMTEB contribution.
```bibtex
@article{DBLP:journals/corr/abs-1906-03741,
  author     = {Eva Sharma and Chen Li and Lu Wang},
  title      = {{BIGPATENT:} {A} Large-Scale Dataset for Abstractive and Coherent Summarization},
  journal    = {CoRR},
  volume     = {abs/1906.03741},
  year       = {2019},
  url        = {http://arxiv.org/abs/1906.03741},
  eprint     = {1906.03741},
  eprinttype = {arXiv},
  timestamp  = {Wed, 26 Jun 2019 07:14:58 +0200},
  biburl     = {https://dblp.org/rec/journals/corr/abs-1906-03741.bib},
  bibsource  = {dblp computer science bibliography, https://dblp.org},
}

@article{enevoldsen2025mmtebmassivemultilingualtext,
  title     = {MMTEB: Massive Multilingual Text Embedding Benchmark},
  author    = {Kenneth Enevoldsen and Isaac Chung and Imene Kerboua and Márton Kardos and Ashwin Mathur and David Stap and Jay Gala and Wissam Siblini and Dominik Krzemiński and Genta Indra Winata and Saba Sturua and Saiteja Utpala and Mathieu Ciancone and Marion Schaeffer and Gabriel Sequeira and Diganta Misra and Shreeya Dhakal and Jonathan Rystrøm and Roman Solomatin and Ömer Çağatan and Akash Kundu and Martin Bernstorff and Shitao Xiao and Akshita Sukhlecha and Bhavish Pahwa and Rafał Poświata and Kranthi Kiran GV and Shawon Ashraf and Daniel Auras and Björn Plüster and Jan Philipp Harries and Loïc Magne and Isabelle Mohr and Mariya Hendriksen and Dawei Zhu and Hippolyte Gisserot-Boukhlef and Tom Aarsen and Jan Kostkan and Konrad Wojtasik and Taemin Lee and Marek Šuppa and Crystina Zhang and Roberta Rocca and Mohammed Hamdy and Andrianos Michail and John Yang and Manuel Faysse and Aleksei Vatolin and Nandan Thakur and Manan Dey and Dipam Vasani and Pranjal Chitale and Simone Tedeschi and Nguyen Tai and Artem Snegirev and Michael Günther and Mengzhou Xia and Weijia Shi and Xing Han Lù and Jordan Clive and Gayatri Krishnakumar and Anna Maksimova and Silvan Wehrli and Maria Tikhonova and Henil Panchal and Aleksandr Abramov and Malte Ostendorff and Zheng Liu and Simon Clematide and Lester James Miranda and Alena Fenogenova and Guangyu Song and Ruqiya Bin Safi and Wen-Ding Li and Alessia Borghini and Federico Cassano and Hongjin Su and Jimmy Lin and Howard Yen and Lasse Hansen and Sara Hooker and Chenghao Xiao and Vaibhav Adlakha and Orion Weller and Siva Reddy and Niklas Muennighoff},
  publisher = {arXiv},
  journal   = {arXiv preprint arXiv:2502.13595},
  year      = {2025},
  url       = {https://arxiv.org/abs/2502.13595},
  doi       = {10.48550/arXiv.2502.13595},
}

@article{muennighoff2022mteb,
  author    = {Muennighoff, Niklas and Tazi, Nouamane and Magne, Lo{\"\i}c and Reimers, Nils},
  title     = {MTEB: Massive Text Embedding Benchmark},
  publisher = {arXiv},
  journal   = {arXiv preprint arXiv:2210.07316},
  year      = {2022},
  url       = {https://arxiv.org/abs/2210.07316},
  doi       = {10.48550/ARXIV.2210.07316},
}
```
The descriptive statistics for this task are shown below. They can also be obtained programmatically:
```python
import mteb

task = mteb.get_task("BigPatentClustering.v2")
desc_stats = task.metadata.descriptive_stats
```
```json
{
  "test": {
    "num_samples": 2048,
    "number_of_characters": 65738274,
    "min_text_length": 4907,
    "average_text_length": 32098.7666015625,
    "max_text_length": 3105802,
    "unique_texts": 2007,
    "min_labels_per_text": 17,
    "average_labels_per_text": 1.0,
    "max_labels_per_text": 439,
    "unique_labels": 9,
    "labels": {
      "4": { "count": 211 },
      "7": { "count": 274 },
      "8": { "count": 171 },
      "3": { "count": 439 },
      "6": { "count": 17 },
      "5": { "count": 436 },
      "1": { "count": 296 },
      "2": { "count": 145 },
      "0": { "count": 59 }
    }
  }
}
```
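The per-label counts above can be sanity-checked against the headline numbers; a short sketch (the variable names are illustrative):

```python
# Per-label counts copied from the descriptive statistics above.
label_counts = {
    "4": 211, "7": 274, "8": 171, "3": 439, "6": 17,
    "5": 436, "1": 296, "2": 145, "0": 59,
}

# The counts should sum to num_samples.
total = sum(label_counts.values())
assert total == 2048

# The largest class and its share of the test set.
largest = max(label_counts, key=label_counts.get)
print(largest, label_counts[largest] / total)  # → 3 0.21435546875
```

Note that the class distribution is quite imbalanced: the largest class ("3", 439 documents) is roughly 26 times the size of the smallest ("6", 17 documents).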
This dataset card was automatically generated using MTEB.