Paper: [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084) (arXiv:1908.10084)
This is a sentence-transformers model finetuned from BAAI/bge-large-en-v1.5 on the json dataset. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
The full model architecture:

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 384, 'do_lower_case': True, 'architecture': 'BertModel'})
  (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
```
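For reference, the same three-module pipeline can be assembled by hand. A minimal sketch, assuming the standard `sentence_transformers.models` API; the arguments mirror the configuration printed above, but this loads the base checkpoint rather than the fine-tuned weights:

```python
from sentence_transformers import SentenceTransformer, models

# Rebuild the three-module pipeline above: BERT transformer,
# [CLS]-token pooling, and L2 normalization.
transformer = models.Transformer(
    "BAAI/bge-large-en-v1.5",  # base checkpoint, not the fine-tuned weights
    max_seq_length=384,
    do_lower_case=True,
)
pooling = models.Pooling(
    transformer.get_word_embedding_dimension(),  # 1024 for bge-large
    pooling_mode_cls_token=True,    # sentence embedding = [CLS] token
    pooling_mode_mean_tokens=False,
)
model = SentenceTransformer(modules=[transformer, pooling, models.Normalize()])
```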
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")

# Run inference
sentences = [
    'Is there a specific location where I can find workspace filters?',
    '## Windows\n\nYou can access the filters of a workspace in the grid of filters.\n\nThe filter window has most properties of the filter.',
    'Find/Replace Panel\n\nSee\n\nchoosers and panels\n\nfor information on\n displaying the Find/Replace Panel.\n\nThe Find/Replace Panel allows you to search for specific text in the\n\nproperties\n\nof the\n\ncomponents\n\nof the open\n\nworkspaces\n\nand\n\nresults workspaces\n\n.\n\nYou should enter the text for which you wish to search in the\n\nFind what\n\nedit field.\n\nYou should select the parts of the open workspaces and results workspaces within which you\n wish to search in the tree under\n\nWithin\n\n.\n\nYou can select multiple items discontinuously by holding down the\n\nCtrl\n\nkey while clicking with the mouse.\n\nYou can use the\n\nName\n\n,\n\nFormula\n\nand\n\nAll\n fields\n\ncheckboxes to specify whether the search should include the\n\nName\n\nproperty, the\n\nFormula\n\nproperty or all properties, respectively. You\n must check at least one of these checkboxes so that there are some properties in which to\n search.\n\nYou can also select further search options:\n\n* Match case- check this checkbox to perform a case-sensitive search\n* Match whole- check this checkbox to exclude matches with parts of words, including\n names of variables and components\n* Ignore spaces- check this checkbox to ignore all white space in the properties being\n searched\n* Ignore info fields- check this checkbox to exclude theDescription,Documentation,Last modified,Modified by,Path,Protected byandReserved byproperties.\n\nYou should press the\n\nFind\n\nbutton to start the search.\n\nAfter searching the lower pane will display the number of occurrences of the text that have\n been found and provide a tree showing where these are. You can double-click on any of the\n results to open that component in the Central Window, with the found item selected.\n\nYou can select items in the tree if you wish to replace the found text in these items. You\n should then type the text to replace the found text in the\n\nReplace with\n\nedit field and click the\n\nReplace\n\nbutton.\n\nThe read-only icon\n\nnext to a tree\n item indicates that it has been\n\nprotected\n\nand so none of its\n text can be replaced using this feature.\n\nYou can drag or copy tree items from the Find/Replace Panel into the\n\nCentral Window\n\n.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities)
# tensor([[ 1.0000, -0.9967, -0.9964],
#         [-0.9967,  1.0000,  0.9994],
#         [-0.9964,  0.9994,  1.0000]])
```
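Because the final `Normalize()` module L2-normalizes every embedding, these similarity scores are cosine similarities. The same pattern extends to semantic search; a small sketch with an illustrative query and corpus (not taken from the training data):

```python
# Encode one query and a small corpus, then rank corpus entries by score.
query_embeddings = model.encode(["Where can I find workspace filters?"])
corpus = [
    "You can access the filters of a workspace in the grid of filters.",
    "The Find/Replace Panel allows you to search for specific text.",
]
corpus_embeddings = model.encode(corpus)

# similarity() returns a (num_queries, num_corpus) tensor of scores.
scores = model.similarity(query_embeddings, corpus_embeddings)
best = scores[0].argmax().item()
print(f"Best match ({scores[0][best]:.4f}): {corpus[best]}")
```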
The training data consists of triplets with three columns: `anchor`, `positive`, and `negative`.

| | anchor | positive | negative |
|---|---|---|---|
| type | string | string | string |
| anchor | positive | negative |
|---|---|---|
| What is the purpose of the Analyzer tab in a results workspace? | Analyzer | Analyzer |
| What kind of output is displayed in the Analyzer if available? | Analyzer | Accessing output |
| Where can I find the dependency relationships between variables in my results? | Analyzer | Analyzer dependency diagram |
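The json dataset follows this triplet layout. A minimal sketch of building such a dataset with 🤗 Datasets, using the truncated samples above (the real `positive` and `negative` cells are full documentation passages):

```python
from datasets import Dataset

train_dataset = Dataset.from_dict({
    "anchor": [
        "What kind of output is displayed in the Analyzer if available?",
        "Where can I find the dependency relationships between variables in my results?",
    ],
    "positive": ["Analyzer", "Analyzer"],  # truncated excerpts
    "negative": ["Accessing output", "Analyzer dependency diagram"],
})
print(train_dataset.column_names)  # ['anchor', 'positive', 'negative']
```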
Loss: `TripletLoss` with these parameters:

```json
{
    "distance_metric": "TripletDistanceMetric.EUCLIDEAN",
    "triplet_margin": 5
}
```
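In code this corresponds roughly to the following sketch, where `model` is the `SentenceTransformer` being fine-tuned:

```python
from sentence_transformers import losses
from sentence_transformers.losses import TripletDistanceMetric

# Triplet loss: max(d(anchor, positive) - d(anchor, negative) + margin, 0),
# pulling anchors toward positives and pushing them at least `triplet_margin`
# farther from negatives under Euclidean distance.
loss = losses.TripletLoss(
    model=model,
    distance_metric=TripletDistanceMetric.EUCLIDEAN,
    triplet_margin=5,
)
```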
The following hyperparameters were changed from their defaults:

- `per_device_train_batch_size`: 16
- `gradient_accumulation_steps`: 2
- `learning_rate`: 2e-05
- `num_train_epochs`: 2
- `warmup_ratio`: 0.05
- `bf16`: True
- `dataloader_num_workers`: 2
- `remove_unused_columns`: False

All hyperparameters:

- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: no
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 8
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 2
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 2
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.05
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 2
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: False
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`: 
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
- `router_mapping`: {}
- `learning_rate_mapping`: {}
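A sketch of how the non-default values map onto the sentence-transformers v3 training API, assuming `model`, `train_dataset`, and `loss` from the earlier sketches (`output_dir` is a placeholder path):

```python
from sentence_transformers import (
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)

args = SentenceTransformerTrainingArguments(
    output_dir="output/bge-large-json-triplet",  # placeholder path
    per_device_train_batch_size=16,
    gradient_accumulation_steps=2,  # effective batch size of 32 per device
    learning_rate=2e-5,
    num_train_epochs=2,
    warmup_ratio=0.05,
    bf16=True,
    dataloader_num_workers=2,
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    loss=loss,
)
trainer.train()
```

During training, the loss was logged every 50 steps: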
| Epoch | Step | Training Loss |
|---|---|---|
| 0.0946 | 50 | 9.7648 |
| 0.1892 | 100 | 9.3037 |
| 0.2838 | 150 | 9.1803 |
| 0.3784 | 200 | 9.2374 |
| 0.4730 | 250 | 9.1815 |
| 0.5676 | 300 | 9.2019 |
| 0.6623 | 350 | 9.2085 |
| 0.7569 | 400 | 9.0603 |
| 0.8515 | 450 | 9.1276 |
| 0.9461 | 500 | 9.1794 |
| 1.0397 | 550 | 9.0348 |
| 1.1343 | 600 | 9.1246 |
| 1.2289 | 650 | 9.1251 |
| 1.3236 | 700 | 9.1681 |
| 1.4182 | 750 | 8.9070 |
| 1.5128 | 800 | 9.0067 |
| 1.6074 | 850 | 9.1056 |
| 1.7020 | 900 | 9.0715 |
| 1.7966 | 950 | 8.9425 |
| 1.8912 | 1000 | 9.0148 |
| 1.9858 | 1050 | 9.0477 |
```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```

```bibtex
@misc{hermans2017defense,
    title = {In Defense of the Triplet Loss for Person Re-Identification},
    author = {Alexander Hermans and Lucas Beyer and Bastian Leibe},
    year = {2017},
    eprint = {1703.07737},
    archivePrefix = {arXiv},
    primaryClass = {cs.CV}
}
```
Base model: [BAAI/bge-large-en-v1.5](https://huggingface.co/BAAI/bge-large-en-v1.5)