- Text Classification
- Transformers
- Safetensors
- llama
- Generated from Trainer
- trl
- reward-trainer
- text-embeddings-inference
- 4-bit precision
- bitsandbytes
Instructions to use shirwu/iter_debug with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use shirwu/iter_debug with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-classification", model="shirwu/iter_debug")

# Load model directly
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("shirwu/iter_debug")
model = AutoModelForSequenceClassification.from_pretrained("shirwu/iter_debug")
```
- Notebooks
- Google Colab
- Kaggle
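Whether called through the pipeline or loaded directly, `AutoModelForSequenceClassification` returns unnormalized logits; the pipeline applies a softmax to turn them into per-label scores. A minimal pure-Python sketch of that softmax step (the logit values below are made-up for illustration, not real output from shirwu/iter_debug):

```python
import math

def softmax(logits):
    """Convert raw classifier logits to probabilities that sum to 1."""
    # Subtract the max logit for numerical stability before exponentiating.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Illustrative logits for a two-label classifier (not real model output).
probs = softmax([2.0, 0.5])
print(probs)  # probabilities sum to 1; the first label dominates
```

The max-subtraction trick leaves the result unchanged mathematically but avoids overflow when logits are large.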
- Xet hash: aea70efb3849ee1cb532760c574900a4bd62a3feb0a5aa46a0aeab27b9b6152b
- Size of remote file: 336 MB
- SHA256: 35b97ea522f781b0aa64f13957adf42ba1d2834ec730f62301af1416def3e6ae
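The tags above also list 4-bit precision and bitsandbytes. A hedged sketch of loading the model quantized to 4 bits via the Transformers `BitsAndBytesConfig` API (this assumes a CUDA GPU with the `bitsandbytes` package installed; the quantization settings shown are common defaults, not values confirmed by this model card):

```python
# Sketch: 4-bit loading with bitsandbytes (assumption — requires CUDA + bitsandbytes).
import torch
from transformers import AutoModelForSequenceClassification, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                       # store weights in 4-bit
    bnb_4bit_quant_type="nf4",               # NormalFloat4 quantization
    bnb_4bit_compute_dtype=torch.float16,    # dtype used for matmuls
)

model = AutoModelForSequenceClassification.from_pretrained(
    "shirwu/iter_debug",
    quantization_config=bnb_config,
)
```

This trades a small amount of accuracy for a roughly 4x reduction in weight memory compared with float16.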