This GGUF file was converted from https://ollama.com/FradSer/deeptranslate-r2-4b using https://github.com/mattjamo/OllamaToGGUF.
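A minimal sketch of running the converted GGUF locally with llama-cpp-python. The file name `deeptranslate-r2-4b.gguf`, the context size, and the prompt are placeholders for illustration, not values taken from this repository.

```python
# Minimal sketch: load and query the converted GGUF with llama-cpp-python.
# The model_path below is hypothetical; point it at your downloaded .gguf file.
from llama_cpp import Llama

llm = Llama(
    model_path="./deeptranslate-r2-4b.gguf",  # placeholder file name
    n_ctx=4096,  # context window; adjust to your hardware
)

output = llm(
    "Translate to English: 你好，世界",  # example prompt, not from the model card
    max_tokens=128,
)
print(output["choices"][0]["text"])
```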