---
title: Replicate
emoji: 🚀
colorFrom: blue
colorTo: green
sdk: static
pinned: false
---

# Run AI with an API

Replicate lets developers run, fine-tune, and deploy open models with a production-ready API. On Hugging Face, you can use Replicate as an Inference Provider for popular models across image generation, video generation, speech, and audio.

**Browse Replicate-powered models:** [Run with Replicate](https://huggingface.co/collections/replicate/run-with-replicate-6a04d0792d027edbf66c7155)

**Read the integration docs:** [Replicate on Hugging Face Inference Providers](https://huggingface.co/docs/inference-providers/providers/replicate)

**Reference examples:** [Replicate Inference Provider Examples](https://huggingface.co/spaces/replicate/inference-provider-examples)

**AI & ML interests:** text-to-image, image-to-image, text-to-video, speech, audio, inference providers, fine-tuning

## Why use Replicate on Hugging Face?

Use Replicate's infrastructure through Hugging Face's standard Inference Providers interface. You can call image, video, speech, and audio models with the same `InferenceClient`, your existing `HF_TOKEN`, and a provider switch instead of wiring up a separate integration path.

## Get started in 30 seconds

Install the Hub client, set your token, generate an image, and save it locally:

```bash
pip install huggingface_hub pillow
export HF_TOKEN=hf_...
```

```python
import os

from huggingface_hub import InferenceClient

client = InferenceClient(
    provider="replicate",
    api_key=os.environ["HF_TOKEN"],
)

image = client.text_to_image(
    "A cinematic photo of an astronaut riding a horse",
    model="Tongyi-MAI/Z-Image-Turbo",
)
image.save("replicate-astronaut.png")
```

For JavaScript, cURL, and task-specific examples, see the [Replicate provider docs](https://huggingface.co/docs/inference-providers/providers/replicate).
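
Note that `os.environ["HF_TOKEN"]` raises a bare `KeyError` when the token isn't set, which can be confusing on first run. A small guard with a friendlier message is one option; this is just a sketch, and the `require_hf_token` helper below is our own name, not part of `huggingface_hub`:

```python
import os


def require_hf_token() -> str:
    """Return the Hugging Face token from the environment, failing loudly if missing."""
    token = os.environ.get("HF_TOKEN")
    if not token:
        # Hypothetical error message; point users at the token settings page.
        raise RuntimeError(
            "HF_TOKEN is not set. Create a token at "
            "https://huggingface.co/settings/tokens and export it first."
        )
    return token
```

You would then pass `api_key=require_hf_token()` when constructing the client instead of indexing `os.environ` directly.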