Built a free hosted demo of V3 — feedback welcome
Hi DeepSeek team + community,
I built a small Gradio Space that lets visitors try DeepSeek V3 with zero setup. The goal was to make it easy to point a friend, or someone clicking through from a tweet, straight to a working V3 chat without asking them to spin up anything:
👉 [huggingface.co/spaces/MachineFi/QuickSilverPro-Chat](https://huggingface.co/spaces/MachineFi/QuickSilverPro-Chat)
It streams tokens, shows the hidden reasoning for R1, and puts V3 and Qwen 3.5-35B in the same dropdown so reviewers can feel the difference side by side. The backend is an OpenAI-compatible endpoint, so switching models is just a dropdown change.
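To make the "switching models is a dropdown change" point concrete, here is a minimal sketch of how a dropdown label can map to a request body for an OpenAI-compatible `/chat/completions` endpoint. The labels and model IDs below are illustrative placeholders, not the Space's actual configuration:

```python
import json

# Hypothetical mapping from dropdown labels to backend model IDs.
MODEL_CHOICES = {
    "DeepSeek V3": "deepseek-chat",
    "DeepSeek R1": "deepseek-reasoner",
    "Qwen 3.5-35B": "qwen3.5-35b-instruct",
}

def build_request(label: str, prompt: str) -> dict:
    """Body for POST {base_url}/chat/completions; only `model` changes."""
    return {
        "model": MODEL_CHOICES[label],
        "messages": [{"role": "user", "content": prompt}],
        "stream": True,  # token streaming for the chat UI
    }

print(json.dumps(build_request("DeepSeek V3", "Hello!"), indent=2))
```

Because the request shape is identical across models, the UI only has to swap the `model` field when the dropdown changes.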
A few things I'd love community input on:
- Is V3 the right default for the "general chat" slot, or would you put R1 first for a demo audience that skews technical?
- System prompt: we keep it empty by default so users judge the raw model. Is there any reason to inject a light "be concise" nudge?
- Token limits per session: we rate-limit generously (~8 generations/min). Would you recommend showing that limit up front, or letting users discover it when they hit it?
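For anyone curious what the per-session limit looks like mechanically, here is a minimal sliding-window sketch of the ~8 generations/min policy. The window size, limit, and session-keying are illustrative assumptions, not the Space's exact implementation:

```python
import time
from collections import defaultdict, deque

LIMIT = 8       # generations allowed per window (assumed policy)
WINDOW = 60.0   # window length in seconds

_history = defaultdict(deque)  # session_id -> timestamps of recent generations

def allow(session_id, now=None):
    """Return True if this session may generate now, recording the attempt."""
    now = time.monotonic() if now is None else now
    q = _history[session_id]
    # Drop timestamps that have fallen outside the window.
    while q and now - q[0] >= WINDOW:
        q.popleft()
    if len(q) >= LIMIT:
        return False  # over the limit; do not record the rejected attempt
    q.append(now)
    return True
```

A sliding window like this avoids the burst-at-the-boundary problem of fixed-minute buckets, which matters if the limit is surfaced to users up front.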
Happy to fold any feedback back into the Space. Thanks for the model — it's genuinely replacing GPT-4o in a
lot of our internal tooling.