Papers
arxiv:2603.25746

ShotStream: Streaming Multi-Shot Video Generation for Interactive Storytelling

Published on Mar 26 · Submitted by yawenluo on Mar 30
#2 Paper of the day

Abstract

ShotStream enables real-time interactive multi-shot video generation through causal architecture design, dual-cache memory mechanisms, and two-stage distillation to maintain visual consistency and reduce latency.

AI-generated summary

Multi-shot video generation is crucial for long narrative storytelling, yet current bidirectional architectures suffer from limited interactivity and high latency. We propose ShotStream, a novel causal multi-shot architecture that enables interactive storytelling and efficient on-the-fly frame generation. By reformulating the task as next-shot generation conditioned on historical context, ShotStream allows users to dynamically instruct ongoing narratives via streaming prompts. We achieve this by first fine-tuning a text-to-video model into a bidirectional next-shot generator, which is then distilled into a causal student via Distribution Matching Distillation. To overcome the challenges of inter-shot consistency and error accumulation inherent in autoregressive generation, we introduce two key innovations. First, a dual-cache memory mechanism preserves visual coherence: a global context cache retains conditional frames for inter-shot consistency, while a local context cache holds generated frames within the current shot for intra-shot consistency; a RoPE discontinuity indicator explicitly distinguishes the two caches to eliminate ambiguity. Second, to mitigate error accumulation, we propose a two-stage distillation strategy: it begins with intra-shot self-forcing conditioned on ground-truth historical shots and progressively extends to inter-shot self-forcing using self-generated histories, effectively bridging the train-test gap. Extensive experiments demonstrate that ShotStream generates coherent multi-shot videos with sub-second latency, achieving 16 FPS on a single GPU. It matches or exceeds the quality of slower bidirectional models, paving the way for real-time interactive storytelling. Training and inference code, as well as the models, are publicly available.
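The dual-cache memory mechanism described in the abstract can be sketched as two rolling buffers with an explicit boundary marker. This is a minimal illustration, not the paper's implementation: the class name, cache sizes, frame representation, and the sentinel standing in for the RoPE discontinuity indicator are all assumptions.

```python
from collections import deque

class DualCacheMemory:
    """Illustrative sketch of a dual-cache context memory for
    autoregressive shot generation. Sizes and names are assumptions,
    not values from the paper."""

    def __init__(self, global_size=16, local_size=32):
        self.global_cache = deque(maxlen=global_size)  # inter-shot consistency
        self.local_cache = deque(maxlen=local_size)    # intra-shot consistency

    def start_new_shot(self, conditional_frames):
        # Promote selected conditional frames to the global cache,
        # then reset the per-shot local state.
        self.global_cache.extend(conditional_frames)
        self.local_cache.clear()

    def append_frame(self, frame):
        # Newly generated frames only feed the local cache.
        self.local_cache.append(frame)

    def context(self):
        # The paper marks the cache boundary with a RoPE discontinuity
        # indicator; a sentinel token stands in for it here.
        return list(self.global_cache) + ["<rope_discontinuity>"] + list(self.local_cache)
```

Bounded deques give a rolling window, which stands in for the KV-cache eviction a real attention implementation would use.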

Community

The dual-cache memory mechanism here maps interestingly to agentic workflows — global context for cross-turn coherence, local context for intra-turn generation. We've hit similar error accumulation issues with long-running LangGraph agents. The two-stage self-forcing distillation approach seems like it could generalize beyond video to any autoregressive pipeline. Has anyone tried adapting this for multi-modal agent contexts?

Paper author · Paper submitter

TL;DR: ShotStream is a novel causal multi-shot architecture that enables interactive storytelling and efficient on-the-fly frame generation, achieving 16 FPS on a single NVIDIA GPU.

the streaming approach here is clever; using self-generated histories to bridge the train-test gap is something i hadn't seen before. good writeup of this paper if you want the details https://arxivexplained.com/paper/shotstream-streaming-multi-shot-video-generation-for-interactive-storytelling

Paper author

Thanks for the writeup. It looks like that link is broken, though; this one works: https://arxivexplained.com/papers/shotstream-streaming-multi-shot-video-generation-for-interactive-storytelling.

the dual-cache memory system with a RoPE discontinuity indicator is the standout trick here, neatly separating inter-shot coherence from intra-shot continuity without bloated architectures. i'm curious about ablations on global vs local cache contributions, and how sensitive the RoPE boundary is to shot length, scene cuts, or narrative genre. the two-stage distillation to bridge train-test rollout feels practical, but i want to see how it handles prompt drift within a shot. the arXivLens breakdown (https://arxivlens.com/PaperView/Details/shotstream-streaming-multi-shot-video-generation-for-interactive-storytelling-6252-795a28c5) helped me parse the method details. would be interesting to push this toward even longer form narratives or more diverse prompts to test robustness.
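The two-stage self-forcing strategy from the abstract reduces to a rule for choosing where the conditioning history comes from: ground-truth shots in stage one, the student's own rollouts in stage two. A minimal sketch, assuming a `generate_shot` callable that stands in for the student model (not the paper's API):

```python
def sample_history(shot_idx, dataset_shots, generate_shot, stage):
    """Sketch of two-stage self-forcing history selection.

    stage 1: intra-shot self-forcing, conditioned on ground-truth
             historical shots from the dataset.
    stage 2: inter-shot self-forcing, conditioned on self-generated
             histories to bridge the train-test gap.
    `generate_shot(history, prompt)` is a hypothetical stand-in for
    the causal student model.
    """
    if stage == 1:
        # History is taken directly from the dataset.
        return dataset_shots[:shot_idx]
    # Roll the student out on its own previous outputs.
    history = []
    for i in range(shot_idx):
        history.append(generate_shot(history, dataset_shots[i]))
    return history
```

In practice the transition would be gradual (the abstract says "progressively extends"), e.g. by mixing the two regimes with an increasing stage-two probability over training.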


Get this paper in your agent:

hf papers read 2603.25746
Don't have the latest CLI?
curl -LsSf https://hf.co/cli/install.sh | bash

Models citing this paper 1

Datasets citing this paper 0

No dataset linking this paper

Cite arxiv.org/abs/2603.25746 in a dataset README.md to link it from this page.

Spaces citing this paper 0

No Space linking this paper

Cite arxiv.org/abs/2603.25746 in a Space README.md to link it from this page.

Collections including this paper 2