πŸŽ›οΈ Audio-Omni

Unified Audio Understanding, Generation, and Editing (SIGGRAPH 2026)

GitHub · Project Page · arXiv

πŸ“– Overview

Audio-Omni is the first end-to-end framework that unifies understanding, generation, and editing across general sound, music, and speech domains. It combines a frozen Multimodal Large Language Model (Qwen2.5-Omni) for high-level reasoning with a trainable Diffusion Transformer for high-fidelity synthesis.
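
The division of labor can be pictured with a toy sketch: the frozen MLLM turns a multimodal prompt into condition embeddings, and the trainable DiT denoises an audio latent while cross-attending to them. Everything below is illustrative scaffolding (class names, dimensions, the denoising loop are all made up), not the actual Audio-Omni code:

import torch

class FrozenMLLM(torch.nn.Module):
    """Stand-in for the frozen Qwen2.5-Omni backbone: maps a prompt
    (here, dummy token ids) to a sequence of condition embeddings."""
    def __init__(self, dim=64):
        super().__init__()
        self.embed = torch.nn.Embedding(1000, dim)
        for p in self.parameters():
            p.requires_grad = False  # frozen: reasoning only, never fine-tuned

    def forward(self, token_ids):
        return self.embed(token_ids)

class DiffusionTransformer(torch.nn.Module):
    """Stand-in for the trainable DiT: refines an audio latent while
    cross-attending to the MLLM's condition embeddings."""
    def __init__(self, dim=64):
        super().__init__()
        self.attn = torch.nn.MultiheadAttention(dim, 4, batch_first=True)

    def forward(self, latent, cond):
        out, _ = self.attn(latent, cond, cond)  # query: latent, key/value: conditions
        return out

mllm, dit = FrozenMLLM(), DiffusionTransformer()
cond = mllm(torch.randint(0, 1000, (1, 16)))  # high-level reasoning output
latent = torch.randn(1, 128, 64)              # noisy audio latent
for _ in range(4):                            # toy denoising loop
    latent = latent - 0.1 * dit(latent, cond)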

🎯 Capabilities

  • Understanding: Audio/video captioning, question answering
  • Generation: Text-to-Audio, Text-to-Music, Video-to-Audio, Video-to-Music, Text-to-Speech, Voice Conversion
  • Editing: Add, Remove, Extract, Style Transfer

πŸš€ Quick Start

Installation

# Clone the GitHub repository
git clone https://github.com/ZeyueT/Audio-Omni.git
cd Audio-Omni

# Install dependencies
pip install -e .
conda install -c conda-forge ffmpeg libsndfile

# Download model from Hugging Face
huggingface-cli download HKUSTAudio/Audio-Omni --local-dir model/
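
To confirm the editable install and the download succeeded, a quick sanity check (this assumes the package exposes AudioOmni as in the Sample Usage below, and the file layout listed under Model Files):

# Run inside Python after the steps above.
from pathlib import Path
from audio_omni import AudioOmni  # import should succeed after `pip install -e .`

for name in ("Audio-Omni.json", "model.ckpt"):
    assert (Path("model") / name).exists(), f"missing model/{name}"
print("installation and model download look OK")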

Sample Usage

from audio_omni import AudioOmni
import torchaudio

# Load model
model = AudioOmni("model/Audio-Omni.json", "model/model.ckpt")

# 1. Understanding
response = model.understand(
    "Describe the sounds in this audio.", 
    audio="example.wav"
)
print(response)

# 2. Generation (Text-to-Audio)
audio = model.generate("T2A", prompt="A clock ticking.")
torchaudio.save("output.wav", audio, model.sample_rate)

# 3. Editing (Add a sound)
audio = model.edit("Add", "input.wav", desc="skateboarding")
torchaudio.save("output_add.wav", audio, model.sample_rate)
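
This card does not say whether AudioOmni resamples inputs internally, so matching your input to the model's sample rate beforehand is a reasonable precaution; the sketch below uses only standard torchaudio calls and the `model` object from the snippet above:

import torchaudio
import torchaudio.functional as AF

# Resample an arbitrary input file to the model's rate before editing.
wav, sr = torchaudio.load("input.wav")
if sr != model.sample_rate:
    wav = AF.resample(wav, orig_freq=sr, new_freq=model.sample_rate)
torchaudio.save("input_resampled.wav", wav, model.sample_rate)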

πŸ“¦ Model Files

  • Audio-Omni.json β€” Model configuration
  • model.ckpt β€” Model checkpoint (~21 GB)
  • synchformer_state_dict.pth β€” Synchformer checkpoint for video conditioning
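
Since model.ckpt alone is ~21 GB, it can help to fetch files individually rather than downloading the whole repository; the huggingface_hub Python API supports this directly:

from huggingface_hub import hf_hub_download

# Grab just the small config first; swap in "model.ckpt" or
# "synchformer_state_dict.pth" to fetch the other files.
cfg_path = hf_hub_download(
    repo_id="HKUSTAudio/Audio-Omni",
    filename="Audio-Omni.json",
    local_dir="model",
)
print(cfg_path)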

πŸ–₯️ Gradio Demo

# Launch interactive demo
python run_gradio.py \
    --model-config model/Audio-Omni.json \
    --ckpt-path model/model.ckpt \
    --server-port 7777
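
Once the server starts, the demo is served at http://localhost:7777 (the port set by --server-port above).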

πŸ“ Citation

@article{tian2026audio,
  title={Audio-Omni: Extending Multi-modal Understanding to Versatile Audio Generation and Editing},
  author={Tian, Zeyue and Yang, Binxin and Liu, Zhaoyang and Zhang, Jiexuan and Yuan, Ruibin and Yin, Hubery and Chen, Qifeng and Li, Chen and Lv, Jing and Xue, Wei and others},
  journal={arXiv preprint arXiv:2604.10708},
  year={2026}
}

πŸ“„ License

CC-BY-NC-4.0 (Non-commercial use only). Commercial use of the model weights requires explicit written authorization from the authors. For commercial licensing inquiries, contact: ztianad@connect.ust.hk
