---
license: other
language:
- en
tags:
- audio
- reasoning
- audio understanding
- ASR
---
# Model Overview

## Description:
Audio Flamingo 3 (AF3) is a fully open, state-of-the-art Large Audio-Language Model (LALM) that advances reasoning and understanding across speech, sounds, and music. AF3 builds on previous work with innovations in:

- Unified audio representation learning (speech, sound, music)
- Flexible, on-demand chain-of-thought reasoning
- Long-context audio comprehension (up to 10 minutes)
- Multi-turn, multi-audio conversational dialogue (AF3-Chat)
- Voice-to-voice interaction (AF3-Chat)

Extensive evaluations confirm AF3’s effectiveness, setting new benchmarks on over 20 public audio understanding and reasoning tasks.

**This model is for non-commercial research purposes only.**

<center><img src="static/af3_radial-1.png" width="400"></center>

<br>

<center><img src="static/af3_main_diagram-1.png" width="800"></center>


## License / Terms of Use
The model is released under the [NVIDIA OneWay Noncommercial License](incl_licenses/NVIDIA_OneWay_Noncommercial_License.docx). Portions of the dataset generation are also subject to the [Qwen Research License](https://huggingface.co/Qwen/Qwen2.5-3B/blob/main/LICENSE) and OpenAI’s [Terms of Use](https://openai.com/policies/terms-of-use).

## Deployment Geography
Global.

## Use Case
Intended for researchers and developers to explore:
- Audio question answering and reasoning
- Long-context audio comprehension
- Interactive sound/music design assistants
- Multi-turn (voice) chat

## Release Date
- GitHub (07/10/2025) via https://github.com/NVIDIA/audio-flamingo
- Hugging Face (07/10/2025) via https://huggingface.co/nvidia/audio-flamingo

## References:
* [Audio Flamingo 3: Advancing Audio Intelligence with Fully Open Large Audio-Language Models]()
* [Project Page](https://github.com/NVIDIA/audio-flamingo)
* [Demo Website](https://research.nvidia.com/labs/adlr/AF3/)
* [Hugging Face](https://huggingface.co/nvidia/audio-flamingo-3)


## Model Architecture:
**Architecture Type:** Transformer
**Network Architecture:** Audio Flamingo 3

AF3 uses:
- AF-Whisper unified audio encoder
- MLP-based audio adaptor
- Decoder-only LLM backbone (Qwen2.5-7B)
- Streaming TTS module (AF3-Chat)

**This model was developed based on [NVILA](https://github.com/NVlabs/VILA/tree/main/scripts/NVILA-Lite) and [Qwen-2.5-7B](https://huggingface.co/Qwen/Qwen2.5-7B).** <br>

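To make the component list above concrete, here is a minimal, illustrative PyTorch sketch of how the pieces connect: features from the unified audio encoder are projected by an MLP adaptor into the LLM embedding space and concatenated with the text embeddings consumed by the decoder-only backbone. The class names and dimensions below are hypothetical placeholders for exposition, not the actual AF3 implementation; see the GitHub repository for the real code.

```python
# Illustrative sketch only (hypothetical module and dimension names), not the AF3 source.
import torch
import torch.nn as nn

class AudioAdaptor(nn.Module):
    """MLP that maps audio-encoder features into the LLM embedding space."""
    def __init__(self, audio_dim: int = 1280, llm_dim: int = 3584):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(audio_dim, llm_dim),
            nn.GELU(),
            nn.Linear(llm_dim, llm_dim),
        )

    def forward(self, audio_feats: torch.Tensor) -> torch.Tensor:
        # audio_feats: (batch, num_frames, audio_dim) from the unified audio encoder
        return self.proj(audio_feats)  # (batch, num_frames, llm_dim)

# Conceptual flow: audio encoder -> MLP adaptor -> concatenate with text -> decoder-only LLM
adaptor = AudioAdaptor()
audio_feats = torch.randn(1, 750, 1280)                       # placeholder encoder output
audio_tokens = adaptor(audio_feats)                           # projected "audio tokens"
text_embeds = torch.randn(1, 32, 3584)                        # placeholder text embeddings
llm_inputs = torch.cat([audio_tokens, text_embeds], dim=1)    # sequence fed to the LLM backbone
```

In AF3-Chat, the text generated by the LLM backbone additionally drives the streaming TTS module to produce spoken responses.
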
## Input:
Input Type: Audio, Text <br>
Input Format: WAV/MP3/FLAC, UTF-8 text <br>
Input Parameters: Audio is Two-Dimensional (2D) and Text is One-Dimensional (1D) <br>
Other Properties Related to Input: <br>
- Max Audio Length: 10 minutes <br>
- Max Text Length: 16,000 tokens <br>


## Output:
Output Type: Text (and optional speech) <br>
Text Format: UTF-8 string <br>
Output Parameters: One-Dimensional (1D) <br>
Other Properties Related to Output: <br>
- Max Text Length: 1,024 tokens <br>
- Speech Format: streaming TTS (text-to-speech) waveform <br>

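As a practical illustration of the input constraints above, the snippet below loads an audio file, downmixes it to mono, and truncates it to the 10-minute cap before it is handed to the model. The 16 kHz target sample rate is an assumption (Whisper-style encoders commonly use it) and is not a value stated in this card; the file name is hypothetical.

```python
# Hedged example: prepare an input clip under the 10-minute limit from the input spec.
import librosa

MAX_SECONDS = 10 * 60   # 10-minute cap from the input spec
TARGET_SR = 16_000      # assumed sample rate, not stated in this card

def load_clip(path: str):
    # librosa resamples to TARGET_SR and downmixes to mono
    audio, sr = librosa.load(path, sr=TARGET_SR, mono=True)
    max_samples = MAX_SECONDS * sr
    if len(audio) > max_samples:
        audio = audio[:max_samples]  # truncate overly long recordings
    return audio, sr

audio, sr = load_clip("example.wav")  # hypothetical file name
```
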
Our AI models are designed and/or optimized to run on NVIDIA GPU-accelerated systems (A100/H100). By leveraging NVIDIA’s hardware (e.g., GPU cores) and software frameworks (e.g., CUDA libraries), the model achieves faster training and inference times compared to CPU-only solutions. <br>

## Software Integration:
**Runtime Engine:** PyTorch / HuggingFace Transformers

**Supported Hardware:**
* NVIDIA Ampere (A100)
* NVIDIA Hopper (H100)

**Supported OS:**
* Linux

## Model Version:
* v3.0

---

## Training and Testing Datasets:

### Training Dataset:
AF3 is trained entirely on open-source audio data, organized into four novel, large-scale collections. For each dataset below, we note whether its annotations were collected by humans (Human) or generated with AI models (Automated).

The data collection method noted below applies to all datasets used for training and testing:
Data Collection Method: Human
Labeling Collection Method: Please see below

#### General Sound:
* [WavCaps](https://github.com/XinhaoMei/WavCaps) (Automated)
* [MACS](https://zenodo.org/records/5114771) (Human)
* [SoundDescs](https://github.com/akoepke/audio-retrieval-benchmark) (Human)
* [Clotho-v2](https://github.com/audio-captioning/clotho-dataset/tree/master) (Human)
* [WavText5K](https://github.com/microsoft/WavText5K) (Human)
* [Clotho-AQA](https://zenodo.org/records/6473207) (Human)
* [Open-AQA](https://github.com/YuanGongND/ltu?tab=readme-ov-file) (Automated)
* [CompA-R](https://github.com/Sreyan88/GAMA) (Automated)
* [Salmonn AQA](https://github.com/bytedance/SALMONN/tree/main) (Automated)
* [Audio Entailment](https://github.com/microsoft/AudioEntailment) (Automated)
* [CompA](https://github.com/Sreyan88/CompA) (Automated)
* [AudioSet](https://research.google.com/audioset/download.html) (Human)
* [YouTube-8M](https://research.google.com/youtube8m/) (Human)
* [FSD50k](https://zenodo.org/records/4060432) (Human)
* [CochlScene](https://github.com/cochlearai/cochlscene) (Human)
* [NonSpeech7K](https://zenodo.org/records/6967442) (Human)
* [Chime-Home](https://code.soundsoftware.ac.uk/projects/chime-home-dataset-annotation-and-baseline-evaluation-code) (Human)
* [Sonyc-UST](https://zenodo.org/records/3966543) (Human)

#### Music:
* [LP-MusicCaps](https://github.com/seungheondoh/lp-music-caps) (Automated)
* [MusicQA](https://github.com/shansongliu/MU-LLaMA?tab=readme-ov-file) (Automated)
* [MusicAVQA](https://gewu-lab.github.io/MUSIC-AVQA/) (Human)
* [MusicBench](https://huggingface.co/datasets/amaai-lab/MusicBench) (Automated)
* [Mu-LLAMA](https://github.com/shansongliu/MU-LLaMA) (Automated)
* [NSynth](https://magenta.tensorflow.org/datasets/nsynth) (Human)
* [FMA](https://github.com/mdeff/fma) (Human)
* [MusDB-HQ](https://zenodo.org/records/3338373) (Human)
* [Music4All](https://sites.google.com/view/contact4music4all) (Human)
* [Million Song Dataset](http://millionsongdataset.com/) (Human)

#### Speech:
* [MSP-Podcast](https://ecs.utdallas.edu/research/researchlabs/msp-lab/MSP-Podcast.html) (Human)
* [JL-Corpus](https://github.com/tli725/JL-Corpus) (Human)
* [MELD](https://github.com/declare-lab/MELD) (Human)
* [Tess](https://www.kaggle.com/datasets/ejlok1/toronto-emotional-speech-set-tess) (Human)
* [OMGEmotion](https://github.com/knowledgetechnologyuhh/OMGEmotionChallenge) (Human)
* [Emov-DB](https://github.com/numediart/EmoV-DB) (Human)
* [LibriSpeech](https://www.openslr.org/12) (Human)
* [SPGISpeech](https://datasets.kensho.com/datasets/spgispeech) (Human)
* [TEDLIUM](https://www.openslr.org/51/) (Human)
* [GigaSpeech](https://github.com/SpeechColab/GigaSpeech) (Human)
* [Common Voice 15](https://huggingface.co/datasets/mozilla-foundation/common_voice_12_0) (Human)
* [VoxPopuli](https://github.com/facebookresearch/voxpopuli) (Human)
* [VoxCeleb2](https://www.robots.ox.ac.uk/~vgg/data/voxceleb/vox2.html) (Human)
* [Switchboard](https://catalog.ldc.upenn.edu/LDC97S62) (Human)
* [AMI](https://groups.inf.ed.ac.uk/ami/corpus/) (Human)

#### Voice:
* [VoiceAssistant-400K](https://huggingface.co/datasets/gpt-omni/VoiceAssistant-400K) (Automated)

#### Mixed:
* [AudioSkills-XL (ours)](https://huggingface.co/datasets/nvidia/AudioSkills) (Automated)
* [LongAudio-XL (ours)](https://huggingface.co/datasets/nvidia/LongAudio) (Automated)
* [AF-Think (ours)](https://huggingface.co/datasets/nvidia/AF-Think) (Automated)
* [AF-Chat (ours)](https://huggingface.co/datasets/nvidia/AF-Chat) (Automated)

---

### Testing Dataset:
Audio Flamingo 3 is evaluated on the test splits of the following datasets.

Data Collection Method: Human (for all datasets noted below)
Labeling Method: Please see below

* [ClothoAQA](https://zenodo.org/records/6473207) (Human)
* [MusicAVQA](https://gewu-lab.github.io/MUSIC-AVQA/) (Human)
* [Clotho-v2](https://github.com/audio-captioning/clotho-dataset/tree/master) (Human)
* [CochlScene](https://github.com/cochlearai/cochlscene) (Human)
* [NonSpeech7K](https://zenodo.org/records/6967442) (Human)
* [NSynth](https://magenta.tensorflow.org/datasets/nsynth) (Human)
* [AudioCaps](https://github.com/cdjkim/audiocaps) (Human)
* [US8K](https://urbansounddataset.weebly.com/urbansound8k.html) (Human)
* [GTZAN](https://www.tensorflow.org/datasets/catalog/gtzan) (Human)
* [MMAU](https://github.com/Sakshi113/mmau/tree/main) (Human)
* [MMAR](https://arxiv.org/abs/2505.13032) (Human)
* [Audio Entailment](https://github.com/microsoft/AudioEntailment) (Automated)
* [CompA-R-test](https://github.com/Sreyan88/GAMA) (Automated)
* [MuchoMusic](https://huggingface.co/datasets/yongyizang/RUListening) (Automated)
* [Open-AQA](https://github.com/YuanGongND/ltu?tab=readme-ov-file) (Automated)
* [MusicInstruct](https://huggingface.co/datasets/m-a-p/Music-Instruct) (Automated)
* [MusicQA](https://huggingface.co/datasets/mu-llama/MusicQA) (Automated)
* [CMM Hallucination](https://huggingface.co/datasets/DAMO-NLP-SG/CMM) (Human)
* [IEMOCAP](https://sail.usc.edu/iemocap/) (Human)
* [VoiceBench](https://github.com/MatthewCYM/VoiceBench) (Human)
* [OpenAudioBench](https://huggingface.co/datasets/baichuan-inc/OpenAudioBench) (Human)
* [SEED](https://github.com/BytedanceSpeech/seed-tts-eval) (Human)
* [LibriSpeech](https://www.openslr.org/12) (Human)
* [SPGISpeech](https://datasets.kensho.com/datasets/spgispeech) (Human)
* [TEDLIUM](https://www.openslr.org/51/) (Human)
* [GigaSpeech](https://github.com/SpeechColab/GigaSpeech) (Human)
* [Common Voice 15](https://huggingface.co/datasets/mozilla-foundation/common_voice_12_0) (Human)
* [VoxPopuli](https://github.com/facebookresearch/voxpopuli) (Human)
* [LongAudioBench (ours)](https://huggingface.co/datasets/nvidia/LongAudio) (Automated)
* [AF-Chat-test (ours)](https://huggingface.co/datasets/nvidia/AF-Chat) (Human)

---

## Inference:

**Engine:** HuggingFace Transformers
**Test Hardware:** NVIDIA A100 80 GB

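The snippet below is a minimal loading sketch, assuming the Hugging Face checkpoint ships custom modeling code that Transformers can load with `trust_remote_code=True`. The exact inference entry points (prompt format, audio preprocessing, generation helpers) are defined in the GitHub repository, so treat this as illustrative rather than the official usage.

```python
# Minimal sketch, assuming the checkpoint can be loaded via trust_remote_code.
# See https://github.com/NVIDIA/audio-flamingo for the supported inference scripts.
import torch
from transformers import AutoModel

model = AutoModel.from_pretrained(
    "nvidia/audio-flamingo-3",   # repo id from this card
    trust_remote_code=True,      # assumption: custom modeling code in the repo
    torch_dtype=torch.bfloat16,
).to("cuda")                     # tested on NVIDIA A100 80 GB
model.eval()
```
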
---

## Ethical Considerations:
NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse.

Please report security vulnerabilities or NVIDIA AI Concerns [here](https://www.nvidia.com/en-us/support/submit-security-vulnerability/).

---

## Acknowledgements
Built with Qwen, NVILA, and the open audio-ML community.