Speak or Stay Silent: Context-Aware Turn-Taking in Multi-Party Dialogue
Paper: arXiv:2603.11409
This is a LoRA adapter for Mistral-7B-Instruct-v0.2, fine-tuned on the AMI meeting corpus for proactive response prediction in multi-party conversations. Given the conversational context and the current utterance, the model predicts whether a target speaker will SPEAK next or remain SILENT.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")
model = PeftModel.from_pretrained(base_model, "kraken07/mistral-7b-ami")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")

# The input format should match training: context turns followed by the current turn.
# Output: a SPEAK or SILENT prediction for the target speaker.
```
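The exact prompt template used during fine-tuning is not documented in this card, so the snippet below is only a sketch of how the input might be assembled: the `build_prompt` helper, the `Speaker: utterance` turn format, and the final question are hypothetical, not the official template.

```python
# Hypothetical helper illustrating one way to assemble the model input.
# The "Speaker: utterance" layout and the closing question are assumptions;
# check the paper or training code for the actual template.
def build_prompt(context_turns, current_turn, target_speaker):
    """context_turns: list of (speaker, utterance) pairs; current_turn: one such pair."""
    lines = [f"{spk}: {utt}" for spk, utt in context_turns]
    lines.append(f"{current_turn[0]}: {current_turn[1]}")
    lines.append(f"Will {target_speaker} speak next? Answer SPEAK or SILENT.")
    return "\n".join(lines)

prompt = build_prompt(
    [("A", "Shall we review the budget?"), ("B", "Yes, let's start.")],
    ("C", "I have the numbers ready."),
    target_speaker="A",
)

# With the model and tokenizer loaded as above, inference would look like:
# inputs = tokenizer(prompt, return_tensors="pt")
# outputs = model.generate(**inputs, max_new_tokens=5)
# prediction = tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:],
#                               skip_special_tokens=True)
```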
If you use this model, please cite our work:
```bibtex
@misc{bhagtani2026speakstaysilentcontextaware,
  title={Speak or Stay Silent: Context-Aware Turn-Taking in Multi-Party Dialogue},
  author={Bhagtani, Kratika and Anand, Mrinal and Xu, Yu Chen and Yadav, Amit Kumar Singh},
  year={2026},
  eprint={2603.11409},
  archivePrefix={arXiv},
  url={https://arxiv.org/abs/2603.11409}
}
```
Base model: mistralai/Mistral-7B-Instruct-v0.2