Mistral-7B-AMI: Proactive Response Prediction in Multi-Party Dialogue

LoRA adapter for Mistral-7B-Instruct fine-tuned on the AMI meeting corpus for proactive response prediction in multi-party conversations. Given a conversational context and a current utterance, the model predicts whether a target speaker will SPEAK next or remain SILENT.

Model Details

  • Model type: LoRA adapter for causal language model (text classification / sequence classification)
  • Language(s): English
  • License: Apache 2.0
  • Finetuned from: mistralai/Mistral-7B-Instruct-v0.2
  • Training data: AMI Meeting Corpus (meeting recordings and transcripts)

How to Get Started with the Model

from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")
model = PeftModel.from_pretrained(base_model, "kraken07/mistral-7b-ami")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")

# Your input format should match training: context turns + current turn
# Output: SPEAK or SILENT prediction for the target speaker
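The card does not document the exact prompt template used during training, so the helper below is a hypothetical sketch of one plausible format (context turns, then the current utterance, then a question about the target speaker). Treat the field names and wording as assumptions, not the model's actual template:

```python
# Hypothetical prompt builder -- the exact training template is not
# documented in this card, so this format is an assumption.
def build_prompt(context_turns, current_turn, target_speaker):
    """Join context turns and the current turn into one prompt string."""
    lines = [f"{spk}: {utt}" for spk, utt in context_turns]
    lines.append(f"{current_turn[0]}: {current_turn[1]}")
    lines.append(f"Will {target_speaker} speak next? Answer SPEAK or SILENT.")
    return "\n".join(lines)

prompt = build_prompt(
    context_turns=[("A", "Shall we review the budget?"), ("B", "Yes, one moment.")],
    current_turn=("A", "Here are the Q3 numbers."),
    target_speaker="C",
)

# Inference, assuming the model and tokenizer loaded as shown above and
# that the model emits the literal token SPEAK or SILENT:
# inputs = tokenizer(prompt, return_tensors="pt")
# out = model.generate(**inputs, max_new_tokens=3)
# prediction = tokenizer.decode(
#     out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
# ).strip()
```

Decoding only the newly generated tokens (slicing off the prompt length) avoids parsing the echoed input when extracting the SPEAK/SILENT label.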

Citation

If you use this model, please cite our work:

@misc{bhagtani2026speakstaysilentcontextaware,
  title={Speak or Stay Silent: Context-Aware Turn-Taking in Multi-Party Dialogue},
  author={Bhagtani, Kratika and Anand, Mrinal and Xu, Yu Chen and Yadav, Amit Kumar Singh},
  year={2026},
  archivePrefix={arXiv},
  url={https://arxiv.org/abs/2603.11409}
}