Climate Fallacy Detector (DeBERTa-v2-xlarge)
A fine-tuned DeBERTa-v2-xlarge model for detecting logical fallacies in climate misinformation.
This model is a fine-tuned version of microsoft/deberta-v2-xlarge on the FLICC taxonomy dataset. It detects logical fallacies in climate misinformation claims (e.g., Ad Hominem, False Equivalence, Fake Experts).
It reproduces the results of the paper "Detecting Fallacies in Climate Misinformation: A Technocognitive Approach" (Zanartu et al., 2024), achieving comparable performance on consumer hardware (Apple M-series Macs).
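Since the replication was run on Apple Silicon rather than a GPU server, device selection matters. The snippet below is a minimal, illustrative sketch of picking the MPS backend in PyTorch; it is not the actual training script used for this model.

```python
import torch

# Illustrative device selection: prefer Apple's Metal (MPS) backend,
# then CUDA, then CPU. Not the exact setup used to fine-tune this model.
if torch.backends.mps.is_available():
    device = torch.device("mps")
elif torch.cuda.is_available():
    device = torch.device("cuda")
else:
    device = torch.device("cpu")

print(f"Running on: {device}")
```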
Performance
- Test F1 Score (weighted / macro): 0.69 / 0.68 (see the averaging sketch after this list)
- Validation Accuracy: 0.72
- Test Precision: 0.73
- Test Recall: 0.69
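The weighted and macro F1 values above differ only in how per-class scores are averaged. A minimal sketch of the convention using scikit-learn; the labels and predictions here are placeholders, not the actual FLICC test data or evaluation script:

```python
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score

# Placeholder gold labels and predictions; the real evaluation uses the FLICC test set.
y_true = ["Ad Hominem", "Fake Experts", "Slothful Induction", "Fake Experts"]
y_pred = ["Ad Hominem", "Fake Experts", "Fake Experts", "Fake Experts"]

print("Accuracy:", accuracy_score(y_true, y_pred))
print("F1 (weighted):", f1_score(y_true, y_pred, average="weighted"))  # weighted by class frequency
print("F1 (macro):", f1_score(y_true, y_pred, average="macro"))        # unweighted mean over classes
print("Precision (weighted):", precision_score(y_true, y_pred, average="weighted", zero_division=0))
print("Recall (weighted):", recall_score(y_true, y_pred, average="weighted", zero_division=0))
```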
How to Use
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

model_name = "Gaanaman/deberta-v2-xlarge-climate-fallacy"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

text = "The climate has changed before naturally, so humans aren't causing it now."
inputs = tokenizer(text, return_tensors="pt")

# Run inference without gradient tracking and pick the highest-scoring fallacy class.
with torch.no_grad():
    logits = model(**inputs).logits

predicted_class_id = logits.argmax().item()
print(model.config.id2label[predicted_class_id])
# Output: Slothful Induction (or similar)
```
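Alternatively, the same prediction can be made with the higher-level `pipeline` API. This is a brief sketch: the batch of claims and the `truncation` flag are illustrative, not requirements of this model.

```python
from transformers import pipeline

# Convenience wrapper: tokenization, forward pass, and label mapping in one call.
classifier = pipeline("text-classification", model="Gaanaman/deberta-v2-xlarge-climate-fallacy")

claims = [
    "The climate has changed before naturally, so humans aren't causing it now.",
    "Thousands of scientists signed a petition disputing the consensus.",
]
for result in classifier(claims, truncation=True):
    print(result["label"], round(result["score"], 3))
```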
Model tree for Gaanaman/deberta-v2-xlarge-climate-fallacy
- Base model: microsoft/deberta-v2-xlarge
- Training dataset: FLICC taxonomy dataset
Evaluation results (FLICC dataset, test set, self-reported)
- Accuracy: 0.688
- F1 Score: 0.692
- Precision: 0.730
- Recall: 0.688