cross-lingual-sarcasm-detection
URL: https://github.com/Divyessh01/cross-lingual-sarcasm-detection
Description: A robust sarcasm and irony detection model built on transformer architectures such as BERT and RoBERTa, combined with contrastive learning. The goal is to improve detection performance even on small, noisy language-specific datasets.
Project Overview
This project applies deep learning techniques to sarcasm and irony detection, with a focus on handling context-dependent sarcasm. It provides a framework for training and evaluating models on sarcasm datasets.
Training Methods:
- Uses BERT and RoBERTa for sarcasm and irony detection.
- Employs contrastive learning to improve model robustness.
- Training requires Python 3.7+ and the Transformers, PyTorch, scikit-learn, and NumPy libraries.
- Supports both training from scratch and using pre-trained model weights.
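The repository does not spell out its contrastive objective here. As an illustration only, below is a minimal NumPy sketch of a supervised contrastive loss (one common formulation, following the SupCon style; the project's exact loss may differ): embeddings with the same label are pulled together, all others are pushed apart. The function name, temperature default, and loss variant are assumptions, not taken from the repository.

```python
import numpy as np

def supervised_contrastive_loss(embeddings, labels, temperature=0.1):
    """Illustrative supervised contrastive loss (SupCon-style) on a batch.

    embeddings: (n, d) array; labels: length-n sequence of class ids.
    NOTE: a hypothetical sketch -- the repo's actual objective may differ.
    """
    # L2-normalize so similarities are cosine similarities
    z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = z @ z.T / temperature
    n = len(labels)
    labels = np.asarray(labels)
    off_diag = ~np.eye(n, dtype=bool)                     # exclude self-pairs
    pos_mask = (labels[:, None] == labels[None, :]) & off_diag
    # numerically stable log-softmax over all non-self pairs
    sim_max = np.where(off_diag, sim, -np.inf).max(axis=1, keepdims=True)
    exp_sim = np.exp(sim - sim_max) * off_diag
    log_prob = sim - sim_max - np.log(exp_sim.sum(axis=1, keepdims=True))
    # average log-probability of positives, for anchors that have positives
    pos_counts = pos_mask.sum(axis=1)
    valid = pos_counts > 0
    mean_log_prob_pos = (log_prob * pos_mask).sum(axis=1)[valid] / pos_counts[valid]
    return -mean_log_prob_pos.mean()
```

In training, `embeddings` would typically be the pooled [CLS] representations from BERT/RoBERTa, and this term would be combined with a standard classification loss.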
Results:
- No explicit performance metrics or evaluation results are provided in the repository.
- Pre-trained model weights are provided for direct use.
Dataset Files:
- news_headlines_train.csv - (size not provided) - Training data for sarcasm detection.
- news_headlines_dev.csv - (size not provided) - Development/validation set.
- news_headlines_test.csv - (size not provided) - Test dataset.
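The three CSV splits can be read with the standard library before tokenization. The column names `headline` and `is_sarcastic` below are assumptions based on typical news-headline sarcasm datasets; check the actual CSV headers before use.

```python
import csv

def load_split(path, text_col="headline", label_col="is_sarcastic"):
    """Load one CSV split into parallel lists of texts and integer labels.

    Column names are assumptions -- adjust them to the dataset's real headers.
    """
    texts, labels = [], []
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            texts.append(row[text_col])
            labels.append(int(row[label_col]))
    return texts, labels

# e.g. train_texts, train_labels = load_split("news_headlines_train.csv")
```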
Pre-trained Model:
- Users can skip training and directly download the pre-trained model weights from the link provided in the repository.