Hate Speech Detection using Large Language Models with Data Augmentation and Feature Enhancement
arXiv:2603.04698v1
Abstract: This paper evaluates data augmentation and feature enhancement techniques for hate speech detection, comparing traditional classifiers, e.g., Delta Term Frequency-Inverse Document Frequency (Delta TF-IDF), with transformer-based models (DistilBERT, RoBERTa, DeBERTa, Gemma-7B, gpt-oss-20b) across diverse datasets. It examines the impact of the Synthetic Minority Over-sampling Technique (SMOTE), weighted loss determined by inverse class proportions, Part-of-Speech (POS) tagging, and text data augmentation on model performance. The open-source gpt-oss-20b consistently achieves the highest results, while the traditional Delta TF-IDF classifier responds strongly to data augmentation, reaching 98.2% accuracy on the Stormfront dataset. The study confirms that implicit hate speech is harder to detect than explicit hateful content and that enhancement effectiveness depends on the interaction of dataset, model, and technique. Our research informs the development of hate speech detection by highlighting how dataset properties, model architectures, and enhancement strategies interact, supporting more accurate and context-aware automated detection.
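Delta TF-IDF, the traditional baseline compared in the abstract, weights each term by how much its document frequency differs between the hateful and non-hateful classes, so class-discriminative words receive large-magnitude weights. The sketch below is illustrative only (it is not the paper's implementation); it follows the standard Martineau-and-Finin-style formulation with add-one smoothing:

```python
import math
from collections import Counter

def delta_tfidf(tokenized_docs, labels):
    """Delta TF-IDF sketch: term weight = tf * (idf_pos - idf_neg).
    Terms skewed toward the positive (label 1) class get negative
    weights, terms skewed toward the negative class get positive ones;
    the sign simply encodes which class the term signals."""
    pos = [d for d, y in zip(tokenized_docs, labels) if y == 1]
    neg = [d for d, y in zip(tokenized_docs, labels) if y == 0]
    # per-class document frequencies, smoothed to avoid log(0)
    df_pos = Counter(t for d in pos for t in set(d))
    df_neg = Counter(t for d in neg for t in set(d))
    vocab = sorted(set(df_pos) | set(df_neg))

    def delta_idf(t):
        idf_p = math.log2((len(pos) + 1) / (df_pos[t] + 1))
        idf_n = math.log2((len(neg) + 1) / (df_neg[t] + 1))
        return idf_p - idf_n

    def featurize(doc):
        tf = Counter(doc)
        return [tf[t] * delta_idf(t) for t in vocab]

    return vocab, [featurize(d) for d in tokenized_docs]
```

Because the weighting depends directly on per-class term frequencies, augmenting the minority class reshapes these features, which is consistent with the paper's finding that Delta TF-IDF responds strongly to data augmentation.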
Executive Summary
This article examines the effectiveness of large language models and data augmentation techniques in detecting hate speech. The study compares traditional classifiers with transformer-based models across diverse datasets, highlighting the impact of various enhancement strategies on model performance. The results show that the open-source gpt-oss-20b model consistently achieves the strongest performance, while Delta TF-IDF benefits most from data augmentation. The research underscores the importance of considering dataset properties, model architectures, and enhancement strategies in developing accurate and context-aware hate speech detection systems.
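Among the enhancement strategies studied, SMOTE rebalances skewed label distributions by synthesizing new minority-class points, each interpolated between a real minority example and one of its k nearest neighbours. The following is a minimal sketch of that interpolation step under the assumption that examples are already dense numeric vectors (real pipelines would typically apply imbalanced-learn's SMOTE to TF-IDF or embedding features):

```python
import random

def smote_sample(minority, k=2, n_new=3, seed=0):
    """Minimal SMOTE sketch: for each synthetic point, pick a minority
    example x, find its k nearest neighbours (squared Euclidean
    distance), pick one, and interpolate x + lam * (nb - x)."""
    rng = random.Random(seed)

    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    synthetic = []
    for _ in range(n_new):
        x = rng.choice(minority)
        neighbours = sorted((p for p in minority if p is not x),
                            key=lambda p: sq_dist(x, p))[:k]
        nb = rng.choice(neighbours)
        lam = rng.random()  # interpolation factor in [0, 1)
        synthetic.append(tuple(xi + lam * (ni - xi)
                               for xi, ni in zip(x, nb)))
    return synthetic
```

Each synthetic point lies on the segment between two real minority examples, so the oversampled class fills in its own region of feature space rather than merely duplicating instances.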
Key Points
- Evaluation of data augmentation and feature enhancement techniques for hate speech detection
- Comparison of traditional classifiers with transformer-based models across diverse datasets
- Impact of Synthetic Minority Over-sampling Technique (SMOTE), weighted loss, Part-of-Speech (POS) tagging, and text data augmentation on model performance
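The weighted loss named above scales each class's contribution to the training loss by the inverse of its proportion in the training set, so rare (often hateful) classes are not drowned out by the majority class. A small illustrative sketch of the weighting scheme, not the paper's exact implementation:

```python
import math
from collections import Counter

def inverse_class_weights(labels):
    """Weight each class inversely to its frequency, normalized so a
    perfectly balanced dataset gives every class weight 1.0."""
    counts = Counter(labels)
    n = len(labels)
    return {c: n / (len(counts) * counts[c]) for c in counts}

def weighted_cross_entropy(probs, label, weights):
    """Weighted negative log-likelihood for a single prediction, where
    `probs` maps each class to its predicted probability."""
    return -weights[label] * math.log(probs[label])
```

In practice the same weights would be passed to, e.g., a framework's cross-entropy loss (such as the `weight` argument of PyTorch's `CrossEntropyLoss`), so misclassifying a minority-class example costs proportionally more.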
Merits
Comprehensive Evaluation
The study provides a thorough evaluation of various models and techniques, offering valuable insights into their strengths and weaknesses.
State-of-the-Art Models
The research utilizes state-of-the-art models, including DistilBERT, RoBERTa, and gpt-oss-20b, to ensure the results are relevant and applicable to current hate speech detection systems.
Demerits
Limited Dataset Scope
The study's findings may be limited by the scope of the datasets used, which may not be representative of all types of hate speech or online platforms.
Lack of Human Evaluation
The research relies solely on automated metrics, which may not capture the nuances of human judgment and the complexities of hate speech detection.
Expert Commentary
This study provides a significant contribution to the field of hate speech detection, highlighting the importance of considering the interactions between dataset properties, model architectures, and enhancement strategies. The findings have important implications for the development of online harassment detection systems and raise critical questions about the balance between free speech and censorship. However, the research also underscores the need for ongoing evaluation and refinement of hate speech detection systems, given the evolving nature of online hate speech and the complexities of human judgment.
Recommendations
- Future research should prioritize the development of more diverse and representative datasets, incorporating a wider range of hate speech types and online platforms.
- The study's findings should be used to inform the development of more nuanced and context-aware hate speech detection systems, incorporating human evaluation and feedback to improve accuracy and effectiveness.