Document Type
Report
Publication Date
Spring 4-2024
Abstract
This paper presents a comprehensive exploration of the inherent biases in Large Language Models (LLMs) and other Transformer models, focusing on their role in identifying and addressing instances of cyberbullying. The objective is to improve the accuracy and fairness of these models by mitigating the biases ingrained in their structures. This is crucial because language models can inadvertently perpetuate and amplify biases present in the data they were trained on.
Recommended Citation
Ruiz, Dahana Moz; Watson, Annaliese; Manikandan, Anjana; and Gordon, Zachary, "Reducing Bias in Cyberbullying Detection with Advanced LLMs and Transformer Models" (2024). Center for Cybersecurity. 36.
Available at:
https://digitalcommons.kean.edu/cybersecurity/36