Document Type

Report

Publication Date

Spring 4-2024

Abstract

This paper presents a comprehensive exploration of the inherent biases in Large Language Models (LLMs) and other Transformer models, with a focus on their role in identifying and addressing instances of cyberbullying. The objective is to improve the accuracy and fairness of these models by mitigating the biases ingrained in their structures. This is crucial because language models can inadvertently perpetuate and amplify biases present in the data they are trained on.