Gauging Biases in Various Deep Learning AI Models

Document Type

Conference Proceeding

Publication Date

1-1-2023

Abstract

With the broader usage of Artificial Intelligence (AI) in all areas of our lives, the accountability of such systems is one of the most important research topics. Trustworthiness of AI results requires very detailed and careful validation of the applied algorithms, as errors and biases can reside deep inside AI components, where they may affect inclusiveness, equity, and justice and irreversibly influence human lives. It is critical to detect them and to reduce their negative effect on AI users. In this paper, we introduce a new approach to bias detection. Using Deep Learning (DL) models as examples of the broader scope of AI systems, we make the models self-detect their underlying defects and biases. Our system looks ‘under the hood’ of AI-model components layer by layer, treating the neurons as similarity estimators, which we claim are the main indicator of hidden defects and bias. We report on the results of applying our self-detection approach to a Transformer-based DL model, the Detection Transformer (DETR) object detection framework introduced by the Facebook AI Research (FAIR) team in 2020. Our approach automatically measures the weights and biases of the transformer encoding layers to identify and eventually mitigate the sources of bias. This paper focuses on the measurement and visualization of the weights and biases of the DETR model’s layers. The outcome of this research will be our implementation of a modern Bias Testing and Mitigation platform, open to the public, for validating AI applications and mitigating their biases before deployment.
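
The abstract does not include code, but the measurement step it describes can be approximated against the public DETR checkpoint. The sketch below is a minimal illustration, assuming PyTorch and the torch.hub entry point of the facebookresearch/detr repository; it walks the transformer encoder layer by layer and summarizes each weight and bias tensor. The statistics printed (mean, std, min, max) are illustrative stand-ins, not the paper's actual bias metric.

```python
# Minimal sketch (not the authors' published tooling): load the public DETR
# checkpoint released by FAIR and inspect the transformer encoder layer by
# layer, summarizing every weight and bias tensor so that outliers could be
# flagged or visualized. Mean/std/min/max here are illustrative assumptions.
import torch

# Pretrained DETR with a ResNet-50 backbone, fetched from the FAIR repository.
model = torch.hub.load("facebookresearch/detr", "detr_resnet50", pretrained=True)
model.eval()

for i, layer in enumerate(model.transformer.encoder.layers):
    print(f"encoder layer {i}")
    for name, param in layer.named_parameters():
        t = param.detach()
        kind = "bias" if name.endswith("bias") else "weight"
        print(f"  {name:<32s} {kind:>6s}  "
              f"mean={t.mean().item():+.4f}  std={t.std().item():.4f}  "
              f"min={t.min().item():+.4f}  max={t.max().item():+.4f}")
```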

Publication Title

Lecture Notes in Networks and Systems

First Page Number

171

Last Page Number

186

DOI

10.1007/978-3-031-16075-2_11
