
Detecting racial bias in algorithms and machine learning


Nicol Turner Lee

Purpose

The online economy has not resolved the issue of racial bias in its applications. While algorithms are procedures that facilitate automated decision-making, or a sequence of unambiguous instructions, bias is a byproduct of these computations, bringing harm to historically disadvantaged populations. This paper argues that algorithmic biases explicitly and implicitly harm racial groups and lead to forms of discrimination. Relying upon sociological and technical research, the paper offers commentary on the need for more workplace diversity within high-tech industries and for public policies that can detect or reduce the likelihood of racial bias in algorithmic design and execution.

Design/methodology/approach

The paper shares examples in the US where algorithmic biases have been reported, along with strategies for explaining and addressing them.

Findings

The findings suggest that explicit racial bias in algorithms can be mitigated by existing laws, including those governing housing, employment, and the extension of credit. Implicit, or unconscious, biases are harder to redress without more diverse workplaces and public policies that take a systematic approach to bias detection and mitigation.

Research limitations/implications

The major implication of this research is that further study is needed. Increasing scholarly research in this area will be a major contribution to understanding how emerging technologies create disparate and unfair treatment for certain populations.

Practical implications

The work points to areas within industry and government that can tackle questions of algorithmic bias, fairness, and accountability, especially as they affect African-Americans.

Social implications

Emerging technologies are not devoid of the societal influences that constantly define positions of power, values, and norms.

Originality/value

The paper adds to a scarce body of existing research, especially in the area that intersects race and algorithmic development.

Executive Summary

This article effectively highlights the pervasive issue of racial bias in algorithms and machine learning, emphasizing the need for more workplace diversity and public policies to detect and mitigate biases. The paper draws upon sociological and technical research to illustrate the harm caused by algorithmic biases, particularly to historically disadvantaged populations. The author presents examples of algorithmic biases in the US and suggests that existing laws can mitigate explicit racial bias, but implicit biases require more diverse workplaces and bias detection policies. The article concludes by spotlighting the need for further research and practical applications to tackle algorithmic bias, fairness, and accountability, particularly for African-Americans. The paper adds to a scarce body of research at the intersection of race and algorithmic development.

Key Points

  • Algorithmic biases harm racial groups and lead to discrimination
  • Workplace diversity and public policies can detect or reduce racial bias
  • Existing laws can mitigate explicit racial bias, but implicit biases require more diverse workplaces and bias detection policies
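The "bias detection" the key points refer to is often operationalized as a statistical audit of decision outcomes. As a minimal sketch (not from the paper itself), the following computes the disparate impact ratio used in US employment law's "four-fifths rule" — a check that existing anti-discrimination law already supports; the decision log and group labels here are invented for illustration:

```python
# Hedged sketch: the "four-fifths rule" disparate-impact check.
# A ratio below 0.8 between the protected group's favorable-outcome rate
# and the comparison group's rate is conventionally treated as a red flag.

def disparate_impact_ratio(outcomes, groups, protected, favorable=1):
    """Favorable-outcome rate of the protected group divided by that of everyone else."""
    prot = [o for o, g in zip(outcomes, groups) if g == protected]
    rest = [o for o, g in zip(outcomes, groups) if g != protected]
    rate = lambda xs: sum(1 for o in xs if o == favorable) / len(xs)
    return rate(prot) / rate(rest)

# Toy decision log: 1 = approved, 0 = denied (hypothetical data)
outcomes = [1, 0, 0, 1, 1, 1, 0, 1, 1, 1]
groups   = ["A", "A", "A", "A", "B", "B", "B", "B", "B", "B"]

ratio = disparate_impact_ratio(outcomes, groups, protected="A")
flagged = ratio < 0.8  # four-fifths threshold
```

Here group A is approved at a rate of 0.5 against 5/6 for group B, giving a ratio of 0.6 — below the 0.8 threshold, so this hypothetical system would be flagged for further review. A real audit would of course run on actual decision logs, with attention to sample size.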

Merits

Strength

The article effectively highlights the pervasive issue of racial bias in algorithms and machine learning, providing a comprehensive overview of the problem and potential solutions.

Objectivity

The author maintains objectivity by presenting multiple perspectives and examples, avoiding biased language and conclusions.

Originality

The paper adds to a scarce body of research at the intersection of race and algorithmic development, offering a unique perspective on the issue.

Demerits

Limitation

The article primarily focuses on the US context, limiting its generalizability to other countries and regions.

Scope

The paper's scope is narrow, focusing on racial bias in algorithms and machine learning, without exploring other forms of bias or algorithmic decision-making.

Methodology

The article lacks a rigorous methodology for detecting and mitigating algorithmic bias, relying on anecdotal evidence and examples.

Expert Commentary

This article makes a significant contribution to the ongoing discussion on racial bias in algorithms and machine learning. The author effectively highlights the need for more research and practical applications to tackle this issue, particularly for African-Americans. However, the article's limitations, such as its narrow scope and focus on the US context, should be addressed in future research. Nonetheless, the article's emphasis on diversity and inclusion in tech industries and the importance of accountability in algorithmic decision-making are crucial for mitigating racial bias and promoting fairness and equity.

Recommendations

  • Future research should explore the intersectionality of bias in AI decision-making, including factors like socioeconomic status, education level, and geographic location.
  • Tech industries and governments should prioritize diversity and inclusion initiatives, including bias detection policies and algorithms, to ensure fair and accountable algorithmic decision-making.
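One concrete form a bias-mitigation policy can take is pre-processing the training data so that group membership is statistically independent of the outcome label. The sketch below implements reweighing, a standard technique from the fairness literature (Kamiran and Calders) rather than a method described in this paper; the group and label data are invented for illustration:

```python
# Hedged sketch: reweighing pre-processing. Each training example gets
# weight w(g, y) = P(g) * P(y) / P(g, y), so that under the weighted
# distribution the label is independent of group membership.
from collections import Counter

def reweighing_weights(groups, labels):
    n = len(groups)
    count_g = Counter(groups)            # marginal counts per group
    count_y = Counter(labels)            # marginal counts per label
    count_gy = Counter(zip(groups, labels))  # joint counts
    return [
        (count_g[g] / n) * (count_y[y] / n) / (count_gy[(g, y)] / n)
        for g, y in zip(groups, labels)
    ]

# Hypothetical training data: group A is positive 1/3 of the time, group B 2/3
groups = ["A", "A", "A", "B", "B", "B"]
labels = [1, 0, 0, 1, 1, 0]
weights = reweighing_weights(groups, labels)
```

After reweighing, the weighted positive-outcome rate is identical across groups (0.5 for both A and B in this toy example), so a downstream model trained with these sample weights no longer sees group membership correlated with the label. This addresses only one statistical facet of bias; the organizational measures the recommendation describes remain necessary.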
