Reconciling Legal and Technical Approaches to Algorithmic Bias


Alice Xiang

In recent years, there has been a proliferation of papers in the algorithmic fairness literature proposing various technical definitions of algorithmic bias and methods to mitigate bias. Whether these algorithmic bias mitigation methods would be permissible from a legal perspective is a complex but increasingly pressing question at a time when there are growing concerns about the potential for algorithmic decision-making to exacerbate societal inequities. In particular, there is a tension around the use of protected class variables: most algorithmic bias mitigation techniques utilize these variables or proxies, but anti-discrimination doctrine has a strong preference for decisions that are blind to them. This Article analyzes the extent to which technical approaches to algorithmic bias are compatible with U.S. anti-discrimination law and recommends a path toward greater compatibility.

This question is vital to address because a lack of legal compatibility creates the possibility that biased algorithms might be considered legally permissible while approaches designed to correct for bias might be considered illegally discriminatory. For example, a recent proposed rule from the Department of Housing and Urban Development ("HUD"), which would have established the first instance of a U.S. regulatory definition for algorithmic discrimination, would have created a safe harbor from disparate impact liability for housing-related algorithms that do not use protected class variables or close proxies. An abundance of recent scholarship has shown, however, that simply removing protected class variables and close proxies does little to ensure that the algorithm will not be biased. In fact, this approach, known as "fairness through unawareness" in the machine learning community, is widely considered naive. While the language around algorithms was removed in the final rule, this focus on the visibility of protected attributes in decision-making is central in U.S. anti-discrimination law.

Causal inference provides a potential way to reconcile algorithmic fairness techniques with anti-discrimination law. In U.S. law, discrimination is generally thought of as making decisions "because of" a protected class variable. In fact, in Texas Department of Housing and Community Affairs v. Inclusive Communities Project, Inc., the case that motivated the HUD proposed rule, the Court required a "causal connection" between the decision-making process and the disproportionate outcomes. Instead of examining whether protected class variables appear in the algorithm, causal inference would allow for techniques that use protected class variables with the intent of negating causal relationships in the data tied to race. While moving from correlation to causation is challenging (particularly in machine learning, where leveraging correlations to make accurate predictions is typically the goal), doing so offers a way to reconcile technical feasibility and legal precedent while providing protections against algorithmic bias.
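The failure mode of "fairness through unawareness" described above can be illustrated with a small, hypothetical simulation (all data and variable names here are invented for illustration, not drawn from the Article): a model that never sees the protected attribute still reproduces a group disparity through a correlated proxy feature.

```python
import random

random.seed(0)

def simulate_applicant():
    # Hypothetical synthetic data: `a` is the protected attribute,
    # and `zip_score` is a proxy feature strongly correlated with it.
    a = random.randint(0, 1)
    zip_score = a * 0.8 + random.random() * 0.2
    return a, zip_score

def unaware_model(zip_score):
    # "Fairness through unawareness": the model is blind to `a`,
    # but it still decides based on the correlated proxy.
    return 1 if zip_score > 0.5 else 0

applicants = [simulate_applicant() for _ in range(10_000)]
approved = {0: 0, 1: 0}
count = {0: 0, 1: 0}
for a, z in applicants:
    count[a] += 1
    approved[a] += unaware_model(z)

# The disparity persists even though the model never saw `a`.
for g in (0, 1):
    print(f"group {g}: approval rate {approved[g] / count[g]:.2f}")
```

In this toy setup the "unaware" model approves one group at a far higher rate than the other, which is the point scholars make when they call simple variable removal naive.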

Executive Summary

The article 'Reconciling Legal and Technical Approaches to Algorithmic Bias' explores the tension between technical methods for mitigating algorithmic bias and U.S. anti-discrimination law. It highlights the incompatibility between technical approaches that often use protected class variables and legal doctrines that prefer decisions blind to these variables. The article suggests that causal inference could reconcile these approaches by focusing on negating causal relationships tied to protected classes rather than merely removing these variables. This reconciliation is crucial to ensure that bias mitigation techniques are legally permissible and effective.

Key Points

  • Technical approaches to algorithmic bias often use protected class variables, which conflicts with U.S. anti-discrimination law.
  • The 'fairness through unawareness' approach, which removes protected class variables, is legally preferred but technically ineffective.
  • Causal inference offers a potential solution by focusing on negating causal relationships tied to protected classes.
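As a rough, hypothetical sketch of the causal-inference idea in the last point (not the Article's actual method), a mitigation might deliberately use the protected attribute at training time to estimate its effect on a proxy and subtract that effect out, rather than hiding the attribute:

```python
import random

random.seed(1)

# Hypothetical synthetic data, same shape as the "unawareness" example:
# protected attribute `a` shifts a proxy feature `z`.
data = []
for _ in range(10_000):
    a = random.randint(0, 1)
    z = a * 0.8 + random.random() * 0.2
    data.append((a, z))

# Step 1: the mitigation *uses* `a` to estimate its group-level
# effect on the proxy.
group_mean = {}
for g in (0, 1):
    zs = [z for a, z in data if a == g]
    group_mean[g] = sum(zs) / len(zs)
overall_mean = sum(z for _, z in data) / len(data)

# Step 2: score on the proxy with the estimated group effect removed,
# a crude stand-in for "negating the causal path" from `a` to `z`.
def adjusted_score(a, z):
    return z - (group_mean[a] - overall_mean)

def approve(a, z):
    return adjusted_score(a, z) > overall_mean

rates = {}
for g in (0, 1):
    group = [(a, z) for a, z in data if a == g]
    rates[g] = sum(approve(a, z) for a, z in group) / len(group)

print(f"approval gap after adjustment: {abs(rates[0] - rates[1]):.3f}")
```

The adjustment closes the approval gap almost entirely, yet it only works because the protected attribute was available during training, which is exactly the tension with anti-discrimination doctrine's preference for blindness.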

Merits

Comprehensive Analysis

The article provides a thorough analysis of the tension between technical and legal approaches to algorithmic bias, highlighting the complexities and nuances involved.

Innovative Solution

The suggestion of using causal inference to reconcile technical and legal approaches is innovative and offers a potential path forward.

Demerits

Complexity of Causal Inference

The article acknowledges the challenges of moving from correlation to causation in machine learning, which could limit the practical applicability of the proposed solution.

Legal Precedent

The article does not fully explore the legal precedents and case law that could support or challenge the proposed reconciliation.

Expert Commentary

The article effectively highlights the critical need to reconcile technical and legal approaches to algorithmic bias. The tension between using protected class variables in technical solutions and the legal preference for decisions blind to these variables is a significant challenge. The suggestion of using causal inference to negate causal relationships tied to protected classes is a promising avenue. However, the complexity of implementing causal inference in machine learning and the need for further legal analysis are important considerations. The article's insights could guide both technical developers and legal scholars in creating more effective and legally sound bias mitigation strategies. It is crucial to continue exploring these intersections to ensure that algorithmic decision-making is both fair and legally compliant.

Recommendations

  • Further research should be conducted to explore the practical implementation of causal inference in bias mitigation techniques.
  • Legal scholars should examine the case law and precedents that could support or challenge the proposed reconciliation.
