Predictive policing and algorithmic fairness
Abstract: This paper examines racial discrimination and algorithmic bias in predictive policing algorithms (PPAs), an emerging technology designed to predict threats and suggest solutions in law enforcement. We first describe this discrimination through a case study of Chicago’s PPA. We then explain its causes with Broadbent’s contrastive model of causation and causal diagrams. Drawing on the cognitive science literature, we also explain why fairness is not an objective truth discoverable in laboratories but has context-sensitive social meanings that must be negotiated through democratic processes. With the above analysis, we next predict why some recommendations in the bias-reduction literature are not as effective as expected. Unlike the cliché of equal participation for all stakeholders in predictive policing, we emphasize power structures to avoid hermeneutical lacunae. Finally, we aim to control PPA discrimination by proposing a governance solution: a framework of a social safety net.
Executive Summary
The article 'Predictive policing and algorithmic fairness' delves into the complexities of racial discrimination and algorithmic bias within predictive policing algorithms (PPAs), using Chicago’s PPA as a case study. The authors employ Broadbent’s contrastive model of causation and causal diagrams to elucidate the causes of bias, arguing that fairness is context-sensitive and socially constructed, requiring democratic negotiation. The paper critiques existing bias reduction recommendations and emphasizes the importance of power structures to avoid hermeneutical lacunae. It proposes a governance solution—a social safety net framework—to mitigate discrimination in PPAs.
Key Points
- ▸ Examination of racial discrimination and algorithmic bias in predictive policing algorithms.
- ▸ Use of Broadbent’s contrastive model of causation to explain bias causes.
- ▸ Argument that fairness is context-sensitive and socially constructed.
- ▸ Critique of existing bias reduction recommendations.
- ▸ Proposal of a governance solution—a social safety net framework—to control PPA discrimination.
Merits
Comprehensive Analysis
The article provides a thorough examination of the causes and implications of algorithmic bias in predictive policing, using a well-established theoretical framework.
Contextual Understanding of Fairness
The authors effectively argue that fairness is not an objective truth but a context-sensitive social construct, and that its competing interpretations must therefore be negotiated through democratic processes.
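This claim has a well-known technical counterpart: common statistical formalizations of "fairness" can disagree on the same predictions, so choosing among them is a normative decision rather than a laboratory discovery. The sketch below (not from the reviewed paper; all data are hypothetical) contrasts two standard criteria, demographic parity (equal flagging rates across groups) and error-rate balance (equal false-positive rates), on toy predictions that satisfy the first while violating the second.

```python
# Hypothetical illustration: one set of predictions can satisfy
# demographic parity while violating false-positive-rate balance,
# showing that "fair" depends on which criterion is chosen.

def selection_rate(preds):
    """Fraction of individuals flagged (predicted positive)."""
    return sum(preds) / len(preds)

def false_positive_rate(preds, labels):
    """Fraction of true negatives that were nonetheless flagged."""
    negatives = [p for p, y in zip(preds, labels) if y == 0]
    return sum(negatives) / len(negatives)

# Toy binary predictions (1 = flagged) and ground-truth labels per group.
group_a_preds, group_a_labels = [1, 1, 0, 0], [1, 0, 0, 0]
group_b_preds, group_b_labels = [1, 1, 0, 0], [1, 1, 0, 0]

# Demographic parity: both groups are flagged at the same rate (0.5),
# so the parity gap is zero.
parity_gap = abs(selection_rate(group_a_preds) - selection_rate(group_b_preds))

# Error-rate balance: group A's innocent members are flagged 1/3 of the
# time, group B's never, so the false-positive-rate gap is nonzero.
fpr_gap = abs(false_positive_rate(group_a_preds, group_a_labels)
              - false_positive_rate(group_b_preds, group_b_labels))

print(parity_gap, fpr_gap)  # parity holds (0.0) while FPR balance fails
```

Which gap matters more cannot be read off the data; it is exactly the kind of context-sensitive, value-laden question the authors argue must be settled through democratic negotiation.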
Practical Governance Solution
The proposal of a social safety net framework offers a practical and actionable solution to mitigate discrimination in predictive policing algorithms.
Demerits
Limited Empirical Evidence
The article relies heavily on theoretical models and a single case study (Chicago's PPA), which may limit the generalizability of its findings to other jurisdictions and policing contexts.
Complexity of Implementation
The proposed governance solution, while innovative, may face significant challenges in implementation due to the complexity of power structures and democratic processes.
Lack of Stakeholder Engagement
The article does not sufficiently address the role of stakeholder engagement in the democratic negotiation of fairness, which is crucial for the success of the proposed framework.
Expert Commentary
The article 'Predictive policing and algorithmic fairness' offers a rigorous and well-reasoned analysis of the complexities surrounding algorithmic bias in predictive policing. The authors' use of Broadbent’s contrastive model of causation provides a robust theoretical foundation for understanding the causes of bias, while their argument for the context-sensitive nature of fairness is both insightful and timely. The proposal of a social safety net framework as a governance solution is innovative and practical, addressing a critical gap in the current literature. However, the article’s reliance on theoretical models and case studies may limit its generalizability, and the complexity of implementing the proposed framework should not be underestimated. Nonetheless, the article makes a significant contribution to the ongoing debate on algorithmic fairness and provides valuable insights for both practitioners and policymakers.
Recommendations
- ✓ Further empirical research should be conducted to validate the findings and proposals presented in the article, ensuring their applicability in diverse contexts.
- ✓ Future studies should explore the role of stakeholder engagement in the democratic negotiation of fairness, providing a more comprehensive understanding of the governance of predictive policing algorithms.