An Adaptive Conceptualisation of Artificial Intelligence and the Law, Regulation and Ethics
The description of a combination of technologies as ‘artificial intelligence’ (AI) is misleading. To ascribe intelligence to a statistical model without human attribution points towards an attempt to shift legal, social, and ethical responsibilities to machines. This paper exposes the deeply flawed characterisation of AI and the unearned assumptions that are central to its current definition, characterisation, and efforts at controlling it. The contradictions in the framing of AI lie at the root of the incapacity to regulate it. A revival of applied definitional framing of AI across disciplines has produced a plethora of conceptions and much inconclusiveness. Therefore, the research advances this position with two fundamental and interrelated arguments. First, the difficulty in regulating AI is tied to its characterisation as artificial intelligence. This has triggered existing and new conflicting notions of the meaning of ‘artificial’ and ‘intelligence’, which are broad and largely unsettled. Second, difficulties in developing a global consensus on responsible AI stem from this inconclusiveness. To advance these arguments, this paper utilises functional contextualism to analyse the fundamental nature and architecture of artificial intelligence and human intelligence. There is a need to establish a test for ‘artificial intelligence’ in order to ensure the appropriate allocation of rights, duties, and responsibilities. Therefore, this research proposes, develops, and recommends an adaptive three-element, three-step threshold for achieving responsible artificial intelligence.
Executive Summary
The article critically examines the current conceptualisation of Artificial Intelligence (AI) and its implications for law, regulation, and ethics. The authors argue that the term 'artificial intelligence' is misleading and that the lack of a clear, consistent definition has hindered regulatory efforts and the development of a global consensus on responsible AI. They propose an adaptive three-element, three-step threshold for achieving responsible AI, using functional contextualism to analyse the nature and architecture of AI and human intelligence.
Key Points
- The term 'artificial intelligence' is misleading and shifts legal, social, and ethical responsibilities to machines.
- The lack of a clear definition of AI has hindered regulatory efforts and global consensus on responsible AI.
- The authors propose an adaptive three-element, three-step threshold for achieving responsible AI.
Merits
Critical Examination of AI Terminology
The article provides a rigorous critique of the current use of the term 'artificial intelligence', highlighting its misleading nature and the ethical implications of ascribing intelligence to machines.
Interdisciplinary Approach
The authors utilise functional contextualism to analyse AI and human intelligence, providing a novel perspective that bridges the gap between different disciplines.
Proactive Solution
The proposed adaptive threshold offers a practical solution to the challenges of regulating AI and achieving responsible AI.
Demerits
Complexity of the Proposed Framework
The adaptive three-element, three-step threshold may be too complex for practical implementation, particularly in jurisdictions with less developed regulatory frameworks.
Lack of Empirical Evidence
The article would benefit from empirical evidence or case studies to support the proposed framework and demonstrate its effectiveness.
Expert Commentary
The article makes a significant contribution to the ongoing debate about the definition and regulation of AI. The authors' critique of the term 'artificial intelligence' is well-founded and highlights the ethical implications of ascribing intelligence to machines. The proposed adaptive threshold offers a novel and proactive solution to the challenges of regulating AI. However, the complexity of the framework may pose challenges for practical implementation, and the article would benefit from empirical evidence to support its proposals. Overall, the article provides a valuable perspective on the need for a clear, consistent definition of AI and the importance of achieving a global consensus on responsible AI.
Recommendations
- The authors should consider providing empirical evidence or case studies to support the proposed adaptive threshold and demonstrate its effectiveness in practice.
- Future research could explore the potential challenges and barriers to implementing the proposed framework in different jurisdictions and regulatory contexts.