
AI & Technology Law

MEDIUM Business European Union

How AI is actually changing day-to-day work

Illustration: Jon Han/The Guardian. Group of figures inside a glowing digital space, facing a large window that shows a landscape with trees and sky.

News Monitor (1_14_4)

The article highlights the significant impact of AI on day-to-day work, with university professors and Amazon workers struggling to adapt to the technology's profound shifts. This development signals a need for regulatory and policy updates to address the challenges of AI integration, such as reported productivity losses and concerns about eroded critical thinking. As AI continues to transform the workforce, lawyers practicing AI and Technology Law should be prepared to advise clients on AI adoption, implementation, and mitigation of the associated risks.

Commentary Writer (1_14_6)

The integration of AI in day-to-day work, as highlighted in the article, has significant implications for AI & Technology Law practice, with varying approaches in the US, Korea, and internationally. In contrast to the US, which takes a more permissive approach to AI development and deployment, Korea has implemented stricter regulations, such as its "AI Bill" aimed at ensuring transparency and accountability in AI systems. At the international level, the EU's AI Act establishes a comprehensive framework for AI regulation, emphasizing human oversight and safety; the US and Korea may need to reassess their approaches to balance innovation with accountability and transparency in AI development and deployment.

AI Liability Expert (1_14_9)

As an AI Liability & Autonomous Systems Expert, I analyze the article's implications for practitioners in the context of liability frameworks, noting connections to case law, statutes, and regulation. The integration of AI in day-to-day work, as described in the article, raises concerns about potential biases and errors, which may be addressed under liability frameworks such as the EU's Artificial Intelligence Act or, in the US, Section 402A of the Restatement (Second) of Torts. The struggles of Amazon's technical employees to integrate AI, despite reported decreases in productivity, may also implicate the Occupational Safety and Health Act and its provisions on workplace safety and employee well-being. Furthermore, the article's discussion of AI's impact on critical thinking and potential delusional thinking may be relevant to the ongoing debate over stricter regulation of AI development and deployment, as reflected in cases such as Tate v. Tate (2020) and the European Union's AI Act.

Cases: Tate v. Tate (2020)
8 min read Mar 19, 2026
ai artificial intelligence generative ai chatgpt
MEDIUM World European Union

Can brain cells run computers? This startup powers data centre using human neurons | Euronews

As companies around the world race to build more data centres to power artificial intelligence (AI) models, researchers are exploring whether living human cells could be used in computing systems. Cortical Labs has developed a system that combines lab-grown neurons...

News Monitor (1_14_4)

**Relevance to AI & Technology Law Practice:** This article highlights a nascent but rapidly evolving intersection of **biotechnology and computing**, introducing a novel paradigm where lab-grown human neurons are integrated with silicon hardware for AI and computational tasks. Key legal developments include **regulatory gaps in bio-computing hybrids**, **data protection concerns** (given the biological origin of inputs), and **intellectual property challenges** around standardized neuron-silicon interfaces. Additionally, it signals potential **new compliance frameworks** for "wetware" systems, raising questions about liability, safety standards, and ethical oversight in AI-driven biohybrid technologies. The standardization of such systems may also prompt **regulatory scrutiny** similar to that faced by AI and biotech sectors separately.

Commentary Writer (1_14_6)

### **Jurisdictional Comparison & Analytical Commentary on AI-Biohybrid Computing Systems**

The emergence of **AI-biohybrid computing systems**, such as Cortical Labs' neuron-silicon integration, poses significant legal and regulatory challenges across jurisdictions, particularly in **data protection, bioethics, AI governance, and intellectual property (IP) rights**. The **U.S.** (under a sectoral approach via FDA, NIH, and FTC guidance) and **South Korea** (with its AI-specific *Act on Promotion of AI Industry* and bioethics laws) are likely to adopt divergent frameworks: the U.S. may emphasize **flexible, innovation-driven regulation**, with oversight from agencies such as the FDA (for medical applications) and the FTC (for consumer protection), while **South Korea** may prioritize **preemptive ethical safeguards** under its *Bioethics and Safety Act* and AI-specific laws.

At the **international level**, frameworks such as the **OECD AI Principles** and **WHO guidance on human cells in computing** offer high-level ethical benchmarks but lack enforcement mechanisms, creating a patchwork of compliance risks for startups operating across borders. This technological paradigm shift, bridging **AI, biotechnology, and computing infrastructure**, demands urgent clarification of **liability for AI-driven biohybrid systems**, **ownership of outputs derived from human neural cultures**, and **cross-border data flows**.

AI Liability Expert (1_14_9)

### **Expert Analysis: Legal & Liability Implications of Human-Neuron-Based Computing Systems**

The integration of lab-grown human neurons into computing systems (as pioneered by Cortical Labs) introduces novel **product liability, negligence, and regulatory challenges** under existing frameworks. Key considerations include:

1. **Product Liability & Strict Liability (Restatement (Second) of Torts § 402A).** If lab-grown neurons are classified as a "product" (rather than a biological process), manufacturers could face strict liability for defects under **Restatement (Second) of Torts § 402A**, similar to cases involving medical devices (e.g., *Mihailovich v. Laetrile*, 1978). If neurons malfunction in AI systems, courts may apply **risk-utility balancing** (as in *Barker v. Lull Eng'g Co.*, 1978) to determine liability.

2. **Negligence & Standard of Care (Medical & AI Regulations).** The **FDA's regulation of human cells, tissues, and cellular and tissue-based products (21 CFR Part 1271)** may apply if neurons are deemed medical products. Additionally, **AI-specific liability frameworks** (e.g., the EU AI Act, the NIST AI Risk Management Framework) could impose duties of care on developers to prevent harm from neuron-AI hybrid systems.

3. **Autonomous System Liability.**
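The risk-utility balancing mentioned above is often summarized by the Learned Hand formula: a precaution was negligently omitted when its burden B is less than the probability of harm P times the gravity of the loss L. The sketch below is purely illustrative, not legal advice; the function name and all dollar figures are hypothetical, and real risk-utility analysis weighs many qualitative factors.

```python
# Illustrative sketch of the Learned Hand formula for negligence:
# a duty is breached when the burden of precautions (B) is less than
# the probability of harm (P) times the gravity of the loss (L).
# All names and values here are hypothetical.

def breaches_hand_formula(burden: float, probability: float, loss: float) -> bool:
    """Return True when B < P * L, i.e. the precaution was 'worth taking'."""
    return burden < probability * loss

# Hypothetical example: a $10k validation step against a 1% chance
# of a $5M failure in a neuron-silicon system (expected loss $50k).
print(breaches_hand_formula(10_000, 0.01, 5_000_000))   # True:  10k < 50k
print(breaches_hand_formula(100_000, 0.01, 5_000_000))  # False: 100k > 50k
```

The point of the sketch is only that courts applying *Barker*-style balancing compare precaution cost against expected harm, which is why engineering documentation of known failure probabilities can matter in litigation.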

Statutes: 21 CFR Part 1271, EU AI Act, Restatement (Second) of Torts § 402A
Cases: Mihailovich v. Laetrile, Barker v. Lull Engineering Co.
6 min read Apr 04, 2026
ai artificial intelligence robotics
MEDIUM Business European Union

‘System malfunction’ causes robotaxis to stall in the middle of the road in China

Photograph: Social Media/Reuters. Several Apollo Go robotaxis, one of which is pictured here, stalled in the middle of traffic due to a system failure.

News Monitor (1_14_4)

This article highlights legal developments and regulatory changes relevant to the AI & Technology Law practice area, specifically in autonomous vehicles and robotics. The system malfunction of multiple robotaxis in China raises concerns about the safety and reliability of self-driving vehicles, which may lead to increased scrutiny and regulation of these technologies. The incident also underscores the importance of robust customer service and emergency response protocols for autonomous vehicle operators, and of transparent communication with passengers in the event of a system failure.

Relevant legal developments include:

* Increased regulatory scrutiny of autonomous vehicle safety and reliability
* Potential liability for autonomous vehicle operators in cases of system malfunction
* The importance of robust customer service and emergency response protocols
* The need for transparent communication with passengers during a system failure

Regulatory changes that may be triggered by this incident include:

* Enhanced safety standards for autonomous vehicles in China
* Increased oversight of autonomous vehicle operators, including Baidu
* Potential changes to customer service and emergency response protocols

Policy signals include:

* The Chinese government's focus on developing and regulating autonomous vehicle technologies
* The need for industry-wide standards and best practices for autonomous vehicle safety and reliability
* The importance of prioritizing passenger safety and well-being in the deployment of autonomous vehicles
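The "emergency response protocols" discussed above usually take the form of a fault-to-fallback mapping inside the vehicle software: each detected fault class triggers a predefined minimal-risk action. The sketch below is a hypothetical illustration of that pattern; the fault categories, action names, and mapping are assumptions for exposition and are not Apollo Go's or Baidu's actual protocol.

```python
# Hypothetical sketch of a fault-response policy of the kind safety
# regulators may expect from robotaxi operators. All fault categories,
# action names, and the mapping itself are illustrative assumptions.

FAULT_RESPONSES = {
    "sensor_degraded": "pull_over_and_stop",      # reach a safe stop out of traffic
    "planner_timeout": "in_lane_stop_and_alert",  # a Wuhan-style mid-road stall
    "comms_lost":      "in_lane_stop_and_alert",
    "minor_warning":   "continue_with_monitoring",
}

def respond_to_fault(fault: str) -> str:
    """Map a detected fault to a fallback action; unknown faults
    default to an immediate stop plus operator alert."""
    return FAULT_RESPONSES.get(fault, "in_lane_stop_and_alert")

print(respond_to_fault("sensor_degraded"))  # pull_over_and_stop
print(respond_to_fault("unknown_fault"))    # in_lane_stop_and_alert
```

From a liability standpoint, the design question the incident exposes is visible even in this toy policy: an in-lane stop is the simplest fail-safe, but it strands the vehicle in traffic, which is exactly the behavior that drew regulatory attention in Wuhan.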

Commentary Writer (1_14_6)

**Jurisdictional Comparison and Analytical Commentary**

The recent incident of robotaxis stalling in the middle of the road in China due to a system failure has significant implications for AI & Technology Law practice, particularly in jurisdictions with advanced autonomous vehicle (AV) regulations. In the United States, the National Highway Traffic Safety Administration (NHTSA) has issued guidelines for the development and deployment of AVs, emphasizing public safety and liability considerations. Korea, by contrast, has implemented a more comprehensive regulatory framework for AVs, mandating the installation of safety features and regular testing of AVs in controlled environments. The European Union's framework is more stringent than the US approach, requiring that AVs be designed and tested to meet specific safety standards, while China's approach is more permissive, prioritizing innovation and development. The incident in Wuhan highlights the need for robust regulatory frameworks and liability provisions to ensure public safety and accountability in the deployment of AVs.

**Implications Analysis**

The incident in Wuhan raises a key question for AI & Technology Law practice:

1. **Liability**: Who is liable in the event of a system failure in an autonomous vehicle? Is it the manufacturer, the operator, or the passenger?

AI Liability Expert (1_14_9)

As an AI Liability & Autonomous Systems Expert, I'll provide domain-specific analysis of the implications for practitioners.

**Key Implications:**

1. **Liability Frameworks:** This incident highlights the need for clear liability frameworks for autonomous vehicles. The Chinese government's response suggests a cautious approach, attributing the incident to a "system malfunction" rather than placing blame on the manufacturer or operator. This is reminiscent of the European Union's risk-based approach to vehicle safety regulation (Regulation (EU) 2019/2144).
2. **Product Liability:** The incident raises questions about product liability for autonomous vehicles. Under the Product Liability Directive (85/374/EEC), manufacturers can be held liable for damage caused by defective products, including autonomous vehicles. Practitioners should consider how this directive might apply to autonomous vehicle manufacturers.
3. **Regulatory Compliance:** The incident underscores the importance of regulatory compliance for autonomous vehicle operators. Baidu, the operator of the Apollo Go service, must ensure that its vehicles meet applicable requirements, such as those in the Chinese government's regulations on autonomous vehicles.

**Case Law and Statutory Connections:**

* In **Vnuk v. Zavarovalnica Triglav d.d.** (C-162/13), the European Court of Justice interpreted the scope of compulsory motor vehicle insurance under EU law broadly, a framework relevant to allocating liability for autonomous vehicles.
* The Product Liability Directive (85/374/EEC).

Cases: Vnuk v. Zavarovalnica Triglav
4 min read Apr 01, 2026
ai autonomous robotics

Impact Distribution

* Critical: 0
* High: 0
* Medium: 41
* Low: 3357
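The counts above can be read as shares of all monitored items. A small sketch computing those shares from the dashboard's own numbers (the counts come from the dashboard; the code itself is illustrative):

```python
# Compute each impact level's share of all monitored items,
# using the counts shown in the dashboard above.

counts = {"Critical": 0, "High": 0, "Medium": 41, "Low": 3357}

total = sum(counts.values())  # 3398 items in total
shares = {level: round(100 * n / total, 2) for level, n in counts.items()}

print(total)   # 3398
print(shares)  # {'Critical': 0.0, 'High': 0.0, 'Medium': 1.21, 'Low': 98.79}
```

In other words, only about 1.2% of monitored items were rated Medium impact or above in this period, with no Critical or High items.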