Zero-Day Vulnerabilities in Enterprise AI Systems: Legal and Technical Implications
The discovery of critical zero-day vulnerabilities in widely deployed AI systems raises urgent questions about cybersecurity liability and disclosure obligations.