
AI Planning Framework for LLM-Based Web Agents


Orit Shahnovsky, Rotem Dror

arXiv:2603.12710v1 — Abstract: Developing autonomous agents for web-based tasks is a core challenge in AI. While Large Language Model (LLM) agents can interpret complex user requests, they often operate as black boxes, making it difficult to diagnose why they fail or how they plan. This paper addresses this gap by formally treating web tasks as sequential decision-making processes. We introduce a taxonomy that maps modern agent architectures to traditional planning paradigms: Step-by-Step agents to Breadth-First Search (BFS), Tree Search agents to Best-First Tree Search, and Full-Plan-in-Advance agents to Depth-First Search (DFS). This framework allows for a principled diagnosis of system failures like context drift and incoherent task decomposition. To evaluate these behaviors, we propose five novel evaluation metrics that assess trajectory quality beyond simple success rates. We support this analysis with a new dataset of 794 human-labeled trajectories from the WebArena benchmark. Finally, we validate our evaluation framework by comparing a baseline Step-by-Step agent against a novel Full-Plan-in-Advance implementation. Our results reveal that while the Step-by-Step agent aligns more closely with human gold trajectories (38% overall success), the Full-Plan-in-Advance agent excels in technical measures such as element accuracy (89%), demonstrating the necessity of our proposed metrics for selecting appropriate agent architectures based on specific application constraints.

Executive Summary

This article introduces an AI planning framework for Large Language Model (LLM)-based web agents that connects modern agent architectures to classical planning paradigms. By formally treating web tasks as sequential decision-making processes, the authors enable a principled diagnosis of system failures such as context drift and incoherent task decomposition, and propose five novel evaluation metrics that assess trajectory quality beyond simple success rates. The framework is validated by comparing a baseline Step-by-Step agent against a Full-Plan-in-Advance implementation on the WebArena benchmark, showing that the proposed metrics are needed to select an agent architecture suited to specific application constraints. The study provides a foundation for more effective and explainable autonomous web agents.

Key Points

  • The article introduces a taxonomy that maps modern agent architectures to traditional planning paradigms.
  • The authors propose five novel evaluation metrics to assess trajectory quality beyond simple success rates.
  • The framework is validated through a comparison of a baseline Step-by-Step agent and a Full-Plan-in-Advance implementation on the WebArena benchmark.
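The taxonomy above maps each agent architecture to a classical search strategy. The paper does not publish reference code for these planners, so the following is a minimal, hypothetical sketch of the three search regimes over a toy site graph: BFS for Step-by-Step agents, best-first search for Tree Search agents, and DFS for Full-Plan-in-Advance agents. The `SITE` graph and the heuristic are illustrative stand-ins, not the authors' implementation.

```python
import heapq
from collections import deque

# Toy web-navigation graph: pages and the pages reachable from each.
SITE = {
    "home": ["search", "cart"],
    "search": ["results"],
    "results": ["product"],
    "cart": [],
    "product": [],
}
neighbors = lambda page: SITE[page]

def bfs_plan(start, goals, neighbors):
    """Step-by-Step style: expand states level by level (BFS)."""
    frontier, seen = deque([[start]]), {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] in goals:
            return path
        for nxt in neighbors(path[-1]):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(path + [nxt])
    return None

def best_first_plan(start, goals, neighbors, score):
    """Tree Search style: always expand the most promising node first."""
    heap, seen = [(score(start), [start])], {start}
    while heap:
        _, path = heapq.heappop(heap)
        if path[-1] in goals:
            return path
        for nxt in neighbors(path[-1]):
            if nxt not in seen:
                seen.add(nxt)
                heapq.heappush(heap, (score(nxt), path + [nxt]))
    return None

def dfs_plan(start, goals, neighbors):
    """Full-Plan-in-Advance style: commit to one branch depth-first (DFS)."""
    stack, seen = [[start]], {start}
    while stack:
        path = stack.pop()
        if path[-1] in goals:
            return path
        for nxt in reversed(neighbors(path[-1])):
            if nxt not in seen:
                seen.add(nxt)
                stack.append(path + [nxt])
    return None

print(bfs_plan("home", {"product"}, neighbors))
# e.g. ['home', 'search', 'results', 'product']
```

The practical difference the paper emphasizes lies in how each regime spends its budget: BFS revisits alternatives at every step, best-first follows a value estimate, and DFS commits to a full plan up front and only backtracks on failure.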

Merits

Strength in Addressing the 'Black Box' Problem

The article effectively addresses the challenge of LLM interpretability by introducing a principled diagnosis of system failures.

Novel Evaluation Metrics

The proposed evaluation metrics provide a more comprehensive assessment of trajectory quality, beyond simple success rates.
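The paper does not spell out the metric definitions in the abstract, but one of them, element accuracy, can be illustrated. As a hypothetical sketch, assume it measures the fraction of aligned steps where the agent interacted with the same page element as the human gold trajectory; the step format and alignment rule here are assumptions, not the authors' definition.

```python
def element_accuracy(pred_steps, gold_steps):
    """Fraction of position-aligned steps where the predicted trajectory
    targets the same page element as the gold trajectory (illustrative)."""
    n = min(len(pred_steps), len(gold_steps))
    if n == 0:
        return 0.0
    hits = sum(p["element"] == g["element"]
               for p, g in zip(pred_steps[:n], gold_steps[:n]))
    return hits / n

# Hypothetical trajectories: each step records the element acted on.
pred = [{"element": "#search-box"}, {"element": "#buy"}]
gold = [{"element": "#search-box"}, {"element": "#checkout"}]
print(element_accuracy(pred, gold))  # 0.5
```

Metrics of this kind explain how an agent can score high on element accuracy (89% for the Full-Plan-in-Advance agent) while still trailing on overall task success.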

Improved Agent Architecture Selection

The framework enables the selection of appropriate agent architectures based on specific application constraints, leading to more effective AI systems.

Demerits

Limited Application Domain

The framework is currently designed for web-based tasks, and its applicability to other domains remains to be explored.

Dataset Limitation

The study relies on a single benchmark, WebArena, which may not be representative of the full range of real-world web tasks.

Expert Commentary

The article makes a significant contribution to the field of autonomous web agents by addressing the long-standing challenge of LLM interpretability. The proposed framework supports a principled diagnosis of system failures and a more comprehensive evaluation of trajectory quality. However, the study's reliance on a single benchmark and its restriction to web-based tasks may constrain its generalizability. Nevertheless, its potential applications in real-world web systems and its implications for regulatory frameworks make it a valuable addition to the research agenda.

Recommendations

  • Future research should aim to explore the applicability of the framework to other domains and develop more comprehensive evaluation metrics.
  • The development of more transparent and accountable AI systems, as demonstrated by this study, should be a priority for regulatory frameworks and policy-making.
