
Benchmarking Zero-Shot Reasoning Approaches for Error Detection in Solidity Smart Contracts


arXiv:2603.13239v1. Abstract: Smart contracts play a central role in blockchain systems by encoding financial and operational logic. Still, their susceptibility to subtle security flaws poses significant risks of financial loss and erosion of trust. LLMs create new opportunities for automating vulnerability detection, yet the effectiveness of different prompting strategies and model choices in real-world contexts remains uncertain. This paper evaluates state-of-the-art LLMs on Solidity smart contract analysis using a balanced dataset of 400 contracts under two tasks: (i) Error Detection, where the model performs binary classification to decide whether a contract is vulnerable, and (ii) Error Classification, where the model must assign the predicted issue to a specific vulnerability category. Models are evaluated using zero-shot prompting strategies, including zero-shot, zero-shot Chain-of-Thought (CoT), and zero-shot Tree-of-Thought (ToT). In the Error Detection task, CoT and ToT substantially increase recall (often approaching $\approx 95$--$99\%$), but typically reduce precision, indicating a more sensitive decision regime with more false positives. In the Error Classification task, Claude 3 Opus attains the best Weighted F1-score (90.8) under the ToT prompt, followed closely by its CoT variant.

Executive Summary

This paper presents a rigorous evaluation of zero-shot prompting strategies—zero-shot, zero-shot Chain-of-Thought (CoT), and zero-shot Tree-of-Thought (ToT)—applied to LLMs for detecting and classifying errors in Solidity smart contracts. Using a balanced dataset of 400 contracts across two tasks (Error Detection and Error Classification), the study demonstrates that CoT and ToT substantially raise recall in Error Detection (approaching 95–99%), albeit at the cost of reduced precision due to increased false positives. In Error Classification, meanwhile, Claude 3 Opus achieves the highest Weighted F1-score (90.8) under the ToT prompt, showing that the best model/prompt pairing varies by task. The findings highlight a trade-off between sensitivity and accuracy that depends on the task and the metric being optimized.
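The recall/precision trade-off described above follows directly from the standard metric definitions. A minimal sketch, using hypothetical confusion-matrix counts (not figures from the paper) to illustrate a sensitive CoT/ToT-style decision regime:

```python
def precision_recall_f1(tp: int, fp: int, fn: int) -> tuple[float, float, float]:
    """Compute precision, recall, and F1 from confusion-matrix counts."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return precision, recall, f1

# Hypothetical counts: nearly every vulnerable contract is caught
# (recall 0.99), but 60 false alarms depress precision.
p, r, f1 = precision_recall_f1(tp=198, fp=60, fn=2)
print(f"precision={p:.3f} recall={r:.3f} f1={f1:.3f}")
```

The Weighted F1 reported for Error Classification is the per-class F1 averaged with weights proportional to each class's support, so a model can score well there even while its binary-detection precision lags.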

Key Points

  • CoT and ToT improve recall in Error Detection but reduce precision
  • Claude 3 Opus excels in Error Classification under ToT
  • Zero-shot strategies produce different precision/recall profiles across the detection and classification tasks
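The three strategies differ only in prompt scaffolding. The paper does not publish its exact prompts, so the templates below are illustrative sketches of the general pattern, with a placeholder contract string:

```python
# Placeholder for the Solidity source under audit.
CONTRACT = "contract Wallet { /* ... Solidity source ... */ }"

# Plain zero-shot: ask for the verdict directly.
ZERO_SHOT = (
    "Is the following Solidity contract vulnerable? Answer Yes or No.\n\n"
    f"{CONTRACT}"
)

# Zero-shot CoT: elicit step-by-step reasoning before the verdict.
ZERO_SHOT_COT = (
    "Is the following Solidity contract vulnerable?\n"
    "Let's think step by step before answering Yes or No.\n\n"
    f"{CONTRACT}"
)

# Zero-shot ToT: branch into multiple reasoning paths, then converge.
ZERO_SHOT_TOT = (
    "Three independent auditors each propose a line of analysis for the "
    "contract below, critique one another's reasoning, and then converge "
    "on a final Yes/No verdict.\n\n"
    f"{CONTRACT}"
)
```

Both CoT and ToT push the model toward explicit deliberation, which plausibly explains the more sensitive (higher-recall, lower-precision) decision regime the paper observes.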

Merits

Methodological Rigor

The study uses a balanced dataset and evaluates multiple prompting strategies across distinct tasks, enhancing generalizability of findings.

Practical Relevance

Results provide actionable insights for developers and security analysts seeking to leverage LLMs for smart contract auditing.

Demerits

Precision Trade-off

Increased recall via CoT and ToT comes at the expense of higher false positives, which may complicate real-world deployment without additional filtering.

Limited Scope

Evaluation is confined to Solidity and specific LLMs; applicability to other languages or models remains unaddressed.

Expert Commentary

The paper contributes meaningfully to the intersection of AI and blockchain security by empirically validating zero-shot prompting strategies on real-world contract analysis. Notably, the differential impact of CoT and ToT on recall versus precision underscores a critical design consideration: ToT offers superior classification performance, while CoT's sensitivity may be more valuable in early-stage detection pipelines where coverage matters more than exactness. The choice between these strategies should align with the audit phase (screening versus validation) rather than follow a one-size-fits-all adoption.

Furthermore, the absence of comparative evaluations against non-LLM baselines limits the ability to assess true added value; future work should incorporate traditional static analysis tools as control groups to quantify the incremental benefit of language models. Overall, this work provides empirical evidence on how prompting architecture influences audit outcomes, offering a nuanced roadmap for integrating AI into software security workflows.

Recommendations

  1. Adopt ToT for Error Classification tasks in production environments due to its superior Weighted F1-score.
  2. Implement CoT for preliminary error detection due to its high recall, complemented by post-processing filters to mitigate false positives.
  3. Conduct comparative studies incorporating traditional static analysis tools to establish baseline performance benchmarks for AI-augmented auditing.
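Recommendation 2's two-stage pattern (a high-recall screen followed by a precision-oriented filter) can be sketched as follows. The detector and filter callables here are toy stand-ins, not the paper's method; in practice they would wrap a CoT-prompted LLM and a static analyzer, respectively:

```python
from typing import Callable

def two_stage_audit(
    contract: str,
    detect: Callable[[str], bool],   # high-recall screen (e.g. CoT-prompted LLM)
    confirm: Callable[[str], bool],  # precision filter (e.g. static analyzer)
) -> bool:
    """Flag a contract only if the sensitive screen fires AND the filter confirms."""
    return detect(contract) and confirm(contract)

# Toy stand-ins: a screen that flags any external call, and a filter that
# additionally requires a state write occurring after the call
# (a crude reentrancy-shaped heuristic).
flagged = two_stage_audit(
    'function withdraw() { msg.sender.call{value: bal}(""); bal = 0; }',
    detect=lambda src: "call" in src,
    confirm=lambda src: src.find("call") < src.find("= 0"),
)
print(flagged)  # True for this reentrancy-shaped snippet
```

Conjoining the two stages trades a little of the screen's recall for substantially fewer false positives, which is exactly the deployment concern raised under Demerits.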
