
ICLR 2026 Response to LLM-Generated Papers and Reviews


November 19, 2025 · ICLR 2026 Program Chairs

In the past few days, many concerns have been raised about potential LLM-generated papers and low-quality LLM-generated reviews. We take these concerns seriously, and we want to update the community on the steps we are taking and will be taking over the next two weeks. These steps are based on the policies we outlined in our previous blog post: Policies on Large Language Model Usage at ICLR 2026. The core of this policy is twofold: (a) if an author or reviewer uses an LLM, they must disclose this, and they remain ultimately responsible for the LLM's outputs; (b) whether or not authors and reviewers use LLMs, they must not make false or misleading claims, fabricate or falsify data, or misrepresent results. We have planned, and are undertaking, punitive measures against authors and reviewers who violate these policies.

LLM-generated papers

Papers that make extensive use of LLMs and do not disclose this usage will be desk rejected. Extensive and/or careless LLM usage often results in false claims, misrepresentations, or hallucinated content, including hallucinated references. As stated in our previous blog post, hallucinations of this kind are considered a Code of Ethics violation on the part of the paper's authors. We have been desk-rejecting, and will continue to desk-reject, any paper that includes such issues. We have been relying on ACs and SACs to identify papers with these issues. To help triage, we will be leveraging recent LLM detection tools to identify papers that potentially contain a significant amount of LLM-generated content. These will then be given to ACs for further checking. Given the possibility of false positives from detection tools, we will only take action if an AC or SAC identifies concrete evidence of the issues described above.
Dual submission policy violation

In addition to these desk rejections, we are also aware of possible cases where authors submitted multiple slightly different variants of the same paper (LLM-paraphrased or otherwise) without acknowledging or citing the concurrent submissions. While this is not always done with the help of LLMs, LLMs can facilitate the process and may therefore exacerbate the issue. We are in the process of defining severe consequences for authors who try to spam the conference with multiple very similar variants of the same paper. The process and the policy will be detailed in a subsequent blog post.

LLM-generated or very-low-quality reviews

As mentioned above, reviewers are responsible for the content they post. Therefore, if they use LLMs, they are responsible for any issues in their posted reviews. Very poor-quality reviews that feature false claims, misrepresentations, or hallucinated references are also a Code of Ethics violation, as expressed in the previous blog post. As such, reviewers who posted such poor-quality reviews will also face consequences, including the desk rejection of their own submitted papers. This follows policies already laid out in the previous blog post as well as the reviewer guide. Once again, we will be using LLM detection tools to triage, and we will rely on ACs and SACs to identify such poor-quality or LLM-generated reviews. Authors who received such reviews (with many hallucinated references or false claims) should post a confidential message to their ACs and SACs pointing out the poor-quality reviews and providing the necessary evidence.

Conclusion

The actions described above will play out over the next 1-2 weeks, as ACs monitor discussions and identify problematic papers and reviewers. We plan to make another post updating the community on these desk-rejected papers and irresponsible reviewers in a transparent manner.
We are thankful to the community for identifying some of these issues, as well as for running large-scale meta-analyses. These efforts are supported by ICLR's policy of making reviews and discussions public: it allows the community to see issues that are only visible at scale. At the same time, we would like to reassure the community that these issues were anticipated, which is why we articulated careful policies in the previous blog post. Our job now is to enforce this policy rigorously, and that will be our focus going forward.

Executive Summary

The ICLR 2026 Program Chairs have responded to concerns about potential LLM-generated papers and low-quality LLM-generated reviews by outlining steps to mitigate these issues. The measures include desk-rejecting papers that make extensive, undisclosed use of LLMs, leveraging LLM detection tools to triage potential violations, and enforcing the dual submission policy. The response also emphasizes that reviewers are responsible for the content they post, including any issues arising from LLM usage or poor-quality reviews. The Program Chairs have stated that punitive measures will be taken against authors and reviewers who violate these policies.

Key Points

  • Desk-rejection of papers that extensively use LLMs without disclosure
  • Leveraging LLM detection tools to identify potential issues
  • Enforcement of dual submission policy to prevent spamming with multiple variants of the same paper

Merits

Transparency and Clarity

The Program Chairs have clearly outlined their policies and steps to address concerns regarding LLM-generated papers and reviews, providing transparency and clarity for the community.

Demerits

Potential False Positives

The reliance on LLM detection tools may result in false positives, which could lead to unnecessary desk rejections or delays in the review process.

Expert Commentary

The ICLR 2026 response is a significant step towards addressing concerns regarding LLM-generated papers and reviews. While the measures outlined are necessary to maintain the integrity of the conference, the potential for false positives highlights the need for further refinement of LLM detection tools. Additionally, the response underscores the importance of a broader discussion on LLM ethics, including issues related to authorship, accountability, and transparency.

Recommendations

  • Continued development and refinement of LLM detection tools to minimize false positives
  • Establishment of clear guidelines and protocols for LLM usage in academic submissions
