
Large Neighborhood Search meets Iterative Neural Constraint Heuristics


Yudong W. Xu, Wenhao Li, Scott Sanner, Elias B. Khalil

arXiv:2603.20801v1 Announce Type: new Abstract: Neural networks are being increasingly used as heuristics for constraint satisfaction. These neural methods are often recurrent, learning to iteratively refine candidate assignments. In this work, we make explicit the connection between such iterative neural heuristics and Large Neighborhood Search (LNS), and adapt an existing neural constraint satisfaction method, ConsFormer, into an LNS procedure. We decompose the resulting neural LNS into two standard components: the destroy and repair operators. On the destroy side, we instantiate several classical heuristics and introduce novel prediction-guided operators that exploit the model's internal scores to select neighborhoods. On the repair side, we utilize ConsFormer as a neural repair operator and compare the original sampling-based decoder to a greedy decoder that selects the most likely assignments. Through an empirical study on Sudoku, Graph Coloring, and MaxCut, we find that adapting the neural heuristic to an LNS procedure yields substantial gains over its vanilla settings and improves its competitiveness with classical and neural baselines. We further observe consistent design patterns across tasks: stochastic destroy operators outperform greedy ones, while greedy repair is more effective than sampling-based repair for finding a single high-quality feasible assignment. These findings highlight LNS as a useful lens and design framework for structuring and improving iterative neural approaches.

Executive Summary

This article examines the intersection of Large Neighborhood Search (LNS) and iterative neural constraint heuristics. The authors adapt the ConsFormer neural constraint satisfaction method into an LNS procedure, decomposing it into destroy and repair operators. The study explores various destroy and repair strategies, yielding substantial gains over the method's vanilla settings and improved competitiveness with classical and neural baselines. Key findings include the superiority of stochastic destroy operators over greedy ones, and of greedy repair over sampling-based repair. These results position LNS as a valuable framework for structuring and enhancing iterative neural approaches. The empirical analysis on Sudoku, Graph Coloring, and MaxCut demonstrates the proposed method's efficacy and sheds light on design patterns that hold consistently across tasks.

Key Points

  • Adaptation of ConsFormer neural constraint satisfaction method into an LNS procedure
  • Decomposition of LNS into destroy and repair operators
  • Exploration of various destroy and repair strategies
  • Substantial gains over vanilla settings and improved competitiveness with classical and neural baselines
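To make the destroy/repair decomposition concrete, here is a minimal sketch of a generic LNS loop with a stochastic, prediction-guided destroy operator and a greedy repair operator. This is not the paper's ConsFormer pipeline: the problem (matching a hidden target vector) and the `confidence` function (a stand-in for the neural model's per-variable scores) are hypothetical placeholders chosen only to make the control flow runnable.

```python
import random

def lns(initial, cost, destroy, repair, iters=50, seed=0):
    """Generic LNS: repeatedly destroy a neighborhood, repair it, keep improvements."""
    rng = random.Random(seed)
    current = list(initial)
    best, best_cost = list(current), cost(current)
    for _ in range(iters):
        free = destroy(current, rng)       # destroy: choose variables to re-assign
        candidate = repair(current, free)  # repair: fill in the freed variables
        c = cost(candidate)
        if c <= cost(current):             # accept non-worsening moves
            current = candidate
            if c < best_cost:
                best, best_cost = list(candidate), c
    return best, best_cost

# --- toy instance (stand-in for Sudoku/Graph Coloring/MaxCut) ---
TARGET = [3, 1, 4, 1, 5, 9, 2, 6]
DOMAIN = range(10)

def cost(x):
    return sum(abs(a - b) for a, b in zip(x, TARGET))

def confidence(x, i):
    # hypothetical proxy for a neural model's per-variable score
    return 1.0 / (1.0 + abs(x[i] - TARGET[i]))

def stochastic_guided_destroy(x, rng, k=3):
    # prediction-guided + stochastic: sample k variables,
    # weighting low-confidence variables more heavily
    weights = [1.0 - confidence(x, i) + 1e-6 for i in range(len(x))]
    idx = list(range(len(x)))
    free = set()
    while len(free) < k:
        free.add(rng.choices(idx, weights=weights)[0])
    return free

def greedy_repair(x, free):
    # greedy decoding: take the most likely (here, min-cost) value per variable
    y = list(x)
    for i in free:
        y[i] = min(DOMAIN, key=lambda v: abs(v - TARGET[i]))
    return y

best, best_cost = lns([0] * len(TARGET), cost,
                      stochastic_guided_destroy, greedy_repair)
```

In the paper, the repair step is the neural model itself and the prediction-guided destroy operators read the model's internal scores; here both are mocked with simple arithmetic so the loop structure stands alone.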

Merits

Transferability of LNS framework

The study demonstrates the versatility of LNS as a framework for structuring iterative neural approaches, facilitating its application to diverse constraint satisfaction problems.

Demerits

Limited scope of empirical analysis

The study focuses on three specific problems (Sudoku, Graph Coloring, and MaxCut), limiting the generalizability of its findings to a broader range of constraint satisfaction problems.

Expert Commentary

This study marks a significant contribution to the field of constraint satisfaction, demonstrating the potential of LNS as a framework for structuring and enhancing iterative neural approaches. By adapting ConsFormer into an LNS procedure, the authors provide a novel perspective on the intersection of neural constraint satisfaction methods and LNS. The study's empirical analysis offers valuable insights into the efficacy of various destroy and repair strategies, shedding light on design patterns across tasks. However, the limited scope of the empirical analysis is a notable limitation, highlighting the need for further research to generalize its findings to a broader range of constraint satisfaction problems.

Recommendations

  • Future studies should explore the application of the proposed method to a wider range of constraint satisfaction problems, including more complex and real-world scenarios.
  • Researchers should investigate the extension of LNS to other neural constraint satisfaction methods, further solidifying its position as a valuable framework for improving iterative neural approaches.

Sources

Original: arXiv - cs.LG