
Auto-Unrolled Proximal Gradient Descent: An AutoML Approach to Interpretable Waveform Optimization


Ahmet Kaplan

arXiv:2603.17478v1 — Abstract: This study explores the combination of automated machine learning (AutoML) with model-based deep unfolding (DU) for optimizing wireless beamforming and waveforms. We convert the iterative proximal gradient descent (PGD) algorithm into a deep neural network, wherein the parameters of each layer are learned instead of being predetermined. Additionally, we enhance the architecture by incorporating a hybrid layer that performs a learnable linear gradient transformation prior to the proximal projection. By utilizing AutoGluon with a tree-structured Parzen estimator (TPE) for hyperparameter optimization (HPO) across an expanded search space, which includes network depth, step-size initialization, optimizer, learning rate scheduler, layer type, and post-gradient activation, the proposed auto-unrolled PGD (Auto-PGD) achieves 98.8% of the spectral efficiency of a traditional 200-iteration PGD solver using only five unrolled layers, while requiring only 100 training samples. We also address a gradient normalization issue to ensure consistent performance during training and evaluation, and we illustrate per-layer sum-rate logging as a tool for transparency. These contributions highlight a notable reduction in the amount of training data and inference cost required, while maintaining high interpretability compared to conventional black-box architectures.

Executive Summary

This paper introduces Auto-PGD, an AutoML approach to optimizing wireless beamforming and waveforms using a deep neural network representation of the proximal gradient descent (PGD) algorithm. By utilizing AutoGluon and a tree-structured Parzen estimator (TPE) for hyperparameter optimization, Auto-PGD achieves significant reductions in training data and inference cost while maintaining high interpretability. The proposed architecture incorporates a hybrid layer that applies a learnable linear gradient transformation before the proximal projection, and it resolves a gradient normalization issue for consistent performance between training and evaluation. The authors demonstrate the effectiveness of Auto-PGD by achieving 98.8% of the spectral efficiency of a traditional 200-iteration PGD solver with only five unrolled layers. This approach could substantially reduce the data and compute requirements of learned optimizers in wireless communication and signal processing.
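The deep-unfolding idea at the heart of the paper can be sketched in a few lines: each "layer" performs one gradient step with its own step size (the parameter that would be learned), followed by a proximal projection. The sketch below is a minimal illustrative version, not the authors' implementation; the toy objective (maximize ||Hw||^2 under a unit power constraint), the choice of projection as the proximal operator, and all function names are assumptions.

```python
import numpy as np

def prox_unit_ball(x):
    """Proximal step for a unit power constraint: project onto the L2 ball."""
    n = np.linalg.norm(x)
    return x if n <= 1.0 else x / n

def unrolled_pgd(H, w0, step_sizes):
    """One unrolled 'layer' per entry of step_sizes.

    In the paper the per-layer step sizes are trainable parameters; here
    they are fixed inputs. Toy objective: maximize ||H w||^2, ||w|| <= 1.
    """
    w = w0
    for mu in step_sizes:
        grad = 2.0 * H.conj().T @ (H @ w)   # gradient of ||H w||^2
        w = prox_unit_ball(w + mu * grad)   # ascent step, then projection
    return w

rng = np.random.default_rng(0)
H = rng.standard_normal((4, 3))
w = unrolled_pgd(H, rng.standard_normal(3), step_sizes=[0.1] * 5)
```

With only five such layers, the iterate already moves toward the dominant right singular vector of H, which is the intuition behind matching a 200-iteration solver with a shallow unrolled network.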

Key Points

  • Auto-PGD combines AutoML with model-based deep unfolding for optimizing wireless beamforming and waveforms.
  • The proposed architecture utilizes a deep neural network representation of the PGD algorithm with learnable parameters.
  • AutoGluon and TPE are employed for hyperparameter optimization across an expanded search space.
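The expanded search space the abstract enumerates can be pictured as a configuration grid like the one below. The dimension names mirror the abstract, but the concrete value ranges and the helper are assumptions for illustration; a real TPE sampler (as in AutoGluon) would model promising regions of this space rather than drawing uniformly.

```python
import random

# Illustrative search space mirroring the dimensions listed in the abstract;
# the value ranges are assumed, not the paper's exact grid.
SEARCH_SPACE = {
    "depth": [3, 5, 8, 12],                    # number of unrolled layers
    "step_size_init": [0.01, 0.05, 0.1, 0.5],  # per-layer step-size init
    "optimizer": ["adam", "sgd", "rmsprop"],
    "lr_scheduler": ["constant", "cosine", "step"],
    "layer_type": ["plain_pgd", "hybrid"],     # hybrid = linear map + prox
    "post_grad_activation": ["none", "tanh", "relu"],
}

def sample_config(rng):
    """Uniform random draw; a TPE sampler would bias toward past good trials."""
    return {k: rng.choice(v) for k, v in SEARCH_SPACE.items()}

cfg = sample_config(random.Random(42))
```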

Merits

Strength in Interpretability

The proposed approach maintains high interpretability compared to conventional black-box architectures: logging the sum rate after every unrolled layer exposes the optimization trajectory and makes each layer's contribution transparent.
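The per-layer sum-rate logging described above can be mimicked by recording a rate metric after every unrolled layer. The single-user rate proxy used here and the toy update rule are assumptions for illustration, not the paper's system model.

```python
import numpy as np

def sum_rate(H, w, noise=1.0):
    """Single-user rate proxy: log2(1 + ||H w||^2 / noise)."""
    return np.log2(1.0 + np.linalg.norm(H @ w) ** 2 / noise)

def pgd_with_logging(H, w, step_sizes):
    """Record the sum rate after every unrolled layer for transparency."""
    log = []
    for mu in step_sizes:
        w = w + mu * 2.0 * H.T @ (H @ w)     # gradient ascent step
        w = w / max(1.0, np.linalg.norm(w))  # projection onto the unit ball
        log.append(sum_rate(H, w))
    return w, log

rng = np.random.default_rng(1)
H = rng.standard_normal((4, 3))
w, layer_log = pgd_with_logging(H, rng.standard_normal(3), [0.1] * 5)
```

Inspecting `layer_log` shows how much each layer improves the objective, which is exactly the kind of diagnostic a black-box network cannot provide.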

Efficiency in Training Data

Auto-PGD needs only 100 training samples, a significant reduction in training data compared to the requirements of conventional learned approaches.

Inference Cost Reduction

By replacing a 200-iteration PGD solver with only five unrolled layers, the proposed approach sharply reduces the inference cost of wireless beamforming and waveform optimization.

Demerits

Limited Scalability

The current implementation of Auto-PGD may face challenges in scaling to larger problem sizes or more complex network architectures.

Dependence on AutoGluon

The performance of Auto-PGD is heavily reliant on the capabilities of AutoGluon and TPE for hyperparameter optimization.

Expert Commentary

The proposed Auto-PGD approach represents a significant milestone in the application of AutoML to wireless communication and signal processing. By leveraging deep unfolding and AutoGluon, the authors demonstrate a notable reduction in training data and inference cost while maintaining high interpretability. However, the current implementation may face challenges in scaling to larger problem sizes or more complex network architectures. To further advance this research, it would be beneficial to explore the application of Auto-PGD in more complex wireless communication systems and to develop more robust and scalable methods for hyperparameter optimization.

Recommendations

  • Future research should focus on developing more scalable and robust methods for hyperparameter optimization in Auto-PGD.
  • The authors should explore the application of Auto-PGD in more complex wireless communication systems and evaluate its performance in real-world scenarios.
