Black-box Optimization with Unknown Constraints via Overparameterized Deep Neural Networks
Proceedings of the Forty-first Conference on Uncertainty in Artificial Intelligence, PMLR 286:4266-4289, 2025.
Abstract
Optimizing expensive black-box functions under unknown constraints is a fundamental challenge across a range of real-world domains, such as hyperparameter tuning in machine learning, safe control in robotics, and material or drug discovery. In these settings, each function evaluation may be costly or time-consuming, and the system may need to operate within unknown or difficult-to-specify safety boundaries. We apply the Expected Improvement (EI) acquisition function to select the next samples within a feasible region determined by Lower Confidence Bound (LCB) conditions on all constraints. The LCB approach guarantees constraint feasibility, while EI efficiently balances exploration and exploitation, especially when the feasible regions are much smaller than the overall search space. To model both the objective function and the constraints, we use Deep Neural Networks (DNNs) instead of Gaussian Processes (GPs), improving scalability and allowing the method to handle complex structured data. We provide a theoretical analysis of our method’s convergence using recent Neural Tangent Kernel (NTK) theory. Under regularity conditions, both the cumulative regret and the cumulative constraint violation are bounded in terms of the maximum information gain, with upper bounds matching those of GP-based methods. To validate our algorithm, we conduct experiments on synthetic and real-world benchmarks, demonstrating its advantage over recent methods for black-box optimization with unknown constraints.
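
As a concrete illustration of the selection rule described above, the Python sketch below (not the authors' implementation) maximizes EI over candidates whose constraint LCBs place them in the optimistic feasible region, for a minimization problem with constraints c_i(x) <= 0. The surrogate interface predict(X) -> (mean, std), the confidence multiplier beta, and the fallback when no candidate is certified are illustrative assumptions; in the paper the surrogate models are DNNs rather than GPs.

import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, best_f):
    # EI for minimization: E[max(best_f - f(x), 0)] under a N(mu, sigma^2) posterior.
    sigma = np.maximum(sigma, 1e-9)  # guard against zero predictive std
    z = (best_f - mu) / sigma
    return (best_f - mu) * norm.cdf(z) + sigma * norm.pdf(z)

def select_next(obj_model, con_models, candidates, best_f, beta=2.0):
    # Pick the candidate maximizing EI among points whose constraint LCBs
    # mu_i(x) - beta * sigma_i(x) <= 0 place them in the optimistic feasible
    # region (which contains the true feasible set with high probability).
    mu, sigma = obj_model.predict(candidates)       # objective surrogate
    feasible = np.ones(len(candidates), dtype=bool)
    for m in con_models:                            # one surrogate per constraint c_i(x) <= 0
        c_mu, c_sigma = m.predict(candidates)
        feasible &= (c_mu - beta * c_sigma) <= 0.0  # LCB feasibility condition
    if not feasible.any():                          # illustrative fallback only;
        feasible[:] = True                          # the optimistic set is rarely empty
    ei = expected_improvement(mu, sigma, best_f)
    ei[~feasible] = -np.inf                         # exclude non-certified candidates
    return candidates[int(np.argmax(ei))]

Here a larger beta widens the confidence band, so fewer candidates are certified feasible and exploration becomes more conservative; masking EI with -inf outside the certified set keeps the argmax restricted to it without a separate constrained solver.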