Nonlinear optimization problems present significant computational challenges, largely due to their inherent nonconvexity.
In this talk, we explore how machine learning can be integrated into key algorithmic components of nonlinear optimization solvers to enhance their efficiency. We examine how learning can guide variable selection and branching decisions within branch-and-bound frameworks, and how it can generate effective cutting planes or conic constraints that strengthen relaxations of these problems.
We present experiments on benchmark instances from the literature showing that the learning-based branching selection and the learning-based constraint generation outperform standard approaches.
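To make the branching idea concrete, the sketch below shows one way a learned model could replace a hand-crafted branching rule inside branch-and-bound. This is not the talk's actual method: the feature set, the linear scoring function, and its weights are illustrative assumptions standing in for a trained predictor.

```python
def branching_features(x_value, obj_coeff):
    # Per-variable features: fractionality of the relaxation value
    # and magnitude of the objective coefficient (illustrative choices).
    frac = abs(x_value - round(x_value))
    return (frac, abs(obj_coeff))

def learned_score(feats, weights=(1.0, 0.1)):
    # Stand-in for a trained model: a fixed linear score over the features.
    # In practice these weights would come from offline training data.
    return sum(w * f for w, f in zip(weights, feats))

def select_branching_variable(relaxation_solution, obj_coeffs, eps=1e-6):
    """Return the index of the fractional variable with the highest
    learned score, or None if the relaxation solution is integral."""
    best_idx, best_score = None, -float("inf")
    for i, (x, c) in enumerate(zip(relaxation_solution, obj_coeffs)):
        if abs(x - round(x)) < eps:
            continue  # (near-)integral variables are not branching candidates
        score = learned_score(branching_features(x, c))
        if score > best_score:
            best_idx, best_score = i, score
    return best_idx
```

For example, given a relaxation solution `[0.5, 2.0, 1.3]` with objective coefficients `[3.0, 1.0, 2.0]`, the rule branches on variable 0, whose combination of high fractionality and large coefficient yields the top score under these stand-in weights.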



