Final doctoral examination and defense of dissertation of Guanyi Wang
Date: March 30, 2021, 1:00pm to 2:30pm EST
BlueJeans link: https://bluejeans.com/764851981
Title: Approximation Algorithms for Mixed-Integer Nonlinear Optimization Problems
Advisor: Dr. Santanu Dey, School of Industrial and Systems Engineering, Georgia Institute of Technology
Committee members:
Dr. Rahul Mazumder, Operations Research and Statistics Group, MIT Sloan School of Management
Dr. Mohit Singh, School of Industrial and Systems Engineering, Georgia Institute of Technology
Dr. Santosh Vempala, School of Computer Science, Georgia Institute of Technology
Dr. Yao Xie, School of Industrial and Systems Engineering, Georgia Institute of Technology
Reader: Dr. Marco Molinaro, Computer Science Department, Pontifical Catholic University of Rio de Janeiro (PUC-Rio), Brazil.
The thesis is available here:
A summary of the thesis is below.
Mixed-integer nonlinear optimization (MINLO) problems are typically NP-hard. Although feasible solutions can often be obtained relatively easily via heuristic or local search methods, it remains challenging to certify the quality of a given feasible solution (i.e., its gap to the optimal value) in a tractable fashion, even under mild assumptions. This thesis proposes efficient mixed-integer linear programming-based algorithms that find feasible solutions and certify the quality of those solutions for three widely applied MINLO problems.
In Chapter 1, we study sparse principal component analysis (SPCA), a dimensionality reduction tool. Compared with principal component analysis (PCA), SPCA enhances interpretability by incorporating a sparsity constraint. However, unlike PCA, conventional methods for SPCA cannot efficiently guarantee the quality of the solutions they obtain. We present a convex integer programming (IP) framework for SPCA that derives dual bounds with a theoretical worst-case guarantee. We empirically show that our convex IP framework outperforms existing SPCA methods in finding dual bounds.
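To make the SPCA problem concrete: it asks for the unit vector maximizing the explained variance x^T A x subject to at most k nonzero entries, which reduces to finding the k-by-k principal submatrix of the covariance matrix A with the largest leading eigenvalue. The brute-force sketch below (illustrative only, and exponential in n; not the thesis's convex IP approach) enumerates supports on a toy matrix:

```python
# Brute-force SPCA on a toy covariance matrix: enumerate every support of
# size k and score it by the largest eigenvalue of the corresponding
# principal submatrix. Exponential in n; shown only to define the problem.
from itertools import combinations
import numpy as np

def spca_brute_force(A, k):
    n = A.shape[0]
    best_val, best_support = -np.inf, None
    for S in combinations(range(n), k):
        # Largest eigenvalue of the k-by-k principal submatrix A[S, S].
        val = np.linalg.eigvalsh(A[np.ix_(S, S)])[-1]
        if val > best_val:
            best_val, best_support = val, S
    return best_val, best_support

A = np.array([[4.0, 0.5, 0.1],
              [0.5, 3.0, 0.2],
              [0.1, 0.2, 1.0]])
val, support = spca_brute_force(A, k=2)  # best 2-sparse component support
```

The convex IP framework in the thesis sidesteps this enumeration by producing a dual (upper) bound on the same objective, so that any feasible solution's gap can be certified.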
Chapter 2 focuses on a non-trivial generalization of SPCA: (row) sparse principal component analysis (rsPCA), which seeks the top-r leading principal components under a global row-sparsity constraint. We propose (a) a convex integer programming relaxation of rsPCA that yields dual bounds for rsPCA, and (b) a new local search algorithm for finding primal feasible solutions. We show that the convex IP's dual bounds are within an affine function of the optimal value, and we demonstrate our techniques on large-scale covariance matrices.
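A simple way to picture a local search for rsPCA: fix a row support S of size k, score it by the sum of the top-r eigenvalues of the principal submatrix A[S, S], and greedily swap one index at a time while the objective improves. This is a simplified stand-in for the local search proposed in the thesis, not its actual algorithm:

```python
# Greedy swap-based local search for rsPCA (illustrative sketch).
# Objective: sum of the top-r eigenvalues of A restricted to rows/cols in S.
import numpy as np

def rspca_local_search(A, k, r, seed=0):
    n = A.shape[0]
    rng = np.random.default_rng(seed)
    S = set(rng.choice(n, size=k, replace=False).tolist())

    def score(T):
        idx = sorted(T)
        # eigvalsh returns eigenvalues in ascending order; take the top r.
        return np.linalg.eigvalsh(A[np.ix_(idx, idx)])[-r:].sum()

    improved = True
    while improved:
        improved = False
        for i in list(S):
            for j in range(n):
                if j in S:
                    continue
                cand = (S - {i}) | {j}  # swap i out, j in
                if score(cand) > score(S) + 1e-12:
                    S, improved = cand, True
                    break
            if improved:
                break
    return sorted(S), score(S)

A = np.diag([5.0, 4.0, 1.0, 0.5])  # toy covariance matrix
support, obj = rspca_local_search(A, k=2, r=2)
</```

Such a local search only returns a feasible solution; it is the convex IP's dual bound, within an affine function of the optimum, that certifies how good that solution is.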
Chapter 3 considers the training problem of finding the best-fitting ReLU with respect to square loss, called "ReLU regression". We begin by proving that ReLU regression is NP-hard. We then present an approximation algorithm and analyze it under two regimes: (1) for an arbitrary set of training samples, the algorithm guarantees a multiplicative approximation ratio; and (2) for training samples drawn from an underlying teacher model, the same algorithm achieves a much better asymptotic approximation ratio that is independent of the number of samples. Numerical studies show that (a) our approximation algorithm outperforms classical gradient descent (GD) algorithms, and (b) its solution can serve as a good initialization for GD-type algorithms.
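To fix ideas, the training problem is: given samples (x_i, y_i), minimize the square loss of the prediction max(0, w·x_i) over the weight vector w. The sketch below implements the plain GD baseline mentioned above (not the thesis's approximation algorithm) on data generated by a teacher ReLU; the teacher weights and step size are illustrative choices:

```python
# ReLU regression by plain gradient descent on square loss (baseline sketch).
import numpy as np

def relu_regression_gd(X, y, lr=0.1, steps=500, seed=0):
    rng = np.random.default_rng(seed)
    w = rng.normal(size=X.shape[1])  # random initialization
    for _ in range(steps):
        pred = np.maximum(0.0, X @ w)
        # Gradient of the square loss; the indicator masks inactive samples.
        grad = X.T @ ((pred - y) * (X @ w > 0)) / len(y)
        w -= lr * grad
    return w

# Teacher model: labels generated by a ground-truth ReLU (realizable case).
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = np.maximum(0.0, X @ w_true)
w_hat = relu_regression_gd(X, y)
```

GD can stall when the ReLU "dies" (no sample is active), which is one reason a good initialization matters; the thesis's numerical studies find that the approximation algorithm's solution serves well as such an initialization.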