Large-scale optimization is a central topic in big data science. First-order black-box optimization methods have been widely applied to machine learning problems, since their oracle complexity can be independent of the parameter dimension. In this dissertation, we formulate linear matrix inequality (LMI) conditions to analyze the convergence rates of various deterministic and stochastic optimization methods. We derive these LMIs using integral quadratic constraints (IQCs) and dissipation inequalities. (Abstract shortened by ProQuest.)
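To make the abstract's idea concrete, the following is a minimal numerical sketch of an LMI-based rate certificate in the spirit of the IQC framework of Lessard, Recht, and Packard. It treats gradient descent on an m-strongly convex function with L-Lipschitz gradient as a feedback interconnection and checks, for one illustrative choice of parameters, that a hand-picked Lyapunov certificate P and IQC multiplier lam make the resulting 2x2 LMI negative semidefinite, certifying the linear rate rho. All concrete numbers (m, L, P, lam) are illustrative assumptions, not values taken from the dissertation; in general P and lam are found by solving a small semidefinite program.

```python
import numpy as np

# Illustrative problem class: f is m-strongly convex with L-Lipschitz gradient.
m, L = 1.0, 10.0
alpha = 2.0 / (m + L)          # classical gradient-descent step size
rho = (L - m) / (L + m)        # linear convergence rate to certify

# Gradient descent as a feedback system: x_{k+1} = A x_k + B u_k,
# with y_k = C x_k + D u_k and u_k = grad f(y_k).
A, B = 1.0, -alpha
C, D = 1.0, 0.0

# Pointwise sector IQC satisfied by the gradient of an
# m-strongly convex, L-smooth function: [y; u]^T M [y; u] >= 0.
M = np.array([[-2.0 * m * L, m + L],
              [m + L, -2.0]])

# Hand-picked (assumed) Lyapunov certificate P > 0 and multiplier lam >= 0.
P, lam = 121.0, 2.0

G = np.array([[C, D],
              [0.0, 1.0]])     # maps (x, u) to (y, u)

# Dissipation-inequality LMI: if this matrix is <= 0 (negative semidefinite),
# then V(x) = P*x^2 decays at rate rho^2, i.e. |x_k| <= rho^k * const.
lmi = np.array([[A * P * A - rho**2 * P, A * P * B],
                [B * P * A, B * P * B]]) + lam * (G.T @ M @ G)

eigs = np.linalg.eigvalsh(lmi)
print(eigs)  # all eigenvalues <= 0 certifies the rate rho
```

For this particular step size the certificate is tight: the LMI holds with equality, matching the known rate (L - m)/(L + m) for gradient descent on this function class.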