
Continuous Optimization

EC: 6
Location: Utrecht University
Weeks: 37 - 51
Lecture: Monday, 15:15 - 17:00
Provider: Operations Research (LNMB), 4TU
Links: Course page (requires login)

Summary

Prerequisites
The student should have a solid knowledge of linear algebra and multivariable calculus. The student should also know linear programming (including linear programming duality) and convex analysis, at the level of being able to follow the text and do the exercises from:

Chapters 1 and 2 (including the exercises) from the book 'Linear Programming: A Concise Introduction' by Thomas S. Ferguson, https://www.math.ucla.edu/~tom/LP.pdf

Exercises 2.1, 2.2, 2.12, 3.1, 3.3, 3.5, and 3.7 from the book 'Convex Optimization' by Stephen Boyd and Lieven Vandenberghe, http://stanford.edu/~boyd/cvxbook
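
For reference, the linear programming duality assumed by these readings is the standard primal-dual pair (a generic textbook formulation, not taken from the course materials):

\begin{align*}
\text{(P)}\quad \max_x\ & c^\top x && \text{subject to } Ax \le b,\ x \ge 0, \\
\text{(D)}\quad \min_y\ & b^\top y && \text{subject to } A^\top y \ge c,\ y \ge 0.
\end{align*}

Weak duality states that $c^\top x \le b^\top y$ for every feasible pair $(x, y)$; strong duality states that the optimal values coincide whenever both problems are feasible.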

Aim of the course
Continuous optimization is the branch of optimization where we optimize a (differentiable) function over continuous (as opposed to discrete) variables, which may be subject to equality and inequality constraints. Optimization problems of this form are common in science, engineering, and machine learning, and they also arise as relaxations of discrete optimization problems. One can use linear algebra, multivariable calculus, and convex geometry to study the properties of continuous optimization problems and to design and analyze efficient algorithms.
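
In symbols, such a problem can be written in the standard form (a generic textbook formulation, not a definition specific to this course):

\begin{align*}
\min_{x \in \mathbb{R}^n} \quad & f(x) \\
\text{subject to} \quad & g_i(x) \le 0, \qquad i = 1, \dots, m, \\
& h_j(x) = 0, \qquad j = 1, \dots, p,
\end{align*}

where $f$, the $g_i$, and the $h_j$ are (differentiable) functions on $\mathbb{R}^n$.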

In this course, we study the theory, algorithms, and some applications of continuous optimization. In the theory part we discuss convexity, Lagrangian duality, optimality conditions, and conic programming. In the algorithmic part we discuss derivative-free methods, first-order optimization methods, neural networks/supervised learning, second-order optimization methods, interior-point methods, and support vector machines. For some of these algorithms, we also analyze their convergence.
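
As a minimal illustration of the kind of first-order method the course analyzes, the sketch below implements plain fixed-step gradient descent in Python. The function name, step size, tolerance, and example objective are illustrative assumptions, not taken from the course materials.

import numpy as np

def gradient_descent(grad, x0, step=0.1, tol=1e-8, max_iter=1000):
    """Minimize a differentiable function via fixed-step gradient descent.

    grad: callable returning the gradient at a point
    x0:   starting point (numpy array)
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:   # approximate stationarity reached
            break
        x = x - step * g              # move against the gradient
    return x

# Example: minimize f(x) = ||x - a||^2, whose gradient is 2(x - a).
a = np.array([1.0, -2.0])
x_star = gradient_descent(lambda x: 2 * (x - a), x0=np.zeros(2))
print(x_star)  # converges to a = [1, -2]

The fixed step size keeps the sketch short; the convergence analyses in the course typically relate the step size to properties of the objective, such as the Lipschitz constant of its gradient.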

Lecturers
David de Laat (TU Delft)