Optimization@NIPS
We welcome you to participate in the 8th NIPS Workshop on Optimization for Machine Learning, to be held in Montreal, Quebec, Canada on Dec 11th, 2015. Location: Room 510 a,c on Level 5.
Invited Talks
- Jorge Nocedal (Northwestern University, US)
- Guanghui Lan (University of Florida, US)
- Elad Hazan (Princeton University, US)
Title: An Evolving Gradient Resampling Method (Jorge Nocedal)
Abstract: We propose an algorithm for minimizing expected risk F that shares some properties with randomized incremental aggregated gradient methods as well as dynamic sampling methods. The algorithm is well suited for problems involving a very large training set, where one (or a few) passes over the data suffice to produce an acceptable solution. By progressively decreasing the gradient variance, the algorithm is able to achieve linear convergence in expected risk F (not just in training error).
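As a rough illustration of the dynamic-sampling idea mentioned in the abstract, the hypothetical Python sketch below grows the mini-batch size geometrically so that the variance of the averaged gradient estimate shrinks over the run. It is not the evolving gradient resampling method itself; the names grad_fi, b0, and growth are illustrative assumptions.

import numpy as np

def dynamic_sampling_sgd(grad_fi, n, x0, step=0.1, b0=8, growth=1.1, iters=200, rng=None):
    # Toy dynamic-sampling gradient loop: the sample used to estimate the
    # gradient grows geometrically, so the estimator's variance decreases
    # as the iterates approach a solution.  Illustrative sketch only.
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float).copy()
    b = float(b0)
    for _ in range(iters):
        batch = rng.choice(n, size=min(n, int(b)), replace=False)
        g = np.mean([grad_fi(i, x) for i in batch], axis=0)  # averaged per-example gradients
        x = x - step * g
        b *= growth  # progressively larger sample => lower gradient variance
    return x

# Example use on a least-squares risk with per-example gradient a_i (a_i^T x - b_i):
# rng = np.random.default_rng(0); A = rng.standard_normal((1000, 5)); b = A @ np.ones(5)
# x_hat = dynamic_sampling_sgd(lambda i, x: (A[i] @ x - b[i]) * A[i], n=1000, x0=np.zeros(5))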
Title: Complexity of Composite Optimization (Guanghui Lan)
Abstract: Composite optimization, with an objective function given by the sum of several smooth and nonsmooth components, is prevalent in machine learning models and applications. Theoretical studies on the solution efficiency of these types of problems, however, are still limited. In this talk, we introduce a new set of optimization algorithms that can greatly advance our understanding of the complexity of composite optimization. More specifically, we discuss: 1) gradient sliding algorithms that can skip gradient computations for the smooth component from time to time and thus achieve separate and optimal complexity bounds for both gradient and subgradient evaluations; 2) optimal randomized incremental gradient methods for finite-sum minimization, which exhibit a tight complexity matching our newly developed lower complexity bound for randomized algorithms and, with overwhelming probability, yield significant savings in gradient computations over their deterministic counterparts; and 3) randomized stochastic gradient methods for a class of nonconvex stochastic composite problems. Extensions of these basic algorithms will also be briefly discussed.
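To make the composite finite-sum setting concrete, the hypothetical sketch below pairs a smooth least-squares part with a nonsmooth l1 term and runs a baseline proximal stochastic gradient loop. The talk's gradient-sliding and optimal incremental methods are more sophisticated than this sketch; the names soft_threshold, prox_stochastic_gradient, and lam are illustrative.

import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1, i.e. the nonsmooth component in this example.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_stochastic_gradient(A, b, lam=0.1, step=0.01, epochs=10, rng=None):
    # Baseline proximal stochastic gradient for the composite finite sum
    #     min_x (1/n) * sum_i 0.5*(a_i^T x - b_i)^2 + lam*||x||_1.
    # Illustrative only; not the gradient-sliding or optimal incremental
    # methods discussed in the talk.
    rng = np.random.default_rng() if rng is None else rng
    n, d = A.shape
    x = np.zeros(d)
    for _ in range(epochs):
        for i in rng.permutation(n):
            g = (A[i] @ x - b[i]) * A[i]                    # gradient of the i-th smooth term
            x = soft_threshold(x - step * g, step * lam)    # prox step handles the nonsmooth term
    return x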
Title: Faster Convex Optimization: Simulated Annealing with an Efficient Universal Barrier (Elad Hazan)
Abstract: Interior point methods and random walk approaches have long been considered disparate approaches to convex optimization. We show how simulated annealing, one of the most common random walk algorithms, is equivalent, in a certain sense, to the central path interior point algorithm applied to the enhanced entropic universal barrier function. Using this observation, we improve the state of the art in polynomial-time convex optimization in the membership-oracle model. Joint work with Jacob Abernethy.
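As a loose illustration of the random walk side of this equivalence, the hypothetical sketch below minimizes a convex function by Metropolis sampling from densities proportional to exp(-f(x)/T) while the temperature T is cooled. A simple Gaussian ball walk stands in for the hit-and-run sampler analyzed in the membership-oracle setting, and the cooling schedule is an arbitrary choice.

import numpy as np

def simulated_annealing(f, x0, radius=0.2, temps=None, steps_per_temp=200, rng=None):
    # Toy simulated annealing for minimizing a convex f: at temperature T the
    # walk targets the density proportional to exp(-f(x)/T); cooling T drives
    # the iterates toward the minimizer.  Illustrative sketch only.
    rng = np.random.default_rng() if rng is None else rng
    temps = np.geomspace(1.0, 1e-3, 20) if temps is None else temps
    x = np.asarray(x0, dtype=float).copy()
    for T in temps:
        for _ in range(steps_per_temp):
            y = x + radius * rng.standard_normal(x.shape)               # Gaussian ball-walk proposal
            if rng.random() < np.exp(min(0.0, (f(x) - f(y)) / T)):      # Metropolis acceptance
                x = y
    return x

# Example use: x_hat = simulated_annealing(lambda x: np.sum(x**2), x0=np.ones(2))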