Optimization@NIPS
We welcome you to participate in the 8th NIPS Workshop on Optimization for Machine Learning, to be held in Montreal, Quebec, Canada on December 11th, 2015.
Location: Room 510 a,c on Level 5
Time  Speaker  Title

8:30am-9:30am  Jorge Nocedal
 An Evolving Gradient Resampling Method  [abstract] [slides]
We propose an algorithm for minimizing expected risk F that shares some properties with randomized incremental aggregated gradient methods as well as dynamic sampling methods. The algorithm is well suited for problems involving a very large training set, where one (or a few) passes over the data suffice to produce an acceptable solution. By progressively decreasing the gradient variance, the algorithm is able to achieve linear convergence in expected risk F (not just in training error).
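To make the variance-reduction mechanism concrete, here is a minimal Python sketch of a dynamic-sampling gradient method in the spirit described above; the geometric growth rate, step size, and the avg_grad interface are illustrative assumptions, not details of the algorithm presented in the talk.

```python
import numpy as np

def dynamic_sampling_sgd(avg_grad, n, w0, lr=0.05, b0=2, growth=1.05, max_passes=2):
    """Stochastic gradient descent with a geometrically growing sample size.

    avg_grad(w, idx) must return the gradient of the risk averaged over the
    component functions indexed by idx.  Growing the batch drives the
    gradient variance down, the mechanism by which dynamic-sampling methods
    can obtain linear convergence in the risk itself.
    """
    w = w0.copy()
    batch, used = float(b0), 0
    while used < max_passes * n:
        b = min(int(batch), n)
        idx = np.random.choice(n, size=b, replace=False)
        w -= lr * avg_grad(w, idx)
        used += b
        batch *= growth  # geometric growth: gradient variance shrinks steadily
    return w

# Example: least-squares risk (1/n) * sum_i (a_i @ w - y_i)^2.
rng = np.random.default_rng(0)
A, y = rng.standard_normal((1000, 5)), rng.standard_normal(1000)
avg_grad = lambda w, idx: 2 * A[idx].T @ (A[idx] @ w - y[idx]) / len(idx)
w = dynamic_sampling_sgd(avg_grad, n=1000, w0=np.zeros(5))
```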

9:30am-9:50am  Yossi Arjevani, Shai Shalev-Shwartz and Ohad Shamir
 On Lower and Upper Bounds in Smooth and Strongly Convex Optimization 
9:50am-11:50am  POSTER SESSION (including coffee break 10:00am-10:30am) 
11:50am-12:10pm  Horia Mania, Xinghao Pan, Dimitris Papailiopoulos, Benjamin Recht, Kannan Ramchandran and Michael I. Jordan
 Perturbed Iterate Analysis for Asynchronous Stochastic Optimization 
12:10pm-12:30pm  Ruoyu Sun, Zhi-Quan Luo and Yinyu Ye
 On the Expected Convergence of Randomly Permuted ADMM 
12:30pm-2:30pm  Lunch (+ Microsoft CNTK Tutorial by Dong Yu & Xuedong Huang, 1:15pm-2:15pm, same room but independent of OPT) 
2:30pm-3:30pm  Guanghui Lan
 Complexity of Composite Optimization  [abstract] [slides]
Composite optimization, in which the objective is a sum of smooth and nonsmooth components, is prevalent in machine learning models and applications. Theoretical studies of the solution efficiency of these problems, however, are still limited. In this talk, we introduce a new set of optimization algorithms that greatly advance our understanding of the complexity of composite optimization. More specifically, we discuss: 1) gradient sliding algorithms that can skip gradient computations for the smooth component from time to time, and thus achieve separate and optimal complexity bounds for both gradient and subgradient evaluations; 2) optimal randomized incremental gradient methods for finite-sum minimization, whose complexity tightly matches our newly developed lower complexity bound for randomized algorithms and which, with overwhelming probability, save significantly on gradient computations relative to their deterministic counterparts; and 3) randomized stochastic gradient methods for a class of nonconvex stochastic composite problems. Extensions of these basic algorithms will also be briefly discussed.
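As a rough illustration of the "skipped gradients" idea behind gradient sliding, here is a heavily simplified Python sketch; Lan's actual method uses carefully weighted averaging and step-size schedules to attain the optimal bounds, and the loop structure and constants below are assumptions made purely for illustration.

```python
import numpy as np

def gradient_sliding(grad_f, subgrad_h, x0, outer=100, inner=10, step=0.01):
    """Simplified sketch of the gradient-sliding idea for min_x f(x) + h(x),
    with f smooth and h nonsmooth.

    The gradient of f is evaluated once per outer iteration and then frozen,
    while several cheap subgradient steps are taken on h; this is how the
    method "skips" gradient computations for the smooth component.
    """
    x = x0.copy()
    for _ in range(outer):
        g = grad_f(x)             # one (possibly expensive) smooth gradient
        for _ in range(inner):    # inner subgradient steps reuse the frozen g
            x = x - step * (g + subgrad_h(x))
    return x

# Example: f(x) = ||x - 1||^2 (smooth), h(x) = ||x||_1 (nonsmooth).
x = gradient_sliding(
    grad_f=lambda x: 2 * (x - 1.0),
    subgrad_h=np.sign,   # np.sign is a valid subgradient of the l1 norm
    x0=np.zeros(3),
)
```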

3:30pm-5:30pm  POSTER SESSION (including coffee break 4:00pm-4:30pm) 
5:30pm-6:30pm  Elad Hazan
 Faster Convex Optimization: Simulated Annealing with an Efficient Universal Barrier  [abstract] [slides]
Interior point methods and random walk approaches have long been considered disparate approaches to convex optimization. We show how simulated annealing, one of the most common random walk algorithms, is equivalent, in a certain sense, to the central path interior point algorithm applied to the enhanced entropic universal barrier function. Using this observation, we improve the state of the art in polynomial-time convex optimization in the membership-oracle model.
Joint work with Jacob Abernethy.
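A toy Python sketch of simulated annealing under a membership oracle may help fix ideas; the talk's analysis concerns far more refined random walks and temperature schedules, and the Gaussian proposal, cooling rate, and example body below are illustrative assumptions.

```python
import numpy as np

def annealed_minimize(f, inside, x0, steps=20000, t0=1.0, t_final=1e-3, sigma=0.1):
    """Toy simulated-annealing minimizer over a convex body known only
    through a membership oracle `inside(x) -> bool`.

    Proposals that leave the body are rejected (the oracle is the only
    access to the feasible set); otherwise a Metropolis test with a
    decreasing temperature concentrates the walk near a minimizer of f.
    """
    x, fx = x0.copy(), f(x0)
    decay = (t_final / t0) ** (1.0 / steps)
    t = t0
    for _ in range(steps):
        y = x + sigma * np.random.randn(*x.shape)   # Gaussian proposal
        if inside(y):                               # membership-oracle query
            fy = f(y)
            if fy <= fx or np.random.rand() < np.exp((fx - fy) / t):
                x, fx = y, fy                       # Metropolis acceptance
        t *= decay                                  # cool the temperature
    return x, fx

# Example: minimize a linear function over the unit ball.
x_min, f_min = annealed_minimize(
    f=lambda x: x @ np.array([1.0, -2.0]),
    inside=lambda x: np.linalg.norm(x) <= 1.0,
    x0=np.zeros(2),
)
```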
