OPT2019

We welcome you to participate in the 11th OPT Workshop on Optimization for Machine Learning. This year's OPT workshop will be run as an independent event, co-located with NeurIPS in Vancouver. It will take place on December 14th, overlapping with the NeurIPS workshops so that attendees can easily move between the two venues.


We are looking forward to an exciting OPT 2019!



Location

Exchange Hotel Vancouver (5th floor, Townley Studio), a short 5-minute walk from the conference center.

Schedule

Time | Speaker | Title
8:15am-8:50am | | Light Breakfast and Coffee
8:50am-9:00am | Organizers | Opening Remarks
9:00am-9:30am | John Duchi | Optimality in optimization [abstract]
9:30am-10:00am | Mert Gürbüzbalaban | Recent Advances in Stochastic Gradient Methods: From Convex to Non-Convex Optimization and Deep Learning [abstract]
10:00am-10:20am | Spotlight Talks for posters | See accepted papers [posters]
10:20am-11:15am | | POSTER SESSION (including coffee break 10:30am-11:00am)
11:15am-11:45am | Mengdi Wang | Sample-Efficient Reinforcement Learning in Feature Space [abstract]
11:45am-12:00pm | Itay M Safran, Ohad Shamir | How Good is SGD with Random Shuffling?
12:00pm-12:15pm | Oliver Hinder, Aaron Sidford, Nimit S Sohoni | Near-Optimal Methods for Minimizing Star-Convex Functions and Beyond
12:15pm-12:30pm | Dmitrii Avdiukhin, Grigory Yaroslavtsev, Chi Jin | Escaping Saddle Points with Inequality Constraints via Noisy Sticky Projected Gradient Descent
12:30pm-2:00pm | | Lunch Break (on your own)
2:00pm-2:30pm | Swati Gupta | Impact of Bias on Hiring Decisions [abstract]
2:30pm-2:45pm | Alejandro Carderera, Jelena Diakonikolas, Sebastian Pokutta | Breaking the Curse of Dimensionality (Locally) to Accelerate Conditional Gradients
2:45pm-3:00pm | Geoffrey Negiar, Armin Askari, Martin Jaggi, Fabian Pedregosa | Linearly Convergent Frank-Wolfe without Prior Knowledge
3:00pm-3:15pm | Marwa El Halabi, Stefanie Jegelka | Minimizing approximately submodular functions
3:15pm-4:15pm | | POSTER SESSION (including coffee break 3:30pm-4:00pm)
4:15pm-4:45pm | Yin-Tat Lee | Solving Linear Programs in the Current Matrix Multiplication Time [abstract]
4:45pm-5:00pm | Wei Hu, Jeffrey Pennington | Provable Benefit of Orthogonal Initialization in Optimizing Deep Linear Networks
5:00pm-5:15pm | Jingfeng Wu, Vladimir Braverman, Lin Yang | Obtaining Regularization for Free via Iterate Averaging
5:15pm-5:16pm | Organizers | Closing Remarks
5:16pm-6:30pm | | POSTER SESSION