Optimization@NIPS
We welcome you to participate in the 10th NIPS Workshop on Optimization for Machine Learning, to be held in Long Beach, USA, on December 8th, 2017. Location: Hall A
Accepted Papers
- The Energy Landscape of a Simple Neural Network - Anthony Gamst and Alden Walker
- A Conservation Law Method in Optimization - Bin Shi, Tao Li and Sitharama Iyengar
- Oracle Complexity of Second-Order Methods for Smooth Convex Optimization - Yossi Arjevani, Ohad Shamir and Ron Shiff
- Convergence Analysis of Zeroth-Order Online Alternating Direction Method of Multipliers - Sijia Liu, Pin-Yu Chen, Jie Chen and Alfred Hero
- Convex Feature Clustering and Selection With Class Label Information - Daniel Andrade, Kenji Fukumizu and Yuzuru Okajima
- Improved Optimization of Finite Sums with Minibatch Stochastic Variance Reduced Proximal Iterations - Jialei Wang and Tong Zhang
- Clustering with sparse feature selection using alternating minimization and an exact projection-gradient splitting method - Michel Barlaud, Jean-Baptiste Caillau, Cyprien Gilet and Marie Deprez
- Distributed Inexact Newton-type Pursuit for Non-convex Sparse Learning - Bo Liu, Xiao-Tong Yuan, Qingshan Liu and Dimitris Metaxas
- Streaming Robust PCA - Yang Shi and U N Niranjan
- "Active-set complexity" of proximal gradient: How long does it take to find the sparsity pattern? - Julie Nutini, Mark Schmidt and Warren Hare
- A Graph Theory Approach To QP Problem Reformulation: An Example With SVM - William Brendel and Luis Marujo
- Bridging the Gap between Constant Step Size Stochastic Gradient Descent and Markov Chains - Aymeric Dieuleveut, Alain Durmus and Francis Bach
- Robust Recovery of Low-Rank Matrices using Multi-Penalty Regularization - Massimo Fornasier, Johannes Maly and Valeriya Naumova
- Stochastic Gradient Descent: Going As Fast As Possible But Not Faster - Alice Schoenauer Sebag, Marc Schoenauer and Michèle Sebag
- Stochastic Non-convex Optimization with Strong High Probability Second-order Convergence - Mingrui Liu and Tianbao Yang
- Consistent Robust Regression - Kush Bhatia, Prateek Jain, Parameswaran Kamalaruban and Purushottam Kar
- Structured Local Optima in Sparse Blind Deconvolution - Yuqian Zhang, Han-Wen Kuo and John Wright
- Natasha 2: Faster Non-Convex Optimization Than SGD - Zeyuan Allen-Zhu
- Frank-Wolfe Splitting via Augmented Lagrangian Method - Gauthier Gidel, Fabian Pedregosa and Simon Lacoste-Julien
- Characterization of Gradient Dominance and Regularity Conditions for Neural Networks - Yi Zhou and Yingbin Liang
- Online Generalized Eigenvalue Decomposition: Primal Dual Geometry and Inverse-Free Stochastic Optimization - Xingguo Li, Zhehui Chen, Lin Yang, Jarvis Haupt and Tuo Zhao
- Performance Evaluation of Iterative Methods for Solving Robust Convex Quadratic Problems - Christian Kroer, Nam Ho-Nguyen, George Lu and Fatma Kilinc-Karzan
- Online Factorization and Partition of Complex Networks From Random Walks - Lin Yang, Vladimir Braverman, Tuo Zhao and Mengdi Wang
- A unified framework for structured low-rank matrix learning - Pratik Jawanpuria and Bamdev Mishra
- Accelerated Block Coordinate Proximal Gradients with Applications in High Dimensional Statistics - Tsz Kit Lau and Yuan Yao
- From safe screening rules to working sets for faster Lasso-type solvers - Mathurin Massias, Alexandre Gramfort and Joseph Salmon
- Using stochastic computational graphs formalism for optimization of sequence-to-sequence model - Eugene Golikov, Vlad Zhukov and Maksim Kretov
- Linearly Convergent Stochastic Heavy Ball Method for Minimizing Generalization Error - Nicolas Loizou and Peter Richtarik
- ADMM and Random Walks on Graphs - Guilherme França and José Bento
- FreezeOut: Accelerate Training by Progressively Freezing Layers - Andrew Brock, Theodore Lim, J.M. Ritchie and Nick Weston
- A Note on Extended Formulations for Cardinality-based Sparsity - Cong Han Lim
- Low-Rank Boolean Matrix Approximation by Integer Programming - Reka Kovacs, Oktay Gunluk and Raphael Hauser
- Tight Risk Bounds for Multi-class Learning - Loubna Benabbou and Pascal Lang
- Adaptive Stochastic Dual Coordinate Ascent for Conditional Random Fields - Rémi Le Priol, Ahmed Touati and Simon Lacoste-Julien
- Gradient Diversity: a Key Ingredient for Scalable Distributed Learning - Dong Yin, Ashwin Pananjady, Max Lam, Dimitris Papailiopoulos, Kannan Ramchandran and Peter Bartlett
- Neural network model inversion beyond gradient descent - Eric Wong and J. Zico Kolter
- Gradient Descent using Duality Structures - Thomas Flynn
- Multi-Objective Maximization of Monotone Submodular Functions with Cardinality Constraint - Rajan Udwani
- Optimizing Circulant Support Vector Machines: the Exact Solution - Ramin Raziperchikolaei and Miguel Carreira-Perpinan
- Convergence of Expectation-Maximization - Raunak Kumar and Mark Schmidt
- Variable Metric Proximal Gradient Method with Diagonal Barzilai-Borwein Stepsize - Youngsuk Park, Stephen Boyd, Sauptik Dhar and Mohak Shah
- A Generic Approach for Escaping Saddle points - Sashank J. Reddi, Manzil Zaheer, Suvrit Sra, Barnabas Poczos, Francis Bach, Ruslan Salakhutdinov and Alex Smola
- Model Compression As Constrained Optimization, with Application to Neural Nets - Miguel Carreira-Perpinan and Yerlan Idelbayev
- A Local Analysis of Block Coordinate Descent for Gaussian Phase Retrieval - David Barmherzig and Ju Sun
- Lifted Neural Networks for Weight Initialization - Geoffrey Negiar, Armin Askari, Fabian Pedregosa and Laurent El Ghaoui
- Lower Bounds for Finding Stationary Points of Non-Convex, Smooth High-Dimensional Functions - Yair Carmon, John Duchi, Oliver Hinder and Aaron Sidford
- Efficiently Optimizing over (Non-Convex) Cones via Approximate Projections - Michael Cohen, Chinmay Hegde, Stefanie Jegelka and Ludwig Schmidt
- Multi-scale Nystrom Method - Woosang Lim, Rundong Du, Bo Dai, Kyomin Jung, Le Song and Haesun Park
- Linear Convergence and Support Vector Identification of Sequential Minimal Optimization - Xin Bei She and Mark Schmidt
- Graphical Newton for Huge-Block Coordinate Descent on Sparse Graphs - Issam Hadj Laradji, Julie Nutini and Mark Schmidt
- Black Box Optimization via a Bayesian-Optimized Genetic Algorithm - John Karro, Greg Kochanski and Daniel Golovin