Optimization@NIPS
We welcome you to participate in the 7th NIPS Workshop on Optimization for Machine Learning, to be held at:
Montreal, Quebec, Canada, December 12th, 2014
Room 513e,f on Level 5
Accepted Papers
- A Stochastic PCA Algorithm with an Exponential Convergence Rate - Ohad Shamir
- Non-Uniform Stochastic Average Gradient Method for Training Conditional Random Fields - Mark Schmidt, Ann Clifton and Anoop Sarkar
- Robust minimum volume ellipsoids and higher-order polynomial level sets - Amir Ali Ahmadi, Dmitry Malioutov and Ronny Luss
- Convergence Analysis of ADMM for a Family of Nonconvex Problems - Mingyi Hong, Zhi-Quan Luo and Meisam Razaviyayn
- Provable Learning of Overcomplete Latent Variable Models: Semi-supervised and Unsupervised Settings - Majid Janzamin, Anima Anandkumar and Rong Ge
- Adaptive Communication Bounds for Distributed Online Learning - Michael Kamp, Mario Boley and Michael Mock
- Efficient Training of Structured SVMs via Soft Constraints - Ofer Meshi, Nathan Srebro and Tamir Hazan
- Approximate Low-Rank Tensor Learning - Yaoliang Yu, Hao Cheng and Xinhua Zhang
- Complexity Issues and Randomization Strategies in Frank-Wolfe Algorithms for Machine Learning - Emanuele Frandi, Ricardo Ñanculef and Johan Suykens
- On Iterative Hard Thresholding Methods for High-dimensional M-Estimation - Prateek Jain, Ambuj Tewari and Purushottam Kar
- Accelerated Parallel Optimization Methods for Large Scale Machine Learning - Haipeng Luo, Patrick Haffner and Jean-François Paiement
- A Multilevel Framework for Sparse Inverse Covariance Estimation - Eran Treister, Javier Turek and Irad Yavneh
- Fast large-scale optimization by unifying stochastic gradient and quasi-Newton methods - Jascha Sohl-Dickstein, Ben Poole and Surya Ganguli
- Distributed Latent Dirichlet Allocation via Tensor Factorization - Furong Huang, Sergiy Matusevych, Animashree Anandkumar, Nikos Karampatziakis and Paul Mineiro
- Coresets for the DP-Means Clustering Problem - Olivier Bachem, Mario Lucic and Andreas Krause
- RadaGrad: Random Projections for Adaptive Stochastic Optimization - Gabriel Krummenacher and Brian McWilliams
- Scaling up Lloyd’s algorithm: stochastic and parallel block-wise optimization perspectives - Cheng Tang and Claire Monteleoni
- CqBoost: A Column Generation Method for Minimizing the C-Bound - François Laviolette, Mario Marchand and Jean-Francis Roy
- Tighter Low-rank Approximation via Sampling the Leveraged Element - Srinadh Bhojanapalli, Prateek Jain and Sujay Sanghavi
- S2CD: Semi-Stochastic Coordinate Descent - Jakub Konečný, Zheng Qu and Peter Richtárik
- Stochastic Relaxation over the Exponential Family: Second-Order Geometry - Luigi Malagò and Giovanni Pistone
- Learning with stochastic proximal gradient - Lorenzo Rosasco, Silvia Villa and Bang Cong Vu
- Asynchronous Parallel Block-Coordinate Frank-Wolfe - Yu-Xiang Wang, Veeranjaneyulu Sadhanala, Wei Dai, Willie Neiswanger, Suvrit Sra and Eric Xing
- Fast Nonnegative Matrix Factorization with Rank-one ADMM - Dongjin Song, David Meyer and Martin Renqiang Min
- Neurally Plausible Algorithms Find Globally Optimal Sparse Codes - Sanjeev Arora, Rong Ge, Tengyu Ma and Ankur Moitra
- mS2GD: Mini-Batch Semi-Stochastic Gradient Descent in the Proximal Setting - Jakub Konečný, Jie Liu, Peter Richtárik and Martin Takáč
- Randomized Subspace Descent - Rafael Frongillo and Mark Reid
- Coordinate descent converges faster with the Gauss-Southwell rule than random selection - Mark Schmidt and Michael Friedlander