Optimization@NIPS
We welcome you to participate in the 8th NIPS Workshop on Optimization for Machine Learning, to be held in Montreal, Quebec, Canada on December 11th, 2015. Location: Room 510 a,c on Level 5.
Accepted Papers
- Distributed Training of Structured SVM - Ching-Pei Lee, Kai-Wei Chang, Shyam Upadhyay and Dan Roth
- Riemannian preconditioning for tensor completion - Hiroyuki Kasai and Bamdev Mishra
- Dropping Convexity for Faster Semi-definite Optimization - Srinadh Bhojanapalli, Anastasios Kyrillidis and Sujay Sanghavi
- On the Tightness of LP Relaxations for Structured Prediction - Ofer Meshi, Mehrdad Mahdavi and David Sontag
- On Lower and Upper Bounds in Smooth and Strongly Convex Optimization - Yossi Arjevani, Shai Shalev-Shwartz and Ohad Shamir
- Convergence properties of the randomized extended Gauss-Seidel and Kaczmarz methods - Anna Ma, Deanna Needell and Aaditya Ramdas
- Continuous-Time Limit of Stochastic Gradient Descent Revisited - Stephan Mandt, Matthew Hoffman and David Blei
- Next Generation Multicut Optimization for Semi-Planar Graphs - Julian Yarkony
- Federated Optimization: Distributed Optimization Beyond the Datacenter - Jakub Konečný, Brendan McMahan and Daniel Ramage
- HAMSI: Distributed Incremental Optimization Algorithm Using Quadratic Approximations for Partially Separable Problems - Umut Simsekli, Hazal Koptagel, Figen Oztoprak, S. Ilker Birbil and A. Taylan Cemgil
- Understanding symmetries in deep networks - Vijay Badrinarayanan, Bamdev Mishra and Roberto Cipolla
- Lass-0: sparse non-convex regression by local search - William Herlands, Maria De-Arteaga, Daniel Neill and Artur Dubrawski
- Linear Convergence of Proximal-Gradient Methods under the Polyak-Łojasiewicz Condition - Hamed Karimi and Mark Schmidt
- Convergence Rates for Greedy Kaczmarz Algorithms - Julie Nutini, Behrooz Sepehry, Alim Virani, Issam Laradji, Mark Schmidt and Hoyt Koepke
- A Newton-type Incremental Method with a Superlinear Rate of Convergence - Anton Rodomanov and Dmitry Kropotov
- Mixed Robust/Average Submodular Partitioning - Kai Wei, Rishabh Iyer, Shengjie Wang, Wenruo Bai and Jeff Bilmes
- A Stochastic Gradient Method with Linear Convergence Rate for a Class of Non-smooth Non-strongly Convex Optimizations - Tianbao Yang and Qihang Lin
- Towards stability and optimality in stochastic gradient descent - Panos Toulis, Dustin Tran and Edoardo Airoldi
- Sparse and Greedy: Sparsifying Submodular Facility Location Problems - Erik Lindgren, Shanshan Wu and Alexandros Dimakis
- Parallelizing Randomized Convex Optimization - Michael Kamp, Mario Boley and Thomas Gärtner
- Fast Convergence of Online Pairwise Learning Algorithms - Martin Boissier, Siwei Lyu, Yiming Ying and Ding-Xuan Zhou
- On the Efficiency of Recurrent Neural Network Optimization Algorithms - Ben Krause, Liang Lu, Iain Murray and Steve Renals
- Manifold Optimization for Gaussian Mixture Models - Reshad Hosseini and Suvrit Sra
- Stochastic Semi-Proximal Mirror-Prox - Niao He and Zaid Harchaoui
- Primal-Dual Algorithms for Subquadratic Norms - Raman Sankaran, Francis Bach and Chiranjib Bhattacharyya
- Conjugate Descent for the Minimum Norm Problem - Alberto Torres-Barrán and José R. Dorronsoro-Ibero
- Natural Gradient for the Gaussian Distribution via Least Squares Regression - Luigi Malagò
- Random Walk Distributed Dual Averaging Method For Decentralized Consensus Optimization - Cun Mu, Asim Kadav, Erik Kruus, Donald Goldfarb and Martin Renqiang Min
- Clustering for set partitioning: a case study in carpooling - Cathy Wu, Ece Kamar and Eric Horvitz
- Perturbed Iterate Analysis for Asynchronous Stochastic Optimization - Horia Mania, Xinghao Pan, Dimitris Papailiopoulos, Benjamin Recht, Kannan Ramchandran and Michael I. Jordan
- Accelerating Optimization via Adaptive Prediction - Scott Yang and Mehryar Mohri
- Doubly Stochastic Primal-Dual Coordinate Method for Regularized Empirical Risk Minimization with Factorized Data - Adams Wei Yu, Qihang Lin and Tianbao Yang
- An Approximate Formulation based on Integer Linear Programming for Learning Maximum Weighted (k+1)-order Decomposable Graphs - Aritz Pérez Martínez, Christian Blum and Jose A. Lozano
- Multiple Kernel Learning for Prediction on Unknown Graph Structures - Simon Cousins, John Shawe-Taylor, Mario Marchand, Juho Rousu and Hongyu Su
- Safe screening for support vector machines - Julian Zimmert, Christian Schröder de Witt, Giancarlo Kerg and Marius Kloft
- A Multilevel Acceleration for l1-regularized Logistic Regression - Javier Turek and Eran Treister
- ADASECANT: Robust Adaptive Secant Method for Stochastic Gradient - Caglar Gulcehre, Marcin Moczulski and Yoshua Bengio
- Accelerating SVRG via second-order information - Ritesh Kolte, Murat Erdogdu and Ayfer Özgür
- Dual Free SDCA for Empirical Risk Minimization with Adaptive Probabilities - Xi He and Martin Takáč
- Partitioning Data on Features or Samples in Communication-Efficient Distributed Optimization? - Chenxin Ma and Martin Takáč
- Learning Positive Functions in a Hilbert Space - J. Andrew Bagnell and Amir-Massoud Farahmand
- Open Problem: Optimization for Inducing Block-diagonal Matrices - Jin-Ge Yao
- On the Expected Convergence of Randomly Permuted ADMM - Ruoyu Sun, Zhi-Quan Luo and Yinyu Ye
- Classification with Margin Constraints: A Unification with Applications to Optimization - Pooria Joulani, Csaba Szepesvári and András György
- Epigraph proximal algorithms for general convex programming - Matt Wytock, Po-Wei Wang and J. Zico Kolter
- (Bandit) Convex Optimization with Biased Noisy Gradient Oracles - Xiaowei Hu, Prashanth L.A., András György and Csaba Szepesvári
- Scaling Up Simultaneous Diagonalization - Volodymyr Kuleshov, Arun Chaganty and Percy Liang
- Comparing Gibbs, EM and SEM for MAP Inference in Mixture Models - Manzil Zaheer, Michael Wick, Satwik Kottur and Jean-Baptiste Tristan