Preprints
- Low-Rank Extragradient Methods for Scalable Semidefinite Optimization (with Atara Kaplan)
- Projection-Free Online Convex Optimization with Time-Varying Constraints (with Ben Kretzu)
- From Oja’s Algorithm to the Multiplicative Weights Update Method with Applications
- Efficiency of First-Order Methods for Low-Rank Tensor Recovery with the Tensor Nuclear Norm Under Strict Complementarity (with Atara Kaplan)
- Low-Rank Mirror-Prox for Nonsmooth and Low-Rank Matrix Optimization Problems (with Atara Kaplan)
Conference & Journal Papers
- Projection-free Online Exp-concave Optimization (with Ben Kretzu)
Accepted to Conference on Learning Theory (COLT), 2023.
- Faster Projection-Free Augmented Lagrangian Methods via Weak Proximal Oracle (with Tsur Livney and Shoham Sabach)
Conference on Artificial Intelligence and Statistics (AISTATS), 2023.
- On the Efficient Implementation of the Matrix Exponentiated Gradient Algorithm for Low-Rank Matrix Optimization (with Atara Kaplan)
Accepted to Mathematics of Operations Research (MOR).
- Frank-Wolfe-based Algorithms for Approximating Tyler’s M-estimator (with Lior Danon)
Neural Information Processing Systems (NeurIPS), 2022.
- Local Linear Convergence of Gradient Methods for Subspace Optimization via Strict Complementarity (with Ron Fisher)
Neural Information Processing Systems (NeurIPS), 2022.
- New Projection-free Algorithms for Online Convex Optimization with Adaptive Regret Guarantees (with Ben Kretzu)
Conference on Learning Theory (COLT), 2022.
Mark Fulk Award for Best Student Paper
- Linear Convergence of Frank-Wolfe for Rank-One Matrix Recovery Without Strong Convexity
Mathematical Programming Series A, 2022.
- Low-Rank Extragradient Method for Nonsmooth and Low-Rank Matrix Optimization Problems (with Atara Kaplan)
Neural Information Processing Systems (NeurIPS), 2021.
- Frank-Wolfe with a Nearest Extreme Point Oracle (with Noam Wolf)
Conference on Learning Theory (COLT), 2021.
- Revisiting Projection-free Online Learning: the Strongly Convex Case (with Ben Kretzu)
Conference on Artificial Intelligence and Statistics (AISTATS), 2021.
- On the Convergence of Projected-Gradient Methods with Low-Rank Projections for Smooth Convex Minimization over Trace-Norm Balls and Related Problems
SIAM Journal on Optimization, 2021.
- Revisiting Frank-Wolfe for Polytopes: Strict Complementarity and Sparsity
Neural Information Processing Systems (NeurIPS), 2020.
Spotlight presentation (top 3% of submissions)
- Online Convex Optimization in the Random Order Model (with Gal Korcia, Kfir Y. Levy)
International Conference on Machine Learning (ICML), 2020.
- On the Convergence of Stochastic Gradient Descent with Low-Rank Projections for Convex Low-Rank Matrix Problems
Conference on Learning Theory (COLT), 2020.
- Learning of Optimal Forecast Aggregation in Partial Evidence Environments (with Yakov Babichenko)
Mathematics of Operations Research (MOR), 2021.
- Improved Regret Bounds for Projection-free Bandit Convex Optimization (with Ben Kretzu)
Conference on Artificial Intelligence and Statistics (AISTATS), 2020.
- Improved Complexities of Conditional Gradient-Type Methods with Applications to Robust Matrix Recovery Problems (with Atara Kaplan, Shoham Sabach)
Mathematical Programming Series A, 2019.
- Stochastic Canonical Correlation Analysis (with Chao Gao, Nathan Srebro, Jialei Wang, Weiran Wang)
Journal of Machine Learning Research (JMLR), 2019.
- On the Regret Minimization of Nonconvex Online Gradient Ascent for Online PCA
Conference on Learning Theory (COLT), 2019.
- Fast Stochastic Algorithms for Low-rank and Nonsmooth Matrix Problems (with Atara Kaplan)
Conference on Artificial Intelligence and Statistics (AISTATS), 2019.
- Logarithmic Regret for Online Gradient Descent Beyond Strong Convexity
Conference on Artificial Intelligence and Statistics (AISTATS), 2019.
- Efficient Coordinate-wise Leading Eigenvector Computation (with Jialei Wang, Weiran Wang, Nathan Srebro)
Conference on Algorithmic Learning Theory (ALT), 2018.
- Efficient Online Linear Optimization with Approximation Algorithms
Mathematics of Operations Research (MOR), 2020.
Preliminary version in Conference on Neural Information Processing Systems (NIPS), 2017.
- Communication-efficient Algorithms for Distributed Stochastic Principal Component Analysis (with Ohad Shamir, Nathan Srebro)
International Conference on Machine Learning (ICML), 2017.
- Linear-Memory and Decomposition-Invariant Linearly Convergent Conditional Gradient Algorithm for Structured Polytopes (with Ofer Meshi)
Conference on Neural Information Processing Systems (NIPS), 2016.
Oral presentation (top 1.8% of submissions)
- Faster Projection-free Convex Optimization over the Spectrahedron
Conference on Neural Information Processing Systems (NIPS), 2016.
- Efficient Globally Convergent Stochastic Optimization for Canonical Correlation Analysis (with Weiran Wang, Jialei Wang, and Nati Srebro)
Conference on Neural Information Processing Systems (NIPS), 2016.
- Faster Eigenvector Computation via Shift-and-Invert Preconditioning (with Elad Hazan, Chi Jin, Sham Kakade, Cameron Musco, Praneeth Netrapalli, and Aaron Sidford)
International Conference on Machine Learning (ICML), 2016. See also this previous and somewhat different manuscript with Elad Hazan.
- A Linearly Convergent Conditional Gradient Algorithm With Applications to Online and Stochastic Optimization (with Elad Hazan)
SIAM Journal on Optimization, July 2016.
- Sublinear Time Algorithms for Approximate Semidefinite Programming (with Elad Hazan)
Mathematical Programming Series A, July 2016.
- Online Learning of Eigenvectors (with Elad Hazan, Tengyu Ma)
International Conference on Machine Learning (ICML), 2015.
- Faster Rates for the Frank-Wolfe Method over Strongly-Convex Sets (with Elad Hazan)
International Conference on Machine Learning (ICML), 2015.
- Online Principal Component Analysis (with Christos Boutsidis, Zohar Karnin, Edo Liberty)
Symposium on Discrete Algorithms (SODA), 2015.
- Playing Non-linear Games with Linear Oracles (with Elad Hazan)
Foundations of Computer Science (FOCS), 2013.
- Adaptive Universal Linear Filtering (with Elad Hazan)
IEEE Transactions on Signal Processing, April 2013.
- Approximating Semidefinite Programs in Sublinear Time (with Elad Hazan)
Conference on Neural Information Processing Systems (NIPS), 2011.
Technical Reports
- Fast and Simple PCA via Convex Optimization (with Elad Hazan)
See also ICML 2016 paper.
Dissertation
- Projection-free Algorithms for Convex Optimization and Online Learning
PhD dissertation, 2016.