Publications

The symbol * denotes shared first authorship; in these cases, authors are listed alphabetically.

2025

  1. The Computational Advantage of Depth: Learning High-Dimensional Hierarchical Functions with Gradient Descent
    Yatin Dandi, LP, Lenka Zdeborová, and Florent Krzakala
    In Advances in Neural Information Processing Systems (Spotlight, Notable top 3.5%), 2025
  2. A Random Matrix Theory Perspective on the Spectrum of Learned Features and Asymptotic Generalization Capabilities
    Yatin Dandi, LP, Hugo Cui, Florent Krzakala, Yue M. Lu, and Bruno Loureiro
    In Artificial Intelligence and Statistics (Oral, Notable top 2%), 2025

2024

  1. Repetita iuvant: Data repetition allows SGD to learn high-dimensional multi-index functions
    Luca Arnaboldi, Yatin Dandi, Florent Krzakala, LP*, and Ludovic Stephan
    Preprint, 2024
  2. Online Learning and Information Exponents: On The Importance of Batch Size, and Time/Complexity Tradeoffs
    Luca Arnaboldi, Yatin Dandi, Florent Krzakala, Bruno Loureiro, LP*, and Ludovic Stephan
    In Proceedings of the 41st International Conference on Machine Learning, 2024
  3. Asymptotics of feature learning in two-layer networks after one gradient-step
    Hugo Cui, LP, Yatin Dandi, Florent Krzakala, Yue M. Lu, Lenka Zdeborová, and Bruno Loureiro
    In Proceedings of the 41st International Conference on Machine Learning (Spotlight, Notable top 3.5%), 2024
  4. The benefits of reusing batches for gradient descent in two-layer networks: Breaking the curse of information and leap exponents
    Yatin Dandi, Emanuele Troiani, Luca Arnaboldi, LP, Lenka Zdeborová, and Florent Krzakala
    In Proceedings of the 41st International Conference on Machine Learning, 2024
  5. Learning Two-Layer Neural Networks, One (Giant) Step at a Time
    Yatin Dandi, Florent Krzakala, Bruno Loureiro, LP*, and Ludovic Stephan
    Journal of Machine Learning Research, 2024
  6. Theory and Applications of the Sum-Of-Squares technique
    Francis Bach, Elisabetta Cornacchia, LP, and Giovanni Piccioli
    Journal of Statistical Mechanics: Theory and Experiment, 2024

2023

  1. Are Gaussian Data All You Need? The Extents and Limits of Universality in High-Dimensional Generalized Linear Estimation
    LP, Florent Krzakala, Bruno Loureiro, and Ludovic Stephan
    In Proceedings of the 40th International Conference on Machine Learning, 2023

2022

  1. Subspace clustering in high-dimensions: Phase transitions & Statistical-to-Computational gap
    LP, Bruno Loureiro, Florent Krzakala, and Lenka Zdeborová
    In Advances in Neural Information Processing Systems, 2022