Andrea Zanette, Postdoctoral Scholar
Electrical Engineering and Computer Sciences
University of California at Berkeley
[lastname] at berkeley.edu

 

I am a postdoctoral scholar in the Department of Electrical Engineering and Computer Sciences at the University of California at Berkeley. I work primarily with Martin Wainwright on the statistical foundations of Reinforcement Learning, a subarea of Artificial Intelligence that deals with decision making under uncertainty. My research is generously supported by the Foundations of Data Science Institute.

I completed my PhD (2017-2021) at the Institute for Computational and Mathematical Engineering at Stanford University, advised by Prof. Emma Brunskill and Prof. Mykel J. Kochenderfer. During my candidacy I had the pleasure of working with Alessandro Lazaric from Facebook Artificial Intelligence Research and Alekh Agarwal from Microsoft Research.

My PhD dissertation investigated modern Reinforcement Learning challenges such as exploration, function approximation, adaptivity, and learning from offline data; it received the Gene Golub Outstanding Dissertation Award from my department.

Before starting my PhD, I was a master’s student in the same department (2015-2017). In my former life I was a mechanical engineer. I worked in the civil construction sector and for M3E, developing high-performance linear algebra software. I also spent some time at the von Karman Institute for Fluid Dynamics, a NATO-affiliated international research establishment.

Publications

  • Andrea Zanette
    When is Realizability Sufficient for Off-Policy Reinforcement Learning?
    ICML (International Conference on Machine Learning), 2023 [Paper]
  • Andrea Zanette, Martin J. Wainwright
    Bellman Residual Orthogonalization for Offline Reinforcement Learning
    NeurIPS (Neural Information Processing Systems), 2022, Full Oral [Paper]
  • Andrea Zanette, Martin J. Wainwright
    Stabilizing Q-learning with Linear Architectures for Provably Efficient Learning
    ICML (International Conference on Machine Learning), 2022 [Paper]
  • Andrea Zanette, Martin J. Wainwright, Emma Brunskill
    Provable Benefits of Actor-Critic Methods for Offline Reinforcement Learning [Paper]
    Spotlight presentation at the ICML 2021 Workshop on Reinforcement Learning Theory
    NeurIPS (Neural Information Processing Systems), 2021
  • Andrea Zanette*, Kefan Dong*, Jonathan Lee*, Emma Brunskill
    Design of Experiments for Stochastic Contextual Linear Bandits [Paper]
    NeurIPS (Neural Information Processing Systems), 2021
    (* denotes equal contribution)
  • Andrea Zanette
    Exponential Lower Bounds for Batch Reinforcement Learning:
    Batch RL can be Exponentially Harder than Online RL
    ICML (International Conference on Machine Learning), 2021, Long Oral [Paper] [Csaba’s Class Explanation]
  • Andrea Zanette, Ching-An Cheng, Alekh Agarwal
    Cautiously Optimistic Policy Optimization and Exploration with Linear Function Approximation
    COLT (Conference on Learning Theory), 2021 [Paper]
  • Andrea Zanette, Alessandro Lazaric, Mykel Kochenderfer, Emma Brunskill
    Provably Efficient Reward-Agnostic Navigation with Linear Value Iteration
    NeurIPS (Neural Information Processing Systems), 2020 [Paper]
  • Andrea Zanette, Alessandro Lazaric, Mykel Kochenderfer, Emma Brunskill
    Learning Near Optimal Policies with Low Inherent Bellman Error
    ICML (International Conference on Machine Learning), 2020 [Paper]
  • Andrea Zanette*, David Brandfonbrener*, Emma Brunskill, Matteo Pirotta, Alessandro Lazaric
    Frequentist Regret Bounds for Randomized Least-Squares Value Iteration
    AISTATS (International Conference on Artificial Intelligence and Statistics), 2020 [Paper]
    (* denotes equal contribution)
  • Andrea Zanette, Mykel J. Kochenderfer, Emma Brunskill
    Almost Horizon-Free Structure-Aware Best Policy Identification with a Generative Model
    NeurIPS (Neural Information Processing Systems), 2019 [Paper]
  • Andrea Zanette, Alessandro Lazaric, Mykel J. Kochenderfer, Emma Brunskill
    Limiting Extrapolation in Linear Approximate Value Iteration
    NeurIPS (Neural Information Processing Systems), 2019 [Paper]
  • Andrea Zanette, Emma Brunskill
    Tighter Problem-Dependent Regret Bounds in Reinforcement Learning without Domain Knowledge using Value Function Bounds
    ICML (International Conference on Machine Learning), 2019 [Paper]
  • Andrea Zanette, Junzi Zhang, Mykel J. Kochenderfer
    Robust Super-Level Set Estimation using Gaussian Processes
    ECML-PKDD (European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases), 2018 [Paper]
  • Andrea Zanette, Emma Brunskill
    Problem Dependent Reinforcement Learning Bounds Which Can Identify Bandit Structure in MDPs
    ICML (International Conference on Machine Learning), 2018, Long Oral [Paper]
  • Andrea Zanette, Massimiliano Ferronato, Carlo Janna
    Enriching the finite element method with meshfree techniques in structural mechanics
    IJNME (International Journal for Numerical Methods in Engineering), 2017 [Paper]
    Selected by Advances in Engineering as a key scientific article contributing to excellence in science and engineering research [Award]
  • Andrea Zanette, Massimiliano Ferronato, Carlo Janna
    Enriching the Finite Element Method with meshfree particles in structural mechanics
    PAMM (Proceedings in Applied Mathematics and Mechanics), 2015, Oral
    Best Poster Award at International CAE Conference 2014 [Award]
    Featured in Enginsoft 2014, issue number 4 [Media]

Teaching

  • TA for Math of Machine Learning Summer School, University of Washington, August 2019
  • Instructor for ICME Workshop on Reinforcement Learning, Stanford University, August 2018
  • TA: CS234 (Reinforcement Learning) in 2018, 2019, 2020; CS332 (Advanced Reinforcement Learning) in 2018; CS238 (Decision Making Under Uncertainty) in 2018; CME 200 (Linear Algebra) in 2016-2020; CME 307 (Optimization) in 2017; AA222 (Engineering Design and Optimization) in 2017, 2018

Professional Service

  • Area Chair: ICML ’20, ICLR ’21
  • Conference Reviewer: COLT ’19; NeurIPS ’19, ’20, ’21, ’22; ICML ’21, ’22; AAAI ’20; AISTATS ’20, ’21, ’22
  • Journal Reviewer: Journal of Artificial Intelligence Research, Annals of Statistics, Journal of the American Statistical Association