Senior Research Scientist
I am a senior research scientist at Google Brain, where I lead the “Deep Phenomena” team. My approach is to bridge theory and practice in large-scale machine learning by designing algorithms that come with theoretical guarantees and also work efficiently in practice. In recent years, I have focused on understanding and improving deep learning.
Prior to Google, I was a Research Scientist at the Allen Institute for Artificial Intelligence and, before that, a postdoctoral fellow at UC Irvine. I received my PhD from the University of Southern California in 2015, with a minor in mathematics.
- Jan. 2022: We have three papers accepted at ICLR 2022.
- Sept. 2021: I will be serving as Tutorials Co-Chair for ICML 2022.
- May 2021: I am organizing the “Overparametrization: Pitfalls and Opportunities” workshop at ICML 2021.
- March 2021: I will serve as an Area Chair for NeurIPS 2021.
- March 2021: Our GoogleAI blog post on a new framework for understanding deep learning generalization is now live!
- Jan. 2021: Our Deep Bootstrap framework was accepted at ICLR 2021.
Gradual Domain Adaptation in the Wild: When Intermediate Distributions are Absent,
Samira Abnar, Rianne van den Berg, Golnaz Ghiasi, Mostafa Dehghani, Nal Kalchbrenner, Hanie Sedghi
In submission, [arXiv:2106.06080]
The Deep Bootstrap: Good Online Learners are Good Offline Generalizers,
Preetum Nakkiran, Behnam Neyshabur, Hanie Sedghi
International Conference on Learning Representations (ICLR), 2021.
[arXiv:2010.08127] [GoogleAI blog post] [CIFAR-5m dataset] [code]
What is being transferred in transfer learning?,
Hanie Sedghi*, Behnam Neyshabur*, Chiyuan Zhang*. (equal contribution)
Neural Information Processing Systems (NeurIPS), 2020.
[arXiv:2008.11687] [code] [poster] [short video] [talk at Harvard ML Theory]
The intriguing role of module criticality in the generalization of deep networks,
Niladri Chatterji, Behnam Neyshabur, Hanie Sedghi.
International Conference on Learning Representations (ICLR), 2020 (spotlight).
Generalization bounds for deep convolutional neural networks,
Phil Long*, Hanie Sedghi*. (alphabetical order)
International Conference on Learning Representations (ICLR), 2020.
[arXiv:1912.00528] [talk at Simons Institute]
MLSys: The new frontiers of machine learning systems,
Alexander Ratner, Dan Alistarh, Gustavo Alonso, David G Andersen, Peter Bailis, Sarah Bird, Nicholas Carlini, Bryan Catanzaro, Jennifer Chayes, Eric Chung, Bill Dally, Jeff Dean, Inderjit S Dhillon, Alexandros Dimakis, Pradeep Dubey, Charles Elkan, Grigori Fursin, Gregory R Ganger, Lise Getoor, Phillip B Gibbons, Garth A Gibson, Joseph E Gonzalez, Justin Gottschlich, Song Han, Kim Hazelwood, Furong Huang, Martin Jaggi, Kevin Jamieson, Michael I Jordan, Gauri Joshi, Rania Khalaf, Jason Knight, Jakub Konečný, Tim Kraska, Arun Kumar, Anastasios Kyrillidis, Aparna Lakshmiratan, Jing Li, Samuel Madden, H Brendan McMahan, Erik Meijer, Ioannis Mitliagkas, Rajat Monga, Derek Murray, Kunle Olukotun, Dimitris Papailiopoulos, Gennady Pekhimenko, Theodoros Rekatsinas, Afshin Rostamizadeh, Christopher Ré, Christopher De Sa, Hanie Sedghi, Siddhartha Sen, Virginia Smith, Alex Smola, Dawn Song, Evan Sparks, Ion Stoica, Vivienne Sze, Madeleine Udell, Joaquin Vanschoren, Shivaram Venkataraman, Rashmi Vinayak, Markus Weimer, Andrew Gordon Wilson, Eric Xing, Matei Zaharia, Ce Zhang, Ameet Talwalkar
How Good Are My Predictions? Efficiently Approximating Precision-Recall Curves for Massive Datasets,
Ashish Sabharwal*, Hanie Sedghi* (alphabetical order)
Conference on Uncertainty in Artificial Intelligence (UAI), 2017.
Beating the perils of non-convexity: Guaranteed training of neural networks using tensor methods,
Majid Janzamin, Hanie Sedghi, Anima Anandkumar
Artificial Intelligence and Statistics (AISTATS), 2016.
[arXiv:1506.08473] [talk at MLconf]
FEAST at play: Feature ExtrAction using score function Tensors,
Majid Janzamin*, Hanie Sedghi*, U.N. Niranjan, Anima Anandkumar (equal contribution)
Feature Extraction: Modern Questions and Challenges, NeurIPS, 2015.
Multi-step stochastic ADMM in high dimensions: Applications to sparse optimization and matrix decomposition,
Hanie Sedghi, Anima Anandkumar, Edmond Jonckheere
Neural Information Processing Systems (NIPS), 2014.
[conference version] [full paper] [video]
A Game-Theoretic Approach for Power Allocation in Bidirectional Cooperative Communication,
Majid Janzamin, MohammadReza Pakravan, Hanie Sedghi
IEEE Wireless Communications and Networking Conference (WCNC), 2010.