Hanie Sedghi

Senior Research Scientist

Google Research, Brain team

Email: haniesedghi(at)google.com

I am a senior research scientist at Google Brain, where I lead the “Deep Phenomena” team. My approach is to bridge theory and practice in large-scale machine learning by designing algorithms with theoretical guarantees that also work efficiently in practice. In recent years, I have been working on understanding and improving deep learning.

Prior to Google, I was a Research Scientist at the Allen Institute for Artificial Intelligence and, before that, a postdoctoral fellow at UC Irvine. I received my PhD from the University of Southern California, with a minor in mathematics, in 2015.

Curriculum Vitae

News

Google Scholar

Publications

Exploring the limits of large scale pretraining,
Samira Abnar, Mostafa Dehghani, Behnam Neyshabur, Hanie Sedghi
ICLR 2022, spotlight, [arXiv:2110.02095]

Leveraging Unlabeled Data to Predict Out-of-Distribution Performance,
Saurabh Garg, Sivaraman Balakrishnan, Zachary C. Lipton, Behnam Neyshabur, Hanie Sedghi
ICLR 2022, [arXiv:2201.04234]

The role of permutation invariance in linear mode connectivity of neural networks,
Rahim Entezari, Hanie Sedghi, Olga Saukh, Behnam Neyshabur
ICLR 2022, [arXiv:2110.06296]

Avoiding Spurious Correlations: Bridging Theory and Practice,
Thao Nguyen, Vaishnavh Nagarajan, Hanie Sedghi, Behnam Neyshabur
NeurIPS 2021, DistShift workshop, [paper]

Gradual Domain Adaptation in the Wild: When Intermediate Distributions are Absent,
Samira Abnar, Rianne van den Berg, Golnaz Ghiasi, Mostafa Dehghani, Nal Kalchbrenner, Hanie Sedghi
In submission, [arXiv:2106.06080]

Understanding the effect of sparsity on neural networks robustness,
Lukas Timpl, Rahim Entezari, Hanie Sedghi, Behnam Neyshabur, Olga Saukh
Overparametrization: Pitfalls and Opportunities workshop, ICML 2021

The Deep Bootstrap: Good Online Learners are Good Offline Generalizers,
Preetum Nakkiran, Behnam Neyshabur, Hanie Sedghi
International Conference on Learning Representations (ICLR), 2021.
[arXiv:2010.08127] [GoogleAI blog post] [Cifar-5m dataset] [code]

What is being transferred in transfer learning?,
Hanie Sedghi*, Behnam Neyshabur*, Chiyuan Zhang*. (equal contribution)
Neural Information Processing Systems (NeurIPS), 2020.
[arXiv:2008.11687] [code] [poster] [short video] [talk at Harvard ML Theory]

Regularizing the training of convolutional neural networks,
Vineet Gupta, Phil Long, Hanie Sedghi.
US Patent 16422797

The intriguing role of module criticality in the generalization of deep networks,
Niladri Chatterji, Behnam Neyshabur, Hanie Sedghi.
International Conference on Learning Representations (ICLR), 2020 (spotlight).
[arXiv:1912.00528] [code]

Generalization bounds for deep convolutional neural networks,
Phil Long*, Hanie Sedghi*. (alphabetical order)
International Conference on Learning Representations (ICLR), 2020.
[arXiv:1905.12600] [talk at Simons Institute]

On the effect of activation function on distribution of hidden nodes in a deep network,
Phil Long*, Hanie Sedghi*. (alphabetical order)
Neural Computation 31 (12), 2562-2580.
[arXiv:1901.02104]

The singular values of convolutional layers,
Hanie Sedghi, Vineet Gupta, Phil Long
International Conference on Learning Representations (ICLR), 2019.
[arXiv:1805.10408][code]

MLSys: The new frontiers of machine learning systems,
Alexander Ratner, Dan Alistarh, Gustavo Alonso, David G Andersen, Peter Bailis, Sarah Bird, Nicholas Carlini, Bryan Catanzaro, Jennifer Chayes, Eric Chung, Bill Dally, Jeff Dean, Inderjit S Dhillon, Alexandros Dimakis, Pradeep Dubey, Charles Elkan, Grigori Fursin, Gregory R Ganger, Lise Getoor, Phillip B Gibbons, Garth A Gibson, Joseph E Gonzalez, Justin Gottschlich, Song Han, Kim Hazelwood, Furong Huang, Martin Jaggi, Kevin Jamieson, Michael I Jordan, Gauri Joshi, Rania Khalaf, Jason Knight, Jakub Konečný, Tim Kraska, Arun Kumar, Anastasios Kyrillidis, Aparna Lakshmiratan, Jing Li, Samuel Madden, H Brendan McMahan, Erik Meijer, Ioannis Mitliagkas, Rajat Monga, Derek Murray, Kunle Olukotun, Dimitris Papailiopoulos, Gennady Pekhimenko, Theodoros Rekatsinas, Afshin Rostamizadeh, Christopher Ré, Christopher De Sa, Hanie Sedghi, Siddhartha Sen, Virginia Smith, Alex Smola, Dawn Song, Evan Sparks, Ion Stoica, Vivienne Sze, Madeleine Udell, Joaquin Vanschoren, Shivaram Venkataraman, Rashmi Vinayak, Markus Weimer, Andrew Gordon Wilson, Eric Xing, Matei Zaharia, Ce Zhang, Ameet Talwalkar
[arXiv:1904.03257]

Knowledge completion for generics using guided tensor factorization,
Hanie Sedghi, Ashish Sabharwal
Transactions of the Association for Computational Linguistics 6, 197-210, 2018
[paper]

How Good Are My Predictions? Efficiently Approximating Precision-Recall Curves for Massive Datasets,
Ashish Sabharwal*, Hanie Sedghi* (alphabetical order)
Conference on Uncertainty in Artificial Intelligence (UAI), 2017
[paper]

Provable tensor methods for learning mixtures of generalized linear models,
Hanie Sedghi, Majid Janzamin, Anima Anandkumar
Artificial Intelligence and Statistics (AIStats), 2016
[paper]

Training Input-Output Recurrent Neural Networks through Spectral Methods,
Hanie Sedghi, Anima Anandkumar
[arXiv:1603.00954]

Beating the perils of non-convexity: Guaranteed training of neural networks using tensor methods
Majid Janzamin, Hanie Sedghi, Anima Anandkumar
Artificial Intelligence and Statistics (AIStats), 2016
[arXiv:1506.08473] [talk at MlConf]

FEAST at play: Feature ExtrAction using score function Tensors
Majid Janzamin*, Hanie Sedghi*, U.N. Niranjan, Anima Anandkumar (equal contribution)
Feature Extraction: Modern Questions and Challenges workshop, NeurIPS, 2015
[paper]

Learning mixed membership community models in social tagging networks through tensor methods,
Anima Anandkumar, Hanie Sedghi
[arXiv:1503.04567]

Score Function Features for Discriminative Learning
Majid Janzamin, Hanie Sedghi, Anima Anandkumar
International Conference on Learning Representations (ICLR), 2015.
[arXiv:1412.2863]

Provable Methods for training neural networks with sparse connectivity,
Hanie Sedghi, Anima Anandkumar
International Conference on Learning Representations (ICLR), 2015.
[arXiv:1412.2693]

Stochastic optimization in high dimensions
Hanie Sedghi
PhD thesis, University of Southern California, 2015.
[thesis]

Multi-step stochastic ADMM in high dimensions: Applications to sparse optimization and matrix decomposition
Hanie Sedghi, Anima Anandkumar, Edmond Jonckheere
Neural Information Processing Systems (NIPS), 2014.
[conference version] [full paper] [video]

Statistical Structure Learning to Ensure Data Integrity in Smart Grid
Hanie Sedghi, Edmond Jonckheere
IEEE Transactions on Smart Grid, Vol. 6, Issue 4.
[paper]

Statistical Structure Learning of Smart Grid for Detection of False Data Injection
Hanie Sedghi, Edmond Jonckheere
IEEE Power and Energy Society General Meeting, 2013.
[paper]

On Conditional Mutual Information in Gaussian-Markov Structured Grids
Hanie Sedghi, Edmond Jonckheere
Information and Control in Networks, G. Como, B. Bernhardsson, and A. Rantzer (Eds.), Springer.
[paper]

A Misbehavior-Tolerant Multipath Routing Protocol for Wireless Ad hoc Networks
Hanie Sedghi, MohammadReza Pakravan, MohammadReza Aref
International Journal of Wireless Information Networks, 2011
[paper]

A Game-Theoretic Approach for Power Allocation in Bidirectional Cooperative Communication
Majid Janzamin, MohammadReza Pakravan, Hanie Sedghi
IEEE Wireless Communication and Networking Conference, 2010
[paper]