Harvard Machine Learning Foundations Group

We are a research group focused on foundational questions in modern machine learning. We are interested in both experimental and theoretical approaches that advance our understanding. Our group includes ML practitioners, theoretical computer scientists, statisticians, and neuroscientists, all sharing the goal of placing machine and natural learning on firmer foundations and elucidating their fundamental capabilities and limitations.

We also run a research-level seminar series on recent advances in the field. Join the seminar mailing list for talk announcements.

Opportunities: We are looking for graduate students and postdocs; see the Opportunities section below. Position announcements will also be posted on social media.

People

Researchers

Demba Ba (Faculty)
Yamini Bansal (PhD Student)
Boaz Barak (Faculty)
Chi-Ning Chou (PhD Student)
Alex Atanasov (PhD Student)
Ben Edelman (PhD Student)
Gal Kaplun (PhD Student)
Sharon Qian (PhD Student)
Yonadav Shavit (PhD Student)
Nikhil Vyas (PhD Student)
Tristan Yang (Undergraduate)
Fred Zhang (PhD Student)

Affiliated

Na Li (Faculty)

Emeritus

Preetum Nakkiran (PhD Student)

Coming Soon

Sitan Chen (Faculty)

Recent Publications

By our group and its members.

(This list is not comprehensive, and we are sometimes slow to update it; see individual homepages and the arXiv for the latest publications.)

Contrasting random and learned features in deep Bayesian linear regression

Deconstructing Distributions: A Pointwise Framework of Learning

Depth induces scale-averaging in overparameterized linear Bayesian neural networks

Neural Networks as Kernel Learners: The Silent Alignment Effect

Inductive Biases and Variable Creation in Self-Attention Mechanisms

Capacity of Group-invariant Linear Readouts from Equivariant Representations: How Many Objects can be Linearly Classified Under All Possible Views?

Revisiting Model Stitching to Compare Neural Representations

Learning Curves for SGD on Structured Features

Out-of-Distribution Generalization in Kernel Regression

Asymptotics of Representation Learning in Finite Bayesian Neural Networks

For Self-supervised Learning, Rationality Implies Generalization, Provably

The Deep Bootstrap: Good Online Learners are Good Offline Generalizers

Distributional Generalization: A New Kind of Generalization

Learning From Strategic Agents: Accuracy, Improvement, and Causality

Deep Double Descent: Where Bigger Models and More Data Hurt

SGD on Neural Networks Learns Functions of Increasing Complexity

More Data Can Hurt for Linear Regression: Sample-wise Double Descent

Computational Limitations in Robust Classification and Win-Win Results

Minnorm training: an algorithm for training over-parameterized deep neural networks

Adversarial Robustness May Be at Odds With Simplicity

On the Information Bottleneck Theory of Deep Learning

Recent & Upcoming Talks

Visual intelligence is a cornerstone of intelligence. From passive perception to embodied interaction with the world, vision plays a …

Video games have become an attractive testbed for evaluating AI systems, by capturing some aspects of real-world complexity (rich …

Multiagent reinforcement learning has received growing interest across various problem settings and applications. We will first present …

How and why are we succeeding in training huge non-convex deep networks? How can deep neural networks with billions of parameters …

Inverse problems in image processing and computer vision are often solved using prior probability densities, such as spectral or …

Seminar Calendar

Join the mailing list for talk announcements.

Opportunities

We are looking for graduate students and postdocs in the theory of machine learning.

For graduate students, we have openings in the Computer Science, Electrical Engineering, Applied Mathematics, and Statistics degree programs. If you are applying for graduate studies in CS and are interested in the theory of machine learning, please mark both “Machine Learning” and “Theory of Computation” as areas of interest. Please also list the names of faculty you want to work with on your application. ML theory group faculty include Demba Ba (Electrical Engineering and Bioengineering), Boaz Barak and Sham Kakade (Computer Science), Cengiz Pehlevan (Applied Mathematics), and Lucas Janson (Statistics). There are also affiliated ML theory faculty in all of the above departments and more. All of us are open to co-advising students, including across different departments and schools.

Follow us on social media for announcements of other opportunities.