Jascha Sohl-Dickstein - Understanding infinite width neural networks

Abstract

As neural networks become wider, their accuracy improves and their behavior becomes easier to analyze theoretically. I will give an introduction to a rapidly growing body of work that examines the learning dynamics of, and the prior over functions induced by, infinitely wide, randomly initialized neural networks. Core results that I will discuss include: that the distribution over functions computed by a wide neural network often corresponds to a Gaussian process with a particular compositional kernel, both before and after training; that the predictions of wide neural networks are linear in their parameters throughout training; and that this perspective enables analytic predictions of how the trainability of finite-width networks depends on hyperparameters and architecture. These results provide some surprising capabilities: for instance, evaluating the test-set predictions of an infinitely wide trained neural network without ever instantiating a neural network, or rapidly training convolutional networks with 10,000+ layers. I will argue that this growing understanding of neural networks in the limit of infinite width is foundational for future theoretical and practical understanding of deep learning.

Neural Tangents: https://github.com/google/neural-tangents

Bio

Jascha is a staff research scientist at Google Brain, where he leads a research team with interests spanning machine learning, physics, and neuroscience. He was previously a visiting scholar in Surya Ganguli's lab at Stanford and an academic resident at Khan Academy. He earned his PhD in 2012 in Bruno Olshausen's lab at the Redwood Center for Theoretical Neuroscience at UC Berkeley. Prior to his PhD, he spent several years working for NASA on the Mars Exploration Rover mission.
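As a brief illustration of the capability described in the abstract (test-set predictions of an infinitely wide trained network without ever instantiating a network), here is a minimal sketch using the Neural Tangents library linked above. The architecture and toy data below are illustrative assumptions, not material from the talk.

# Sketch: closed-form predictions of an infinitely wide, fully connected
# network trained to convergence with gradient descent on MSE loss.
import jax.numpy as jnp
from jax import random
import neural_tangents as nt
from neural_tangents import stax

# An infinitely wide ReLU network; kernel_fn computes its NNGP and NTK
# kernels analytically (no finite network is ever constructed or trained).
init_fn, apply_fn, kernel_fn = stax.serial(
    stax.Dense(512), stax.Relu(),
    stax.Dense(512), stax.Relu(),
    stax.Dense(1)
)

# Toy regression data, assumed purely for illustration.
key = random.PRNGKey(0)
x_train = random.normal(key, (20, 10))
y_train = jnp.sin(x_train[:, :1])
x_test = random.normal(random.PRNGKey(1), (5, 10))

# Exact infinite-width predictions after training: 'nngp' gives the
# Bayesian posterior mean, 'ntk' the gradient-descent-trained network.
predict_fn = nt.predict.gradient_descent_mse_ensemble(kernel_fn, x_train, y_train)
y_test_nngp = predict_fn(x_test=x_test, get='nngp')
y_test_ntk = predict_fn(x_test=x_test, get='ntk')
print(y_test_nngp.shape, y_test_ntk.shape)  # (5, 1) (5, 1)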

Location
Virtual.