Can Uniform Convergence Explain Interpolation Learning?
Start Date: Thu, Oct 08, 2020, 3:30 PM
End Date: Thu, Oct 08, 2020, 4:30 PM
Presented By
D.J. Sutherland (University of British Columbia)
Event Series: Statistics Colloquia


Some modern machine learning methods exactly fit noisy training data, yet still generalize well – counter to traditional intuition. Recently, several groups have provided theoretical accounts of various models exhibiting this behavior. Strikingly, however, none of these accounts is based on the core workhorse of statistical learning theory, uniform convergence. Nagarajan and Kolter (2019) have also separately raised significant questions about the ability of uniform convergence to explain generalization in some settings. Is it time, then, to abandon uniform convergence? We show that in a particular high-dimensional linear regression problem, where the minimum-norm interpolating predictor is consistent, uniform convergence cannot explain learning. Yet we demonstrate that a slightly weaker (but standard) notion, uniform convergence over zero-error predictors, can explain consistency here. As such, we argue that we as a field should consider this weaker notion more broadly. (Based on joint work with Lijia Zhou and Nathan Srebro.)
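To make the setting concrete, the snippet below is a minimal sketch (not from the talk) of the minimum-norm interpolating predictor the abstract refers to: in an overparameterized linear regression with more features than samples, the Moore–Penrose pseudoinverse picks out the smallest-norm weight vector that exactly fits the noisy labels. All dimensions and names here are illustrative assumptions.

```python
import numpy as np

# Hypothetical overparameterized setup: far more features (d) than samples (n),
# so infinitely many weight vectors interpolate the training data exactly.
rng = np.random.default_rng(0)
n, d = 50, 1000
X = rng.standard_normal((n, d))
w_true = np.zeros(d)
w_true[0] = 1.0
y = X @ w_true + 0.1 * rng.standard_normal(n)  # noisy labels

# Minimum-norm interpolator: w_hat = X^+ y. Among all w satisfying X w = y,
# the pseudoinverse solution has the smallest Euclidean norm.
w_hat = np.linalg.pinv(X) @ y

# The predictor fits the noisy training data essentially exactly...
train_err = np.max(np.abs(X @ w_hat - y))
print("max training residual:", train_err)

# ...yet its norm stays small relative to arbitrary interpolating solutions,
# e.g. w_hat plus any component in the null space of X.
print("||w_hat||:", np.linalg.norm(w_hat))
```

Whether such an exact-fit predictor also generalizes, and which notion of uniform convergence (if any) can explain that, is the subject of the talk.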
