Array-valued predictors arise ubiquitously in brain connectivity studies, sensor network localization, and integrative genomics. We consider the problem of learning the relationship between a scalar-valued response and a high-dimensional array-valued predictor. Traditional regression methods take a parametric approach, imposing an a priori functional form on the relationship between variables. These parametric models, however, are inadequate for structure learning and often fail to predict accurately. Here, we develop a learning reduction framework that addresses a range of learning tasks, from classification to regression, for matrix-valued predictors. Our proposal achieves interpretable prediction via low-rank, two-way sparse halfspace learning for the level sets of the target function. We establish statistical accuracy, excess risk bounds, and efficient algorithms. We demonstrate the advantage of our method over previous approaches through applications to human brain connectome data. Time permitting, I will present our recent results on nonparametric approaches to tensor estimation and completion.
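To illustrate the reduction idea, here is a minimal sketch of regression via stacked level-set classifiers for matrix predictors. All names, the simulated data, and the classifier are assumptions for illustration: an ordinary least-squares linear classifier on vectorized matrices stands in for the low-rank two-way sparse halfspace learner described in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data (hypothetical): n matrix predictors X_i in R^{d x d} and a
# scalar response y_i driven by a rank-1 coefficient matrix B = u v^T.
n, d = 300, 8
u, v = rng.normal(size=d), rng.normal(size=d)
B = np.outer(u, v) / d
X = rng.normal(size=(n, d, d))
y = np.einsum('nij,ij->n', X, B) + 0.1 * rng.normal(size=n)

# Learning reduction: for each threshold t_j on a grid, fit a binary
# classifier for the level-set event {y > t_j}.  Least squares on +/-1
# labels is a simple stand-in classifier, not the paper's estimator.
thresholds = np.quantile(y, np.linspace(0.05, 0.95, 19))
Xv = np.hstack([X.reshape(n, -1), np.ones((n, 1))])  # vectorize + intercept

def fit_level_classifier(t):
    z = np.where(y > t, 1.0, -1.0)                   # level-set labels
    w, *_ = np.linalg.lstsq(Xv, z, rcond=None)
    return w

weights = [fit_level_classifier(t) for t in thresholds]

def predict(x_new):
    """Aggregate the level-set votes into a scalar prediction."""
    xv = np.append(x_new.reshape(-1), 1.0)
    votes = np.array([xv @ w > 0 for w in weights])
    # Riemann-sum aggregation: start at the lowest threshold and add the
    # gap to the next threshold whenever its classifier votes "above".
    gaps = np.diff(thresholds, prepend=thresholds[0])
    return thresholds[0] + np.sum(votes * gaps)
```

The point of the sketch is the reduction itself: a continuous regression problem becomes a family of binary classification problems, one per level set, whose predictions are summed back into a scalar estimate.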
Joint work with Chanwoo Lee (UW-Madison), Lexin Li (UC-Berkeley), and Helen Hao Zhang (UArizona).