
Information-Based Optimal Subdata Selection for Big Data Linear Regression

Dataset (Version 2, posted 2018-06-28; Version 1, posted 2018-01-15)
Authored by HaiYing Wang, Min Yang, and John Stufken

Extraordinary amounts of data are being produced in many branches of science. Proven statistical methods are no longer applicable to extraordinarily large datasets due to computational limitations, so a critical step in big data analysis is data reduction. Existing investigations in the context of linear regression focus on subsampling-based methods. However, not only is this approach prone to sampling errors, but it also leads to a covariance matrix of the estimators that is typically bounded from below by a term of the order of the inverse of the subdata size. We propose a novel approach, termed information-based optimal subdata selection (IBOSS). Compared to leading existing subdata methods, the IBOSS approach has the following advantages: (i) it is significantly faster; (ii) it is suitable for distributed parallel computing; (iii) the variances of the slope parameter estimators converge to 0 as the full data size increases even if the subdata size is fixed, that is, the convergence rate depends on the full data size; (iv) data analysis for IBOSS subdata is straightforward, and the sampling distribution of an IBOSS estimator is easy to assess. Theoretical results and extensive simulations demonstrate that the IBOSS approach is superior to subsampling-based methods, sometimes by orders of magnitude. The advantages of the new approach are also illustrated through the analysis of real data. Supplementary materials for this article are available online.
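To make the idea concrete, the following is a minimal Python sketch of a D-optimality-motivated IBOSS-style selection rule for linear regression: for each covariate, the rows with the most extreme (smallest and largest) values among those not yet selected are retained, and ordinary least squares is then fit on the resulting subdata. The function names, the NumPy interface, and the use of a full sort are illustrative assumptions rather than the authors' implementation; a production version would use a partial-sort/selection step to achieve the advertised speed.

```python
import numpy as np

def iboss_select(X, r):
    """Return indices of an IBOSS-style subdata set of (approximately) size r.

    For each of the p covariates, keep the r/(2p) rows with the smallest
    values and the r/(2p) rows with the largest values among the rows not
    yet selected. Illustrative sketch only; names and interface are
    hypothetical, not the authors' reference code.
    """
    n, p = X.shape
    k = r // (2 * p)                      # rows per tail per covariate
    selected = np.zeros(n, dtype=bool)
    for j in range(p):
        remaining = np.where(~selected)[0]
        order = np.argsort(X[remaining, j])
        # k smallest and k largest values of covariate j among remaining rows
        tails = np.concatenate([order[:k], order[-k:]])
        selected[remaining[tails]] = True
    return np.where(selected)[0]

def iboss_ols(X, y, r):
    """Fit ordinary least squares (with intercept) on the selected subdata."""
    idx = iboss_select(X, r)
    Xs = np.column_stack([np.ones(len(idx)), X[idx]])
    beta, *_ = np.linalg.lstsq(Xs, y[idx], rcond=None)
    return beta

# Toy example: n = 100,000 full-data rows, p = 5 covariates, subdata size r = 1,000.
rng = np.random.default_rng(0)
X = rng.standard_normal((100_000, 5))
y = 1.0 + X @ np.array([2.0, -1.0, 0.5, 3.0, -2.0]) + rng.standard_normal(100_000)
print(iboss_ols(X, y, r=1000))           # estimates of intercept and slopes
```

Because selection touches each covariate column independently, the loop over covariates can be distributed across machines holding different blocks of the full data, which is one reason the approach suits parallel computing.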

Funding

Wang’s research was supported by a Microsoft Azure for Research Award and a Simons Foundation Collaboration Grant for Mathematicians (515599). Yang’s research was supported by NSF grant DMS-140751. Stufken’s research was supported by NSF grant DMS-1506125.
