
Feature Screening for Massive Data Analysis by Subsampling

Dataset posted on 2021-11-29, authored by Xuening Zhu, Rui Pan, Shuyuan Wu, and Hansheng Wang

Modern statistical analysis often encounters massive datasets with ultrahigh-dimensional features. In this work, we develop a subsampling approach for feature screening with massive datasets. The approach is implemented by repeatedly subsampling the massive data and can be used for analysis tasks under memory constraints. To conduct the procedure, we first calculate an R-squared screening measure (and the related sample moments) on each subsample. Second, we consider three methods for combining the local statistics: in addition to a simple average, we design a jackknife debiased screening measure and an aggregated moment screening measure. Both of the latter approaches reduce the bias of the subsampling screening measure and therefore increase the accuracy of feature screening. Last, we propose a novel sequential sampling method that is more computationally efficient than traditional random sampling. The theoretical properties of the three screening measures under both sampling schemes are rigorously discussed. Finally, we illustrate the usefulness of the proposed method with an airline dataset containing 32.7 million records.
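
The sketch below illustrates the general idea of the simple-average variant only, assuming the marginal R-squared screening measure is the squared Pearson correlation between each feature and the response; the function names, subsample size, and number of repetitions are illustrative choices, not values taken from the paper, and the jackknife debiased and aggregated moment variants are not shown.

```python
import numpy as np

def r2_screen_subsample(X_sub, y_sub):
    """Marginal R-squared screening measure on one subsample:
    the squared Pearson correlation between each feature and the response."""
    Xc = X_sub - X_sub.mean(axis=0)
    yc = y_sub - y_sub.mean()
    cov = Xc.T @ yc / len(y_sub)          # per-feature covariance with y
    return cov**2 / (Xc.var(axis=0) * yc.var())

def subsampled_screening(X, y, n_sub=1000, n_rep=50, top_k=20, seed=None):
    """Simple-average combination: average the subsample R-squared measures
    over repeated random subsamples, then rank features by the average."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    avg = np.zeros(p)
    for _ in range(n_rep):
        idx = rng.choice(n, size=n_sub, replace=False)  # one random subsample
        avg += r2_screen_subsample(X[idx], y[idx])
    avg /= n_rep
    return np.argsort(avg)[::-1][:top_k]   # indices of the top-ranked features
```

In practice, each subsample would be read from disk rather than indexed from an in-memory array, which is what makes the procedure suitable under memory constraints.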

Funding

Xuening Zhu is supported by the National Natural Science Foundation of China (nos. 11901105, 71991472, U1811461) and the Shanghai Sailing Program for Youth Science and Technology Excellence (19YF1402700). The research of Rui Pan is supported by the National Natural Science Foundation of China (nos. 11601539, 11631003) and the Emerging Interdisciplinary Project of the Central University of Finance and Economics. Hansheng Wang's research is partially supported by the National Natural Science Foundation of China (no. 11831008).
