Taylor & Francis Group

Penalized Quantile Regression for Distributed Big Data Using the Slack Variable Representation

Journal contribution, posted on 2020-12-11, authored by Ye Fan, Nan Lin, Xianjun Yin

Penalized quantile regression is a widely used tool for analyzing high-dimensional data with heterogeneity. Although its estimation theory has been well studied in the literature, its computation remains challenging for big data, due to the nonsmoothness of the check loss function and the possible nonconvexity of the penalty term. In this article, we propose QPADM-slack, a parallel algorithm formulated via the alternating direction method of multipliers (ADMM) that supports penalized quantile regression in big data. Our proposal differs from the recent QPADM algorithm in that it uses the slack variable representation of the quantile regression problem. Simulation studies demonstrate that this new formulation is significantly faster than QPADM, especially when the data volume n or the dimension p is large, and that it retains favorable estimation accuracy in both nondistributed and distributed environments. We further illustrate the practical performance of QPADM-slack by analyzing a news popularity dataset.
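The abstract does not spell out the algorithm, but the slack-variable idea it refers to is standard: writing the residual as a difference of nonnegative slacks u − v turns the nonsmooth check loss into a linear objective with linear constraints, and ADMM then alternates a penalized least-squares step for the coefficients with a closed-form update for the slacks. The following is a minimal single-machine sketch of that idea for lasso-penalized quantile regression, not the authors' exact QPADM-slack implementation (in particular, their algorithm distributes the data across machines; the function name `qpadm_slack` and all parameter defaults here are illustrative assumptions).

```python
import numpy as np

def soft_threshold(z, t):
    # Elementwise soft-thresholding: prox of t * ||.||_1.
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def qpadm_slack(X, y, tau=0.5, lam=0.1, rho=1.0, n_iter=500, n_inner=10):
    """Sketch of slack-variable ADMM for lasso-penalized quantile regression.

    Solves  min_beta  sum_i rho_tau(y_i - x_i' beta) + lam * ||beta||_1
    by introducing slacks u, v >= 0 with the constraint X beta + u - v = y,
    so the check loss becomes the linear objective tau*1'u + (1-tau)*1'v.
    """
    n, p = X.shape
    beta = np.zeros(p)
    u = np.zeros(n)
    v = np.zeros(n)
    eta = np.zeros(n)                 # scaled dual variable for the constraint
    L = np.linalg.norm(X, 2) ** 2     # Lipschitz constant of the smooth part
    step = 1.0 / (rho * L)
    for _ in range(n_iter):
        # beta-update: inexact lasso subproblem, a few proximal-gradient steps
        c = y - u + v - eta
        for _ in range(n_inner):
            grad = rho * X.T @ (X @ beta - c)
            beta = soft_threshold(beta - step * grad, step * lam)
        # (u, v)-update: separable closed form, equivalent to the check-loss prox
        w = y - X @ beta - eta
        u = np.maximum(w - tau / rho, 0.0)
        v = np.maximum(-w - (1.0 - tau) / rho, 0.0)
        # dual ascent on the constraint X beta + u - v = y
        eta = eta + (X @ beta + u - v - y)
    return beta
```

Because the slack update is separable across observations, this step is what parallelizes naturally when the rows of X are partitioned across machines, which is the setting the article targets.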

Funding

The authors thank the Editor, an associate editor, and two anonymous reviewers for their helpful comments and suggestions, which greatly improved the article. This work is supported in part by the NVIDIA GPU Grant Program. We thank NVIDIA for granting us a Titan V GPU to carry out this work.

Journal of Computational and Graphical Statistics