Linear Time Dynamic Programming for Computing Breakpoints in the Regularization Path of Models Selected From a Finite Set

Joseph Vargovich, Toby Dylan Hocking

Research output: Contribution to journal › Article › peer-review

Abstract

Many learning algorithms are formulated in terms of finding model parameters that minimize a data-fitting loss function plus a regularizer. When the regularizer involves the ℓ0 pseudo-norm, the resulting regularization path consists of a finite set of models. The fastest existing algorithm for computing the breakpoints in the regularization path is quadratic in the number of models, so it scales poorly to high-dimensional problems. We provide new formal proofs that a dynamic programming algorithm can be used to compute the breakpoints in linear time. Our empirical results include analysis of the proposed algorithm in the context of various learning problems (regression, changepoint detection, clustering, and matrix factorization). We use a detailed analysis of changepoint detection problems to demonstrate the improved accuracy and speed of our approach relative to grid search and a previous quadratic time algorithm.
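To make the setting concrete, the model selection function maps a penalty λ ≥ 0 to argmin_i {L_i + λ·C_i}, where L_i is the loss and C_i the complexity of model i; the breakpoints are the penalty values at which the selected model changes. The sketch below is a minimal Python illustration of a stack-based, linear-time computation of those breakpoints, in the spirit of the algorithm studied in the paper. The function name, input conventions (models sorted by strictly increasing complexity with strictly decreasing loss), and output format are illustrative assumptions, not the authors' published implementation.

```python
def model_selection_breakpoints(loss, complexity):
    """Breakpoints of the model selection function
    lambda -> argmin_i loss[i] + lambda * complexity[i].

    Assumes models are sorted by strictly increasing complexity
    (distinct complexities) with strictly decreasing loss.
    Returns (lambda_start, lambda_end, model_index) intervals covering
    [0, infinity), in order of increasing penalty.
    """
    K = len(loss)
    # Stack entries: (model index, penalty at which that model starts
    # being selected). The most complex model is selected at lambda = 0.
    stack = [(K - 1, 0.0)]
    for i in range(K - 2, -1, -1):  # from more complex to less complex
        start_i = 0.0
        while stack:
            j, start_j = stack[-1]
            # Penalty at which the simpler model i overtakes model j.
            cross = (loss[i] - loss[j]) / (complexity[j] - complexity[i])
            if cross <= start_j:
                # Model j is never optimal once model i is available.
                stack.pop()
            else:
                start_i = cross
                break
        stack.append((i, start_i))
    # Convert the stack (decreasing complexity, increasing penalty)
    # into intervals of the penalty lambda.
    intervals = []
    for pos, (idx, start) in enumerate(stack):
        end = stack[pos + 1][1] if pos + 1 < len(stack) else float("inf")
        intervals.append((start, end, idx))
    return intervals


# Example with three models of decreasing loss and increasing complexity:
print(model_selection_breakpoints([10.0, 6.0, 5.0], [1, 2, 3]))
# [(0.0, 1.0, 2), (1.0, 4.0, 1), (4.0, inf, 0)]
```

Each model is pushed and popped at most once, so the loop is linear in the number of models, whereas the quadratic baseline compares every pair of models.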

Original language: English (US)
Pages (from-to): 313-323
Number of pages: 11
Journal: Journal of Computational and Graphical Statistics
Volume: 31
Issue number: 2
DOIs
State: Published - 2022

Keywords

  • Binary segmentation
  • Changepoint detection
  • Dynamic programming
  • Model selection

ASJC Scopus subject areas

  • Statistics and Probability
  • Discrete Mathematics and Combinatorics
  • Statistics, Probability and Uncertainty
