Low-rank approximation


Low-rank approximation: an overview

Low-rank matrix (more generally, tensor) approximation replaces a matrix with one whose rank is lower than that of the original. The goal is a more compact representation of the data with limited loss of information. Let {\bf V} be an m \times n matrix; then its rank-k approximation by two factor matrices {\bf W} \in \mathbb{R}^{m \times k} and {\bf H} \in \mathbb{R}^{k \times n} is given by

{\bf V} \approx {\bf WH}.

The low-rank approximation of a matrix can be stored and manipulated more economically than the matrix itself: as the approximation above shows, only k(m + n) entries have to be stored instead of the mn entries of the original matrix {\bf V}. Low-rank approximation appears in many applications, including image processing, data mining, noise reduction, seismic inversion, latent semantic indexing, principal component analysis (PCA), machine learning, regularization for ill-posed problems, statistical data analysis, DNA microarray data, and web search models. It also plays a very important role in tensor decompositions.
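As a concrete illustration of the storage saving above, the following sketch (not from the original text; matrix sizes are arbitrary choices) computes a rank-k approximation via the truncated SVD, which by the Eckart–Young theorem is optimal in the Frobenius norm:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, k = 100, 80, 5

# Build a matrix that is approximately rank k, plus small noise.
V = rng.standard_normal((m, k)) @ rng.standard_normal((k, n))
V += 0.01 * rng.standard_normal((m, n))

# Truncated SVD: keep only the k largest singular values.
U, s, Vt = np.linalg.svd(V, full_matrices=False)
W = U[:, :k] * s[:k]          # m x k factor
H = Vt[:k, :]                 # k x n factor

rel_err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
print(f"storage: {k*(m+n)} vs {m*n} entries, relative error {rel_err:.3e}")
```

Here only k(m + n) = 900 numbers are kept instead of mn = 8000, with a small relative error because the data are close to rank k.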

Table: Various matrix factorization (decomposition) algorithms and their geometries

Algorithm  Geometry
Singular value decomposition (SVD)  {\rm St}(r,m) \times {\rm Diag}_{++}(r) \times {\rm St}(r,n)
Polar factorization  {\rm St}(r,m) \times {\rm Sym}(r) \times {\rm St}(r,n)
2-factor full-rank factorization  \mathbb{R}_*^{m \times r} \times \mathbb{R}_*^{n \times r}
3-factor full-rank factorization  {\rm St}(r,m) \times {\rm GL}(r) \times {\rm St}(r,n)
Bi-Grassmann  {\rm Gr}(r,m) \times \mathbb{R}^{n \times r} \times {\rm Gr}(r,n)
Subspace-projection  {\rm Gr}(r,m) \times \mathbb{R}^{n \times r}
Non-negative matrix factorization (NMF)  \mathbb{R}^{m \times r}_{+} \times \mathbb{R}^{r \times n}_{+}
Orthogonal NMF  \mathbb{R}^{m \times r}_{+} \times {\rm St}_+(r,n)

However, these factorizations suffer severe degradation when the data contain large outliers. To address this issue, robust low-rank approximation methods have been developed that instead assume a low-rank-plus-sparse data model, in which disturbances may have arbitrary magnitude but occur at only a few positions. The low-rank constraint is addressed either through nuclear-norm minimization or by factorizing the low-rank (fixed-rank) matrix. On the other hand, the sparsity of the residual error is commonly enforced with the \ell_1 norm, because it is the closest convex relaxation of the ideal \ell_0 measure.
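One standard instance of this low-rank-plus-sparse model is robust PCA solved by the inexact augmented Lagrangian method; the sketch below (an assumed illustration, not the laboratory's own algorithm) alternates singular value thresholding for the nuclear-norm term with \ell_1 soft thresholding for the sparse term:

```python
import numpy as np

def shrink(X, tau):
    """Elementwise soft thresholding: the proximal operator of the l1 norm."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svt(X, tau):
    """Singular value thresholding: the proximal operator of the nuclear norm."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def rpca(V, lam=None, tol=1e-7, max_iter=200):
    """Split V into low-rank L plus sparse S via an inexact ALM scheme."""
    m, n = V.shape
    lam = lam or 1.0 / np.sqrt(max(m, n))   # standard l1 weight
    mu = 1.25 / np.linalg.norm(V, 2)        # initial penalty parameter
    mu_bar = mu * 1e7                       # cap on the penalty
    L = np.zeros_like(V); S = np.zeros_like(V); Y = np.zeros_like(V)
    for _ in range(max_iter):
        L = svt(V - S + Y / mu, 1.0 / mu)
        S = shrink(V - L + Y / mu, lam / mu)
        Y += mu * (V - L - S)
        mu = min(mu * 1.5, mu_bar)
        if np.linalg.norm(V - L - S) / np.linalg.norm(V) < tol:
            break
    return L, S

# Demo on synthetic low-rank-plus-sparse data with gross outliers.
rng = np.random.default_rng(0)
m, n, r = 60, 50, 3
L0 = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))
S0 = np.zeros((m, n))
idx = rng.random((m, n)) < 0.05             # outliers at only 5% of positions
S0[idx] = 10.0 * rng.standard_normal(idx.sum())
L, S = rpca(L0 + S0)
print(np.linalg.norm(L - L0) / np.linalg.norm(L0))
```

Despite the outliers being much larger than the low-rank signal, the separation succeeds because they occupy few positions, which is exactly the sparsity assumption described above.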

Our laboratory has tackled some of the issues above, especially with respect to the handling of “large-scale” data, as follows:


  • Low-rank tensor approximation, online learning, and their applications (video and image analysis, network traffic analysis, environmental data analysis, time-series data analysis, etc.)
    • HK, “Fast online low-rank tensor subspace tracking by CP decomposition using recursive least squares from incomplete observations,” Neurocomputing, 2019 (arXiv, 2017; extended version of the paper below).
    • HK, “Online low-rank tensor subspace tracking from incomplete data by CP decomposition using recursive least squares,” IEEE ICASSP, 2016 (arXiv, 2016).
  • Online low-rank tensor learning and its application to network traffic analysis
  • Nonnegative (low-rank) matrix factorization (NMF) and its stochastic algorithms, and its applications (clustering, image noise reduction, etc.)
    • HK, “Accelerated stochastic multiplicative update with gradient averaging for nonnegative matrix factorizations,” EUSIPCO, 2018.
    • HK, “Stochastic variance reduced multiplicative update for nonnegative matrix factorization,” IEEE ICASSP, 2018 (arXiv, 2017).
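The multiplicative updates that the stochastic NMF algorithms above accelerate can be sketched as follows; this is the classical batch Lee–Seung rule, shown here as an assumed baseline for illustration rather than the laboratory's own method:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, r = 40, 30, 4
V = rng.random((m, r)) @ rng.random((r, n))      # nonnegative data of rank r

W = rng.random((m, r))                           # nonnegative initializations
H = rng.random((r, n))
eps = 1e-12                                      # guard against division by zero
for _ in range(300):
    # Multiplicative updates: each factor stays elementwise nonnegative
    # because it is multiplied by a ratio of nonnegative terms.
    H *= (W.T @ V) / (W.T @ W @ H + eps)
    W *= (V @ H.T) / (W @ H @ H.T + eps)

print(np.linalg.norm(V - W @ H) / np.linalg.norm(V))
```

The update monotonically decreases the Frobenius-norm objective \|{\bf V} - {\bf WH}\|_F^2; the stochastic variants cited above replace the full-data gradients with mini-batch estimates plus variance reduction.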