Low-rank approximation
Low-rank approximation: an overview
Low-rank matrix (more generally, tensor) approximation means approximating a matrix by one whose rank is lower than that of the original matrix. The goal is to obtain a more compact representation of the data with limited loss of information. Let $A \in \mathbb{R}^{m \times n}$ be a matrix; then its low-rank approximation by two factor matrices $U \in \mathbb{R}^{m \times r}$ and $V \in \mathbb{R}^{n \times r}$ with rank $r \ll \min(m, n)$ is given by
$A \approx U V^T$.
The low-rank approximation of the matrix $A$ can be stored and manipulated more economically than the matrix itself: only $(m+n)r$ entries have to be stored instead of the $mn$ entries of the original matrix $A$. The low-rank approximation of a matrix appears in many applications, including image processing, data mining, noise reduction, seismic inversion, latent semantic indexing, principal component analysis (PCA), machine learning, regularization for ill-posed problems, statistical data analysis, DNA microarray data, web search models, and so on. Low-rank approximation of matrices also plays a very important role in tensor decompositions.
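As a minimal illustration of this factorized representation (a generic NumPy sketch, not code from the publications below), the following snippet forms a rank-$r$ approximation via the truncated SVD, which gives the best rank-$r$ approximation in the Frobenius norm, and compares the storage counts; the sizes and variable names are chosen purely for illustration.

```python
import numpy as np

# Sizes chosen purely for illustration; any m, n with r << min(m, n) behave the same.
m, n, r = 500, 300, 10
rng = np.random.default_rng(0)

# A noisy matrix that is approximately of rank r.
A = rng.standard_normal((m, r)) @ rng.standard_normal((r, n)) \
    + 0.01 * rng.standard_normal((m, n))

# The truncated SVD gives the best rank-r approximation in the Frobenius norm
# (Eckart-Young theorem).
U, s, Vt = np.linalg.svd(A, full_matrices=False)
U_r = U[:, :r] * s[:r]    # m x r factor (singular values absorbed into U)
V_r = Vt[:r, :].T         # n x r factor

A_r = U_r @ V_r.T         # rank-r approximation of A

rel_err = np.linalg.norm(A - A_r, "fro") / np.linalg.norm(A, "fro")
print("relative approximation error:", rel_err)
print("stored entries: (m + n) * r =", (m + n) * r, "versus m * n =", m * n)
```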
Table: Various matrix factorization (decomposition) algorithms and their geometries
Algorithms | Geometry |
Singular value decomposition (SVD) | |
Polar factorization | |
2-factor full-rank factorization | |
3-factor full-rank factorization | |
Bi-Grassmann | |
Subspace-projection | |
Non-negative matrix factorization (NMF) | |
Orthogonal NMF | |
However, these factorizations suffer from severe degradation when the data contain large outliers. To address this issue, robust low-rank approximation methods have been developed that are instead tailored to the low-rank and sparse data model, which assumes that disturbances may have arbitrary magnitude but occur at only a few positions. The low-rank constraint is handled either through nuclear-norm minimization or by factorizing the low-rank matrix (fixed-rank factorization). The sparsity of the residual error, on the other hand, is commonly enforced with the $\ell_1$ norm because it is the closest convex relaxation of the ideal sparsity measure, the $\ell_0$ norm.
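One concrete and widely used instance of this low-rank and sparse model is robust PCA, which combines exactly these two ingredients: nuclear-norm minimization for the low-rank part and an $\ell_1$ penalty for the sparse residual. The following is a minimal sketch of a standard augmented-Lagrangian (ADMM-style) iteration for it; the parameter defaults and names are illustrative and are not taken from the publications below.

```python
import numpy as np

def soft_threshold(X, tau):
    """Entrywise soft thresholding (proximal operator of the l1 norm)."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def robust_pca(M, lam=None, mu=None, n_iter=200, tol=1e-7):
    """Decompose M into low-rank L plus sparse S by (approximately) solving
    min ||L||_* + lam * ||S||_1  subject to  L + S = M,
    using a simple augmented-Lagrangian iteration."""
    m, n = M.shape
    if lam is None:
        lam = 1.0 / np.sqrt(max(m, n))               # common default choice
    if mu is None:
        mu = 0.25 * m * n / (np.abs(M).sum() + 1e-12)
    L = np.zeros(M.shape)
    S = np.zeros(M.shape)
    Y = np.zeros(M.shape)                            # dual variable for L + S = M
    for _ in range(n_iter):
        # Proximal step for the nuclear norm: singular value thresholding.
        U, sig, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        L = (U * np.maximum(sig - 1.0 / mu, 0.0)) @ Vt
        # Proximal step for the l1 penalty: entrywise soft thresholding.
        S = soft_threshold(M - L + Y / mu, lam / mu)
        # Dual ascent on the equality constraint.
        R = M - L - S
        Y = Y + mu * R
        if np.linalg.norm(R, "fro") <= tol * np.linalg.norm(M, "fro"):
            break
    return L, S
```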
Our laboratory has tackled some of the issues above, especially with respect to handling “large-scale” data, as listed below:
Publications:
- Low-rank tensor approximation, online learning, and their applications (video and image analysis, network traffic analysis, environmental data analysis, time-series data analysis, etc.)
  - HK, “Fast online low-rank tensor subspace tracking by CP decomposition using recursive least squares from incomplete observations,” Neurocomputing 2019, arXiv 2017 (extended version of the paper below).
  - HK, “Online low-rank tensor subspace tracking from incomplete data by CP decomposition using recursive least squares,” IEEE ICASSP 2016, arXiv 2016.
- Online low-rank tensor learning and its application to network traffic analysis
  - HK, W. Kellerer, and M. Kleinsteuber, “Network volume anomaly detection and identification in large-scale networks based on online time-structured traffic tensor tracking,” IEEE Transactions on Network and Service Management 2016, arXiv 2016.
- Nonnegative (low-rank) matrix factorization (NMF), its stochastic algorithms, and their applications (clustering, image noise reduction, etc.); see the multiplicative-update sketch after this list
  - HK, “Accelerated stochastic multiplicative update with gradient averaging for nonnegative matrix factorizations,” EUSIPCO 2018.
  - HK, “Stochastic variance reduced multiplicative update for nonnegative matrix factorization,” IEEE ICASSP 2018, arXiv 2017.
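For orientation, the sketch below shows the classical batch multiplicative-update rules for NMF (Lee and Seung); the publications above develop accelerated and stochastic variants of updates of this type, which are not reproduced here. All sizes and names are illustrative.

```python
import numpy as np

def nmf_multiplicative(V, r, n_iter=500, eps=1e-10, seed=0):
    """Classical multiplicative-update NMF (Lee & Seung) for V ~= W @ H
    under the Frobenius-norm objective; V, W, H are entrywise nonnegative."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, r))
    H = rng.random((r, n))
    for _ in range(n_iter):
        # Multiplicative updates keep the factors nonnegative by construction.
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Usage on a small random nonnegative matrix (sizes are illustrative only).
V = np.random.default_rng(1).random((100, 80))
W, H = nmf_multiplicative(V, r=5)
print("relative error:", np.linalg.norm(V - W @ H, "fro") / np.linalg.norm(V, "fro"))
```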