One-Sided Matrix Completion from Ultra-Sparse Samples

arXiv:2601.12213v1 Announce Type: cross
Abstract: Matrix completion is a classical problem that has received recurring interest across a wide range of fields. In this paper, we revisit this problem in an ultra-sparse sampling regime, where each entry of an unknown $n \times d$ matrix $M$ (with $n \ge d$) is observed independently with probability $p = C / d$, for a fixed integer $C \ge 2$. This setting is motivated by applications involving large, sparse panel datasets, where the number of rows far exceeds the number of columns. When each row contains only $C$ observed entries (fewer than the rank of $M$), accurate imputation of $M$ is impossible. Instead, we estimate the row span of $M$ or the averaged second-moment matrix $T = M^{\top} M / n$.
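To make the sampling regime concrete, here is a minimal NumPy sketch of the observation model: a rank-$r$ matrix whose entries are each revealed independently with probability $p = C/d$. The specific sizes $n$, $d$, $r$, $C$ and the Gaussian factor model are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

# Illustrative sizes; n >> d and p = C/d match the regime in the abstract,
# but these particular values and the Gaussian factors are assumptions.
rng = np.random.default_rng(0)
n, d, r, C = 20_000, 100, 5, 3
A, B = rng.standard_normal((n, r)), rng.standard_normal((d, r))
M = A @ B.T / np.sqrt(r)              # rank-r ground-truth matrix
mask = rng.random((n, d)) < C / d     # each entry observed w.p. p = C/d
X = np.where(mask, M, 0.0)            # observed entries, zeros elsewhere
T = M.T @ M / n                       # target second-moment matrix
```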
The empirical second-moment matrix computed from observed entries exhibits non-random and sparse missingness. We propose an unbiased estimator that normalizes each nonzero entry of the second moment by its observed frequency, followed by gradient descent to impute the missing entries of $T$. The normalization divides a weighted sum of $n$ binomial random variables by the total number of ones. We show that the estimator is unbiased for any $p$ and enjoys low variance. When the row vectors of $M$ are drawn uniformly from a rank-$r$ factor model satisfying an incoherence condition, we prove that if $n \ge O(d r^5 \epsilon^{-2} C^{-2} \log d)$, any local minimum of the gradient-descent objective is approximately global and recovers $T$ with error at most $\epsilon^2$.
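A minimal sketch of the two-stage procedure, under the simplest reading of the abstract: each available entry of the empirical second moment is divided by the number of rows in which both coordinates were co-observed, and a rank-$r$ factorization is then fit to the non-missing entries by gradient descent on a squared loss. The function names, objective, and hyperparameters are assumptions for illustration, not the paper's exact algorithm.

```python
import numpy as np

def debiased_second_moment(X, mask):
    """Entrywise estimate of T = M^T M / n from partial observations.

    Each entry of X.T @ X is a weighted sum over the rows in which both
    coordinates were observed; dividing by the co-observation count makes
    each available entry unbiased for the matching entry of T. Entries
    with no co-observations are left missing (NaN).
    """
    S = X.T @ X                                         # weighted sums
    counts = mask.T.astype(float) @ mask.astype(float)  # co-observation counts
    return np.where(counts > 0, S / np.maximum(counts, 1.0), np.nan)

def complete_T(T_hat, r, steps=5000, lr=2e-3, seed=0):
    """Impute the NaN entries of T_hat by fitting a rank-r model U @ U.T
    to the observed entries with plain gradient descent."""
    obs = ~np.isnan(T_hat)
    target = np.nan_to_num(T_hat)
    rng = np.random.default_rng(seed)
    U = 0.1 * rng.standard_normal((T_hat.shape[0], r))
    for _ in range(steps):
        R = (U @ U.T - target) * obs   # residual on observed entries only
        U -= lr * (R + R.T) @ U        # gradient of 0.5 * ||R||_F^2
    return U @ U.T

# Continuing the sketch above: recover T from the ultra-sparse sample.
T_est = complete_T(debiased_second_moment(X, mask), r)
print(np.linalg.norm(T_est - T) / np.linalg.norm(T))
```

Since the abstract's guarantee says any local minimum of the gradient-descent objective is approximately global in the stated regime, plain gradient descent from a small random initialization suffices for this sketch.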
Experiments on both synthetic and real-world data validate our approach. On three MovieLens datasets, our algorithm reduces bias by $88\%$ relative to baseline estimators. On synthetic data, we also empirically validate that the required sample size $n$ scales linearly with $d$. On an Amazon reviews dataset with sparsity $10^{-7}$, our method reduces the recovery error of $T$ by $59\%$ and of $M$ by $38\%$ compared to baseline methods.
