Bias-variance Tradeoff in Tensor Estimation

arXiv:2509.17382v2 Announce Type: replace
Abstract: We study denoising of a third-order tensor when the ground-truth tensor is not necessarily Tucker low-rank. Specifically, we observe $$ Y = X^\ast + Z \in \mathbb{R}^{p_{1} \times p_{2} \times p_{3}}, $$ where $X^\ast$ is the ground-truth tensor and $Z$ is the noise tensor. We propose a simple variant of the higher-order tensor SVD estimator $\widetilde{X}$. We show that, uniformly over all user-specified Tucker ranks $(r_{1}, r_{2}, r_{3})$, $$ \| \widetilde{X} - X^\ast \|_{\mathrm{F}}^2 = O\Big( \kappa^2 \Big\{ r_{1} r_{2} r_{3} + \sum_{k=1}^{3} p_{k} r_{k} \Big\} + \xi_{(r_{1}, r_{2}, r_{3})}^2 \Big) \quad \text{with high probability.} $$ Here, the bias term $\xi_{(r_1, r_2, r_3)}$ is the best achievable approximation error of $X^\ast$ over the class of tensors with Tucker ranks $(r_1, r_2, r_3)$; $\kappa^2$ quantifies the noise level; and the variance term $\kappa^2 \{ r_{1} r_{2} r_{3} + \sum_{k=1}^{3} p_{k} r_{k} \}$ scales with the effective number of free parameters in the estimator $\widetilde{X}$. Our analysis yields a clean rank-adaptive bias-variance tradeoff: as the ranks of the estimator $\widetilde{X}$ increase, the bias $\xi_{(r_{1}, r_{2}, r_{3})}$ decreases and the variance increases. As a byproduct, we also obtain a convenient bias-variance decomposition for vanilla low-rank SVD matrix estimators.
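To make the setup concrete, here is a minimal NumPy sketch of a vanilla rank-$(r_1, r_2, r_3)$ HOSVD truncation estimator and the bias-variance tradeoff it exhibits on synthetic data. This is an illustration only: the paper proposes a *variant* of this estimator with stronger guarantees, and the noise scale, dimensions, and function names below are choices made for the example, not taken from the paper.

```python
import numpy as np

def unfold(T, mode):
    # Mode-k unfolding: bring axis `mode` to the front and flatten the rest.
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hosvd_truncate(Y, ranks):
    # Plain HOSVD truncation: for each mode k, project onto the span of the
    # top-r_k left singular vectors of the mode-k unfolding of Y.
    X = Y.copy()
    for k, r in enumerate(ranks):
        U, _, _ = np.linalg.svd(unfold(Y, k), full_matrices=False)
        P = U[:, :r] @ U[:, :r].T  # projector onto the leading subspace
        # Apply the projector along mode k.
        X = np.moveaxis(np.tensordot(P, np.moveaxis(X, k, 0), axes=(1, 0)), 0, k)
    return X

# Synthetic ground truth with Tucker ranks (3, 3, 3), plus Gaussian noise.
rng = np.random.default_rng(0)
p, r_true, sigma = 20, 3, 0.1
G = rng.normal(size=(r_true, r_true, r_true))
Us = [np.linalg.qr(rng.normal(size=(p, r_true)))[0] for _ in range(3)]
X_star = np.einsum('abc,ia,jb,kc->ijk', G, *Us)
Y = X_star + sigma * rng.normal(size=X_star.shape)

# Under-specified ranks incur bias; over-specified ranks incur variance.
for r in (1, 3, 6):
    err = np.linalg.norm(hosvd_truncate(Y, (r, r, r)) - X_star) ** 2
    print(f"ranks ({r},{r},{r}): squared Frobenius error {err:.3f}")
```

At the true ranks $(3,3,3)$ the error is driven by the variance term alone; at ranks $(1,1,1)$ the unremovable bias $\xi_{(1,1,1)}^2$ dominates, and at ranks $(6,6,6)$ extra noise dimensions are retained.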
