Mutual Information Collapse Explains Disentanglement Failure in $\beta$-VAEs

arXiv:2602.09277v1 Announce Type: new
Abstract: The $\beta$-VAE is a foundational framework for unsupervised disentanglement, using $\beta$ to regulate the trade-off between latent factorization and reconstruction fidelity. Empirically, however, disentanglement performance exhibits a pervasive non-monotonic trend: metrics such as MIG and SAP typically peak at intermediate $\beta$ and collapse as regularization increases. We demonstrate that this collapse is a fundamental information-theoretic failure, where strong Kullback-Leibler pressure promotes marginal independence at the expense of the latent channel's semantic informativeness. By formalizing this mechanism in a linear-Gaussian setting, we prove that for $\beta > 1$, stationarity-induced dynamics trigger a spectral contraction of the encoder gain, driving latent-factor mutual information to zero. To resolve this, we introduce the $\lambda\beta$-VAE, which decouples regularization pressure from informational collapse via an auxiliary $L_2$ reconstruction penalty $\lambda$. Extensive experiments on dSprites, Shapes3D, and MPI3D-real confirm that $\lambda > 0$ stabilizes disentanglement and restores latent informativeness over a significantly broader range of $\beta$, providing a principled theoretical justification for dual-parameter regularization in variational inference backbones.
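
The abstract describes the $\lambda\beta$-VAE objective only at a high level, so the following is a minimal PyTorch sketch of what a dual-parameter loss of this shape could look like, assuming the auxiliary $L_2$ penalty is simply added to the standard $\beta$-VAE loss; the `TinyVAE` architecture, the `lambda_beta_loss` name, and all hyperparameter values are illustrative assumptions, not the authors' implementation.

```python
# Sketch of a dual-parameter (lambda, beta) objective, assuming the auxiliary
# L2 penalty is added directly to the standard beta-VAE loss. TinyVAE and all
# hyperparameters are illustrative, not the paper's setup.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyVAE(nn.Module):
    def __init__(self, x_dim=64, z_dim=10):
        super().__init__()
        self.enc = nn.Linear(x_dim, 2 * z_dim)  # outputs mean and log-variance
        self.dec = nn.Linear(z_dim, x_dim)      # outputs Bernoulli logits

    def forward(self, x):
        mu, logvar = self.enc(x).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization trick
        return self.dec(z), mu, logvar

def lambda_beta_loss(model, x, beta=4.0, lam=0.5):
    x_logits, mu, logvar = model(x)
    n = x.size(0)
    # Standard beta-VAE terms: likelihood reconstruction + beta-weighted KL to N(0, I).
    recon = F.binary_cross_entropy_with_logits(x_logits, x, reduction="sum") / n
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp()) / n
    # Auxiliary L2 reconstruction penalty: extra pressure to keep the latent
    # channel informative even when beta * KL would otherwise collapse it.
    l2_aux = ((x - torch.sigmoid(x_logits)) ** 2).sum(dim=-1).mean()
    return recon + beta * kl + lam * l2_aux

# Illustrative usage: one gradient step on random data in [0, 1].
model = TinyVAE()
x = torch.rand(32, 64)
loss = lambda_beta_loss(model, x, beta=6.0, lam=0.5)
loss.backward()
```

With $\lambda = 0$ this reduces to the ordinary $\beta$-VAE objective, so $\lambda$ can be tuned independently of $\beta$, which is the decoupling of regularization pressure from informational collapse that the abstract describes.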
