FIPS 204-Compatible Threshold ML-DSA via Masked Lagrange Reconstruction

arXiv:2601.20917v1 Announce Type: new
Abstract: We present masked Lagrange reconstruction, a technique that enables threshold ML-DSA (FIPS 204) with arbitrary thresholds $T$ while producing standard 3.3 KB signatures verifiable by unmodified FIPS 204 implementations. Concurrent approaches have limitations: Bienstock et al. (ePrint 2025/1163) achieve arbitrary $T$ but require honest-majority and 37–136 rounds; Celi et al. (ePrint 2026/013) achieve dishonest-majority but are limited to $T \leq 6$. Our technique addresses the barrier that Lagrange coefficients grow as $\Theta(q)$ for moderate $T$, making individual contributions too large for ML-DSA's rejection sampling.
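To see the barrier concretely, the following sketch (an illustration, not code from the paper) computes Lagrange reconstruction coefficients modulo ML-DSA's prime $q = 8380417$ for five signer indices: even for small indices, the coefficients with negative sign reduce to representatives near $q$, so a share multiplied by its coefficient is far outside ML-DSA's rejection bound. It also shows the standard pairwise zero-sum masking trick this abstract's technique builds on, where masks $m_{ij} = -m_{ji}$ cancel in the aggregate.

```python
import random

q = 8380417  # ML-DSA prime modulus (FIPS 204)

def lagrange_coeff(xs, i):
    """Lagrange coefficient lambda_i for reconstructing f(0) from points xs, mod q."""
    num, den = 1, 1
    for j, xj in enumerate(xs):
        if j == i:
            continue
        num = num * xj % q
        den = den * (xj - xs[i]) % q
    return num * pow(den, q - 2, q) % q  # q is prime, so Fermat inversion

xs = [1, 2, 3, 4, 5]  # T+1 = 5 signer indices (illustrative choice)
coeffs = [lagrange_coeff(xs, i) for i in range(len(xs))]
# Integer values are the signed binomials 5, -10, 10, -5, 1; the negative
# ones reduce to representatives near q, i.e. Theta(q) in magnitude.
print(coeffs)  # [5, 8380407, 10, 8380412, 1]

# Pairwise zero-sum masks: party i adds sum_j m_ij with m_ij = -m_ji,
# so individual contributions are hidden but the masks cancel in the sum.
n = len(xs)
masks = [[0] * n for _ in range(n)]
for i in range(n):
    for j in range(i + 1, n):
        m = random.randrange(q)
        masks[i][j] = m
        masks[j][i] = (q - m) % q
total_mask = sum(sum(row) for row in masks) % q
print(total_mask)  # 0: masking does not disturb reconstruction
```

Plain pairwise masking alone is what suffices for ECDSA-style schemes; the abstract's point is that ML-DSA additionally requires the masked values to survive rejection sampling and the $r_0$-check.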
Unlike ECDSA threshold schemes, where pairwise masks suffice for correctness, ML-DSA requires solving three additional challenges absent in prior work: (1) rejection sampling on $\|z\|_\infty$ must still pass after masking, (2) the $r_0$-check exposes $c s_2$, enabling key recovery if unprotected, and (3) the resulting Irwin-Hall nonce distribution must preserve EUF-CMA security. We solve all three.
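Challenge (3) refers to a standard fact: the sum of $k$ independent uniform variables follows an Irwin-Hall distribution (mean $k/2$, variance $k/12$), not a uniform one. A minimal empirical check, assuming $k = 5$ contributing parties purely for illustration:

```python
import random

random.seed(0)
k = 5                # number of contributing parties (assumption for illustration)
trials = 200_000

# Each party contributes an independent uniform nonce share; the aggregate
# nonce is their sum, which is Irwin-Hall(k) distributed rather than uniform.
samples = [sum(random.random() for _ in range(k)) for _ in range(trials)]

mean = sum(samples) / trials
var = sum((s - mean) ** 2 for s in samples) / trials
print(mean, var)  # close to k/2 = 2.5 and k/12 ~ 0.417
```

The security proof must therefore argue EUF-CMA with nonces drawn from this bell-shaped sum distribution instead of the single uniform that plain ML-DSA assumes.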
We instantiate this technique in three deployment profiles with full security proofs. Profile P1 (TEE-assisted) achieves 3-round signing with a trusted coordinator, with EUF-CMA security under Module-SIS. Profile P2 (fully distributed) eliminates hardware trust via MPC in 8 rounds, achieving UC security against malicious adversaries corrupting up to $n-1$ parties. Profile P3 (2PC-assisted) uses lightweight 2PC for the $r_0$-check in 3–5 rounds, achieving UC security under a 1-of-2 CP honest assumption with the best empirical performance (249ms).
Our scheme requires $|S| \geq T+1$ signers and achieves success rates of 23–32%, matching single-signer ML-DSA.