Logic-Gated Time-Shared Feedforward Networks for Alternating Finite Automata: Exact Simulation and Learnability

arXiv:2604.01228v1 Announce Type: new
Abstract: We present a formal and constructive framework for simulating Alternating Finite Automata (AFAs) using Logic-Gated Time-Shared Feedforward Networks (LG-TS-FFNs). Unlike prior neural automata models limited to Nondeterministic Finite Automata (NFAs) and existential reachability, our architecture integrates learnable, state-dependent biases that function as differentiable logic gates, enabling the representation of both Existential \textsc{OR} and Universal \textsc{AND} aggregation within a shared-parameter linear recurrence. We prove that this architectural modification upgrades the network’s computational class to be structurally isomorphic to AFAs, thereby inheriting their exponential succinctness: the network can represent regular languages requiring $2^n$ states in an NFA with only $n$ neurons. We rigorously establish that the forward pass of an LG-TS-FFN exactly simulates the reachability dynamics of an AFA, including instantaneous $\varepsilon$-closures. Furthermore, we demonstrate empirical learnability: a continuous relaxation of the logic gates allows the network to simultaneously recover the automaton’s topology and logical semantics from binary labels via standard gradient descent. Extensive experiments confirm that our model achieves perfect recovery of ground-truth automata, bridging the gap between statistical learning and succinct, universal logical reasoning.
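The abstract does not include the construction itself, so the following is only a minimal NumPy sketch of the general idea: a state-dependent bias turning one shared linear recurrence into either an OR gate or an AND gate per state. The function name `gated_step`, the per-symbol transition matrices `T_a`, and the threshold scheme are illustrative assumptions, not the paper's definitions.

```python
import numpy as np

def gated_step(v, T_a, is_universal):
    """One hypothetical logic-gated step of a shared linear recurrence.

    v            : {0,1}^n activation vector (currently active AFA states)
    T_a          : {0,1}^{n x n} transition matrix for input symbol a
                   (T_a[i, j] = 1 iff state i has an a-edge to state j)
    is_universal : {0,1}^n mask, 1 for Universal (AND) states
    """
    s = T_a @ v              # shared linear recurrence: count active successors
    d = T_a.sum(axis=1)      # out-degree of each state on symbol a
    # The state-dependent bias acts as a gate threshold:
    #   OR  states fire if at least one successor is active  (threshold 0.5)
    #   AND states fire only if all successors are active     (threshold d - 0.5)
    b = np.where(is_universal == 1, d - 0.5, 0.5)
    return (s > b).astype(v.dtype)
```

For the continuous relaxation mentioned in the abstract, one natural option is to replace the hard comparison `(s > b)` with a soft gate such as `1 / (1 + np.exp(-(s - b) / tau))`, making `T_a` and `b` trainable by gradient descent; whether the paper uses this exact relaxation is not stated here.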
