[Research] Opponent State Inference for 2026 F1: An HMM-POMDP Framework – Seeking arXiv Endorsement (cs.AI / cs.LG)
Hi everyone,
I’m an independent researcher (incoming MSc AI, University of Edinburgh) and I’ve written a pre-registration paper modelling the 2026 Formula 1 energy regulations as a Partially Observable Stochastic Game. I’m looking for an arXiv endorsement in cs.AI or cs.LG to upload it before the Melbourne GP on 8 March, ideally even before the race weekend starts.
The paper: Opponent State Inference Under Partial Observability: An HMM–POMDP Framework for 2026 Formula 1 Energy Strategy
The problem: The 2026 regulations introduce a 50/50 ICE/battery power split and a proximity-gated energy award (Override Mode) replacing DRS. Optimal energy deployment now depends on the rival’s hidden battery state, creating a POSG that single-agent methods can’t solve.
The approach:
∙ Layer 1: A 30-state HMM over rival ERS charge, Override Mode status, and tyre degradation, inferred from 5 publicly observable telemetry signals via Baum-Welch EM
∙ Layer 2: A DQN policy trained on the HMM belief state
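To make the two-layer idea concrete, here is a minimal numpy sketch of the Layer-1 belief update (standard HMM forward filtering): predict through the transition model, correct on the latest observation, and renormalise. The 3-state/2-observation matrices below are invented placeholders for illustration, not the paper's 30-state model, and the resulting belief vector is what Layer 2's DQN would consume as its state input.

```python
import numpy as np

# Toy stand-in for the Layer-1 model. State and observation labels are
# placeholders; the real model has 30 states over ERS charge, Override
# Mode status, and tyre degradation, with parameters fit by Baum-Welch.
N_STATES = 3  # e.g. {harvesting, deploying, super-clipping}
N_OBS = 2     # e.g. a discretised straight-line speed delta

# Row-stochastic transition matrix A and emission matrix B (made up here;
# in the real pipeline these come out of Baum-Welch EM).
A = np.array([[0.80, 0.15, 0.05],
              [0.10, 0.80, 0.10],
              [0.05, 0.15, 0.80]])
B = np.array([[0.9, 0.1],
              [0.4, 0.6],
              [0.2, 0.8]])

def belief_update(b, obs):
    """One step of HMM forward filtering: predict, then correct on obs."""
    predicted = A.T @ b                 # propagate belief through the dynamics
    corrected = predicted * B[:, obs]   # weight by observation likelihood
    return corrected / corrected.sum()  # renormalise to a distribution

# Filter a short observation sequence, starting from a uniform prior.
belief = np.full(N_STATES, 1.0 / N_STATES)
for obs in [1, 1, 0, 1]:
    belief = belief_update(belief, obs)

print(belief)  # length-3 distribution over the rival's hidden state
```

In the paper's setup the DQN never sees the rival's true state, only this belief vector, which is exactly what lets it react to evidence of deception rather than to raw telemetry thresholds.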
Key result: The framework formalises the Counter-Harvest Trap: a deceptive strategy in which a car uses Active Aero to mask super-clipping, causing a rival to misread its energy state. Standard threshold rules cannot detect it; belief-state inference can (95.7% recall on synthetic data, 92.3% ERS state accuracy).
Melbourne is the first real validation environment and the hardest case, because mandatory super-clipping compresses the diagnostic signal.
The ask: If you’re qualified in cs.AI and think the work holds up, I’d genuinely appreciate an endorsement (Endorsement Code: XH3ME3 https://arxiv.org/auth/endorse?x=XH3ME3)
Happy to answer any technical questions here as well.
submitted by /u/Ginger_Rook