[R] Teaching AI to Know What It Doesn’t Know: Epistemic Uncertainty with Complementary Fuzzy Sets
Hey everyone! I wanted to share something I’ve been working on that I think is a cool approach to uncertainty in ML.
The Problem: Neural networks confidently classify everything, even stuff they’ve never seen before. Feed a model random noise? It’ll say “cat, 92% confident.” This is dangerous in real applications.
What I Built: STLE (Set Theoretic Learning Environment)
Instead of just modeling P(y|x), it models TWO complementary fuzzy memberships (rough sketch after this list):
– μ_x: “How familiar is this to my training data?” (accessibility)
– μ_y: “How unfamiliar is this?” (inaccessibility)
– They always sum to 1: μ_x + μ_y = 1
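To make that concrete, here’s a toy sketch of one way such a complementary pair could be computed. This is not the exact STLE code, just a nearest-neighbour familiarity score with a Gaussian kernel; the names (mu_x, train_X, bandwidth) are placeholders for illustration:

```python
import numpy as np

def mu_x(query, train_X, bandwidth=1.0):
    """Accessibility: how familiar the query looks relative to training data.

    Toy version: Gaussian kernel on the distance to the nearest
    training point, so the score lands in (0, 1].
    """
    dists = np.linalg.norm(train_X - query, axis=1)
    return float(np.exp(-(dists.min() ** 2) / (2 * bandwidth ** 2)))

def mu_y(query, train_X, bandwidth=1.0):
    """Inaccessibility: complementary by construction, so mu_x + mu_y = 1."""
    return 1.0 - mu_x(query, train_X, bandwidth)
```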
Why This Helps:
– Medical AI can defer to doctors when μ_x < 0.5
– Active learning can query “frontier” samples (0.4 < μ_x < 0.6); both threshold rules are sketched after this list
– Explainable: “This looks 85% familiar” is human-interpretable
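Those two thresholds turn into plain decision rules. A quick sketch, using the cutoffs from the bullets above (the function names are just made up for the example):

```python
def should_defer(mu_x_score, threshold=0.5):
    """Medical-AI style rule: hand the case to a human when the input looks unfamiliar."""
    return mu_x_score < threshold

def is_frontier(mu_x_score, low=0.4, high=0.6):
    """Active-learning rule: query a label when the sample sits on the familiarity frontier."""
    return low < mu_x_score < high
```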
Results:
– Detects out-of-distribution data: AUROC 0.668 (without training on any OOD examples!); evaluation sketched after this list
– Perfect complementarity: μ_x + μ_y = 1 holds exactly (0.00 measured error)
– Fast: trains in < 1 second, inference < 1ms
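If you want to check the OOD number yourself, the AUROC can be computed from the familiarity scores alone. Rough sketch using scikit-learn; the score arrays here are random placeholders, you’d plug in the model’s actual μ_x values for held-out in-distribution and OOD inputs:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Placeholder scores; replace with real mu_x values from the model.
mu_x_id = np.random.uniform(0.5, 1.0, size=500)    # in-distribution samples
mu_x_ood = np.random.uniform(0.0, 0.7, size=500)   # OOD samples (e.g. random noise)

# Label OOD as the positive class and use mu_y = 1 - mu_x as the OOD score.
labels = np.concatenate([np.zeros_like(mu_x_id), np.ones_like(mu_x_ood)])
scores = 1.0 - np.concatenate([mu_x_id, mu_x_ood])
print("OOD detection AUROC:", roc_auc_score(labels, scores))
```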
Code: https://github.com/strangehospital/Frontier-Dynamics-Project
– NumPy version (no dependencies beyond NumPy)
– PyTorch version (production-ready)
– Full documentation and visualizations
I’m learning as I go, so if you have questions or feedback, I’d love to hear it! Especially interested in:
– Ways to improve the approach
– Other applications this could help with
– Comparison with other uncertainty methods
submitted by /u/Strange_Hospital7878