This “Flash” AI Model Is Fast and Dangerous at Math—Here’s What It Can Do
GLM-4.7-Flash is Z.ai’s 30B MoE model built for low-latency reasoning and tool calling—plus benchmarks like AIME 2025, GPQA, and SWE-bench.