PM2Lat: Highly Accurate and Generalized Prediction of DNN Execution Latency on GPUs

arXiv:2603.00549v1 Announce Type: new
Abstract: We present PM2Lat, a fast and generalized framework for accurately predicting the latency of deep neural network (DNN) models on GPUs, with a particular focus on NVIDIA hardware. Unlike prior methods that rely on deep learning models or handcrafted heuristics, PM2Lat leverages the Single-Instruction-Multiple-Thread architecture of GPUs to model the execution time of DNN models. First, we perform fine-grained modeling of GPU operations by studying their computational behavior and memory access patterns. In doing so, we find that different GPU kernels exhibit significant performance disparities even when serving the same purpose. The core idea of PM2Lat is therefore to differentiate kernels based on their configurations and analyze each accordingly. This kernel-aware modeling enables PM2Lat to achieve consistently low prediction error across diverse data types and hardware platforms. In addition, PM2Lat generalizes beyond standard matrix multiplication to support complex GPU kernels such as Triton, Flash Attention, and Cutlass Attention. Experimental results show that PM2Lat consistently achieves error rates below 10% across different data types and hardware platforms on Transformer models, outperforming the state-of-the-art NeuSight by 10-20% for FP32 and by at least 50% for BF16. When applied to diverse kernels, the error rate remains between 3% and 8%.
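The kernel-aware idea described above can be sketched as follows: bucket each kernel by its configuration, then apply a per-bucket analytical model that combines compute cost and memory-traffic cost. This is a minimal illustrative sketch only; all names, efficiency factors, and peak-throughput numbers are assumptions (loosely resembling an A100 in FP32), not the paper's actual model.

```python
from dataclasses import dataclass

@dataclass
class KernelConfig:
    flops: float        # total floating-point operations for the kernel
    bytes_moved: float  # total DRAM traffic in bytes
    kind: str           # kernel family, e.g. "gemm" or "flash_attention"

# Hypothetical per-kernel-family efficiency factors (fraction of peak
# throughput actually achieved) -- different kernels serving the same
# purpose can differ substantially, which is why they are modeled apart.
EFFICIENCY = {"gemm": 0.85, "flash_attention": 0.60}

def predict_latency_s(cfg: KernelConfig,
                      peak_flops: float = 19.5e12,
                      peak_bw: float = 1.55e12) -> float:
    """Roofline-style estimate: latency is bounded by whichever of
    compute time or memory time dominates, scaled by an assumed
    kernel-family efficiency factor."""
    eff = EFFICIENCY.get(cfg.kind, 0.5)  # fallback for unknown kernels
    t_compute = cfg.flops / (peak_flops * eff)
    t_memory = cfg.bytes_moved / peak_bw
    return max(t_compute, t_memory)
```

For example, a large GEMM with far more FLOPs than bytes moved comes out compute-bound, while an elementwise-style kernel with heavy DRAM traffic comes out memory-bound, mirroring the compute/memory distinction the abstract draws.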
