Inside the Forward Pass: Pre-Fill, Decode, and the GPU Economics of Serving Large Language Models

Author(s): Utkarsh Mittal

Originally published on Towards AI.

Why Inference Is the Endgame

Pre-training a frontier large language model typically consumes somewhere between 15 trillion and 30 trillion tokens. That sounds like an enormous number, until you do the arithmetic on the inference side. There are roughly 7 to 8 billion people on the planet. If each person sent just one query per day to a model like ChatGPT, and each query consumed approximately 2,000 tokens when you account for both the input prompt and the generated output, that alone would amount to about 14 trillion tokens per day. A single day of modest global usage nearly matches the entire token budget of pre-training. And in reality, heavy users send dozens or hundreds of queries daily. Scale that to 100 queries per person per day, and inference burns through roughly 100 times more tokens every day than were used to train the model in the first place. (A short worked calculation appears at the end of this section.)

Figure 1: Prefill Phase

This article examines the shifting economics of artificial intelligence: the center of gravity is moving from training to inference, which becomes increasingly complex and costly as large language models proliferate. While companies race to improve their training capabilities, the dominant expense arises from inference, where models must process trillions of tokens daily. The article walks through how a model like Llama 3 generates a response, the computational demands at each stage (prefill and decode), and the challenge of keeping GPU utilization high. Understanding this workflow is essential for optimizing AI deployment and managing its costs effectively.
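To make the token arithmetic above concrete, here is the back-of-the-envelope calculation as a short script. The inputs (population, tokens per query, training budget) are the rough estimates quoted in the text, not measured data.

```python
# Back-of-the-envelope inference token arithmetic, using the rough
# estimates quoted in the text above (not measured data).

population = 7e9             # roughly 7-8 billion people (low end)
tokens_per_query = 2_000     # prompt + generated output, approximate
training_budget = 15e12      # low end of the 15-30T pre-training range

daily_tokens = population * tokens_per_query
print(f"1 query/person/day:    {daily_tokens / 1e12:.0f}T tokens/day")
# -> 14T tokens/day, nearly one full pre-training token budget per day

heavy_daily = daily_tokens * 100  # 100 queries/person/day
print(f"100 queries/person/day: {heavy_daily / training_budget:.0f}x the training budget")
# -> roughly 100x the low-end training budget, consumed every single day
```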
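Since the article centers on the prefill and decode phases of the forward pass, a minimal sketch of the two may help frame what follows. This is illustrative code over a hypothetical `model` object exposing a key/value cache; the interface is an assumption for exposition, not the article's code or any specific library's API.

```python
import torch

# Minimal sketch of the two inference phases, assuming a decoder-only
# transformer with a KV cache. `model` and its forward() signature are
# hypothetical placeholders, not a real library API.

def generate(model, prompt_ids: torch.Tensor, max_new_tokens: int) -> torch.Tensor:
    # Prefill: process the entire prompt in one parallel forward pass.
    # All prompt tokens are computed together, so this phase tends to be
    # compute-bound and keeps the GPU's matrix units busy.
    logits, kv_cache = model.forward(prompt_ids, kv_cache=None)
    next_id = logits[:, -1, :].argmax(dim=-1, keepdim=True)  # greedy pick
    generated = [next_id]

    # Decode: emit one token per forward pass, reusing the cached keys
    # and values. Each step reads the full KV cache but computes attention
    # for only a single new token, so this phase tends to be
    # memory-bandwidth-bound and underutilizes the GPU's compute.
    for _ in range(max_new_tokens - 1):
        logits, kv_cache = model.forward(next_id, kv_cache=kv_cache)
        next_id = logits[:, -1, :].argmax(dim=-1, keepdim=True)
        generated.append(next_id)

    return torch.cat(generated, dim=-1)
```

The asymmetry between these two loops (one big parallel pass versus many tiny sequential ones) is what drives the GPU economics the article goes on to discuss.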
