Quoting Thibault Sottiaux
We’ve made GPT-5.3-Codex-Spark about 30% faster. It is now serving at over 1200 tokens per second.
— Thibault Sottiaux, OpenAI
Tags: openai, llms, ai, generative-ai, llm-performance