True Zero-shot MT
A little over a week ago, Gemini 1.5 reported close to human-level performance on MTOB, a recent challenging translation dataset. In this post, we'll dig into this result, explore true zero-shot machine translation (MT), and consider how to teach LLMs a new language like humans. This post was first published in NLP News.

Low-resource MT

To set the scene, let's first consider what it means for a language to be "low-resource". As with LLMs, the performance of MT models depends on […]