ggml.ai joins Hugging Face to ensure the long-term progress of Local AI
I don’t normally cover acquisition news like this, but I have some thoughts.

It’s hard to overstate the impact Georgi Gerganov has had on the local model space. Back in March 2023 his release of llama.cpp made it possible to run a local LLM on consumer hardware. The original README said:

> The main goal is to run the model using 4-bit quantization on a MacBook. […]

[…]