A beginner’s guide to the Qwopus-GLM-18B-Merged-GGUF model by Kylehessling1 on Hugging Face

Qwopus-GLM-18B-Merged-GGUF is a merged and healed 18B-parameter model sized to run on 12 GB GPUs, offering strong coding and tool-calling performance with a 262K-token context window.
