[P] NTTuner – GUI to Locally Fine-Tune AI Models with Unsloth GPU + CPU Support!

Hey everyone — I’ve been building a desktop toolchain to make fine-tuning + deploying local LLMs feel more like a normal app workflow, and I wanted to share it.

What I made

NTTuner (fine-tuning + deployment GUI)

A desktop GUI app that covers the full fine-tuning workflow end-to-end:

  • LoRA fine-tuning (GPU via Unsloth, with CPU fallback)
  • Automatic GGUF conversion
  • Direct import into Ollama
  • Real-time training logs (non-blocking UI)
  • Reproducible config saving

NTCompanion (dataset builder)

A dataset creation tool designed for quickly turning websites into usable training data:

  • Universal web scraper for dataset generation
  • Smart extraction to pull actual content (not menus / boilerplate)
  • 6-factor quality scoring to filter junk
  • Outputs directly in the format NTTuner expects
  • GitHub repository cloning and processing
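The post doesn't spell out what the six quality factors are, but as an illustration, a content-quality filter along these lines can be sketched in plain Python. The factor names, weights, and thresholds here are my assumptions, not NTCompanion's actual implementation:

```python
import re

def quality_score(text: str) -> float:
    """Score a scraped text block on six simple heuristics (0.0-1.0).

    The six factors below are illustrative guesses at what a "6-factor"
    filter might check; the real scoring in NTCompanion may differ.
    """
    words = text.split()
    if not words:
        return 0.0
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    avg_word_len = sum(len(w) for w in words) / len(words)
    boilerplate = {"cookie", "subscribe", "login", "menu", "navigation"}
    bp_hits = sum(w.lower() in boilerplate for w in words)
    factors = [
        min(len(words) / 200.0, 1.0),                          # 1. enough content
        min(len(sentences) / 5.0, 1.0),                        # 2. real sentences, not a link list
        1.0 if 3.0 <= avg_word_len <= 8.0 else 0.5,            # 3. plausible word lengths
        1.0 - min(bp_hits / len(words) * 10.0, 1.0),           # 4. low boilerplate density
        min(len(set(words)) / len(words) * 2.0, 1.0),          # 5. vocabulary diversity
        1.0 if text.count("\n\n") < len(words) / 20 else 0.5,  # 6. not over-fragmented
    ]
    return sum(factors) / len(factors)
```

Averaging normalized factors rather than hard-rejecting on any single one keeps short-but-clean pages from being thrown out with the navigation junk.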

Why I built it

I got tired of the same loop every time I wanted to fine-tune something locally:

  • bounce between CLI tools + Python scripts
  • manually clean datasets
  • manually convert to GGUF
  • manually import into Ollama

I wanted a workflow where I could just:
build dataset → drag & drop → fine-tune → model shows up in Ollama.

Key features

NTTuner

  • Drag-and-drop JSONL dataset support
  • Auto-detects GPU and installs the correct dependencies
  • Training runs in the background without freezing the UI
  • Saves training configs as JSON for reproducibility
  • One-click export to Ollama (with quantization)
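For the config-saving piece, a reproducible run config is essentially a JSON round-trip of the hyperparameters. A minimal sketch (the field names are hypothetical, not NTTuner's actual schema):

```python
import json
from pathlib import Path

def save_run_config(path: str, **config) -> None:
    """Persist the exact settings of a training run so it can be replayed.

    Keyword arguments become JSON fields; sort_keys makes the file
    diff-friendly across runs.
    """
    Path(path).write_text(json.dumps(config, indent=2, sort_keys=True))

def load_run_config(path: str) -> dict:
    return json.loads(Path(path).read_text())

# Example round-trip (field names are made up for illustration):
save_run_config(
    "run_config.json",
    base_model="Llama-3.2-3B-Instruct",
    lora_rank=16,
    learning_rate=2e-4,
    epochs=3,
)
cfg = load_run_config("run_config.json")
```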

NTCompanion

  • Multi-threaded crawling (1–50 workers configurable)
  • Filters out junk like navigation menus, cookie banners, etc.
  • Presets for common content types (recipes, tutorials, docs, blogs)
  • Supports major chat templates (Llama, Qwen, Phi, Mistral, Gemma)
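Each model family wants its own special tokens around the conversation. As one example, here is a training pair rendered in the Llama 3 chat template as publicly documented (other families like Qwen use ChatML-style `<|im_start|>` markers instead; this is not NTCompanion's code):

```python
def to_llama3_chat(instruction: str, response: str) -> str:
    """Render one instruction/response pair in the Llama 3 chat template."""
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{instruction}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
        f"{response}<|eot_id|>"
    )

sample = to_llama3_chat(
    "How long do I boil an egg?",
    "About 7 minutes for soft-boiled.",
)
```

Getting these tokens wrong is one of the classic silent fine-tuning failure modes, so having the tool own this step is a real convenience.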

Technical notes

  • GUI built with DearPyGUI (responsive + GPU accelerated)
  • Training via Unsloth for 2–5x speedups on compatible GPUs
  • Graceful CPU fallback when GPU isn’t available
  • Scraping/parsing with BeautifulSoup
  • Optional Bloom filter for large crawls
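The Bloom filter bullet is worth a quick sketch: for "have I crawled this URL?" checks, it trades a small false-positive rate for constant memory, which matters once a crawl hits millions of URLs. A minimal stdlib-only version (not the one the tool ships):

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter for deduplicating URLs in a large crawl.

    Membership tests can yield false positives (a URL wrongly reported
    as already seen) but never false negatives — acceptable when the
    cost of a false positive is just skipping one page.
    """

    def __init__(self, size_bits: int = 1 << 20, num_hashes: int = 5):
        self.size = size_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(size_bits // 8)

    def _positions(self, item: str):
        # Derive k positions from salted SHA-256 digests.
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.size

    def add(self, item: str) -> None:
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def __contains__(self, item: str) -> bool:
        return all(
            self.bits[pos // 8] & (1 << (pos % 8))
            for pos in self._positions(item)
        )

seen = BloomFilter()
seen.add("https://example.com/recipes/1")
```

The memory win is the point: a Python `set` of a million long URLs can easily cost hundreds of MB, while this filter is a fixed 128 KB regardless of crawl size.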

Requirements

  • Python 3.10+
  • 8GB RAM minimum (16GB recommended)
  • NVIDIA GPU w/ 8GB+ VRAM recommended (CPU works too)
  • Windows / Linux / macOS

Example workflow

  1. Scrape ~1000 cooking recipes using NTCompanion
  2. Quality filter removes junk → outputs clean JSONL
  3. Drag JSONL into NTTuner
  4. Choose a base model (e.g., Llama-3.2-3B-Instruct)
  5. Start training
  6. Finished model automatically appears in Ollama
  7. Run: ollama run my-cooking-assistant
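For anyone unfamiliar with the format in step 2: "training-ready JSONL" is just one JSON object per line. I'm guessing at the exact keys NTTuner expects, but the common instruction/response shape looks like this:

```python
import json

# Hypothetical records in the instruction/response shape most
# fine-tuning tools accept; NTTuner's actual key names may differ.
records = [
    {"instruction": "Give me a quick tomato soup recipe.",
     "response": "Saute onion and garlic, add canned tomatoes and stock, "
                 "simmer 20 minutes, then blend."},
    {"instruction": "How do I keep pasta from sticking?",
     "response": "Use plenty of boiling, salted water and stir during "
                 "the first minute."},
]

# One JSON object per line — no enclosing array, no trailing commas.
with open("dataset.jsonl", "w", encoding="utf-8") as f:
    for rec in records:
        f.write(json.dumps(rec, ensure_ascii=False) + "\n")

# Each line parses independently, which is what makes JSONL streamable:
with open("dataset.jsonl", encoding="utf-8") as f:
    loaded = [json.loads(line) for line in f]
```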

Current limitations

  • JavaScript-heavy sites aren’t perfect yet (no headless browser support)
  • GGUF conversion still involves a few manual steps when training is CPU-only
  • Quality scoring works best on English content right now

What’s next

I’m currently working on:

  • Better JS rendering support
  • Multi-language dataset support
  • Fine-tuning presets for common use cases
  • More export targets / model formats

If anyone tries it, I’d love feedback — especially on what would make this more useful in your fine-tuning workflow.

TL;DR: Built a desktop GUI that makes local LoRA fine-tuning + deployment mostly drag-and-drop, plus a dataset scraper tool that outputs training-ready JSONL.

submitted by /u/Muted_Impact_9281