Show HN: GEKO (up to 80% compute savings on LLM fine-tuning)

SyedAbdurR2hman Saturday, February 28, 2026

Hey HN,

Most fine-tuning loops waste a huge amount of compute by treating every sample equally every epoch, even the ones the model has already mastered. I built GEKO (Gradient-Efficient Knowledge Optimization) to fix that. It tracks per-sample confidence and correctness in real time and:

- Completely skips samples the model has mastered
- Gives up to 5× more compute to hard or confidently wrong samples
- Dynamically adjusts sample weights using a "Mountain Curriculum"

Just dropped v0.3.0 with native LoRA/PEFT, BF16, gradient checkpointing, torch.compile, and 8-bit optimizer support. I'm currently building a clean UI for it.

I'm a 17-year-old indie dev working on this. Would love honest feedback, especially from people who do a lot of fine-tuning.
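To make the idea concrete, here is a minimal sketch of that kind of per-sample curriculum logic in plain Python. This is an illustration, not GEKO's actual API: the class name, thresholds, and the EMA-of-loss difficulty signal are all assumptions. It tracks a moving average of each sample's loss, skips samples once they look mastered, and boosts the weight of hard samples up to a 5× cap.

```python
class SampleCurriculum:
    """Hypothetical GEKO-style curriculum: skip mastered samples,
    upweight hard ones (names and thresholds are assumptions)."""

    def __init__(self, n_samples, mastered_threshold=0.05,
                 max_boost=5.0, ema_decay=0.9):
        self.ema_loss = [None] * n_samples   # per-sample difficulty estimate
        self.mastered_threshold = mastered_threshold
        self.max_boost = max_boost
        self.ema_decay = ema_decay

    def update(self, idx, loss):
        """Fold this step's loss into the sample's moving average."""
        prev = self.ema_loss[idx]
        if prev is None:
            self.ema_loss[idx] = loss
        else:
            self.ema_loss[idx] = (self.ema_decay * prev
                                  + (1 - self.ema_decay) * loss)

    def weight(self, idx):
        """Loss weight for this sample: 0 = skip, up to max_boost for hard ones."""
        ema = self.ema_loss[idx]
        if ema is None:
            return 1.0                        # unseen: train normally
        if ema < self.mastered_threshold:
            return 0.0                        # mastered: skip entirely
        return min(self.max_boost, 1.0 + ema)  # harder -> heavier, capped
```

In a training loop you would call `update(idx, loss.item())` after each forward pass and multiply each sample's loss by `weight(idx)` before backprop, so mastered samples contribute no gradient at all.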

github.com