
Show HN: Improving search ranking with chess Elo scores

ghita_ Wednesday, July 16, 2025

Hello HN,

I'm Ghita, co-founder of ZeroEntropy (YC W25). We build high-accuracy search infrastructure for RAG and AI Agents.

We just released two new state-of-the-art rerankers, zerank-1 and zerank-1-small. One of them is fully open-source under Apache 2.0.

We trained these models using a novel Elo-score-inspired pipeline, which we describe in detail in the attached blog post. In a nutshell, the training steps are (a toy sketch of the rating step follows below):

* Collect soft preferences between pairs of documents using an ensemble of LLMs.
* Fit an Elo-style rating system (Bradley-Terry) to turn pairwise comparisons into absolute per-document scores.
* Normalize relevance scores across queries with a bias-correction step, modeled using cross-query comparisons and solved with maximum likelihood estimation (MLE).
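To make the rating step concrete, here is a minimal sketch (not ZeroEntropy's actual pipeline code) of fitting Bradley-Terry scores to soft pairwise preferences by gradient descent on the cross-entropy loss; the function names and the toy data are hypothetical, and the cross-query bias-correction step is omitted:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def fit_bradley_terry(prefs, n_docs, lr=0.1, steps=2000, l2=1e-4):
    """Turn soft pairwise preferences (i, j, p) into one Elo-style score per document.

    p is the ensemble's probability that document i is more relevant than document j.
    """
    scores = np.zeros(n_docs)
    i_idx = np.array([i for i, _, _ in prefs])
    j_idx = np.array([j for _, j, _ in prefs])
    p = np.array([pij for _, _, pij in prefs])

    for _ in range(steps):
        # Bradley-Terry model: P(i beats j) = sigmoid(s_i - s_j)
        pred = sigmoid(scores[i_idx] - scores[j_idx])
        # Gradient of the binary cross-entropy between pred and the soft label p
        err = pred - p
        grad = np.zeros(n_docs)
        np.add.at(grad, i_idx, err)
        np.add.at(grad, j_idx, -err)
        grad += l2 * scores          # small L2 term pins the otherwise-free score scale
        scores -= lr * grad / len(p)

    return scores - scores.mean()   # center: only score differences are identified

# Toy example: doc 0 is softly preferred over 1 and 2, doc 1 over 2.
prefs = [(0, 1, 0.8), (0, 2, 0.9), (1, 2, 0.65)]
print(fit_bradley_terry(prefs, n_docs=3))
```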

You can try the models either through our API (https://docs.zeroentropy.dev/models), or via HuggingFace (https://huggingface.co/zeroentropy/zerank-1-small).
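As an illustration, here is how you might score query-document pairs with the small model if it exposes the standard sentence-transformers CrossEncoder interface; this is a sketch based on that library's API rather than official usage, and the model card on HuggingFace has the canonical loading code:

```python
from sentence_transformers import CrossEncoder

# Assumption: the checkpoint loads through the CrossEncoder wrapper.
model = CrossEncoder("zeroentropy/zerank-1-small", trust_remote_code=True)

query = "How do rerankers improve RAG pipelines?"
documents = [
    "Rerankers re-score retrieved passages so the most relevant ones reach the LLM.",
    "Elo ratings were originally designed to rank chess players.",
]

# Higher score means more relevant; sort documents by score, descending.
scores = model.predict([(query, doc) for doc in documents])
for doc, score in sorted(zip(documents, scores), key=lambda x: -x[1]):
    print(f"{score:.3f}  {doc}")
```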

We would love this community's feedback on the models and the training approach. A full technical report will also be released soon.

Thank you!

Summary
The article describes a method for improving search ranking in Retrieval-Augmented Generation (RAG) pipelines by training reranker models with an Elo-inspired rating scheme, a system commonly used to rank players in competitive games such as chess. The approach aims to surface more relevant documents so that downstream models generate more accurate, better-grounded responses.