
A 27M parameter model beating LLMs on reasoning tasks

SteadySurfdom Friday, November 28, 2025

I came across this explainer on HRMs (Hierarchical Reasoning Models), and it makes some wild claims: a 27M-parameter model beating LLMs like Claude 3.7 Sonnet and o3-mini on reasoning tasks such as Sudoku, 30x30 mazes, and ARC-AGI. Here is the explainer: https://towardsdatascience.com/your-next-large-language-model-might-not-be-large-afterall-2/

I currently work at a product-based startup, where I am automating PCB design. It requires some hardcore reasoning too: knowing that USB connectors should sit at the edge of the board and that inductive and capacitive loads should be kept apart, all while minimizing routing length.
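To make the comparison concrete, the placement rules above can be phrased as a cost function that a solver (whether an HRM, an RL agent, or plain simulated annealing) would try to minimize. This is only an illustrative sketch; the component names, weights, and board dimensions are made up, not from any real EDA tool:

```python
# Toy placement cost for a PCB: penalize USB connectors away from the board
# edge, penalize inductive/capacitive loads being close together, and add
# straight-line net length as a proxy for routing length. All weights are
# hypothetical.
import math

BOARD_W, BOARD_H = 100.0, 80.0  # assumed board dimensions in mm


def edge_distance(x, y):
    """Distance from point (x, y) to the nearest board edge."""
    return min(x, y, BOARD_W - x, BOARD_H - y)


def placement_cost(parts, nets):
    """parts: name -> (x, y, kind); nets: list of (name_a, name_b) pairs."""
    cost = 0.0
    # USB connectors belong at the board edge
    for x, y, kind in parts.values():
        if kind == "usb":
            cost += 10.0 * edge_distance(x, y)
    # keep inductive and capacitive loads apart (penalize closeness)
    loads = [(x, y) for x, y, k in parts.values() if k in ("inductor", "capacitor")]
    for i in range(len(loads)):
        for j in range(i + 1, len(loads)):
            cost += 5.0 / (math.dist(loads[i], loads[j]) + 1e-6)
    # approximate total routed length by straight-line net length
    coords = {name: (x, y) for name, (x, y, _) in parts.items()}
    for a, b in nets:
        cost += math.dist(coords[a], coords[b])
    return cost


parts = {
    "usb1": (50.0, 40.0, "usb"),        # badly placed: centre of the board
    "L1":   (20.0, 20.0, "inductor"),
    "C1":   (21.0, 20.0, "capacitor"),  # badly placed: right next to L1
    "U1":   (60.0, 40.0, "ic"),
}
nets = [("usb1", "U1"), ("L1", "U1"), ("C1", "U1")]
print(round(placement_cost(parts, nets), 2))
```

The point is that the problem looks like the grid-style constraint puzzles HRM is benchmarked on, but with a continuous search space and soft, weighted constraints rather than hard rules.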

I wanted to ask: is this a viable approach for my use case? Do you think it would work out? I do see some similarities between my use case and the problems HRM solves better than LLMs.
