
Show HN: WLM – A 70B model trained to decode "I'm fine" with 94.7% accuracy

gwillen85 Monday, January 26, 2026

I built WLM, a 70B parameter model trained on 847M text message arguments to decode relationship communication.

Key contributions:

- Infinite Grievance Memory™ with O(1) retrieval and zero decay rate (see the sketch after this list)
- Subtext Attention Mechanism that attends to what was NOT said
- "You Should Know Why" solver using Guilt-Weighted Retrospective Search
- Partner-Specific Fine-Tuning on your SO's message history
- MoodNet classifier for availability prediction
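
For anyone wondering how O(1) retrieval with zero decay is even possible, here's a minimal Python sketch of what the Infinite Grievance Memory™ interface might look like. The class and method names are hypothetical, not the actual repo code: a plain dict gives O(1) average-case lookup, and never evicting anything gives you the zero decay rate for free.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Dict, List


@dataclass
class Grievance:
    """A single stored grievance. Never expires, never decays."""
    description: str
    timestamp: datetime
    times_referenced: int = 0


class InfiniteGrievanceMemory:
    """Dict-backed store: O(1) average-case insert and retrieval, zero decay."""

    def __init__(self) -> None:
        self._store: Dict[str, List[Grievance]] = {}

    def record(self, topic: str, description: str) -> None:
        """File a grievance under a topic. Nothing is ever deleted."""
        self._store.setdefault(topic, []).append(
            Grievance(description=description, timestamp=datetime.now())
        )

    def retrieve(self, topic: str) -> List[Grievance]:
        """O(1) lookup of every grievance ever filed under this topic."""
        grievances = self._store.get(topic, [])
        for g in grievances:
            g.times_referenced += 1  # referencing a grievance only strengthens it
        return grievances


# Usage
memory = InfiniteGrievanceMemory()
memory.record("dishes", "Left them in the sink 'to soak'")
memory.record("dishes", "Claimed the dishwasher 'loads itself'")
print([g.description for g in memory.retrieve("dishes")])
```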

Achieves 94.7% accuracy on the "It's Fine" Benchmark vs 14.7% male human baseline. Paper includes ablation studies, architecture diagrams, and a failure cases section noting that "Where Do You Want To Eat?" remains NP-hard.
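
If you want a feel for how the "It's Fine" Benchmark is scored, here's a hypothetical evaluation loop. The example format and the predictor are assumptions for illustration, not the published harness:

```python
from typing import Callable, List, Tuple


def evaluate_its_fine(
    predict: Callable[[str], bool],
    examples: List[Tuple[str, bool]],
) -> float:
    """Accuracy over (utterance, actually_fine) pairs.
    `predict` returns True if the utterance really means 'fine'."""
    correct = sum(predict(utt) == label for utt, label in examples)
    return correct / len(examples)


# Toy examples; the real benchmark would be much larger.
examples = [
    ("I'm fine.", False),
    ("It's fine, do whatever you want.", False),
    ("Fine! Great! Perfect!", False),
    ("I'm fine, just tired.", True),
]

# The male human baseline takes "fine" at face value every time.
baseline = evaluate_its_fine(lambda utt: True, examples)
print(f"Baseline accuracy: {baseline:.1%}")
```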

License: MIT (Marriage Is Tough)

GitHub: https://github.com/gabewillen/wlm
