Pretraining at home: 20B tokens from 222 hours to 12

Optimizing training of a Llama 3.2 1B model so we can pretrain it in a day without going broke.

November 23, 2025 · 21 min · 4329 words · Shane Caldwell

RL Needed LLMs Because Agency Requires Priors

We tried RL once. It didn’t work. I’m confident it will this time.

August 19, 2025 · 26 min · 5458 words · Shane Caldwell

GPT-5 is Good, Actually: The Agony and Ecstasy of Public Benchmarks

An attempt to explain why benchmarks are either bad or secret, and why the bar charts don’t matter so much.

August 17, 2025 · 17 min · 3488 words · Shane Caldwell