Twenty Billion Tokens of What, Exactly?
Looking at the data and letting it look back at us.
Optimizing training of a Llama 3.2 1B model so we can pretrain in a day without going broke.
If you contribute a public benchmark, are you giving free capability to your competitors?
Distributed training sans datacenter.
We tried RL once. It didn’t work. I’m confident it will this time.
An attempt to explain why benchmarks are either bad or secret, and why the bar charts don’t matter so much.
A brief(ish) anecdote and investigation into the religious devotion of Haskell programmers.
So, you mixed user input and instructions.
Exploring the unique challenges of doing real science in the world’s most paranoid industry.
A manifesto on RL in cybersecurity, from when deep RL was the thing.