Intro to GPUs for the Researcher
Getting comfortable with the hardware on a quest for more MFU.
The practical realities of devastatingly high communication costs in training.
Looking at the data and letting it look back at us.
Optimizing the training of a Llama 3.2 1B model so we can pretrain in a day without going broke.
If you contribute a public benchmark, are you giving free capability to your competitors?
Distributed training sans datacenter.
We tried RL once. It didn’t work. I’m confident it will this time.
An attempt to explain why benchmarks are either bad or secret, and why the bar charts don’t matter so much.
So, you mixed user input and instructions.