This post by Sander Dieleman (Google DeepMind) is the definitive explanation of one of the most persistent parameters in generative AI: the guidance scale. Dieleman explains the trade-off between sample diversity and fidelity, detailing how “classifier-free guidance” (CFG) works by amplifying the difference between a conditional prediction (one conditioned on a text prompt) and an unconditional one. While newer techniques such as “guidance distillation” are emerging in 2025, CFG remains the standard method for forcing diffusion models to adhere to user prompts, making this post essential reading for understanding why your “CFG Scale” slider actually works.
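The core CFG update can be sketched in a few lines. This is a minimal, illustrative example (not Dieleman's code): the function names and the toy noise-prediction arrays are invented here, and in a real sampler the two predictions would come from the same denoising network run with and without the prompt embedding.

```python
import numpy as np

def cfg_combine(eps_uncond, eps_cond, guidance_scale):
    """Classifier-free guidance: start from the unconditional noise
    prediction and extrapolate toward the conditional one.

    guidance_scale = 1.0 recovers the plain conditional prediction;
    values > 1.0 amplify the conditional signal (the "CFG Scale" slider)."""
    return eps_uncond + guidance_scale * (eps_cond - eps_uncond)

# Toy stand-ins for the network's two predictions at one sampling step
# (illustrative values only; a real model outputs full-resolution tensors).
eps_uncond = np.array([0.10, -0.20, 0.30])
eps_cond = np.array([0.20, -0.10, 0.50])

guided = cfg_combine(eps_uncond, eps_cond, guidance_scale=7.5)
```

Note the design choice this formula encodes: at scale 1.0 guidance is a no-op, and higher scales push the sample further along the direction that distinguishes “prompted” from “unprompted” predictions, which is exactly the fidelity-for-diversity trade Dieleman analyzes.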