Papers in Deep Learning
Research paper reading notes focused on key ideas, math intuition, and practical takeaways.
LoRA: Low-Rank Adaptation of Large Language Models
Freeze the base model and learn a low-rank update \(\Delta W=BA\) for selected layers, achieving strong performance with far fewer trainable parameters and easy deployment.
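The low-rank update can be sketched in a few lines. A minimal NumPy sketch, where the dimensions, scaling factor `alpha`, and initialization are illustrative assumptions rather than the paper's exact implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

d_out, d_in, r = 8, 16, 2                  # rank r << min(d_out, d_in)

W = rng.normal(size=(d_out, d_in))         # frozen pretrained weight (not trained)
B = np.zeros((d_out, r))                   # B initialized to zero, so delta_W = BA = 0 at start
A = rng.normal(size=(r, d_in)) * 0.01      # A gets a small random init

def forward(x, alpha=1.0):
    # y = W x + (alpha / r) * B A x; only A and B are trainable
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.normal(size=(d_in,))
# With B = 0, the adapted layer matches the frozen base layer exactly.
assert np.allclose(forward(x), W @ x)

# Trainable-parameter comparison: r*(d_in + d_out) vs d_in*d_out for full fine-tuning.
lora_params = r * (d_in + d_out)   # 48
full_params = d_in * d_out         # 128
```

For deployment, the update can be merged into the base weight as `W + (alpha / r) * B @ A`, so inference adds no extra latency over the original layer.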