alphaXiv
@askalphaxiv
“Understanding LoRA as Knowledge Memory” Right now, most research treats LoRA like a cheap fine-tune toggle, but if you want to use it as swappable knowledge memory, the rule of thumb has been mostly vibes. This paper fixes that with a systematic audit of LoRA-as-memory...
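The "swappable knowledge memory" framing boils down to: keep the base weight frozen and treat each low-rank (B, A) pair as a detachable module you can attach, detach, or exchange at inference time. A minimal toy sketch of that idea (NumPy stand-in for a single linear layer; the adapter names and shapes here are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

d, r = 8, 2  # hidden size and LoRA rank (toy values)
W = rng.standard_normal((d, d))  # frozen base weight

# Each "knowledge module" is just a low-rank (B, A) pair.
# Names are hypothetical, for illustration only.
adapters = {
    "facts_v1": (rng.standard_normal((d, r)), rng.standard_normal((r, d))),
    "facts_v2": (rng.standard_normal((d, r)), rng.standard_normal((r, d))),
}

def forward(x, adapter=None, alpha=1.0):
    """Compute x @ W + alpha * x @ B @ A; base model when adapter is None."""
    y = x @ W
    if adapter is not None:
        B, A = adapters[adapter]
        y = y + alpha * (x @ B @ A)
    return y

x = rng.standard_normal(d)
y_base = forward(x)            # no memory attached
y_v1 = forward(x, "facts_v1")  # swap in one knowledge module
y_v2 = forward(x, "facts_v2")  # swap in another
```

Because the base `W` never changes, swapping modules is just choosing which low-rank delta to add, which is what makes LoRA attractive as plug-in memory rather than a permanent fine-tune.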