Linear Warmup With Linear Decay is a learning rate schedule in which the learning rate is increased linearly for $n$ updates and then decayed linearly afterwards.
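The schedule above can be sketched as a simple function of the step count. This is a minimal illustration, not any particular library's implementation; the parameter names (`max_lr`, `warmup_steps`, `total_steps`) are assumptions chosen for clarity:

```python
def lr_schedule(step, max_lr, warmup_steps, total_steps):
    """Linear warmup to max_lr over warmup_steps, then linear decay to 0.

    step is 0-indexed; total_steps is the total number of updates.
    """
    if step < warmup_steps:
        # Warmup phase: ramp linearly from max_lr/warmup_steps up to max_lr.
        return max_lr * (step + 1) / warmup_steps
    # Decay phase: ramp linearly from max_lr down to 0 over the remaining steps.
    decay_steps = total_steps - warmup_steps
    progress = (step - warmup_steps) / decay_steps
    return max_lr * max(0.0, 1.0 - progress)
```

For example, with `max_lr=0.1`, `warmup_steps=10`, and `total_steps=100`, the rate reaches 0.1 at step 9 and falls back to 0 by step 100.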
Task | Papers | Share
---|---|---
Retrieval | 120 | 12.96% |
Language Modelling | 99 | 10.69% |
Question Answering | 62 | 6.70% |
Large Language Model | 39 | 4.21% |
Sentence | 30 | 3.24% |
Text Classification | 28 | 3.02% |
Sentiment Analysis | 26 | 2.81% |
Text Generation | 23 | 2.48% |
Information Retrieval | 22 | 2.38% |