Model Compression

Pruning

Pruning is a model compression technique that removes redundant weights or structures (such as entire convolutional filters) from a trained network to shrink its size and inference cost, typically followed by fine-tuning to recover accuracy. Introduced by Li et al. in Pruning Filters for Efficient ConvNets.
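
As a rough illustration of the filter-pruning idea (a minimal PyTorch sketch, not the authors' implementation): the helper below scores the filters of a single Conv2d layer by their L1 norm and rebuilds the layer keeping only the highest-scoring filters. Names like prune_filters and the 30% ratio are illustrative assumptions; a complete pipeline would also slice the next layer's input channels and any BatchNorm parameters, then fine-tune, as described in the paper.

```python
import torch
import torch.nn as nn


def l1_filter_scores(conv: nn.Conv2d) -> torch.Tensor:
    # Score each output filter by the L1 norm of its weights
    # (the ranking criterion used for filter pruning).
    return conv.weight.detach().abs().sum(dim=(1, 2, 3))


def prune_filters(conv: nn.Conv2d, prune_ratio: float = 0.3) -> nn.Conv2d:
    """Return a smaller Conv2d that keeps only the highest-scoring filters."""
    scores = l1_filter_scores(conv)
    n_keep = max(1, conv.out_channels - int(conv.out_channels * prune_ratio))
    # Indices of the filters to keep, restored to their original order.
    keep = torch.argsort(scores, descending=True)[:n_keep].sort().values

    pruned = nn.Conv2d(
        conv.in_channels,
        n_keep,
        conv.kernel_size,
        stride=conv.stride,
        padding=conv.padding,
        bias=conv.bias is not None,
    )
    pruned.weight.data = conv.weight.data[keep].clone()
    if conv.bias is not None:
        pruned.bias.data = conv.bias.data[keep].clone()
    return pruned


# Example: drop 30% of the filters in a single layer.
layer = nn.Conv2d(64, 128, kernel_size=3, padding=1)
smaller = prune_filters(layer, prune_ratio=0.3)
print(smaller.weight.shape)  # torch.Size([90, 64, 3, 3])
```

Because whole filters are removed, the pruned layer stays dense and needs no sparse kernels, which is why structured pruning of this kind translates directly into faster inference on standard hardware.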

Tasks


Task Papers Share
Network Pruning 44 7.93%
Quantization 39 7.03%
Model Compression 33 5.95%
Language Modelling 30 5.41%
Image Classification 25 4.50%
Federated Learning 21 3.78%
Computational Efficiency 17 3.06%
Large Language Model 10 1.80%
Question Answering 10 1.80%

