Residual Network

Introduced by He et al. in Deep Residual Learning for Image Recognition

Residual Networks, or ResNets, learn residual functions with reference to the layer inputs, instead of learning unreferenced functions. Instead of hoping that each few stacked layers directly fit a desired underlying mapping, residual nets let these layers fit a residual mapping. These residual blocks are stacked on top of each other to form the network: e.g., a ResNet-50 has fifty layers built from these blocks.

Formally, denoting the desired underlying mapping as $\mathcal{H}(x)$, we let the stacked nonlinear layers fit another mapping of $\mathcal{F}(x):=\mathcal{H}(x)-x$. The original mapping is recast into $\mathcal{F}(x)+x$.
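
As a concrete illustration, here is a minimal sketch of a residual block in PyTorch (the framework choice is an assumption, not from the paper). The two stacked convolutions play the role of $\mathcal{F}$, and the identity shortcut adds $x$ back, so the block computes $\mathcal{F}(x)+x$. Note that this plain two-convolution block is a simplification: ResNet-50 itself uses three-layer bottleneck blocks (1×1, 3×3, 1×1 convolutions) with projection shortcuts where dimensions change.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Minimal residual block: output = F(x) + x.

    F is two 3x3 convolutions with batch norm; the identity
    shortcut carries x around them unchanged.
    """

    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        identity = x                               # the shortcut path
        out = self.relu(self.bn1(self.conv1(x)))   # first layer of F
        out = self.bn2(self.conv2(out))            # F(x) = H(x) - x
        out = out + identity                       # recast: H(x) = F(x) + x
        return self.relu(out)

# Stacking blocks to form a (toy) network, as the deeper ResNets do:
net = nn.Sequential(*[ResidualBlock(64) for _ in range(4)])
x = torch.randn(1, 64, 32, 32)
y = net(x)  # shape preserved: (1, 64, 32, 32)
```

Because the shortcut is an identity, a block can fall back to $\mathcal{F}(x)\approx 0$ when extra layers are not needed, which is the paper's intuition for why these networks remain optimizable at large depth.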

There is empirical evidence that these networks are easier to optimize, and that they can gain accuracy from considerably increased depth.

Source: Deep Residual Learning for Image Recognition

Tasks

| Task | Papers | Share |
|------|-------:|------:|
| Image Classification | 64 | 9.67% |
| Self-Supervised Learning | 56 | 8.46% |
| Classification | 27 | 4.08% |
| Semantic Segmentation | 24 | 3.63% |
| Object Detection | 16 | 2.42% |
| Quantization | 13 | 1.96% |
| Image Segmentation | 8 | 1.21% |
| Denoising | 8 | 1.21% |
| Benchmarking | 7 | 1.06% |
