DenseNet is a network architecture where each layer is directly connected to every other layer in a feed-forward fashion (within each dense block). For each layer, the feature maps of all preceding layers are treated as separate inputs whereas its own feature maps are passed on as inputs to all subsequent layers. This connectivity pattern yields state-of-the-art accuracies on CIFAR10/100 (with or without data augmentation) and SVHN.
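The dense connectivity pattern can be sketched with a toy dense block. This is a minimal illustration, not the real DenseNet: a random linear map plus ReLU stands in for the BN-ReLU-Conv composite layer, and `growth_rate` is the number of channels each layer contributes.

```python
import numpy as np

def dense_block(x, num_layers, growth_rate, rng):
    # Each layer receives the concatenation of the input and ALL
    # preceding layers' feature maps, and adds growth_rate new channels.
    features = [x]
    for _ in range(num_layers):
        inp = np.concatenate(features, axis=0)           # stack along channel axis
        w = rng.standard_normal((growth_rate, inp.shape[0]))
        out = np.maximum(w @ inp, 0.0)                   # toy stand-in for BN-ReLU-Conv
        features.append(out)
    return np.concatenate(features, axis=0)

rng = np.random.default_rng(0)
x = rng.standard_normal((16, 8))                         # 16 channels, 8 spatial positions
y = dense_block(x, num_layers=4, growth_rate=12, rng=rng)
print(y.shape)                                           # (16 + 4*12, 8) = (64, 8)
```

Note how the channel count grows linearly (16 + 4×12 = 64): every layer's output stays visible to all later layers, which is exactly the connectivity described above.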
Residual networks add identity skip connections to layers, so that each layer learns a residual transformation rather than a full mapping. This enables efficient gradient flow through very deep networks and is now standard in most major network implementations.
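A residual block is just `x + f(x)`, where `f` is the learned branch. The sketch below (toy numpy code, not a real ResNet layer) shows the key property: if the residual branch outputs zeros, the block reduces to the identity map, which is what makes these layers easy to optimize.

```python
import numpy as np

def residual_layer(x, f):
    # Identity skip connection: the block computes x + f(x), so the
    # layer only needs to learn the residual, and gradients pass
    # through the identity path unchanged.
    return x + f(x)

x = np.arange(4.0)
# With a zero residual branch the block is exactly the identity map.
y = residual_layer(x, lambda v: np.zeros_like(v))
print(np.array_equal(y, x))  # True
```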
Google's ImageNet models use the Inception architecture, which places networks within networks (hence the name). The latest implementation combines Inception modules with residual connections (Inception-ResNet).
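The core Inception idea is to run several parallel branches on the same input and concatenate their outputs along the channel axis. A minimal sketch, with simple lambdas standing in for the real 1x1, 3x3, 5x5 convolution and pooling paths:

```python
import numpy as np

def inception_module(x, branches):
    # Apply each parallel branch to the same input and concatenate
    # the results channel-wise, as an Inception module does.
    return np.concatenate([b(x) for b in branches], axis=0)

x = np.ones((4, 8))                                  # 4 channels, 8 spatial positions
branches = [
    lambda v: v * 0.5,                               # stand-in for the 1x1 conv path
    lambda v: v + 1.0,                               # stand-in for the 3x3 conv path
    lambda v: v.max(axis=0, keepdims=True),          # stand-in for the pooling path
]
y = inception_module(x, branches)
print(y.shape)                                       # (4 + 4 + 1, 8) = (9, 8)
```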
From their website: "Our main contribution is a rigorous evaluation of networks of increasing depth, which shows that a significant improvement on the prior-art configurations can be achieved by increasing the depth to 16-19 weight layers, which is substantially deeper than what has been used in the prior art. To reduce the number of parameters in such very deep networks, we use very small 3×3 filters in all convolutional layers (the convolution stride is set to 1)."
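The parameter saving from small filters follows from a simple count: two stacked 3×3 convolutions cover the same 5×5 receptive field as one 5×5 convolution but use fewer weights. A quick check (biases ignored, `C` is an example channel width, not a value from the quote):

```python
def conv_params(k, c_in, c_out):
    # Weight count of a k x k convolution layer (biases ignored).
    return k * k * c_in * c_out

C = 256                                # example channel width
two_3x3 = 2 * conv_params(3, C, C)     # two stacked 3x3 layers: 5x5 receptive field
one_5x5 = conv_params(5, C, C)         # a single 5x5 layer, same receptive field
print(two_3x3, one_5x5)                # 1179648 1638400
```

So the stacked 3×3 design needs 18C² weights against 25C² for a single 5×5 layer, while also adding an extra non-linearity between the two convolutions.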