An Overview of Convolutional Neural Network Architectures for Deep Learning

A large, deep convolutional neural network was trained to classify the 1.2 million high-resolution images of the ImageNet LSVRC-2010 contest into 1000 different classes, and it employed a recently developed regularization method called “dropout” that proved to be very effective.
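
The dropout regularizer mentioned above randomly zeroes a fraction of activations during training and is disabled at inference. Below is a minimal sketch of how it is typically applied in a fully connected classifier head, assuming PyTorch is available; the layer sizes are illustrative and are not the paper's exact configuration.

```python
import torch
import torch.nn as nn

# Illustrative classifier head with dropout between the fully connected layers.
# Sizes are placeholders, not a faithful reproduction of any published network.
classifier = nn.Sequential(
    nn.Flatten(),
    nn.Linear(256 * 6 * 6, 4096),
    nn.ReLU(inplace=True),
    nn.Dropout(p=0.5),          # randomly zeroes half the activations during training
    nn.Linear(4096, 4096),
    nn.ReLU(inplace=True),
    nn.Dropout(p=0.5),
    nn.Linear(4096, 1000),      # 1000 ImageNet classes
)

x = torch.randn(8, 256, 6, 6)   # dummy feature maps from a conv backbone
classifier.train()               # dropout active in training mode
logits_train = classifier(x)
classifier.eval()                # dropout disabled at inference
logits_eval = classifier(x)
```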

This work investigates the effect of convolutional network depth on accuracy in the large-scale image recognition setting, using an architecture with very small (3×3) convolution filters, and shows that a significant improvement over prior-art configurations can be achieved by pushing the depth to 16-19 weight layers.
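
As a rough illustration of the small-filter idea, the sketch below (again assuming PyTorch) stacks 3×3 convolutions into blocks separated by pooling, so depth is increased simply by adding more convolutions per block or more blocks. Channel counts and block depths here are illustrative, not any specific published configuration.

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch, num_convs):
    """Stack of 3x3 convolutions followed by 2x2 max pooling."""
    layers = []
    for i in range(num_convs):
        layers.append(nn.Conv2d(in_ch if i == 0 else out_ch, out_ch,
                                kernel_size=3, padding=1))
        layers.append(nn.ReLU(inplace=True))
    layers.append(nn.MaxPool2d(kernel_size=2, stride=2))
    return nn.Sequential(*layers)

# Two stacked 3x3 convolutions cover a 5x5 receptive field with fewer
# parameters and an extra non-linearity than a single 5x5 convolution.
features = nn.Sequential(
    conv_block(3, 64, 2),
    conv_block(64, 128, 2),
    conv_block(128, 256, 3),
)
out = features(torch.randn(1, 3, 224, 224))
print(out.shape)  # torch.Size([1, 256, 28, 28])
```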

Going deeper with convolutions

  • Christian Szegedy, Wei Liu, Andrew Rabinovich

  • Computer Science

  • 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)

We propose a deep convolutional neural network architecture codenamed Inception that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14).

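To make the Inception idea concrete, here is a minimal sketch of an Inception-style module, assuming PyTorch: parallel 1×1, 3×3, and 5×5 convolution branches plus a pooling branch, concatenated along the channel dimension, with 1×1 convolutions used to reduce channels before the larger filters. The channel counts are illustrative and are not the paper's exact values.

```python
import torch
import torch.nn as nn

class InceptionModule(nn.Module):
    """Illustrative Inception-style block: parallel branches whose outputs
    are concatenated along the channel dimension."""
    def __init__(self, in_ch):
        super().__init__()
        self.branch1 = nn.Conv2d(in_ch, 64, kernel_size=1)
        self.branch3 = nn.Sequential(
            nn.Conv2d(in_ch, 96, kernel_size=1),   # 1x1 reduction before the 3x3
            nn.Conv2d(96, 128, kernel_size=3, padding=1),
        )
        self.branch5 = nn.Sequential(
            nn.Conv2d(in_ch, 16, kernel_size=1),   # 1x1 reduction before the 5x5
            nn.Conv2d(16, 32, kernel_size=5, padding=2),
        )
        self.branch_pool = nn.Sequential(
            nn.MaxPool2d(kernel_size=3, stride=1, padding=1),
            nn.Conv2d(in_ch, 32, kernel_size=1),
        )

    def forward(self, x):
        return torch.cat([self.branch1(x), self.branch3(x),
                          self.branch5(x), self.branch_pool(x)], dim=1)

block = InceptionModule(192)
y = block(torch.randn(1, 192, 28, 28))
print(y.shape)  # torch.Size([1, 256, 28, 28]) -> 64 + 128 + 32 + 32 channels
```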