Top 10 GitHub Papers :: Image classification
Image classification is the computer vision task of assigning a label to an image according to its visual content. For example, an image classification algorithm may be designed to tell whether or not an image contains an animal. While recognizing objects is trivial for humans, robust image classification remains a challenge for computer vision applications.
In this section you will find state-of-the-art papers for image classification, along with the authors' names, a link to the paper, the Github repository and its star count, the number of citations, the datasets used, and the publication date. Enjoy.
1. Searching for MobileNetV3
Abstract: We present the next generation of MobileNets based on a combination of complementary search techniques as well as a novel architecture design. MobileNetV3 is tuned to mobile phone CPUs through a combination of hardware-aware network architecture search (NAS) complemented by the NetAdapt algorithm and then subsequently improved through novel architecture advances. This paper starts the exploration of how automated search algorithms and network design can work together to harness complementary approaches improving the overall state of the art. Through this process we create two new MobileNet models for release: MobileNetV3-Large and MobileNetV3-Small which are targeted for high and low resource use cases. These models are then adapted and applied to the tasks of object detection and semantic segmentation. For the task of semantic segmentation (or any dense pixel prediction), we propose a new efficient segmentation decoder Lite Reduced Atrous Spatial Pyramid Pooling (LR-ASPP). We achieve new state of the art results for mobile classification, detection and segmentation. MobileNetV3-Large is 3.2% more accurate on ImageNet classification while reducing latency by 15% compared to MobileNetV2. MobileNetV3-Small is 4.6% more accurate while reducing latency by 5% compared to MobileNetV2. MobileNetV3-Large detection is 25% faster at roughly the same accuracy as MobileNetV2 on COCO detection. MobileNetV3-Large LR-ASPP is 30% faster than MobileNetV2 R-ASPP at similar accuracy for Cityscapes segmentation.
- Authors: Andrew Howard • Mark Sandler • Grace Chu • Liang-Chieh Chen • Bo Chen • Mingxing Tan • Weijun Wang • Yukun Zhu • Ruoming Pang • Vijay Vasudevan • Quoc V. Le • Hartwig Adam
- Paper: https://arxiv.org/pdf/1905.02244v5.pdf
- Github: https://github.com/tensorflow/models/tree/master/research/object_detection
- Dataset: COCO, Cityscapes
- Github ⭐: 61,954 (counted on 1 March 2020)
- Citations: 77
- Published: 6 May 2019
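The two released variants map directly onto the high- and low-resource use cases the abstract mentions. Below is a minimal Python sketch of trying them out, assuming a recent TensorFlow 2.x where tf.keras.applications ships MobileNetV3 and an internet connection to download the ImageNet weights:

```python
import numpy as np
import tensorflow as tf

large = tf.keras.applications.MobileNetV3Large(weights="imagenet")  # high-resource target
small = tf.keras.applications.MobileNetV3Small(weights="imagenet")  # low-resource target

# Assumption to verify against your TF version: recent tf.keras releases bake
# input rescaling into the MobileNetV3 graph, so raw [0, 255] pixels work.
image = np.random.randint(0, 256, size=(1, 224, 224, 3)).astype("float32")
print(large(image).shape, small(image).shape)  # (1, 1000) each
```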
2. Searching for Efficient Multi-Scale Architectures for Dense Image Prediction
Abstract: The design of neural network architectures is an important component for achieving state-of-the-art performance with machine learning systems across a broad array of tasks. Much work has endeavored to design and build architectures automatically through clever construction of a search space paired with simple learning algorithms. Recent progress has demonstrated that such meta-learning methods may exceed scalable human-invented architectures on image classification tasks. An open question is the degree to which such methods may generalize to new domains. In this work we explore the construction of meta-learning techniques for dense image prediction focused on the tasks of scene parsing, person-part segmentation, and semantic image segmentation. Constructing viable search spaces in this domain is challenging because of the multi-scale representation of visual information and the necessity to operate on high resolution imagery. Based on a survey of techniques in dense image prediction, we construct a recursive search space and demonstrate that even with efficient random search, we can identify architectures that outperform human-invented architectures and achieve state-of-the-art performance on three dense prediction tasks including 82.7% on Cityscapes (street scene parsing), 71.3% on PASCAL-Person-Part (person-part segmentation), and 87.9% on PASCAL VOC 2012 (semantic image segmentation). Additionally, the resulting architecture is more computationally efficient, requiring half the parameters and half the computational cost as previous state of the art systems.
- Authors: Liang-Chieh Chen • Maxwell D. Collins • Yukun Zhu • George Papandreou • Barret Zoph • Florian Schroff • Hartwig Adam • Jonathon Shlens
- Paper: https://arxiv.org/pdf/1809.04184v1.pdf
- Github: https://github.com/tensorflow/models/tree/master/research/deeplab
- Dataset: Cityscapes
- Github ⭐: 61,918 (counted on 28 February 2020)
- Citations: 100
- Published: 11 September 2018
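The search itself can be surprisingly simple. Here is a toy Python sketch of the "efficient random search" idea: sample candidate cells from a small recursive space of branches, score each with a proxy evaluation, and keep the best. The operation vocabulary and evaluate_on_proxy_task are hypothetical stand-ins, not the paper's actual search space:

```python
import random

# Illustrative operation vocabulary: atrous 3x3 convolutions at several rates
# plus pooling (a stand-in, not the paper's exact operation set).
OPS = [("conv3x3_rate", r) for r in (1, 3, 6, 9, 18)] + [("avg_pool", None)]

def sample_cell(num_branches=5):
    """Recursive space: each branch picks an op and one earlier tensor as input."""
    return [(random.choice(OPS), random.randrange(i + 1)) for i in range(num_branches)]

def evaluate_on_proxy_task(cell):
    """Hypothetical stand-in for cheap proxy-task training (would return mIOU)."""
    return random.random()

# Efficient random search: sample many cells, keep the best-scoring one.
best = max((sample_cell() for _ in range(100)), key=evaluate_on_proxy_task)
print(best)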
3. AutoAugment: Learning Augmentation Policies from Data
Abstract: Data augmentation is an effective technique for improving the accuracy of modern image classifiers. However, current data augmentation implementations are manually designed. In this paper, we describe a simple procedure called AutoAugment to automatically search for improved data augmentation policies. In our implementation, we have designed a search space where a policy consists of many sub-policies, one of which is randomly chosen for each image in each mini-batch. A sub-policy consists of two operations, each operation being an image processing function such as translation, rotation, or shearing, and the probabilities and magnitudes with which the functions are applied. We use a search algorithm to find the best policy such that the neural network yields the highest validation accuracy on a target dataset. Our method achieves state-of-the-art accuracy on CIFAR-10, CIFAR-100, SVHN, and ImageNet (without additional data). On ImageNet, we attain a Top-1 accuracy of 83.5% which is 0.4% better than the previous record of 83.1%. On CIFAR-10, we achieve an error rate of 1.5%, which is 0.6% better than the previous state-of-the-art. Augmentation policies we find are transferable between datasets. The policy learned on ImageNet transfers well to achieve significant improvements on other datasets, such as Oxford Flowers, Caltech-101, Oxford-IIIT Pets, FGVC Aircraft, and Stanford Cars.
- Authors: Ekin D. Cubuk • Barret Zoph • Dandelion Mane • Vijay Vasudevan • Quoc V. Le
- Paper: https://arxiv.org/pdf/1805.09501v3.pdf
- Github: https://github.com/tensorflow/models/tree/master/research/autoaugment
- Dataset: ImageNet, CIFAR-10, CIFAR-100, SVHN
- Github ⭐: 61,918 (counted on 28 February 2020)
- Citations: 244
- Published: 24 May 2018
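The policy structure described in the abstract is easy to sketch. The following Python snippet, using Pillow for the image operations, shows the mechanics: one randomly chosen sub-policy per image, each sub-policy being two operations with a probability and a magnitude. The example policy and the magnitude scalings are illustrative, not ones learned by the paper:

```python
import random
from PIL import Image

def rotate(img, mag):       # magnitude mapped to degrees (illustrative scaling)
    return img.rotate(3 * mag)

def shear_x(img, mag):      # affine shear along x
    return img.transform(img.size, Image.AFFINE, (1, 0.03 * mag, 0, 0, 1, 0))

def translate_x(img, mag):  # affine translation along x, in pixels
    return img.transform(img.size, Image.AFFINE, (1, 0, 2 * mag, 0, 1, 0))

# A policy is a list of sub-policies; each sub-policy is two
# (operation, probability, magnitude) triples. This policy is made up.
policy = [
    [(rotate, 0.7, 2), (shear_x, 0.3, 9)],
    [(translate_x, 0.4, 5), (rotate, 0.6, 7)],
]

def autoaugment(img):
    sub_policy = random.choice(policy)  # one sub-policy per image per mini-batch
    for op, prob, magnitude in sub_policy:
        if random.random() < prob:      # each op fires with its own probability
            img = op(img, magnitude)
    return img

augmented = autoaugment(Image.new("RGB", (32, 32)))
```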
4. Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation
Abstract: Spatial pyramid pooling modules and encoder-decoder structures are used in deep neural networks for the semantic segmentation task. The former networks are able to encode multi-scale contextual information by probing the incoming features with filters or pooling operations at multiple rates and multiple effective fields-of-view, while the latter networks can capture sharper object boundaries by gradually recovering the spatial information. In this work, we propose to combine the advantages from both methods. Specifically, our proposed model, DeepLabv3+, extends DeepLabv3 by adding a simple yet effective decoder module to refine the segmentation results especially along object boundaries. We further explore the Xception model and apply the depthwise separable convolution to both Atrous Spatial Pyramid Pooling and decoder modules, resulting in a faster and stronger encoder-decoder network. We demonstrate the effectiveness of the proposed model on the PASCAL VOC 2012 and Cityscapes datasets, achieving test set performance of 89.0% and 82.1% without any post-processing.
- Authors: Liang-Chieh Chen • Yukun Zhu • George Papandreou • Florian Schroff • Hartwig Adam
- Paper: https://arxiv.org/pdf/1802.02611v3.pdf
- Github: https://github.com/tensorflow/models/tree/master/research/deeplab
- Dataset: PASCAL VOC 2012 test
- Github ⭐: 61,887 (counted on 28 February 2020)
- Citations: 1254
- Published: 7 February 2018
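The core building block, a depthwise convolution with an atrous (dilation) rate followed by a 1x1 pointwise convolution, is straightforward to express in tf.keras. A minimal sketch, assuming TensorFlow 2.x; the layer widths and rates are illustrative:

```python
import tensorflow as tf
from tensorflow.keras import layers

def atrous_separable_conv(x, filters, rate):
    # Depthwise 3x3 with holes: enlarges the field-of-view at the same cost.
    x = layers.DepthwiseConv2D(3, dilation_rate=rate, padding="same", use_bias=False)(x)
    x = layers.BatchNormalization()(x)
    x = layers.ReLU()(x)
    # Pointwise 1x1: mixes channels after the spatial filtering.
    x = layers.Conv2D(filters, 1, use_bias=False)(x)
    x = layers.BatchNormalization()(x)
    return layers.ReLU()(x)

inputs = tf.keras.Input(shape=(65, 65, 256))
# ASPP-style branches probe the features at multiple effective fields-of-view.
branches = [atrous_separable_conv(inputs, 256, rate) for rate in (6, 12, 18)]
outputs = layers.Concatenate()(branches)
model = tf.keras.Model(inputs, outputs)
```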
5. MobileNetV2: Inverted Residuals and Linear Bottlenecks
Abstract: In this paper we describe a new mobile architecture, MobileNetV2, that improves the state of the art performance of mobile models on multiple tasks and benchmarks as well as across a spectrum of different model sizes. We also describe efficient ways of applying these mobile models to object detection in a novel framework we call SSDLite. Additionally, we demonstrate how to build mobile semantic segmentation models through a reduced form of DeepLabv3 which we call Mobile DeepLabv3. The MobileNetV2 architecture is based on an inverted residual structure where the input and output of the residual block are thin bottleneck layers, opposite to traditional residual models which use expanded representations in the input. MobileNetV2 uses lightweight depthwise convolutions to filter features in the intermediate expansion layer. Additionally, we find that it is important to remove non-linearities in the narrow layers in order to maintain representational power. We demonstrate that this improves performance and provide an intuition that led to this design. Finally, our approach allows decoupling of the input/output domains from the expressiveness of the transformation, which provides a convenient framework for further analysis. We measure our performance on ImageNet classification, COCO object detection, and VOC image segmentation. We evaluate the trade-offs between accuracy and number of operations measured by multiply-adds (MAdd), as well as the number of parameters.
- Authors: Mark Sandler • Andrew Howard • Menglong Zhu • Andrey Zhmoginov • Liang-Chieh Chen
- Paper: https://arxiv.org/pdf/1801.04381v4.pdf
- Github: https://github.com/tensorflow/models/tree/master/research/object_detection
- Dataset: COCO, ImageNet
- Github ⭐: 61,955 (counted on 1 March 2020)
- Citations: 1302
- Published: 13 January 2018
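The inverted residual block is compact enough to sketch directly: expand with a 1x1 convolution and ReLU6, filter with a 3x3 depthwise convolution, then project back to a thin bottleneck with a linear 1x1 convolution (no non-linearity, per the paper's observation about narrow layers), adding a shortcut when shapes allow. A minimal tf.keras sketch, assuming TensorFlow 2.x; the channel counts are illustrative:

```python
import tensorflow as tf
from tensorflow.keras import layers

def inverted_residual(x, out_channels, stride=1, expansion=6):
    in_channels = x.shape[-1]
    h = layers.Conv2D(expansion * in_channels, 1, use_bias=False)(x)  # expand
    h = layers.BatchNormalization()(h)
    h = layers.ReLU(max_value=6.0)(h)
    h = layers.DepthwiseConv2D(3, strides=stride, padding="same", use_bias=False)(h)
    h = layers.BatchNormalization()(h)
    h = layers.ReLU(max_value=6.0)(h)
    h = layers.Conv2D(out_channels, 1, use_bias=False)(h)  # linear projection
    h = layers.BatchNormalization()(h)
    if stride == 1 and in_channels == out_channels:        # residual shortcut
        h = layers.Add()([x, h])
    return h

inputs = tf.keras.Input(shape=(56, 56, 24))
outputs = inverted_residual(inputs, 24)
model = tf.keras.Model(inputs, outputs)
```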
6. Progressive Neural Architecture Search
Abstract: We propose a new method for learning the structure of convolutional neural networks (CNNs) that is more efficient than recent state-of-the-art methods based on reinforcement learning and evolutionary algorithms. Our approach uses a sequential model-based optimization (SMBO) strategy, in which we search for structures in order of increasing complexity, while simultaneously learning a surrogate model to guide the search through structure space. Direct comparison under the same search space shows that our method is up to 5 times more efficient than the RL method of Zoph et al. (2018) in terms of number of models evaluated, and 8 times faster in terms of total compute. The structures we discover in this way achieve state of the art classification accuracies on CIFAR-10 and ImageNet.
- Authors: Chenxi Liu • Barret Zoph • Maxim Neumann • Jonathon Shlens • Wei Hua • Li-Jia Li • Li Fei-Fei • Alan Yuille • Jonathan Huang • Kevin Murphy
- Paper: https://arxiv.org/pdf/1712.00559v3.pdf
- Github: https://github.com/tensorflow/models/tree/master/research/slim
- Dataset: CIFAR-10, ImageNet
- Github ⭐: 61,955 (counted on 1 March 2020)
- Citations: 448
- Published: 2 December 2017
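A toy Python sketch of the SMBO loop: grow cells in order of increasing complexity, let a surrogate trained on already-measured cells rank the children, and only spend real training on the top-ranked few. surrogate_score and train_and_score are hypothetical stand-ins for the paper's actual predictor and training pipeline:

```python
import random

OPS = ("sep3x3", "sep5x5", "maxpool", "identity")

def surrogate_score(cell, history):
    """Stand-in surrogate: average measured accuracy of cells sharing a prefix."""
    matches = [acc for c, acc in history if c[:len(cell) - 1] == cell[:-1]]
    return sum(matches) / len(matches) if matches else 0.5

def train_and_score(cell):
    return random.random()  # stand-in for actually training this candidate

history, frontier, K = [], [()], 2
for depth in range(3):  # search in order of increasing complexity
    candidates = [cell + (op,) for cell in frontier for op in OPS]
    # The surrogate ranks every child; only the top-K are really trained.
    candidates.sort(key=lambda c: surrogate_score(c, history), reverse=True)
    frontier = candidates[:K]
    history += [(c, train_and_score(c)) for c in frontier]

print(max(history, key=lambda pair: pair[1])[0])
```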
7. Learning Transferable Architectures for Scalable Image Recognition
Abstract: Developing neural network image classification models often requires significant architecture engineering. In this paper, we study a method to learn the model architectures directly on the dataset of interest. As this approach is expensive when the dataset is large, we propose to search for an architectural building block on a small dataset and then transfer the block to a larger dataset. The key contribution of this work is the design of a new search space (the “NASNet search space”) which enables transferability. In our experiments, we search for the best convolutional layer (or “cell”) on the CIFAR-10 dataset and then apply this cell to the ImageNet dataset by stacking together more copies of this cell, each with their own parameters to design a convolutional architecture, named “NASNet architecture”. We also introduce a new regularization technique called ScheduledDropPath that significantly improves generalization in the NASNet models. On CIFAR-10 itself, NASNet achieves 2.4% error rate, which is state-of-the-art. On ImageNet, NASNet achieves, among the published works, state-of-the-art accuracy of 82.7% top-1 and 96.2% top-5 on ImageNet. Our model is 1.2% better in top-1 accuracy than the best human-invented architectures while having 9 billion fewer FLOPS – a reduction of 28% in computational demand from the previous state-of-the-art model. When evaluated at different levels of computational cost, accuracies of NASNets exceed those of the state-of-the-art human-designed models. For instance, a small version of NASNet also achieves 74% top-1 accuracy, which is 3.1% better than equivalently-sized, state-of-the-art models for mobile platforms. Finally, the learned features by NASNet used with the Faster-RCNN framework surpass state-of-the-art by 4.0% achieving 43.1% mAP on the COCO dataset.
- Authors: Barret Zoph • Vijay Vasudevan • Jonathon Shlens • Quoc V. Le
- Paper: https://arxiv.org/pdf/1707.07012v4.pdf
- Github: https://github.com/tensorflow/models/tree/master/research/object_detection
- Dataset: CIFAR-10, ImageNet, COCO
- Github ⭐: 61,955 (counted on 1 March 2020)
- Citations: 1154
- Published: 21 July 2017
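The resulting architectures are available off the shelf: tf.keras.applications ships both the large and the mobile-sized stackings of the learned cell. A minimal sketch, assuming TensorFlow 2.x and an internet connection for the weights:

```python
import tensorflow as tf

# The same learned cell, stacked at two computational budgets.
nasnet_mobile = tf.keras.applications.NASNetMobile(weights="imagenet")  # 224x224 input
nasnet_large = tf.keras.applications.NASNetLarge(weights="imagenet")    # 331x331 input
print(nasnet_mobile.count_params(), nasnet_large.count_params())
```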
8. The iNaturalist Species Classification and Detection Dataset
Abstract: Existing image classification datasets used in computer vision tend to have a uniform distribution of images across object categories. In contrast, the natural world is heavily imbalanced, as some species are more abundant and easier to photograph than others. To encourage further progress in challenging real world conditions we present the iNaturalist species classification and detection dataset, consisting of 859,000 images from over 5,000 different species of plants and animals. It features visually similar species, captured in a wide variety of situations, from all over the world. Images were collected with different camera types, have varying image quality, feature a large class imbalance, and have been verified by multiple citizen scientists. We discuss the collection of the dataset and present extensive baseline experiments using state-of-the-art computer vision classification and detection models. Results show that current non-ensemble based methods achieve only 67% top one classification accuracy, illustrating the difficulty of the dataset. Specifically, we observe poor results for classes with small numbers of training examples suggesting more attention is needed in low-shot learning.
- Authors: Grant Van Horn • Oisin Mac Aodha • Yang Song • Yin Cui • Chen Sun • Alex Shepard • Hartwig Adam • Pietro Perona • Serge Belongie
- Paper: https://arxiv.org/pdf/1707.06642v2.pdf
- Github: https://github.com/tensorflow/models/tree/master/research/object_detection
- Dataset: iNaturalist Classification and Detection Dataset (iNat2017)
- Github ⭐: 61,918 (counted on 28 February 2020)
- Citations: 62
- Published: 20 July 2017
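The heavy class imbalance the abstract highlights is one reason the dataset is hard. Here is a small, generic Python sketch (not from the paper) of one common mitigation, inverse-frequency class weights; the labels list is a hypothetical stand-in for iNat2017 annotations:

```python
from collections import Counter

# Hypothetical long-tailed label list: abundant species dwarf rare ones.
labels = ["bald_eagle"] * 900 + ["monarch"] * 50 + ["ghost_orchid"] * 3
counts = Counter(labels)
total = len(labels)
# Rare species get proportionally larger weights during training.
class_weights = {cls: total / (len(counts) * n) for cls, n in counts.items()}
print(class_weights)
```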
9. MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications
Abstract: We present a class of efficient models called MobileNets for mobile and embedded vision applications. MobileNets are based on a streamlined architecture that uses depth-wise separable convolutions to build light weight deep neural networks. We introduce two simple global hyper-parameters that efficiently trade off between latency and accuracy. These hyper-parameters allow the model builder to choose the right sized model for their application based on the constraints of the problem. We present extensive experiments on resource and accuracy tradeoffs and show strong performance compared to other popular models on ImageNet classification. We then demonstrate the effectiveness of MobileNets across a wide range of applications and use cases including object detection, finegrain classification, face attributes and large scale geo-localization.
- Authors: Andrew G. Howard • Menglong Zhu • Bo Chen • Dmitry Kalenichenko • Weijun Wang • Tobias Weyand • Marco Andreetto • Hartwig Adam
- Paper: https://arxiv.org/pdf/1704.04861v1.pdf
- Github: https://github.com/tensorflow/models/tree/master/research/slim
- Dataset: ImageNet
- Github ⭐: 61,911 (counted on 27 February 2020)
- Citations: 3549
- Published: 17 April 2017
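The two global hyper-parameters are exposed directly in tf.keras.applications: alpha (the width multiplier) thins every layer, and the input resolution sets the second trade-off. A minimal sketch, assuming TensorFlow 2.x; weights=None skips the pretrained-weight download:

```python
import tensorflow as tf

# Full-width model at full resolution vs. a thinned model at lower resolution.
full = tf.keras.applications.MobileNet(alpha=1.0, input_shape=(224, 224, 3), weights=None)
thin = tf.keras.applications.MobileNet(alpha=0.5, input_shape=(160, 160, 3), weights=None)
print(full.count_params(), thin.count_params())  # the thin model is several times smaller
```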
10. Xception: Deep Learning with Depthwise Separable Convolutions
Abstract: We present an interpretation of Inception modules in convolutional neural networks as being an intermediate step in-between regular convolution and the depthwise separable convolution operation (a depthwise convolution followed by a pointwise convolution). In this light, a depthwise separable convolution can be understood as an Inception module with a maximally large number of towers. This observation leads us to propose a novel deep convolutional neural network architecture inspired by Inception, where Inception modules have been replaced with depthwise separable convolutions. We show that this architecture, dubbed Xception, slightly outperforms Inception V3 on the ImageNet dataset (which Inception V3 was designed for), and significantly outperforms Inception V3 on a larger image classification dataset comprising 350 million images and 17,000 classes. Since the Xception architecture has the same number of parameters as Inception V3, the performance gains are not due to increased capacity but rather to a more efficient use of model parameters.
- Authors: François Chollet
- Paper: https://arxiv.org/pdf/1610.02357v3.pdf
- Github: https://github.com/tensorflow/models/tree/master/research/deeplab
- Dataset: ImageNet, FastEval14k
- Github ⭐: 61,989 (counted on 1 March 2020)
- Citations: 2090
- Published: 7 October 2016
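The depthwise separable convolution the abstract describes is a single layer in tf.keras (SeparableConv2D fuses the depthwise and pointwise steps), and the full architecture ships in tf.keras.applications. A minimal sketch, assuming TensorFlow 2.x:

```python
import tensorflow as tf
from tensorflow.keras import layers

# One separable convolution = depthwise spatial filtering + 1x1 channel mixing.
block = tf.keras.Sequential([
    layers.SeparableConv2D(128, 3, padding="same", use_bias=False,
                           input_shape=(71, 71, 64)),
    layers.BatchNormalization(),
    layers.ReLU(),
])

xception = tf.keras.applications.Xception(weights=None)  # 299x299 input by default
print(block.output_shape, xception.count_params())
```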