Metric-based Few-shot Classification in Remote Sensing Images
DOI: https://doi.org/10.30564/aia.v4i1.4124
Abstract: Target recognition based on deep learning relies on a large number of samples, but in some specific remote sensing scenes samples are scarce. Few-shot learning can obtain high-performance classification models from only a few samples, but most existing research targets natural scenes. This paper therefore proposes a metric-based few-shot classification technique for remote sensing. First, we constructed a dataset (RSD-FSC) for few-shot classification in remote sensing, containing sample slices of 21 classes of typical remote sensing targets. Second, based on metric learning, a k-nearest neighbor classification network is proposed: it finds the training samples most similar to a testing target and then classifies the target from its similarity to those samples. Finally, episodic 5-way 1-shot, 5-way 5-shot and 5-way 10-shot training is conducted to improve the model's generalization on few-shot classification tasks. The experimental results show that, for newly emerged classes with few samples, the average recognition accuracy reaches 59.134%, 82.553% and 87.796% when the number of training samples per class is 1, 5 and 10, respectively. This demonstrates that the proposed method can resolve few-shot classification in remote sensing images and performs better than other few-shot classification methods.
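The classification step the abstract describes can be illustrated with a short sketch. The Python/NumPy code below is a minimal illustration, not the paper's implementation: it assumes targets have already been mapped to embedding vectors by some learned feature extractor (not reproduced here), and the cosine-similarity metric, the function names and the toy 5-way 5-shot episode are all assumptions made for the example.

```python
import numpy as np

def cosine_similarity(query, supports):
    # query: (D,), supports: (M, D) -> (M,) cosine similarities
    query = query / (np.linalg.norm(query) + 1e-8)
    supports = supports / (np.linalg.norm(supports, axis=1, keepdims=True) + 1e-8)
    return supports @ query

def knn_classify(query_emb, support_embs, support_labels, k=3):
    """Classify a query embedding by its k most similar support embeddings.

    Each class's score is the summed similarity of its members among the
    k nearest neighbors; the highest-scoring class is the prediction.
    """
    sims = cosine_similarity(query_emb, support_embs)
    nearest = np.argsort(sims)[::-1][:k]   # indices of the k most similar supports
    scores = {}
    for idx in nearest:
        label = int(support_labels[idx])
        scores[label] = scores.get(label, 0.0) + sims[idx]
    return max(scores, key=scores.get)

# Toy 5-way 5-shot episode with random 64-d "embeddings".
rng = np.random.default_rng(0)
n_way, k_shot, dim = 5, 5, 64
support = rng.normal(size=(n_way * k_shot, dim))
labels = np.repeat(np.arange(n_way), k_shot)   # [0,0,0,0,0,1,1,...]
query = support[7] + 0.1 * rng.normal(size=dim)  # perturbed copy of a class-1 support
print(knn_classify(query, support, labels, k=3))  # expected: 1
```

Summing similarities rather than taking a plain majority vote weights closer neighbors more heavily, which matches the abstract's description of classifying a testing target from its similarity to multiple similar training samples.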