Research article
DOI: 10.1145/3441369.3441382

Research on Classification of UAV Optical Image Tree Species Based on Res2Net

Published: 24 March 2021

Abstract

The internal features of remote sensing images of plant communities are complex, and the boundaries between classes are blurred. Traditional image-processing methods based on per-pixel spectral information cannot make full use of the image feature information, so their extraction results are poor. This paper therefore proposes an automatic classification method for plant communities in high-resolution remote sensing imagery based on a deep convolutional neural network (CNN). UAV images are first segmented into regular tiles; a CNN-based Res2Net model then abstracts and learns the image features, automatically obtaining deeper and more representative deep features, extracting the distribution area of each plant community, and outputting the automatic classification result as the result map superimposed on the original image. Training sets of different sizes are used to analyze how the number of training samples affects the automatic classification results. Experimental results show that the number of training samples has a significant impact on classification accuracy: the modeling accuracy of the ResNet50 and Res2Net models rises from 82% and 83% to 90% and 92%, respectively, and compared with traditional supervised classification methods, the deep convolutional networks improve classification accuracy significantly. The results also show that when the number of training samples is no fewer than 200, the CNN-based Res2Net model gives the best classification results.
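The abstract names Res2Net as the backbone but gives no implementation details. The sketch below is not the authors' code; it is a minimal PyTorch illustration of the multi-scale "split and hierarchical residual" bottleneck that distinguishes Res2Net from a plain ResNet-50 block, applied to the feature map of one image tile of the kind produced by segmenting a UAV scene. Channel widths, the scale factor, and all names are illustrative assumptions.

```python
# Minimal sketch of a Res2Net-style bottleneck block (illustrative, not the paper's code).
import torch
import torch.nn as nn


class Res2NetBottleneck(nn.Module):
    """Splits channels into `scale` groups and connects the 3x3 convolutions
    hierarchically, so each group sees a larger receptive field than the last."""

    def __init__(self, in_channels: int, width: int = 64, scale: int = 4):
        super().__init__()
        assert width % scale == 0, "width must be divisible by scale"
        self.scale = scale
        group_width = width // scale

        self.conv1 = nn.Conv2d(in_channels, width, kernel_size=1, bias=False)
        self.bn1 = nn.BatchNorm2d(width)
        # One 3x3 conv per group except the first (identity-mapped) group.
        self.convs = nn.ModuleList(
            [nn.Conv2d(group_width, group_width, 3, padding=1, bias=False) for _ in range(scale - 1)]
        )
        self.bns = nn.ModuleList([nn.BatchNorm2d(group_width) for _ in range(scale - 1)])
        self.conv3 = nn.Conv2d(width, in_channels, kernel_size=1, bias=False)
        self.bn3 = nn.BatchNorm2d(in_channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        identity = x
        out = self.relu(self.bn1(self.conv1(x)))
        groups = torch.chunk(out, self.scale, dim=1)

        outputs = [groups[0]]          # first split passes through unchanged
        prev = None
        for i, (conv, bn) in enumerate(zip(self.convs, self.bns)):
            # Hierarchical residual connection: add the previous group's output
            # before convolving, enlarging the receptive field group by group.
            y = groups[i + 1] if prev is None else groups[i + 1] + prev
            prev = self.relu(bn(conv(y)))
            outputs.append(prev)

        out = self.bn3(self.conv3(torch.cat(outputs, dim=1)))
        return self.relu(out + identity)   # standard residual shortcut


if __name__ == "__main__":
    block = Res2NetBottleneck(in_channels=64, width=64, scale=4)
    tile_features = torch.randn(1, 64, 56, 56)   # feature map of one UAV image tile
    print(block(tile_features).shape)             # torch.Size([1, 64, 56, 56])
```

In a full classifier such blocks would be stacked in place of standard ResNet-50 bottlenecks, with a pooling and fully connected head assigning each tile to a tree-species class; tile predictions can then be mosaicked back and overlaid on the original UAV image, as the abstract describes.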



Published In

DMIP '20: Proceedings of the 2020 3rd International Conference on Digital Medicine and Image Processing
November 2020
80 pages
ISBN: 9781450389044
DOI: 10.1145/3441369

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. Automatic classification
  2. CNN deep convolutional network
  3. Res2Net model

